
Create, Debug, and Deploy Your Code as reusable AWS Lambda Layers

DDD · 2025-01-30

AWS Lambda Layers are a convenient way to reuse code across your different Lambda functions. There are many tutorials on creating layers from existing pip packages, but far fewer on packaging your own code as a layer and debugging it together with your lambdas. In the scenario below, you can have one layer shared by several lambdas, and debug the code of both the lambdas and the layer while simulating the AWS environment. I will assume you already have a lambda function created with its template.yml. If not, see the official guide on creating a lambda: https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html. After creating it, you can download it as a zip file and extract the code and the template.yml from there.

Preparing your layer

First, we need to set up the folder structure for the layer. I like to create a folder called layers, with a subfolder for each layer. AWS Lambda expects a specific folder structure, where each layer's code resides in a python/ folder. See the following link for more information: https://docs.aws.amazon.com/lambda/latest/dg/packaging-layers.html

Our layer will be called layer_utils. We then create a layer_utils folder inside the python folder, and inside it we create the files request_handler.py and processor.py with the code. We also need an empty __init__.py file, which is necessary for Python to recognize this as a package. This is how the tree structure should look:

layers/
└── layer_utils/
    └── python/
        └── layer_utils/
            ├── __init__.py
            ├── processor.py
            └── request_handler.py

The request_handler.py receives a request with a url and calls the processor, which uses the requests library to fetch the data and return it.

./layers/layer_utils/python/layer_utils/processor.py

import requests

def process_data(url):
    """
    Fetches data from a URL.

    Args:
        url (str): The URL to fetch data from.

    Returns:
        str: The fetched content or an error message.
    """
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise an error for bad status codes
        return response.text[:200]  # Return the first 200 characters of the response
    except requests.RequestException as e:
        return f"Error fetching data: {str(e)}"

./layers/layer_utils/python/layer_utils/request_handler.py

from layer_utils.processor import process_data

def handle_request(request):
    """
    Handles an incoming request and processes it.

    Args:
        request (dict): The input request data.

    Returns:
        dict: The processed result.
    """
    # Example: Extract 'url' from the request and process it
    if "url" not in request:
        return {"error": "Missing 'url' in request"}

    data = request["url"]
    processed_result = process_data(data)
    return {"result": processed_result}

Here, it is important to note how we import the processor functions: from layer_utils.processor import process_data, rather than just from processor import process_data. Using the full package path helps avoid import errors later.
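To see why, here is a small, self-contained sketch of how Lambda resolves layer imports (the temporary folder stands in for /opt/python; at runtime Lambda unpacks each layer under /opt and adds /opt/python to sys.path, so only the layer_utils.processor form resolves as a package):

```python
# Simulate Lambda's layer import mechanism with a temp folder instead of /opt/python.
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()                      # stands in for /opt/python
pkg = os.path.join(root, "layer_utils")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "processor.py"), "w") as f:
    f.write("def process_data(url):\n    return 'fetched ' + url\n")

sys.path.insert(0, root)                       # what Lambda does with /opt/python
mod = importlib.import_module("layer_utils.processor")
print(mod.process_data("https://example.com"))  # fetched https://example.com
```

Because the search path contains the python/ folder, not the package folder itself, the bare import processor would fail while layer_utils.processor succeeds.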

Packaging your layer

Alright, now we have our layer code. But we are not done yet: we need to create an editable package with pip so the layer can be used by our lambda code. We will follow the PEP 660 style. We need to create two files: requirements.txt and pyproject.toml. The first lists the external libraries this layer needs, in this case requests. The second is the file pip uses to create an editable package and ensure all dependencies are installed. This lets us edit the layer code without constantly repackaging it (which we need for debugging).

This is how the tree should look:

└── layer_utils
    └── python
        ├── layer_utils
        │   ├── __init__.py
        │   ├── processor.py
        │   └── request_handler.py
        ├── pyproject.toml
        └── requirements.txt

The pyproject.toml will be used by pip to create the package with our layer.

./layers/layer_utils/python/pyproject.toml

[project]
name = "layer_utils"
version = "0.1.0"

[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"

In this file, the setuptools package is necessary for creating the package and the wheel package is used for packaging the code into a distributable format.

The requirements.txt lists all the external modules our layer needs. In our case we only need the requests module, but you could add as many as necessary.

./layers/layer_utils/python/requirements.txt

requests==2.32.2

It is important to keep track of which version of each package you are using, because your external packages will be imported from your AWS Lambda Layer resource directly on AWS. If you debug by running your lambda directly from your Python environment instead of using sam local invoke or sam local start-api, you need to make sure that the packages installed locally with pip are the same versions as the ones deployed in your layer. I won't explain how to create the external layers, since there are many good tutorials for that (for example, https://www.keyq.cloud/en/blog/creating-an-aws-lambda-layer-for-python-requests-module).
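A hedged sketch of that version check (the package name and pin here are just examples): before debugging outside SAM, confirm that a locally installed dependency matches the version pinned for the deployed layer.

```python
# Check that a local install matches the version pinned for the deployed layer.
from importlib.metadata import PackageNotFoundError, version

def matches_layer_pin(package: str, pinned: str) -> bool:
    """Return True only if the package is installed locally at the pinned version."""
    try:
        return version(package) == pinned
    except PackageNotFoundError:
        return False  # not installed locally at all

# e.g. matches_layer_pin("requests", "2.32.2") before a direct local run
print(matches_layer_pin("surely-not-installed-anywhere", "1.0"))  # False
```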

Setting up the virtual environment

Now let's create a virtual environment. This is not strictly necessary, but it is recommended, as it isolates dependencies and keeps your Python environment consistent with the one Lambda will use. From your project directory, run:

python3.12 -m venv venv

source venv/bin/activate

The first command runs the venv module (python -m venv) to create a virtual environment called venv.

The second activates the virtual environment by using the shell built-in source to run the activation script. If you are using Visual Studio Code, it might prompt you to switch to the virtual environment. Say yes.

After this, you should see in your shell something like this.

(venv) usar@MacBookPro my-lambda-project

The (venv) at the beginning indicates you are on the virtual environment.

Sometimes I like to debug by running the Python file directly instead of with the SAM tools, because it is faster. To do this, I install all the external packages in my virtual environment so I can use them locally for development and debugging:

pip3 install -r ./layers/layer_utils/python/requirements.txt

This is only necessary to debug locally and directly without the SAM tools, so you can skip this step if you don't plan to do this.

We now need to package the layer, so our lambda can find the layer as a python package.

pip3 install -e ./layers/layer_utils/python

The -e flag makes this an editable install. The path points to the folder containing the pyproject.toml file. Running this creates a new layer_utils.egg-info folder. Nothing to do there, just leave it.

Debugging

Ok, let's now see how we would debug this. This is my folder structure with the layers and the lambdas.

├── lambdas/
│   └── myLambda/
│       ├── events/
│       │   └── event.json
│       ├── src/
│       │   └── lambda_function.py
│       ├── requirements.txt
│       └── template.yml
└── layers/
    └── layer_utils/
        └── python/
            ├── layer_utils/
            │   ├── __init__.py
            │   ├── processor.py
            │   └── request_handler.py
            ├── pyproject.toml
            └── requirements.txt

This is the code of my lambda

./lambdas/myLambda/src/lambda_function.py

from layer_utils.request_handler import handle_request

def lambda_handler(event, context):
    """Entry point: delegates the incoming event to the layer's request handler."""
    return handle_request(event)

if __name__ == "__main__":
    # Run the file directly for quick local debugging
    print(lambda_handler({"url": "https://example.com"}, None))

You can run the file and you should get a valid result with no errors.

If you are using Visual Studio Code with Pylance, you might see that the import of the layer does not resolve, even though the code works.

Pylance complaining on VSCode

To solve this, you can edit the settings.json of your workspace. Do Control/Command Shift P, enter Preferences: Open Workspace Settings (JSON) and add the following inside the brackets (if you have more extraPaths, simply add the path)

{
    "python.analysis.extraPaths": [
        "./layers/layer_utils/python"
    ]
}

Now Pylance should resolve this fine.

Adding the layer to your stack

We now need to set up the layer in your lambda's template.yml. Add the following inside the Resources: section (adapt the content to your project):

./lambdas/myLambda/template.yml

Resources:
  LayerUtils:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: layer_utils
      ContentUri: ../../layers/layer_utils
      CompatibleRuntimes:
        - python3.12

In ContentUri you can see the relative path to where the layer code is. Note that it does NOT point to the python folder, since AWS SAM will look for the python folder there itself. Make sure the runtime matches the one you are using in your virtual environment and your lambda.
You also need to reference the layer in the lambda section of the file:

  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python3.12
      CodeUri: src/
      Layers:
        - !Ref LayerUtils
        - arn:aws:lambda:<region>:<account-id>:layer:requests:<version>

Notice that in the Layers section of the template file we also reference the requests layer that already exists in our AWS account. The template will create the layer_utils layer locally, so SAM knows it needs to build it, and it will also deploy this layer to AWS whenever you run sam deploy.

Debugging with SAM

Let's test this. Let's build it first. I have had issues building the template from a different path than where it lives, because the template contains relative paths to the source code. To avoid this, I recommend building it directly from the directory containing the template file.

cd lambdas/myLambda
sam build

This will build all the necessary dependencies.

Now we can invoke it. I created an event file to test the lambda

./lambdas/myLambda/events/event.json

{
    "url": "https://example.com"
}

Now we can invoke the function for debugging. Remember that you need Docker installed and running for this. Again, remember to invoke this from the directory where the template file is.

sam local invoke -d 5678 -e events/event.json

This invokes the function defined in template.yml. The -d flag sets the debug port to 5678. The -e flag points to the event file that will be passed to the lambda.

Deploying your Lambda and Layer to AWS

Let's now finalize this by deploying the code to AWS.

sam deploy --guided

The --guided flag is useful the first time, if you have not deployed your lambda yet, as it walks you through the process. After doing this, you can go to the AWS Console and find your layer. You can now use the layer with other lambdas via the layer's ARN.

The lambda layer on the AWS Console

Setting up VSCode to debug

If you want to use VSCode to debug (set breakpoints, etc.), we need some extra steps.

We need to add a debug configuration. To do this, press Control/Command Shift P and type Debug: Add Configuration.... This opens the launch.json file, where you add the configuration.

./.vscode/launch.json

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python debugger: Debug using launch.json",
            "type": "debugpy",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": 5678
            },
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/lambdas/myLambda/src",
                    "remoteRoot": "/var/task"
                }
            ]
        }
    ]
}

We are using debugpy, which will attach to sam local invoke, and here we set up port 5678, the one we saw when invoking with the -d flag. Make sure localRoot points to the directory where your lambda code is. If you already have other configurations, just add this entry to the configurations list.

We will need the debugpy library to debug. Let's first add it to the requirements.txt of your lambda

./lambdas/myLambda/requirements.txt

debugpy

Now let's install it with pip

pip3 install debugpy

Or you could also install it through the requirements.txt file

pip3 install -r ./lambdas/myLambda/requirements.txt

We need to create an environment file defining an AWS_SAM_LOCAL environment variable that tells our code it is running locally. Create a .env file in the workspace folder.

./.env

AWS_SAM_LOCAL=true

Here we define AWS_SAM_LOCAL so the lambda knows it is running locally via AWS SAM.

We also need to tell our Python environment to load the environment variables from this file. This is how it should look:

./.vscode/settings.json

{
    "python.analysis.extraPaths": [
        "./layers/layer_utils/python"
    ],
    "python.envFile": "${workspaceFolder}/.env"
}

And finally, we need to modify our lambda code so it knows to attach to the debugger when running locally. At the very beginning of the file, add the following piece of code:

./lambdas/myLambda/src/lambda_function.py

import os

if os.getenv("AWS_SAM_LOCAL"):
    import debugpy
    debugpy.listen(("0.0.0.0", 5678))
    print("Waiting for debugger to attach...")
    debugpy.wait_for_client()

Now, we invoke the function (again, from the directory where the template file is):

sam local invoke -d 5678 -e events/event.json

When the console shows Waiting for debugger to attach..., press F5 or select Python debugger: Debug using launch.json.

Python debugger selection

And now you are ready to debug your local layers and lambdas!
