Commit a68dca1: 07-lambda
macielcalebe committed Aug 28, 2024 (1 parent: da8f26e)
Showing 12 changed files with 1,500 additions and 5 deletions.
content/classes/07-lambda/api_gateway.md (new file, 104 additions):
# API Gateway

## API

We will add an API Gateway service in front of the Lambda function. This way, every call to the API endpoint, whether from a browser or an application, will trigger the function.

The resulting architecture is shown in the schematic below:

![](api_gateway_lambda.png)

## Create API

!!! exercise "Question"
    Change the API name (choose a new name) and the function name (the one created on the previous page).

```python
import boto3
import os
from dotenv import load_dotenv
import random
import string


load_dotenv()

lambda_function_name = ""  # Example: sayHello_<INSPER_USERNAME>
api_gateway_name = ""  # Example: api_hello_<INSPER_USERNAME>

id_num = "".join(random.choices(string.digits, k=7))

api_gateway = boto3.client(
    "apigatewayv2",
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    region_name=os.getenv("AWS_REGION"),
)

lambda_function = boto3.client(
    "lambda",
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    region_name=os.getenv("AWS_REGION"),
)

lambda_function_get = lambda_function.get_function(FunctionName=lambda_function_name)

print(lambda_function_get)

api_gateway_create = api_gateway.create_api(
    Name=api_gateway_name,
    ProtocolType="HTTP",
    Version="1.0",
    RouteKey="ANY /",  # You can restrict this to a method and route, e.g. "GET /hello"
    Target=lambda_function_get["Configuration"]["FunctionArn"],
)

api_gateway_permissions = lambda_function.add_permission(
    FunctionName=lambda_function_name,
    StatementId="api-gateway-permission-statement-" + id_num,
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
)

print("API Endpoint:", api_gateway_create["ApiEndpoint"])
```

!!! exercise "Question"
    Access the provided endpoint to check that the API works!
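The endpoint can also be checked from Python. Below is a minimal sketch using only the standard library; the helper names are illustrative, not part of the course code:

```python
import urllib.request


def build_invoke_url(endpoint: str, route: str = "/") -> str:
    # Join the ApiEndpoint returned by create_api with a route,
    # avoiding a double slash.
    return endpoint.rstrip("/") + route


def invoke_api(endpoint: str, route: str = "/") -> str:
    # Perform a GET request against the deployed API and return the body
    with urllib.request.urlopen(build_invoke_url(endpoint, route)) as resp:
        return resp.read().decode()
```

For example, `invoke_api(api_gateway_create["ApiEndpoint"])` should return your Lambda function's response.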

## Show APIs

To list the APIs registered in the account, use:

```python
import boto3
import os
from dotenv import load_dotenv

load_dotenv()

api_gateway = boto3.client(
    "apigatewayv2",
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
    region_name=os.getenv("AWS_REGION"),
)

response = api_gateway.get_apis(MaxResults="2000")

# Show each API's name and endpoint
print("APIs:")
for api in response["Items"]:
    print(f"- {api['Name']} ({api['ApiEndpoint']})")
```

!!! exercise "Question"
    Make sure your API is on the list.

## Practicing

!!! exercise "Question"
    To practice, create a lambda function that returns the number of words in a sentence.

    You must create a `POST /word-count` route that receives a phrase in the body of the request.
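One possible shape for such a handler, as a sketch: it assumes API Gateway delivers a JSON body like `{"phrase": "some text"}`, and both the `phrase` field and the `lambda_handler` name are illustrative choices:

```python
import json


def lambda_handler(event, context):
    # API Gateway delivers the request body as a string;
    # assume a JSON payload such as {"phrase": "some text"}
    body = json.loads(event.get("body") or "{}")
    phrase = body.get("phrase", "")
    return {
        "statusCode": 200,
        "body": json.dumps({"word_count": len(phrase.split())}),
    }
```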
Binary file added: content/classes/07-lambda/api_gateway_lambda.png

content/classes/07-lambda/aps03_lambda.md (new file, 71 additions):
# APS 03

In this assignment, we are going to create a new version of the work from the [API class](../02-api/api_deploy.md#an-api-that-makes-predictions).

## Accept assignment

All assignment deliveries will be made via Git repositories. Access the link below to accept the invitation and start working on the third assignment.

[Invitation link](https://classroom.github.com/a/7x8wM5Js){ .ah-button }

## Clone repository

Clone your private repository:

!!! exercise "Question"
    Create a `.gitignore` and make sure the `.env` is in it!
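A minimal `.gitignore` sketch (only `.env` is required by the exercise; the other entries are common suggestions):

```text
.env
__pycache__/
*.pyc
```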

## Start working!

Our goal is to transform the `predict` route from class 02 into a lambda function. In other words, assume the model is already trained and that the model pickle can be embedded in the Docker image.

!!! info "Important!"
    Note that we will no longer use FastAPI. We will create a lambda function with a handler for **predict**, and then create an API Gateway that exposes the lambda function.

!!! exercise "Question"
    Create the `.py` file with the function handler.

!!! exercise "Question"
    Create the `requirements.txt` file with the dependencies.

!!! exercise "Question"
    Create the `Dockerfile`.

!!! tip "Tip!"
    Installing `lightgbm` requires some system dependencies. So, before `RUN pip install -r requirements.txt`, you can add:

    ```docker
    # Install system dependencies
    RUN yum install -y libstdc++ cmake gcc-c++ && \
        yum clean all && \
        rm -rf /var/cache/yum

    # Install the specified packages
    RUN pip install -r requirements.txt
    ```
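Putting it together, here is a minimal `Dockerfile` sketch based on the AWS Lambda Python base image. The Python version, the file names `predict.py` and `model.pkl`, and the handler name are assumptions you should adapt:

```docker
FROM public.ecr.aws/lambda/python:3.10

# Install system dependencies required by lightgbm
RUN yum install -y libstdc++ cmake gcc-c++ && \
    yum clean all && \
    rm -rf /var/cache/yum

COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the handler and the trained model pickle into the image
COPY predict.py model.pkl ${LAMBDA_TASK_ROOT}/

# module.function executed by the Lambda runtime
CMD ["predict.handler"]
```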

!!! exercise "Question"
    Create the Docker image.

!!! exercise "Question"
    Test the Docker image locally.

!!! exercise "Question"
    Create a new repository `aps03_<INSPER_USERNAME>` in ECR.

!!! exercise "Question"
    Tag and push your image to the ECR repository.

!!! exercise "Question"
    Create a lambda function associated with your image.

!!! exercise "Question"
    Test the lambda function.

!!! exercise "Question"
    Create an API Gateway and test it.

    Leave in the README an example of how to test your API Gateway (a curl command or Python code).

!!! exercise "Question"
    Commit and push: mission accomplished!
content/classes/07-lambda/aws_lambda.md (new file, 63 additions):
# AWS Lambda

## FaaS

**Function as a Service** (FaaS) refers to a *cloud computing* model that allows developers to build and run applications and functions *without having to worry about infrastructure management*.

With **FaaS**, developers are able to deploy their code in the form of stateless functions or event handlers that can be invoked on-demand or in response to events.

!!! info "Info!"
    **FaaS** is considered a form of serverless computing. The **FaaS** platform takes care of:

    - Underlying servers
    - Operating systems and platforms
    - Scaling and operating the application

    This makes it simple for developers to **focus** only on writing the code for their specific business logic or tasks.

## Market solutions

The main providers of **FaaS** platforms include:

- **AWS Lambda**

- **Google Cloud Functions**

- **Microsoft Azure Functions**

We will work with AWS Lambda!

## AWS Lambda

**AWS Lambda** is Amazon's flagship serverless computing platform that runs your code on high-availability compute infrastructure and performs all the administration of the compute resources.

Lambda functions can be *triggered* by various events, such as changes to an S3 bucket or database table, calls from API Gateway or third-party applications, or a schedule.

## Advantages

Some reasons why Lambda functions should be considered to deploy ML applications:

- **Scalability**: Lambda can automatically scale up or down to handle varying loads. This is important for ML models that may see bursty traffic or need to handle prediction requests at scale.

- **Event-driven**: Lambda functions can be easily triggered by events like incoming data. This makes it simple to run ML predictions every time new data comes in without managing servers.

- **Pay-per-use**: with Lambda, you only pay for the compute resources used to run your code. This saves costs for ML workloads that may be intermittent or only needed during model training cycles.

- **No servers to manage**: Lambda handles all the infrastructure maintenance, so you can focus on coding your ML logic without worrying about servers, scaling, availability etc.

- **Deployment flexibility**: you can host complex ML pipelines or prediction code on Lambda. Models can also be deployed as REST APIs using API Gateway for low-latency predictions.


## Disadvantages

**AWS Lambda** may not always be the best choice for ML applications due to:

- **Limited memory/compute**: Lambda functions have strict memory limits, ranging from 128 MB to 10,240 MB. ML models often require much more RAM.

- **Cold start**: when a Lambda function hasn't been invoked in a while, the first invocation takes longer due to container initialization. This may not be suitable for real-time ML inference.

- **Stateful dependencies**: Lambda functions are stateless by design. Supporting stateful dependencies like databases for ML model training is challenging.

- **Long-running workloads**: ML model training typically involves batch processing of large datasets over long periods, which exceeds Lambda's 15-minute timeout limit.

- **GPU/TPU support**: Lambda doesn't support hardware accelerators like GPUs, which are essential for many deep learning workloads.