By Archana Sawant, Swapnil Kapile, Sachin Pharande, Snehal Chaudhary, Rohit Patange | on 17 March 2023 | in Amazon SageMaker, Machine Learning
At AWS Machine Learning (ML) workshops, customers often ask, “After I deploy an endpoint, where do I go from there?” You can deploy an Amazon SageMaker trained and validated ML model as an online endpoint in production. Alternatively, you can choose which SageMaker functionality to use. For example, you can choose just to train a model or to host one. Whether you use one SageMaker function or several, the question remains the same: how do you call the deployed model endpoint from your application?
The following diagram shows how the deployed model is called using a serverless architecture. Starting from the client side, a client script calls an Amazon API Gateway API action and passes parameter values. API Gateway is a layer that provides the API to the client. In addition, it shields the backend so that AWS Lambda runs in a protected private network. API Gateway passes the parameter values to the Lambda function. The Lambda function parses the values and sends them to the SageMaker model endpoint. The model performs the prediction and returns the predicted value to Lambda. The Lambda function parses the returned value and sends it back to API Gateway, which responds to the client with that value.
In this post, I show you how to invoke a model endpoint deployed by SageMaker using API Gateway and Lambda. For testing purposes, we use Postman.
We use a sample notebook provided by SageMaker called Breast Cancer Prediction.ipynb. You can find this notebook on the SageMaker Examples tab.
Choosing Use creates a folder and loads the notebook. The breast cancer prediction model predicts whether a breast mass is malignant or benign by looking at features computed from a digitized image of a fine needle aspirate of the mass. The data used to train the model consists of the diagnosis as well as the 10 real-valued features computed for each cell nucleus (radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry, and fractal dimension). The prediction returned by the model is either 0 or 1, where 0 indicates benign and 1 indicates malignant. The Lambda function converts this value to either B for benign or M for malignant.
SageMaker provides managed built-in Jupyter notebooks that allow you to write code in Python or R to explore, analyze, and do some modeling with a small set of data. The sample Jupyter notebooks are loaded onto a notebook instance when the instance boots up. Each sample notebook contains Markdown comments that explain each step, from downloading the training data and running the training to deploying a model endpoint. After the model is trained and deployed, you can invoke the model endpoint using the SageMaker runtime API. To free the invocation from server and infrastructure management, we encapsulate it using API Gateway and Lambda.
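After the endpoint from the next section is up, you can sanity-check it directly with the SageMaker runtime API before adding the serverless layers. The following is a minimal sketch, assuming the endpoint name used later in this post and the sample validation row shown in the test data section:

import boto3

# SageMaker runtime client, as used in the Lambda function later in this post
runtime = boto3.client('runtime.sagemaker')

# Sample validation row from the test data section of this post
sample_row = "1257815, 5, 1, 3, 1, 2, 2, 1, 1, 6, 4, 3, 2, 5, 5, 6, 5, 4, 4, 23, 6, 3, 2, 7, 8, 8, 3, 2, 1, 6"

# Assumed endpoint name; replace with the one shown on your SageMaker console
response = runtime.invoke_endpoint(
    EndpointName='linear-learner-breast-cancer-prediction-endpoint',
    ContentType='text/csv',
    Body=sample_row)
print(response['Body'].read().decode())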
To create your model endpoint, complete the following steps:
- Open the Breast Cancer Prediction.ipynb sample notebook.
- Comment out the last cell by inserting #, because it deletes the endpoint created in the previous cell.
- Run the entire notebook by choosing Run All on the Cell menu.
Alternatively, you can run each cell one by one by pressing Shift + Enter. If you run each cell, you can learn what each step is doing. For the purpose of this post, I chose Run All to deploy the model as an endpoint after the model training is complete.
Upon creation, you can view this endpoint on the SageMaker console. The default endpoint name looks like linear-endpoint-201803211721, but you can make it more meaningful. I called mine linear-learner-breast-cancer-prediction-endpoint.
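If you prefer to set a meaningful name at deployment time rather than renaming afterward, the SageMaker Python SDK's deploy() call accepts an endpoint_name argument. A minimal sketch, where the estimator variable and instance type are assumptions based on the sample notebook:

# Hypothetical: 'linear' is the trained estimator from the sample notebook
predictor = linear.deploy(
    initial_instance_count=1,
    instance_type='ml.m4.xlarge',  # assumed instance type
    endpoint_name='linear-learner-breast-cancer-prediction-endpoint')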
Now we have a SageMaker model endpoint. Let’s look at how we call it from Lambda. We use the SageMaker runtime InvokeEndpoint API, via the Boto3 SageMaker runtime client’s invoke_endpoint() method.
- On the Lambda console, on the Functions page, choose Create function.
- For Function name, enter a name.
- For Runtime, choose your runtime.
- For Execution role, select Create a new role or Use an existing role.
If you chose Create a new role, after the Lambda function is created, go to the function’s Configuration tab and find the name of the IAM role that was created. Choose the role name to open it on the IAM console.
- Whether you created a new role or used an existing one, make sure to include the following policy, which gives your function permission to invoke a model endpoint:
{ "Sid": "VisualEditor0", "Effect": "Allow", "Action": "sagemaker:InvokeEndpoint", "Resource": "*" }
The following is the sample Lambda function code:
import os
import io
import boto3
import json
import csv

# Grab environment variables
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    # API Gateway passes the request body straight through as the event
    data = json.loads(json.dumps(event))
    payload = data['data']
    print(payload)

    # Send the CSV feature row to the SageMaker model endpoint
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='text/csv',
                                       Body=payload)
    print(response)
    result = json.loads(response['Body'].read().decode())
    print(result)

    # The linear learner binary classifier returns a probability score and a
    # predicted label; use the label rather than truncating the score with int()
    pred = int(result['predictions'][0]['predicted_label'])
    predicted_label = 'M' if pred == 1 else 'B'

    return predicted_label
ENDPOINT_NAME is an environment variable that holds the name of the SageMaker model endpoint you just deployed using the sample notebook. Go to the SageMaker console to find the endpoint name generated by SageMaker and enter it as the environment variable value. It looks like DEMO-linear-endpoint-xxxxxxxxx.
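You can set the variable on the function’s Configuration tab, or programmatically. A minimal sketch with Boto3, where the function name is a placeholder:

import boto3

lambda_client = boto3.client('lambda')

# Placeholder function name; the endpoint name comes from the SageMaker console
lambda_client.update_function_configuration(
    FunctionName='my-breast-cancer-prediction-function',
    Environment={'Variables': {'ENDPOINT_NAME': 'DEMO-linear-endpoint-xxxxxxxxx'}})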
The event that invokes the Lambda function is triggered by API Gateway. API Gateway simply passes the test data through an event.
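Because the method you create in the next section uses a non-proxy Lambda integration, API Gateway delivers the JSON body as the Lambda event itself, which is why the function reads data['data']. You can also exercise the function on its own with a test event like the following (the feature values match the validation row shown later in this post):

{
  "data": "1257815, 5, 1, 3, 1, 2, 2, 1, 1, 6, 4, 3, 2, 5, 5, 6, 5, 4, 4, 23, 6, 3, 2, 7, 8, 8, 3, 2, 1, 6"
}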
You can create an API by following these steps:
- On the API Gateway console, under REST API, choose Build.
- Select New API.
- For API name, enter a name (for example, BreastCancerPrediction).
- Leave Endpoint Type as Regional.
- Choose Create API.
- On the Actions menu, choose Create Resource.
- Enter a name for the resource (for example, predictbreastcancer).
- After the resource is created, on the Actions menu, choose Create Method to create a POST method.
- For Integration type, select Lambda Function.
- For Lambda function, enter the function you created.

When the setup is complete, you can deploy the API to a stage.
- On the Actions menu, choose Deploy API.
- Create a new stage called test.
- Choose Deploy.
This step gives you the invoke URL.
For more information on creating an API with API Gateway, see Creating a REST API in Amazon API Gateway. In addition, you can make the API more secure using methods such as IAM authorization, Lambda authorizers, or API keys with usage plans.
Now that you have an API and a Lambda function in place, let’s look at the test data.
When the sample notebook loads the dataset into the Amazon Simple Storage Service (Amazon S3) bucket that you specified, it saves CSV files of the data. The sample notebook separates the dataset into two files, training data and validation data, and saves each file in its respective folder. The Amazon S3 path to the validation folder looks like /sagemaker/DEMO-breast-cancer-prediction/validation.
The following code is one row of the validation data from the file linear_validation.data in the validation folder:
{"data": "1257815, 5, 1, 3, 1, 2, 2, 1, 1, 6, 4, 3, 2, 5, 5, 6, 5, 4, 4, 23, 6, 3, 2, 7, 8, 8, 3, 2, 1, 6"}
Now that we have the Lambda function, REST API, and test data, let’s test it using Postman, which is an HTTP client for testing web services. Make sure to download the latest version.
When you deployed your API, it provided the invoke URL, which looks like https://{restapi_id}.execute-api.us-west-2.amazonaws.com/test/predictbreastcancer. It follows the format https://{restapi_id}.execute-api.{region}.amazonaws.com/{stage_name}/{resource_name}.
For more information about invoking an API in API Gateway, see Invoking a REST API in Amazon API Gateway.
- Enter the invoke URL into Postman.
- Choose POST as the method.
- On the Body tab, enter the test data.
- Choose Send; the returned result is B for the row of test data we looked at earlier.
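If you prefer a scripted check to Postman, you can make the same call from Python. A minimal sketch using the requests library; substitute your own restapi_id, Region, and stage in the placeholder URL:

import requests

# Placeholder invoke URL; fill in your restapi_id, region, and stage
url = 'https://{restapi_id}.execute-api.{region}.amazonaws.com/test/predictbreastcancer'
payload = {"data": "1257815, 5, 1, 3, 1, 2, 2, 1, 1, 6, 4, 3, 2, 5, 5, 6, 5, 4, 4, 23, 6, 3, 2, 7, 8, 8, 3, 2, 1, 6"}

response = requests.post(url, json=payload)
print(response.text)  # returns B for this sample row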
To learn more about SageMaker Inference, please refer to the following resources:
• SageMaker Inference documentation
• SageMaker Inference Recommender
• SageMaker Serverless Inference
• SageMaker Asynchronous Inference
• Inference endpoint testing from Studio
• Roundup of re:Invent 2021 Amazon SageMaker announcements
In this post, you created a model endpoint deployed and hosted by SageMaker. Then you created serverless components (a REST API and Lambda function) that invoke the endpoint. Now you know how to call an ML model endpoint hosted by SageMaker using serverless technology. If you have feedback about this post, please leave it in the comments. If you have questions about implementing the example used in this post, you can also open a thread on the Developer Tools forum.