This example shows how to train and deploy a Machine Learning model using two approaches:
- Amazon SageMaker
- AWS Lambda
Steps to get the example working:
- Install Python 3.6 and Docker if not already available on your machine.
- Create a virtualenv: `mkvirtualenv python36-sagemaker`. Make sure the virtualenv is activated after you create it.
- Install the Python dependencies: `pip install jupyter sagemaker numpy scipy scikit-learn pandas`
- Create a new IAM User. You can use an existing IAM User as well, but make sure you know the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the user account.
- Add a profile with a name (e.g. up-sagemaker) to the .aws/credentials file as below:

      [up-sagemaker]
      aws_access_key_id = <your-access-key-id>
      aws_secret_access_key = <your-secret-access-key>
- Create an AWS IAM role, for example SagemakerRole.
- Add a configuration to the .aws/config file:

      [profile up-sagemaker]
      region = <your-aws-region>
      role_arn = <arn of the role created in Step 5>
      source_profile = up-sagemaker
- Attach the permission policies below to the IAM role created in Step 5:
  - AmazonEC2ContainerRegistryFullAccess
  - AmazonS3FullAccess
  - IAMReadOnlyAccess
  - AmazonSageMakerFullAccess
  - AmazonEC2FullAccess
- Run container/build_and_push.sh to build a Docker image with all the software (Python, libraries, etc.) and the source code (logic to train, serve, and predict) included:

      build_and_push.sh <image-name> <profile>

  Example: `build_and_push.sh iris-model up-sagemaker`. Note: this script needs to be run from the "container" folder in the source code.
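As a rough sketch of what a script like build_and_push.sh does, it typically composes the ECR image URI from your account ID and region before building, tagging, and pushing the image. The account ID and region below are placeholder values, not taken from this repository:

```python
# Sketch of how build_and_push.sh might compose the ECR image URI.
# Account ID and region are placeholders; the real script reads them
# from the AWS CLI and the chosen profile.
image = "iris-model"
account = "123456789012"    # e.g. from: aws sts get-caller-identity
region = "us-east-1"        # e.g. from: aws configure get region
fullname = f"{account}.dkr.ecr.{region}.amazonaws.com/{image}:latest"
print(fullname)
# The script would then `docker build`, tag the image as `fullname`,
# log in to ECR, and `docker push` it.
```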
- Run the cells in the Jupyter notebook train_and_deploy_your_first_model_on_sagemaker.ipynb to train, deploy, and test the model.
That's all for the prerequisites and setup of the SageMaker approach. Steps for the AWS Lambda approach:
- Install Python 3.6, Node.js, and npm if not already available on your machine.
- Install the Serverless Framework:

      npm install -g serverless

  Note: always run npm commands as admin to avoid permission problems.
- Install the npm packages required for the serverless project. This command installs npm modules under the "node_modules" folder:

      npm install
- Set the environment variables below to be able to deploy and debug your serverless app on AWS:

      export AWS_ACCESS_KEY_ID=<your-access-key-id>
      export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
      export PIP_DEFAULT_TIMEOUT=100
      export SLS_DEBUG=*
- Deploy the app to the AWS cloud:

      serverless deploy
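For orientation, a serverless.yml for such a deployment might look roughly like the sketch below. The service name, handler path, and region are hypothetical; the actual file in this project may differ:

```yaml
service: iris-prediction          # hypothetical service name

provider:
  name: aws
  runtime: python3.6
  region: us-east-1               # match <your-aws-region>

functions:
  predict:
    handler: handler.predict      # hypothetical module.function
    events:
      - http:
          path: invocations
          method: post
```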
The Docker image supports two commands:
- `docker run image train`: for training
- `docker run image serve`: for prediction
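The train/serve dispatch can be sketched as a minimal container entrypoint; SageMaker passes `train` or `serve` as the first argument. The function bodies here are placeholders, not the project's actual logic:

```python
# Minimal sketch of a SageMaker container entrypoint: dispatch on the
# first command-line argument. Real logic would fit the model ("train")
# or start nginx + gunicorn + flask ("serve").
import sys

def main(argv):
    if not argv:
        return "usage: train|serve"
    if argv[0] == "train":
        return "training"   # placeholder for the model-fitting logic
    if argv[0] == "serve":
        return "serving"    # placeholder for starting the web stack
    return f"unknown command: {argv[0]}"

if __name__ == "__main__":
    print(main(sys.argv[1:]))
```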
The iris data set is used for training the model.
scikit-learn's Random Forest implementation is chosen to train the iris classifier.
Precision and Recall are used to evaluate the trained ML model.
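As a refresher on these metrics, precision is the fraction of positive predictions that are correct, and recall is the fraction of actual positives that are found. A minimal pure-Python sketch (the notebook presumably uses scikit-learn's built-in metrics; the labels below are illustrative, not the real evaluation data):

```python
# Per-class precision and recall from true vs. predicted labels.
def precision_recall(y_true, y_pred, positive):
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct positive predictions
    recall = tp / (tp + fn) if tp + fn else 0.0     # actual positives found
    return precision, recall

y_true = ["setosa", "setosa", "versicolor", "virginica", "versicolor"]
y_pred = ["setosa", "versicolor", "versicolor", "virginica", "versicolor"]
p, r = precision_recall(y_true, y_pred, "versicolor")
# p = 2/3 (one of three "versicolor" predictions is wrong), r = 1.0
```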
The following technologies are used to build a RESTful prediction service:
- nginx: a high-performance web server that handles HTTP requests and serves the responses.
- gunicorn: a Python WSGI HTTP server responsible for running multiple copies of your application and load balancing between them.
- flask: a Python micro web framework that lets you implement the controllers for the two SageMaker endpoints, /ping and /invocations.
REST Endpoints:
- GET /ping: health endpoint
- POST /invocations: predict endpoint that expects a JSON body with the required features
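A minimal Flask sketch of these two endpoints might look like the following. The handler bodies are placeholders; the real service would load the trained model and return its predictions:

```python
# Sketch of the two SageMaker container endpoints with Flask.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ping", methods=["GET"])
def ping():
    # Health check: SageMaker expects HTTP 200 when the container is ready.
    return "", 200

@app.route("/invocations", methods=["POST"])
def invocations():
    # Expects a JSON body with the required features.
    features = request.get_json(force=True)
    # A real implementation would run model.predict(features) here.
    return jsonify({"prediction": "placeholder", "received": features})
```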
- train_and_deploy_your_first_model_on_sagemaker.ipynb: Jupyter notebook to train/deploy your first ML model on SageMaker