Automating the build and deployment of machine learning models is an important step in creating production machine learning services. Models need to be retrained and deployed when code and/or data are updated. This project provides a full implementation of a CI/CD workflow and includes Jupyter notebooks showing how to create, launch, stop, and track the progress of builds using Python and Amazon Alexa! The goal of aws-sagemaker-build is to provide a repository of common and useful SageMaker/Step Functions pipelines, shared with and grown by the community.
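For example, because a build runs as a Step Functions execution, it can be started and tracked from a notebook with boto3. The sketch below is hedged: the state machine ARN is a placeholder, and the project's notebooks may use a different entry point, so look up the real resource names from your deployed stack.

```python
# Hypothetical sketch: start a build and poll its status with boto3.
# STATE_MACHINE_ARN is a placeholder; find the real ARN in your stack's resources.
import json
import time

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:SageMakerBuild"

execution = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    input=json.dumps({}),  # build input, if any, goes here
)

# Track progress until the build finishes.
while True:
    status = sfn.describe_execution(executionArn=execution["executionArn"])["status"]
    print(status)
    if status != "RUNNING":
        break
    time.sleep(30)
```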
- Run Linux. (tested on Amazon Linux)
- Install npm >5 and node >8. (instructions)
- Clone this repo.
- Set up an AWS account. (instructions)
- Configure AWS CLI and a local credentials file. (instructions)
- Install all needed packages:

  ```bash
  npm install
  ```

- Copy config.js.example to config.js.

- Create an S3 bucket. (instructions) Open config.js and set templateBucket and AssetBucket to the name of your S3 bucket.

- Launch the stack:

  ```bash
  npm run up
  ```

The following commands manage the stack's lifecycle:

```bash
npm run up      # launches the stack
npm run update  # updates the launched stack
npm run down    # shuts down the stack
```

The CloudFormation template is written to /cloudformation/build/template.json.
The following diagram describes the flow of the Step Functions state machine. At several points the state machine must poll and wait for a task to complete.
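For instance, a training job can run far longer than a single Lambda invocation, so the state machine loops through a wait state and a status-check task until the job finishes. Below is a minimal sketch of such a status-check Lambda; the function's event shape is an assumption, but the boto3 call is the standard SageMaker API.

```python
# Hedged sketch of a poll-style status check, as used by a wait loop in a
# Step Functions state machine. The event shape ("name") is an assumption.
import boto3

sagemaker = boto3.client("sagemaker")

def handler(event, context):
    # Look up the training job's current status...
    job = sagemaker.describe_training_job(TrainingJobName=event["name"])
    # ...and return it so a Choice state can decide whether to keep waiting.
    event["status"] = job["TrainingJobStatus"]  # InProgress|Completed|Failed|Stopping|Stopped
    return event
```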
AWS Systems Manager Parameter Store provides a durable, centralized, and scalable data store. The parameters for training jobs and deployment are stored here, and the Step Function's Lambda functions query them from this store. To change the parameters, you simply update the JSON string in the store (see the sketch after the parameter list below); the example notebooks included with aws-sagemaker-build show how to do this. The stored parameters and their defaults are:
- hyperparameters: default=HyperParameters
- hostinstancecount: default=1
- hostinstancetype: default=ml.t2.medium
- traininstancecount: default=1
- traininstancetype: default=ml.m5.large
- trainvolumesize: default=10
- trainmaxrun: default=4
- inputmode: default=File
- modelhostingenvironment: default={}
- Bring your own Docker (BYOD):
  - hyperparameters: default={}
  - dockerfile_path_Training:
  - dockerfile_path_Inference:
  - train: default=true
  - build: default={Inference:true, Training:true}
  - TrainingImage:
- TensorFlow:
  - tensorflowversion: default=1.8
  - trainingsteps: default=1000
  - evaluationsteps: default=100
  - requirementsfile: default=none
  - trainentrypoint: default=none
  - trainsourcefile: default=none
  - pyversion: default=py3
  - hostentrypoint: default=none
  - hostsourcefile: default=none
  - enablecloudwatchmetrics: default=false
  - containerloglevel: default=200
- MXNet:
  - mxnetversion: default=1.1
  - trainentrypoint: default=none
  - trainsourcefile: default=none
  - pyversion: default=py3
  - hostentrypoint: default=none
  - hostsourcefile: default=none
  - enablecloudwatchmetrics: default=false
  - containerloglevel: default=200
- Hyperparameter tuning:
  - maxtrainingjobs: default=1
  - maxparalleltrainingjobs: default=1
  - algorithm:
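As a hedged sketch of changing these parameters, the snippet below reads the JSON document from Parameter Store, modifies a value, and writes it back with boto3. The parameter name is a placeholder; the real name is created by the stack, so check your stack's outputs (the project's notebooks show the exact lookup).

```python
# Hedged sketch: read, modify, and write back the build parameters.
# PARAMETER_NAME is a placeholder; the deployed stack defines the real name.
import json

import boto3

ssm = boto3.client("ssm")
PARAMETER_NAME = "/sagebuild/config"  # placeholder

# Fetch the current parameter JSON.
value = ssm.get_parameter(Name=PARAMETER_NAME)["Parameter"]["Value"]
params = json.loads(value)

# Change a training parameter, e.g. the instance type.
params["traininstancetype"] = "ml.p3.2xlarge"

# Write the updated JSON back; the next build picks it up.
ssm.put_parameter(
    Name=PARAMETER_NAME,
    Value=json.dumps(params),
    Type="String",
    Overwrite=True,
)
```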