This example showcases how you can efficiently train a large number of models and run inference with them in one workflow.
$ cdk deploy
- Open the Step Functions console: https://eu-west-1.console.aws.amazon.com/states/home (make sure to select the region you deployed to)
- Open the state machine whose name starts with MLTraininingInference
- Press "Start execution" at the top right
- No input is required
- A distributed map creates a child workflow for every file in the source bucket; all subsequent steps run once per file
- Training of a model
- Waiting for training to finish
- Saving the model to SageMaker to make it available for inference
- Inference using the saved model and the data in the target bucket
- Waiting for inference to finish
- Deletion of saved model
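The fan-out above can be sketched in plain Python. This is a minimal simulation of the control flow only; every function and step name below is a hypothetical stand-in for the actual Step Functions states and SageMaker API calls, and the waiting states are folded into their preceding steps for brevity:

```python
# Minimal sketch: the distributed map starts one child workflow per file in
# the source bucket; each child trains a model, saves it, runs inference with
# it, and finally deletes it again. All names are hypothetical stand-ins.

def child_workflow(source_file: str) -> list[str]:
    # One child workflow, executed for a single file from the source bucket.
    model = f"model-{source_file}"
    return [
        f"train:{source_file}",   # start a training job (and wait for it)
        f"save:{model}",          # save the trained model to SageMaker
        f"infer:{model}",         # run inference (and wait for it)
        f"delete:{model}",        # delete the saved model
    ]

def distributed_map(source_files: list[str]) -> list[str]:
    # Step Functions runs the children in parallel; sequential here for clarity.
    steps: list[str] = []
    for f in source_files:
        steps.extend(child_workflow(f))
    return steps

print(distributed_map(["a.csv", "b.csv"]))
```

Running the sketch over two files shows the same train → save → infer → delete sequence repeated per file, which is exactly what the distributed map does for every object in the source bucket.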
This is a sample solution intended as a starting point; do not use it in a production setting without thorough analysis and consideration on your side.
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.
The folder images/ is based on this repository from AWS Samples.