Note that this repository is outdated: we are now using the next generation of the MLCommons CK workflow automation meta-framework (Collective Mind aka CM) developed by the open working group. Feel free to join this community effort to learn how to modularize ML Systems and automate their benchmarking, optimization and deployment in the real world!
This repository is compatible with the MLCommons CK framework v2.5.8 (Apache 2.0 license):
- CK-powered MLPerf™ benchmark automation and design space exploration
- CK-powered MLPerf™ inference submission automation
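To get started with the CK framework version mentioned above, the following is a minimal sketch using CK's Python API (`ck.kernel.access`); it assumes CK v2.5.8 is installed (e.g. via `pip install ck==2.5.8`) and that this repository is registered in CK under the alias `mlcommons@ck-mlops` (please check the actual alias on GitHub):

```python
# Minimal sketch: pull this repository with the CK Python API.
# Assumes CK v2.5.8 is installed (e.g. `pip install ck==2.5.8`);
# the repository alias "mlcommons@ck-mlops" is an assumption.

import ck.kernel as ck

# Equivalent to the CLI command: ck pull repo:mlcommons@ck-mlops
r = ck.access({'action': 'pull',
               'module_uoa': 'repo',
               'data_uoa': 'mlcommons@ck-mlops',  # assumed alias of this repo
               'out': 'con'})
if r['return'] > 0:
    print('CK error: ' + r['error'])
```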
This repository contains a collection of stable CK components (automation recipes and workflows) to automate benchmarking, optimization and deployment of ML Systems across diverse platforms, environments, frameworks, models and data sets:
- CK automation recipes for MLOps: [inside CK framework] [in this repo]
- CK portable program workflows: [list]
- CK portable meta packages: [list]
- CK environment detection (software, models, data sets): [list]
- CK OS descriptions: [list]
- CK adaptive containers: [list]
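The component categories above are all driven through the same unified CK API. The following is a minimal sketch of typical calls (each with its CLI equivalent in a comment); the tags used for software detection and package installation are hypothetical placeholders and may not exist in this repository:

```python
# Minimal sketch: using CK components (programs, packages, soft detection)
# through the unified API ck.kernel.access. The tags below are hypothetical
# placeholders for illustration only.

import ck.kernel as ck

def ck_call(query):
    """Call the CK API and raise an error if the call fails."""
    r = ck.access(query)
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'unknown CK error'))
    return r

# List all portable program workflows (CLI: ck list program).
programs = ck_call({'action': 'list', 'module_uoa': 'program'})
print('Found {} program workflows'.format(len(programs['lst'])))

# Detect an installed software dependency (CLI: ck detect soft --tags=...);
# the tags are an illustrative assumption.
ck_call({'action': 'detect', 'module_uoa': 'soft',
         'tags': 'compiler,python', 'out': 'con'})

# Install a portable meta package (CLI: ck install package --tags=...);
# again, the tags are an illustrative assumption.
ck_call({'action': 'install', 'module_uoa': 'package',
         'tags': 'lib,python-package,onnx', 'out': 'con'})
```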
- Developing a platform to automate SW/HW co-design for ML Systems across diverse models, data sets, frameworks and platforms based on user constraints in terms of speed, accuracy, energy and costs: OctoML.ai & cKnowledge.io
- Automating the MLPerf™ inference benchmark and packaging ML models, data sets and frameworks as CK components with a unified API and meta description (see the API sketch after this list)
- Providing a common format to share artifacts at ML, systems and other conferences: video, Artifact Evaluation
- Redesigning CK together with the community based on user feedback
- Real-world use cases from our partners: overview
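As a sketch of the unified API and meta description mentioned above: every CK component (model, data set, framework package, ...) is stored with a JSON meta description that can be queried programmatically. The `mlperf` tag below is a hypothetical placeholder for illustration:

```python
# Minimal sketch: querying CK component meta descriptions via the unified API.
# The "mlperf" tag is an assumed example and may not match entries in this repo.

import ck.kernel as ck

# Find packages whose meta description carries a given tag
# (CLI: ck search package --tags=mlperf).
r = ck.access({'action': 'search',
               'module_uoa': 'package',
               'tags': 'mlperf'})
if r['return'] > 0:
    raise RuntimeError(r['error'])

for entry in r['lst']:
    # Load the unified meta description (meta.json) of each found component.
    meta = ck.access({'action': 'load',
                      'module_uoa': 'package',
                      'data_uoa': entry['data_uid']})
    if meta['return'] == 0:
        print(entry['data_uoa'], '->', meta['dict'].get('tags', []))
```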
Don't hesitate to report issues or submit feature requests here.
Contact Grigori Fursin to join our MLCommons Design Space Exploration Workgroup (subgroup of Best Practices)!