The Modular Accelerated Xecution (MAX) platform is an integrated suite of AI libraries, tools, and technologies that unifies commonly fragmented AI deployment workflows. MAX accelerates time to market for the latest innovations by giving AI developers a single toolchain that unlocks full programmability, unparalleled performance, and seamless hardware portability.
See the MAX documentation at docs.modular.com/max to get started with MAX. To report issues or request features, please create a GitHub issue at https://github.com/modularml/max/issues.
The Discord community is the best place to share your experiences and chat with the team and other community members.
In the examples directory, you will find code examples and Jupyter notebooks that show how to run inference with MAX.
MAX is available in both stable and nightly builds. To install either version, follow the guide to create a project with Magic.
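For reference, creating the project itself looks roughly like the following. This is a minimal sketch under assumptions: the project name is arbitrary, and the package name and format flag are taken from the general Magic workflow, so treat the linked guide as the authoritative source.

# A minimal sketch; the project name is arbitrary and the package and flag
# details are assumptions -- follow the linked Magic guide if they differ.
magic init max-project --format pyproject   # create a new Magic project
cd max-project
magic add max                               # add the MAX package to the project environment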
Then clone this repository:
git clone https://github.com/modularml/max.git
If you installed the nightly build, be sure you switch to the nightly branch, because the main branch is for stable releases and might not be compatible with nightly builds:
git checkout nightly
To show off the full power of MAX, the repository includes a series of ready-to-run, end-to-end pipelines for common AI workloads (and more). For example, this includes everything needed to self-host the Llama 3.1 text-generation model. All of the code is provided so that these pipelines can be customized, built upon, or learned from.
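As a rough illustration, running a pipeline from a clone of this repository looks something like the following. The directory and task name are placeholders and assumptions, so check the README alongside each pipeline for the exact invocation and the options it supports.

# Illustrative only: the directory and task name are placeholders -- see
# the Llama 3.1 pipeline's README for the real command and options.
cd max/pipelines/<pipeline-directory>
magic run llama3 --prompt "What is the meaning of life?"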
In addition to the end-to-end pipelines, there are many examples that exercise various aspects of MAX. You can follow the instructions in the README for each example or notebook you want to run.
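The general pattern is the same across them: enter the example's directory and use magic run to execute its entry point inside that example's environment. The names below are placeholders rather than real files in the repository.

# Illustrative pattern only; the directory and script names are placeholders,
# and each example's README documents the actual entry point.
cd max/examples/<example-name>
magic run python run_inference.py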
Check out the notebook examples for using MAX Engine 🏎️ with a variety of models.
The tutorials directory contains the "finished" code for tutorials you can read at docs.modular.com/max/tutorials.
To deploy MAX on AWS, you can pull our Docker container from the public ECR gallery (https://gallery.ecr.aws/modular/max-serving) using the image name public.ecr.aws/modular/max-serving.
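For example, you can pull the container and smoke-test it locally with Docker before wiring it into an AWS service. The tag, port mapping, and model mount below are assumptions; consult the MAX serving documentation for the options your deployment actually needs.

# Pull the MAX serving container from the public ECR gallery (the tag is
# an assumption; check the gallery page for available tags).
docker pull public.ecr.aws/modular/max-serving:latest

# A minimal local run; the exposed port and model volume mount are
# assumptions -- adjust them to match the serving docs for your model.
docker run --rm -p 8000:8000 \
  -v "$(pwd)/models:/models" \
  public.ecr.aws/modular/max-serving:latest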
Thanks for your interest in contributing to this repository! We are not accepting pull requests yet.
However, we welcome your bug reports. If you find a bug, please file an issue at https://github.com/modularml/max/issues.
If you need support, the Discord community is the best place to share your experiences and chat with the team and other community members.
This repository and its contributions are licensed under the Apache License v2.0 with LLVM Exceptions (see the LLVM License). MAX and Mojo usage and distribution are licensed under the MAX & Mojo Community License.
You are entirely responsible for checking and validating the licenses of third parties (e.g., Hugging Face) for any related software and libraries that you download.