Announcing LLM Foundry and the MPT foundation series

Released by @dakinggg · 08 May 23:12

🚀 LLM Foundry v0.1.0

This is the first release of MosaicML's LLM Foundry!

Our efficient code for training, evaluating, and deploying LLMs outgrew our examples repository, so we've migrated to a brand new repository dedicated to everything LLMs. Keep watching this space and see the top-level README and our blog post for more details on this announcement!

Model releases

In addition to all the open-source code released here, we're releasing four open-source models that we hope will be useful to the community. All models were trained on the MosaicML platform, using Composer and Streaming. If you're interested in training your own models, or using these models with our optimized inference stack, please reach out!

  • mpt-7b: This is our base 7-billion parameter model, trained for 1 trillion tokens. This model is released with an Apache-2.0 (commercial use permitted) license.
  • mpt-7b-storywriter: All of the models use ALiBi to allow them to extrapolate to longer sequence lengths than they saw during training, but storywriter is our long-context model, further pretrained on 65k-token excerpts of a fiction subset of the books3 corpus (see the sketch after this list). This model is released with an Apache-2.0 (commercial use permitted) license.
  • mpt-7b-instruct: This model is instruction finetuned on a dataset we also release, derived from Databricks' Dolly-15k and Anthropic’s Helpful and Harmless datasets. This model is released with a CC-By-SA-3.0 (commercial use permitted) license.
  • mpt-7b-chat: This model is trained to be able to chat by further training on the ShareGPT-Vicuna, HC3, Alpaca, Helpful and Harmless, and Evol-Instruct datasets. This model is released with a CC-By-NC-SA-4.0 (non-commercial use only) license.
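Because ALiBi encodes position as an attention bias rather than a learned positional embedding, the usable context window can be raised at load time. A minimal sketch, following the pattern shown on the MPT model cards (the max_seq_len value is illustrative):

```python
import transformers

# Raise the context window at load time; ALiBi lets the model extrapolate
# beyond the sequence length it saw during training.
name = "mosaicml/mpt-7b-storywriter"
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 83968  # illustrative; pick a length your hardware can fit

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True,  # MPT's modeling code ships with the checkpoint
)
```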

Features

Training

We release fully featured code for efficiently training any HuggingFace LLM (including our optimized MPT) using FSDP, Composer, and Streaming. Seamlessly scale to multi-GPU and multi-node training, stream your data from one cloud while training on another, write checkpoints to a third, send your training logs to Weights & Biases, and much more. See the README for detailed instructions on getting started with pretraining and finetuning!
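For a flavor of what the stack does under the hood, here is a minimal, hypothetical sketch that trains a small HuggingFace model with Composer; the model name, toy dataset, and hyperparameters are illustrative, and the real entrypoint is the repo's train script, which drives all of this (plus Streaming and logging) from a YAML config:

```python
import torch
from composer import Trainer
from composer.models import HuggingFaceModel
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

class TinyTextDataset(Dataset):
    """Toy stand-in for a real tokenized pretraining stream."""
    def __init__(self, tokenizer, texts, max_len=64):
        self.enc = tokenizer(texts, truncation=True, max_length=max_len,
                             padding="max_length", return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].shape[0]
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = item["input_ids"].clone()  # causal LM objective
        return item

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = HuggingFaceModel(AutoModelForCausalLM.from_pretrained("gpt2"),
                         tokenizer=tokenizer)

dataset = TinyTextDataset(tokenizer, ["MosaicML trains LLMs efficiently."] * 8)
trainer = Trainer(
    model=model,
    train_dataloader=DataLoader(dataset, batch_size=4),
    max_duration="1ep",
    # fsdp_config={"sharding_strategy": "FULL_SHARD"},  # uncomment under a
    # multi-process launcher to shard the model across GPUs with FSDP
)
trainer.fit()
```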

Our MPT model is equipped with the latest advancements in training large transformers (e.g. ALiBi, the LION optimizer, FlashAttention), and is designed to be easily hackable, configurable, and extendable!

Evaluation

Our evaluation framework makes it easy to fully re-evaluate any HuggingFace model. We also include copies of the processed data for many popular benchmarks, to make it easy to replicate our evals and perform your own! We welcome the addition of new benchmarks to our suite. In our previous benchmarks, our setup was 8x faster than other eval frameworks on a single GPU, and it scales linearly across multiple GPUs. Built-in support for FSDP makes it possible to evaluate large models and use larger batch sizes for further acceleration.
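The harness drives everything from YAML, but the core measurement is standard. As a minimal sketch of the kind of check it automates (model name illustrative; the framework adds batching, FSDP, multi-GPU scaling, and the packaged benchmark data on top):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-in model; the framework can evaluate any HuggingFace LLM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = "Evaluation measures how well the model predicts held-out text."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(f"perplexity: {torch.exp(loss).item():.2f}")
```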

Inference

MPT is designed to be fast, easy, and cheap to deploy for inference. To begin with, all MPT models are subclassed from the HuggingFace PreTrainedModel base class, which means that they are fully compatible with the HuggingFace ecosystem. You can upload MPT models to the HuggingFace Hub, generate outputs with standard APIs like model.generate(...), build HuggingFace Spaces (see some of ours here!), and more.
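For example, loading MPT-7B from the Hub and generating with the standard API looks like the following sketch. MPT uses the EleutherAI/gpt-neox-20b tokenizer, and trust_remote_code is needed because the modeling code ships with the checkpoint:

```python
import torch
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("MosaicML is", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```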

What about performance? With MPT’s optimized layers (including FlashAttention and low precision layernorm), the out-of-the-box performance of MPT-7B on GPUs when using model.generate(...) is 1.5x-2x faster than other 7B models like LLaMa-7B. This makes it easy to build fast and flexible inference pipelines with just HuggingFace and PyTorch.
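Those optimized layers are toggled through the model config. Following the pattern on the MPT model card, a sketch of enabling the Triton FlashAttention kernel and low-precision weights on a GPU:

```python
import torch
import transformers

name = "mosaicml/mpt-7b"
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config["attn_impl"] = "triton"  # FlashAttention-style fused kernel

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # low-precision weights for faster generation
    trust_remote_code=True,
)
model.to(device="cuda")
```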

Finally, for the best hosting experience, deploy your MPT models directly on MosaicML’s Inference service. Start with our managed endpoints for models like MPT-7B-Instruct, and/or deploy your own custom model endpoints for optimal cost and data privacy. Check out the Inference blog post for more details!