Mistral NeMo | Mistral AI | Frontier AI in your hands #851
Labels
AI-Chatbots
Topics related to advanced chatbot platforms integrating multiple AI models
finetuning
Tools for fine-tuning LLMs, e.g. SFT or RLHF
llm
Large Language Models
Models
LLM and ML model repos and links
New-Label
Choose this option if the existing labels are insufficient to describe the content accurately
Mistral NeMo | Mistral AI | Frontier AI in your hands
"Today, we are excited to release Mistral NeMo, a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B.
We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantisation awareness, enabling FP8 inference without any performance loss.
The following table compares the accuracy of the Mistral NeMo base model with two recent open-source pre-trained models, Gemma 2 9B and Llama 3 8B."
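Since the announcement describes standard pre-trained and instruction-tuned checkpoints, the model should load through the usual Hugging Face `transformers` workflow. Below is a minimal sketch; the repo id `mistralai/Mistral-Nemo-Instruct-2407` is an assumption about how the checkpoints are published, and the generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id for the instruction-tuned checkpoint.
model_id = "mistralai/Mistral-Nemo-Instruct-2407"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place layers on available GPU(s)/CPU
)

# Build a chat prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize the Mistral NeMo release in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because Mistral NeMo relies on a standard architecture, this is the same code path one would use for Mistral 7B, which is what makes it a drop-in replacement in existing pipelines.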
Suggested labels
{"label-name": "Large AI Model", "label-description": "Refers to state-of-the-art large AI models like Mistral NeMo with a context window of up to 128k tokens.", "gh-repo": "AI-Chatbots", "confidence": 63.31}