Cohere Command R

Command R is a scalable generative model targeting RAG and Tool Use to enable production-scale AI for enterprise.
Context: 131k input · 4k output
Training date: Undisclosed

Command R is a highly performant generative large language model, optimized for a variety of use cases including reasoning, summarization, and question answering.

The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.

Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian.

Resources

For full details of this model, see the release blog post.

Model Architecture

This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.

Tool use capabilities

Command R has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.

Command R's tool use functionality takes a conversation as input (with an optional user-supplied system preamble), along with a list of available tools. The model will then generate a JSON-formatted list of actions to execute on a subset of those tools. Command R may use one of its supplied tools more than once.

The model has been trained to recognize a special directly_answer tool, which it uses to indicate that it doesn't want to use any of its other tools. The ability to abstain from calling other tools can be useful in a range of situations, such as greeting a user or asking clarifying questions. We recommend including the directly_answer tool, but it can be removed or renamed if required.
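
The snippet below is a minimal sketch of conversational tool use, assuming the Cohere Python SDK's v1 `Client.chat` API; the API key placeholder, model id, and example tool are illustrative and not part of this model card:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder; supply your own key

# Hypothetical tool definition, for illustration only.
tools = [
    {
        "name": "query_daily_sales_report",
        "description": "Retrieves the sales report for a given day.",
        "parameter_definitions": {
            "day": {
                "description": "The date to query, in YYYY-MM-DD format.",
                "type": "str",
                "required": True,
            }
        },
    }
]

response = co.chat(
    model="command-r",
    message="What were total sales on 2023-09-29?",
    tools=tools,
)

# The model emits a JSON-formatted list of tool calls to execute;
# a directly_answer call means it chose not to use any other tool.
for call in response.tool_calls or []:
    print(call.name, call.parameters)
```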

Grounded Generation and RAG Capabilities

Command R has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.

Command R's grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.

By default, Command R generates grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer, and finally inserting grounding spans into the answer. See below for an example. This is referred to as accurate grounded generation.
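
As a sketch of that flow, again assuming the Cohere Python SDK's v1 `Client.chat` API (the document snippets and model id are illustrative):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder; supply your own key

# Document snippets are key-value pairs: short descriptive keys with
# text or semi-structured values, ideally chunks of 100-400 words.
documents = [
    {
        "title": "Tall penguins",
        "snippet": "Emperor penguins are the tallest penguin species.",
    },
    {
        "title": "Penguin habitats",
        "snippet": "Emperor penguins live only in Antarctica.",
    },
]

response = co.chat(
    model="command-r",
    message="Which penguins are the tallest, and where do they live?",
    documents=documents,
)

print(response.text)
# Each citation is a grounding span: character offsets into the answer
# plus the ids of the document snippets supporting that span.
for citation in response.citations or []:
    print(citation.start, citation.end, citation.document_ids)
```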

The model is trained with a number of other answering modes, which can be selected by prompt changes. A fast citation mode is supported in the tokenizer, which directly generates an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.

Code Capabilities

Command R has been optimized to interact with your code, whether by requesting code snippets, code explanations, or code rewrites. It might not perform well out of the box for pure code completion. For better performance, we also recommend using a low temperature (or even greedy decoding) for code-generation instructions.
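
For instance, a low-temperature code-rewrite request might look like the following sketch (same assumed SDK and placeholders as above):

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder; supply your own key

response = co.chat(
    model="command-r",
    message=(
        "Rewrite this Python loop as a list comprehension:\n"
        "result = []\n"
        "for x in range(10):\n"
        "    result.append(x * x)"
    ),
    temperature=0.0,  # near-greedy decoding for deterministic code output
)
print(response.text)
```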

Languages (10)

English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic
