add cohere docs
EPMatt committed May 14, 2024
1 parent d1e4189 commit 7a60443
Showing 2 changed files with 59 additions and 8 deletions.
35 changes: 29 additions & 6 deletions docs/docs/plugins/cohere.md
@@ -37,12 +37,35 @@ Install the plugin in your project with your favorite package manager:

## Usage

:::warning

Documentation is currently a work in progress.

:::

### Basic examples

The simplest way to call the text generation model is by using the helper function `generate`:
```typescript
import { generate } from '@genkit-ai/ai';
import { commandRPlus } from 'genkitx-cohere';

// Basic usage of an LLM
const response = await generate({
  model: commandRPlus,
  prompt: 'Tell me a joke.',
});
console.log(await response.text());
```

Using the same interface, you can prompt a multimodal model:
```typescript
const response = await generate({
  model: commandRPlus,
  prompt: [
    { text: 'What animal is in the photo?' },
    { media: { url: imageUrl } },
  ],
  config: {
    // Control the level of visual detail used when processing image embeddings.
    // A low detail level also reduces token usage.
    visualDetailLevel: 'low',
  },
});
console.log(await response.text());
```
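As a plain-data sketch, the multimodal prompt above is just an array of text and media parts. The part shapes (`{ text }`, `{ media: { url } }`) and the `imageUrl` value here mirror the example; treat them as assumptions drawn from the snippet, not a verified schema reference:

```typescript
// Sketch: the multimodal prompt as plain data. Field names follow the
// example above and are assumptions, not a definitive API schema.
type TextPart = { text: string };
type MediaPart = { media: { url: string } };
type Part = TextPart | MediaPart;

const imageUrl = 'https://example.com/photo.jpg'; // hypothetical image URL

const prompt: Part[] = [
  { text: 'What animal is in the photo?' },
  { media: { url: imageUrl } },
];

console.log(prompt.length); // 2
```

Structuring the prompt this way makes it easy to build the part array programmatically (for example, appending extra media parts) before passing it to `generate`.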
## Contributing

Want to contribute to the project? That's awesome! Head over to our [Contribution Guidelines](https://github.com/TheFireCo/genkit-plugins/blob/main/CONTRIBUTING.md).
32 changes: 30 additions & 2 deletions plugins/cohere/README.md
@@ -32,8 +32,36 @@ Install the plugin in your project with your favorite package manager:

## Usage

> [!WARNING]
> Documentation is currently a work in progress.

### Basic examples

The simplest way to call the text generation model is by using the helper function `generate`:
```typescript
import { generate } from '@genkit-ai/ai';
import { commandRPlus } from 'genkitx-cohere';

// Basic usage of an LLM
const response = await generate({
  model: commandRPlus,
  prompt: 'Tell me a joke.',
});
console.log(await response.text());
```

Using the same interface, you can prompt a multimodal model:
```typescript
const response = await generate({
  model: commandRPlus,
  prompt: [
    { text: 'What animal is in the photo?' },
    { media: { url: imageUrl } },
  ],
  config: {
    // Control the level of visual detail used when processing image embeddings.
    // A low detail level also reduces token usage.
    visualDetailLevel: 'low',
  },
});
console.log(await response.text());
```


## Contributing

