From fab8ab98957c3f0dcac2b10b5a2ca01b345a78d3 Mon Sep 17 00:00:00 2001
From: Nate Sesti
Date: Wed, 22 May 2024 16:18:07 -0700
Subject: [PATCH] =?UTF-8?q?=F0=9F=93=9D=20update=20Ollama=20embeddings=20d?=
 =?UTF-8?q?ocs?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/docs/walkthroughs/codebase-embeddings.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/docs/walkthroughs/codebase-embeddings.md b/docs/docs/walkthroughs/codebase-embeddings.md
index 705889bc21..3f093df62d 100644
--- a/docs/docs/walkthroughs/codebase-embeddings.md
+++ b/docs/docs/walkthroughs/codebase-embeddings.md
@@ -82,13 +82,13 @@ We also support other methods of generating embeddings, which can be configured
 ### Ollama
 
-[Ollama](https://ollama.ai) is the easiest way to get up and running with open-source language models. It provides an entirely local REST API for working with LLMs, including generating embeddings. The embeddings generated are slightly larger, with a size of 4096 for `codellama:7b`.
+[Ollama](https://ollama.ai) is the easiest way to get up and running with open-source language models. It provides an entirely local REST API for working with LLMs, including generating embeddings. We recommend using an embeddings model like `nomic-embed-text`:
 
 ```json title="~/.continue/config.json"
 {
   "embeddingsProvider": {
     "provider": "ollama",
-    "model": "codellama:7b",
+    "model": "nomic-embed-text",
     "apiBase": "http://localhost:11434" // optional, defaults to http://localhost:11434
   }
 }
 ```
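For reference, the local REST API that the updated paragraph describes can be exercised directly. A minimal sketch follows, assuming an Ollama server running on the default port 11434 with the `nomic-embed-text` model already pulled; it builds the JSON request body for Ollama's `/api/embeddings` endpoint (the shape shown here — `model` plus `prompt` — is what that endpoint accepts):

```python
import json

# Request body for Ollama's /api/embeddings endpoint:
# the embeddings model from the updated config example, plus the text to embed.
payload = json.dumps({
    "model": "nomic-embed-text",
    "prompt": "def hello(): pass",
})

# Sending it requires a running Ollama server (not executed here), e.g.:
#   curl http://localhost:11434/api/embeddings -d '<payload>'
# The response contains an "embedding" field holding the vector.
print(payload)
```

This is the same model/endpoint pairing the patched `config.json` configures; Continue issues equivalent requests internally when indexing a codebase.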