
Rename OpenAIProcessorConversationConfig DB model to more apt AiModelApi (#998)

* Rename OpenAIProcessorConversationConfig to more apt AiModelAPI

The DB model name had drifted from what it is used for: a general chat API
config that supports chat API providers like Anthropic and Google apart
from OpenAI-based chat models.

This change renames the DB model and updates the docs to remove this
confusion.

Using the AI Model API name, we cover most use-cases including chat, STT, image generation etc.
debanjum authored Dec 9, 2024
1 parent df66fb2 commit 9dd3782
Showing 18 changed files with 108 additions and 105 deletions.
9 changes: 4 additions & 5 deletions documentation/docs/advanced/admin.md
Original file line number Diff line number Diff line change
@@ -38,14 +38,13 @@ To add a server chat setting:
- The `Advanced` field doesn't need to be set when self-hosting. When unset, the `Default` chat model is used for all users and the intermediate steps.


### OpenAI Processor Conversation Configs
These settings configure chat model providers to be accessed over API.
The name of this setting is kind of a misnomer, we know, it'll hopefully be changed at some point.
For each chat model provider you [add](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add):
### AI Model API
These settings configure APIs to interact with AI models.
For each AI Model API you [add](http://localhost:42110/server/admin/database/aimodelapi/add):
- `Api key`: Set to your [OpenAI](https://platform.openai.com/api-keys), [Anthropic](https://console.anthropic.com/account/keys) or [Gemini](https://aistudio.google.com/app/apikey) API keys.
- `Name`: Give the configuration any friendly name like `OpenAI`, `Gemini`, `Anthropic`.
- `Api base url`: Set the API base URL. This is only relevant to set if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).
![example configuration for openai processor](/img/example_openai_processor_config.png)
![example configuration for ai model api](/img/example_openai_processor_config.png)
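The three admin fields above can be sketched as a simple record. This is an illustrative stand-in, not Khoj's actual Django model; the class and field names here are assumptions for the example:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AiModelApiConfig:
    """Illustrative sketch of the admin panel fields; not Khoj's real schema."""

    name: str                            # friendly name, e.g. "OpenAI", "Gemini"
    api_key: str                         # provider API key
    api_base_url: Optional[str] = None   # only for OpenAI-compatible proxies

# Base URL left unset when talking to the provider directly
openai_cfg = AiModelApiConfig(name="OpenAI", api_key="sk-...")

# Base URL set when pointing at a local OpenAI-compatible proxy like Ollama
ollama_cfg = AiModelApiConfig(
    name="Ollama",
    api_key="any string",
    api_base_url="http://localhost:11434/v1/",
)
print(ollama_cfg.name)  # → Ollama
```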

### Search Model Configs
Search models are used to generate vector embeddings of your documents for natural language search and chat. You can choose any [embeddings model on HuggingFace](https://huggingface.co/models?pipeline_tag=sentence-similarity) to create vector embeddings of your documents.
2 changes: 1 addition & 1 deletion documentation/docs/advanced/litellm.md
@@ -21,7 +21,7 @@ Using LiteLLM with Khoj makes it possible to turn any LLM behind an API into you
export MISTRAL_API_KEY=<MISTRAL_API_KEY>
litellm --model mistral/mistral-tiny --drop_params
```
3. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
3. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
- Name: `proxy-name`
- Api Key: `any string`
- Api Base Url: **URL of your OpenAI Proxy API**
2 changes: 1 addition & 1 deletion documentation/docs/advanced/ollama.mdx
@@ -60,7 +60,7 @@ Restart your Khoj server after first run or update to the settings below to ensu
```bash
ollama pull llama3.1
```
3. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
3. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
- Name: `ollama`
- Api Key: `any string`
- Api Base Url: `http://localhost:11434/v1/` (default for Ollama)
6 changes: 3 additions & 3 deletions documentation/docs/advanced/use-openai-proxy.md
@@ -11,7 +11,7 @@ This is only helpful for self-hosted users. If you're using [Khoj Cloud](https:/
Khoj natively supports local LLMs [available on HuggingFace in GGUF format](https://huggingface.co/models?library=gguf). Using an OpenAI API proxy with Khoj may be useful for ease of setup, trying new models or using commercial LLMs via API.
:::

Khoj can use any OpenAI API compatible server including [Ollama](/advanced/ollama), [LMStudio](/advanced/lmstudio) and [LiteLLM](/advanced/litellm).
Khoj can use any OpenAI API compatible server, including local providers like [Ollama](/advanced/ollama), [LMStudio](/advanced/lmstudio) and [LiteLLM](/advanced/litellm), and commercial providers like [HuggingFace](https://huggingface.co/docs/api-inference/tasks/chat-completion#using-the-api), [OpenRouter](https://openrouter.ai/docs/quick-start) etc.
Configuring this allows you to use non-standard, open or commercial, local or hosted LLM models for Khoj.

Combining them with Khoj can turn your favorite LLM into an AI agent, allowing you to chat with your docs, find answers from the internet, build custom agents and run automations.
@@ -20,8 +20,8 @@ For specific integrations, see our [Ollama](/advanced/ollama), [LMStudio](/advan

## General Setup

1. Start your preferred OpenAI API compatible app
2. Create a new [OpenAI Processor Conversation Config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) on your Khoj admin panel
1. Start your preferred OpenAI API compatible app locally or get API keys from commercial AI model providers.
2. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
- Name: `any name`
- Api Key: `any string`
- Api Base Url: **URL of your OpenAI Proxy API**
12 changes: 6 additions & 6 deletions documentation/docs/get-started/setup.mdx
@@ -302,11 +302,11 @@ Setup which chat model you'd want to use. Khoj supports local and online chat mo
Using Ollama? See the [Ollama Integration](/advanced/ollama) section for more custom setup instructions.
:::

1. Create a new [OpenAI processor conversation config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) in the server admin settings. This is kind of a misnomer, we know.
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [OpenAI API key](https://platform.openai.com/api-keys)
- Give the configuration a friendly name like `OpenAI`
- (Optional) Set the API base URL. It is only relevant if you're using another OpenAI-compatible proxy server like [Ollama](/advanced/ollama) or [LMStudio](/advanced/lmstudio).<br />
![example configuration for openai processor](/img/example_openai_processor_config.png)
![example configuration for ai model api](/img/example_openai_processor_config.png)
2. Create a new [chat model options](http://localhost:42110/server/admin/database/chatmodeloptions/add)
- Set the `chat-model` field to an [OpenAI chat model](https://platform.openai.com/docs/models). Example: `gpt-4o`.
- Make sure to set the `model-type` field to `OpenAI`.
Expand All @@ -315,22 +315,22 @@ Using Ollama? See the [Ollama Integration](/advanced/ollama) section for more cu
![example configuration for chat model options](/img/example_chatmodel_option.png)
</TabItem>
<TabItem value="anthropic" label="Anthropic">
1. Create a new [OpenAI processor conversation config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) in the server admin settings. This is kind of a misnomer, we know.
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [Anthropic API key](https://console.anthropic.com/account/keys)
- Give the configuration a friendly name like `Anthropic`. Do not configure the API base url.
2. Create a new [chat model options](http://localhost:42110/server/admin/database/chatmodeloptions/add)
- Set the `chat-model` field to an [Anthropic chat model](https://docs.anthropic.com/en/docs/about-claude/models#model-names). Example: `claude-3-5-sonnet-20240620`.
- Set the `model-type` field to `Anthropic`.
- Set the `Openai config` field to the OpenAI processor conversation config for Anthropic you created in step 1.
- Set the `ai model api` field to the Anthropic AI Model API you created in step 1.
</TabItem>
<TabItem value="gemini" label="Gemini">
1. Create a new [OpenAI processor conversation config](http://localhost:42110/server/admin/database/openaiprocessorconversationconfig/add) in the server admin settings. This is kind of a misnomer, we know.
1. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) in the server admin settings.
- Add your [Gemini API key](https://aistudio.google.com/app/apikey)
- Give the configuration a friendly name like `Gemini`. Do not configure the API base url.
2. Create a new [chat model options](http://localhost:42110/server/admin/database/chatmodeloptions/add)
- Set the `chat-model` field to a [Google Gemini chat model](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#gemini-models). Example: `gemini-1.5-flash`.
- Set the `model-type` field to `Gemini`.
- Set the `Openai config` field to the OpenAI processor conversation config for Gemini you created in step 1.
- Set the `ai model api` field to the Gemini AI Model API you created in step 1.

</TabItem>
<TabItem value="offline" label="Offline">
6 changes: 3 additions & 3 deletions src/khoj/configure.py
@@ -232,9 +232,9 @@ def configure_server(
config = FullConfig()
state.config = config

if ConversationAdapters.has_valid_openai_conversation_config():
openai_config = ConversationAdapters.get_openai_conversation_config()
state.openai_client = openai.OpenAI(api_key=openai_config.api_key)
if ConversationAdapters.has_valid_ai_model_api():
ai_model_api = ConversationAdapters.get_ai_model_api()
state.openai_client = openai.OpenAI(api_key=ai_model_api.api_key)

# Initialize Search Models from Config and initialize content
try:
30 changes: 15 additions & 15 deletions src/khoj/database/adapters/__init__.py
@@ -35,6 +35,7 @@

from khoj.database.models import (
Agent,
AiModelApi,
ChatModelOptions,
ClientApplication,
Conversation,
@@ -46,7 +47,6 @@
KhojApiUser,
KhojUser,
NotionConfig,
OpenAIProcessorConversationConfig,
ProcessLock,
PublicConversation,
ReflectiveQuestion,
@@ -981,7 +981,7 @@ def get_all_conversation_configs():

@staticmethod
async def aget_all_conversation_configs():
return await sync_to_async(list)(ChatModelOptions.objects.prefetch_related("openai_config").all())
return await sync_to_async(list)(ChatModelOptions.objects.prefetch_related("ai_model_api").all())

@staticmethod
def get_vision_enabled_config():
@@ -1000,12 +1000,12 @@ async def aget_vision_enabled_config():
return None

@staticmethod
def get_openai_conversation_config():
return OpenAIProcessorConversationConfig.objects.filter().first()
def get_ai_model_api():
return AiModelApi.objects.filter().first()

@staticmethod
def has_valid_openai_conversation_config():
return OpenAIProcessorConversationConfig.objects.filter().exists()
def has_valid_ai_model_api():
return AiModelApi.objects.filter().exists()

@staticmethod
@arequire_valid_user
@@ -1093,7 +1093,7 @@ async def aget_default_conversation_config(user: KhojUser = None):
server_chat_settings: ServerChatSettings = (
await ServerChatSettings.objects.filter()
.prefetch_related(
"chat_default", "chat_default__openai_config", "chat_advanced", "chat_advanced__openai_config"
"chat_default", "chat_default__ai_model_api", "chat_advanced", "chat_advanced__ai_model_api"
)
.afirst()
)
@@ -1109,15 +1109,15 @@ async def aget_default_conversation_config(user: KhojUser = None):

# Get the user's chat settings, if the server chat settings are not set
user_chat_settings = (
(await UserConversationConfig.objects.filter(user=user).prefetch_related("setting__openai_config").afirst())
(await UserConversationConfig.objects.filter(user=user).prefetch_related("setting__ai_model_api").afirst())
if user
else None
)
if user_chat_settings is not None and user_chat_settings.setting is not None:
return user_chat_settings.setting

# Get the first chat model if even the user chat settings are not set
return await ChatModelOptions.objects.filter().prefetch_related("openai_config").afirst()
return await ChatModelOptions.objects.filter().prefetch_related("ai_model_api").afirst()

@staticmethod
def get_advanced_conversation_config(user: KhojUser):
@@ -1130,7 +1130,7 @@ def get_advanced_conversation_config(user: KhojUser):
async def aget_advanced_conversation_config(user: KhojUser = None):
server_chat_settings: ServerChatSettings = (
await ServerChatSettings.objects.filter()
.prefetch_related("chat_advanced", "chat_advanced__openai_config")
.prefetch_related("chat_advanced", "chat_advanced__ai_model_api")
.afirst()
)
if server_chat_settings is not None and server_chat_settings.chat_advanced is not None:
@@ -1258,7 +1258,7 @@ def set_conversation_processor_config(user: KhojUser, new_config: ChatModelOptio
@staticmethod
async def aget_user_conversation_config(user: KhojUser):
config = (
await UserConversationConfig.objects.filter(user=user).prefetch_related("setting__openai_config").afirst()
await UserConversationConfig.objects.filter(user=user).prefetch_related("setting__ai_model_api").afirst()
)
if not config:
return None
@@ -1313,15 +1313,15 @@ def get_valid_conversation_config(user: KhojUser, conversation: Conversation):
ChatModelOptions.ModelType.OPENAI,
ChatModelOptions.ModelType.GOOGLE,
]
) and conversation_config.openai_config:
) and conversation_config.ai_model_api:
return conversation_config

else:
raise ValueError("Invalid conversation config - either configure offline chat or openai chat")

@staticmethod
async def aget_text_to_image_model_config():
return await TextToImageModelConfig.objects.filter().prefetch_related("openai_config").afirst()
return await TextToImageModelConfig.objects.filter().prefetch_related("ai_model_api").afirst()

@staticmethod
def get_text_to_image_model_config():
@@ -1343,9 +1343,9 @@ def get_user_text_to_image_model_config(user: KhojUser):

@staticmethod
async def aget_user_text_to_image_model(user: KhojUser) -> Optional[TextToImageModelConfig]:
# Create a custom queryset for prefetching settings__openai_config, handling null cases
# Create a custom queryset for prefetching settings__ai_model_api, handling null cases
settings_prefetch = Prefetch(
"setting", queryset=TextToImageModelConfig.objects.prefetch_related("openai_config")
"setting", queryset=TextToImageModelConfig.objects.prefetch_related("ai_model_api")
)

config = await UserTextToImageModelConfig.objects.filter(user=user).prefetch_related(settings_prefetch).afirst()
8 changes: 4 additions & 4 deletions src/khoj/database/admin.py
@@ -1,6 +1,6 @@
import csv
import json
from datetime import date, datetime, timedelta, timezone
from datetime import datetime, timedelta

from apscheduler.job import Job
from django.contrib import admin, messages
@@ -15,14 +15,14 @@

from khoj.database.models import (
Agent,
AiModelApi,
ChatModelOptions,
ClientApplication,
Conversation,
Entry,
GithubConfig,
KhojUser,
NotionConfig,
OpenAIProcessorConversationConfig,
ProcessLock,
ReflectiveQuestion,
SearchModelConfig,
@@ -232,8 +232,8 @@ class TextToImageModelOptionsAdmin(unfold_admin.ModelAdmin):
search_fields = ("id", "model_name", "model_type")


@admin.register(OpenAIProcessorConversationConfig)
class OpenAIProcessorConversationConfigAdmin(unfold_admin.ModelAdmin):
@admin.register(AiModelApi)
class AiModelApiAdmin(unfold_admin.ModelAdmin):
list_display = (
"id",
"name",
@@ -0,0 +1,26 @@
# Generated by Django 5.0.9 on 2024-12-05 09:00

from django.db import migrations


class Migration(migrations.Migration):
dependencies = [
("database", "0075_migrate_generated_assets_and_validate"),
]

operations = [
migrations.RenameModel(
old_name="OpenAIProcessorConversationConfig",
new_name="AiModelApi",
),
migrations.RenameField(
model_name="chatmodeloptions",
old_name="openai_config",
new_name="ai_model_api",
),
migrations.RenameField(
model_name="texttoimagemodelconfig",
old_name="openai_config",
new_name="ai_model_api",
),
]
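The `RenameModel` and `RenameField` operations above only relabel schema metadata; existing rows carry over unchanged. A minimal plain-Python illustration of what a field rename does to a stored row (a hypothetical stand-in, no Django involved):

```python
def rename_field(row: dict, old_name: str, new_name: str) -> dict:
    """Sketch of a field rename: the value under the old key moves to the new key."""
    renamed = dict(row)
    renamed[new_name] = renamed.pop(old_name)
    return renamed

# A hypothetical ChatModelOptions row before and after the migration
row = {"chat_model": "gpt-4o", "openai_config": 1}
migrated = rename_field(row, "openai_config", "ai_model_api")
print(migrated)  # → {'chat_model': 'gpt-4o', 'ai_model_api': 1}
```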
20 changes: 8 additions & 12 deletions src/khoj/database/models/__init__.py
@@ -181,7 +181,7 @@ class Type(models.TextChoices):
enabled_trial_at = models.DateTimeField(null=True, default=None, blank=True)


class OpenAIProcessorConversationConfig(DbBaseModel):
class AiModelApi(DbBaseModel):
name = models.CharField(max_length=200)
api_key = models.CharField(max_length=200)
api_base_url = models.URLField(max_length=200, default=None, blank=True, null=True)
@@ -200,9 +200,7 @@ class ModelType(models.TextChoices):
chat_model = models.CharField(max_length=200, default="bartowski/Meta-Llama-3.1-8B-Instruct-GGUF")
model_type = models.CharField(max_length=200, choices=ModelType.choices, default=ModelType.OFFLINE)
vision_enabled = models.BooleanField(default=False)
openai_config = models.ForeignKey(
OpenAIProcessorConversationConfig, on_delete=models.CASCADE, default=None, null=True, blank=True
)
ai_model_api = models.ForeignKey(AiModelApi, on_delete=models.CASCADE, default=None, null=True, blank=True)


class VoiceModelOption(DbBaseModel):
@@ -504,26 +502,24 @@ class ModelType(models.TextChoices):
model_name = models.CharField(max_length=200, default="dall-e-3")
model_type = models.CharField(max_length=200, choices=ModelType.choices, default=ModelType.OPENAI)
api_key = models.CharField(max_length=200, default=None, null=True, blank=True)
openai_config = models.ForeignKey(
OpenAIProcessorConversationConfig, on_delete=models.CASCADE, default=None, null=True, blank=True
)
ai_model_api = models.ForeignKey(AiModelApi, on_delete=models.CASCADE, default=None, null=True, blank=True)

def clean(self):
# Custom validation logic
error = {}
if self.model_type == self.ModelType.OPENAI:
if self.api_key and self.openai_config:
if self.api_key and self.ai_model_api:
error[
"api_key"
] = "Both API key and OpenAI config cannot be set for OpenAI models. Please set only one of them."
] = "Both API key and AI Model API cannot be set for OpenAI models. Please set only one of them."
error[
"openai_config"
"ai_model_api"
] = "Both API key and AI Model API cannot be set for OpenAI models. Please set only one of them."
if self.model_type != self.ModelType.OPENAI:
if not self.api_key:
error["api_key"] = "The API key field must be set for non OpenAI models."
if self.openai_config:
error["openai_config"] = "OpenAI config cannot be set for non OpenAI models."
if self.ai_model_api:
error["ai_model_api"] = "AI Model API cannot be set for non OpenAI models."
if error:
raise ValidationError(error)
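The `clean()` method above enforces two mutually exclusive rules: OpenAI models may set an API key or an AI Model API reference but not both, while non-OpenAI models must set an API key and must not set an AI Model API. A dependency-free sketch of the same checks (a simplification for illustration, not the Django code itself):

```python
def validate_text_to_image_config(model_type: str, api_key, ai_model_api) -> dict:
    """Sketch of the clean() rules above; returns a field -> error-message dict."""
    errors = {}
    if model_type == "openai":
        # OpenAI models: API key and AI Model API are mutually exclusive
        if api_key and ai_model_api:
            msg = ("Both API key and AI Model API cannot be set for OpenAI models. "
                   "Please set only one of them.")
            errors["api_key"] = msg
            errors["ai_model_api"] = msg
    else:
        # Non-OpenAI models: API key required, AI Model API disallowed
        if not api_key:
            errors["api_key"] = "The API key field must be set for non OpenAI models."
        if ai_model_api:
            errors["ai_model_api"] = "AI Model API cannot be set for non OpenAI models."
    return errors

print(validate_text_to_image_config("openai", "sk-...", None))  # → {}
```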

12 changes: 3 additions & 9 deletions src/khoj/migrations/migrate_server_pg.py
@@ -60,11 +60,7 @@

from packaging import version

from khoj.database.models import (
ChatModelOptions,
OpenAIProcessorConversationConfig,
SearchModelConfig,
)
from khoj.database.models import AiModelApi, ChatModelOptions, SearchModelConfig
from khoj.utils.yaml import load_config_from_file, save_config_to_file

logger = logging.getLogger(__name__)
@@ -121,16 +117,14 @@ def migrate_server_pg(args):
if openai.get("chat-model") is None:
openai["chat-model"] = "gpt-3.5-turbo"

openai_config = OpenAIProcessorConversationConfig.objects.create(
api_key=openai.get("api-key"), name="default"
)
openai_model_api = AiModelApi.objects.create(api_key=openai.get("api-key"), name="default")

ChatModelOptions.objects.create(
chat_model=openai.get("chat-model"),
tokenizer=processor_conversation.get("tokenizer"),
max_prompt_size=processor_conversation.get("max-prompt-size"),
model_type=ChatModelOptions.ModelType.OPENAI,
openai_config=openai_config,
ai_model_api=openai_model_api,
)

save_config_to_file(raw_config, args.config_file)
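The migration above maps the legacy YAML `openai` section onto the renamed models: one `AiModelApi` row holding the key, and a `ChatModelOptions` row pointing at it. A plain-dict sketch of that mapping, with field names taken from the snippet above (the function itself is hypothetical):

```python
def map_legacy_openai_config(openai_section: dict):
    """Sketch: legacy YAML keys -> AiModelApi / ChatModelOptions field dicts."""
    # The AiModelApi record carries the key under a "default" name
    ai_model_api = {"name": "default", "api_key": openai_section.get("api-key")}
    # The chat model record links back to it via the renamed ai_model_api field
    chat_model_options = {
        "chat_model": openai_section.get("chat-model", "gpt-3.5-turbo"),
        "model_type": "openai",
        "ai_model_api": ai_model_api,
    }
    return ai_model_api, chat_model_options

api, chat = map_legacy_openai_config({"api-key": "sk-test"})
print(chat["chat_model"])  # → gpt-3.5-turbo
```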
