
Adding MistralAI mode #2065

Merged: 17 commits merged into zylon-ai:main on Sep 24, 2024
Conversation

@itsliamdowd (Contributor) commented Aug 22, 2024

Description


I added a Mistral mode where users can interact with Mistral's API for chat completion and for generating embeddings. This mode requires the llama-index-embeddings-mistralai package from LlamaIndex to work properly and uses the already-installed llama-index-llms-openai-like package for the LLM.
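For context, a minimal sketch of how those two packages can be pointed at Mistral's API (this is an illustration, not the PR's actual component wiring; the model names, endpoint URL, and environment variable are assumptions):

```python
import os

from llama_index.embeddings.mistralai import MistralAIEmbedding
from llama_index.llms.openai_like import OpenAILike

# Hypothetical environment variable holding the Mistral API key.
api_key = os.environ["MISTRAL_API_KEY"]

# Embeddings via the dedicated MistralAI integration.
embed_model = MistralAIEmbedding(model_name="mistral-embed", api_key=api_key)

# Chat completion via the OpenAI-compatible client pointed at Mistral's endpoint.
llm = OpenAILike(
    model="mistral-small-latest",
    api_base="https://api.mistral.ai/v1",
    api_key=api_key,
    is_chat_model=True,
)

print(llm.complete("Say hello in one sentence."))
print(len(embed_model.get_text_embedding("hello")))
```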

Type of Change


  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

How Has This Been Tested?


  • I stared at the code and made sure it makes sense


Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my code

@jaluma (Collaborator) left a comment


Thank you for your contribution; it is very helpful. Please review the comments below :)

Review comments (outdated, resolved) on:

  • private_gpt/components/embedding/embedding_component.py
  • private_gpt/components/llm/llm_component.py
  • private_gpt/settings/settings.py
  • private_gpt/ui/ui.py
@itsliamdowd (Contributor, Author) commented

Thanks for the corrections. I removed the Mistral LLM case, but this results in settings being pulled from the OpenAI settings page instead of the Mistral settings page. Should I eliminate the Mistral settings page altogether?

@jaluma (Collaborator) commented Aug 28, 2024

> Thanks for the corrections. I removed the Mistral LLM case, but this results in settings being pulled from the OpenAI settings page instead of the Mistral settings page. Should I eliminate the Mistral settings page altogether?

I would say yes. With settings-mistralai.yaml, that's all you need to load the model, right? Reviewing the PR again, I realised that the documentation update is still missing :)

@itsliamdowd (Contributor, Author) commented

Yes, it would need api_base, api_key, model, embedding_model, prompt_style, and request_timeout. I removed settings-mistral.yaml because all of these can be included in the OpenAI-like settings with a documentation update.
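For reference, here is a hypothetical sketch of what such an OpenAI-like settings profile pointed at Mistral might look like. The section and key names are assumptions based on the fields listed above, not the exact schema in private_gpt/settings/settings.py, and the model names and endpoint are illustrative:

```yaml
# Hypothetical settings-*.yaml sketch; key names and placement are assumptions,
# not the exact schema shipped in private_gpt/settings/settings.py.
llm:
  mode: openailike
  prompt_style: default          # placement of prompt_style is assumed

embedding:
  mode: mistralai

openai:                          # OpenAI-like section reused for Mistral's API
  api_base: https://api.mistral.ai/v1
  api_key: <your-mistral-api-key>
  model: mistral-small-latest
  request_timeout: 120.0

mistralai:
  api_key: <your-mistral-api-key>
  embedding_model: mistral-embed
```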

jaluma previously approved these changes Sep 9, 2024

@jaluma (Collaborator) left a comment


Sorry for the delay @itsliamdowd.
LGTM. Thank you.
Can you merge main and execute poetry lock --no-update?

# Conflicts:
#	poetry.lock
@jaluma merged commit f9182b3 into zylon-ai:main on Sep 24, 2024. 6 checks passed.