integration: Add Hugging Face local models; ST for embeddings #402

Merged: 3 commits into run-llama:main on Nov 4, 2024

Conversation

tomaarsen (Contributor) commented Oct 28, 2024

Hello!

Pull Request overview

  • Add Hugging Face local models (both LLMs & embeddings)
  • Use Sentence Transformers (with ONNX) for local embeddings

Details

I did some simple testing locally, which worked. I've also added some environment variables for the embedding model, for example to specify the backend as torch, ONNX, or OpenVINO. Sentence Transformers with ONNX is equivalent to FastEmbed, except that FastEmbed requires the model to have been pre-exported to ONNX. As a result, FastEmbed supports only ~6 models, whereas Sentence Transformers supports almost 10k: https://huggingface.co/models?library=sentence-transformers.
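For illustration, here is a minimal sketch of the backend selection in Sentence Transformers (assuming sentence-transformers >= 3.2, which introduced the backend argument; the model name matches the default embedding model used here):

from sentence_transformers import SentenceTransformer

# backend may be "torch", "onnx", or "openvino"; if the repository has no
# ONNX export yet, Sentence Transformers exports one on the fly
model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")
embeddings = model.encode(["Hello world"])
print(embeddings.shape)  # (1, 384) for all-MiniLM-L6-v2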

cc @marcusschiesser as we discussed this on LinkedIn.

  • Tom Aarsen

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced support for Hugging Face models, enhancing model compatibility and performance.
    • Added new environment variables for configuring Sentence Transformers.
    • New initialization functions for Hugging Face models in settings.
    • Interactive model selection for Hugging Face through user prompts.
  • Bug Fixes

    • Updated dependency management to replace outdated libraries with Hugging Face dependencies.
  • Documentation

    • Expanded ModelProvider options to include "huggingface."

These changes collectively improve the flexibility and functionality of the application for users leveraging AI model integrations.


changeset-bot bot commented Oct 28, 2024

🦋 Changeset detected

Latest commit: 42632fb

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:

  • create-llama: Patch



coderabbitai bot commented Oct 28, 2024

Caution: Review failed. The pull request is closed.

Walkthrough

This pull request introduces significant changes to the model handling architecture by replacing the existing FastEmbed implementation with Sentence Transformers using ONNX. It adds support for local models through Hugging Face, enhancing compatibility and performance. Modifications are made to environment variable configurations, dependency management, and type definitions to accommodate the new "huggingface" provider. Additionally, new initialization functions for Hugging Face models are introduced, ensuring seamless integration into existing functionalities.

Changes

  • .changeset/plenty-pumpkins-fold.md: Added patch "create-llama" to facilitate local model addition via Hugging Face.
  • helpers/env-variables.ts: Enhanced getModelEnvs with two new environment variables for Hugging Face: EMBEDDING_BACKEND and EMBEDDING_TRUST_REMOTE_CODE.
  • helpers/python.ts: Updated getAdditionalDependencies to replace "fastembed" with "huggingface" dependencies and removed Python version constraints.
  • helpers/types.ts: Added "huggingface" option to the ModelProvider type.
  • templates/components/settings/python/settings.py: Introduced init_huggingface and init_huggingface_embedding functions to initialize Hugging Face models (a hedged sketch follows this list), and modified the "groq" and "anthropic" initializers to use Hugging Face embeddings.
  • helpers/providers/huggingface.ts: New file managing interactions with Hugging Face models, including user prompts for model selection.
  • helpers/providers/index.ts: Imports askHuggingfaceQuestions and handles the "huggingface" model provider in askModelConfig.
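For orientation, a hedged sketch of what the new init_huggingface function might look like, following the pattern of the other init_* helpers in settings.py (the argument names, environment variables, and defaults here are assumptions, not copied from the diff):

import os

from llama_index.core.settings import Settings


def init_huggingface():
    try:
        from llama_index.llms.huggingface import HuggingFaceLLM
    except ImportError as err:
        raise ImportError(
            "Hugging Face support is not installed. Please install it with "
            "`poetry add llama-index-llms-huggingface`"
        ) from err

    # Assumed env var, mirroring the other providers in settings.py
    model = os.getenv("MODEL", "HuggingFaceH4/zephyr-7b-alpha")
    Settings.llm = HuggingFaceLLM(model_name=model, tokenizer_name=model)
    # Pair the local LLM with the local Sentence Transformers embeddings
    # (init_huggingface_embedding is the companion function added in this PR)
    init_huggingface_embedding()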

Possibly related PRs

  • feat: use llamacloud for chat #149: The main PR introduces a patch labeled "create-llama" that is directly referenced in this PR, indicating a related focus on model handling.
  • feat: add azure model provider #184: This PR adds an Azure model provider and includes a patch titled "create-llama," suggesting a connection to the main PR's focus on enhancing model compatibility and performance.
  • bump create-llama and update event handler #260: This PR updates the create-llama package, which is directly related to the main PR's changes involving the same patch name, indicating a shared context in model handling improvements.
  • Dynamically select model for Groq #278: The introduction of a patch titled "create-llama" for dynamically selecting models for Groq suggests a connection to the main PR's focus on model handling and compatibility.
  • fix: .env not loaded on poetry run generate #348: This PR addresses loading environment variables, which is relevant to the main PR's changes involving model configurations that may rely on such variables.

🐰 In the garden where models grow,
Hugging Face brings a vibrant glow.
With new patches and settings bright,
We hop along, in pure delight!
Dependencies dance, and types align,
A joyful leap, all things combine! 🌼




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (4)
.changeset/plenty-pumpkins-fold.md (1)

5-5: Consider enhancing the changeset description.

While the description captures the main changes, it could be more detailed about the benefits and impact. Also, there's a minor grammatical issue.

Consider this improved version:

-Add local models via Hugging Face; use Sentence Transformers w. ONNX instead of FastEmbed (support for more models, etc)
+Add local models via Hugging Face; use Sentence Transformers w. ONNX instead of FastEmbed (supports ~10,000 models vs. FastEmbed's 6 models, improved performance through ONNX, configurable backend options including torch/ONNX/OpenVINO, etc.).
🧰 Tools
🪛 LanguageTool

[style] ~5-~5: In American English, abbreviations like “etc.” require a period.
Context: ... of FastEmbed (support for more models, etc)

(ETC_PERIOD)

helpers/types.ts (1)

Missing isConfigured() implementation for Hugging Face provider

The codebase shows that while "huggingface" is added as a ModelProvider type, there is no corresponding implementation of isConfigured() for this provider. Other providers like OpenAI, Mistral, and Anthropic have implementations that check for both config.apiKey and environment variables.

  • Missing implementation file for Hugging Face provider in helpers/providers/
  • Other providers implement isConfigured() to validate their configuration requirements
  • The implementation should verify Hugging Face specific configuration (API key and/or model paths)
🔗 Analysis chain

Line range hint 14-20: Verify ModelConfig.isConfigured() implementation for Hugging Face provider.

Since ModelConfig.isConfigured() determines provider configuration validity, ensure it properly handles the new "huggingface" provider's requirements, which may differ from other providers (e.g., local model paths vs API keys).

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find and verify the isConfigured() implementation for the new provider
# Expected: Implementation should handle Hugging Face specific configuration

# Search for isConfigured implementation
ast-grep --pattern 'isConfigured() {
  $$$
}'

# Search for Hugging Face specific configuration handling
rg -l "huggingface.*isConfigured|isConfigured.*huggingface" 

Length of output: 109


Script:

#!/bin/bash
# Let's try a different approach to find the implementation

# Search for ModelConfig class and its implementation
ast-grep --pattern 'class $_ implements ModelConfig {
  $$$
}'

# Also search for any type implementation
ast-grep --pattern 'type ModelConfig = {
  $$$
}'

# Look for files containing ModelConfig
rg -l "ModelConfig"

# Search for huggingface provider related code
rg -l "huggingface" --type ts

Length of output: 1000


Script:

#!/bin/bash
# Let's check the implementation in provider files and env-variables

# Check ModelConfig usage in env-variables.ts
rg -A 10 "isConfigured" helpers/env-variables.ts

# Check ModelProvider enum and huggingface related code
rg -A 5 "ModelProvider|huggingface" helpers/types.ts

# Check provider implementations
rg -A 10 "isConfigured" helpers/providers/

Length of output: 4638

templates/components/settings/python/settings.py (1)

143-161: Add input validation and improve error handling.

While the implementation is solid, consider these improvements:

  1. Add validation for the backend environment variable to ensure it's one of "torch", "onnx", or "openvino"
  2. Add a docstring describing the function's purpose and configuration options
  3. Improve exception handling by preserving the original error context

Here's a suggested implementation:

 def init_huggingface_embedding():
+    """Initialize Hugging Face embedding model with configurable backend.
+    
+    Environment Variables:
+        EMBEDDING_MODEL: Name of the model (default: all-MiniLM-L6-v2)
+        EMBEDDING_BACKEND: Backend to use (torch, onnx, or openvino) (default: onnx)
+        EMBEDDING_TRUST_REMOTE_CODE: Whether to trust remote code (default: false)
+    """
     try:
         from llama_index.embeddings.huggingface import HuggingFaceEmbedding
     except ImportError as err:
         raise ImportError(
             "Hugging Face support is not installed. Please install it with `poetry add llama-index-embeddings-huggingface`"
-        )
+        ) from err

     embedding_model = os.getenv("EMBEDDING_MODEL", "all-MiniLM-L6-v2")
     backend = os.getenv("EMBEDDING_BACKEND", "onnx")
+    if backend not in ["torch", "onnx", "openvino"]:
+        raise ValueError(f"Invalid backend: {backend}. Must be one of: torch, onnx, openvino")
     trust_remote_code = (
         os.getenv("EMBEDDING_TRUST_REMOTE_CODE", "false").lower() == "true"
     )
🧰 Tools
🪛 Ruff

147-149: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

helpers/env-variables.ts (1)

342-345: Enhance the EMBEDDING_BACKEND description

The description could be more informative about performance implications of different backends.

Suggested update:

-              "The backend to use for the Sentence Transformers embedding model, either 'torch', 'onnx', or 'openvino'. Defaults to 'onnx'.",
+              "The backend to use for the Sentence Transformers embedding model. Options:\n- 'torch': PyTorch backend (CPU/GPU)\n- 'onnx': ONNX Runtime (optimized CPU inference)\n- 'openvino': Intel OpenVINO (optimized for Intel hardware)\nDefaults to 'onnx' for best CPU performance.",
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between b8f7861 and e37ff64.

📒 Files selected for processing (5)
  • .changeset/plenty-pumpkins-fold.md (1 hunks)
  • helpers/env-variables.ts (1 hunks)
  • helpers/python.ts (2 hunks)
  • helpers/types.ts (1 hunks)
  • templates/components/settings/python/settings.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
templates/components/settings/python/settings.py (1)

Pattern templates/**: For files under the templates folder, do not report 'Missing Dependencies Detected' errors.


🪛 Ruff
templates/components/settings/python/settings.py

147-149: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


168-170: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (9)
.changeset/plenty-pumpkins-fold.md (1)

1-3: LGTM on the patch version bump!

Using a patch version bump is appropriate as these changes add new functionality in a backward-compatible manner.

helpers/types.ts (1)

12-12: LGTM! The type addition is well-integrated.

The addition of "huggingface" to the ModelProvider type union is consistent with the existing pattern and supports the PR's objective of integrating Hugging Face local models.

templates/components/settings/python/settings.py (2)

24-25: LGTM: Clean integration of Hugging Face provider.

The new case follows the established pattern and maintains consistency with the existing code structure.


188-189: LGTM: Good migration from FastEmbed to Hugging Face embeddings.

The changes consistently implement Hugging Face embeddings for providers that don't offer their own embedding solutions. The comments clearly explain the rationale.

Also applies to: 209-210

helpers/env-variables.ts (2)

339-352: Verify integration with existing provider configuration

The implementation follows the established pattern for provider-specific environment variables, which is good. However, let's verify if any other Hugging Face specific configurations might be needed.

Let's check for other potential Hugging Face related configurations:

#!/bin/bash
# Search for other potential Hugging Face configurations
rg -i "hugg(ing)?face|hf_" -A 5

339-352: Review the security implications of EMBEDDING_TRUST_REMOTE_CODE

The EMBEDDING_TRUST_REMOTE_CODE environment variable allows execution of custom code from models. This could pose security risks if not properly validated.


Consider:

  1. Adding a warning in the description about security implications
  2. Setting a default value of false for production environments
  3. Adding validation to prevent this in production unless explicitly required

Suggested description update:

-              "Whether to trust remote code for the embedding model, required for some models with custom code.",
+              "Whether to trust remote code for the embedding model. WARNING: Only enable this if you trust the model source, as it allows execution of custom code. Required for some models with custom code. Defaults to false.",
helpers/python.ts (3)

225-234: LGTM: Addition of 'huggingface' provider and dependencies

The introduction of the huggingface provider and its associated dependencies is correct and enhances the model handling capabilities.


181-182: ⚠️ Potential issue

Verify the necessity of adding 'llama-index-embeddings-huggingface' for 'groq' provider

The addition of the llama-index-embeddings-huggingface dependency under the groq provider may be unnecessary unless the groq provider requires Hugging Face embeddings. Please confirm if this dependency is required for the groq provider.


191-192: ⚠️ Potential issue

Verify the necessity of adding 'llama-index-embeddings-huggingface' for 'anthropic' provider

Similarly, adding the llama-index-embeddings-huggingface dependency under the anthropic provider may not be needed unless it explicitly relies on Hugging Face embeddings. Please ensure that this dependency is required for the anthropic provider.

templates/components/settings/python/settings.py: outdated review comment, resolved
templates/components/settings/python/settings.py: outdated review comment, resolved

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (4)
helpers/providers/huggingface.ts (1)

17-19: Add documentation and consider extensibility

The type definition could benefit from:

  1. JSDoc documentation explaining the purpose and usage
  2. Additional configuration options for future extensibility

Add documentation and prepare for future options:

+/**
+ * Parameters for configuring Hugging Face model selection prompts
+ * @property askModels - Whether to prompt for model selection
+ */
 type HuggingfaceQuestionsParams = {
   askModels: boolean;
+  // Future options might include:
+  // modelType?: 'llm' | 'embedding';
+  // quantization?: boolean;
 };
helpers/providers/index.ts (2)

81-83: Consider adding error handling for model configuration.

The implementation follows the established pattern correctly. However, consider adding error handling to gracefully handle cases where the Hugging Face model configuration fails.

 case "huggingface":
-  modelConfig = await askHuggingfaceQuestions({ askModels });
+  try {
+    modelConfig = await askHuggingfaceQuestions({ askModels });
+  } catch (error) {
+    throw new Error(`Failed to configure Hugging Face model: ${error.message}`);
+  }
   break;

Line range hint 16-20: Add documentation for framework-specific providers.

Consider adding JSDoc comments to the ModelConfigQuestionsParams type to document which providers are available for specific frameworks.

+/**
+ * Parameters for configuring model providers
+ * @property framework - The template framework. Some providers (e.g., Hugging Face)
+ *                      are only available with specific frameworks.
+ */
 export type ModelConfigQuestionsParams = {
   openAiKey?: string;
   askModels: boolean;
   framework?: TemplateFramework;
 };
templates/components/settings/python/settings.py (1)

118-136: Improve error handling and add documentation

The implementation is good, but could benefit from these improvements:

  1. Enhance error handling by propagating the original exception
  2. Add docstring documenting the environment variables

Apply this diff:

 def init_huggingface_embedding():
+    """Initialize Hugging Face embedding model.
+    
+    Environment Variables:
+        EMBEDDING_MODEL: Model name (default: all-MiniLM-L6-v2)
+        EMBEDDING_BACKEND: Backend to use (default: onnx, options: torch, onnx, openvino)
+        EMBEDDING_TRUST_REMOTE_CODE: Whether to trust remote code (default: false)
+    """
     try:
         from llama_index.embeddings.huggingface import HuggingFaceEmbedding
-    except ImportError:
+    except ImportError as err:
         raise ImportError(
             "Hugging Face support is not installed. Please install it with `poetry add llama-index-embeddings-huggingface`"
-        )
+        ) from err
🧰 Tools
🪛 Ruff

122-124: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between e37ff64 and 0fd6aec.

📒 Files selected for processing (3)
  • helpers/providers/huggingface.ts (1 hunks)
  • helpers/providers/index.ts (3 hunks)
  • templates/components/settings/python/settings.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
templates/components/settings/python/settings.py (1)

Pattern templates/**: For files under the templates folder, do not report 'Missing Dependencies Detected' errors.

🪛 eslint
helpers/providers/huggingface.ts

[error] 1-undefined: Parsing error: The keyword 'import' is reserved

🪛 Ruff
templates/components/settings/python/settings.py

122-124: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


143-145: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🔇 Additional comments (7)
helpers/providers/huggingface.ts (3)

1-4: LGTM! Clean import structure

The imports are well-organized and appropriate for the CLI interaction requirements.


5-15: 🛠️ Refactor suggestion

Consider expanding the model selection

The current implementation only includes one LLM and one embedding model, which doesn't align with the PR's objective of supporting the wide range of models available on Hugging Face (~10,000 models).

Consider:

  1. Adding more pre-vetted models to the MODELS array
  2. Adding popular embedding models to EMBEDDING_MODELS
  3. Adding documentation about model selection criteria



21-61: ⚠️ Potential issue

Enhance robustness and type safety of model configuration

The current implementation has several areas for improvement:

  1. The isConfigured() method always returns true without actually validating the configuration
  2. No validation of model compatibility or existence
  3. No type safety for model selection
  4. Limited error handling

Consider implementing these improvements:

 export async function askHuggingfaceQuestions({
   askModels,
 }: HuggingfaceQuestionsParams): Promise<ModelConfigParams> {
   const config: ModelConfigParams = {
     model: DEFAULT_MODEL,
     embeddingModel: DEFAULT_EMBEDDING_MODEL,
     dimensions: DEFAULT_DIMENSIONS,
     isConfigured(): boolean {
-      return true;
+      return Boolean(
+        this.model &&
+        this.embeddingModel &&
+        this.dimensions &&
+        MODELS.includes(this.model) &&
+        this.embeddingModel in EMBEDDING_MODELS
+      );
     },
   };

+  // Type safety for model selection
+  type ModelChoice = typeof MODELS[number];
+  type EmbeddingModelChoice = keyof typeof EMBEDDING_MODELS;

   if (askModels) {
     const { model } = await prompts(
       {
         type: "select",
         name: "model",
         message: "Which Hugging Face model would you like to use?",
         choices: MODELS.map(toChoice),
         initial: 0,
+        validate: (value: ModelChoice) => MODELS.includes(value) || 'Invalid model selection',
       },
       questionHandlers,
     );
+    if (!model) throw new Error('Model selection is required');
     config.model = model;

     const { embeddingModel } = await prompts(


helpers/providers/index.ts (2)

8-8: LGTM! Import statement follows consistent patterns.

The import follows the established naming conventions and file structure patterns used for other providers.


43-43: Verify the framework restriction for Hugging Face.

The Hugging Face provider is only available when using the FastAPI framework. This seems intentional given the local model integration requirements, but it's worth confirming.

✅ Verification successful

Let me gather more specific information about the Hugging Face integration.


Framework restriction for Hugging Face is correctly implemented

The FastAPI framework restriction for Hugging Face is intentional and properly implemented. This is because:

  • Hugging Face integration relies on Python-specific libraries (llama-index-llms-huggingface and llama-index-embeddings-huggingface)
  • The implementation in templates/components/settings/python/settings.py shows Python-specific initialization for Hugging Face models
  • The TypeScript implementation only includes Hugging Face embeddings support, not the full LLM capabilities
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify if Hugging Face integration is consistently restricted to FastAPI
# Test 1: Check for FastAPI-specific configurations or requirements
rg -A 5 "huggingface.*fastapi"

# Test 2: Check for framework-specific model configurations
ast-grep --pattern 'framework === "fastapi"'

Length of output: 1054


Script:

#!/bin/bash
# Check for Hugging Face related configurations and implementations
rg -A 5 "huggingface" -i

# Check for model provider related patterns
ast-grep --pattern 'provider === "huggingface"'

Length of output: 12731

templates/components/settings/python/settings.py (2)

24-25: LGTM: Hugging Face provider integration

The addition of the "huggingface" case follows the established pattern and is correctly implemented.


163-164: LGTM: Good integration of Hugging Face embeddings

The integration of Hugging Face embeddings for Groq and Anthropic is well implemented with clear explanatory comments.

Also applies to: 184-185

marcusschiesser (Collaborator) left a comment

@tomaarsen thanks a lot for the PR. I added support for the askModels feature; please have a look at my comments.

Comment on lines +5 to +11
const MODELS = ["HuggingFaceH4/zephyr-7b-alpha"];
type ModelData = {
dimensions: number;
};
const EMBEDDING_MODELS: Record<string, ModelData> = {
"all-MiniLM-L6-v2": { dimensions: 384 },
};
marcusschiesser (Collaborator):
@tomaarsen please add other models that run well locally. (I added a model selector that you can invoke with create-llama --ask-models; see the hedged dimension-lookup snippet below.)
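As a starting point, a hedged snippet for checking a candidate model's embedding dimension before adding it to EMBEDDING_MODELS (the candidate names are illustrative, not a claim that they run well on every machine):

from sentence_transformers import SentenceTransformer

# Illustrative candidates; ONNX exports are created on the fly if missing
for name in ["BAAI/bge-small-en-v1.5", "mixedbread-ai/mxbai-embed-large-v1"]:
    model = SentenceTransformer(name, backend="onnx")
    print(name, model.get_sentence_embedding_dimension())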

@@ -168,8 +181,8 @@ def init_anthropic():
}

Settings.llm = Anthropic(model=model_map[os.getenv("MODEL")])
-# Anthropic does not provide embeddings, so we use FastEmbed instead
-init_fastembed()
+# Anthropic does not provide embeddings, so we use open Sentence Transformer models instead
marcusschiesser (Collaborator):
@tomaarsen we actually had a PR that replaced sentence transformers (llama-index-embeddings-huggingface@0.2.0) with fastembed (see https://github.com/run-llama/create-llama/pull/162/files); the reasons were:

  1. sentence transformers was too large a dependency
  2. we had problems running it in the Docker container of RAGapp (some pytorch issue that I forgot about)

Before we replace it back: how is the situation now?

marcusschiesser (Collaborator) left a comment
Thanks @tomaarsen

@marcusschiesser marcusschiesser merged commit 0b0ed11 into run-llama:main Nov 4, 2024
1 of 46 checks passed
tomaarsen (Contributor, Author) commented

Hello @marcusschiesser,

Apologies for the delay; I had some issues on my own projects that I needed to tackle first.

I saw #162, although I didn't know the reasoning beyond "improved embedding model usage". As for the dependency size: the size is primarily due to the torch dependency, which installs with CUDA support by default on Linux, even if your device doesn't have a GPU. In that case, it's preferable to first install torch with `pip install torch --index-url https://download.pytorch.org/whl/cpu`.
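A quick hedged check that the CPU-only wheel was actually picked up (standard torch calls; the exact version string is an example):

import torch

print(torch.__version__)          # e.g. "2.5.1+cpu" for the CPU-only wheel
print(torch.cuda.is_available())  # False on the CPU-only build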

If you do have a GPU and you'd like to run models locally, you're usually best off accepting the large torch dependency.

As for the Docker issue you described: I'm not aware of any torch-specific Docker issue; in theory there shouldn't be any incompatibility between Docker and torch.

Lastly, I'll make a follow-up to #414 to add more recommended embedding models.

  • Tom Aarsen
