integration: Add Hugging Face local models; ST for embeddings #402
Conversation
🦋 Changeset detected

Latest commit: 42632fb

The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.

Not sure what this means? Click here to learn what changesets are. Click here if you're a maintainer who wants to add another changeset to this PR.
Caution: Review failed. The pull request is closed.

Walkthrough

This pull request introduces significant changes to the model handling architecture by replacing the existing FastEmbed implementation with Sentence Transformers using ONNX. It adds support for local models through Hugging Face, enhancing compatibility and performance. Modifications are made to environment variable configurations, dependency management, and type definitions to accommodate the new "huggingface" provider. Additionally, new initialization functions for Hugging Face models are introduced, ensuring seamless integration into existing functionalities.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (4)
.changeset/plenty-pumpkins-fold.md (1)

Line 5: Consider enhancing the changeset description. While the description captures the main changes, it could be more detailed about the benefits and impact. Also, there's a minor grammatical issue.
Consider this improved version:
```diff
- Add local models via Hugging Face; use Sentence Transformers w. ONNX instead of FastEmbed (support for more models, etc)
+ Add local models via Hugging Face; use Sentence Transformers w. ONNX instead of FastEmbed (supports ~10,000 models vs. FastEmbed's 6 models, improved performance through ONNX, configurable backend options including torch/ONNX/OpenVINO, etc.).
```

🧰 Tools
🪛 LanguageTool

[style] ~5-~5: In American English, abbreviations like "etc." require a period.
Context: ... of FastEmbed (support for more models, etc) (ETC_PERIOD)
helpers/types.ts (1)
Missing isConfigured() implementation for Hugging Face provider

The codebase shows that while "huggingface" is added as a ModelProvider type, there is no corresponding implementation of `isConfigured()` for this provider. Other providers like OpenAI, Mistral, and Anthropic have implementations that check for both config.apiKey and environment variables.

- Missing implementation file for the Hugging Face provider in `helpers/providers/`
- Other providers implement `isConfigured()` to validate their configuration requirements
- The implementation should verify Hugging Face specific configuration (API key and/or model paths)
🔗 Analysis chain
Line range hint 14-20: Verify the ModelConfig.isConfigured() implementation for the Hugging Face provider. Since ModelConfig.isConfigured() determines provider configuration validity, ensure it properly handles the new "huggingface" provider's requirements, which may differ from other providers (e.g., local model paths vs. API keys).
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Find and verify the isConfigured() implementation for the new provider
# Expected: Implementation should handle Hugging Face specific configuration

# Search for isConfigured implementation
ast-grep --pattern 'isConfigured() { $$$ }'

# Search for Hugging Face specific configuration handling
rg -l "huggingface.*isConfigured|isConfigured.*huggingface"
```

Length of output: 109
Script:
```bash
#!/bin/bash
# Let's try a different approach to find the implementation

# Search for ModelConfig class and its implementation
ast-grep --pattern 'class $_ implements ModelConfig { $$$ }'

# Also search for any type implementation
ast-grep --pattern 'type ModelConfig = { $$$ }'

# Look for files containing ModelConfig
rg -l "ModelConfig"

# Search for huggingface provider related code
rg -l "huggingface" --type ts
```

Length of output: 1000
Script:
```bash
#!/bin/bash
# Let's check the implementation in provider files and env-variables

# Check ModelConfig usage in env-variables.ts
rg -A 10 "isConfigured" helpers/env-variables.ts

# Check ModelProvider enum and huggingface related code
rg -A 5 "ModelProvider|huggingface" helpers/types.ts

# Check provider implementations
rg -A 10 "isConfigured" helpers/providers/
```

Length of output: 4638
templates/components/settings/python/settings.py (1)
Lines 143-161: Add input validation and improve error handling. While the implementation is solid, consider these improvements:

- Add validation for the `backend` environment variable to ensure it's one of "torch", "onnx", or "openvino"
- Add a docstring describing the function's purpose and configuration options
- Improve exception handling by preserving the original error context
Here's a suggested implementation:
```diff
 def init_huggingface_embedding():
+    """Initialize Hugging Face embedding model with configurable backend.
+
+    Environment Variables:
+        EMBEDDING_MODEL: Name of the model (default: all-MiniLM-L6-v2)
+        EMBEDDING_BACKEND: Backend to use (torch, onnx, or openvino) (default: onnx)
+        EMBEDDING_TRUST_REMOTE_CODE: Whether to trust remote code (default: false)
+    """
     try:
         from llama_index.embeddings.huggingface import HuggingFaceEmbedding
-    except ImportError:
+    except ImportError as err:
         raise ImportError(
             "Hugging Face support is not installed. Please install it with `poetry add llama-index-embeddings-huggingface`"
-        )
+        ) from err
     embedding_model = os.getenv("EMBEDDING_MODEL", "all-MiniLM-L6-v2")
     backend = os.getenv("EMBEDDING_BACKEND", "onnx")
+    if backend not in ["torch", "onnx", "openvino"]:
+        raise ValueError(f"Invalid backend: {backend}. Must be one of: torch, onnx, openvino")
     trust_remote_code = (
         os.getenv("EMBEDDING_TRUST_REMOTE_CODE", "false").lower() == "true"
     )
```

🧰 Tools
🪛 Ruff

147-149: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
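As a follow-up to the suggestion above, here is a standalone sketch of what the reviewed initializer boils down to. This is not the PR's exact code; it assumes `llama-index-embeddings-huggingface` is installed and that the release in use forwards the Sentence Transformers `backend` argument.

```python
import os

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Read the same environment variables the reviewed function validates
model_name = os.getenv("EMBEDDING_MODEL", "all-MiniLM-L6-v2")
backend = os.getenv("EMBEDDING_BACKEND", "onnx")  # one of: torch, onnx, openvino
trust_remote_code = (
    os.getenv("EMBEDDING_TRUST_REMOTE_CODE", "false").lower() == "true"
)

embed_model = HuggingFaceEmbedding(
    model_name=model_name,
    backend=backend,  # assumption: this argument is forwarded to Sentence Transformers
    trust_remote_code=trust_remote_code,
)
```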
helpers/env-variables.ts (1)
Lines 342-345: Enhance the `EMBEDDING_BACKEND` description. The description could be more informative about the performance implications of different backends.
Suggested update:
- "The backend to use for the Sentence Transformers embedding model, either 'torch', 'onnx', or 'openvino'. Defaults to 'onnx'.", + "The backend to use for the Sentence Transformers embedding model. Options:\n- 'torch': PyTorch backend (CPU/GPU)\n- 'onnx': ONNX Runtime (optimized CPU inference)\n- 'openvino': Intel OpenVINO (optimized for Intel hardware)\nDefaults to 'onnx' for best CPU performance.",
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (5)
- .changeset/plenty-pumpkins-fold.md (1 hunks)
- helpers/env-variables.ts (1 hunks)
- helpers/python.ts (2 hunks)
- helpers/types.ts (1 hunks)
- templates/components/settings/python/settings.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
templates/components/settings/python/settings.py (1)
Pattern `templates/**`: For files under the `templates` folder, do not report 'Missing Dependencies Detected' errors.
🪛 LanguageTool
.changeset/plenty-pumpkins-fold.md

[style] ~5-~5: In American English, abbreviations like "etc." require a period.
Context: ... of FastEmbed (support for more models, etc) (ETC_PERIOD)
🪛 Ruff
templates/components/settings/python/settings.py

147-149: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)

168-170: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
🔇 Additional comments (9)
.changeset/plenty-pumpkins-fold.md (1)
Lines 1-3: LGTM on the patch version bump! Using a patch version bump is appropriate as these changes add new functionality in a backward-compatible manner.
helpers/types.ts (1)
Line 12: LGTM! The type addition is well-integrated. The addition of "huggingface" to the ModelProvider type union is consistent with the existing pattern and supports the PR's objective of integrating Hugging Face local models.
templates/components/settings/python/settings.py (2)
Lines 24-25: LGTM: Clean integration of the Hugging Face provider. The new case follows the established pattern and maintains consistency with the existing code structure.
Lines 188-189: LGTM: Good migration from FastEmbed to Hugging Face embeddings. The changes consistently implement Hugging Face embeddings for providers that don't offer their own embedding solutions. The comments clearly explain the rationale.

Also applies to: 209-210
helpers/env-variables.ts (2)
Lines 339-352: Verify integration with existing provider configuration. The implementation follows the established pattern for provider-specific environment variables, which is good. However, let's verify whether any other Hugging Face specific configurations might be needed.

Let's check for other potential Hugging Face related configurations:

```bash
#!/bin/bash
# Search for other potential Hugging Face configurations
rg -i "hugg(ing)?face|hf_" -A 5
```
Lines 339-352: Review the security implications of `EMBEDDING_TRUST_REMOTE_CODE`. The `EMBEDDING_TRUST_REMOTE_CODE` environment variable allows execution of custom code from models. This could pose security risks if not properly validated. Let's check if there are any security measures in place.

Consider:

- Adding a warning in the description about security implications
- Setting a default value of `false` for production environments
- Adding validation to prevent this in production unless explicitly required

Suggested description update:

```diff
- "Whether to trust remote code for the embedding model, required for some models with custom code.",
+ "Whether to trust remote code for the embedding model. WARNING: Only enable this if you trust the model source, as it allows execution of custom code. Required for some models with custom code. Defaults to false.",
```
helpers/python.ts (3)

Lines 225-234: LGTM: Addition of the 'huggingface' provider and dependencies. The introduction of the `huggingface` provider and its associated dependencies is correct and enhances the model handling capabilities.
Lines 181-182: ⚠️ Potential issue. Verify the necessity of adding 'llama-index-embeddings-huggingface' for the 'groq' provider. The addition of the `llama-index-embeddings-huggingface` dependency under the `groq` provider may be unnecessary unless the `groq` provider requires Hugging Face embeddings. Please confirm whether this dependency is required for the `groq` provider.
Lines 191-192: ⚠️ Potential issue. Verify the necessity of adding 'llama-index-embeddings-huggingface' for the 'anthropic' provider. Similarly, adding the `llama-index-embeddings-huggingface` dependency under the `anthropic` provider may not be needed unless it explicitly relies on Hugging Face embeddings. Please ensure that this dependency is required for the `anthropic` provider.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (4)
helpers/providers/huggingface.ts (1)
Lines 17-19: Add documentation and consider extensibility. The type definition could benefit from:

- JSDoc documentation explaining the purpose and usage
- Additional configuration options for future extensibility
Add documentation and prepare for future options:
```diff
+/**
+ * Parameters for configuring Hugging Face model selection prompts
+ * @property askModels - Whether to prompt for model selection
+ */
 type HuggingfaceQuestionsParams = {
   askModels: boolean;
+  // Future options might include:
+  // modelType?: 'llm' | 'embedding';
+  // quantization?: boolean;
 };
```
helpers/providers/index.ts (2)
Lines 81-83: Consider adding error handling for model configuration. The implementation follows the established pattern correctly. However, consider adding error handling to gracefully handle cases where the Hugging Face model configuration fails.
case "huggingface":
- modelConfig = await askHuggingfaceQuestions({ askModels });
+ try {
+ modelConfig = await askHuggingfaceQuestions({ askModels });
+ } catch (error) {
+ throw new Error(`Failed to configure Hugging Face model: ${error.message}`);
+ }
break;
Line range hint 16-20: Add documentation for framework-specific providers. Consider adding JSDoc comments to the `ModelConfigQuestionsParams` type to document which providers are available for specific frameworks.
```diff
+/**
+ * Parameters for configuring model providers
+ * @property framework - The template framework. Some providers (e.g., Hugging Face)
+ * are only available with specific frameworks.
+ */
 export type ModelConfigQuestionsParams = {
   openAiKey?: string;
   askModels: boolean;
   framework?: TemplateFramework;
 };
```
templates/components/settings/python/settings.py (1)
Lines 118-136: Improve error handling and add documentation. The implementation is good, but could benefit from these improvements:

- Enhance error handling by propagating the original exception
- Add a docstring documenting the environment variables
Apply this diff:
```diff
 def init_huggingface_embedding():
+    """Initialize Hugging Face embedding model.
+
+    Environment Variables:
+        EMBEDDING_MODEL: Model name (default: all-MiniLM-L6-v2)
+        EMBEDDING_BACKEND: Backend to use (default: onnx, options: torch, onnx, openvino)
+        EMBEDDING_TRUST_REMOTE_CODE: Whether to trust remote code (default: false)
+    """
     try:
         from llama_index.embeddings.huggingface import HuggingFaceEmbedding
-    except ImportError:
+    except ImportError as err:
         raise ImportError(
             "Hugging Face support is not installed. Please install it with `poetry add llama-index-embeddings-huggingface`"
-        )
+        ) from err
```
🧰 Tools

🪛 Ruff

122-124: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- helpers/providers/huggingface.ts (1 hunks)
- helpers/providers/index.ts (3 hunks)
- templates/components/settings/python/settings.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
templates/components/settings/python/settings.py (1)

Pattern `templates/**`: For files under the `templates` folder, do not report 'Missing Dependencies Detected' errors.
🪛 eslint
helpers/providers/huggingface.ts
[error] 1-undefined: Parsing error: The keyword 'import' is reserved
🪛 Ruff
templates/components/settings/python/settings.py

122-124: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)

143-145: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
🔇 Additional comments (7)
helpers/providers/huggingface.ts (3)
Lines 1-4: LGTM! Clean import structure. The imports are well-organized and appropriate for the CLI interaction requirements.
🧰 Tools
🪛 eslint
[error] 1-undefined: Parsing error: The keyword 'import' is reserved
Lines 5-15: 🛠️ Refactor suggestion: Consider expanding the model selection. The current implementation only includes one LLM and one embedding model, which doesn't align with the PR's objective of supporting the wide range of models available on Hugging Face (~10,000 models).
Consider:

- Adding more pre-vetted models to the `MODELS` array
- Adding popular embedding models to `EMBEDDING_MODELS`
- Adding documentation about model selection criteria
Let's verify the model compatibility:
Lines 21-61: Enhance the robustness and type safety of model configuration. The current implementation has several areas for improvement:

- The `isConfigured()` method always returns `true` without actually validating the configuration
- No validation of model compatibility or existence
- No type safety for model selection
- Limited error handling
Consider implementing these improvements:
```diff
 export async function askHuggingfaceQuestions({
   askModels,
 }: HuggingfaceQuestionsParams): Promise<ModelConfigParams> {
   const config: ModelConfigParams = {
     model: DEFAULT_MODEL,
     embeddingModel: DEFAULT_EMBEDDING_MODEL,
     dimensions: DEFAULT_DIMENSIONS,
     isConfigured(): boolean {
-      return true;
+      return Boolean(
+        this.model &&
+          this.embeddingModel &&
+          this.dimensions &&
+          MODELS.includes(this.model) &&
+          this.embeddingModel in EMBEDDING_MODELS
+      );
     },
   };
+  // Type safety for model selection
+  type ModelChoice = typeof MODELS[number];
+  type EmbeddingModelChoice = keyof typeof EMBEDDING_MODELS;
   if (askModels) {
     const { model } = await prompts(
       {
         type: "select",
         name: "model",
         message: "Which Hugging Face model would you like to use?",
         choices: MODELS.map(toChoice),
         initial: 0,
+        validate: (value: ModelChoice) => MODELS.includes(value) || 'Invalid model selection',
       },
       questionHandlers,
     );
+    if (!model) throw new Error('Model selection is required');
     config.model = model;
     const { embeddingModel } = await prompts(
```
Let's verify the model configuration:
helpers/providers/index.ts (2)
Line 8: LGTM! The import statement follows consistent patterns. The import follows the established naming conventions and file structure patterns used for other providers.
Line 43: Verify the framework restriction for Hugging Face. The Hugging Face provider is only available when using the FastAPI framework. This seems intentional given the local model integration requirements, but it's worth confirming.
✅ Verification successful
Let me gather more specific information about the Hugging Face integration.
Framework restriction for Hugging Face is correctly implemented. The FastAPI framework restriction for Hugging Face is intentional and properly implemented. This is because:

- Hugging Face integration relies on Python-specific libraries (`llama-index-llms-huggingface` and `llama-index-embeddings-huggingface`)
- The implementation in `templates/components/settings/python/settings.py` shows Python-specific initialization for Hugging Face models
- The TypeScript implementation only includes Hugging Face embeddings support, not the full LLM capabilities
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify if Hugging Face integration is consistently restricted to FastAPI

# Test 1: Check for FastAPI-specific configurations or requirements
rg -A 5 "huggingface.*fastapi"

# Test 2: Check for framework-specific model configurations
ast-grep --pattern 'framework === "fastapi"'
```

Length of output: 1054
Script:

```bash
#!/bin/bash
# Check for Hugging Face related configurations and implementations
rg -A 5 "huggingface" -i

# Check for model provider related patterns
ast-grep --pattern 'provider === "huggingface"'
```

Length of output: 12731
templates/components/settings/python/settings.py (2)
Lines 24-25: LGTM: Hugging Face provider integration. The addition of the "huggingface" case follows the established pattern and is correctly implemented.
Lines 163-164: LGTM: Good integration of Hugging Face embeddings. The integration of Hugging Face embeddings for Groq and Anthropic is well implemented with clear explanatory comments.

Also applies to: 184-185
@tomaarsen thanks a lot for the PR - I added support for the askModels feature; please have a look at my comments.
```typescript
const MODELS = ["HuggingFaceH4/zephyr-7b-alpha"];

type ModelData = {
  dimensions: number;
};

const EMBEDDING_MODELS: Record<string, ModelData> = {
  "all-MiniLM-L6-v2": { dimensions: 384 },
};
```
@tomaarsen please add other models that are running well locally. (I added a model selector that you can call with `create-llama --ask-models`)
```diff
@@ -168,8 +181,8 @@ def init_anthropic():
     }

     Settings.llm = Anthropic(model=model_map[os.getenv("MODEL")])
-    # Anthropic does not provide embeddings, so we use FastEmbed instead
-    init_fastembed()
+    # Anthropic does not provide embeddings, so we use open Sentence Transformer models instead
```
@tomaarsen we actually had a PR that was replacing sentence transformers (`llama-index-embeddings-huggingface@0.2.0`) with fastembed (see https://github.com/run-llama/create-llama/pull/162/files) - the reasons have been:

- sentence transformers was a too large dependency
- we had problems running it in the docker container of RAGapp (some pytorch issue that I forgot about)

Before we replace it back - how is the situation now?
Thanks @tomaarsen
Hello @marcusschiesser, Apologies for my delay, I had some issues on my projects that I needed to tackle first. I saw #162, although I didn't know the reasoning other than "improved embedding model usage".

As for the dependency size: the size is primarily due to the `torch` dependency. If you do have a GPU & you'd like to run locally, you're usually best off accepting the large `torch` dependency.

As for the docker issue that you described: I'm not aware of any such issues at the moment.

Lastly, I'll make a follow-up to #414 to add more recommended embedding models.
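For illustration, a small sketch of the CPU/GPU trade-off described above, assuming sentence-transformers >= 3.2 with the optional backend extras installed (the model name is just an example):

```python
from sentence_transformers import SentenceTransformer

# CPU-only deployments: ONNX (or OpenVINO) gives fast inference without a GPU
cpu_model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")

# GPU deployments: the default torch backend is usually the better fit
gpu_model = SentenceTransformer("all-MiniLM-L6-v2", backend="torch", device="cuda")
```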
Hello!
Pull Request overview
Details
I did some simple testing locally, which worked. I've also added some environment variables for the embedding model, for example to specify the `backend` as torch, ONNX or OpenVINO. Sentence Transformers with ONNX is equivalent to FastEmbed, except the latter requires that the model was already pre-exported. This means they only support ~6 models, whereas Sentence Transformers supports almost 10k: https://huggingface.co/models?library=sentence-transformers.

cc @marcusschiesser as we discussed this on LinkedIn.
Summary by CodeRabbit

Release Notes

New Features
Bug Fixes
Documentation

- Expanded `ModelProvider` options to include "huggingface."

These changes collectively improve the flexibility and functionality of the application for users leveraging AI model integrations.