# LCORE-1335: Parametrized RHAIIS port and added library mode tests #1164
```diff
@@ -22,14 +22,10 @@ providers:
   - provider_id: vllm
     provider_type: remote::vllm
     config:
-      url: http://${env.RHAIIS_URL}:8000/v1/
+      base_url: http://${env.RHAIIS_URL}:${env.RHAIIS_PORT}/v1/
```
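The hunk above replaces the hard-coded port 8000 with a second environment variable. A minimal shell sketch of how the two variables compose into the provider's `base_url` (the hostname value below is hypothetical, and the `${env.*}` substitution itself is performed by llama-stack when it loads the run config):

```shell
# Hypothetical values; RHAIIS_URL and RHAIIS_PORT are the variables
# referenced by the ${env.*} placeholders in the run config above.
export RHAIIS_URL=inference.example.com
export RHAIIS_PORT=8000

# After substitution, the vllm provider's base_url resolves to:
resolved="http://${RHAIIS_URL}:${RHAIIS_PORT}/v1/"
printf '%s\n' "$resolved"
```

With the values above this prints `http://inference.example.com:8000/v1/`, matching the shape of the old hard-coded URL while leaving the port deployment-specific.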
Contributor

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check how base_url is used in remote::vllm provider configuration across the codebase
rg -n "base_url" --type yaml -C2 --glob '**/run-*.yaml'
echo "---"
# Check if other config files use 'url' or 'base_url' for vllm
rg -n "provider_type: remote::vllm" -A5 --type yaml
```

Repository: lightspeed-core/lightspeed-stack

🏁 Script executed:

```shell
#!/bin/bash
# Search for vllm provider implementation or config schema
fd -type f -name "*.py" | xargs rg -l "remote::vllm" 2>/dev/null | head -10
echo "---"
# Also search for vllm config handling
rg -n "vllm" --type py -B2 -A5 | grep -E "(base_url|url|config)" | head -30
```

🏁 Script executed:

```shell
#!/bin/bash
# Search for vllm config handling in Python files
rg -n "url|base_url" --type py -B3 -A3 | grep -i vllm -B5 -A5 | head -50
echo "---"
# Look for config class definitions or dataclass patterns
rg -n "class.*vllm|@dataclass" --type py -i | head -20
echo "---"
# Search for any schema or type definitions
fd -type f -name "*.py" -path "*vllm*" 2>/dev/null
echo "---"
# Check for llama-stack requirements or dependencies
cat requirements.txt 2>/dev/null | grep -i llama
cat pyproject.toml 2>/dev/null | grep -i llama
```

🏁 Script executed:

```shell
#!/bin/bash
# Search for any documentation about vllm provider config in the repo
rg -n "vllm" -i docs/ --type md -B3 -A3 2>/dev/null | head -50
echo "---"
# Check for any schema or type hints in the repo
rg -n "url.*vllm|vllm.*url" -i --type py -B2 -A2 | head -40
```

🌐 Web query:

💡 Result: In llama-stack 0.4.3, the
```diff
       api_token: ${env.RHAIIS_API_KEY}
       tls_verify: false
       max_tokens: 2048
   - provider_id: openai
     provider_type: remote::openai
     config:
       api_key: ${env.OPENAI_API_KEY}
   - config: {}
     provider_id: sentence-transformers
     provider_type: inline::sentence-transformers
```
```diff
@@ -54,6 +50,9 @@ providers:
   - config: {}
     provider_id: rag-runtime
     provider_type: inline::rag-runtime
+  - config: {} # Enable MCP (Model Context Protocol) support
+    provider_id: model-context-protocol
+    provider_type: remote::model-context-protocol
   vector_io:
   - config:
       persistence:
```
```diff
@@ -143,7 +142,7 @@ registered_resources:
   shields:
   - shield_id: llama-guard
     provider_id: llama-guard
-    provider_shield_id: openai/gpt-4o-mini
+    provider_shield_id: vllm/${env.RHAIIS_MODEL}
   vector_stores:
   - embedding_dimension: 768
     embedding_model: sentence-transformers/all-mpnet-base-v2
```
Quote the URL in the connectivity check to prevent shell word-splitting.

If `RHAIIS_URL` includes a protocol prefix (e.g., `https://host`) or any special characters, the unquoted expansion could cause unexpected behavior.
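A minimal sketch of the quoting the comment asks for. The variable values and the `/v1/models` endpoint are hypothetical stand-ins for whatever the connectivity check actually probes; the point is that the URL expansion is always wrapped in double quotes:

```shell
# Hypothetical values for illustration only.
RHAIIS_URL="http://inference.example.com"
RHAIIS_PORT="8000"

# Build the endpoint once, then always expand it inside double quotes so
# characters like '?' or '&' in a query string are not glob-expanded or
# word-split by the shell before reaching curl.
endpoint="${RHAIIS_URL}:${RHAIIS_PORT}/v1/models"

# Quoted: the whole URL is passed to curl as a single argument.
# curl --fail --silent --max-time 5 "$endpoint"
printf '%s\n' "$endpoint"
```

The curl invocation is commented out here because the host is a placeholder; in the real check, only the quoting of `"$endpoint"` changes relative to the unquoted original.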