Merged
Changes from 2 commits
3 changes: 2 additions & 1 deletion .github/workflows/e2e_tests.yaml
Original file line number Diff line number Diff line change
Expand Up @@ -169,6 +169,7 @@ jobs:
checkpoint_format: huggingface
device: cpu
distributed_backend: null
dpo_output_dir: "."
provider_id: huggingface
provider_type: inline::huggingface
safety:
Expand Down Expand Up @@ -262,4 +263,4 @@ jobs:
uv sync

echo "Running comprehensive e2e test suite..."
make test-e2e
make test-e2e
4 changes: 2 additions & 2 deletions pyproject.toml
Expand Up @@ -25,8 +25,8 @@ dependencies = [
"fastapi>=0.115.12",
"uvicorn>=0.34.3",
"kubernetes>=30.1.0",
"llama-stack==0.2.17",
"llama-stack-client==0.2.17",
"llama-stack==0.2.18",
"llama-stack-client==0.2.18",
"rich>=14.0.0",
"cachetools>=6.1.0",
"prometheus-client>=0.22.1",
Expand Down
1 change: 1 addition & 0 deletions run.yaml
Expand Up @@ -69,6 +69,7 @@ providers:
checkpoint_format: huggingface
device: cpu
distributed_backend: null
dpo_output_dir: "."
provider_id: huggingface
provider_type: inline::huggingface
safety:
Expand Down
2 changes: 1 addition & 1 deletion src/constants.py
Expand Up @@ -2,7 +2,7 @@

# Minimal and maximal supported Llama Stack version
MINIMAL_SUPPORTED_LLAMA_STACK_VERSION = "0.2.17"
MAXIMAL_SUPPORTED_LLAMA_STACK_VERSION = "0.2.17"
MAXIMAL_SUPPORTED_LLAMA_STACK_VERSION = "0.2.18"

UNABLE_TO_PROCESS_RESPONSE = "Unable to process this request"

Expand Down
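With the maximal bound raised to 0.2.18 while the minimal bound stays at 0.2.17, the constants in src/constants.py now describe an inclusive version range rather than a single pin. A minimal sketch of how such a range check could work, assuming dotted numeric versions; the `parse` and `is_supported` helpers are illustrative assumptions, not the repository's actual code:

```python
# Version bounds as defined in src/constants.py.
MINIMAL_SUPPORTED_LLAMA_STACK_VERSION = "0.2.17"
MAXIMAL_SUPPORTED_LLAMA_STACK_VERSION = "0.2.18"


def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '0.2.18' into (0, 2, 18)
    so versions compare correctly as tuples of integers."""
    return tuple(int(part) for part in version.split("."))


def is_supported(version: str) -> bool:
    """True if `version` falls within the inclusive supported range."""
    return (
        parse(MINIMAL_SUPPORTED_LLAMA_STACK_VERSION)
        <= parse(version)
        <= parse(MAXIMAL_SUPPORTED_LLAMA_STACK_VERSION)
    )
```

Comparing integer tuples rather than raw strings avoids the classic pitfall where `"0.2.9" > "0.2.18"` lexicographically.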
16 changes: 8 additions & 8 deletions uv.lock

Some generated files are not rendered by default.
