
Commit f1b9578

Extend shorthand support for the llama stack run command (#465)
**Summary:** Extend the shorthand `llama stack run` command so it succeeds when the config exists under DISTRIBS_BASE_DIR (i.e. `~/.llama/distributions`). For example, imagine you created a new stack using the `llama stack build` command and named it "my-awesome-llama-stack":

```
$ llama stack build
> Enter a name for your Llama Stack (e.g. my-local-stack): my-awesome-llama-stack
```

To run the stack you created, you previously had to use the long config path:

```
llama stack run ~/.llama/distributions/llamastack-my-awesome-llama-stack/my-awesome-llama-stack-run.yaml
```

With this change, you can start it using the stack name instead of the full path:

```
llama stack run my-awesome-llama-stack
```

**Test Plan:**

Verify the command fails when the stack doesn't exist:

```
python3 -m llama_stack.cli.llama stack run my-test-stack
```

Output [FAILURE]:

```
usage: llama stack run [-h] [--port PORT] [--disable-ipv6] config
llama stack run: error: File /Users/vladimirivic/.llama/distributions/llamastack-my-test-stack/my-test-stack-run.yaml does not exist. Please run `llama stack build` to generate (and optionally edit) a run.yaml file
```

Create a new stack using `llama stack build`. Name it `my-test-stack`. Then verify the command runs successfully:

```
python3 -m llama_stack.cli.llama stack run my-test-stack
```

Output [SUCCESS]:

```
Listening on ['::', '0.0.0.0']:5000
INFO:     Started server process [80146]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://['::', '0.0.0.0']:5000 (Press CTRL+C to quit)
```
1 parent 57bafd0 commit f1b9578

File tree

1 file changed: +12 −1 lines changed


llama_stack/cli/stack/run.py

Lines changed: 12 additions & 1 deletion
```diff
@@ -48,7 +48,10 @@ def _run_stack_run_cmd(self, args: argparse.Namespace) -> None:

         from llama_stack.distribution.build import ImageType
         from llama_stack.distribution.configure import parse_and_maybe_upgrade_config
-        from llama_stack.distribution.utils.config_dirs import BUILDS_BASE_DIR
+        from llama_stack.distribution.utils.config_dirs import (
+            BUILDS_BASE_DIR,
+            DISTRIBS_BASE_DIR,
+        )
         from llama_stack.distribution.utils.exec import run_with_pty

         if not args.config:
@@ -68,6 +71,14 @@ def _run_stack_run_cmd(self, args: argparse.Namespace) -> None:
                 BUILDS_BASE_DIR / ImageType.docker.value / f"{args.config}-run.yaml"
             )

+        if not config_file.exists() and not args.config.endswith(".yaml"):
+            # check if it's a build config saved to ~/.llama dir
+            config_file = Path(
+                DISTRIBS_BASE_DIR
+                / f"llamastack-{args.config}"
+                / f"{args.config}-run.yaml"
+            )
+
         if not config_file.exists():
             self.parser.error(
                 f"File {str(config_file)} does not exist. Please run `llama stack build` to generate (and optionally edit) a run.yaml file"
```
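The lookup order the diff adds can be sketched as a standalone helper. This is a simplified illustration, not the actual CLI code: `candidate_run_configs` and `resolve_run_config` are hypothetical names, the `DISTRIBS_BASE_DIR` default below is assumed for the sketch (the real constant comes from `llama_stack.distribution.utils.config_dirs`), and the real command also checks conda/docker build paths under `BUILDS_BASE_DIR` before falling back to the distributions directory.

```python
from pathlib import Path

# Assumed default for illustration; the real value is defined in
# llama_stack.distribution.utils.config_dirs.
DISTRIBS_BASE_DIR = Path.home() / ".llama" / "distributions"


def candidate_run_configs(config_arg: str) -> list[Path]:
    """Return the run.yaml paths the CLI would try, in order (sketch)."""
    # First candidate: treat the argument as a literal path.
    candidates = [Path(config_arg)]
    if not config_arg.endswith(".yaml"):
        # Shorthand: a bare stack name maps to
        # ~/.llama/distributions/llamastack-<name>/<name>-run.yaml
        candidates.append(
            DISTRIBS_BASE_DIR / f"llamastack-{config_arg}" / f"{config_arg}-run.yaml"
        )
    return candidates


def resolve_run_config(config_arg: str) -> Path:
    """Pick the first candidate that exists on disk, or raise."""
    candidates = candidate_run_configs(config_arg)
    for candidate in candidates:
        if candidate.exists():
            return candidate
    raise FileNotFoundError(
        f"File {candidates[-1]} does not exist. "
        "Please run `llama stack build` to generate a run.yaml file"
    )
```

Note the guard on `.endswith(".yaml")`: an explicit path argument is never rewritten, so only bare stack names get the shorthand expansion.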
