Conversation

@eyevana eyevana commented Aug 21, 2025

Updating llama-stack version

Description

The Getting Started doc is out of sync with the project dependencies, which results in errors when installing llama-stack 0.2.16.

https://docs.google.com/document/d/171PHjne9r0yWvEAV8pBnPJQUt7ouHYF-43eP-eGZgQk/edit?tab=t.0
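
For context, the corrected install command in docs/getting_started.md now pins the new release (the exact locations updated are listed in the review summary further down):

```bash
# Updated pin from this PR; the guide previously referenced llama-stack==0.2.16.
uv add "llama-stack==0.2.17"
```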

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Documentation
    • Updated the Getting Started guide to reference the latest package version (0.2.17) in the installation command for a smoother setup.
    • Corrected the final code example’s formatting (properly closed code fence and trailing newline), ensuring accurate rendering and easier copy-paste for users.
    • Minor clarity improvements to make the quick-start flow more reliable without altering any functionality.

Updating llama-stack version

coderabbitai bot commented Aug 21, 2025

Walkthrough

Updates docs/getting_started.md to bump llama-stack install version from 0.2.16 to 0.2.17 and fixes the final code block’s closing fence/newline.

Changes

Cohort / File(s): Docs: Getting Started (docs/getting_started.md)
Summary: Updated install pin to llama-stack==0.2.17; fixed the final code block with a proper closing fence and trailing newline.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Possibly related PRs

Suggested reviewers

  • tisnik

Poem

I nudge the guide with gentle paws,
A hop from .16 to .17—no flaws.
Code fences close, the lines align,
Trails end neat, like burrows fine.
Thump-thump! Docs are crisp and clean—
A rabbit’s joy in versions seen. 🐇✨

@tisnik tisnik left a comment

LGTM, thank you

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
docs/getting_started.md (1)

12-15: Confirm Python version compatibility before listing 3.13

Based on a quick PyPI check (see your script output), torch 2.8.0, faiss-cpu 1.9.0.post1, and triton 3.4.0 all publish cp313 wheels, whereas peft, datasets, litellm, mcp, and opentelemetry-sdk do not yet provide Python 3.13 wheels. To avoid early-release pitfalls, please update docs/getting_started.md as follows:

• File: docs/getting_started.md, lines 12–15
• Change the Python line to explicitly recommend 3.12 and call out 3.13 support status. For example:

- 1. Python 3.12 or 3.13
+ 1. Python 3.12 (recommended for full compatibility)

• Add a note below, e.g.:

> Note: Python 3.13 wheels are currently available for torch, faiss-cpu, and triton only; other dependencies may require source builds.

Let me know if you’d like help drafting a full compatibility matrix.
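
One lightweight way to reproduce that wheel check (a sketch only, not the exact script referenced above) is to query PyPI's JSON API with the curl and jq tools the guide already requires:

```bash
# Hypothetical helper: count cp313 wheels in the latest release of each dependency.
for pkg in torch faiss-cpu triton peft datasets litellm mcp opentelemetry-sdk; do
  count=$(curl -s "https://pypi.org/pypi/${pkg}/json" \
    | jq '[.urls[].filename | select(test("cp313"))] | length')
  echo "${pkg}: ${count} cp313 wheel(s) in latest release"
done
```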

🧹 Nitpick comments (6)
docs/getting_started.md (6)

18-20: Prefer the official uv installer or pipx to avoid PATH issues; add cross-platform package notes.

pip install --user uv often leaves the uv binary off PATH. Recommend the upstream installer and alternatives, and broaden the OS guidance for curl/jq.

Apply:

-1. `pip install --user uv`
-1. `sudo dnf install curl jq`
+1. Install uv (recommended):
+   - Linux/macOS: `curl -LsSf https://astral.sh/uv/install.sh | sh`
+   - Or with pipx: `pipx install uv`
+1. Install curl and jq:
+   - Fedora/RHEL: `sudo dnf install curl jq`
+   - Debian/Ubuntu: `sudo apt-get update && sudo apt-get install -y curl jq`
+   - macOS (Homebrew): `brew install curl jq`

23-25: Make the “Clone LCS repository” step actionable (clone + cd).

As written, it’s easy to miss changing directories before running uv add.

Apply:

-1. Clone LCS repository
-1. Add and install all required dependencies
+1. Clone LCS repository and enter it
+   ```bash
+   git clone https://github.com/lightspeed-core/lightspeed-stack.git
+   cd lightspeed-stack
+   ```
+1. Add and install all required dependencies

26-45: Lock and sync for reproducibility after adding dependencies.

Without a lock step, fresh environments may resolve different versions than your example output.

Apply:

     "trl>=0.18.2"
     ```
+1. Resolve and install the lockfile for reproducibility
+   ```bash
+   uv lock && uv sync --frozen
+   ```

47-112: The long “Installed packages” transcript is highly volatile—trim or annotate to prevent drift.

This list will rot quickly and confuse readers when counts/versions differ by platform/time.

Apply:

-    ```text
-    Resolved 195 packages in 1.19s
-          Built lightspeed-stack @ file:///tmp/ramdisk/lightspeed-stack
-    Prepared 12 packages in 1.72s
-    Installed 60 packages in 4.47s
-     + accelerate==1.9.0
-     + autoevals==0.0.129
-     + blobfile==3.0.0
-     ...
-     + trl==0.20.0
-     + wrapt==1.17.2
-     + xxhash==3.5.0
-    ```
+    ```text
+    Installed packages (abbreviated):
+      + llama-stack==0.2.17
+      + lightspeed-stack==<local path>
+      + torch==2.7.1
+      + fastapi==…
+      + opentelemetry-sdk==…
+      … (output varies by platform and time)
+    ```

229-235: Clarify server.port=8321 in run.yaml when using “library mode.”

The intro claims “only one port (e.g., 8080) will be accessible,” but this run.yaml also sets a Llama Stack server port (8321). In library mode, that server block is typically unused; consider commenting this or adding a note to avoid confusion.

Proposed tweak:

 server:
-  auth: null
-  host: null
-  port: 8321
+  # In library mode this section is not used; LCS serves on 8080.
+  auth: null
+  host: null
+  port: 8321

26-45: Optional: consider pinning a few historically fragile deps.

Torch, Triton, FAISS, and TRL often have platform/ABI constraints. Pinning to known-good versions in pyproject and relying on uv sync --frozen will reduce “works on my machine” drift.

I can open a follow-up PR to move these from doc commands into pyproject dependencies with explicit pins and platform markers.
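
As a rough illustration of that approach (the version numbers below are placeholders drawn from the transcripts above, not vetted known-good pins):

```bash
# Sketch only: pin the historically fragile packages explicitly, then freeze the resolution.
uv add "torch==2.7.1" "faiss-cpu==1.9.0.post1" "trl>=0.18.2"
uv lock                # re-resolve with the new pins
uv sync --frozen       # install exactly what the lockfile records
```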

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between fa8a413 and ba21e90.

📒 Files selected for processing (1)
  • docs/getting_started.md (2 hunks)
🔇 Additional comments (3)
docs/getting_started.md (3)

327-341: Fixed closing code fence—nice catch.

The final JSON example now renders correctly.


274-281: Run target exists in Makefile – no fallback needed

The Makefile already defines a run target (see line 8: run: ## Run the service locally), so the additional fallback command is unnecessary.
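
In other words, assuming the target stays in place, the documented step reduces to:

```bash
make run   # uses the existing `run` target to start the service locally
```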


26-29: llama-stack 0.2.17 confirmed and repo-wide pins updated
Version 0.2.17 exists on PyPI, and there are no remaining references to 0.2.16. All llama-stack pins (and llama-stack-client) in the project have been bumped to 0.2.17, so this change is ready to merge.

• pyproject.toml:
– "llama-stack==0.2.17"
– "llama-stack-client==0.2.17"
• examples/pyproject.llamastack.toml:
– "llama-stack==0.2.17"
• docs/getting_started.md:
– uv add "llama-stack==0.2.17"

@tisnik tisnik merged commit e2ec07f into lightspeed-core:main Aug 21, 2025
20 of 23 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Sep 10, 2025
@coderabbitai coderabbitai bot mentioned this pull request Oct 2, 2025
