
Conversation

@eicherseiji (Contributor)

Why are these changes needed?

Improve test coverage for the example config from the Ray Serve LLM landing page: https://docs.ray.io/en/latest/serve/llm/serving-llms.html
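
For reference, the landing-page example under test looks roughly like the following (reconstructed from the public Ray Serve LLM docs; the exact model and autoscaling values are illustrative):

```python
# Approximate shape of the docs example this PR extracts into a testable file.
from ray import serve
from ray.serve.llm import LLMConfig, build_openai_app

llm_config = LLMConfig(
    model_loading_config=dict(
        model_id="qwen-0.5b",
        model_source="Qwen/Qwen2.5-0.5B-Instruct",
    ),
    deployment_config=dict(
        autoscaling_config=dict(min_replicas=1, max_replicas=2),
    ),
    # Requests a GPU in production; the extracted test removes this field.
    accelerator_type="A10G",
)

app = build_openai_app({"llm_configs": [llm_config]})
serve.run(app, blocking=True)
```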

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(


@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @eicherseiji, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the testability and maintainability of Ray Serve LLM documentation examples. It achieves this by extracting an existing LLM deployment configuration into a dedicated, testable Python script, which is then used to dynamically generate the documentation snippet. The changes also include adjustments to facilitate automated testing of this example within CI environments.

Highlights

  • Test Coverage Improvement: Extracted the Qwen LLM example from the Ray Serve documentation into a new, dedicated Python file (doc/source/serve/doc_code/qwen_example.py). This enables automated testing of the example configuration, significantly improving test coverage for the Ray Serve LLM landing page.
  • CI/Testing Enhancements: Implemented monkeypatching within the extracted example to ensure serve.run operates in a non-blocking mode and to remove accelerator_type requirements. This makes the example suitable for execution in CI environments without needing actual hardware accelerators, streamlining automated testing (see the sketch after this list).
  • Documentation Synchronization: Updated the serving-llms.rst documentation to use a literalinclude directive. This change ensures that the Qwen LLM example displayed in the documentation is directly pulled from the newly created test file, guaranteeing consistency between the documentation and the testable code.
  • Documentation Rendering Improvement: Modified the autopydantic.rst template to enable the display of field summaries and default values for Pydantic models in the generated documentation. This enhances the clarity and detail of configuration object documentation.
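
As a rough illustration of the CI-only patching described in the highlights (a sketch with hypothetical names, not the PR's actual code):

```python
# Sketch: make the docs example CI-friendly by forcing serve.run into
# non-blocking mode so the test can proceed to validation and teardown.
# The wrapper name and patching style are assumptions.
from ray import serve

_original_run = serve.run

def _non_blocking_run(*args, **kwargs):
    kwargs["blocking"] = False  # override blocking=True from the docs snippet
    return _original_run(*args, **kwargs)

serve.run = _non_blocking_run
```

Dropping the example's accelerator_type requirement works the same way: the test variant simply omits that field from the LLMConfig so the deployment can schedule on CPU-only CI machines.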


@gemini-code-assist (bot) left a comment


Code Review

This pull request extracts a documentation example into a separate, testable file. To improve robustness, I've suggested a fix to address a potential race condition in the test validation logic that could lead to a KeyError.
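
For context, the race in question and a common remedy look roughly like this (a sketch with assumed names; the PR's actual validation code may differ):

```python
# Sketch: poll Serve status instead of indexing it immediately. Right after
# serve.run(app, blocking=False), the application may not be registered yet,
# so serve.status().applications["app_name"] can raise KeyError.
import time

from ray import serve

def wait_for_app_running(app_name: str, timeout_s: float = 120.0) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        app = serve.status().applications.get(app_name)  # .get avoids KeyError
        # ApplicationStatus is a string-valued enum, so compare to "RUNNING".
        if app is not None and app.status == "RUNNING":
            return
        time.sleep(1)
    raise TimeoutError(f"{app_name} did not reach RUNNING within {timeout_s}s")
```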

Signed-off-by: Seiji Eicher <seiji@anyscale.com>
@eicherseiji added the "go" label (add ONLY when ready to merge, run all tests) Jul 20, 2025
@eicherseiji changed the title from "Extract Ray Serve LLM example for testing" to "Extract Ray Serve LLM docs example for testing" Jul 21, 2025
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
@eicherseiji changed the title from "Extract Ray Serve LLM docs example for testing" to "Add Ray Serve LLM docs examples to test" Jul 21, 2025
@gemini-code-assist (bot)

Warning

You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!

eicherseiji and others added 10 commits July 25, 2025 19:29
@eicherseiji marked this pull request as ready for review August 5, 2025 21:32
@eicherseiji requested review from a team as code owners August 5, 2025 21:32

@kunling-anyscale left a comment


lgtm


@nrghosh left a comment


lgtm, thanks!


@angelinalg left a comment


stamp

@eicherseiji (Contributor, Author)

@kouroshHakha :shipit: ?


@kouroshHakha left a comment


I think we should do some file reorg before merging. Let me know what you think.



can we put all of these in doc/source/llm/doc_code/<serve_qwen>/ or sth with a better name in brackets <>?

@eicherseiji (Contributor, Author)


Done

srcs = glob(["*.py"]),
visibility = ["//doc:__subpackages__"],
)



Could we simply add the content of this file to ray/doc/BUILD instead of creating a separate BUILD here? Basically, I'd rather not create new opinionated paths for where the doc tests live. We can follow that file's existing convention and add a new section for all Ray Serve LLM doc tests.
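
For illustration, the suggested arrangement might look like this in ray/doc/BUILD, reusing the py_test_run_all_subdirectory macro that file already uses for other doc_code directories (the globs, size, and tags below are assumptions):

```python
# Hypothetical new section in ray/doc/BUILD for Ray Serve LLM doc tests.
py_test_run_all_subdirectory(
    size = "large",
    include = ["source/llm/doc_code/**/*.py"],
    exclude = [],
    extra_srcs = [],
    tags = ["exclusive", "team:llm"],
)
```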

@eicherseiji (Contributor, Author)


Done!

@kouroshHakha changed the title from "Add Ray Serve LLM docs examples to test" to "[docs][serve.llm] Add Ray Serve LLM docs examples to test" Aug 6, 2025
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
@kouroshHakha enabled auto-merge (squash) August 11, 2025 22:55
@kouroshHakha merged commit da64afb into ray-project:master Aug 11, 2025
6 checks passed
sampan-s-nayak pushed a commit that referenced this pull request Aug 12, 2025
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: sampan <sampan@anyscale.com>
dioptre pushed a commit to sourcetable/ray that referenced this pull request Aug 20, 2025
[docs][serve.llm] Add Ray Serve LLM docs examples to test (ray-project#54763)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: Andrew Grosser <dioptre@gmail.com>
jugalshah291 pushed a commit to jugalshah291/ray_fork that referenced this pull request Sep 11, 2025
[docs][serve.llm] Add Ray Serve LLM docs examples to test (ray-project#54763)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: jugalshah291 <shah.jugal291@gmail.com>
dstrodtman pushed a commit to dstrodtman/ray that referenced this pull request Oct 6, 2025
[docs][serve.llm] Add Ray Serve LLM docs examples to test (ray-project#54763)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: Douglas Strodtman <douglas@anyscale.com>
