Conversation

@KrishnanPrash
Contributor

@KrishnanPrash KrishnanPrash commented Aug 8, 2025

Overview:

For the structured output / guided decoding change specifically, see #2404, which was included in this PR.

For min_tokens/ignore_eos at the top level, see the rest of this PR and its description.


With the changes proposed in this PR, we add support for passing the ignore_eos and min_tokens fields at the root level of the JSON body of an inference request. This improves compatibility with third-party benchmarking tools and the UX of our API.

Current State:
We currently send these parameters in the JSON body as:

{
  "model": "...",
  "messages": [...],
  "nvext": {
    "ignore_eos": true,
    "use_raw_prompt": true
  }
}

Proposal:

{
  "model": "...",
  "messages": [...],
  "ignore_eos": true, // <--- Moved to top level
  "min_tokens": ..., // Added support
  "nvext": {
    // NVIDIA specific parameters
  }
}

Details:

Note:

  • Currently min_tokens isn't supported, so there is no conflict in adding support for it.
  • However, since ignore_eos is already supported within nvext, the existing workflow is maintained while root-level access to ignore_eos is also enabled.

Since ignore_eos can be specified in two locations, we currently let the value in nvext override any value set at the root level.
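The precedence rule above can be sketched as a small merge function (illustrative only; the real types in lib/llm/src/protocols/openai have more fields):

```rust
// Simplified stand-ins for the request extension types.
#[derive(Default)]
struct CommonExt {
    ignore_eos: Option<bool>, // root-level field
    min_tokens: Option<u32>,  // root-level only; no nvext counterpart
}

#[derive(Default)]
struct NvExt {
    ignore_eos: Option<bool>, // legacy location, kept for compatibility
}

/// nvext.ignore_eos, when present, overrides the root-level value.
fn effective_ignore_eos(common: &CommonExt, nvext: Option<&NvExt>) -> Option<bool> {
    nvext.and_then(|nv| nv.ignore_eos).or(common.ignore_eos)
}

fn main() {
    let common = CommonExt { ignore_eos: Some(false), min_tokens: Some(16) };
    // Both locations set: nvext wins.
    assert_eq!(
        effective_ignore_eos(&common, Some(&NvExt { ignore_eos: Some(true) })),
        Some(true)
    );
    // nvext absent or silent: root-level value applies.
    assert_eq!(effective_ignore_eos(&common, None), Some(false));
    // min_tokens is read from the root level only.
    assert_eq!(common.min_tokens, Some(16));
}
```

A request that sets ignore_eos only at the root level behaves the same as one that sets it only in nvext; the override matters only when both are present.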

Summary by CodeRabbit

  • New Features

    • Added support for specifying ignore_eos and min_tokens parameters in API requests, with clear precedence rules between common and extended fields.
    • Introduced a unified structure for handling common extension fields in both chat and completion requests.
  • Bug Fixes

    • Ensured all request constructions explicitly initialize the new common extension fields to default values, improving consistency and backward compatibility.
  • Tests

    • Added comprehensive tests to validate correct parsing, precedence, and serialization of the new extension fields.

@KrishnanPrash KrishnanPrash requested a review from a team as a code owner August 8, 2025 22:02
@copy-pr-bot

copy-pr-bot bot commented Aug 8, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@github-actions github-actions bot added the feat label Aug 8, 2025
@coderabbitai
Contributor

coderabbitai bot commented Aug 8, 2025

Walkthrough

This change introduces a new CommonExt struct to encapsulate shared extension fields (ignore_eos, min_tokens) for OpenAI-compatible request types, integrating it into both chat and completion request structures. Trait implementations and provider logic are updated to enforce field precedence between common and nvext. Tests are added and updated to verify correct serialization, deserialization, and precedence handling.

Changes

  • Common Extension Struct and Trait (lib/llm/src/protocols/openai/common_ext.rs): Adds the CommonExt struct with ignore_eos and min_tokens fields, builder, merge logic, validation, and a CommonExtProvider trait with methods for effective field retrieval. Includes comprehensive unit tests.
  • Chat Completion Request Integration (lib/llm/src/protocols/openai/chat_completions.rs): Adds a common: CommonExt field to NvCreateChatCompletionRequest, updates docs, implements CommonExtProvider, updates trait logic for field precedence, and modifies OpenAIStopConditionsProvider to use the new methods.
  • Completion Request Integration (lib/llm/src/protocols/openai/completions.rs): Adds a common: CommonExt field to NvCreateCompletionRequest, implements CommonExtProvider, and updates OpenAIStopConditionsProvider for effective field retrieval.
  • Provider Trait Refactor (lib/llm/src/protocols/openai.rs): Adds a get_ignore_eos method to OpenAIStopConditionsProvider and updates extract_stop_conditions to use it, centralizing ignore_eos retrieval logic.
  • Request Construction Updates (lib/llm/src/entrypoint/input/batch.rs, lib/llm/src/entrypoint/input/text.rs, lib/llm/src/protocols/openai/responses.rs): Updates construction of NvCreateChatCompletionRequest to explicitly initialize the common field with its default value.
  • Test Updates for CommonExt (lib/llm/src/http/service/openai.rs, lib/llm/tests/http-service.rs, lib/llm/tests/openai_completions.rs, lib/llm/tests/preprocessor.rs): Updates tests to include explicit initialization of the common field in request structs, ensuring all code paths set this field.
  • New CommonExt Behavior Tests (lib/llm/tests/test_common_ext.rs): Adds new tests verifying serialization, deserialization, and precedence logic for ignore_eos and min_tokens in both chat and completion requests, confirming compatibility and correct behavior.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant API
    participant RequestStruct
    participant CommonExt
    participant NvExt

    Client->>API: Sends request with ignore_eos/min_tokens (root/nvext)
    API->>RequestStruct: Deserialize request
    RequestStruct->>CommonExt: Access ignore_eos/min_tokens
    RequestStruct->>NvExt: Access ignore_eos (if present)
    CommonExt->>RequestStruct: Provide effective values (nvext takes precedence for ignore_eos)
    RequestStruct->>API: Return effective ignore_eos/min_tokens
    API->>Client: Responds using effective values

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

In fields of code where rabbits hop,
A common thread now ties each stop—
Ignore the EOS, or heed its call,
Min tokens set for one and all.
Extensions merged, precedence clear,
Our tests all pass—let’s give a cheer!
🐇✨



@KrishnanPrash KrishnanPrash marked this pull request as draft August 8, 2025 22:04
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (8)
lib/llm/src/entrypoint/input/text.rs (1)

121-123: Explicit CommonExt initialization: LGTM; consider wiring min_tokens via common here

Setting common: Default::default() is correct and keeps behavior unchanged while nvext.ignore_eos remains authoritative per precedence rules.

Optionally, you can capitalize on the new root-level support and remove the prior limitation about min_tokens by setting it via common (no need to touch async-openai builder). For example, if you want to apply a large min_tokens when inspecting templates:

-        let req = NvCreateChatCompletionRequest {
-            inner,
-            common: Default::default(),
-            nvext: Some(nvext),
-        };
+        // Use a pre-built `common` if you want to affect root-level fields like `min_tokens`
+        let req = NvCreateChatCompletionRequest {
+            inner,
+            common,
+            nvext: Some(nvext),
+        };

Then, outside this range (illustrative only), compute common conditionally:

// Above, after building `inner`
let mut common = Default::default();
// If you want to force longer streaming regardless of EOS for inspection:
if /* inspect_template */ false {
    // e.g., common.min_tokens = Some(8192);
}
lib/llm/src/entrypoint/input/batch.rs (1)

225-229: CommonExt initialization in batch evaluate: LGTM; consider optional min_tokens for batch

Initialization is correct. If batch runs benefit from discouraging early stops, consider setting common.min_tokens (e.g., to an evaluation-specific value) to avoid premature termination by EOS without relying on provider-specific nvext.

lib/llm/src/protocols/openai.rs (1)

66-69: Doc mismatch: default impl only checks NvExt

The comment says the method considers both CommonExt and NvExt, but the default impl reads only from NvExt. Either adjust the comment or have implementors override (as you do for completions).

-    /// Get the effective ignore_eos value, considering both CommonExt and NvExt.
+    /// Get the effective ignore_eos value.
+    ///
+    /// Default implementation reads from NvExt only. Types embedding CommonExt
+    /// should override to apply precedence (e.g., NvExt overrides CommonExt).
     fn get_ignore_eos(&self) -> Option<bool> {
         self.nvext().and_then(|nv| nv.ignore_eos)
     }
lib/llm/tests/test_common_ext.rs (1)

109-146: Serialization shape check (chat) — strong

Verifies flattened JSON placement and effective precedence. Consider adding a similar serialization round-trip test for completions for symmetry, though optional.

lib/llm/src/protocols/openai/common_ext.rs (3)

34-35: Redundant range validation on a u32

min_tokens is already u32, so it cannot be negative. The #[validate(range(min = 0))] check adds no value and just slows down validation.


45-54: Consider returning a struct instead of a tuple for clarity

merge_with_nvext returns a naked (Option<bool>, Option<u32>), which is easy to misuse (argument order confusion). Returning a small struct such as MergedCommon { ignore_eos, min_tokens } makes call sites self-documenting and extensible.
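A minimal sketch of the struct-returning variant suggested here (MergedCommon is the hypothetical name from the comment; the stand-in types are simplified from the real CommonExt/NvExt):

```rust
/// Hypothetical named result, replacing the bare (Option<bool>, Option<u32>) tuple.
#[derive(Debug, PartialEq)]
pub struct MergedCommon {
    pub ignore_eos: Option<bool>,
    pub min_tokens: Option<u32>,
}

pub struct CommonExt {
    pub ignore_eos: Option<bool>,
    pub min_tokens: Option<u32>,
}

pub struct NvExt {
    pub ignore_eos: Option<bool>,
}

impl CommonExt {
    /// nvext.ignore_eos takes precedence; min_tokens is root-only.
    pub fn merge_with_nvext(&self, nvext: Option<&NvExt>) -> MergedCommon {
        MergedCommon {
            ignore_eos: nvext.and_then(|nv| nv.ignore_eos).or(self.ignore_eos),
            min_tokens: self.min_tokens,
        }
    }
}

fn main() {
    let common = CommonExt { ignore_eos: Some(false), min_tokens: Some(32) };
    let merged = common.merge_with_nvext(Some(&NvExt { ignore_eos: Some(true) }));
    // Call sites now name the fields instead of unpacking a positional tuple.
    assert_eq!(merged, MergedCommon { ignore_eos: Some(true), min_tokens: Some(32) });
}
```

Named fields also leave room for future common fields without breaking every call site.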


57-67: common_ext() need not return Option

Every implementor so far owns a real CommonExt value, never None. Returning &CommonExt directly would remove one level of Option handling everywhere.

lib/llm/src/protocols/openai/chat_completions.rs (1)

147-167: Avoid duplicating precedence logic – delegate to merge_with_nvext

effective_ignore_eos / effective_min_tokens re-implement the merging already provided by CommonExt::merge_with_nvext. Call that helper instead to keep a single source of truth.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5375af2 and 76fa80f.

📒 Files selected for processing (12)
  • lib/llm/src/entrypoint/input/batch.rs (1 hunks)
  • lib/llm/src/entrypoint/input/text.rs (1 hunks)
  • lib/llm/src/http/service/openai.rs (2 hunks)
  • lib/llm/src/protocols/openai.rs (3 hunks)
  • lib/llm/src/protocols/openai/chat_completions.rs (5 hunks)
  • lib/llm/src/protocols/openai/common_ext.rs (1 hunks)
  • lib/llm/src/protocols/openai/completions.rs (4 hunks)
  • lib/llm/src/protocols/openai/responses.rs (1 hunks)
  • lib/llm/tests/http-service.rs (3 hunks)
  • lib/llm/tests/openai_completions.rs (1 hunks)
  • lib/llm/tests/preprocessor.rs (1 hunks)
  • lib/llm/tests/test_common_ext.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-06-24T20:59:35.725Z
Learnt from: ishandhanani
PR: ai-dynamo/dynamo#1626
File: lib/llm/src/preprocessor.rs:238-239
Timestamp: 2025-06-24T20:59:35.725Z
Learning: In lib/llm/src/preprocessor.rs, the `sampling_options` call in the `preprocess_request` method is placed in the common section after the match statement on `request.prompt_input_type()`, meaning it applies to both `PromptInput::Tokens` and `PromptInput::Text` request types.

Applied to files:

  • lib/llm/src/entrypoint/input/batch.rs
  • lib/llm/tests/preprocessor.rs
  • lib/llm/src/protocols/openai/common_ext.rs
  • lib/llm/src/protocols/openai/completions.rs
  • lib/llm/src/protocols/openai/chat_completions.rs
📚 Learning: 2025-06-16T20:02:54.935Z
Learnt from: PeaBrane
PR: ai-dynamo/dynamo#1236
File: lib/llm/src/mocker/protocols.rs:85-112
Timestamp: 2025-06-16T20:02:54.935Z
Learning: When using derive_builder::Builder macro, the macro generates the builder struct and its methods, but does NOT generate a `builder()` method on the original struct. A manual `impl StructName { pub fn builder() -> StructNameBuilder { StructNameBuilder::default() } }` is required to provide the convenient `StructName::builder()` API pattern.

Applied to files:

  • lib/llm/tests/openai_completions.rs
📚 Learning: 2025-07-14T21:25:56.930Z
Learnt from: ryanolson
PR: ai-dynamo/dynamo#1919
File: lib/runtime/src/engine.rs:168-168
Timestamp: 2025-07-14T21:25:56.930Z
Learning: The AsyncEngineContextProvider trait in lib/runtime/src/engine.rs was intentionally changed from `Send + Sync + Debug` to `Send + Debug` because the Sync bound was overly constraining. The trait should only require Send + Debug as designed.

Applied to files:

  • lib/llm/tests/http-service.rs
  • lib/llm/tests/preprocessor.rs
📚 Learning: 2025-06-13T22:07:24.843Z
Learnt from: kthui
PR: ai-dynamo/dynamo#1424
File: lib/runtime/src/pipeline/network/egress/push_router.rs:204-209
Timestamp: 2025-06-13T22:07:24.843Z
Learning: The codebase uses async-nats version 0.40, not the older nats crate. Error handling should use async_nats::error::Error variants, not nats::Error variants.

Applied to files:

  • lib/llm/tests/http-service.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: pre-merge-rust (lib/bindings/python)
  • GitHub Check: pre-merge-rust (.)
  • GitHub Check: Build and Test - dynamo
🔇 Additional comments (20)
lib/llm/src/http/service/openai.rs (2)

1240-1242: Tests updated for CommonExt presence: LGTM

Adding common: Default::default() aligns tests with the new struct shape and preserves behavior.


1267-1269: Tests updated for CommonExt presence: LGTM

Consistent initialization of common across tests ensures future fields (e.g., min_tokens) remain tested/serializable without breaking shapes.

lib/llm/tests/openai_completions.rs (1)

39-43: Include CommonExt in test request construction: LGTM

This keeps samples aligned with the new request schema and avoids future breakage when asserting JSON snapshots.

lib/llm/tests/http-service.rs (3)

768-770: NvCustom client test: CommonExt initialization is correct

Good consistency with the updated struct. No behavior changes.


806-808: NvCustom client test (failing model path): LGTM

Explicit common init keeps schema stable across success/error flows.


846-847: NvCustom client test with context: LGTM

Maintains parity across context-aware path as well.

lib/llm/src/protocols/openai/responses.rs (1)

176-191: Initialize common on conversion — good

Explicitly setting common: Default::default() keeps the struct well-formed after the CommonExt addition and avoids serde surprises later.

lib/llm/tests/preprocessor.rs (1)

269-273: Test struct construction kept in sync

Adding common: Default::default() ensures the test Request::from helper builds a valid request post-refactor. Looks good.

lib/llm/src/protocols/openai.rs (2)

25-25: Exporting common_ext module

Good addition; keeps the module graph coherent with the new CommonExt plumbing.


151-153: Chat-completions override for get_ignore_eos already in place

I’ve confirmed that chat_completions.rs implements:

fn get_ignore_eos(&self) -> Option<bool> {
    self.effective_ignore_eos()
}

so the indirection via self.get_ignore_eos() is honored and no further changes are needed.

lib/llm/tests/test_common_ext.rs (6)

12-29: Covers root-level fields (chat) — good

Deserialization and effective getters for ignore_eos and min_tokens are validated. Solid baseline.


30-49: Precedence test (chat) is correct

Confirms NvExt overrides CommonExt for ignore_eos and preserves min_tokens. Exactly the intended behavior.


51-69: Backward compatibility (chat) validated

Ensures legacy nvext.ignore_eos still works without root fields. Nice.


70-87: Root-level fields (completions) — good

Mirrors the chat tests for completions; correct assertions.


88-107: Precedence test (completions) — good

NvExt override is enforced; min_tokens remains from CommonExt. Looks correct.


148-162: min_tokens root-only behavior — good

Confirms min_tokens is surfaced via CommonExt only. Matches design.

lib/llm/src/protocols/openai/completions.rs (4)

138-153: Precedence logic is correct

nvext.ignore_eos overrides common.ignore_eos; min_tokens sourced from common. Matches PR intent and tests.


160-163: Stop conditions now surface min_tokens

Delegating to effective_min_tokens() wires min_tokens through to StopConditions. Good.


172-174: Override get_ignore_eos to honor precedence

Returning effective_ignore_eos() ensures StopConditions respect NvExt-over-CommonExt. Good.


41-46: serde(flatten) integration is correct and CommonExt implements Default

  • In lib/llm/src/protocols/openai/common_ext.rs (lines 21–23), CommonExt is annotated with #[derive(..., Default)], so default construction paths are supported as expected.
  • The min_tokens: Option<u32> field and its precedence logic are fully tested in merge_with_nvext, covering both None and Some cases.

No changes required here.

@KrishnanPrash KrishnanPrash marked this pull request as ready for review August 11, 2025 20:08
Copy link
Contributor

@rmccorm4 rmccorm4 left a comment


Flipping precedence to top level -> nvext based on offline discussion, then should be ready for re-review (and making clippy and related checks pass)

@rmccorm4 rmccorm4 changed the title feat: Adding support for min_tokens and ignore_eos (outside of nvext) feat: Add frontend support for min_tokens and ignore_eos (outside of nvext) and Structured Output / Guided Decoding Aug 12, 2025
@KrishnanPrash KrishnanPrash merged commit 18bb779 into main Aug 12, 2025
11 checks passed
@KrishnanPrash KrishnanPrash deleted the kprashanth/ux_refactor branch August 12, 2025 20:46
hhzhang16 pushed a commit that referenced this pull request Aug 27, 2025
… of `nvext`) and Structured Output / Guided Decoding (#2380)

Signed-off-by: KrishnanPrash <140860868+KrishnanPrash@users.noreply.github.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
Co-authored-by: Ayush Agarwal <ayushag@nvidia.com>
Signed-off-by: Hannah Zhang <hannahz@nvidia.com>

6 participants