
Conversation

@grahamking (Contributor) commented May 29, 2025

  • Add Granite to our tokenizer
  • Fix pre-processor to load context length correctly
  • Add strftime_now Jinja function for prompt templates
  • Update llama.cpp
  • Handle trtllm errors when not using trtllm

Support depends on the engine:

- `mistral.rs`, our default engine, doesn't support Granite yet.

- `llama.cpp` does and works very well:
```
dynamo-run out=llamacpp ~/llms/granite-3.3-2b-instruct-Q4_K_M.gguf --context-length 16384
```

- `vllm` also works very well:
```
dynamo-run in=http out=vllm ~/llms/granite-3.3-2b-instruct --context-length 16384
```

- `sglang` mostly works, but it doesn't catch the stop token, so we do that in the HTTP ingress and log an error (see the sketch below). The Text ingress doesn't catch it because I disabled it to make the raw echo engine work. A bit of work to do here.
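For illustration, a minimal sketch of what catching the stop token in an ingress might look like; the function name and the stop marker are assumptions for this example, not the actual dynamo code:

```rust
// Hypothetical sketch of stop-token handling in an HTTP ingress: truncate the
// streamed chunk at the stop marker and signal end-of-stream.
fn trim_at_stop<'a>(chunk: &'a str, stop_token: &str) -> (&'a str, bool) {
    match chunk.find(stop_token) {
        // Stop token seen: return the text before it and mark the stream done.
        Some(idx) => (&chunk[..idx], true),
        None => (chunk, false),
    }
}

fn main() {
    let (text, done) = trim_at_stop("Hello world<|end_of_text|>", "<|end_of_text|>");
    assert_eq!((text, done), ("Hello world", true));
}
```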

Closes: #1245

Summary by CodeRabbit

  • New Features

    • Added support for the "Granite" architecture in model configuration.
    • Introduced a new function for template processing that allows inserting the current date and time in custom formats.
  • Improvements

    • Enhanced how model context length is determined by prioritizing configuration sources for more accurate extraction.
    • Improved error handling and logging for imports, providing better diagnostics and stability if optional dependencies fail to load.
  • Dependency Updates

    • Updated the llama-cpp-2 dependency to version 0.1.107 for improved compatibility and features.


coderabbitai bot commented May 29, 2025

Walkthrough

This update introduces IBM Granite model support by adding a new enum variant, adjusts model context length extraction logic to prioritize config.json, and enhances template formatting with a new strftime_now function. It also broadens error handling for TensorRT-LLM imports and updates a dependency version in the LlamaCPP engine.

Changes

| File(s) | Change Summary |
| --- | --- |
| lib/llm/src/gguf.rs | Added Granite variant to GGUFArchitecture enum. |
| lib/llm/src/model_card/create.rs | Changed context length extraction: now prefers max_position_embeddings from config.json. |
| lib/llm/src/preprocessor/prompt/template/tokcfg.rs | Added new public function strftime_now for formatted current datetime in templates. |
| lib/llm/src/preprocessor/prompt/template/formatters.rs | Registered strftime_now in Jinja environment; minor order adjustment for adding functions/filters. |
| lib/bindings/python/src/dynamo/llm/__init__.py | Broadened import exception handling for TensorRT-LLM and added error logging. |
| lib/engines/llamacpp/Cargo.toml | Updated llama-cpp-2 dependency version from 0.1.103 to 0.1.107. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ModelLoader
    participant ConfigReader

    User->>ModelLoader: Load model (e.g., IBM Granite)
    ModelLoader->>ConfigReader: Read config.json
    alt max_position_embeddings found
        ConfigReader-->>ModelLoader: Return context length
    else not found
        ConfigReader->>ConfigReader: Read tokenizer_config.json
        alt model_max_length found
            ConfigReader-->>ModelLoader: Return context length
        else not found
            ConfigReader-->>ModelLoader: Return 0
        end
    end
    ModelLoader-->>User: Model loaded with context length
```
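For concreteness, a minimal sketch of the cascade the diagram describes, assuming both config files are parsed as serde_json values; the function name is illustrative, not the actual create.rs code:

```rust
use serde_json::Value;

// Fallback order: config.json's max_position_embeddings first, then
// tokenizer_config.json's model_max_length, else 0 so the engine decides.
fn context_length(config: &Value, tokenizer_config: &Value) -> u64 {
    config
        .get("max_position_embeddings")
        .and_then(Value::as_u64)
        .or_else(|| {
            tokenizer_config
                .get("model_max_length")
                .and_then(Value::as_u64)
        })
        .unwrap_or(0)
}

fn main() {
    let config = serde_json::json!({ "max_position_embeddings": 16384 });
    let tokenizer_config = serde_json::json!({});
    assert_eq!(context_length(&config, &tokenizer_config), 16384);
}
```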

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Add support for IBM Granite models (#1245) | ✅ | |

Suggested labels

size/L

Suggested reviewers

  • biswapanda
  • ryanolson
  • paulhendricks
  • kkranen
  • tanmayv25
  • nnshah1
  • alec-flowers
  • GuanLuo
  • tmonty12
  • ishandhanani
  • ptarasiewiczNV
  • oandreeva-nv
  • nv-anants

Poem

A Granite rabbit hops with glee,
Now Granite models run, you see!
With context found from configs neat,
And time-stamps in templates—what a treat!
Broader logs and engines tuned,
This codebase sings a Granite tune.
🐇✨



coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
lib/llm/src/preprocessor/prompt/template/tokcfg.rs (1)

230-235: Consider improving the function design and documentation.

The strftime_now function implementation is functionally correct, but there are a few considerations:

  1. Return type mismatch: The function signature suggests it can fail with Result<Value, Error>, but the implementation never returns an error. Consider whether this should return Value directly or if error handling for invalid format strings is needed.

  2. Local timezone assumption: Using Local::now() assumes the local timezone is appropriate for all use cases. Consider if UTC time might be more predictable in server environments.

  3. Missing documentation: The function lacks documentation explaining the expected format string syntax and behavior.

Consider this improvement:

```diff
+/// Formats the current local date and time according to the given format string.
+/// Uses chrono's format syntax (e.g., "%Y-%m-%d %H:%M:%S").
+/// Returns the formatted string as a safe template value.
 pub fn strftime_now(format_str: &str) -> Result<Value, Error> {
     let local: DateTime<Local> = Local::now();
-    Ok(Value::from_safe_string(
-        local.format(format_str).to_string(),
-    ))
+    match local.format(format_str).to_string() {
+        formatted => Ok(Value::from_safe_string(formatted)),
+    }
 }
```

Or if error handling isn't needed:

```diff
+/// Formats the current local date and time according to the given format string.
-pub fn strftime_now(format_str: &str) -> Result<Value, Error> {
+pub fn strftime_now(format_str: &str) -> Value {
     let local: DateTime<Local> = Local::now();
-    Ok(Value::from_safe_string(
-        local.format(format_str).to_string(),
-    ))
+    Value::from_safe_string(local.format(format_str).to_string())
 }
```
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7677f74 and 4819d98.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • lib/bindings/python/src/dynamo/llm/__init__.py (2 hunks)
  • lib/engines/llamacpp/Cargo.toml (1 hunks)
  • lib/llm/src/gguf.rs (1 hunks)
  • lib/llm/src/model_card/create.rs (1 hunks)
  • lib/llm/src/preprocessor/prompt/template/formatters.rs (2 hunks)
  • lib/llm/src/preprocessor/prompt/template/tokcfg.rs (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
lib/llm/src/preprocessor/prompt/template/formatters.rs (1)
lib/llm/src/preprocessor/prompt/template/tokcfg.rs (3)
  • raise_exception (39-41)
  • strftime_now (230-235)
  • tojson (196-228)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: pre-merge-rust (lib/bindings/python)
  • GitHub Check: pre-merge-rust (.)
  • GitHub Check: pre-merge-rust (lib/runtime/examples)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (8)
lib/engines/llamacpp/Cargo.toml (1)

41-41:

✅ Verification successful

Verify the dependency version upgrade is safe and beneficial.

The llama-cpp-2 dependency has been updated from 0.1.103 to 0.1.107. While this aligns with the PR's goal of improving llama.cpp support for Granite models, please ensure this version upgrade is stable and compatible.


🏁 Script executed:

```bash
#!/bin/bash
# Description: Check for security advisories and verify the version exists
# Expected: Confirm version 0.1.107 exists and check for any security issues

# Check if the version exists on crates.io
curl -s "https://crates.io/api/v1/crates/llama-cpp-2" | jq '.versions[] | select(.num == "0.1.107")'

# Check for security advisories
gh api graphql -f query='
{
  securityVulnerabilities(first: 10, ecosystem: RUST, package: "llama-cpp-2") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'
```

Length of output: 2553


Confirmed llama-cpp-2 v0.1.107 is Available and Secure

  • Crate v0.1.107 exists on crates.io (not yanked)
  • No Rust security advisories reported for this version
  • License remains “MIT OR Apache-2.0”

This bump aligns with the PR’s goals and can be merged as-is.

lib/llm/src/preprocessor/prompt/template/tokcfg.rs (1)

20-20: Import addition looks good.

The chrono::{DateTime, Local} import is correctly added to support the new strftime_now function.

lib/llm/src/preprocessor/prompt/template/formatters.rs (2)

18-18: Import update is correct.

The import statement has been properly updated to include the new strftime_now function alongside the existing imports.


55-56: Function registration is properly implemented.

The strftime_now function is correctly registered in the Jinja environment, making it available for use in prompt templates. The reordering of function registrations (moving raise_exception after the tojson filter) doesn't affect functionality but maintains a consistent organization.

lib/bindings/python/src/dynamo/llm/__init__.py (2)

16-16: Logging import addition is appropriate.

Adding the logging import enables proper error reporting for TensorRT-LLM import failures.


50-53: Excellent improvement to error handling.

The enhanced error handling is a significant improvement over silently ignoring all import failures. This approach:

  1. Catches broader exceptions: Beyond just ImportError, this handles any exception that might occur during TensorRT-LLM imports
  2. Provides debugging information: Logs the full exception with stack trace using logger.exception()
  3. Maintains isolation: Prevents TensorRT-LLM issues from affecting other engine imports
  4. Uses appropriate logger: Creates a module-specific logger using __name__

This aligns perfectly with the PR's objective of enhancing error handling for trtllm when it's not in use.

lib/llm/src/model_card/create.rs (1)

118-131: LGTM! Improved context length extraction with proper fallback logic.

The cascading fallback approach is well-implemented:

  1. First attempts the standard HuggingFace max_position_embeddings from config.json
  2. Falls back to model_max_length from tokenizer_config.json
  3. Defaults to 0 to let the engine handle it

This should improve compatibility across different model configurations and follows best practices for configuration precedence.

lib/llm/src/gguf.rs (1)

61-61: LGTM! Clean addition of Granite architecture support.

The new Granite variant is correctly added to the GGUFArchitecture enum, following the existing naming convention and placement pattern.
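For context, the change amounts to one new variant; everything except Granite below is a placeholder, not the real enum in gguf.rs:

```rust
// Illustrative shape only: the variant set and derives are placeholders.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum GGUFArchitecture {
    Llama,
    Qwen2,
    Granite, // new: lets Granite GGUF files be recognized
}
```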

```rust
env.add_filter("tojson", tojson);

env.add_function("raise_exception", raise_exception);
env.add_function("strftime_now", strftime_now);
```
Contributor

What is this hooking into? Do some chat/jinja templates have some "<strftime_now>"-like placeholders in them, and this is saying to populate them with our custom strftime_now function?

Contributor Author

Exactly what you said.

The Python HF transformers library supports that, it's documented here:
https://huggingface.co/docs/transformers/main/en/chat_templating_writing#callable-functions

Granite's chat template has it: https://huggingface.co/ibm-granite/granite-3.3-2b-instruct/blob/main/tokenizer_config.json#L224 (you have to scroll a long long way to the right).

I'm surprised we got this far without it.
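For illustration, a minimal sketch of the mechanism using the minijinja and chrono crates (which the Rust code above appears to use); the template string is made up:

```rust
use chrono::Local;
use minijinja::{context, value::Value, Environment, Error};

// Same shape as the function under review: format the current local time.
pub fn strftime_now(format_str: &str) -> Result<Value, Error> {
    Ok(Value::from_safe_string(
        Local::now().format(format_str).to_string(),
    ))
}

fn main() -> Result<(), Error> {
    let mut env = Environment::new();
    // Registering the function makes {{ strftime_now(...) }} callable from
    // any chat template rendered through this environment.
    env.add_function("strftime_now", strftime_now);
    env.add_template("chat", "Today's date is {{ strftime_now('%B %d, %Y') }}.")?;
    println!("{}", env.get_template("chat")?.render(context! {})?);
    Ok(())
}
```

That matches the transformers behavior linked above: the template simply calls strftime_now like any other global function.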

grahamking merged commit 7d0c938 into main May 29, 2025
15 checks passed
grahamking deleted the gk-granite branch May 29, 2025 19:25
grahamking added a commit that referenced this pull request Jun 2, 2025