Conversation

@rick-github (Contributor)
Return an empty array if the JSON schema sets maxItems == 0.
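
For reference, here is the inline Python one-liner from the commands below written out in full. It builds a pydantic model whose list field has `max_length=0`, which pydantic translates to `"maxItems": 0` in the emitted JSON schema (key ordering and exact output shown here are approximate):

```python
import json
import pydantic

class Result(pydantic.BaseModel):
    colours: list[str] = pydantic.Field(max_length=0)

print(json.dumps(Result.model_json_schema(), indent=2))
# Prints (roughly):
# {
#   "properties": {
#     "colours": {
#       "items": {"type": "string"},
#       "maxItems": 0,
#       "title": "Colours",
#       "type": "array"
#     }
#   },
#   "required": ["colours"],
#   "title": "Result",
#   "type": "object"
# }
```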

Before:

$ ./build/bin/llama-cli -m ./tmp/mnt/models/quantize/gemma-1.1-2b-it.Q8_0.gguf -j "$(python -c $'import json, pydantic\nclass Result(pydantic.BaseModel):  colours:list[str]=pydantic.Field(max_length=0)\nprint(json.dumps(Result.model_json_schema()))')" --no-display-prompt  -p "Here are some colours: " -no-cnv
...

system_info: n_threads = 8 (n_threads_batch = 8) / 24 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | 

parse: error parsing grammar: expecting '}' at -1})? "]" space
colours-kv ::= "\"colours\"" space ":" space colours
root ::= "{" space colours-kv "}" space
space ::= | " " | "\n"{1,2} [ \t]{0,20}
string ::= "\"" char* "\"" space


char ::= [^"\\\x7F\x00-\x1F] | [\\] (["\\bfnrt] | "u" [0-9a-fA-F]{4})
colours ::= "[" space (string ("," space string){0,-1})? "]" space
colours-kv ::= "\"colours\"" space ":" space colours
root ::= "{" space colours-kv "}" space
space ::= | " " | "\n"{1,2} [ \t]{0,20}
string ::= "\"" char* "\"" space

llama_grammar_init_impl: failed to parse grammar
main: failed to initialize sampling subsystem

After:

$ ./build/bin/llama-cli -m ./tmp/mnt/models/quantize/gemma-1.1-2b-it.Q8_0.gguf -j "$(python -c $'import json, pydantic\nclass Result(pydantic.BaseModel):  colours:list[str]=pydantic.Field(max_length=0)\nprint(json.dumps(Result.model_json_schema()))')" --no-display-prompt  -p "Here are some colours: " -no-cnv
...

system_info: n_threads = 8 (n_threads_batch = 8) / 24 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | 

sampler seed: 3851950458
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = -1, n_keep = 1

{"colours": [
  	    	  	  	  	  	]
}
    	   	  	  	  	   [end of text]

llama_perf_sampler_print:    sampling time =       8.25 ms /    41 runs   (    0.20 ms per token,  4971.50 tokens per second)
llama_perf_context_print:        load time =     394.69 ms
llama_perf_context_print: prompt eval time =     112.49 ms /     7 tokens (   16.07 ms per token,    62.23 tokens per second)
llama_perf_context_print:        eval time =    1579.12 ms /    33 runs   (   47.85 ms per token,    20.90 tokens per second)
llama_perf_context_print:       total time =    2158.70 ms /    40 tokens

Fixes: #13116
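
The core of the change is in the JSON-schema-to-grammar conversion: when an array sets `maxItems == 0`, the converter emits a rule that can only match an empty array, instead of deriving the `{0,-1}` repetition bound visible in the "Before" grammar above, which the GBNF parser rejects. Below is a minimal Python sketch of that guard, not the actual llama.cpp code; `build_array_rule` is a hypothetical helper:

```python
# Illustrative sketch only, not the actual llama.cpp implementation.
# build_array_rule is a hypothetical helper showing the maxItems == 0 guard.

def build_array_rule(item_rule: str, min_items: int, max_items: int | None) -> str:
    """Return a GBNF rule body for a JSON array of `item_rule` elements.

    Simplified: min_items only decides whether the item list is optional.
    """
    if max_items == 0:
        # At most zero items means the array can only be empty, so match "[]".
        return '"[" space "]" space'

    # Otherwise repeat the item, separated by commas. A naive bound of
    # {0,max_items-1} on the separator repetition is what produced the
    # invalid {0,-1} seen in the "Before" grammar when max_items == 0.
    rest = '("," space ' + item_rule + ')'
    rest += '*' if max_items is None else '{0,%d}' % (max_items - 1)
    items = '(' + item_rule + ' ' + rest + ')'
    if min_items == 0:
        items += '?'
    return '"[" space ' + items + ' "]" space'


print(build_array_rule("string", 0, 0))
# "[" space "]" space
print(build_array_rule("string", 0, None))
# "[" space (string ("," space string)*)? "]" space
```

Judging by the python label on this PR, the equivalent guard is needed in both the C++ and Python schema converters.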

rick-github requested a review from ngxson as a code owner on April 25, 2025.
github-actions bot added the testing, examples, python, and server labels on Apr 25, 2025.
ngxson changed the title from "fix: handle maxItems == 0 in JSON schema (#13116)" to "grammar : handle maxItems == 0 in JSON schema (#13116)" on Apr 26, 2025.
ngxson merged commit d5fe4e8 into ggml-org:master on Apr 26, 2025 (50 checks passed).
pockers21 pushed a commit to pockers21/llama.cpp that referenced this pull request Apr 28, 2025
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
rick-github deleted the maxItems branch on May 7, 2025.
timwu pushed a commit to timwu/llama.cpp that referenced this pull request Dec 20, 2025
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
SamuelOliveirads pushed a commit to SamuelOliveirads/llama.cpp that referenced this pull request Dec 29, 2025
* grammar : fix JSON Schema for string regex with top-level alt. (ggml-org#9903)

Prior to this commit, using a JSON Schema containing a string
with `pattern` regular expression that uses top-level alternation
(e.g. `"pattern": "^A|B|C|D$"`) would result in invalid JSON
output from the constrained sampling grammar, because it
ended up creating a grammar rule like this for the string:

```
thing ::= "\"" "A" | "B" | "C" | "D" "\"" space
```

Note that this rule will only match a starting quote for the "A" case,
and will only match an ending quote for the "D" case,
so this rule will always produce invalid JSON when used for sampling
(that is, the JSON will always be lacking the starting quote,
the ending quote, or both).

This was fixed in a simple way by adding parentheses to the
generated rule (for all string pattern rules, to keep it simple),
such that the new generated rule looks like this (correct):

```
thing ::= "\"" ("A" | "B" | "C" | "D") "\"" space
```

* grammars : add English-only grammar (ggml-org#10612)

* grammar : handle maxItems == 0 in JSON schema (ggml-org#13117)

Co-authored-by: Richard Lyons <frob@cloudstaff.com>

* grammar-parser : fix possible null-deref (ggml-org#9004)

Fixes: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=70680

Signed-off-by: David Korczynski <david@adalogics.com>

* llama : fix typo in llama-grammar.h [no ci] (ggml-org#11816)

* server: fix "--grammar-file" parameter (ggml-org#12285)

* common : use std::string_view now that we target c++17 (ggml-org#14319)

* json : support `enum` values within `allOf` (ggml-org#15830)

* grammar : use int64_t to avoid int overflows in int schema to grammar conversion logic (ggml-org#16626)

* grammar : support array references in json schema (ggml-org#16792)

* grammar : support array references in json schema

* Update json-schema-to-grammar.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* grammar : improve regex when naming ref derived rules

* grammar : replace non-conformant definitions array with anyOf test case

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
# Conflicts:
#	tests/test-json-schema-to-grammar.cpp

* merge fix

* llama : minor grammar refactor (ggml-org#10897)

* llama: fix error on bad grammar (ggml-org#12628)

* grammar : fix integer overflow (ggml-org#17381)

* Fix DoS / integer overflow

* Remove optional, use INT64_MAX instead as placeholder value (it's technically -1, so it fits :)

* White space

* Actually, since it's unsigned, use UINT64_MAX
# Conflicts:
#	src/llama-grammar.cpp

* grammar: fix regression caused by ggml-org#17381 (ggml-org#17412)

* grammar: fix regression caused by ggml-org#17381

* more readable
# Conflicts:
#	src/llama-grammar.cpp

* Merge Fix

* Fix warnings

---------

Signed-off-by: David Korczynski <david@adalogics.com>
Co-authored-by: Joe Eli McIlvain <joe.eli.mac@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: frob <rick+github@frob.com.au>
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
Co-authored-by: DavidKorczynski <david@adalogics.com>
Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
Co-authored-by: firecoperana <firecoperana>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Aldehir Rojas <hello@alde.dev>
Co-authored-by: Olivier Chafik <olivier.chafik@gmail.com>
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Linked issue: "Misc. bug: JSON schema that defines array with 0 elements generates un-parseable GBNF" (#13116)