
Releases: fraxy-v/llama.cpp

b2873

18 May 07:54
7e4d3d7
Update llama.cpp

Co-authored-by: slaren <slarengh@gmail.com>

b2872

18 May 07:36
564a650
Update llama.cpp

Co-authored-by: slaren <slarengh@gmail.com>

b2871

16 May 07:29
post review

b2870

15 May 10:59
fix: vsnprintf null-terminates its output with '\0'; the previous string handling was incorrect

b2869

15 May 10:28
fix compile error

b2867

13 May 10:23
9aa6724
llama : rename jina tokenizers to v2 (#7249)

* refactor: rename jina tokenizers to v2

* refactor: keep refactoring non-breaking

b2524

25 Mar 07:07
7733f0c
ggml : support AVX512VNNI (#6280)

This change makes some quants (e.g. Q4_0, Q8_0) run faster on some
architectures (e.g. AMD Zen 4).

b2505

22 Mar 15:23
0fa2dc2
Merge branch 'master' into master