Showing 1 changed file with 1 addition and 1 deletion.
Submodule llama.cpp updated 28 files:
+25 −0    .github/workflows/zig-build.yml
+2 −0     CMakeLists.txt
+5 −2     Makefile
+4 −3     Package.swift
+1 −0     README.md
+32 −14   build.zig
+238 −0   convert-bloom-hf-to-gguf.py
+216 −0   convert-mpt-hf-to-gguf.py
+8 −63    convert-refact-hf-to-gguf.py
+33 −4    examples/infill/infill.cpp
+1 −1     examples/parallel/parallel.cpp
+2 −2     examples/server/api_like_OAI.py
+13 −2    examples/server/server.cpp
+62 −107  ggml-alloc.c
+11 −5    ggml-alloc.h
+385 −0   ggml-backend.c
+143 −0   ggml-backend.h
+500 −78  ggml-cuda.cu
+4 −0     ggml-cuda.h
+18 −1    ggml-metal.h
+152 −9   ggml-metal.m
+12 −6    ggml-metal.metal
+23 −45   ggml.c
+9 −7     ggml.h
+70 −42   gguf-py/gguf/gguf.py
+5 −5     k_quants.h
+840 −60  llama.cpp
+13 −11   scripts/sync-ggml.sh