Releases: arthw/llama.cpp
b3151
ci : fix macos x86 build (#7940)
To keep the old x86 `macos-latest` environment we should pin the runner to `macos-12`.
Potentially fixes: https://github.com/ggerganov/llama.cpp/issues/6975
b3145
rpc : fix ggml_backend_rpc_supports_buft() (#7918)
b3014
update HIP_UMA #7399 (#7414)
* update HIP_UMA #7399: add use of hipMemAdviseSetCoarseGrain when LLAMA_HIP_UMA is enabled.
  - gives ~2x on prompt eval and ~1.5x on token gen with ROCm 6.0 on a Ryzen 7940HX iGPU (780M/gfx1103)
* simplify code, more consistent style
Co-authored-by: slaren <slarengh@gmail.com>
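For context, the `LLAMA_HIP_UMA` path described above allocates managed (unified) memory and then asks the HIP runtime to treat it as coarse-grained, which avoids fine-grain cache-coherency overhead on APUs such as the 780M/gfx1103. The snippet below is a minimal standalone sketch of that pattern using the public HIP API (`hipMallocManaged` plus `hipMemAdvise` with `hipMemAdviseSetCoarseGrain`); it is not the ggml implementation, and the helper name `uma_alloc` is made up for illustration.

```cpp
// Minimal sketch (not the actual ggml code) of the LLAMA_HIP_UMA allocation
// pattern: allocate managed (UMA) memory, then advise the runtime to treat
// it as coarse-grained.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

static void * uma_alloc(size_t size, int device) {
    void * ptr = nullptr;
    hipError_t err = hipMallocManaged(&ptr, size);
    if (err != hipSuccess) {
        fprintf(stderr, "hipMallocManaged failed: %s\n", hipGetErrorString(err));
        return nullptr;
    }
    // The advise call is only a hint; if it fails, the memory is still usable.
    err = hipMemAdvise(ptr, size, hipMemAdviseSetCoarseGrain, device);
    if (err != hipSuccess) {
        fprintf(stderr, "hipMemAdvise(coarse grain) failed: %s\n", hipGetErrorString(err));
    }
    return ptr;
}

int main() {
    void * buf = uma_alloc(64 * 1024 * 1024, /*device =*/ 0);
    if (buf == nullptr) {
        return EXIT_FAILURE;
    }
    // ... use buf as a weight or KV-cache buffer ...
    hipFree(buf);
    return EXIT_SUCCESS;
}
```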
b2986
readme : remove trailing space (#7469)
b2953
Tokenizer SPM fixes for phi-3 and llama-spm (#7375)
* Update brute force test: special tokens
* Fix added tokens:
  - Try to read 'added_tokens.json'.
  - Try to read 'tokenizer_config.json'.
  - Try to read 'tokenizer.json'.
* Fix special tokens rtrim
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* server : fix test regexes
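The added-tokens fix above amounts to probing several tokenizer metadata files in a fixed order and merging whatever is found. The sketch below only illustrates that fallback-read idea; the real change lives in the model conversion/tokenizer code, and the use of nlohmann/json here is an assumption made purely to keep the C++ example self-contained.

```cpp
// Illustrative only: probe the tokenizer metadata files named in the commit
// message, in order, and report which ones can be parsed.
#include <nlohmann/json.hpp>
#include <cstdio>
#include <fstream>
#include <optional>
#include <string>
#include <vector>

static std::optional<nlohmann::json> try_read_json(const std::string & path) {
    std::ifstream f(path);
    if (!f.is_open()) {
        return std::nullopt;  // file not present, try the next candidate
    }
    // allow_exceptions = false: a parse error yields a "discarded" value
    return nlohmann::json::parse(f, /*cb =*/ nullptr, /*allow_exceptions =*/ false);
}

int main() {
    // Candidate sources for added/special token definitions, in the order
    // the commit message lists them.
    const std::vector<std::string> candidates = {
        "added_tokens.json",
        "tokenizer_config.json",
        "tokenizer.json",
    };
    for (const auto & path : candidates) {
        if (auto j = try_read_json(path); j && !j->is_discarded()) {
            printf("read added-token metadata from %s\n", path.c_str());
            // ... merge token entries from *j ...
        }
    }
    return 0;
}
```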
update_oneapi_2024.1-b2861-69a0609
update CI with oneAPI 2024.1
add_oneapi_runtime-b2866-6cf75b2
fix path
add_oneapi_runtime-b2865-d2ca97b
fix path