Actions: ggml-org/llama.cpp

Showing runs from all workflows
102,781 workflow runs

vulkan: fix assertion when qy_needs_dequant
EditorConfig Checker #22206: Pull request #12068 opened by jeffbolznv
February 25, 2025 14:14 26m 52s jeffbolznv:qy_dequant_assert
vulkan: fix assertion when qy_needs_dequant
Server #11196: Pull request #12068 opened by jeffbolznv
February 25, 2025 14:14 33m 5s jeffbolznv:qy_dequant_assert
vulkan: fix assertion when qy_needs_dequant
Pull Request Labeler #8526: Pull request #12068 opened by jeffbolznv
February 25, 2025 14:14 21m 25s
tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars
Pull Request Labeler #8525: Pull request #12034 synchronize by ochafik
February 25, 2025 14:13 9m 54s
llama : refactor llama_kv_cache, llama_context and llm_build_context
Python Type-Check #1917: Pull request #11213 synchronize by ggerganov
February 25, 2025 14:11 1m 38s gg/llama-kv-cache
llama : refactor llama_kv_cache, llama_context and llm_build_context
Server #11194: Pull request #11213 synchronize by ggerganov
February 25, 2025 14:11 8m 33s gg/llama-kv-cache
llama : refactor llama_kv_cache, llama_context and llm_build_context
EditorConfig Checker #22204: Pull request #11213 synchronize by ggerganov
February 25, 2025 14:11 20s gg/llama-kv-cache
llama : refactor llama_kv_cache, llama_context and llm_build_context
flake8 Lint #17534: Pull request #11213 synchronize by ggerganov
February 25, 2025 14:11 21s gg/llama-kv-cache
llama : refactor llama_kv_cache, llama_context and llm_build_context
CI #19796: Pull request #11213 synchronize by ggerganov
February 25, 2025 14:11 34m 46s gg/llama-kv-cache
llama : refactor llama_kv_cache, llama_context and llm_build_context
Pull Request Labeler #8524: Pull request #11213 synchronize by ggerganov
February 25, 2025 14:11 18s
tool-call: add support for tool-calls using Model Context Protocol
EditorConfig Checker #22203: Pull request #11556 synchronize by bandoti
February 25, 2025 13:45 18s bandoti:llamacli-tools
tool-call: add support for tool-calls using Model Context Protocol
Pull Request Labeler #8523: Pull request #11556 synchronize by bandoti
February 25, 2025 13:45 12s
ggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot
Server #11192: Pull request #12064 synchronize by Vithulep
February 25, 2025 13:33 7m 47s Vithulep:Q2_k_SVE_Kernel
ggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot
EditorConfig Checker #22202: Pull request #12064 synchronize by Vithulep
February 25, 2025 13:33 22s Vithulep:Q2_k_SVE_Kernel
ggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot
Pull Request Labeler #8522: Pull request #12064 synchronize by Vithulep
February 25, 2025 13:33 12s
Cache based tokenization for the server input prompts
CI #19793: Pull request #12067 opened by vnicolici
February 25, 2025 13:08 Action required vnicolici:cache-based-tokenization