Skip to content

Feature matrix

Eve edited this page Mar 3, 2025 · 10 revisions
CPU (AVX/AVX2) CPU (ARM NEON) Metal CUDA ROCm SYCL Vulkan Kompute
K-quants ✅ 🐢⁵ 🚫
I-quants ✅ 🐢⁴ ✅ 🐢⁴ ✅ 🐢⁴ Partial¹ ✅ 🐢⁴ 🚫
Parallel Multi-GPU⁶ N/A N/A N/A Sequential only Sequential only
K cache quants 🚫
MoE architecture 🚫
  • ✅: feature works
  • 🚫: feature does not work
  • ❓: unknown, please contribute if you can test it yourself
  • 🐢: feature is slow
  • ¹: IQ3_S and IQ1_S, see #5886
  • ²: Only with -ngl 0
  • ³: Inference is 50% slower
  • ⁴: Slower than K-quants of comparable size
  • ⁵: Generally the CUDA or ROCM backends are faster, though there are cases where Vulkan has faster text generation. See #10879 for benchmarks.
  • ⁶: By default, all GPU backends can utilize multiple devices by running them sequentially. The CUDA code (which is also used for ROCm via HIP) also has code for running GPUs in parallel via --split-mode row. However, this is optimized relatively poorly and is only faster if the interconnect speed is fast vs. the speed of a single GPU.
  • ⁶: Only q8_0 and iq4_nl

Users Guide

Useful information for users that doesn't fit into Readme.

Technical Details

These are information useful for Maintainers and Developers which does not fit into code comments

Github Actions Main Branch Status

Click on a badge to jump to workflow. This is here as a useful general view of all the actions so that we may notice quicker if main branch automation is broken and where.

  • bench action status
  • build action status
  • close-issue action status
  • code-coverage action status
  • docker action status
  • editorconfig action status
  • gguf-publish action status
  • labeler action status
  • nix-ci-aarch64 action status
  • nix-ci action status
  • nix-flake-update action status
  • nix-publish-flake action status
  • python-check-requirements action status
  • python-lint action status
  • server action status
Clone this wiki locally