```mermaid
graph TD;
ggml --> whisper.cpp
ggml --> llama.cpp
llama.cpp --> coding
llama.cpp --> providers

subgraph coding[Coding]
    llama.vim
    llama.vscode
    llama.qtcreator
end

subgraph providers[Providers]
    LlamaBarn
end

ggml[<a href="https://github.com/ggml-org/ggml"                       style="text-decoration:none;">ggml</a>            <br><span style="font-size:10px;">Machine learning library</span>];
whisper.cpp[<a href="https://github.com/ggml-org/whisper.cpp"         style="text-decoration:none;">whisper.cpp</a>     <br><span style="font-size:10px;">Speech-to-text</span>];
llama.cpp[<a href="https://github.com/ggml-org/llama.cpp"             style="text-decoration:none;">llama.cpp</a>       <br><span style="font-size:10px;">LLM inference</span>];
llama.vim[<a href="https://github.com/ggml-org/llama.vim"             style="text-decoration:none;">llama.vim</a>       <br><span style="font-size:10px;">Vim/Neovim plugin</span>];
llama.vscode[<a href="https://github.com/ggml-org/llama.vscode"       style="text-decoration:none;">llama.vscode</a>    <br><span style="font-size:10px;">VSCode plugin</span>];
llama.qtcreator[<a href="https://github.com/ggml-org/llama.qtcreator" style="text-decoration:none;">llama.qtcreator</a> <br><span style="font-size:10px;">Qt Creator plugin</span>];
LlamaBarn[<a href="https://github.com/ggml-org/LlamaBarn"             style="text-decoration:none;">LlamaBarn</a>       <br><span style="font-size:10px;">macOS app</span>];
```

News

Use cases

| Chat                  | STT              | Mobile       | Infra               | Cloud        | Code         |
|-----------------------|------------------|--------------|---------------------|--------------|--------------|
| LM Studio             | MacWhisper       | PocketPal AI | RamaLama            | Hugging Face | llama.vim    |
| KoboldCpp             | VLC media player | LLMFarm      | paddler             |              | llama.vscode |
| LocalAI               | wchess           | ChatterUI    | llama-swap          |              | VSCode       |
| Jan                   | superwhisper     | SmolChat     | Docker Model Runner |              |              |
| text-generation-webui | hyprnote         |              | LlamaBarn           |              |              |

Partners

Pinned

  1. ggml - Tensor library for machine learning (C++, 13.8k stars, 1.4k forks)

  2. llama.cpp - LLM inference in C/C++ (C++, 92.6k stars, 14.4k forks)

  3. llama.vim - Vim plugin for LLM-assisted code/text completion (Vim Script, 1.8k stars, 88 forks)

  4. llama.vscode - VS Code extension for LLM-assisted code/text completion (TypeScript, 1.1k stars, 93 forks)

  5. ci - CI for ggml and related projects (Shell, 31 stars, 10 forks)

  6. llama.qtcreator - Local LLM-assisted text completion for Qt Creator; forked from cristianadam/llama.qtcreator (C++, 39 stars, 3 forks)

Repositories
