
NOTICE

This repository is deprecated. The current version is built in Node.js.

LLMHub CLI

LLMHub CLI is a command-line tool for managing and interacting with LLM (Large Language Model) servers. It lets you start, stop, and update LLM processes from a single interface, and exposes OpenAI-compatible API endpoints for the models it runs.

Features

  • Manage LLM servers
  • Start, stop, and update LLM processes
  • List available models and their statuses
  • OpenAI-compatible API endpoints for completions and models
  • Easily configurable via YAML files
  • Supports different engines and quantization formats

Installation

You can install LLMHub CLI directly from PyPI:

pip install llmhub-cli
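
To keep dependencies isolated, you may prefer to install it into a virtual environment first. This is standard Python tooling, not specific to LLMHub:

python -m venv .venv
source .venv/bin/activate
pip install llmhub-cli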

Usage

After installation, the llmhub command is available. Below are some example commands:

Start a Process

llmhub start MythoMax-L2-13B

Stop a Process

llmhub stop MythoMax-L2-13B

Update All Processes

llmhub update

List All Models

llmhub list-models

Check Status

llmhub status

Configuration

The configuration is handled via YAML files. You can place your config.yaml file in the ~/.llmhub/ directory or specify a custom path when initializing the ConfigManager.

Example Configuration

on_start:
  MythoMax-L2-13B:
    quant: Q5_K_M
    engine: llamacppserver
    context_size: [512, 1024, 2048]

port: 8080
enable_proxy: true
engine_port_min: 8081
engine_port_max: 10000

engines:
  llamacppserver:
    path: /path/to/llamacppserver
    arguments: --color -t 20 --parallel 2 --mlock --metrics --verbose
    model_flag: "-m"
    context_size_flag: "-c"
    port_flag: "--port"
    api_key_flag: "--api-key"
    file_types: [gguf]
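
Each entry under engines tells LLMHub how to compose the command line for a backend server: the flat arguments string is passed through, and the *_flag fields supply the model path, context size, port, and API key. As an illustrative sketch only (the model path, port, and key below are hypothetical), the configuration above would launch llamacppserver roughly like this for the 2048-token context:

/path/to/llamacppserver --color -t 20 --parallel 2 --mlock --metrics --verbose \
  -m /path/to/models/MythoMax-L2-13B.Q5_K_M.gguf -c 2048 --port 8081 --api-key <generated-key>

Since context_size lists three values, LLMHub can presumably run one instance per context size, each on a port drawn from the engine_port_min–engine_port_max range, behind the proxy listening on port 8080.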

API Endpoints

LLMHub CLI also provides OpenAI-compatible API endpoints:

  • /v1/completions: Handle completion requests.
  • /v1/chat/completions: Handle chat completion requests.
  • /v1/models: List available models.
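
Because these endpoints follow the OpenAI API shape, any OpenAI-compatible client should work against them. A minimal sketch with curl, assuming the proxy is listening on port 8080 as in the example configuration:

curl http://localhost:8080/v1/models

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "MythoMax-L2-13B", "messages": [{"role": "user", "content": "Hello!"}]}'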

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License.

Contact

For any questions or issues, please open an issue on the GitHub repository.
