LLM Perf

llmperf is a benchmarking tool for stress testing self-hosted LLMs, Azure OpenAI, and the OpenAI API. It manages concurrent requests efficiently, produces detailed performance analysis, and helps developers optimize inference latency and throughput. Its test parameters are customizable, making it suitable for a wide range of LLM workload evaluation scenarios.
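To make the measured quantities concrete, here is a minimal sketch (not llmperf's implementation) of the kind of per-request timing a tool like this automates, using the official openai Python client against any OpenAI-compatible endpoint. The model name is a placeholder, not something this repo prescribes.

import time
from openai import OpenAI

# The client reads OPENAI_API_KEY (and optionally a base URL) from the environment.
client = OpenAI()

start = time.perf_counter()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model or deployment you are testing
    messages=[{"role": "user", "content": "Say hello."}],
)
elapsed = time.perf_counter() - start

# Total request latency and output-token throughput for this single call;
# llmperf aggregates statistics like these across many concurrent requests.
tokens = resp.usage.completion_tokens
print(f"latency: {elapsed:.2f}s, throughput: {tokens / elapsed:.1f} tokens/s")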

1. Configure your environment

Copy .env.example to .env and fill in the environment variables.

cp .env.example .env
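The exact keys depend on .env.example in this repo. Purely as an illustration, a setup targeting Azure OpenAI or OpenAI might look like the following; every variable name below is an assumption, not the confirmed contents of .env.example.

# Hypothetical example; check .env.example for the actual variable names.
OPENAI_API_KEY=sk-...
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=...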

2. Launch your app

Run the start script to launch the app.

./start.sh
