
# LLM Perf

llmperf is a benchmarking tool for stress-testing self-hosted LLMs, Azure OpenAI, and the OpenAI API. It manages concurrent requests efficiently and produces detailed performance analyses, helping developers optimize inference latency and throughput. With customizable test parameters, it suits a wide range of LLM workload evaluation scenarios.
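To give a feel for the kind of concurrent stress test llmperf automates, here is a minimal hand-rolled sketch (not the tool's own code) that fires eight concurrent requests at the OpenAI chat completions endpoint and prints each request's time-to-first-byte and total latency. The model name and the `OPENAI_API_KEY` variable are placeholder assumptions, not values defined by this repo.

```sh
# Illustration only -- not llmperf's implementation.
# Fire 8 concurrent chat-completion requests and print per-request timings.
# Assumes OPENAI_API_KEY is exported; the model name is a placeholder.
seq 1 8 | xargs -P 8 -I{} curl -s -o /dev/null \
  -w "request {}: ttfb %{time_starttransfer}s total %{time_total}s\n" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"ping"}]}' \
  https://api.openai.com/v1/chat/completions
```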

## 1. Configure your env

Copy `.env.example` and fill in the environment variables:

```sh
cp .env.example .env
```
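The variable names below are illustrative guesses at what an OpenAI/Azure OpenAI setup typically needs; the authoritative list is whatever `.env.example` actually contains.

```sh
# Hypothetical .env contents -- check .env.example for the real variable names.
OPENAI_API_KEY=sk-...
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com
AZURE_OPENAI_API_KEY=...
```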

## 2. Launch your app

Run the start script to launch the app with the variables from your `.env`:

```sh
./start.sh
```