A unified language model server built upon vLLM and Infinity.
Install from source:

```bash
pip install -e .
```

Launch the server with a config file:

```bash
imitater -c config/example.yaml
```
The configuration file describes three kinds of models:
OpenAI models:

- `name`: OpenAI model name.
- `token`: OpenAI API token.
Chat models:

- `name`: Display name.
- `path`: Model name on the hub or a local model path.
- `device`: Device IDs.
- `port`: Port number.
- `maxlen`: Maximum model length (optional).
- `agent_type`: Agent type, one of `react` or `aligned` (optional).
- `template`: Chat template Jinja file (optional).
- `gen_config`: Generation config folder (optional).
Embedding models:

- `name`: Display name.
- `path`: Model name on the hub or a local model path.
- `device`: Device IDs (multiple GPUs are not supported).
- `port`: Port number.
- `batch_size`: Batch size (optional).
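A sketch of how these fields might be assembled into a YAML config. The top-level key names, model names, and values below are illustrative assumptions, not taken from the shipped `config/example.yaml`:

```yaml
# Hypothetical layout; section and field values are assumptions
# based on the field lists above -- adjust to your deployment.
openai_models:
  - name: gpt-3.5-turbo
    token: sk-xxxx                # your OpenAI API token

chat_models:
  - name: chatglm3                # display name
    path: THUDM/chatglm3-6b       # hub name or local path
    device: [0]
    port: 8010
    maxlen: 8192                  # optional
    template: templates/chatglm3.jinja  # optional

embed_models:
  - name: bge-small
    path: BAAI/bge-small-zh-v1.5
    device: [1]                   # a single GPU only
    port: 8020
    batch_size: 16                # optional
```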
> [!NOTE]
> A chat template is required for chat models.
>
> Set `export USE_MODELSCOPE_HUB=1` to download models from the ModelScope hub.
Test the server with:

```bash
python tests/test_openai.py -c config/example.yaml
```
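The server exposes an OpenAI-compatible chat completions endpoint, so any OpenAI-style client can talk to it. A minimal sketch of building a request body by hand; the base URL, port, and model name here are assumptions, not values shipped with the project:

```python
import json

# Hypothetical server address -- match it to the port in your config.
BASE_URL = "http://localhost:8010/v1"

def build_chat_request(model: str, user_message: str, temperature: float = 0.7) -> str:
    """Build the JSON body for an OpenAI-compatible /chat/completions call."""
    payload = {
        "model": model,                                        # display name from the config
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }
    return json.dumps(payload)

# POST this body to f"{BASE_URL}/chat/completions" with any HTTP client.
body = build_chat_request("chatglm3", "Hello!")
print(body)
```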
Roadmap:

- Response choices.
- Rerank model support.