Update README.md
Typo when copying Qwen api to Gemini api
TigeR0se authored Jun 26, 2024
1 parent 84027eb commit 36f6656
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions model_worker/README.md
@@ -16,7 +16,7 @@ pip install -U google-generativeai==0.7.0
"API_MODEL": "YOUR_MODEL"
}
```
NOTE: `API_MODEL` is the model name of QWen LLM API.
NOTE: `API_MODEL` is the model name of Gemini LLM API.
You can find the model name in the [Gemini LLM model list](https://ai.google.dev/gemini-api).
If you encounter `429 Resource has been exhausted (e.g. check quota).`, it may be because you have hit the rate limit of your Gemini API.
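If you hit the 429 quota error repeatedly, a common workaround (not part of this repository) is to wrap the API call in exponential backoff. A minimal sketch, with `call_with_backoff` as a hypothetical helper name:

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() with exponential backoff when it raises a 429 error."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as exc:
            # Give up on non-429 errors or once retries are exhausted.
            if "429" not in str(exc) or attempt == max_retries - 1:
                raise
            # Wait 1x, 2x, 4x, ... base_delay, plus jitter scaled by base_delay.
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

You would wrap your Gemini call, e.g. `call_with_backoff(lambda: model.generate_content(prompt))`; raising your quota in the Google AI console is the more robust fix.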

@@ -80,4 +80,4 @@ NOTE: `API_BASE` is the URL started in the Ollama LLM server and `API_MODEL` is
}
```

NOTE: You should create a new Python script `<custom_model>.py` in the `ufo/llm` folder, following the format of `<placeholder>.py`. It must inherit `BaseService` as the parent class and implement the `__init__` and `chat_completion` methods. You also need to add a dynamic import of your file in the `get_service` method of `BaseService`.
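The custom service described above can be sketched as follows. The real `BaseService` lives in the UFO repository and its exact signatures may differ, so a minimal stand-in is defined here to keep the example self-contained; `CustomModelService` and the `API_MODEL` config key are illustrative:

```python
class BaseService:
    """Stand-in for ufo.llm's BaseService (interface assumed, may differ)."""

    def __init__(self, config, agent_type):
        self.config = config
        self.agent_type = agent_type

    def chat_completion(self, messages, **kwargs):
        raise NotImplementedError


class CustomModelService(BaseService):
    """Hypothetical ufo/llm/<custom_model>.py backend."""

    def __init__(self, config, agent_type):
        super().__init__(config, agent_type)
        # Read the model name from the config dict (key name assumed).
        self.model = config.get("API_MODEL", "custom-model")

    def chat_completion(self, messages, **kwargs):
        # A real implementation would forward `messages` to your LLM API
        # and return its reply; here we just echo the last user message.
        return f"[{self.model}] echo: {messages[-1]['content']}"
```

The `get_service` method would then dynamically import this module and return `CustomModelService` when your model name is configured.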
