
Commit

Merge pull request #9 from tomaarsen/docs/typos
docs: typos & slight documentation improvements
philschmid authored Aug 3, 2023
2 parents 0a149b8 + 815eee6 commit a1c453e
Showing 4 changed files with 9 additions and 8 deletions.
6 changes: 3 additions & 3 deletions docs/clients/huggingface.md
@@ -1,14 +1,14 @@
# Hugging Face

-Easyllm provides a client for interfacing with HuggingFace models. The client is compatible with the [HuggingFace Inference API](https://huggingface.co/docs/api-inference/index), [Hugging Face Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) or any Web Service running [Text Generation Inference](https://github.com/huggingface/text-generation-inference) or compatible API endpoints.
+EasyLLM provides a client for interfacing with HuggingFace models. The client is compatible with the [HuggingFace Inference API](https://huggingface.co/docs/api-inference/index), [Hugging Face Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) or any Web Service running [Text Generation Inference](https://github.com/huggingface/text-generation-inference) or compatible API endpoints.

- `huggingface.ChatCompletion` - a client for interfacing with HuggingFace models that are compatible with the OpenAI ChatCompletion API.
- `huggingface.Completion` - a client for interfacing with HuggingFace models that are compatible with the OpenAI Completion API.
- `huggingface.Embedding` - a client for interfacing with HuggingFace models that are compatible with the OpenAI Embedding API.

## `huggingface.ChatCompletion`

-The `huggingface.ChatCompletion` client is used to interface with HuggingFace models running on Text Generation infernece that are compatible with the OpenAI ChatCompletion API. Checkout the [Examples](../examples/chat-completion-api) for more details and [How to stream completions](../examples/stream-chat-completion-api) for an example how to stream requests.
+The `huggingface.ChatCompletion` client is used to interface with HuggingFace models running on Text Generation inference that are compatible with the OpenAI ChatCompletion API. Checkout the [Examples](../examples/chat-completion-api) for more details and [How to stream completions](../examples/stream-chat-completion-api) for an example how to stream requests.


```python
@@ -48,7 +48,7 @@ Supported parameters are:

## `huggingface.Completion`

-The `huggingface.Completion` client is used to interface with HuggingFace models running on Text Generation infernece that are compatible with the OpenAI Completion API. Checkout the [Examples](../examples/text-completion-api) for more details and [How to stream completions](../examples/stream-text-completion-api) for an example how to stream requests.
+The `huggingface.Completion` client is used to interface with HuggingFace models running on Text Generation inference that are compatible with the OpenAI Completion API. Checkout the [Examples](../examples/text-completion-api) for more details and [How to stream completions](../examples/stream-text-completion-api) for an example how to stream requests.


```python
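The ChatCompletion client documented in the file above follows the OpenAI ChatCompletion call shape (a `messages` list plus sampling parameters). As a hedged, locally runnable sketch, the helper below assembles such a request payload; `build_chat_request` is purely illustrative and not part of EasyLLM, and an actual call would go through `huggingface.ChatCompletion.create(...)` with an installed `easyllm` package and a configured Hugging Face token.

```python
# Illustrative only: build the request body for an OpenAI-compatible
# ChatCompletion call, using the same parameters shown in the docs snippet.
def build_chat_request(system_prompt, user_prompt,
                       temperature=0.9, top_p=0.6, max_tokens=256):
    """Assemble an OpenAI-style chat request dict (hypothetical helper)."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    }

params = build_chat_request(
    "You are a helpful assistant speaking like a pirate. argh!",
    "What is the sun?",
)
print(params["messages"][0]["role"])  # system
```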
7 changes: 4 additions & 3 deletions docs/index.md
@@ -32,9 +32,9 @@ response = huggingface.ChatCompletion.create(
{"role": "system", "content": "\nYou are a helpful assistant speaking like a pirate. argh!"},
{"role": "user", "content": "What is the sun?"},
],
-temperature=0.9,
-top_p=0.6,
-max_tokens=256,
+    temperature=0.9,
+    top_p=0.6,
+    max_tokens=256,
)

print(response)
@@ -66,6 +66,7 @@ the result will look like
```

Check out other examples:

* [Detailed ChatCompletion Example](examples/chat-completion-api)
* [Example how to stream chat requests](examples/stream-chat-completion)
* [Example how to stream text requests](examples/stream-text-completion)
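The example links above include streaming chat requests. Since the client is OpenAI-ChatCompletion-compatible, streamed output would arrive as chunks carrying content deltas; the exact chunk shape below is an assumption based on that API, and `fake_chunks` is stand-in data rather than real client output. A real stream would come from the `huggingface.ChatCompletion` client rather than a list.

```python
# Illustrative only: accumulate streamed OpenAI-style chat chunks into the
# full completion text. The chunk/delta layout is assumed, not EasyLLM-verified.
def collect_stream(chunks):
    """Concatenate the content deltas of streamed chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk.get("choices", [{}])[0].get("delta", {}).get("content", "")
        if delta:
            parts.append(delta)
    return "".join(parts)

# Stand-in data simulating three streamed chunks.
fake_chunks = [
    {"choices": [{"delta": {"content": "Argh, "}}]},
    {"choices": [{"delta": {"content": "the sun be "}}]},
    {"choices": [{"delta": {"content": "a great ball o' fire!"}}]},
]
print(collect_stream(fake_chunks))  # Argh, the sun be a great ball o' fire!
```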
2 changes: 1 addition & 1 deletion notebooks/chat-completion-api.ipynb
@@ -7,7 +7,7 @@
"source": [
"# How to use Chat Completion clients\n",
"\n",
-"Easyllm can be used as an abstract layer to replace `gpt-3.5-turbo` and `gpt-4` with open source models.\n",
+"EasyLLM can be used as an abstract layer to replace `gpt-3.5-turbo` and `gpt-4` with open source models.\n",
"\n",
"You can change your own applications from the OpenAI API, by simply changing the client. \n",
"\n",
2 changes: 1 addition & 1 deletion notebooks/text-completion-api.ipynb
@@ -7,7 +7,7 @@
"source": [
"# # How to use Text (Instruction) Completion clients\n",
"\n",
-"Easyllm can be used as an abstract layer to replace `text-davinci-003` with open source models.\n",
+"EasyLLM can be used as an abstract layer to replace `text-davinci-003` with open source models.\n",
"\n",
"You can change your own applications from the OpenAI API, by simply changing the client. \n",
"\n",
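The notebook above describes replacing `text-davinci-003` via the text-completion client, which follows the OpenAI Completion call shape (a plain `prompt` plus sampling parameters). As a hedged sketch of that request shape, the helper below is illustrative and not part of EasyLLM; a real call would use `huggingface.Completion.create(...)` with the package installed.

```python
# Illustrative only: build the request body for an OpenAI-compatible
# text-completion call, as opposed to the messages-based chat shape.
def build_completion_request(prompt, temperature=0.9, max_tokens=256):
    """Assemble an OpenAI-style text-completion request dict (hypothetical helper)."""
    return {
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

req = build_completion_request("Explain what the sun is, in the style of a pirate.")
print(sorted(req))  # ['max_tokens', 'prompt', 'temperature']
```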
