Feature: Support HuggingFace Text Generation including meta-llama2 model #266

Merged · 4 commits · Aug 3, 2023
Updates to the `huggingface_table_question_answering` documentation:

Please check [Table Question Answering](https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task) for further information.

| Name           | Required | Default                         | Description |
|----------------|----------|---------------------------------|-------------|
| query          | Y        |                                 | The query in plain text that you want to ask the table. |
| endpoint       | N        |                                 | The inference endpoint URL. When `endpoint` is set, it overrides the default value of `model`. |
| model          | N        | google/tapas-base-finetuned-wtq | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=table-question-answering |
| use_cache      | N        | true                            | There is a cache layer on the Inference API to speed up requests we have already seen. |
| wait_for_model | N        | false                           | If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. |


## Examples
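**Sample - Using the `endpoint` argument**:

A minimal sketch of the new `endpoint` argument with this filter. The CSV file, column names, and question text below are illustrative placeholders, and the endpoint URL stands in for a deployed [Inference Endpoint](https://huggingface.co/inference-endpoints):

```sql
{% req products %}
SELECT product_name, category, price FROM read_csv_auto('products.csv')
{% endreq %}

-- When `endpoint` is given, it overrides the default `model` value.
SELECT {{ products.value() | huggingface_table_question_answering(query="How many products are in the Electronics category?", endpoint='xxx.yyy.zzz.huggingface.cloud') }} as result
```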
# Text Generation

[Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) is one of the Natural Language Processing tasks supported by Hugging Face.

## Using the `huggingface_text_generation` filter

The `huggingface_text_generation` filter returns its result as a string.

:::info
The default **Text Generation** model is **gpt2**. If you would like to use the [Meta Llama 2](https://huggingface.co/meta-llama) models, you have two options:

1. Subscribe to a [Pro Account](https://huggingface.co/pricing#pro).
   - Set the Meta Llama 2 model using the `model` keyword argument in `huggingface_text_generation`, e.g. `meta-llama/Llama-2-13b-chat-hf`.

2. Use an [Inference Endpoint](https://huggingface.co/inference-endpoints).
   - Select one of the [Meta Llama 2](https://huggingface.co/meta-llama) models and deploy it to an [Inference Endpoint](https://huggingface.co/inference-endpoints).
   - Set the endpoint URL using the `endpoint` keyword argument in `huggingface_text_generation`.
:::

**Sample 1 - Subscribe to the [Pro Account](https://huggingface.co/pricing#pro)**:

```sql
{% set data = [
  {
    "rank": 1,
    "institution": "Massachusetts Institute of Technology (MIT)",
    "location code": "US",
    "location": "United States"
  },
  {
    "rank": 2,
    "institution": "University of Cambridge",
    "location code": "UK",
    "location": "United Kingdom"
  },
  {
    "rank": 3,
    "institution": "Stanford University",
    "location code": "US",
    "location": "United States"
  }
  -- other universities.....
] %}

SELECT {{ data | huggingface_text_generation(query="Which university is the top-ranked university?", model="meta-llama/Llama-2-13b-chat-hf") }} as result
```

**Sample 1 - Response**:

```json
[
{
"result": "Answer: Based on the provided list, the top-ranked university is Massachusetts Institute of Technology (MIT) with a rank of 1."
}
]
```

**Sample 2 - Using [Inference Endpoint](https://huggingface.co/inference-endpoints)**:


```sql
{% req universities %}
SELECT rank, institution, "location code", "location" FROM read_csv_auto('2023-QS-World-University-Rankings.csv')
{% endreq %}

SELECT {{ universities.value() | huggingface_text_generation(query="Which university located in the UK is ranked at the top of the list?", endpoint='xxx.yyy.zzz.huggingface.cloud') }} as result
```

**Sample 2 - Response**:

```json
[
{
"result": "Answer: Based on the list provided, the top-ranked university in the UK is the University of Cambridge, which is ranked at number 2."
}
]
```

### Arguments

Some default values were changed, so they may differ from the [Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) defaults.

| Name                 | Required | Default | Description |
|----------------------|----------|---------|-------------|
| query                | Y        |         | The query in plain text that you want to ask about the input data. |
| endpoint             | N        |         | The inference endpoint URL. When `endpoint` is set, it overrides the default value of `model`. |
| model                | N        | gpt2    | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=text-generation |
| top_k                | N        |         | Integer value to define the top tokens considered within the sample operation to create new text. |
| top_p                | N        |         | Float value to define the tokens that are within the sample operation of text generation. Tokens are added from most probable to least probable until the sum of their probabilities is greater than top_p. |
| temperature          | N        | 0.1     | Range: (0.0 - 100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, and 100.0 gets closer to uniform probability. |
| repetition_penalty   | N        |         | Range: (0.0 - 100.0). The more a token is used within generation, the more it is penalized so that it is less likely to be picked in successive generation passes. |
| max_new_tokens       | N        | 250     | The number of new tokens to be generated. This does not include the input length; it is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and length of generated text. |
| max_time             | N        |         | Range: (0-120.0). The maximum amount of time in seconds that the query should take. Network overhead can add delay, so this is a soft limit. Use it in combination with max_new_tokens for best results. |
| return_full_text     | N        | false   | If set to false, the returned results will not contain the original query, making it easier for prompting. |
| num_return_sequences | N        | 1       | The number of propositions you want returned. |
| do_sample            | N        |         | Whether or not to use sampling; greedy decoding is used otherwise. |
| use_cache            | N        | true    | There is a cache layer on the Inference API to speed up requests we have already seen. |
| wait_for_model       | N        | false   | If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. |
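
As a rough sketch of how several of these arguments combine in one call. The data set, query text, and parameter values below are illustrative only, not recommendations:

```sql
{% set reviews = [
  {"id": 1, "comment": "Great battery life and display."},
  {"id": 2, "comment": "The keyboard feels cheap."}
] %}

-- temperature, top_p, and max_new_tokens here are example values, not tuned defaults.
SELECT {{ reviews | huggingface_text_generation(
    query="Summarize these reviews in one sentence.",
    model="gpt2",
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=100,
    return_full_text=false
) }} as result
```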
The new page is registered in `packages/doc/sidebars.js` (4 additions, 0 deletions):

```js
{
  type: 'doc',
  id: 'extensions/huggingface/huggingface-table-question-answering',
},
{
  type: 'doc',
  id: 'extensions/huggingface/huggingface-text-generation',
},
```