diff --git a/packages/doc/docs/extensions/huggingface/huggingface-table-question-answering.mdx b/packages/doc/docs/extensions/huggingface/huggingface-table-question-answering.mdx
index 5768e1b1..c17f02a0 100644
--- a/packages/doc/docs/extensions/huggingface/huggingface-table-question-answering.mdx
+++ b/packages/doc/docs/extensions/huggingface/huggingface-table-question-answering.mdx
@@ -67,12 +67,13 @@ SELECT {{ products.value() | huggingface_table_question_answering(query=question
 
 Please check [Table Question Answering](https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task) for further information.
 
-| Name | Required | Default | Description |
-| -------------- | -------- | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| query | Y | | The query in plain text that you want to ask the table. |
-| model | N | google/tapas-base-finetuned-wtq | The model id of a pretrained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=table-question-answering |
-| use_cache | N | true | There is a cache layer on the inference API to speedup requests we have already seen |
-| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done |
+| Name | Required | Default | Description |
+|----------------|----------|---------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| query | Y | | The query in plain text that you want to ask the table. |
+| endpoint | N | | The inference endpoint URL. When `endpoint` is provided, it replaces the default value of `model`. |
+| model | N | google/tapas-base-finetuned-wtq | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=table-question-answering |
+| use_cache | N | true | There is a cache layer on the inference API to speedup requests we have already seen |
+| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done |
 
 ## Examples
 
diff --git a/packages/doc/docs/extensions/huggingface/huggingface-text-generation.mdx b/packages/doc/docs/extensions/huggingface/huggingface-text-generation.mdx
new file mode 100644
index 00000000..2954ce03
--- /dev/null
+++ b/packages/doc/docs/extensions/huggingface/huggingface-text-generation.mdx
@@ -0,0 +1,98 @@
+# Text Generation
+
+[Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) is one of the Natural Language Processing tasks supported by Hugging Face.
+
+## Using the `huggingface_text_generation` filter
+
+The result of `huggingface_text_generation` will be a string.
+
+:::info
+  The default model for **Text Generation** is **gpt2**. If you would like to use the [Meta LLama2](https://huggingface.co/meta-llama) models, there are two ways to do so:
+
+  1. Subscribe to the [Pro Account](https://huggingface.co/pricing#pro).
+     - Set the Meta LLama2 model using the `model` keyword argument in `huggingface_text_generation`, e.g. `meta-llama/Llama-2-13b-chat-hf`.
+
+  2. Use an [Inference Endpoint](https://huggingface.co/inference-endpoints).
+     - Select one of the [Meta LLama2](https://huggingface.co/meta-llama) Models and deploy it to the [Inference Endpoint](https://huggingface.co/inference-endpoints).
+     - Set the endpoint URL using the `endpoint` keyword argument in `huggingface_text_generation`.
+:::
+
+**Sample 1 - Subscribe to the [Pro Account](https://huggingface.co/pricing#pro)**:
+
+```sql
+{% set data = [
+  {
+    "rank": 1,
+    "institution": "Massachusetts Institute of Technology (MIT)",
+    "location code":"US",
+    "location":"United States"
+  },
+  {
+    "rank": 2,
+    "institution": "University of Cambridge",
+    "location code":"UK",
+    "location":"United Kingdom"
+  },
+  {
+    "rank": 3,
+    "institution": "Stanford University",
+    "location code":"US",
+    "location":"United States"
+  }
+  -- other universities.....
+] %}
+
+SELECT {{ data | huggingface_text_generation(query="Which university is the top-ranked university?", model="meta-llama/Llama-2-13b-chat-hf") }} as result
+```
+
+**Sample 1 - Response**:
+
+```json
+[
+  {
+    "result": "Answer: Based on the provided list, the top-ranked university is Massachusetts Institute of Technology (MIT) with a rank of 1."
+  }
+]
+```
+
+**Sample 2 - Using [Inference Endpoint](https://huggingface.co/inference-endpoints)**:
+
+
+```sql
+{% req universities %}
+  SELECT rank,institution,"location code", "location" FROM read_csv_auto('2023-QS-World-University-Rankings.csv')
+{% endreq %}
+
+SELECT {{ universities.value() | huggingface_text_generation(query="Which university located in the UK is ranked at the top of the list?", endpoint='xxx.yyy.zzz.huggingface.cloud') }} as result
+```
+
+**Sample 2 - Response**:
+
+```json
+[
+  {
+    "result": "Answer: Based on the list provided, the top-ranked university in the UK is the University of Cambridge, which is ranked at number 2."
+  }
+]
+```
+
+### Arguments
+
+Some default values were changed, so they may differ from the [Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) defaults.
+
+| Name | Required | Default | Description |
+|----------------------|----------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| query | Y | | The query in plain text that you want to ask about the data. |
+| endpoint | N | | The inference endpoint URL. When `endpoint` is provided, it replaces the default value of `model`. |
+| model | N | gpt2 | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=text-generation |
+| top_k | N | | Integer value to define the top tokens considered within the sample operation to create new text. |
+| top_p | N | | Float value to define the tokens that are within the sample operation of text generation. Add tokens in the sample for more probable to least probable until the sum of the probabilities is greater than top_p. |
+| temperature | N | 0.1 | Range: (0.0 - 100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, 100.0 is getting closer to uniform probability. |
+| repetition_penalty | N | | Range: (0.0 - 100.0). The more a token is used within generation the more it is penalized to not be picked in successive generation passes.
| +| max_new_tokens | N | 250 | The amount of new tokens to be generated, this does not include the input length it is a estimate of the size of generated text you want. Each new tokens slows down the request, so look for balance between response times and length of text generated. | +| max_time | N | | Range (0-120.0). The amount of time in seconds that the query should take maximum. Network can cause some overhead so it will be a soft limit. Use that in combination with max_new_tokens for best results. | +| return_full_text | N | false | If set to False, the return results will not contain the original query making it easier for prompting. | +| num_return_sequences | N | 1 | The number of proposition you want to be returned. | +| do_sample | N | | Whether or not to use sampling, use greedy decoding otherwise. | +| use_cache | N | true | There is a cache layer on the inference API to speedup requests we have already seen | +| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done | diff --git a/packages/doc/sidebars.js b/packages/doc/sidebars.js index eef38056..50a71543 100644 --- a/packages/doc/sidebars.js +++ b/packages/doc/sidebars.js @@ -176,6 +176,10 @@ const sidebars = { type: 'doc', id: 'extensions/huggingface/huggingface-table-question-answering', }, + { + type: 'doc', + id: 'extensions/huggingface/huggingface-text-generation', + }, ] }, // { diff --git a/packages/extension-huggingface/README.md b/packages/extension-huggingface/README.md index a485b06f..67667b0c 100644 --- a/packages/extension-huggingface/README.md +++ b/packages/extension-huggingface/README.md @@ -27,6 +27,7 @@ VulcanSQL support using Hugging Face tasks by [VulcanSQL Filters](https://vulcan **⚠️ Caution**: Hugging Face has a [rate limit](https://huggingface.co/docs/api-inference/faq#rate-limits), so it does not allow sending large datasets to the Hugging Face library for processing. Otherwise, using a different Hugging Face model may yield different results or even result in failure. + ### Table Question Answering The [Table Question Answering](https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task) is one of the Natural Language Processing tasks supported by Hugging Face. @@ -50,7 +51,7 @@ The result will be converted to a JSON string from `huggingface_table_question_a "description": "Query Your Data Warehouse Like Exploring One Big View." }, { - "repository": "hell-word", + "repository": "hello-world", "topic": [], "description": "Sample repository for testing" } @@ -79,9 +80,6 @@ SELECT {{ data | huggingface_table_question_answering(query="How many repositori {% set question = "List display name where gender are female?" %} --- The "model" keyword argument is optional. If not provided, the default value is 'google/tapas-base-finetuned-wtq'. --- The "wait_for_model" keyword argument is optional. If not provided, the default value is false. --- The "use_cache" keyword argument is optional. If not provided, the default value is true. 
 SELECT {{ products.value() | huggingface_table_question_answering(query=question, model="microsoft/tapex-base-finetuned-wtq", wait_for_model=true, use_cache=true) }}
 ```
@@ -94,3 +92,111 @@ SELECT {{ products.value() | huggingface_table_question_answering(query=question
   }
 ]
 ```
+
+### Table Question Answering Arguments
+
+Please check [Table Question Answering](https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task) for further information.
+
+| Name | Required | Default | Description |
+|----------------|----------|---------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| query | Y | | The query in plain text that you want to ask the table. |
+| endpoint | N | | The inference endpoint URL. When `endpoint` is provided, it replaces the default value of `model`. |
+| model | N | google/tapas-base-finetuned-wtq | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=table-question-answering |
+| use_cache | N | true | There is a cache layer on the inference API to speedup requests we have already seen |
+| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done |
+
+
+### Text Generation
+
+[Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) is one of the Natural Language Processing tasks supported by Hugging Face.
+
+Use the `huggingface_text_generation` filter. The result of `huggingface_text_generation` will be a string.
+
+**📢 Notice**: The default model for **Text Generation** is **gpt2**. If you would like to use the [Meta LLama2](https://huggingface.co/meta-llama) models, there are two ways to do so:
+
+1. Subscribe to the [Pro Account](https://huggingface.co/pricing#pro).
+   - Set the Meta LLama2 model using the `model` keyword argument in `huggingface_text_generation`, e.g. `meta-llama/Llama-2-13b-chat-hf`.
+
+2. Use an [Inference Endpoint](https://huggingface.co/inference-endpoints).
+   - Select one of the [Meta LLama2](https://huggingface.co/meta-llama) Models and deploy it to the [Inference Endpoint](https://huggingface.co/inference-endpoints).
+   - Set the endpoint URL using the `endpoint` keyword argument in `huggingface_text_generation`.
+
+**Sample 1 - Subscribe to the [Pro Account](https://huggingface.co/pricing#pro)**:
+
+```sql
+{% set data = [
+  {
+    "rank": 1,
+    "institution": "Massachusetts Institute of Technology (MIT)",
+    "location code":"US",
+    "location":"United States"
+  },
+  {
+    "rank": 2,
+    "institution": "University of Cambridge",
+    "location code":"UK",
+    "location":"United Kingdom"
+  },
+  {
+    "rank": 3,
+    "institution": "Stanford University",
+    "location code":"US",
+    "location":"United States"
+  }
+  -- other universities.....
+] %}
+
+SELECT {{ data | huggingface_text_generation(query="Which university is the top-ranked university?", model="meta-llama/Llama-2-13b-chat-hf") }} as result
+```
+
+**Sample 1 - Response**:
+
+```json
+[
+  {
+    "result": "Answer: Based on the provided list, the top-ranked university is Massachusetts Institute of Technology (MIT) with a rank of 1."
+  }
+]
+```
+
+**Sample 2 - Using [Inference Endpoint](https://huggingface.co/inference-endpoints)**:
+
+
+```sql
+{% req universities %}
+  SELECT rank,institution,"location code", "location" FROM read_csv_auto('2023-QS-World-University-Rankings.csv')
+{% endreq %}
+
+SELECT {{ universities.value() | huggingface_text_generation(query="Which university located in the UK is ranked at the top of the list?", endpoint='xxx.yyy.zzz.huggingface.cloud') }} as result
+```
+
+**Sample 2 - Response**:
+
+```json
+[
+  {
+    "result": "Answer: Based on the list provided, the top-ranked university in the UK is the University of Cambridge, which is ranked at number 2."
+  }
+]
+```
+
+### Text Generation Arguments
+
+Some default values were changed, so they may differ from the [Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) defaults.
+
+| Name | Required | Default | Description |
+|----------------------|----------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| query | Y | | The query in plain text that you want to ask about the data. |
+| endpoint | N | | The inference endpoint URL. When `endpoint` is provided, it replaces the default value of `model`. |
+| model | N | gpt2 | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=text-generation |
+| top_k | N | | Integer value to define the top tokens considered within the sample operation to create new text. |
+| top_p | N | | Float value to define the tokens that are within the sample operation of text generation. Add tokens in the sample for more probable to least probable until the sum of the probabilities is greater than top_p. |
+| temperature | N | 0.1 | Range: (0.0 - 100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, 100.0 is getting closer to uniform probability. |
+| repetition_penalty | N | | Range: (0.0 - 100.0). The more a token is used within generation the more it is penalized to not be picked in successive generation passes. |
+| max_new_tokens | N | 250 | The amount of new tokens to be generated, this does not include the input length it is a estimate of the size of generated text you want. Each new tokens slows down the request, so look for balance between response times and length of text generated. |
+| max_time | N | | Range (0-120.0). The amount of time in seconds that the query should take maximum. Network can cause some overhead so it will be a soft limit. Use that in combination with max_new_tokens for best results. |
+| return_full_text | N | false | If set to False, the return results will not contain the original query making it easier for prompting. |
+| num_return_sequences | N | 1 | The number of proposition you want to be returned. |
+| do_sample | N | | Whether or not to use sampling, use greedy decoding otherwise. |
+| use_cache | N | true | There is a cache layer on the inference API to speedup requests we have already seen |
+| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving 503.
It limits the number of requests required to get your inference done | diff --git a/packages/extension-huggingface/src/index.ts b/packages/extension-huggingface/src/index.ts index ef73ef56..3940ff15 100644 --- a/packages/extension-huggingface/src/index.ts +++ b/packages/extension-huggingface/src/index.ts @@ -4,7 +4,14 @@ import { Runner as HuggingFaceTableQuestionAnsweringFilterRunner, } from './lib/filters/tableQuestionAnswering'; +import { + Builder as HuggingFaceTextGenerationFilterBuilder, + Runner as HuggingFaceTextGenerationFilterRunner, +} from './lib/filters/textGeneration'; + export default [ HuggingFaceTableQuestionAnsweringFilterBuilder, HuggingFaceTableQuestionAnsweringFilterRunner, + HuggingFaceTextGenerationFilterBuilder, + HuggingFaceTextGenerationFilterRunner, ]; diff --git a/packages/extension-huggingface/src/lib/filters/tableQuestionAnswering.ts b/packages/extension-huggingface/src/lib/filters/tableQuestionAnswering.ts index a5559467..14e80baa 100644 --- a/packages/extension-huggingface/src/lib/filters/tableQuestionAnswering.ts +++ b/packages/extension-huggingface/src/lib/filters/tableQuestionAnswering.ts @@ -3,82 +3,58 @@ import { InternalError, createFilterExtension, } from '@vulcan-sql/core'; -import axios, { AxiosError } from 'axios'; -import { convertToHuggingFaceTable } from '../utils'; -import { isArray } from 'class-validator'; -import { has } from 'lodash'; -type HuggingFaceOptions = { - accessToken: string; -}; +import { convertToHuggingFaceTable, postRequest } from '../utils'; +import { has, isArray, isEmpty, omit } from 'lodash'; +import { + InferenceNLPOptions, + HuggingFaceOptions, + apiInferenceEndpoint, +} from '../model'; -// More information described the options, see: https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task +// More information described the options. See: https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task type TableQuestionAnsweringOptions = { inputs: { query: string; table: Record; }; - options: { - use_cache: boolean; - wait_for_model: boolean; - }; -}; - -const request = async (url: string, data: any, token: string) => { - try { - const result = await axios.post(url, data, { - headers: { Authorization: `Bearer ${token}` }, - }); - return result.data; - } catch (error) { - const axiosError = error as AxiosError; - // https://axios-http.com/docs/handling_errors - // if response has error, throw the response error, or throw the request error - if (axiosError.response) - throw new Error(JSON.stringify(axiosError.response?.data)); - throw new Error(axiosError.message); - } + options?: InferenceNLPOptions; }; -// default recommended model, see https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task +/** + * Get table question answering url. Used recommend model be default value. 
+ * See: https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task + * */ const getUrl = (model = 'google/tapas-base-finetuned-wtq') => - `https://api-inference.huggingface.co/models/${model}`; + `${apiInferenceEndpoint}/${model}`; export const TableQuestionAnsweringFilter: FunctionalFilter = async ({ args, value, options, }) => { - if (!options || !(options as HuggingFaceOptions).accessToken) - throw new InternalError('please given access token'); + const token = (options as HuggingFaceOptions)?.accessToken; + if (!token) throw new InternalError('please given access token'); if (!isArray(value)) throw new InternalError('Input value must be an array of object'); - if (!(typeof args === 'object') || !has(args, 'query')) throw new InternalError('Must provide "query" keyword argument'); if (!args['query']) throw new InternalError('The "query" argument must have value'); - const token = (options as HuggingFaceOptions).accessToken; // Convert the data result format to table value format const table = convertToHuggingFaceTable(value); - const context = { - inputs: { - query: args['query'], - table, - }, - options: { - use_cache: args['use_cache'] ? args['use_cache'] : true, - wait_for_model: args['wait_for_model'] ? args['wait_for_model'] : false, - }, + // omit hidden value '__keywords' from args, it generated from nunjucks and not related to HuggingFace. + const { query, model, endpoint, ...inferenceOptions } = omit(args, '__keywords'); + const payload = { + inputs: { query, table }, } as TableQuestionAnsweringOptions; - - // Get table question answering url - const url = args['model'] ? getUrl(args['model']) : getUrl(); + if (!isEmpty(inferenceOptions)) payload.options = inferenceOptions; try { - const results = await request(url, context, token); + const url = endpoint ? endpoint : getUrl(model); + const results = await postRequest(url, payload, token); // convert to JSON string to make user get the whole result after parsing it in SQL return JSON.stringify(results); } catch (error) { diff --git a/packages/extension-huggingface/src/lib/filters/textGeneration.ts b/packages/extension-huggingface/src/lib/filters/textGeneration.ts new file mode 100644 index 00000000..3dd72140 --- /dev/null +++ b/packages/extension-huggingface/src/lib/filters/textGeneration.ts @@ -0,0 +1,96 @@ +import { + FunctionalFilter, + InternalError, + createFilterExtension, +} from '@vulcan-sql/core'; +import { has, isArray, isEmpty, omit, pick } from 'lodash'; +import { + HuggingFaceOptions, + InferenceNLPOptions, + apiInferenceEndpoint, +} from '../model'; +import { postRequest } from '../utils'; + +// More information described the options. See: https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task +type TextGenerationOptions = { + inputs: string; + parameters?: { + // Integer to define the top tokens considered within the sample operation to create new text. + top_k?: number; + // Float to define the tokens that are within the sample operation of text generation. Add tokens in the sample for more probable to least probable until the sum of the probabilities is greater than top_p. + top_p?: number; + // Default: 0.1. Range: (0.0 - 100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, 100.0 is getting closer to uniform probability. + temperature?: number; + // Range: (0.0 - 100.0). The more a token is used within generation the more it is penalized to not be picked in successive generation passes. 
+    repetition_penalty?: number;
+    // Default: 250. The amount of new tokens to be generated, this does not include the input length it is a estimate of the size of generated text you want. Each new tokens slows down the request, so look for balance between response times and length of text generated.
+    max_new_tokens?: number;
+    // Range (0-120.0). The amount of time in seconds that the query should take maximum. Network can cause some overhead so it will be a soft limit. Use that in combination with max_new_tokens for best results.
+    max_time?: number;
+    // Default: false. If set to False, the return results will not contain the original query making it easier for prompting.
+    return_full_text?: boolean;
+    // Default: 1. The number of proposition you want to be returned.
+    num_return_sequences?: number;
+    // Whether or not to use sampling, use greedy decoding otherwise.
+    do_sample?: boolean;
+  };
+  options?: InferenceNLPOptions;
+};
+
+/**
+ * Get the text generation url. Uses the gpt2 model as the default value.
+ * See: https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task
+ * */
+const getUrl = (model = 'gpt2') => `${apiInferenceEndpoint}/${model}`;
+
+export const TextGenerationFilter: FunctionalFilter = async ({
+  args,
+  value,
+  options,
+}) => {
+  const token = (options as HuggingFaceOptions)?.accessToken;
+  if (!token) throw new InternalError('please given access token');
+
+  if (!isArray(value))
+    throw new InternalError('Input value must be an array of object');
+  if (!(typeof args === 'object') || !has(args, 'query'))
+    throw new InternalError('Must provide "query" keyword argument');
+  if (!args['query'])
+    throw new InternalError('The "query" argument must have value');
+
+  // Convert the data result to a JSON string to use as the question context
+  const context = JSON.stringify(value);
+  // omit the hidden value '__keywords' from args; it is generated by nunjucks and not related to Hugging Face.
+  const { query, model, endpoint, ...otherArgs } = omit(args, '__keywords');
+  const inferenceOptions = pick(otherArgs, ['use_cache', 'wait_for_model']);
+  const parameters = omit(otherArgs, ['use_cache', 'wait_for_model', 'endpoint']);
+  const payload = {
+    inputs: `Context: ${context}. Question: ${query}`,
+    parameters: {
+      return_full_text: false,
+      max_new_tokens: 250,
+      temperature: 0.1,
+    }
+  } as TextGenerationOptions;
+  if (!isEmpty(parameters)) payload.parameters = { ...payload.parameters, ...parameters }; // merge user parameters over the defaults above
+  if (!isEmpty(inferenceOptions)) payload.options = inferenceOptions;
+
+  try {
+    // if no endpoint is given, use the default Hugging Face inference endpoint
+    const url = endpoint ? endpoint : getUrl(model);
+    const results = await postRequest(url, payload, token);
+    // get the "generated_text" field, and trim leading and trailing white space.
+    return String(results[0]['generated_text']).trim();
+  } catch (error) {
+    throw new InternalError(
+      `Error when sending data to Hugging Face for executing TextGeneration tasks, details: ${
+        (error as Error).message
+      }`
+    );
+  }
+};
+
+export const [Builder, Runner] = createFilterExtension(
+  'huggingface_text_generation',
+  TextGenerationFilter
+);
diff --git a/packages/extension-huggingface/src/lib/model.ts b/packages/extension-huggingface/src/lib/model.ts
new file mode 100644
index 00000000..a28922e2
--- /dev/null
+++ b/packages/extension-huggingface/src/lib/model.ts
@@ -0,0 +1,14 @@
+export type HuggingFaceOptions = {
+  accessToken: string;
+};
+
+export const apiInferenceEndpoint =
+  'https://api-inference.huggingface.co/models';
+
+// For more information.
See: https://huggingface.co/docs/api-inference/detailed_parameters#natural-language-processing +export type InferenceNLPOptions = { + // Default: true. There is a cache layer on the inference API to speedup requests we have already seen. Most models can use those results as is as models are deterministic (meaning the results will be the same anyway). + use_cache?: boolean; + // Default: false. If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. + wait_for_model?: boolean; +}; diff --git a/packages/extension-huggingface/src/lib/utils/index.ts b/packages/extension-huggingface/src/lib/utils/index.ts index f4940e37..8f123512 100644 --- a/packages/extension-huggingface/src/lib/utils/index.ts +++ b/packages/extension-huggingface/src/lib/utils/index.ts @@ -1 +1,2 @@ export * from './converter'; +export * from './request'; diff --git a/packages/extension-huggingface/src/lib/utils/request.ts b/packages/extension-huggingface/src/lib/utils/request.ts new file mode 100644 index 00000000..cdad0c76 --- /dev/null +++ b/packages/extension-huggingface/src/lib/utils/request.ts @@ -0,0 +1,17 @@ +import axios, { AxiosError } from 'axios'; + +export const postRequest = async (url: string, data: any, token: string) => { + try { + const result = await axios.post(url, data, { + headers: { Authorization: `Bearer ${token}` }, + }); + return result.data; + } catch (error) { + const axiosError = error as AxiosError; + // https://axios-http.com/docs/handling_errors + // if response has error, throw the response error, or throw the request error + if (axiosError.response) + throw new Error(JSON.stringify(axiosError.response?.data)); + throw new Error(axiosError.message); + } +}; diff --git a/packages/extension-huggingface/test/tableQuestionAnswering.spec.ts b/packages/extension-huggingface/test/tableQuestionAnswering.spec.ts index a9048ece..cdbad3dd 100644 --- a/packages/extension-huggingface/test/tableQuestionAnswering.spec.ts +++ b/packages/extension-huggingface/test/tableQuestionAnswering.spec.ts @@ -2,265 +2,305 @@ import faker from '@faker-js/faker'; import { getTestCompiler } from '@vulcan-sql/test-utility'; import * as dotenv from 'dotenv'; import * as path from 'path'; +import { repositories } from './test-data/repositories'; // support reading the env from .env file if exited when running test case dotenv.config({ path: path.resolve(__dirname, '.env') }); -const data = [ - { - repository: 'vulcan-sql', - stars: 1000, - topic: ['analytics', 'data-lake', 'data-warehouse', 'api-builder'], - description: - 'Create and share Data APIs fast! 
Data API framework for DuckDB, ClickHouse, Snowflake, BigQuery, PostgreSQL', - }, - { - repository: 'accio', - stars: 500, - topic: [ - 'data-analytics', - 'data-lake', - 'data-warehouse', - 'bussiness-intelligence', - ], - description: 'Query Your Data Warehouse Like Exploring One Big View.', - }, - { - repository: 'hello-world', - stars: 0, - topic: [], - description: 'Sample repository for testing', - }, -]; - -it( - 'Should throw error when not pass the "query" argument', - async () => { - const token = process.env['HF_ACCESS_TOKEN']; - const { compileAndLoad, execute } = await getTestCompiler({ - extensions: { huggingface: path.join(__dirname, '..', 'src') }, - huggingface: { - accessToken: token, - }, - }); +describe('Test "huggingface_table_question_answering" filter', () => { + it( + 'Should throw error when not pass the "query" argument', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); - const sql = `{% set data = ${JSON.stringify( - data - )} %}SELECT {{ data | huggingface_table_question_answering("Not contains query argument!") }}`; + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering("Not contains query argument!") }}`; - // Act - await compileAndLoad(sql); + // Act + await compileAndLoad(sql); - // Assert - await expect(execute({})).rejects.toThrow( - 'Must provide "query" keyword argument' - ); - }, - 50 * 1000 -); + // Assert + await expect(execute({})).rejects.toThrow( + 'Must provide "query" keyword argument' + ); + }, + 50 * 1000 + ); -it( - 'Should throw error when pass the "query" argument by dynamic parameter', - async () => { - const token = process.env['HF_ACCESS_TOKEN']; - const { compileAndLoad, execute } = await getTestCompiler({ - extensions: { huggingface: path.join(__dirname, '..', 'src') }, - huggingface: { - accessToken: token, - }, - }); + it( + 'Should throw error when pass the "query" argument but value is undefined', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); - const sql = `{% set data = ${JSON.stringify( - data - )} %}SELECT {{ data | huggingface_table_question_answering(query=context.param.value) }}`; + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering(query=undefined) }}`; - // Act - await compileAndLoad(sql); + // Act + await compileAndLoad(sql); - // Assert - await expect( - execute({ value: 'what repository has most stars?' 
}) - ).rejects.toThrow('The "query" argument must have value'); - }, - 50 * 1000 -); - -it('Should throw error when input value not be array of object', async () => { - const token = process.env['HF_ACCESS_TOKEN']; - const { compileAndLoad, execute } = await getTestCompiler({ - extensions: { huggingface: path.join(__dirname, '..', 'src') }, - huggingface: { - accessToken: token, + // Assert + await expect(execute({})).rejects.toThrow( + 'The "query" argument must have value' + ); }, - }); + 50 * 1000 + ); + + it( + 'Should throw error when pass the "query" argument but value is empty string', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); - const sql = `{% set data = 'not-array-data' %}SELECT {{ data | huggingface_table_question_answering(query="Does the filter work or not") }}`; + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering(query='') }}`; - // Act - await compileAndLoad(sql); + // Act + await compileAndLoad(sql); - // Assert - await expect(execute({})).rejects.toThrow( - 'Input value must be an array of object' + // Assert + await expect(execute({})).rejects.toThrow( + 'The "query" argument must have value' + ); + }, + 50 * 1000 ); -}); -it( - 'Should throw error when not provide access token', - async () => { + it('Should throw error when input value not be array of object', async () => { + const token = process.env['HF_ACCESS_TOKEN']; const { compileAndLoad, execute } = await getTestCompiler({ extensions: { huggingface: path.join(__dirname, '..', 'src') }, huggingface: { - accessToken: '', + accessToken: token, }, }); - const sql = `{% set data = ${JSON.stringify( - data - )} %}SELECT {{ data | huggingface_table_question_answering("${faker.internet.password()}") }}`; + const sql = `{% set data = 'not-array-data' %}SELECT {{ data | huggingface_table_question_answering(query="Does the filter work or not") }}`; // Act await compileAndLoad(sql); // Assert - await expect(execute({})).rejects.toThrow('please given access token'); - }, - 50 * 1000 -); - -it( - 'Should throw error when not set hugging face options', - async () => { - const { compileAndLoad, execute } = await getTestCompiler({ - extensions: { huggingface: path.join(__dirname, '..', 'src') }, - }); - - const sql = `{% set data = ${JSON.stringify( - data - )} %}SELECT {{ data | huggingface_table_question_answering("${faker.internet.password()}") }}`; - - // Act - await compileAndLoad(sql); + await expect(execute({})).rejects.toThrow( + 'Input value must be an array of object' + ); + }); - // Assert - await expect(execute({})).rejects.toThrow('please given access token'); - }, - 50 * 1000 -); - -it( - 'Should get correct expected value when provided "neulab/omnitab-large-1024shot-finetuned-wtq-1024shot" model and wait it for model', - async () => { - const expected = JSON.stringify({ - // neulab/omnitab-large-1024shot-finetuned-wtq-1024shot will return the result including space in the beginning of the vulcan-sql -> ' vulcan-sql' - answer: ' vulcan-sql', - }); - const token = process.env['HF_ACCESS_TOKEN']; - const { compileAndLoad, execute, getExecutedQueries, getCreatedBinding } = - await getTestCompiler({ + it( + 'Should throw error when not provide access token', + async () => { + const { compileAndLoad, execute } = await getTestCompiler({ extensions: { 
huggingface: path.join(__dirname, '..', 'src') }, huggingface: { - accessToken: token, + accessToken: '', }, }); - const sql = `{% set data = ${JSON.stringify( - data - )} %}SELECT {{ data | huggingface_table_question_answering(query="what repository has most stars?", model="neulab/omnitab-large-1024shot-finetuned-wtq-1024shot", wait_for_model=true) }}`; + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering("${faker.internet.password()}") }}`; - // Act - await compileAndLoad(sql); - await execute({}); + // Act + await compileAndLoad(sql); - // Assert - const queries = await getExecutedQueries(); - const bindings = await getCreatedBinding(); - - expect(queries[0]).toBe('SELECT $1'); - expect(bindings[0].get('$1')).toEqual(expected); - }, - 50 * 1000 -); - -it.each([ - { - question: 'what repository has most stars?', - expected: { - answer: 'vulcan-sql', - coordinates: [[0, 0]], - cells: ['vulcan-sql'], - aggregator: 'NONE', - }, - }, - { - question: 'what repository has lowest stars?', - expected: { - answer: 'hello-world', - coordinates: [[2, 0]], - cells: ['hello-world'], - aggregator: 'NONE', - }, - }, - { - question: 'How many stars does the vulcan-sql repository have?', - expected: { - answer: 'SUM > 1000', - coordinates: [[0, 1]], - cells: ['1000'], - aggregator: 'SUM', + // Assert + await expect(execute({})).rejects.toThrow('please given access token'); }, - }, - { - question: 'How many stars does the accio repository have?', - expected: { - answer: 'AVERAGE > 500', - coordinates: [[1, 1]], - cells: ['500'], - aggregator: 'AVERAGE', - }, - }, - { - question: 'How many repositories related to data-lake topic?', - expected: { - answer: 'COUNT > vulcan-sql, accio', - coordinates: [ - [0, 0], - [1, 0], - ], - cells: ['vulcan-sql', 'accio'], - aggregator: 'COUNT', - }, - }, -])( - 'Should get correct expected answer when asking question', - async ({ question, expected }) => { - // Arrange + 50 * 1000 + ); - const token = process.env['HF_ACCESS_TOKEN']; - const { compileAndLoad, execute, getExecutedQueries, getCreatedBinding } = - await getTestCompiler({ + it( + 'Should throw error when not set hugging face options', + async () => { + const { compileAndLoad, execute } = await getTestCompiler({ extensions: { huggingface: path.join(__dirname, '..', 'src') }, - huggingface: { - accessToken: token, - }, }); - const sql = `{% set data = ${JSON.stringify( - data - )} %}SELECT {{ data | huggingface_table_question_answering(query="${question}", wait_for_model=true) }}`; + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering("${faker.internet.password()}") }}`; - // Act - await compileAndLoad(sql); - await execute({}); + // Act + await compileAndLoad(sql); - // Assert - const queries = await getExecutedQueries(); - const bindings = await getCreatedBinding(); - - expect(queries[0]).toBe('SELECT $1'); - // parse the result to object and match the expected value - const result = JSON.parse(bindings[0].get('$1')); - expect(result).toEqual(expected); - }, - 50 * 1000 -); + // Assert + await expect(execute({})).rejects.toThrow('please given access token'); + }, + 50 * 1000 + ); + + it( + 'Should get correct expected value when provided "neulab/omnitab-large-1024shot-finetuned-wtq-1024shot" model and wait it for model', + async () => { + const expected = JSON.stringify({ + // neulab/omnitab-large-1024shot-finetuned-wtq-1024shot will return the result including space in 
the beginning of the vulcan-sql -> ' vulcan-sql' + answer: ' vulcan-sql', + }); + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute, getExecutedQueries, getCreatedBinding } = + await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering(query="what repository has most stars?", model="neulab/omnitab-large-1024shot-finetuned-wtq-1024shot", wait_for_model=true) }}`; + + // Act + await compileAndLoad(sql); + await execute({}); + + // Assert + const queries = await getExecutedQueries(); + const bindings = await getCreatedBinding(); + + expect(queries[0]).toBe('SELECT $1'); + expect(bindings[0].get('$1')).toEqual(expected); + }, + 50 * 1000 + ); + + it.each([ + { + question: 'what repository has most stars?', + expected: { + answer: 'vulcan-sql', + coordinates: [[0, 0]], + cells: ['vulcan-sql'], + aggregator: 'NONE', + }, + }, + { + question: 'what repository has lowest stars?', + expected: { + answer: 'hello-world', + coordinates: [[2, 0]], + cells: ['hello-world'], + aggregator: 'NONE', + }, + }, + { + question: 'How many stars does the vulcan-sql repository have?', + expected: { + answer: 'SUM > 1000', + coordinates: [[0, 1]], + cells: ['1000'], + aggregator: 'SUM', + }, + }, + { + question: 'How many stars does the accio repository have?', + expected: { + answer: 'AVERAGE > 500', + coordinates: [[1, 1]], + cells: ['500'], + aggregator: 'AVERAGE', + }, + }, + { + question: 'How many repositories related to data-lake topic?', + expected: { + answer: 'COUNT > vulcan-sql, accio', + coordinates: [ + [0, 0], + [1, 0], + ], + cells: ['vulcan-sql', 'accio'], + aggregator: 'COUNT', + }, + }, + ])( + 'Should get correct $expected answer when asking $question', + async ({ question, expected }) => { + // Arrange + + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute, getExecutedQueries, getCreatedBinding } = + await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering(query="${question}", wait_for_model=true) }}`; + + // Act + await compileAndLoad(sql); + await execute({}); + + // Assert + const queries = await getExecutedQueries(); + const bindings = await getCreatedBinding(); + // parse the result to object and match the expected value + const result = JSON.parse(bindings[0].get('$1')); + + expect(queries[0]).toBe('SELECT $1'); + expect(result).toEqual(expected); + }, + 50 * 1000 + ); + + it( + 'Should get correct result when pass the "query" argument by dynamic parameter', + async () => { + const expected = { + answer: 'vulcan-sql', + coordinates: [[0, 0]], + cells: ['vulcan-sql'], + aggregator: 'NONE', + }; + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute, getExecutedQueries, getCreatedBinding } = + await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_table_question_answering(query=context.params.value) }}`; + + // Act + await compileAndLoad(sql); + await execute({ value: 'what repository has most stars?' 
}); + + // Assert + const queries = await getExecutedQueries(); + const bindings = await getCreatedBinding(); + // parse the result to object and match the expected value + const result = JSON.parse(bindings[0].get('$1')); + + expect(queries[0]).toBe('SELECT $1'); + expect(result).toEqual(expected); + }, + 50 * 1000 + ); +}); diff --git a/packages/extension-huggingface/test/test-data/repositories.ts b/packages/extension-huggingface/test/test-data/repositories.ts new file mode 100644 index 00000000..9b03a276 --- /dev/null +++ b/packages/extension-huggingface/test/test-data/repositories.ts @@ -0,0 +1,29 @@ +export const repositories = [ + { + repository: 'vulcan-sql', + stars: 1000, + topic: ['analytics', 'data-lake', 'data-warehouse', 'api-builder'], + public: true, + description: + 'Create and share Data APIs fast! Data API framework for DuckDB, ClickHouse, Snowflake, BigQuery, PostgreSQL', + }, + { + repository: 'accio', + stars: 500, + topic: [ + 'data-analytics', + 'data-lake', + 'data-warehouse', + 'bussiness-intelligence', + ], + public: true, + description: 'Query Your Data Warehouse Like Exploring One Big View.', + }, + { + repository: 'hello-world', + stars: 0, + topic: [], + public: false, + description: 'Sample repository for testing', + }, +]; diff --git a/packages/extension-huggingface/test/textGeneration.spec.ts b/packages/extension-huggingface/test/textGeneration.spec.ts new file mode 100644 index 00000000..e27ab78d --- /dev/null +++ b/packages/extension-huggingface/test/textGeneration.spec.ts @@ -0,0 +1,248 @@ +import faker from '@faker-js/faker'; +import { getTestCompiler } from '@vulcan-sql/test-utility'; +import * as dotenv from 'dotenv'; +import * as path from 'path'; +import { repositories } from './test-data/repositories'; + +// support reading the env from .env file if exited when running test case +dotenv.config({ path: path.resolve(__dirname, '.env') }); +describe('Test "huggingface_text_generation" filter', () => { + it( + 'Should throw error when not pass the "query" argument', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_text_generation("Not contains query argument!") }}`; + + // Act + await compileAndLoad(sql); + + // Assert + await expect(execute({})).rejects.toThrow( + 'Must provide "query" keyword argument' + ); + }, + 50 * 1000 + ); + + it( + 'Should throw error when pass the "query" argument but value is undefined', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_text_generation(query=undefined) }}`; + + // Act + await compileAndLoad(sql); + + // Assert + await expect(execute({})).rejects.toThrow( + 'The "query" argument must have value' + ); + }, + 50 * 1000 + ); + + it( + 'Should throw error when pass the "query" argument but value is empty string', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, 
+ }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_text_generation(query='') }}`; + + // Act + await compileAndLoad(sql); + + // Assert + await expect(execute({})).rejects.toThrow( + 'The "query" argument must have value' + ); + }, + 50 * 1000 + ); + + it('Should throw error when input value not be array of object', async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = 'not-array-data' %}SELECT {{ data | huggingface_text_generation(query="Does the filter work or not") }}`; + + // Act + await compileAndLoad(sql); + + // Assert + await expect(execute({})).rejects.toThrow( + 'Input value must be an array of object' + ); + }); + + it( + 'Should throw error when not provide access token', + async () => { + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: '', + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_text_generation("${faker.internet.password()}") }}`; + + // Act + await compileAndLoad(sql); + + // Assert + await expect(execute({})).rejects.toThrow('please given access token'); + }, + 50 * 1000 + ); + + it( + 'Should not throw when passing the "query" argument by dynamic parameter through HuggingFace default recommended "gpt2" model', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute } = await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_text_generation(query=context.params.value, wait_for_model=true, use_cache=false) }}`; + + await compileAndLoad(sql); + // Assert + await expect( + execute({ value: 'what repository has most stars?' }) + ).resolves.not.toThrow(); + }, + 100 * 1000 + ); + + // Skip the test case because the "meta-llama/Llama-2-13b-chat-hf" model need to upgrade your huggingface account to Pro Account by paying $9 per month + it.skip( + 'Should get correct result when pass the "query" argument by dynamic parameter through "meta-llama/Llama-2-13b-chat-hf" model', + async () => { + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute, getExecutedQueries, getCreatedBinding } = + await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_text_generation(query=context.params.value,model="meta-llama/Llama-2-13b-chat-hf", wait_for_model=true, use_cache=false) }}`; + + await compileAndLoad(sql); + await execute({ value: 'what repository has most stars?' }); + + // Assert + const queries = await getExecutedQueries(); + const bindings = await getCreatedBinding(); + + expect(queries[0]).toBe('SELECT $1'); + expect(bindings[0].get('$1')).toEqual( + 'Answer: Based on the information provided, the repository with the most stars is "vulcan-sql" with 1000 stars.' 
+ ); + }, + 100 * 1000 + ); + + // Skip the test case because the "meta-llama/Llama-2-13b-chat-hf" model need to upgrade your huggingface account to Pro Account by paying $9 per month + it.skip.each([ + { + question: 'what repository has most stars?', + expected: + 'Answer: Based on the information provided, the repository with the most stars is "vulcan-sql" with 1000 stars.', + }, + { + question: 'what repository has lowest stars?', + expected: + 'Answer: Based on the information provided, the repository with the lowest stars is "hello-world" with 0 stars.', + }, + { + question: 'How many stars does the vulcan-sql repository have?', + expected: + 'Answer: Based on the information provided, the vulcan-sql repository has 1000 stars.', + }, + { + question: 'How many stars does the accio repository have?', + expected: + 'Answer: Based on the information provided, the accio repository has 500 stars.', + }, + { + question: 'How many repositories related to data-lake topic?', + expected: `Answer: Based on the provided list of repositories, there are 2 repositories related to the data-lake topic: + + 1. vulcan-sql + 2. accio + + Both of these repositories have the data-lake topic in their description.`, + }, + ])( + 'Should get "$expected" answer when asking "$question" through "meta-llama/Llama-2-13b-chat-hf" model', + async ({ question, expected }) => { + // Arrange + const token = process.env['HF_ACCESS_TOKEN']; + const { compileAndLoad, execute, getExecutedQueries, getCreatedBinding } = + await getTestCompiler({ + extensions: { huggingface: path.join(__dirname, '..', 'src') }, + huggingface: { + accessToken: token, + }, + }); + + const sql = `{% set data = ${JSON.stringify( + repositories + )} %}SELECT {{ data | huggingface_text_generation(query="${question}", model="meta-llama/Llama-2-13b-chat-hf", wait_for_model=true) }}`; + + // Act + await compileAndLoad(sql); + await execute({}); + + // Assert + const queries = await getExecutedQueries(); + const bindings = await getCreatedBinding(); + + expect(queries[0]).toBe('SELECT $1'); + expect(bindings[0].get('$1')).toEqual(expected); + }, + 50 * 1000 + ); +});