
Add batching to llm_embedding #54

Merged
queryproc merged 1 commit into main on Nov 12, 2024
Conversation

dorbanianas
Member

No description provided.

@@ -19,17 +19,17 @@ static void LlmEmbeddingScalarFunction(DataChunk &args, ExpressionState &state,
auto model_details_json = CoreScalarParsers::Struct2Json(args.data[1], 1)[0];
auto model_details = ModelManager::CreateModelDetails(con, model_details_json);

auto embeddings = nlohmann::json::array();
vector<string> prepared_inputs;
for (auto &row : inputs) {
Member
`prepared_inputs` is not a clear name; let's find a better one. You can also pass the input count to `prepared_inputs` in its constructor so its capacity is allocated up front.
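The reviewer's suggestion can be sketched as follows. This is a hypothetical standalone example, not the extension's actual code; the function name `PrepareBatchedInputs` and variable `batched_inputs` are illustrative stand-ins:

```cpp
#include <string>
#include <vector>

// Collect all row inputs up front so a single batched embedding request can
// be sent, instead of one request per row. Reserving with the known input
// count (the review's point) avoids repeated reallocations while appending.
std::vector<std::string> PrepareBatchedInputs(const std::vector<std::string> &inputs) {
    std::vector<std::string> batched_inputs;
    batched_inputs.reserve(inputs.size()); // capacity known up front
    for (const auto &row : inputs) {
        // Real code would normalize or serialize each row before batching.
        batched_inputs.push_back(row);
    }
    return batched_inputs;
}
```

The same effect could be had by constructing the vector directly from the input range; `reserve` is shown because the loop body in the real function does per-row work before appending.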

@queryproc merged commit 8a1a350 into main on Nov 12, 2024.
@queryproc deleted the feat/llm-embedding-batching branch on Nov 12, 2024 at 01:04.
@queryproc mentioned this pull request on Nov 19, 2024.
2 participants