
feat: support setting maxConcurrentChunks for Generic OpenAI embedder #2655

Conversation

hdelossantos (Contributor)
Pull Request Type

  • [x] ✨ feat
  • [ ] 🐛 fix
  • [ ] ♻️ refactor
  • [ ] 💄 style
  • [ ] 🔨 chore
  • [ ] 📝 docs

Relevant Issues

resolves #2654

What is in this change?

Adds an advanced settings collapsible menu to the Generic OpenAI embedder configuration with an input to optionally set the maximum number of concurrent chunks. If no value is provided, the maxConcurrentChunks variable defaults to 500.

[Screenshot: advanced settings menu with the max concurrent chunks input]

Additional Information

Developer Validations

  • [x] I ran yarn lint from the root of the repo & committed changes
  • [x] Relevant documentation has been updated
  • [x] I have tested my code functionality
  • [x] Docker build succeeds locally

…er through configuration. This allows setting a batch size for endpoints which don't support the default of 500
…ting-embeddings-batch-size-for-openai-compatible
@hdelossantos hdelossantos changed the title 2654 support setting embeddings batch size for openai compatible feat: support setting maxConcurrentChunks for Generic OpenAI embedder Nov 20, 2024
make getter to ensure proper type and format
timothycarambat (Member)

Excellent work. I just made a minor update to turn maxEmbeddingChunks into a class getter, to ensure the value in the ENV is parsed to a number and is always something we can work with.

Also updated field UI to the new UI so it looks right.
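The getter described above might look roughly like this. This is a minimal sketch, not the repository's actual code: the class name and the environment-variable name `GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS` are assumptions made for illustration.

```javascript
// Hypothetical sketch of the change described in the comment above:
// expose maxConcurrentChunks as a getter that parses the ENV value to a
// number, falling back to the default of 500 when it is unset or invalid.
class GenericOpenAiEmbedder {
  get maxConcurrentChunks() {
    // ENV var name is an assumption for this sketch.
    const parsed = Number(
      process.env.GENERIC_OPEN_AI_EMBEDDING_MAX_CONCURRENT_CHUNKS
    );
    // Number(undefined) and Number("") are NaN/0, so both fall back to 500.
    return Number.isNaN(parsed) || parsed <= 0 ? 500 : parsed;
  }
}
```

A getter (rather than a plain property set once in the constructor) means every read re-validates the value, so callers always receive a usable positive number regardless of what was placed in the environment.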

@timothycarambat timothycarambat merged commit 304796e into Mintplex-Labs:master Nov 21, 2024
Successfully merging this pull request may close these issues.

[FEAT]: Add support for specifying maxConcurrentChunks for Generic OpenAI Embedder