This is a major update on the path to Generative Semantic Search.
The extractor pipeline was one of the first components in txtai, going all the way back to 1.0. Since then, much has changed both within txtai and externally. This pipeline has a lot of potential, but it needs a couple of updates.
Make the following upgrades to the Extractor pipeline.
- Ability to run embeddings searches. Given that content is supported, text can be retrieved from the embeddings instance.
- In addition to extractive QA, support text generation models, sequence to sequence models and custom pipelines.
- Better detection of when a tokenizer should be used (word vector models only).
These changes will enable a prompt-driven approach to question-answering with LLMs. This includes Hugging Face models and external services like OpenAI/Cohere. Services can be called directly or with another library like langchain. Custom pipelines only require a __call__ interface.
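Since custom pipelines only need to be callable, any object implementing `__call__` can fill that role. A minimal sketch of such a pipeline (the class name and upper-casing behavior are illustrative, not part of txtai):

```python
class Paraphrase:
    """Toy custom pipeline: any callable satisfies the __call__ contract.

    This placeholder just upper-cases text; a real pipeline could call a
    Hugging Face model, an external service like OpenAI/Cohere, or a
    langchain chain instead.
    """

    def __call__(self, texts):
        # Accept either a single string or a list of strings
        if isinstance(texts, str):
            return texts.upper()
        return [text.upper() for text in texts]
```

An instance of a class like this could then be passed to the Extractor wherever a model is expected (exact wiring depends on the txtai version; see the Extractor documentation).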
hi @davidmezzetti! Thank you for all your work. I want to use the "Cohere/Cohere-embed-english-v3.0" embedding model in the following code:
```python
%%capture
from txtai import Embeddings

# Works with a list, dataset or generator
data = [
    "US tops 5 million confirmed virus cases",
    "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
    "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
    "The National Park Service warns against sacrificing slower friends in a bear attack",
    "Maine man wins $1M from $25 lottery ticket",
    "Make huge profits without work, earn up to $100,000 a day"
]
```
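One possible route for an external embedding model like the Cohere one mentioned above is txtai's external vectorization hook, which takes a user-supplied transform function. A minimal sketch, with assumptions labeled: the transform body below is a deterministic placeholder so it runs without an API key, and the real implementation would call Cohere's embed endpoint instead.

```python
import hashlib

def transform(inputs):
    """Map a list of texts to fixed-length vectors.

    Placeholder: derives toy 8-dimensional vectors from a SHA-256 digest
    so this sketch runs offline. A real implementation would instead
    return embeddings from the Cohere API for each input text.
    """
    vectors = []
    for text in inputs:
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        # Scale the first 8 digest bytes into floats in [0, 1]
        vectors.append([b / 255.0 for b in digest[:8]])
    return vectors
```

With a real transform in place, the function could be wired into the snippet above as `Embeddings({"method": "external", "transform": transform, "content": True})` — check the txtai external vectorization documentation for the exact configuration keys in your version.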