Hi, I'm looking for a way to cache either the KnowledgeBank object or the text embeddings produced when loading data into embedding format, ready for RAG.
Describe the solution you'd like
I've tried to find the embeddings, but I'm not sure where they are produced, so I'm unable to save them.
Also, the default example seems to load only from source files rather than from the produced embeddings themselves.
I've tried pickling the entire knowledge bank with pickle and dill, but some private attributes are not copied over.
Caching would be very helpful for quick code iterations when building LLM agents, since the data stays the same throughout.
Thanks for such a useful library!
Since the RAG module in AgentScope is built on Llama-index, embedding storage follows the Llama-index format via its persist methods. The default path for the embeddings is ./runs/{knowledge_id}/default__vector_store.json.
BTW, KnowledgeBank is more like a dispatcher; a Knowledge instance (e.g., LlamaIndexKnowledge) is the one responsible for embedding generation/retrieval.
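To illustrate the reuse pattern that a persisted embedding file enables, here is a minimal, stdlib-only sketch. Note this is a generic cache-or-compute pattern, not AgentScope's or Llama-index's actual API: the file name `embeddings_cache.json` and the `embed_fn` callback are hypothetical stand-ins (the real persisted file is Llama-index's `default__vector_store.json` under `./runs/{knowledge_id}/`).

```python
import json
import os


def load_or_build_embeddings(knowledge_id, texts, embed_fn, runs_dir="./runs"):
    """Reuse embeddings persisted by an earlier run; otherwise compute
    them with embed_fn and persist them for the next iteration.

    NOTE: illustrative sketch only -- the cache file name and embed_fn
    signature are assumptions, not part of AgentScope.
    """
    cache_path = os.path.join(runs_dir, knowledge_id, "embeddings_cache.json")

    # Fast path: a previous run already persisted the embeddings.
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)

    # Slow path: compute embeddings once, then persist them.
    embeddings = [embed_fn(t) for t in texts]
    os.makedirs(os.path.dirname(cache_path), exist_ok=True)
    with open(cache_path, "w") as f:
        json.dump(embeddings, f)
    return embeddings
```

On the second and later runs over the same data, the embedding model is never invoked, which is exactly the quick-iteration behavior asked for above.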