Is your feature request related to a problem? Please describe.
Google Vertex AI, in particular the Gemini 1.5 Flash and Gemini 1.5 Pro models, supports prompt caching (also called context caching). We should enable users to use that feature through Haystack to reduce costs and latency. https://cloud.google.com/vertex-ai/generative-ai/docs/context-cache/context-cache-overview
Describe the solution you'd like
We need to implement a way to first create a context cache and then reference its contents in a subsequent prompt request.
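For reference, the two-step flow could look roughly like the sketch below, using the `google-cloud-aiplatform` SDK's preview caching API that a Haystack integration would presumably wrap. This is a hypothetical sketch, not a proposed Haystack API; the function name, parameters, and the chosen model version are illustrative assumptions.

```python
import datetime


def generate_with_context_cache(project: str, location: str,
                                system_text: str, question: str):
    """Hypothetical helper illustrating the two-step context-cache flow."""
    # Lazy import: google-cloud-aiplatform is an optional dependency.
    import vertexai
    from vertexai.preview import caching
    from vertexai.preview.generative_models import GenerativeModel

    vertexai.init(project=project, location=location)

    # Step 1: create the context cache holding the large, reusable context.
    # Note: the service enforces a minimum cached token count.
    cache = caching.CachedContent.create(
        model_name="gemini-1.5-flash-001",  # illustrative model version
        system_instruction=system_text,
        ttl=datetime.timedelta(hours=1),
    )

    # Step 2: reference the cache's contents in a prompt request.
    model = GenerativeModel.from_cached_content(cached_content=cache)
    return model.generate_content(question)
```

A Haystack component would likely split these two steps: one call (or component) to create/look up the cache, and a generator that accepts a cached-content reference alongside the prompt.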