To adopt LiteLLM, I propose we deprecate `llm_factory` and wrap LiteLLM's `completion` interface instead, so users' calls hit the inference APIs directly through LiteLLM.
Issue: reduce the need to maintain per-provider inference integrations inside continuous-eval.
Advantage: lets users switch between multiple LLMs and get fallbacks/caching without us building that core infrastructure ourselves (see the sketch below).
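
A minimal sketch of what this could look like. The `LLM` class and `generate` method are hypothetical names for illustration, not existing continuous-eval APIs; the `fallbacks` kwarg and the `litellm.cache = Cache()` caching hook are my understanding of LiteLLM's interface and should be double-checked against the current docs:

```python
import litellm
from litellm.caching import Cache

# Enable LiteLLM's built-in response caching (assumption: in-memory cache by default).
litellm.cache = Cache()


class LLM:
    """Thin wrapper around litellm.completion, replacing the per-provider llm_factory."""

    def __init__(self, model: str, fallbacks: list[str] | None = None):
        self.model = model
        self.fallbacks = fallbacks or []

    def generate(self, prompt: str, temperature: float = 0.0) -> str:
        # LiteLLM routes the call to the right provider based on the model string,
        # e.g. "gpt-4", "claude-3-haiku-20240307", "ollama/llama2".
        response = litellm.completion(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
            fallbacks=self.fallbacks,  # tried in order if the primary model fails
        )
        return response.choices[0].message.content


# One interface for any provider, with fallback models; no custom infra needed.
llm = LLM("gpt-4", fallbacks=["claude-3-haiku-20240307"])
print(llm.generate("Rate the relevance of this answer on a 1-5 scale: ..."))
```

The key design point is that provider selection collapses into the model string, so the factory layer (and its per-provider client code) becomes unnecessary.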