
Suggestion: Consider using the OmniAI gem to increase the number of providers available #85

Open
MadBomber opened this issue Aug 21, 2024 · 9 comments

Comments

@MadBomber

I've started looking closely at the omniai gem as a way to increase the number of providers to which I can link.

@swerner
Contributor

swerner commented Aug 21, 2024

Cool! Will check this out, have also been looking at OpenRouter. Are there particular providers you're looking for?

@MadBomber
Author

OpenAI, Anthropic, and Mistral are the primary providers that I use. What appeals to me about the OmniAI library is that I can ignore the provider and focus on the model name: using OmniAI, I can craft a client for the correct provider just by knowing what model I want to use. That is in addition to its support for LocalAI and Ollama in the local environment via its OpenAI client code.

OpenRouter is also on my list to investigate.

@swerner
Contributor

swerner commented Aug 22, 2024

Ah I see, so for configuring a sublayer app, you'd rather just be able to say "gpt-4o" rather than having to say both OpenAI as a provider and "gpt-4o" as the model?

We should be able to build a way to key off of only the model name so that we use the correct provider behind the scenes similar to what omniai is doing.

@AndrewBKang
Contributor

@MadBomber That's helpful to think about. An interface where you can specify only the model name and have the gem infer the rest does sound pleasant to use! Will definitely take a look at omniai! So that and the local support. Any other features?

@MadBomber
Author

Yes, my suggestion was prescriptive; sorry. From a hype POV, we are pounded with the latest model names from the different providers. It's becoming a blur. Who has the best model for image, audio, text, or code? Focusing on the model and allowing the framework to choose the provider is, I think, a desirable simplification.

Linking a model name to its provider is not necessarily a simple thing to do. I think a regex approach to linking model names to providers would work... but there I go again being prescriptive.
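A regex-based lookup along those lines might look something like this. This is a minimal sketch, not Sublayer's or OmniAI's actual API; the patterns and provider symbols are illustrative assumptions and would need to track real model naming over time:

```ruby
# Hypothetical sketch: infer a provider from a model name via regex.
# The patterns below are illustrative assumptions, not an exhaustive list.
MODEL_PROVIDER_PATTERNS = {
  /\Agpt-/                => :openai,
  /\Aclaude-/             => :anthropic,
  /\Amistral-|\Amixtral-/ => :mistral,
  /\Agemini-/             => :google
}.freeze

def provider_for(model_name)
  MODEL_PROVIDER_PATTERNS.each do |pattern, provider|
    return provider if model_name.match?(pattern)
  end
  raise ArgumentError, "Unknown provider for model: #{model_name}"
end

provider_for("gpt-4o")        # => :openai
provider_for("claude-3-opus") # => :anthropic
```

The main maintenance cost is that the pattern table has to be updated whenever a provider introduces a new naming scheme, which is one reason an explicit override would still be worth exposing.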

@swerner
Contributor

swerner commented Aug 22, 2024

Got it, that's helpful. And yeah, good point that we're exposing a bit too much configuration that doesn't really matter a lot of the time.

We can probably simplify the configuration a lot for the common cases (using OpenAI, Anthropic, or Google), but expose the ability to customize it if you're running Mistral locally, on one of the major providers like Groq, or on your own personal cloud somewhere.
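One way that split could work is a config where the provider defaults from the model name but accepts an explicit override. This is purely a sketch; `build_config` and its keywords are hypothetical names, not the gem's current API:

```ruby
# Hypothetical configuration sketch: the provider defaults from the model
# name, but can be overridden for self-hosted or alternative deployments.
require "ostruct"

def build_config(model:, provider: nil)
  inferred =
    case model
    when /\Agpt-/     then :openai
    when /\Aclaude-/  then :anthropic
    when /\Amistral-/ then :mistral
    end
  OpenStruct.new(model: model, provider: provider || inferred)
end

build_config(model: "mistral-large")                  # provider inferred as :mistral
build_config(model: "mistral-large", provider: :groq) # explicit override wins
```

That keeps the common case down to a single model-name setting while preserving the escape hatch for hosted-vs-local deployments.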

@MadBomber
Author

@AndrewBKang You asked about other features. I have not built anything with Sublayer yet, but after reading through the very nice documentation I was wondering whether there might be a way to support embeddings and function callbacks.

@AndrewBKang
Contributor

@MadBomber Awesome! Glad you've been checking the docs out! They're going to get another facelift pretty soon!

What kind of embeddings and function callbacks were you thinking of?

@MadBomber
Author

@AndrewBKang I'm currently looking into an enterprise-scale HR domain: employee handbooks, rules, regulations, and benefits, where the source material changes on a regular basis. That material may also be unique to different employee groups (for example, benefit packages), with the added complexity of differences between states/countries with respect to rules and regulations.
