Why re-invent the wheel? #113
You bring up a good point—tools like Litellm and others are already out there and do a great job. But honestly, history is full of success stories that started with someone "reinventing the wheel." Python, for example, came along when we already had C, Perl, and Java, yet it succeeded because it focused on simplicity and readability. It wasn't just about what it did, but how it did it.

The thing is, progress doesn't always come from doing something completely new. Sometimes it's about taking an existing idea and iterating on it—making it a little better, a little different, or just more aligned with a specific need. And often, the act of building itself leads to a deeper understanding of the problem space and opens doors to innovation.

Sure, Litellm and others are great, but there's always room for someone to come in with fresh eyes and create something unexpected. Supporting that kind of work isn't just about the outcome—it's about fostering curiosity, creativity, and growth. Even if the result isn't a revolution, the process itself is invaluable. Who knows? Maybe this iteration will be the one that sparks something big.

Plus, these tools themselves are also reinventing the wheel in a way. Litellm and others often act as wrappers around powerful tools like OpenAI, Mistral, and others. But think about it—weren't these tools also reinventing the wheel when OpenAI first made its mark? OpenAI wasn't the first AI company, and tools like Mistral built on those foundations, refining the approach, targeting specific needs, and pushing boundaries. Reinvention is just part of how progress works.
Looking into LiteLLM's source makes me want to reinvent the wheel.
Cool reply. Tools only become the tools because they apply to a specific need.
Yeah I mean that's the real question, why is this wheel different?
> On Sat, Nov 30, 2024, MartinGuo wrote: Cool reply. Tools only become the tools since they apply specifically. Are there any highlights on this project regarding the comparison?
Hi @TashaSkyUp, I am not a contributor here or to any of the existing libraries, but I am currently evaluating the available solutions for easily querying different LLM providers. I also like to look into other people's codebases and judge them, so I can give you some elements of a response from the point of view of a Python package developer. First, my use case: all I want is a Python API for sending messages to LLM providers and getting a completion back.
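For what it's worth, that use case fits in a handful of lines. Here is a minimal sketch of the kind of unified interface being described, with a stubbed provider standing in for real network calls — all names here are illustrative assumptions, not aisuite's or LiteLLM's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Registry mapping a provider name to a callable (model, messages) -> completion text.
# A real library would wrap each vendor's SDK here; a stub stands in for the network call.
PROVIDERS: Dict[str, Callable[[str, List[dict]], str]] = {
    "stub": lambda model, messages: f"echo:{messages[-1]['content']}",
}

@dataclass
class UnifiedClient:
    """Hypothetical single entry point for chatting with any registered provider."""

    def complete(self, model: str, messages: List[dict]) -> str:
        # Several of these libraries use "provider:model-name" ids;
        # split once and dispatch to the registered provider.
        provider, _, model_name = model.partition(":")
        if provider not in PROVIDERS:
            raise ValueError(f"unknown provider: {provider}")
        return PROVIDERS[provider](model_name, messages)

client = UnifiedClient()
reply = client.complete("stub:tiny-model", [{"role": "user", "content": "hi"}])
# reply == "echo:hi"
```

The appeal of the use case is exactly that the caller only ever touches `complete()`; everything provider-specific hides behind the registry.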
Thank you for the thoughtful analysis. I agree litellm is no longer lite, but I imagine it was at some point. I've also already seen in this repository that there are plans to start expanding the codebase to cover more features. I imagine this is exactly how litellm started out: "oh, we will keep it lite, clean, and simple!" Just like this repo is now. Your other comments about industry standards and what people should do are your opinion, or possibly rules you dictate to your subordinates. This, my friend, is the wilds of GitHub: people of all nations, socio-economic statuses, education levels, (dis)abilities, and a thousand other qualifiers contribute here. If the repo owners want to be inclusive of the fact that not everyone is a CS major with 10 years in industry (lucky bastards), then great; if not, maybe they should host it on their own git.
It would be good to hear from the authors what the intent and direction of this repo is, as that would give some insight into where to contribute.
@andrewyng Loved your classes! And I would love to hear your thoughts, or your team's thoughts, on this thread. What do you think? How will aisuite differentiate itself from the others?
Addresses code qa critique andrewyng/aisuite#113 (comment)
* feat(base_llm): initial commit for common base config class
  Addresses code qa critique andrewyng/aisuite#113 (comment)
* feat(base_llm/): add transform request/response abstract methods to base config class

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>

* feat(base_llm): initial commit for common base config class
  Addresses code qa critique andrewyng/aisuite#113 (comment)
* feat(base_llm/): add transform request/response abstract methods to base config class
* feat(cohere-+-clarifai): refactor integrations to use common base config class
* fix: fix linting errors
* refactor(anthropic/): move anthropic + vertex anthropic to use base config
* test: fix xai test
* test: fix tests
* fix: fix linting errors
* test: comment out WIP test
* fix(transformation.py): fix is pdf used check
* fix: fix linting error

…ere (#7117)
* fix use new format for Cohere config
* fix base llm http handler
* Litellm code qa common config (#7116)
  * feat(base_llm): initial commit for common base config class
    Addresses code qa critique andrewyng/aisuite#113 (comment)
  * feat(base_llm/): add transform request/response abstract methods to base config class
  Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
* use base transform helpers
* use base_llm_http_handler for cohere
* working cohere using base llm handler
* add async cohere chat completion support on base handler
* fix completion code
* working sync cohere stream
* add async support cohere_chat
* fix types get_model_response_iterator
* async / sync tests cohere
* feat cohere using base llm class
* fix linting errors
* fix _abc error
* add cohere params to transformation
* remove old cohere file
* fix type error
* fix merge conflicts
* fix cohere merge conflicts
* fix linting error
* fix litellm.llms.custom_httpx.http_handler.HTTPHandler.post
* fix passing cohere specific params

Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
…y, mistral, codestral, nvidia nim, cerebras, volcengine, text completion codestral, sambanova, maritalk to base llm config Addresses feedback from andrewyng/aisuite#113 (comment)
…oth use base llm config Addresses andrewyng/aisuite#113 (comment)
…#7157) * refactor(ollama/): refactor ollama `/api/generate` to use base llm config Addresses andrewyng/aisuite#113 (comment) * test: skip unresponsive test * test(test_secret_manager.py): mark flaky test * test: fix google sm test
Hi @vemonet, thank you for the feedback on litellm. Here's what we've done / are doing about this. Is this what you wanted?

1. 'LLM providers don't have a unified parent abstract class'
All chat providers (exc. Bedrock) now inherit from a parent abstract class. For reference, here's the refactor on:
2. 'no clear coherent structure for LLM providers'
We refactored. Standard naming convention: all folders in llms/ are now named after their litellm provider name (enforced by a test). Common enforced structure: https://github.com/BerriAI/litellm/tree/30e147a315d29ba3efe61a179e80409a77754a42/litellm/llms/watsonx
3. 'As a bonus they are redefining a LiteLLMBase class many times at different places'
Removed redefinition: defined in just 1 place. Clarity on usage: renamed it to
4. 'Another red flag: there is a symlink to requirements.txt in the main package folder'
Removed the symlink to requirements.txt.
5. 'Configuration for linting tools is a complete mess'
LiteLLM follows the Google Python Style Guide. We run:

If there's any way we can improve further here, let me know.

[PLANNED FOR NEXT WEEK]
1. 'The list of mandatory dependencies for
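The "parent abstract class" with transform request/response hooks described above can be sketched roughly as follows. This is a hedged illustration of the pattern, not LiteLLM's actual API: the class and method names (`BaseLLMConfig`, `transform_request`, `transform_response`, `ExampleProviderConfig`) and the payload shapes are assumptions made for the example.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class BaseLLMConfig(ABC):
    """Hypothetical shared parent for per-provider configs."""

    @abstractmethod
    def transform_request(self, model: str, messages: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Map unified chat messages into the provider's request payload."""

    @abstractmethod
    def transform_response(self, raw: Dict[str, Any]) -> str:
        """Pull the completion text out of the provider's raw response."""

class ExampleProviderConfig(BaseLLMConfig):
    """Toy provider: joins messages into a prompt, reads text from 'choices'."""

    def transform_request(self, model: str, messages: List[Dict[str, Any]]) -> Dict[str, Any]:
        prompt = "\n".join(m["content"] for m in messages)
        return {"model": model, "prompt": prompt}

    def transform_response(self, raw: Dict[str, Any]) -> str:
        return raw["choices"][0]["text"]
```

The payoff of such a base class is that one generic HTTP handler can drive every provider through the same two hooks, instead of each integration re-implementing its own request/response plumbing.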
@TashaSkyUp - Please wait for the next set of features to be announced/added. Differentiators will become evident over the course of the next few releases. Thanks for using aisuite and giving feedback. We are committed to maintaining and enhancing this library long term.
LiteLLM is the reason I'm looking at AIsuite. I just need a simple client, not a whole proxy or gateway server like LiteLLM.
Does AIsuite offer other features like caching, routing, a proxy server, etc., which litellm provides? That's the main reason for using litellm, right?
thanks for the feedback @dinhanhx. We wrote litellm to be a lightweight library for LLM calling, since langchain felt bloated at the time, so this is good to know. Will work on moving the proxy code outside the default pip package. Should be live by the end of the week, hopefully.
@TashaSkyUp - Thanks for opening this thread. If the issue is resolved and your question has been answered, can I close it?
Yes, I think the thread has served its purpose. Thank you, everyone.
…sdk dep's Addresses feedback from andrewyng/aisuite#113 (comment)
Litellm and probably a few others have already done all of this; this energy and time would be better spent on litellm.
I mean, I guess there are a couple of possible reasons, but I don't see them being mentioned, so.