Suggestion: make chatgpt-shell-api-url-base customizable for each model #272
Comments
I think the easiest way to do this would be to modify chatgpt-shell-openai-make-model to take :url-base and :provider options that override the defaults. This seems like an easy solution.
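A minimal sketch of what the proposed call could look like. Note that :url-base and :provider are the suggested additions, not existing parameters, and the other keywords shown are illustrative; the actual signature of chatgpt-shell-openai-make-model may differ:

```elisp
;; Hypothetical usage once :url-base/:provider overrides exist.
;; When the new keywords are omitted, the OpenAI defaults would apply.
(chatgpt-shell-openai-make-model
 :version "meta-llama/llama-3.3-70b-instruct"
 :provider "OpenRouter"                     ; proposed override
 :url-base "https://openrouter.ai/api/v1"   ; proposed override
 :token-width 3
 :context-window 131072)
```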
Hey, thanks for filing!
Sounds good. In addition to this change, how about creating chatgpt-shell-openrouter.el and using chatgpt-shell-openai-make-model to list the models you use? We can add more over time, and/or let demand from other folks populate it. Interested in sending a PR?
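A rough sketch of what such a file might look like, reusing the OpenAI constructor since OpenRouter exposes an OpenAI-compatible API. The :url-base keyword is the override proposed above, and the model identifiers are examples, not a vetted list:

```elisp
;;; chatgpt-shell-openrouter.el --- OpenRouter models -*- lexical-binding: t; -*-

;; Sketch only: OpenRouter speaks the OpenAI chat API at a
;; different base URL, so the OpenAI model constructor can be reused.
(require 'chatgpt-shell-openai)

(defun chatgpt-shell-openrouter-models ()
  "Build a list of OpenRouter model definitions."
  (mapcar (lambda (version)
            (chatgpt-shell-openai-make-model
             :version version
             :url-base "https://openrouter.ai/api/v1")) ; proposed keyword
          '("meta-llama/llama-3.3-70b-instruct"
            "qwen/qwq-32b-preview")))
```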
I'll put together a pull request when I have a chance. For some models (e.g. llama3.3), there are multiple providers that can be accessed through OpenRouter, with different pricing and throughput. Which ones are selected is controlled by additional parameters, so I'll need to look into the best way to deal with that.
I'm thinking the best way to do this would be to first modify … This is the page that explains the provider routing features in the API that I mentioned earlier.
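For reference, OpenRouter's provider routing is driven by an extra provider object in the request body, alongside the standard OpenAI chat fields. A sketch of what such a payload might look like; the field names ("order", "allow_fallbacks") are taken from OpenRouter's routing documentation and should be treated as assumptions to verify:

```elisp
;; Extra fields OpenRouter accepts on top of the OpenAI chat payload.
;; "order" lists preferred providers; "allow_fallbacks" controls
;; whether other providers may be used if those are unavailable.
(json-encode
 '((model . "meta-llama/llama-3.3-70b-instruct")
   (messages . [((role . "user") (content . "Hello"))])
   (provider . ((order . ["DeepInfra" "Together"])
                (allow_fallbacks . :json-false)))))
```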
Yup. This sounds good to me.
This is in progress here, though I haven't had a chance to finish it yet: https://github.com/djr7C4/chatgpt-shell/tree/openrouter
Looking good.
No worries. No hurry from my end.
This is working now for llama-70b. Interestingly, OpenRouter supports o1 (not the preview version). It doesn't currently work, though, because OpenRouter streams lines such as ": OPENROUTER PROCESSING\n\n", which seem to confuse chatgpt-shell.
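In server-sent events, lines beginning with ":" are comments that clients are supposed to ignore; OpenRouter sends these as keep-alives while a response is pending. A sketch of stripping them from a chunk before it reaches the existing parser; the function name is illustrative and not part of chatgpt-shell's actual internals:

```elisp
(defun my/strip-sse-comments (chunk)
  "Remove SSE comment lines (those starting with \":\") from CHUNK.
Per the SSE spec, such lines carry no data and can be dropped."
  (mapconcat #'identity
             (seq-remove (lambda (line)
                           (string-prefix-p ":" line))
                         (split-string chunk "\n"))
             "\n"))
```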
openai/o1 should be working now.
Moving chatgpt-shell-api-url-base into the parameters for chatgpt-shell-openai-make-model would allow users to use different models that share OpenAI's API. That API has been adopted as a de facto standard, and a number of other companies support it for their services.
My particular use case is that I'm interested in running llama3.3 and Qwen QwQ 32B on openrouter.ai, which uses the same API as OpenAI (just with a different URL). In fact, their example Python code uses the openai module; they don't provide a module of their own.
This service also supports many other models that are not yet supported by this project, so this change would greatly expand the range of models users can access.
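For comparison, the existing global variable can already be repointed today, but only for all models at once, which is what motivates the per-model override. A sketch, with the caveat that the exact split between base URL and request path in chatgpt-shell is an assumption:

```elisp
;; Works today, but globally: every configured model would hit
;; OpenRouter, which is why per-model overrides are being proposed.
;; Whether the "/api" (or "/api/v1") suffix belongs in the base or in
;; the request path depends on chatgpt-shell's internals.
(setq chatgpt-shell-api-url-base "https://openrouter.ai/api")
```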