
API - websocket ? #191

Closed
bakman2 opened this issue May 5, 2023 · 1 comment
bakman2 commented May 5, 2023

Thanks a lot for this, works flawlessly on my m2 mac with node-red.

I have not used the OpenAI API, but I imagine that with their computing power responses are fast enough that a REST API suffices. It would be nice to have a websocket server available in addition to REST, so that we can follow along with the response as it is generated instead of waiting for the whole thing.


mudler commented May 6, 2023

Hey, thanks 👋

SSE support is already available, and tokens are streamed one by one - however, it currently works only with llama.cpp-compatible models (not yet with gpt4all-j, for instance).
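For reference, an SSE token stream from an OpenAI-compatible endpoint can be consumed with a small line parser like the sketch below. The payload shape follows the OpenAI chat-completions streaming format; the exact fields a given local model emits are an assumption, as is the local endpoint URL mentioned in the comments.

```python
import json


def parse_sse_line(line):
    """Parse one raw SSE line; return the decoded JSON event, or None
    for blank lines, comments, and the terminating 'data: [DONE]' sentinel."""
    line = line.strip()
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    return json.loads(payload)


def tokens_from_stream(lines):
    """Yield content tokens from an iterable of raw SSE lines
    (e.g. response.iter_lines() from a POST to a hypothetical
    http://localhost:8080/v1/chat/completions with "stream": true)."""
    for line in lines:
        event = parse_sse_line(line)
        if event is None:
            continue
        delta = event["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]


# Example: feeding a few captured SSE lines through the parser.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(tokens_from_stream(sample)))  # -> Hello
```

Printing each yielded token as it arrives gives the "follow along" behavior asked about, without needing a websocket.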

This issue is a duplicate of #109, where you can follow progress, so I'm closing it.
