Thanks a lot for this — it works flawlessly on my M2 Mac with Node-RED.
I haven't used the OpenAI API, but I imagine that with their computing power responses are fast enough that a REST API suffices. It would be nice to have a WebSocket server available in addition to REST, so we can follow along with the response as it's generated instead of waiting for it to finish.
SSE is supported and tokens are streamed one by one — however, it's currently available only for llama.cpp-compatible models (it doesn't work with gpt4all-j yet, for instance).
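For reference, a client can consume the SSE stream without WebSockets. Below is a minimal sketch (e.g. for a Node-RED function node) of parsing OpenAI-style streaming events, where each event is a `data: {...}` line carrying a token delta and the stream ends with `data: [DONE]`; the function name and sample payloads are illustrative, not part of the API.

```javascript
// Hypothetical helper: collect content tokens from OpenAI-style SSE text.
// Each event is a line of the form `data: {...json...}`;
// the stream is terminated by the sentinel `data: [DONE]`.
function extractTokens(sseText) {
  const tokens = [];
  for (const line of sseText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blanks and comments
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) tokens.push(delta); // one streamed token fragment
  }
  return tokens;
}

// Example: two streamed deltas followed by the terminator.
const sample = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  "data: [DONE]",
].join("\n");
console.log(extractTokens(sample).join("")); // "Hello"
```

In practice you would feed chunks from the HTTP response body into a parser like this and emit each token as it arrives, which gives the "follow along" behavior without a separate WebSocket server.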
This issue is a duplicate of #109, where you can follow progress, so I'm closing it.