Streaming responses #5
A pull request would be much appreciated. Perhaps there could be another parameter to get_chat_response to specify the return type: text or stream, with text as the default.
Ok, I'll try to take a look tomorrow. LMK if you find anything out about how it's done in the browser: I haven't done any research yet.
I implemented a streaming response, but I don't know how to output it on the console.
I solved the problem, but the method was ugly.
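The suggested parameter could be sketched roughly like this; the function body and chunk data below are placeholders for illustration, not the library's actual implementation:

```python
def get_chat_response(prompt, output="text"):
    """Hypothetical sketch of the suggested API: output="text" (the default)
    returns the whole reply as one string; output="stream" yields it chunk
    by chunk. The chunks below are placeholder data, not a real API call."""
    chunks = ["Hello, ", "world!"]  # stand-in for streamed API responses
    if output == "text":
        return "".join(chunks)
    if output == "stream":
        return iter(chunks)
    raise ValueError("output must be 'text' or 'stream'")
```

Keeping text as the default would leave existing callers unchanged while letting new callers iterate over the stream.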
@A-kirami I have written a rough implementation of text streaming. Can you check if it works as intended (It works for me but need verification)
import json
from revChatGPT.revChatGPT import Chatbot  # import path may differ

if __name__ == "__main__":
    with open("config.json", "r") as f:
        config = json.load(f)
    chatbot = Chatbot(config)
    if "session_token" in config:
        chatbot.refresh_session()
    while True:
        prompt = input("You: ")
        for message in chatbot.get_chat_response(prompt, output="stream"):
            print(message)

This should print out the message as it streams. I'm not sure how to deal with the console so that it is overwritten as more data comes in.
It works normally. If you need to print it on the console, you can refer to the implementation here: https://github.com/A-kirami/ChatGPT/blob/main/src/revChatGPT/__main__.py#L69-L81
Like this: [screenshot omitted]
I am terrible at this. It keeps erasing the lines in my implementation |
I'll copy your code over and modify it |
This doesn't work in all terminals: [snippet omitted]
This doesn't work properly in all terminals either, for some reason. My hair is falling out lol
The error seems to be that this deletes the number of lines it is aware of, but some terminals count lines differently.
The approach I was attempting (but haven't perfected yet) is to only write a line once it has completely loaded, rather than deleting lines after they're written. Would that work here?
That is what I'm attempting to do. I write the whole message and then when a new one comes in, I try to delete the previous message. It's not clearing the right number of lines though. |
I'm suggesting you not try to delete the previous message (as I did with the please wait prompt), but rather keep track of how many lines you've written, and only write each new line. |
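A minimal sketch of that idea, assuming (as the thread suggests) that each streamed message contains the cumulative text so far; stream_print is a hypothetical helper name, not part of the library:

```python
def stream_print(chunks):
    """Print each completed line exactly once as cumulative text streams in,
    so nothing ever has to be erased from the terminal."""
    lines_printed = 0
    last = ""
    for text in chunks:
        last = text
        lines = text.split("\n")
        # Every line except the final one is complete; print only new ones.
        for line in lines[lines_printed:len(lines) - 1]:
            print(line)
        lines_printed = max(lines_printed, len(lines) - 1)
    # Print whatever is left on the final (possibly partial) line.
    for line in last.split("\n")[lines_printed:]:
        print(line)
```

Because nothing is ever deleted, this sidesteps the line-counting differences between terminals, at the cost of a partial line not appearing until it is complete.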
Tried that. An issue with this is that some environments treat lines differently. For example, my current omz shell treats each visually wrapped line as a new line (which changes depending on window size). However, Python can only see \n as a new line.
I tried counting the characters and saving them, then deleting that number of characters when a new message comes in. However, \b isn't supported in some terminals either. |
Not even ChatGPT knows the solution to this... |
I think streaming is more suited for GUI applications. Terminals don't like overwrites |
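One way to avoid overwrites entirely, again assuming each streamed chunk is the cumulative text so far, is to write only the newly appended suffix of each chunk. This is a sketch under that assumption, not the library's code:

```python
import sys

def stream_delta(chunks):
    """Write only the characters appended since the previous chunk, so the
    terminal never needs lines erased or the cursor moved."""
    written = ""
    for text in chunks:
        # Only act on chunks that extend what we've already written.
        if text.startswith(written):
            sys.stdout.write(text[len(written):])
            sys.stdout.flush()
            written = text
    sys.stdout.write("\n")
```

Since output is append-only, it behaves the same in any terminal, wrapped lines and all.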
Do you have a pointer to a commit that has streaming working but doesn't have the terminal stuff solved yet? I can restart my efforts from there. |
Streaming working on latest commit |
Of main or async-dev? |
main |
On that commit I'm getting: [error output omitted]
It returns a string. I didn't update the docs yet. You need to call with output="stream" |
I've been running it with [command omitted]
It can only be implemented custom right now. |
Example code: #5 (comment) |
Cleaning up, then will PR. |
Opened #23 |
I ought to have some better way to detect auth errors |
Yeah, that has tripped me up a few times too. Can we store more persistent credentials to allow it to log in again? |
The thing with the NextAuth tokens is that they need to be refreshed every hour via !refresh. They stay valid that way. However, if you don't refresh them, they expire.
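That hourly refresh could be automated with a background timer. A hedged sketch, where start_auto_refresh is a hypothetical helper (not part of the library) and only refresh_session is assumed from the source:

```python
import threading

def start_auto_refresh(chatbot, interval_seconds=3600.0):
    """Call chatbot.refresh_session() periodically so the session token
    doesn't lapse (the thread reports it expires after about an hour).
    Returns an Event; call .set() on it to stop refreshing."""
    stop = threading.Event()

    def _loop():
        # wait() returns False on timeout, True once stop is set
        while not stop.wait(interval_seconds):
            chatbot.refresh_session()

    threading.Thread(target=_loop, daemon=True).start()
    return stop
```

This would replace the manual !refresh step; setting the returned event ends the background thread cleanly.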
I haven't figured out their hashing system on the login page. It hashes the password client side |
I'll open an issue for this |
The https://chat.openai.com/chat web interface begins displaying responses as soon as the model generates them, allowing you to begin reading lengthy responses much sooner.
Do you know how we would go about implementing a similar feature on the command line? It probably doesn't need to stream every word, but line by line streaming would be cool.
Happy to help, whether that means picking up where you left off or figuring it out from scratch...