Improve the intake conversation logic #6
Replies: 4 comments 1 reply
-
Thanks for all the feedback; we're still waiting for a project lead, though.
-
I know :-), see: #5 (comment)
-
Upon rereading, I don't really understand the point of this post. DONE is an incredibly simple approach, just for demonstration purposes. Imagine that the user has an "I'm done describing my symptoms" button in the UI.
-
I completely agree that "DONE" is enough for an initial demonstration, but when considering the overall user experience in a conversational application, there's room for improvement, albeit perhaps minor relative to the entire application. In my understanding, the intake system should primarily function as an advanced assistant capable of engaging in natural language conversations with users. Can you imagine if the user interface involved voice-only interactions via a phone? To me, that seems like a promising direction to explore.

The use of a button or a placeholder is, in my opinion, a (conversational) design "anti-pattern". In conversations with a doctor, it's usually the doctor who guides the exchange and assesses whether the patient needs to share more symptoms. Patients typically don't need to explicitly say "DONE" (or otherwise state that they have finished listing their symptoms). Instead, it is the doctor who takes the lead in shaping the conversation. At least, this is the common social practice. So I envision a virtual conversational system behaving in a similar manner. These are just my thoughts on the matter.
-
Hi Dave,
first of all, I want to thank you for your perseverance in sharing your work in so many directions. Specifically, thank you for sharing your cognitive architecture learnings (ACE, etc.) and this specific project, once again.
Your intake prompt is a masterpiece for its conciseness and power: Medical_Intake Intake Prompt.
I also enjoyed your chat.py for its supreme simplicity and clarity!
I know your main goal now is to maximize simplicity for the proof of concept; nevertheless, I'm perplexed by the logic that ends the conversation between the chatbot and the user. Following your code and prompt, if I understand correctly, you put the decision to conclude the conversation in the user's hands: the user must enter the word "DONE" to signal that the intake is complete. See: chat.py Line 65
This is a simple, effective solution, but a bit misleading and not the best in terms of user experience. You could avoid this user decision because the LLM could perhaps decide by itself when it has gathered all the necessary information (to be verified; I still have to test what I'm writing). So my proposal is to add some logic to the original prompt to let the LLM itself decide when to conclude the conversation with the patient.
So, going back to the intake conversation, the LLM could emit a JSON signal
{ "end_conversation": true }
to be caught by your chat.py program. In this way you delegate the responsibility of evaluating the intake to the LLM itself, not to the user. You could implement this by simply adding a rule to the original prompt instructing the model to emit this signal once it considers the intake complete.
Of course, this is just the basic idea, and the implementation could depend on the capabilities of the specific LLM model. For example, chat completion models are not always reliable at mixing free-text generation with structured output such as JSON. To be verified.
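To make the idea concrete, here is a minimal sketch of how the chat loop could detect such a signal in the model's reply. All names here are illustrative (not taken from the actual chat.py); it assumes the convention that the model appends the JSON object on its own line:

```python
import json

def conversation_is_over(reply: str) -> bool:
    """Return True if the model's reply contains the end-of-intake signal.

    Assumes the prompt instructs the model to append a JSON object like
    {"end_conversation": true} on its own line when the intake is complete.
    """
    for line in reply.splitlines():
        line = line.strip()
        # Look for a line that is entirely a JSON object (a simplistic
        # convention; a production parser would need to be more robust).
        if line.startswith("{") and line.endswith("}"):
            try:
                data = json.loads(line)
            except json.JSONDecodeError:
                continue
            if data.get("end_conversation") is True:
                return True
    return False
```

The main chat loop would then call `conversation_is_over(reply)` after each model turn and exit the intake phase when it returns True, instead of waiting for the user to type "DONE".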
What do you think?
Use LiteLLM
By the way, a minor note: you could replace the direct OpenAI completion API calls with LiteLLM. That way, with a minor modification to your code, you make your application independent of the specific LLM provider.