I was having a conversation today with the voice team about this proposal. The problem is that it will work for LLMs, but it will not work for the default built-in Home Assistant conversation agent. Given that it's the default, we need to make sure this works.

ResponseIntent

The idea is to add a new type of intent called a "ResponseIntent". These intents are meant to capture responses, not to be executed. A response intent has a fixed set of responses, and different sentences can be mapped to it. Take for example the YesNoResponseIntent with two possible options: "yes" and "no".
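To make that concrete, here is a rough sketch of how sentences could be mapped to the fixed responses of such an intent, loosely following a hassil-style custom-sentences layout; the YesNoResponse intent name and the response keys are purely illustrative, not an existing format:

```yaml
# Hypothetical sketch only: mapping user sentences onto the two fixed
# responses ("yes" / "no") of a YesNo-style response intent.
language: "en"
intents:
  YesNoResponse:
    data:
      - sentences:
          - "yes"
          - "sure"
          - "go ahead"
        response: "yes"
      - sentences:
          - "no"
          - "don't"
          - "leave it"
        response: "no"
```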
Using ResponseIntent

The user will specify the expected response intent in the `intent_response` field of the `start_conversation` action. The user will then have to inspect the response variable to see which of the fixed responses was matched. An automation would look like this:

```yaml
triggers: []
conditions: []
actions:
  - action: assist_satellite.start_conversation
    target:
      entity_id: assist_satellite.home_assistant_voice_21c684
    data:
      message: You left the garage door open. Shall I close it?
      intent_response: YesNo
    response_variable: "response"
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ response.intent.response == \"yes\" }}"
        sequence:
          - variables:
              response: Closed
          - action: cover.close_cover
            target:
              entity_id: cover.garage_door
      - conditions:
          - condition: template
            value_template: "{{ response.intent.response == \"no\" }}"
        sequence:
          - variables:
              response: Doing nothing
    default:
      - variables:
          response: I didn't understand it
  - action: assist_satellite.announce
    metadata: {}
    data:
      message: "{{ response }}"
    target:
      entity_id: assist_satellite.home_assistant_voice_21c684
```

Constraints

The behavior of the service will be slightly different based on which pipeline is selected for the Assist Satellite. If the pipeline is leveraging the default Home Assistant conversation agent, the user needs to:
If the pipeline is leveraging a conversation agent that uses LLMs:
Context
With the introduction of Assist Satellite entities, we now have a representation of devices that allow users to talk with Assist. Some of these devices are also able to start a conversation with a user without the user initiating the conversation via a wake-word or button press.
Proposal
Introduce a new Assist Satellite service, `start_conversation`, to allow Home Assistant to start talking to a user via an Assist Satellite device. The conversation will always start with an announcement, and then the user has the ability to respond to it.

Use cases:
Solution
Add a new service, `assist_satellite.start_conversation`, that allows targeting `assist_satellite` entities that support a new supported feature, `START_CONVERSATION`.

The service will have the following service fields (a short usage sketch follows this list):

- `start_message` and `start_media_id`. These fields will work exactly like the `announce` service: either one has to be provided. If no media ID is provided, TTS of the selected pipeline is used to generate the media.
- `extra_system_prompt`. This text field will allow providing extra context for the LLM to be able to understand the response of the user. An example could be: "The entity `cover.garage_door` has been open for 30 minutes and we asked the user if they want to close it". This context is important if the LLM then sees that the user said the words "Sure".
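As a usage sketch under the field names above (the satellite entity ID and the wording are made up for illustration, and this is not a confirmed final syntax), an automation action calling the service could look roughly like this:

```yaml
# Hedged sketch: calling the proposed service with start_message and
# extra_system_prompt. Entity IDs and wording are illustrative only.
actions:
  - action: assist_satellite.start_conversation
    target:
      entity_id: assist_satellite.kitchen_satellite
    data:
      start_message: "The garage door has been open for 30 minutes. Should I close it?"
      extra_system_prompt: >-
        The entity cover.garage_door has been open for 30 minutes and we asked
        the user if they want to close it.
```

With that context, a reply as short as "Sure" can be interpreted by the LLM agent as confirmation to close the cover.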
Exploration

- `start_conversation`: Experiment: Add start_conversation service to assist satellite core#125934
- `extra_system_prompt`: Add extra prompt to assist pipeline and conversation core#124743