feat: Marcelle/selective render #630
base: v1.0
Conversation
Desktop App for this PR
The following build is available for testing. The app is signed and notarized for macOS. After downloading, unzip the file and drag Goose.app to your Applications folder. This link is provided by nightly.link and will work even if you're not logged into GitHub.
How do these messages render in the UI / CLI?
Yeah, nice. It would also be nice in the GUI if it could have a goose-like icon or prefix, to make it clear whether it is a side-channel message or just from goose. (Optional, though, as most people won't necessarily know or care where it came from.)
Looks good, I like this solution! Left a comment below on how we can filter these from model inference.
```rust
            &tools,
        ).await?;
        capabilities.record_usage(usage).await;

        // Yield the assistant's response
        yield response.clone();

        // If a context-limit goose message was added, yield it
```
Thinking through this, we need to make sure that the goose-role messages never reach an LLM provider.
I think we can do that in one of two ways:
- make the agent filter the messages out before any provider calls
- update the providers to filter (which I see you've done for Anthropic, but I think is missing from the others?)

Vaguely, I think the former is a bit easier; maybe add a helper method on the capabilities struct?
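A minimal sketch of the suggested helper, filtering goose-role messages out of the history before any provider call. The `Role`/`Message` types and the `messages_for_provider` name here are assumptions for illustration, not the actual goose crate API:

```rust
#[derive(Clone, Debug, PartialEq)]
enum Role {
    User,
    Assistant,
    Goose, // hypothetical side-channel role, shown to the user only
}

#[derive(Clone, Debug)]
struct Message {
    role: Role,
    text: String,
}

/// Return only the messages that should be sent to an LLM provider,
/// dropping any side-channel goose-role messages.
fn messages_for_provider(messages: &[Message]) -> Vec<Message> {
    messages
        .iter()
        .filter(|m| m.role != Role::Goose)
        .cloned()
        .collect()
}

fn main() {
    let history = vec![
        Message { role: Role::User, text: "hi".into() },
        Message { role: Role::Goose, text: "context limit reached".into() },
        Message { role: Role::Assistant, text: "hello".into() },
    ];
    let filtered = messages_for_provider(&history);
    // The goose-role message is kept in `history` but never sent out.
    assert_eq!(filtered.len(), 2);
}
```

Doing this once in the agent keeps every provider implementation unaware of the extra role, which is why filtering before the call is the simpler of the two options.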
@baxen I'm updating the providers to filter goose-role messages, and it's more involved than would be wise to do for tomorrow, so I'm putting it in its own PR. I've found and updated all the providers to skip goose-role messages, but testing all of it will spill into next week.
```rust
            .count_everything(system_prompt, messages, tools, &resources, model);

        let mut new_messages = messages.to_vec();
        if approx_count > target_limit {
            let user_msg_size = self.text_content_size(new_messages.last(), model);
```
Are we sure we should take this out? Was the intention here to catch the case where the user's message is itself too big to fit within the limit? I think that's a reasonable edge case. But if we leave it in, I think we'd have to filter to make sure the message being checked is actually a user message.
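One way to guard that edge case is to measure the last message only when it is a user message, so a trailing goose or assistant message is never mistaken for oversized user input. This is a sketch under assumed types; `Role`, `Message`, and `last_user_message_size` are illustrative names, not the actual goose code:

```rust
#[derive(Clone, Debug, PartialEq)]
enum Role {
    User,
    Assistant,
    Goose, // hypothetical side-channel role
}

#[derive(Clone, Debug)]
struct Message {
    role: Role,
    text: String,
}

/// Size of the most recent message, but only if it is a user message;
/// otherwise `None`, so the oversized-user-input check is skipped when
/// the history ends with a non-user message.
fn last_user_message_size(messages: &[Message]) -> Option<usize> {
    messages
        .last()
        .filter(|m| m.role == Role::User)
        .map(|m| m.text.len())
}

fn main() {
    let history = vec![
        Message { role: Role::User, text: "a very long prompt".into() },
        Message { role: Role::Goose, text: "context limit reached".into() },
    ];
    // The trailing goose message is not treated as user input.
    assert_eq!(last_user_message_size(&history), None);
}
```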
This change allows "Goose" (not the user, not the assistant) to show the user information that is outside of the user's conversation with the LLM, and avoids sending those out-of-band messages to the LLM.

Ideally, visually, it would be nice to update the prompt so Goose messages show as a third speaker.

Changelist: this keeps the Goose messages in the message history.