
Handle parallel function calls for OpenAI LLMs #522

Merged
4 commits merged on Sep 30, 2024

Conversation

@kwindla (Contributor) commented Sep 30, 2024

The hard work was done here by @jeevan057

#346 (review)

I rebased, fixed one small bug, and modified the code from the original PR to use the newer context aggregator frames approach. I've tested this heavily.

There's one question in the comments of the 14-function-calling.py example, but I don't think that should block a merge.
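
For context, "parallel function calls" means a single assistant message can carry several tool calls at once, each of which must be executed and answered with its own tool result. A minimal sketch of that dispatch loop, using a hypothetical function registry (the names below are illustrative, not the actual Pipecat implementation):

```python
# Hypothetical sketch of dispatching parallel tool calls from an
# OpenAI-style chat completion message. The registry and function
# names are illustrative, not Pipecat's real code.
import json


def fetch_weather(location: str) -> dict:
    # Stand-in for a real weather API call.
    return {"location": location, "conditions": "sunny"}


FUNCTION_REGISTRY = {"fetch_weather": fetch_weather}


def dispatch_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Run every tool call in the assistant message and build one
    'tool'-role result message per call, keyed by tool_call_id."""
    results = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        output = FUNCTION_REGISTRY[name](**args)
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results
```

With parallel calls enabled, the model may emit any number of entries in `tool_calls`, so the loop has to key each result by `tool_call_id` rather than assume a single call per response.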

transport.input(),
context_aggregator.user(),
llm,
fl_out,
# fl_out,
Contributor:

should we just remove the frame loggers?

@chadbailey59 (Contributor) Sep 30, 2024:

Probably. I want to leave them in an example or two so people know they exist, though.

Contributor Author:

I was thinking the same thing as @chadbailey59 . No reason not to leave them in a few examples as, um, examples.

Contributor:

If we leave them we should uncomment them I think.
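
As a side note for readers of the example, a frame logger of the `fl_out` sort is conceptually just a pass-through stage that records what flows by. A minimal sketch, with an illustrative class (not Pipecat's real `FrameProcessor` API):

```python
# Hypothetical minimal frame logger, loosely in the spirit of the fl_out
# processors in the example pipeline. The class and method names here are
# illustrative only.
class FrameLogger:
    def __init__(self, name: str = "FrameLogger"):
        self.name = name
        self.seen: list[str] = []

    def process_frame(self, frame) -> object:
        # Record the frame's type, then pass it through unchanged.
        self.seen.append(f"{self.name}: {type(frame).__name__}")
        return frame
```

Because it forwards every frame untouched, it can be commented in or out of a pipeline without changing behavior, which is why the examples toggle it freely.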

# possible question for Aleix and Chad about what the right way
# to trigger speech is, now, with the new queues/async/sync refactors.
# await llm.push_frame(TextFrame("Let me check on that."))
logger.debug(f"Starting fetch_weather_from_api with function_name: {function_name}")
Contributor:

The rule is: if a processor can push frames from a different task, then it should be async. But function calls happen in the same task, no? If so, this should be OK.

Contributor:

I just realized that, by default, all processors should be async, not the other way around. It makes more sense and it's safer this way. That was my initial idea, but then I changed my mind. I now think the original idea was right. I'll make the change in the morning, and I believe that line should be safe.
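
The "async by default" rationale can be illustrated with a small asyncio sketch (generic code, not Pipecat's actual classes): frames pushed from any number of tasks are funneled through one queue, so the processor consumes them in a single task and downstream ordering stays well defined.

```python
# Illustrative sketch of why a processor that can receive frames from
# multiple tasks wants an internal queue: producers push concurrently,
# but one consumer task drains the queue serially.
import asyncio


class AsyncProcessor:
    def __init__(self):
        self._queue: asyncio.Queue = asyncio.Queue()
        self.received: list[str] = []

    async def push_frame(self, frame: str) -> None:
        # Safe to call from any task: the queue serializes access.
        await self._queue.put(frame)

    async def run(self, n: int) -> None:
        # Single consumer: frames are processed one at a time, in order
        # of arrival, regardless of which task pushed them.
        for _ in range(n):
            self.received.append(await self._queue.get())


async def main() -> list[str]:
    proc = AsyncProcessor()

    async def producer(tag: str, count: int) -> None:
        for i in range(count):
            await proc.push_frame(f"{tag}-{i}")

    # Two producer tasks push frames concurrently with the consumer.
    await asyncio.gather(producer("a", 2), producer("b", 2), proc.run(4))
    return proc.received
```

A synchronous processor that called downstream directly from each producer task would interleave its work unpredictably; routing everything through the queue is what makes the async variant the safer default.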

Contributor Author:

I will rebase and test again after this change.

@chadbailey59 (Contributor) left a comment:

I still want to do some downstream testing with RTVI, but I think this is good to merge 👍

@kwindla force-pushed the rebase-openai-multi-function-call branch from ae16277 to 0499fe4 on September 30, 2024 at 23:10
@kwindla kwindla merged commit 47b943a into main Sep 30, 2024
3 checks passed
@kwindla kwindla deleted the rebase-openai-multi-function-call branch September 30, 2024 23:23