Replies: 1 comment
-
Clever! You may be interested in #131
-
Use case:
I use the default gpt-4 model, and when the User Proxy executes code, it sends the entire result back to the chat. Sometimes this is more than 6k tokens of mostly useless information, so performing a simple task can cost a few dollars.
To analyze the long result, I see a solution: condense the execution output before it is sent back to the model.
This would reduce the cost a lot.
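
A minimal sketch of what that condensing step could look like, assuming a plain truncation filter applied to the execution result before it is appended to the chat; the function name `truncate_output`, the 1,000-token budget, and the use of tiktoken are illustrative assumptions, not an existing AutoGen API:

```python
# Hypothetical filter for long code-execution output.
# Assumes tiktoken is installed; the token budget is arbitrary.
import tiktoken

ENCODING = tiktoken.encoding_for_model("gpt-4")
MAX_OUTPUT_TOKENS = 1000  # illustrative cap on the execution result

def truncate_output(raw_output: str, budget: int = MAX_OUTPUT_TOKENS) -> str:
    """Keep only the last `budget` tokens of an execution result.

    The tail of the output is usually the informative part (final
    prints, tracebacks), so earlier tokens are dropped first.
    """
    tokens = ENCODING.encode(raw_output)
    if len(tokens) <= budget:
        return raw_output
    kept = ENCODING.decode(tokens[-budget:])
    return f"[... {len(tokens) - budget} tokens truncated ...]\n{kept}"
```

Keeping the tail rather than the head is a deliberate choice here: final prints and tracebacks usually carry the information the model actually needs, and capping what the User Proxy sends per turn keeps a verbose script from turning a simple task into a multi-dollar conversation.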