It also should not begin solving the issue if the issue itself uses too many tokens even after summarization. AutoPR should always resummarize the issue context between steps and keep track of how many tokens are left.
Some context may be temporarily offloaded to pull request comments if it is not required for the next step.
This may also require making one change at a time; once a change is done, it can be removed from context.
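The loop described above could be sketched roughly as follows. This is a minimal sketch, not AutoPR's actual API: all names (`TOKEN_BUDGET`, `estimate_tokens`, `summarize`, `run_issue`) are hypothetical, and the summarizer is a stand-in for a real LLM call.

```python
# Hypothetical sketch of the proposed step loop; none of these names
# come from AutoPR itself.

TOKEN_BUDGET = 4000

def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English text.
    return len(text) // 4

def summarize(text: str) -> str:
    # Stand-in for an LLM summarization call; keep roughly the first half.
    return text[: len(text) // 2]

def run_issue(issue_text: str, changes: list[str]) -> list[str]:
    # Refuse to start if one summarization pass still exceeds the budget.
    context = summarize(issue_text)
    if estimate_tokens(context) > TOKEN_BUDGET:
        raise ValueError("issue too large even after summarization")

    done = []
    for change in changes:  # one change at a time
        done.append(change)
        # Once a change is done, drop it from the working context...
        context = context.replace(change, "")
        # ...and resummarize between steps if the budget is exceeded.
        while estimate_tokens(context) > TOKEN_BUDGET:
            context = summarize(context)
    return done
```

Offloading to pull request comments would slot in where the context is shrunk: instead of discarding the removed text, it could be posted as a comment and re-fetched if a later step needs it.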
I might be misunderstanding, but this isn't a viable way of using the LLM. If you ask it to summarize something, it'll do it, into whatever short form you like.
Another way to interpret what you're saying is that it should split issues if they're too long. I'm down to explore this, as a separate trigger on the issue itself (akin to #86 (comment)).
Related to #88 and #93