[Feature Request] Estimated Time to Review (ETR) #303
Comments
@Archetypically that's a nice idea. Of course it's just an estimation, but I do agree it can give the reviewer a consistent baseline and a valuable estimate of the size and complexity of the PR.
I opened a pull request:
Example results can be found in:
You are welcome to give feedback about the prompt. Anything else you think should be added to it?
@mrT23 -- appreciate the quick turnaround! We experimented with a similar prompt, so this is pretty close to what we landed on as well. One thing that was born out of feedback and prototyping was the concept of categorical sizing; we landed on T-shirt sizing.
And we reported this alongside the time. Generally, we found that the LLM wasn't hugely accurate in certain cases, but the T-shirt sizing was pretty useful. We also decided to add prompt parts about ignoring certain types of files that don't need reviewing; think auto-generated GraphQL schemas or similar.
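For anyone curious what "categorical sizing" might look like in practice, here is a minimal sketch of bucketing an estimated review time into T-shirt sizes. The thresholds are illustrative assumptions, not values from this thread or from PR-Agent:

```python
# Map an estimated review time (minutes) to a T-shirt size.
# Thresholds are illustrative assumptions, not from the thread.
def tshirt_size(estimated_minutes: float) -> str:
    if estimated_minutes <= 5:
        return "XS"
    if estimated_minutes <= 15:
        return "S"
    if estimated_minutes <= 30:
        return "M"
    if estimated_minutes <= 60:
        return "L"
    return "XL"
```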
@Archetypically your observation that LLMs are better at comparing and ranking than at giving absolute values is correct. I will change the prompt. I think the T-shirt size is a bit cumbersome and an unintuitive ranking option, both for the LLM and for a human. Why not just a number - "review effort, on a scale of 1-5"? Regarding ignoring certain file types - we already do that, see 'pr_agent/settings/language_extensions.toml'. You are welcome to open a PR with more types, or dynamically edit the 'extra' field.
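As a rough illustration of the "scale of 1-5" idea, the sketch below builds a prompt that asks for a single integer score and validates the model's reply. This is a hypothetical example, not PR-Agent's actual prompt or parsing code:

```python
# Illustrative sketch (not PR-Agent's real prompt) of requesting and
# validating a 1-5 review-effort score.
def build_effort_prompt(diff: str) -> str:
    return (
        "Estimate the effort required to review the following diff "
        "on a scale of 1-5, where 1 is trivial and 5 is very hard.\n"
        "Answer with a single integer.\n\n"
        f"{diff}"
    )

def parse_effort(reply: str) -> int:
    # Take the first token of the reply and check it is in range.
    score = int(reply.strip().split()[0])
    if not 1 <= score <= 5:
        raise ValueError(f"score out of range: {score}")
    return score
```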
In any case, since this is a nice feature, I think I will turn it on by default. Thanks for the suggestion @Archetypically. Merged #306
A great value-add feature would be an estimated time to review that the agent could generate.
This could be based on whatever the model considers a significant metric (lines of code changed, types of files changed, number of files changed, deletions vs. modifications vs. additions, etc.).
This is valuable from the perspective of the PR author, who gets feedback on whether a PR is going to feel "too big" to review (the downstream action being to break it up into multiple PRs).
Additionally, this would help reviewers by addressing the cognitive load of "this PR will take forever to review," which might otherwise negatively impact their responsiveness.
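To make the idea concrete, here is a rough, purely hypothetical heuristic that turns the kinds of diff statistics listed above into a review-time estimate. The weights are invented for illustration and are not part of the request or of PR-Agent; in practice the model would pick its own signals:

```python
# Hypothetical heuristic: estimate review time from basic diff statistics,
# weighting modifications and additions more heavily than deletions.
def estimate_review_minutes(additions: int, deletions: int,
                            modifications: int, files_changed: int) -> float:
    per_line = 0.05 * additions + 0.02 * deletions + 0.08 * modifications
    per_file = 1.0 * files_changed  # context-switch cost per touched file
    return round(per_line + per_file, 1)

# Example: a PR adding 200 lines, deleting 30, modifying 40, across 6 files.
print(estimate_review_minutes(additions=200, deletions=30,
                              modifications=40, files_changed=6))
```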
Suggested post here by @tjwp