Experimental use of LLMs for completing issues #121
adamamer20
started this conversation in
Ideas
Replies: 2 comments
-
According to OpenAI's modification of SWE-bench, Aider performs better than SWE-Agent, while Agentless performs best overall.
-
Thanks @rht! Always on top :). I took a look at Agentless, and right now its documentation is fairly lacking (it shows how to reproduce the SWE-bench result, but that's it). We can try using Aider; it seems well documented and easy to set up!
-
I believe that with a good CI suite in place and a good template for issues (e.g. current behavior, expected behavior, ...) we could use a local LLM like DeepSeek-Coder to solve the easiest issues and simply review the changes. This would aid development greatly, because it would allow us to focus only on the most difficult issues and less on routine maintenance.
Look at this repo: https://github.com/princeton-nlp/SWE-agent?tab=readme-ov-file
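To make the issue-template idea above concrete, here is a minimal sketch of a GitHub issue form with the structured fields mentioned (current behavior, expected behavior). The file path, field IDs, and labels are illustrative assumptions, not something agreed in this thread:

```yaml
# Hypothetical sketch: .github/ISSUE_TEMPLATE/bug_report.yml
# Field names and labels are illustrative, not from the discussion.
name: Bug report
description: Report unexpected behavior
body:
  - type: textarea
    id: current-behavior
    attributes:
      label: Current behavior
      description: What actually happens, ideally with a minimal reproduction.
    validations:
      required: true
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected behavior
      description: What you expected to happen instead.
    validations:
      required: true
```

Requiring both fields gives an LLM (and the CI suite) a well-defined before/after specification to work against, which is what would let the easiest issues be attempted automatically.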