Piping output of a suggested command back to GPT for improved iterations #590
Closed · adityakavalur started this conversation in General · Replies: 1 comment
Just want to make sure the below is currently possible (and if not, whether it is on the planned feature list).
In chat/REPL mode, providing information seems restricted to stdin and user prompts. There is no way to pipe the output of a command into the next user prompt, or to have it become a prompt of its own. This matters when the command/code snippet suggested by GPT is incorrect: easily feeding that stdout/stderr back to GPT would speed up the development cycle.
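For comparison, here is a sketch of closing that loop by hand today. The failing command and `build.log` are stand-ins for the compile step, and the commented-out piped `sgpt` call assumes one-shot mode accepts stdin:

```shell
# Stand-in for a failing compile: capture its stderr into a log file
ls no_such_file_590 2> build.log || true
# Show the captured error; in practice this would be piped onward, e.g.:
# cat build.log | sgpt "how do I fix this error?"
cat build.log
```

The pain point is that nothing like this works from inside an ongoing REPL session, where each turn only takes a typed prompt.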
In the above error, if GPT were able to see the mpi.h error it would help out a lot. I get that it can become expensive to add tokens arbitrarily; an alternative would be a scratch file that gets prepended as context every time. The compile command could pipe its output to a file,
sgpt.scratch
that would then get appended to the follow-up question.
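A minimal sketch of the glue being proposed, assuming the `sgpt.scratch` name from above and a hypothetical error message; the point is only that the scratch file's contents get stitched onto the next question before it is sent:

```shell
# Hypothetical compile error captured into the proposed scratch file
printf 'fatal error: mpi.h: No such file or directory\n' > sgpt.scratch

# Build the next prompt by prepending the scratch contents to the question
question="Why does this compile fail?"
prompt="$(cat sgpt.scratch)
$question"
printf '%s\n' "$prompt"
```

A real implementation would presumably also truncate or clear `sgpt.scratch` between turns so stale errors don't keep consuming tokens.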