Remove trailing Markdown code tags in completion suggestions #726
Conversation
Thanks for submitting your first pull request! You are awesome! 🤗
```python
MockProvider,
{
    "model_id": "model",
    "responses": ["```python\nTest python code\n```"],
```
What if there is a trailing newline after the triple backtick? Will the test still pass? Maybe worth parametrising it?
It looks good in general, thank you! One suggestion: test whether it works with trailing whitespace/newlines too.
Thanks @krassowski! The current code does not work with trailing whitespace/newlines (in my testing, GPT never returned any whitespace after the closing Markdown fence), but I will add some code to handle trailing whitespace and adjust the test accordingly.
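The whitespace handling described above could be sketched roughly as follows. This is a hypothetical helper for illustration, not the PR's actual code; the function name and regexes are assumptions:

```python
import re

def post_process_suggestion(suggestion: str) -> str:
    """Strip a leading ```lang fence and a trailing ``` fence from an
    LLM completion, tolerating whitespace/newlines after the closing fence.
    Illustrative sketch only; not Jupyter AI's actual implementation."""
    # Drop an opening fence such as ```python followed by a newline
    suggestion = re.sub(r"^```[^\n]*\n", "", suggestion)
    # Drop a closing fence, even when followed by trailing whitespace/newlines
    suggestion = re.sub(r"\n?```\s*$", "", suggestion)
    return suggestion
```

With this shape, both `"```python\nTest python code\n```"` and the same response with a trailing newline reduce to the bare code.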
@bartleusink Thanks for taking up this issue, much appreciated. I was also trying to fix it; see PR #698, which I will drop once we have a better one (thanks @krassowski for your feedback on that one). I tested your modification and am getting erratic behavior; can you try your solution out to see if I am doing something wrong? See the two examples below (one where the backticks remain, and one where they are gone but the forward ticks remain). Is this the intended behavior?
Frankly, the issue is that
I spoke with @dlqqq, and we are both inclined to move forward with this PR. Thanks a lot to @bartleusink and @krassowski for all your help with resolving this issue. I closed my PR #698 in favor of this one.
However, the proposed PR still fails if the LLM generates commentary. We should not expect every LLM to generate a suitable code completion even with artful prompting. After all, an LLM may sometimes generate only code, and sometimes also generate commentary. We should make a best effort at producing correct code even when the LLM emits unnecessary commentary.
If we expect LLMs to never generate commentary, why not do some post-processing to “assist” LLMs which are unable to do so? I recommend using a line-by-line approach as suggested in my PR #698. More specifically, we should drop all lines of text not surrounded by markdown code delimiters.
Can you implement this and include the unit tests?
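The line-by-line idea suggested above could look roughly like this. A sketch of the approach described for PR #698, with a hypothetical function name; it is not the PR's actual code:

```python
def extract_code_lines(completion: str) -> str:
    """Keep only lines inside fenced code blocks, dropping commentary.
    Sketch of the line-by-line approach discussed above; hypothetical helper."""
    kept = []
    in_block = False
    for line in completion.splitlines():
        if line.lstrip().startswith("```"):
            in_block = not in_block  # toggle state on each fence marker
            continue                 # never keep the fence line itself
        if in_block:
            kept.append(line)
    return "\n".join(kept)
```

Note the limitation raised later in this thread: this simple toggle misbehaves when the generated code itself contains triple backticks (e.g. code that generates Markdown), which is part of why this was deferred to a separate pull request.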
This is not what I meant to say (and I think I did not say this). The expectation, in my view, should be that LLMs generate commentary wrapped in the appropriate comment syntax for the given language.
This sounds like a good default, especially if customisable, but I would suggest planning this separately and implementing it in a new pull request. Implementing it correctly is not trivial: an LLM may begin generating code with or without a leading markdown code delimiter, and those delimiters may appear in the code itself, especially code that generates Markdown (not an uncommon task, including in notebooks).
That's great, and I understand that the internal back channels may be easier for communication, but it would be great if design decisions for open source projects were discussed in public issues, or at least in public meetings :)
Force-pushed from 742fd78 to 6078fa8
We agree; in fact, @srdas had planned to discuss this in the last JLab meeting on Wednesday, but got sidetracked with other priorities. He has attempted to post all discussion points in his comments here. I see that @dlqqq has approved this PR; is there anything else missing before we merge it?
Force-pushed from 6078fa8 to 23350ba
@3coins I was about to merge this before realizing that this PR may introduce conflicts with #717, which is currently in review. @krassowski Will #717 supersede the implementation in this PR? If so, I think we should close this PR in favor of #717.
Force-pushed from 23350ba to 313b8b7
…lab#726)

* Remove closing markdown identifiers (jupyterlab#686)
* Remove whitespace after closing markdown identifier
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This is my attempt at fixing #686.
I have also added a unit test and adapted the `MockProvider` a little bit so that unit tests can test different responses. As a result, I've also changed the assertions in the `test_handle_stream_request` test to what I think are the intended responses.
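Following the parametrisation suggestion from the review, a test over several response shapes could be sketched like this. The `strip_fences` helper is a hypothetical stand-in for the post-processing under test, not the PR's actual function:

```python
import re

def strip_fences(text: str) -> str:
    """Hypothetical stand-in for the completion post-processing under test."""
    text = re.sub(r"^```[^\n]*\n", "", text)  # opening fence
    return re.sub(r"\n?```\s*$", "", text)    # closing fence + trailing whitespace

# Parametrise over responses with and without trailing whitespace,
# mirroring the "responses" list passed to MockProvider in the diff above.
cases = [
    "```python\nTest python code\n```",
    "```python\nTest python code\n```\n",    # trailing newline
    "```python\nTest python code\n```   ",   # trailing spaces
]

for response in cases:
    assert strip_fences(response) == "Test python code"
```

In a pytest suite the `cases` list would naturally become a `@pytest.mark.parametrize` decorator over a single test function.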