
Fix magic commands when using non-chat providers w/ history #1075

Merged
merged 2 commits into jupyterlab:main on Nov 1, 2024

Conversation

alanmeeson
Contributor

When we call provider.generate, we are ultimately calling the generate method defined on langchain_core.language_models.llms.BaseLLM. That method expects a list of string prompts and performs a separate generation for each one, so that batch processing can be used where possible.

The bug is that we pass each message from the context as a separate prompt, rather than combining them into a single prompt that includes both the context and the human message.

The fix concatenates all of the messages, separated by two newlines, and passes a single-element list to the generate function.
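For illustration, here is a minimal sketch of the change. The variable names (`context_messages`, `human_message`, `prompt`) are hypothetical and not the actual jupyter-ai code; `provider.generate` is assumed to delegate to langchain's `BaseLLM.generate`:

```python
# `provider` is assumed to wrap a langchain BaseLLM (non-chat) model.
# `context_messages` is a list of strings built from the conversation history,
# and `human_message` is the user's current prompt string.

# Before (buggy): every message becomes its own prompt, so BaseLLM.generate
# runs a separate generation per message instead of one generation that sees
# the whole conversation.
# result = provider.generate(context_messages + [human_message])

# After (fixed): join the context and the human message with a blank line
# between each, and pass a single-element list so exactly one generation is
# performed over the combined prompt.
prompt = "\n\n".join([*context_messages, human_message])
result = provider.generate([prompt])

# BaseLLM.generate returns an LLMResult; the single generation's text is at:
output = result.generations[0][0].text
```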

@dlqqq dlqqq added the bug Something isn't working label Nov 1, 2024
@dlqqq dlqqq changed the title Fixes context generation for non-chat providers. Fix generation when using BaseLLM providers with context in magic commands Nov 1, 2024
@dlqqq dlqqq changed the title Fix generation when using BaseLLM providers with context in magic commands Fix generation when using context with non-chat providers in magic commands Nov 1, 2024
@dlqqq dlqqq changed the title Fix generation when using context with non-chat providers in magic commands Fix magic commands when using non-chat providers w/ history Nov 1, 2024
alanmeeson and others added 2 commits November 1, 2024 15:03
@dlqqq dlqqq (Member) left a comment


Great catch, thank you! The explanation you included w/ your PR description is precisely correct. I love how you also described why this PR fixes it.

@dlqqq dlqqq enabled auto-merge (squash) November 1, 2024 22:06
@dlqqq dlqqq merged commit da1f5f6 into jupyterlab:main Nov 1, 2024
10 checks passed
@dlqqq dlqqq (Member) commented Nov 1, 2024

@meeseeksdev please backport to v3-dev

meeseeksmachine pushed a commit to meeseeksmachine/jupyter-ai that referenced this pull request Nov 1, 2024
dlqqq pushed a commit that referenced this pull request Nov 1, 2024
…/ history (#1080)

Co-authored-by: Alan Meeson <alanmeeson@users.noreply.github.com>
@dlqqq dlqqq mentioned this pull request Nov 4, 2024