LangChain Integration #60
I don't think we need to take care of this anymore. Before, there were separate "completion" and "edits" endpoints, but now we only have a "chat" endpoint, I believe. Let's research a little bit, but I think we only need the `ChatOpenAI` class here.
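For context, a minimal sketch of what the `ChatOpenAI`-only path could look like (model name and prompts are placeholders, not the project's actual values):

```python
# Minimal sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set.
# An "edits"-style request maps onto the single chat endpoint as a plain message.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)  # placeholder model name

response = llm.invoke(
    [
        ("system", "You are a copy editor. Revise the user's paragraph."),
        ("human", "Ths paragrah has sevral typos."),
    ]
)
print(response.content)
```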
What are those fields in `params`?
Looking at it again, "a lot" is an overstatement, sorry. On top of the `model_parameters` dict that gets merged into it, and aside from `prompt` (or the other variants, based on whether it's a "chat" or "edits" model), `GPT3CompletionModel.get_params()` introduces just:

- `n`: I assume this is the number of responses you want the API to generate. `invoke()` returns a single response anyway, so I assume we can ignore this one.
- `stop`: despite being `None` all the time and probably not necessary to include, `invoke()` does take `stop` as an argument, so I'll just go ahead and add it.
- `max_tokens`: it seems this is taken at client initialization in LangChain; I couldn't find a way to pass it in the `invoke()` call, or to change its value prior to the call.

Correct me if I'm wrong, but since `model_parameters` is already used to initialize the client, and since AFAICT it's not changed after that, I don't think we need to include its contents in `invoke()`.

I'll go ahead and make the other changes, though.
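To make the split concrete, here's a hedged sketch of where each field would land under that reading (model name and prompt are illustrative, not the project's actual code):

```python
# Illustrative sketch only: fields fixed at client initialization vs. per call.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-3.5-turbo",  # placeholder; would come from model_parameters
    temperature=0.5,        # example model_parameters entry, set once at init
    n=1,                    # invoke() returns a single generation, so the default is fine
)

# stop is an explicit invoke() argument, so it can simply be forwarded per call.
response = llm.invoke("Revise this paragraph ...", stop=None)
```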
If I didn't forget what the code does, the only field that should go into each request/`invoke()` (instead of being used to initialize the client) is `max_tokens`, because for each paragraph we restrict the model to generating up to twice (or so) the number of tokens in the input paragraph. So that should go into each request, not the client (or should we update the client before each request?).
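A sketch of that per-paragraph budget (token counting via tiktoken is an assumption here; the project may derive the limit differently):

```python
# Sketch: cap generation at roughly twice the input paragraph's token count.
import tiktoken
from langchain_openai import ChatOpenAI

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
llm = ChatOpenAI(model="gpt-3.5-turbo")  # placeholder model name

def revise(paragraph: str) -> str:
    budget = 2 * len(enc.encode(paragraph))
    # bind() attaches the per-request budget without rebuilding the client
    return llm.bind(max_tokens=budget).invoke(f"Revise: {paragraph}").content
```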
Right, after I made the comment above I discovered that `invoke()` does take `max_tokens` as well as `stop`; I've added it in my most recent commits. I assume we still don't need to change `n` from 1, which AFAICT is the default for `invoke()` as well, so I left that out of the call to `invoke()`.
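So the resulting call shape would be roughly this (a sketch; the prompt and budget values are placeholders):

```python
# Sketch of the resulting call: max_tokens and stop travel with each invoke(),
# while n stays at its default of 1.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")  # placeholder model name

response = llm.invoke(
    "Revise this paragraph ...",
    stop=None,       # forwarded from the caller, usually None
    max_tokens=256,  # per-paragraph budget computed beforehand
)
```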