Maximum context length exceeded after read_file, ingest_file, search_files #2801
Comments
Didn't fix it for me, unfortunately.
I will give it a try; I've been stuck on that same problem since yesterday evening.
Yeah, came back and it still had the error, oof.
I think I know what it is (at least in my case). I pre-seeded Redis using:
Changing it to
Nope, still bugged out:
That solved it for me.
The error is back. I can't really reproduce why; I'll investigate it further.
Hold on there, that is a different command, and issues with it should be treated separately. @claytondukes
FYI, this seems to be fixed in the master branch, if anyone needs a quick solution.
I am still having the issue after updating to master: openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8481 tokens (8481 in your prompt; 0 for the completion). Please reduce your prompt; or completion length. I received this error after it read a local file I asked it to summarize.
STABLE BRANCH: this has been endless for 4 days now (the error is as old as 2 weeks, but it somehow came back; it's crazy how these errors return). It's constant; look at this, now it's crashing on even a single command multiple times in a row:
@GoMightyAlgorythmGo have you tried raising the temperature a bit in the configuration?
This issue also applies for
I'm facing the same error (using
maybe adding a simple
Same here: openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 128580 tokens (128580 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
We are aware of this issue and working on it. Instead of commenting, please upvote the issue if you're having the same problem.
Same issue today. It came up when using read_file on a .txt file (an ebook).
Description
When attempting to analyze a code file using the Auto-GPT project, I encountered an error due to exceeding the maximum context length of the OpenAI model. The model's maximum context length is 8191 tokens, but my request used 19023 tokens.
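(For reference, the prompt size can be checked locally before any request is sent. Below is a minimal sketch using the tiktoken library; the model name, file name, and the count_tokens helper are my own choices for illustration, not part of Auto-GPT.)

```python
import tiktoken

MAX_CONTEXT_TOKENS = 8191  # limit reported in the error message

def count_tokens(text: str, model: str = "gpt-4") -> int:
    """Return the number of tokens `text` occupies for the given model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

with open("my_code_file.py") as f:  # hypothetical file being summarized
    prompt = f.read()

if count_tokens(prompt) > MAX_CONTEXT_TOKENS:
    print("This prompt would exceed the model's context window.")
```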
Steps to Reproduce
Expected Behavior
The AI should be able to analyze the code and suggest improvements without exceeding the maximum context length.
Actual Behavior
The program encountered an error due to exceeding the maximum context length of the OpenAI model (19023 tokens used, while the limit is 8191 tokens).
Possible Solution
Consider breaking the code file into smaller sections, or reducing the context length by removing unnecessary content, before passing it to the OpenAI model.
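As a rough sketch of that idea (not Auto-GPT's actual implementation; the chunk size, model name, and helper name are assumptions), a file could be split into token-bounded chunks and each chunk processed separately:

```python
import tiktoken

def split_into_chunks(text: str, max_tokens: int = 4000,
                      model: str = "gpt-4") -> list[str]:
    """Split `text` into pieces of at most `max_tokens` tokens each."""
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    return [
        encoding.decode(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

# Each chunk stays well under the 8191-token limit and can be summarized
# or embedded on its own, with the partial results combined afterwards.
```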
Additional Context
Please let me know if there are any workarounds or if a fix is planned for this issue.