RAG with GPT-4o: Calculated available context size -271 was not non-negative
LlamaIndex exception.
#1372
Comments
Try adding "max_token: 2048" to the config2.yaml file as follows. Note: 2048 is an int, not a string.
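A minimal sketch of what that config2.yaml change might look like. The surrounding fields follow MetaGPT's documented example config; the api_key value is a placeholder, and max_token is the addition the comment describes:

```yaml
llm:
  api_type: "openai"
  model: "gpt-4o"
  api_key: "YOUR_API_KEY"   # placeholder
  max_token: 2048           # int, not a quoted string
```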
Useful answer!
Since no further responses are needed, we will close it. Please reopen it if necessary.
This issue can also occur when you create an index with an embedding model. Make sure to set the
For me the problem was in PromptHelper; I fixed it using
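The error message in the issue title comes down to simple token arithmetic: LlamaIndex subtracts the prompt and reserved-output tokens from the model's context window, and errors out if the result is negative. A minimal sketch of that arithmetic, with illustrative names and token counts (the function below is not the actual LlamaIndex internal, just the same calculation):

```python
def available_context_size(context_window: int, prompt_tokens: int, num_output: int) -> int:
    """Tokens left for retrieved context after the prompt and the
    reserved output budget are subtracted from the context window."""
    return context_window - prompt_tokens - num_output

# With the 3900-token default used for unknown models, a moderately
# large prompt plus a reserved output budget goes negative:
print(available_context_size(3900, 3915, 256))    # -271, triggers the exception

# With gpt-4o's real 128k context window, the same request fits easily:
print(available_context_size(128000, 3915, 256))  # 123829
```

This is why the workarounds above help: lowering max_token shrinks the subtracted terms, while fixing the context window (e.g. via PromptHelper or the token table) enlarges the first term.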
Bug description
Hi, I have been struggling to run RAG with GPT-4o in v0.8.1 of MetaGPT.
When I run the first code example, the following error occurs:
This is my configuration file:
Bug solved method
I have checked the code and found that this happens because the context size of the gpt-4o model is not defined in the metagpt/utils/token_counter.py file (the same is true for gpt-4-turbo, which is not so recent). Therefore, the default context size (3900) is used, resulting in this error. The exception is thrown by LlamaIndex and is not informative enough to understand what is going on.
This problem should be handled internally by MetaGPT. Adding a context_size field to the configuration file may also be useful: it would allow users to use models that are not yet supported, as well as limit the length of requests sent to the LLM provider (if there were a reason to do so).
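The fix described above amounts to registering the missing models in the token-limit table that metagpt/utils/token_counter.py keeps. The sketch below is illustrative: the dict name TOKEN_MAX and the exact entries are assumptions about that file's layout, and only the two added entries reflect the change this issue asks for.

```python
# Assumed shape of the model -> context-size table in
# metagpt/utils/token_counter.py (names and values are illustrative).
TOKEN_MAX = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    # Entries missing from the table cause the 3900-token default below
    # to be used, which is what produces the negative-context error:
    "gpt-4-turbo": 128000,
    "gpt-4o": 128000,
}

DEFAULT_CONTEXT_SIZE = 3900  # fallback for models not in the table

def context_size(model: str) -> int:
    """Look up a model's context window, falling back to the default."""
    return TOKEN_MAX.get(model, DEFAULT_CONTEXT_SIZE)

print(context_size("gpt-4o"))         # 128000
print(context_size("unknown-model"))  # 3900
```

A context_size field in the configuration file, as proposed above, could simply override this lookup for models the table does not know about.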