
Maximum context length exceeded after read_file, ingest_file, search_files #2801

Closed
DiNaSoR opened this issue Apr 21, 2023 · 17 comments · Fixed by #3222
Labels
bug Something isn't working function: workspace

Comments

@DiNaSoR

DiNaSoR commented Apr 21, 2023

Description

When attempting to analyze a code file using the Auto-GPT project, I encountered an error due to exceeding the maximum context length of the OpenAI model. The model's maximum context length is 8191 tokens, but my request used 19023 tokens.

Steps to Reproduce

  1. Run Auto-GPT in GPT3.5 only mode
  2. Set up an AI with the following parameters:
    • AI Name: Yoyo
    • Role: Lua coder
    • Goal 1: Improve the code file WoWinArabic_Chat.lua, document it, then save it.
  3. Authorize the analyze_code command with the code file WoWinArabic_Chat.lua

Expected Behavior

The AI should be able to analyze the code and suggest improvements without exceeding the maximum context length.

Actual Behavior

The program encountered an error due to exceeding the maximum context length of the OpenAI model (19023 tokens used, while the limit is 8191 tokens).

Possible Solution

Consider implementing a method for breaking down the code file into smaller sections or reducing the context length by removing unnecessary content before passing it to the OpenAI model.
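A minimal sketch of what such chunking could look like (this is an illustration, not Auto-GPT's actual code; the 4-characters-per-token ratio is a rough heuristic, and a real implementation would count tokens with a proper tokenizer such as tiktoken):

```python
MAX_CONTEXT_TOKENS = 8191   # limit reported in the error message
CHARS_PER_TOKEN = 4         # rough average for English text and code


def chunk_source(text: str, max_tokens: int = MAX_CONTEXT_TOKENS // 2) -> list[str]:
    """Split text on line boundaries so each chunk stays under max_tokens (approx.)."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        # Flush the current chunk before it would exceed the budget.
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Each chunk could then be analyzed separately and the per-chunk suggestions merged.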

Additional Context

  • Auto-GPT version: 0.2.2
  • Python version: 3.11
  • Operating System: Win11

Please let me know if there are any workarounds or if a fix is planned for this issue.

F:\Projects\autogpt\Auto-GPT>python -m autogpt --gpt3only
Warning: The file 'auto-gpt.json' does not exist. Local memory would not be saved to a file.
GPT3.5 Only Mode:  ENABLED
NEWS:  ::UPDATED:: Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag
Welcome to Auto-GPT!  run with '--help' for more information.
Create an AI-Assistant:  Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI:  For example, 'Entrepreneur-GPT'
AI Name: Yoyo
Yoyo here!  I am at your service.
Describe your AI's role:  For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
Yoyo is: you are lua coder
Enter up to 5 goals for your AI:  For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: improve the code file WoWinArabic_Chat.lua and documented it then save it.
Goal 2:
Using memory of type:  LocalCache
Using Browser:  chrome
 THOUGHTS:  I think I should start by analyzing the code file WoWinArabic_Chat.lua to see what improvements can be made.
REASONING:  Analyzing the code file will give me a better understanding of the current state of the code and help me identify areas that can be improved.
PLAN:
-  Analyze the code file
-  Identify areas for improvement
-  Make necessary changes
-  Document changes
-  Save the file
CRITICISM:  I need to make sure that I am thorough in my analysis and that I document all changes that I make.
NEXT ACTION:  COMMAND = analyze_code ARGUMENTS = {'code': 'WoWinArabic_Chat.lua'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
SYSTEM:  Command analyze_code returned: I'm sorry, but you forgot to provide the code for me to analyze. Please provide the code so I can give you suggestions for improvements.
 THOUGHTS:  I need to read the code file WoWinArabic_Chat.lua to get the code and analyze it.
REASONING:  I need to read the code file to get the code and analyze it.
PLAN:
-  Read the code file
-  Analyze the code
-  Identify areas for improvement
-  Make necessary changes
-  Document changes
-  Save the file
CRITICISM:  I need to make sure that I am thorough in my analysis and that I document all changes that I make.
NEXT ACTION:  COMMAND = read_file ARGUMENTS = {'file': 'WoWinArabic_Chat.lua'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "F:\Projects\autogpt\Auto-GPT\autogpt\__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Projects\autogpt\Auto-GPT\autogpt\cli.py", line 151, in main
    agent.start_interaction_loop()
  File "F:\Projects\autogpt\Auto-GPT\autogpt\agent\agent.py", line 184, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "F:\Projects\autogpt\Auto-GPT\autogpt\memory\local.py", line 76, in add
    embedding = create_embedding_with_ada(text)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Projects\autogpt\Auto-GPT\autogpt\llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\user\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 19023 tokens (19023 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
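Note that the traceback shows the overflow happens in memory.add → create_embedding_with_ada, so the failing request is the embedding call, not the chat completion itself. A minimal guard would be to truncate the text before it reaches the embedding endpoint (hypothetical helper, character-based token estimate; not the project's actual fix):

```python
EMBEDDING_TOKEN_LIMIT = 8191  # limit from the error message
CHARS_PER_TOKEN = 4           # rough estimate, not an exact token count


def truncate_for_embedding(text: str, token_limit: int = EMBEDDING_TOKEN_LIMIT) -> str:
    """Clip oversized input so the embedding request cannot exceed the model limit."""
    max_chars = token_limit * CHARS_PER_TOKEN
    return text if len(text) <= max_chars else text[:max_chars]
```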
@claytondukes

Didn't fix it for me unfortunately

@Jens-AIMLX

I will give it a try, stuck on that same problem since yesterday evening

@Drlordbasil

Didn't fix it for me unfortunately

Yeah, I came back and it still had the error, oof.

@claytondukes

I think I know what it is (at least in my case).

I pre-seeded redis using:

python data_ingestion.py --dir DataImport --overlap 100 --max_length 2000

Changing it to --max_length 1000 helped.
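For context, --max_length and --overlap control chunking roughly like this (a simplified sketch, not the actual data_ingestion.py code): chunks of at most max_length characters, with overlap characters repeated between adjacent chunks, so halving max_length halves the size of each pre-seeded memory entry.

```python
def split_with_overlap(content: str, max_length: int = 2000, overlap: int = 100):
    """Yield chunks of at most max_length chars, overlapping by `overlap` chars."""
    step = max_length - overlap
    for start in range(0, len(content), step):
        yield content[start:start + max_length]
```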

@claytondukes

Nope, still bugged out:

NEXT ACTION:  COMMAND = search_files ARGUMENTS = {'directory': 'docs/'}
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10578 tokens (10578 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

@Jens-AIMLX

I will give it a try, stuck on that same problem since yesterday evening

solved it for me

@Jens-AIMLX

I will give it a try, stuck on that same problem since yesterday evening

solved it for me

The error is back. I cannot really reproduce why; I will investigate it further.

@Pwuts
Member

Pwuts commented Apr 22, 2023

Nope, still bugged out:

NEXT ACTION:  COMMAND = search_files ARGUMENTS = {'directory': 'docs/'}

Hold on there, that is a different command and issues with it should be treated separately. @claytondukes

@Pwuts Pwuts added the bug Something isn't working label Apr 22, 2023
@Pwuts Pwuts changed the title OpenAI's model maximum context length exceeded when trying to analyze a code file Prompt overflow issues with read_file and search_files Apr 22, 2023
@Pwuts Pwuts changed the title Prompt overflow issues with read_file and search_files Maximum context length exceeded in read_file, search_files Apr 22, 2023
@Pwuts Pwuts changed the title Maximum context length exceeded in read_file, search_files Maximum context length exceeded after read_file, search_files Apr 22, 2023
@Pwuts Pwuts moved this to 🔖 Ready in AutoGPT development kanban Apr 22, 2023
@Pwuts Pwuts changed the title Maximum context length exceeded after read_file, search_files Maximum context length exceeded after read_file, ingest_file, search_files Apr 22, 2023
@richstokes

FYI, this seems to be fixed in the master branch, if anyone needs a quick solution.

This was referenced Apr 24, 2023
@finster869

I am still having the issue after updating to master: openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8481 tokens (8481 in your prompt; 0 for the completion). Please reduce your prompt; or completion length. I received this error after it read a local file I asked it to summarize.

@GoMightyAlgorythmGo

GoMightyAlgorythmGo commented Apr 25, 2023

STABLE BRANCH: this has been endless for 4 days now (the error is as old as 2 weeks, but it somehow came back). It now crashes multiple times in a row, even on a single command:

-  Use the 'search_files' command to find all files in the system.
-  Use the 'delete_file' command to remove useless files.
-  Use the 'write_to_file' command to document the name and function of useful files.
CRITICISM:  I need to be careful not to delete any files that are actually useful. I should also make sure to document all useful files, even if they seem insignificant at the moment.
NEXT ACTION:  COMMAND = search_files ARGUMENTS = {'directory': '/'}
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Programming\Auto-GPT_AI_8\autogpt\__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Programming\Auto-GPT_AI_8\autogpt\cli.py", line 151, in main
    agent.start_interaction_loop()
  File "C:\Programming\Auto-GPT_AI_8\autogpt\agent\agent.py", line 184, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "C:\Programming\Auto-GPT_AI_8\autogpt\memory\pinecone.py", line 47, in add
    vector = create_embedding_with_ada(data)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Programming\Auto-GPT_AI_8\autogpt\llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 135580 tokens (135580 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

C:\Programming\Auto-GPT_AI_8>python -m autogpt --gpt3only --use-memory pinecone --continuous
Error creating Redis search index:  Index already exists
Continuous Mode:  ENABLED
WARNING:  Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk.
GPT3.5 Only Mode:  ENABLED
NEWS:  Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag # INCLUDED COMMAND 'send_tweet' IS DEPRICATED, AND WILL BE REMOVED IN THE NEXT STABLE RELEASE Base Twitter functionality (and more) is now covered by plugins: https://github.com/Significant-Gravitas/Auto-GPT-Plugins ## Changes to Docker configuration The workdir has been changed from /home/appuser to /app. Be sure to update any volume mounts accordingly.
Welcome back!  Would you like me to return to being AI_GPT_8?
Continue with the last settings?
Name:  AI_GPT_8
Role:  an AI to iteratively and autonomously develop the project's main goal being a free language model-based system on the user's PC, leveraging meta-learning with the help of neural networks, machine learning, etc. AI_GPT_8 should then improve upon the free language model serving as a potential core of the application. Investigate inconsistencies to make informed decisions. Ensure your future self can easily access and recall essential information, as working memory is limited. To prevent setbacks, regularly test with this question: "How will I benefit from or retrieve context-relevant details from a vast amount of information after forgetting or losing my memory?" When unsure of the next steps or stuck in loops, evaluate various tools, future enhancements, or intermediate goals to progress or expand your options.
Goals: ["'Check files and remove them if they are useless and if they are useful document the name and the files function.'\n", "'Developing strategies to maintain continuous progress, avoid excessive planning, and prevent random or unrelated actions. Focus on understanding the project's core objectives, differentiating between AI_GPT_8/Auto-GPT vs the projectendgoal/application being developed. Regularly review and prioritize tasks, ensuring alignment with the project's goals. Implement checkpoints and feedback loops to identify areas for improvement or automation, making adjustments to the development process as needed. When progress is slow, seriously contemplate moving on to the next goal.'\n", "'Continue the iterative development of the project's end goal/application, leveraging the user's high-performance gaming PC (year 2023) to enhance a language model with meta-learning. Balance planning with actions while considering uncompleted tasks and previous progress. Reflect on CMD output if helpful. Engage in reasonable, time-limited experimentation to gain insights that benefit the project, while staying mindful of potential loops or unproductive activities. When progress is slow, seriously consider moving on to the next goal.'\n", "'For a limited time and action window, explore strategies to overcome memory limitations and the impact of frequent shutdowns/restarts on the development process. Develop rapid recall methods and context restoration for unfinished tasks, preparing and saving context for future continuation. Periodically review the overall plan and improve approaches to mitigate work repetition due to cache-memory and context loss. If progress is stalled, focus on tasks that bring you closer to the goal. Avoid inactivity and evaluate multiple options to enhance efficiency. When progress slows, seriously consider moving on to the next goal.'\n", "'Adopt best practices in iterative and modular development to optimize a consistent and efficient workflow. 
Consider future trends, overlapping needs, subgoals, plans, and issues while spending a reasonable amount of time on research for long-term project benefits. Identify improvement or automation opportunities using periodic checkpoints and feedback systems. Recognize inefficiencies or resource gaps, noting areas for enhancement, which could be helpful for suggestions or adjustments. Create clear, accessible reminders of essential information and tasks. Regularly review these reminders to prevent forgetting and maintain progress. Adapt the development process accordingly. When progress is slow, seriously consider moving on to the next goal.'\n"]
Continue (y/n): y
Using memory of type:  PineconeMemory
Using Browser:  chrome
 THOUGHTS:  I suggest we start by checking the files in the current directory to see if there are any useless files that can be removed. We can then document the name and function of any useful files. This will help us to better understand the project and ensure that we are only working with relevant files.
REASONING:  Checking the files in the current directory is a good starting point as it will help us to identify any files that are no longer needed. This will help us to free up space and ensure that we are only working with relevant files. Documenting the name and function of any useful files will also help us to better understand the project and ensure that we are only working with relevant files.
PLAN:
-  Check files in current directory
-  Remove useless files
-  Document name and function of useful files
CRITICISM:  We should be careful not to accidentally delete any important files. We should also ensure that we are only documenting files that are actually useful to the project.
NEXT ACTION:  COMMAND = search_files ARGUMENTS = {'directory': '.'}
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Programming\Auto-GPT_AI_8\autogpt\__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1635, in invoke
    rv = super().invoke(ctx)
         ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Programming\Auto-GPT_AI_8\autogpt\cli.py", line 151, in main
    agent.start_interaction_loop()
  File "C:\Programming\Auto-GPT_AI_8\autogpt\agent\agent.py", line 184, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "C:\Programming\Auto-GPT_AI_8\autogpt\memory\pinecone.py", line 47, in add
    vector = create_embedding_with_ada(data)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Programming\Auto-GPT_AI_8\autogpt\llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 135615 tokens (135615 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\marco\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 135598 tokens (135598 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

C:\Programming\Auto-GPT_AI_8>python -m autogpt --gpt3only --use-memory pinecone --continuous
Error creating Redis search index:  Index already exists
Continuous Mode:  ENABLED
WARNING:  Continuous mode is not recommended. It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorise. Use at your own risk.
GPT3.5 Only Mode:  ENABLED
NEWS:  Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag # INCLUDED COMMAND 'send_tweet' IS DEPRICATED, AND WILL BE REMOVED IN THE NEXT STABLE RELEASE Base Twitter functionality (and more) is now covered by plugins: https://github.com/Significant-Gravitas/Auto-GPT-Plugins ## Changes to Docker configuration The workdir has been changed from /home/appuser to /app. Be sure to update any volume mounts accordingly.
Welcome back!  Would you like me to return to being AI_GPT_8?
Continue with the last settings?
Name:  AI_GPT_8
Role:  an AI to iteratively and autonomously develop the project's main goal being a free language model-based system on the user's PC, leveraging meta-learning with the help of neural networks, machine learning, etc. AI_GPT_8 should then improve upon the free language model serving as a potential core of the application. Investigate inconsistencies to make informed decisions. Ensure your future self can easily access and recall essential information, as working memory is limited. To prevent setbacks, regularly test with this question: "How will I benefit from or retrieve context-relevant details from a vast amount of information after forgetting or losing my memory?" When unsure of the next steps or stuck in loops, evaluate various tools, future enhancements, or intermediate goals to progress or expand your options.
Goals: ["'Check files and remove them if they are useless and if they are useful document the name and the files function.'\n", "'Developing strategies to maintain continuous progress, avoid excessive planning, and prevent random or unrelated actions. Focus on understanding the project's core objectives, differentiating between AI_GPT_8/Auto-GPT vs the projectendgoal/application being developed. Regularly review and prioritize tasks, ensuring alignment with the project's goals. Implement checkpoints and feedback loops to identify areas for improvement or automation, making adjustments to the development process as needed. When progress is slow, seriously contemplate moving on to the next goal.'\n", "'Continue the iterative development of the project's end goal/application, leveraging the user's high-performance gaming PC (year 2023) to enhance a language model with meta-learning. Balance planning with actions while considering uncompleted tasks and previous progress. Reflect on CMD output if helpful. Engage in reasonable, time-limited experimentation to gain insights that benefit the project, while staying mindful of potential loops or unproductive activities. When progress is slow, seriously consider moving on to the next goal.'\n", "'For a limited time and action window, explore strategies to overcome memory limitations and the impact of frequent shutdowns/restarts on the development process. Develop rapid recall methods and context restoration for unfinished tasks, preparing and saving context for future continuation. Periodically review the overall plan and improve approaches to mitigate work repetition due to cache-memory and context loss. If progress is stalled, focus on tasks that bring you closer to the goal. Avoid inactivity and evaluate multiple options to enhance efficiency. When progress slows, seriously consider moving on to the next goal.'\n", "'Adopt best practices in iterative and modular development to optimize a consistent and efficient workflow. 
Consider future trends, overlapping needs, subgoals, plans, and issues while spending a reasonable amount of time on research for long-term project benefits. Identify improvement or automation opportunities using periodic checkpoints and feedback systems. Recognize inefficiencies or resource gaps, noting areas for enhancement, which could be helpful for suggestions or adjustments. Create clear, accessible reminders of essential information and tasks. Regularly review these reminders to prevent forgetting and maintain progress. Adapt the development process accordingly. When progress is slow, seriously consider moving on to the next goal.'\n"]
Continue (y/n):
Using memory of type:  PineconeMemory
Using Browser:  chrome
 THOUGHTS:  I suggest we start by checking the files in the current directory to see if there are any useless files that can be removed. We can then document the name and function of any useful files. This will help us to better understand the project and streamline our development process.
REASONING:  Checking the files in the current directory is a good first step to understanding the project and identifying any unnecessary files. Documenting the name and function of useful files will also help us to better organize our work and avoid duplication of effort.
PLAN:
-  Check files in current directory
-  Remove any useless files
-  Document name and function of useful files
CRITICISM:  I need to be careful not to accidentally delete any important files. I should also make sure to document the function of each useful file accurately.
NEXT ACTION:  COMMAND = search_files ARGUMENTS = {'directory': '.'}
Traceback (most recent call last):
  [traceback identical to the previous run]
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 135579 tokens (135579 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

C:\Programming\Auto-GPT_AI_8>python -m autogpt --gpt3only --use-memory pinecone --continuous
[startup banner, AI name, role, and goals identical to the previous run]
Continue (y/n): y
Using memory of type:  PineconeMemory
Using Browser:  chrome
 THOUGHTS:  I think the next step should be to check for useless files and document the useful ones. This will help us clean up the system and have a better understanding of what files we have. We can use the 'search_files' command to find all files in a directory and then delete the useless ones using the 'delete_file' command. For the useful files, we can document their names and functions using the 'write_to_file' command.
REASONING:  Cleaning up the system and documenting useful files will help us have a better understanding of what we have and what we need to work with. This will also help us avoid confusion and save time in the long run.
PLAN:
-  Use the 'search_files' command to find all files in a directory
-  Use the 'delete_file' command to delete useless files
-  Use the 'write_to_file' command to document the names and functions of useful files
CRITICISM:  I need to ensure that I am not deleting any important files and that I am documenting all useful files. I should also make sure that I am not spending too much time on this task and that I am moving on to the next goal if progress is slow.
NEXT ACTION:  COMMAND = search_files ARGUMENTS = {'directory': '/'}
Traceback (most recent call last):
  [traceback identical to the previous run]
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 135665 tokens (135665 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

C:\Programming\Auto-GPT_AI_8>
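The tracebacks above all point to the same spot: `memory.add` hands the full accumulated text to `create_embedding_with_ada`, which forwards it to `openai.Embedding.create` unchecked. A minimal sketch of a guard that clips the text before embedding — `truncate_for_embedding` is a hypothetical helper name, and the 4-characters-per-token ratio is a rough English-text heuristic (a real fix would count tokens exactly, e.g. with `tiktoken`):

```python
EMBEDDING_TOKEN_LIMIT = 8191  # context window reported in the error above

def truncate_for_embedding(text: str,
                           max_tokens: int = EMBEDDING_TOKEN_LIMIT,
                           chars_per_token: int = 4) -> str:
    """Clip memory text so the embedding request stays under the limit.

    chars_per_token=4 is a crude approximation; swap in an exact
    tokenizer (e.g. tiktoken) for production use.
    """
    max_chars = max_tokens * chars_per_token
    return text[:max_chars]
```

With a guard like this, `self.memory.add(truncate_for_embedding(memory_to_add))` would degrade gracefully instead of raising `InvalidRequestError`, at the cost of silently dropping the tail of very long memories.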

@Pwuts
Member

Pwuts commented Apr 26, 2023

@GoMightyAlgorythmGo have you tried raising the temperature a bit in the configuration?

@talkahe

talkahe commented Apr 26, 2023

This issue also applies to the execute_shell command, as shown in #3244.

@ram-sh

ram-sh commented Apr 29, 2023

I'm facing the same error (on stable) when working with code files; Auto-GPT crashes when trying to read large files:

openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8433 tokens (8433 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

Maybe wrapping the call in a simple try/except could at least eliminate the need to restart the Auto-GPT task?
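A sketch of that idea — wrapping the embedding call so an oversized prompt is reported instead of crashing the whole run. `safe_embed` and the `create_fn` parameter are hypothetical names; in Auto-GPT the except clause would target `openai.error.InvalidRequestError` specifically rather than a bare `Exception`:

```python
def safe_embed(create_fn, text):
    """Call an embedding function, turning a too-long-prompt failure
    into a recoverable error value instead of a crash.

    create_fn stands in for openai.Embedding.create here.
    """
    try:
        return create_fn(input=[text])
    except Exception as exc:  # narrow to InvalidRequestError in real code
        return {"error": f"embedding skipped: {exc}"}
```

The agent loop could then log the returned error and continue with the next action instead of terminating.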

@andrasfe

Same here: openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 128580 tokens (128580 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

@Pwuts
Member

Pwuts commented Apr 29, 2023

We are aware of this issue and working on it. Instead of commenting, please upvote the issue if you're having the same problem.

@arrfonseca

arrfonseca commented Apr 30, 2023

Same issue today. It came up when using read_file on a .txt (ebook):
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 304213 tokens (304213 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
I'm on the stable branch.
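Reading a large file in bounded pieces, rather than handing a whole ebook to the model in one request, is the usual workaround for this class of failure. A minimal sketch — `read_file_chunked` is a hypothetical helper, and the chars-per-token ratio is again only an approximation of the real token count:

```python
def read_file_chunked(path, max_tokens=8191, chars_per_token=4):
    """Yield a large file in pieces small enough to embed or
    summarize one request at a time."""
    max_chars = max_tokens * chars_per_token
    with open(path, encoding="utf-8", errors="replace") as fh:
        while True:
            piece = fh.read(max_chars)
            if not piece:
                break
            yield piece
```

Each piece could then be summarized or embedded separately and the results combined, which keeps every individual request under the 8191-token window.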
