
FileNotFoundError #635

Closed
mrahjoo opened this issue Oct 12, 2023 · 18 comments · Fixed by #643
Labels
Bug Something isn't working

Comments

@mrahjoo

mrahjoo commented Oct 12, 2023

Describe the bug

Traceback (most recent call last):
File "D:\Python311\Lib\site-packages\interpreter\code_interpreters\subprocess_code_interpreter.py", line 65, in run
self.start_process()
File "D:\Python311\Lib\site-packages\interpreter\code_interpreters\subprocess_code_interpreter.py", line 43, in start_process
self.process = subprocess.Popen(self.start_cmd.split(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Python311\Lib\subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "D:\Python311\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified

Reproduce

After updating the interpreter this happened, and no Python code can be executed.

Expected behavior

Python code should run successfully when attempting to execute a Python command.

Screenshots

No response

Open Interpreter version

0.1.9

Python version

3.11.5

Operating System name and version

Microsoft Windows 11 Home

Additional context

No response

@mrahjoo mrahjoo added the Bug Something isn't working label Oct 12, 2023
@kllgjc

kllgjc commented Oct 12, 2023

same, can't figure it out

@umer-rs

umer-rs commented Oct 13, 2023

It's due to this commit: a294b34

@xinranli0809

same error

@nielsbosma

I have the same issue.

@hashimotogigantes

hashimotogigantes commented Oct 15, 2023

Same!

@Shivanandroy

Is there any alternative? What changes need to be made in the source code (interpreter/code_interpreters/languages/python.py)?

@hashimotogigantes

I have found a solution. Change this line in code_interpreters\languages\python.py:

    self.start_cmd = shlex.quote(sys.executable) + " -i -q -u"

to the following:

    executable = shlex.quote(sys.executable)
    if executable.startswith("'") and executable.endswith("'"):
        executable = executable[1:-1]
    self.start_cmd = executable + " -i -q -u"

@leifktaylor
Contributor

leifktaylor commented Oct 15, 2023

Update:
I have reproduced the bug; it's exclusive to Windows.

Resolution PR: #643

It's due to quoting sys.executable:
In the Python class, the code uses shlex.quote(sys.executable) to construct the start_cmd. The shlex.quote function handles shell-escaping of strings for Unix-like environments. On Windows, however, this is problematic: it can add single quotes around the executable path, which Windows doesn't interpret the way Unix-like OSs do.
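This failure mode is easy to demonstrate in isolation: on a typical Windows path, shlex.quote wraps the string in single quotes (the backslashes and the space trigger quoting), and the later start_cmd.split() then hands CreateProcess a mangled executable name. A minimal repro (the path below is just an example):

```python
import shlex

# A typical Windows interpreter path. The backslashes and the space make
# shlex.quote wrap the whole path in POSIX-style single quotes, which
# Windows' CreateProcess does not understand.
win_path = r"C:\Program Files\Python311\python.exe"
quoted = shlex.quote(win_path)
print(quoted)  # 'C:\Program Files\Python311\python.exe' (note the quotes)

# subprocess_code_interpreter.py then splits the command on whitespace,
# so the first argv entry becomes a file name that does not exist:
start_cmd = quoted + " -i -q -u"
print(start_cmd.split()[0])  # 'C:\Program  -> FileNotFoundError: [WinError 2]
```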

By replacing this line in code_interpreters/languages/python.py:

    self.start_cmd = shlex.quote(sys.executable) + " -i -q -u"

With:

    executable = sys.executable
    if os.name != "nt":  # not Windows
        executable = shlex.quote(executable)
    self.start_cmd = executable + " -i -q -u"

The issue is resolved on Windows systems.
This was pointed out earlier by @hashimotogigantes.

@KillianLucas @jordanbtucker

Error reproduced output:

  print('Hello world')


  Traceback (most recent call last):
    File "C:\Users\leifkt\killian\open-interpreter\interpreter\code_interpreters\subprocess_code_interpreter.py", line 65, in run
      self.start_process()
    File "C:\Users\leifkt\killian\open-interpreter\interpreter\code_interpreters\subprocess_code_interpreter.py", line 43, in start_process
      self.process = subprocess.Popen(self.start_cmd.split(),
    File "C:\Users\leifkt\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 969, in __init__
      self._execute_child(args, executable, preexec_fn, close_fds,
    File "C:\Users\leifkt\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1438, in _execute_child

Output after implementing above solution:

  print("hello world")


hello world

Regression test of changes on linux:

stream result: {
"id": "chatcmpl-8A400poNdUfjCTfuwscaibFGi2QSf",
"object": "chat.completion.chunk",
"created": 1697409344,
"model": "gpt-4-0613",
"choices": [
  {
    "index": 0,
    "delta": {},
    "finish_reason": "stop"
  }
]
}

The Python script was executed successfully. It printed out the message: hello world.

leifktaylor added a commit to leifktaylor/open-interpreter that referenced this issue Oct 15, 2023
@leifktaylor leifktaylor mentioned this issue Oct 15, 2023
@pmiddlet72

I referenced this in a couple of other bug reports that appear to be related. This fix appears to work for my use case (which produced the same error dump) as well.

unaidedelf8777 added a commit to unaidedelf8777/open-interpreter that referenced this issue Oct 20, 2023
joshuavial pushed a commit to joshuavial/open-interpreter that referenced this issue Nov 16, 2023
joshuavial pushed a commit to joshuavial/open-interpreter that referenced this issue Nov 16, 2023
@pieterhouwen

Late to the party but I'm running into the same issue and the solution above did not work :(

Here are my details:

Python version: Python 3.11.5
PATH:

...
C:\Users\Pieter>echo %PATH%
C:\Program Files (x86)\VMware\VMware Workstation\bin\;C:\Program Files\Python311\Scripts\;C:\Program Files\Python311\;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\libnvvp;
...

OI version:

C:\Users\Pieter>interpreter --version
Open Interpreter 0.1.17

Lines 24-28 of C:\Users\Pieter\AppData\Roaming\Python\Python311\site-packages\interpreter\core\code_interpreters\languages\python.py:

        else:
            executable = sys.executable
            if os.name != "nt":  # not Windows
                executable = shlex.quote(executable)
            self.start_cmd = executable + " -i -q -u"

My setup:

Open Interpreter connection to LM Studio v0.2.9 using model config:

{
  "name": "deepseek-ai_deepseek-coder-6.7b-instruct",
  "arch": "llama",
  "quant": "Q6_K",
  "context_length": 16384,
  "embedding_length": 4096,
  "num_layers": 32,
  "rope": {
    "freq_base": 100000,
    "dimension_count": 128
  },
  "head_count": 32,
  "head_count_kv": 32,
  "parameters": "7B"
}

GPU offload: 25 layers
Context length: 10000
Prompt eval batch size: 1000
Entire model in RAM: true
CPU threads: 6

@leiftaylorcb

@pieterhouwen Can we get the error message you're getting?
Have you tried this with python 3.10? Just curious.

Need a little bit more error output/stacktrace in order to diagnose root cause.

@pieterhouwen

@leiftaylorcb Here is my complete stacktrace:

  Traceback (most recent call last):
    File "C:\Users\Pieter\AppData\Roaming\Python\Python311\site-packages\interpreter\core\code_interpreters\subprocess_code_interpreter.py", line 77, in run
      self.start_process()
    File "C:\Users\Pieter\AppData\Roaming\Python\Python311\site-packages\interpreter\core\code_interpreters\subprocess_code_interpreter.py", line 48, in start_process
      self.process = subprocess.Popen(
                     ^^^^^^^^^^^^^^^^^
    File "C:\Program Files\Python311\Lib\subprocess.py", line 1026, in __init__
      self._execute_child(args, executable, preexec_fn, close_fds,
    File "C:\Program Files\Python311\Lib\subprocess.py", line 1538, in _execute_child
      hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  FileNotFoundError: [WinError 2] The system cannot find the file specified

Is running interpreter with python3.10 as easy as moving the python3.10 executable path to the top in the PATH variable?

@leiftaylorcb

We'll need to figure out the start_cmd that is being passed in. Can you modify interpreter/core/code_interpreters/subprocess_code_interpreter.py at line 48 to add a print statement:

Here:

    def start_process(self):
        if self.process:
            self.terminate()

        my_env = os.environ.copy()
        my_env["PYTHONIOENCODING"] = "utf-8"
        # Add this line here
        print(self.start_cmd)
        self.process = subprocess.Popen(
            self.start_cmd.split(),
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
            bufsize=0,
            universal_newlines=True,
            env=my_env,
        )

And share the output. We need to know what the start_cmd is to debug.

@pieterhouwen

pieterhouwen commented Dec 21, 2023

Thank you for your response. Using the print statement you provided, I saw that OI is trying to run node -i, which indeed is not installed on my system.

Is this something that OI should pick up on, rather than blaming Python?

@leiftaylorcb

The issue appears to be coming from interpreter/core/respond.py.

The create_code_interpreter function is being called with the language set to javascript (hence why node is the start_cmd).

In respond.py, we determine which interpreter to use (e.g. Python, JavaScript, or HTML) from the interpreter.messages[-1]["language"] key. For whatever reason, when it's trying to give you a response, it thinks the language you want is javascript, so it's instantiating the JavaScript SubprocessCodeInterpreter (core/code_interpreters/languages/javascript.py).
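The dispatch described above can be sketched roughly like this (a simplified, hypothetical reconstruction; the names START_CMDS and pick_start_cmd are illustrative, not Open Interpreter's actual API):

```python
# Hypothetical sketch of the language dispatch described above; simplified,
# not the actual respond.py implementation.
START_CMDS = {
    "python": "python -i -q -u",
    "javascript": "node -i",  # raises FileNotFoundError if node is not installed
}

def pick_start_cmd(messages: list) -> str:
    # The language of the last message decides which external runtime
    # the subprocess interpreter will try to launch.
    language = messages[-1]["language"]
    return START_CMDS[language]

messages = [{"role": "assistant", "language": "javascript", "code": "console.log(1)"}]
print(pick_start_cmd(messages))  # node -i
```

This is why a "Python" error surfaces even though Python is fine: the last code block was JavaScript, so the subprocess layer tried to spawn node.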

@pieterhouwen Are you getting this error after you make your first prompt, or is it happening the moment you start open-interpreter? What question are you asking it?
@KillianLucas Without knowing more about how Open Interpreter makes the language determination, I can't proceed much further here.

One thing that could potentially get you further is to install Node.js, e.g. node 20.10.x.
At that point you should have node in your PATH.

@pieterhouwen

@leiftaylorcb Thanks for the explanation! Yes, I am indeed trying to make it create JavaScript.

For a website I am building, I am asking it to write a function that displays a form and, based on that input, shows a loader icon and then some text. The HTML part (which it does first) works fine, but as soon as I want to check the JS code it crashes.

I installed node and it seems to be working now.

@leiftaylorcb

Excellent! Thanks for your prompt responses. They were very helpful in tracking down and solving the problem!

@leiftaylorcb

@KillianLucas We should probably add exception handling for when the start_cmd (e.g. node -i) fails: Open Interpreter should let the user know that, in order to run the code, they need to install the required application (in this case, Node.js).
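A minimal sketch of what that could look like, assuming a standalone start_process helper (names and structure are illustrative, not OI's actual API): checking PATH up front with shutil.which turns the cryptic WinError 2 into an actionable message.

```python
import shutil
import subprocess

def start_process(start_cmd: str) -> subprocess.Popen:
    """Launch an interpreter subprocess, failing with a helpful message
    when the required runtime (e.g. node) is not installed.

    Sketch only: names and structure are illustrative, not OI's API."""
    executable = start_cmd.split()[0]
    # Check PATH before spawning, so the user sees which runtime is missing
    # instead of a bare FileNotFoundError from CreateProcess/execvp.
    if shutil.which(executable) is None:
        raise RuntimeError(
            f"'{executable}' was not found on PATH. Install it to run code "
            f"in this language (e.g. Node.js for 'node')."
        )
    return subprocess.Popen(
        start_cmd.split(),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
```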
