Describe the bug
Today I installed Open Interpreter and want to use it locally.
But when I use a local Ollama model, the tool call is printed as raw JSON instead of being offered for execution:
Loading qwen2.5-coder:32b...
Model loaded.
Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.
list all files in my Images directory
{"name": "execute", "arguments":{"language": "shell", "code": "dir %USERPROFILE%\Pictures"}}
browse to cnn.com
{"name": "execute", "arguments":{"language": "python", "code":
"computer.browser.setup(headless=False)\ncomputer.browser.go_to_url('https://www.cnn.com')"}}
But when I use an OpenAI model, I get the expected output:
list all files in my Images directory
ls ~/Images
Would you like to run this code? (y/n)
If you need further actions or details on any specific file, please let me know!
browse to cnn.com
computer.browser.go_to_url('https://www.cnn.com')
Would you like to run this code? (y/n)
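For what it's worth, the raw output appears to be a well-formed OpenAI-style tool call rather than free text, so the model is emitting the call but Open Interpreter is not intercepting it. A minimal check (the JSON below is copied from the browser example above, shortened to a single statement for readability):

```python
import json

# Raw text the local model printed, i.e. the browser example from the
# transcript above, reduced to one statement
raw = ('{"name": "execute", "arguments": '
       '{"language": "python", '
       '"code": "computer.browser.go_to_url(\'https://www.cnn.com\')"}}')

call = json.loads(raw)
print(call["name"])                    # "execute"
print(call["arguments"]["language"])   # "python"
```

So the payload parses cleanly; it just reaches the console verbatim instead of being executed.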
Reproduce
interpreter
list all files in my Images directory
{"name": "execute", "arguments":{"language": "shell", "code": "dir %USERPROFILE%\Pictures"}}
browse to cnn.com
{"name": "execute", "arguments":{"language": "python", "code":
"computer.browser.setup(headless=False)\ncomputer.browser.go_to_url('https://www.cnn.com')"}}
Expected behavior
computer.browser.go_to_url('https://www.cnn.com')
Would you like to run this code? (y/n)
Screenshots
I would expect

computer.browser.go_to_url('https://www.cnn.com')
Would you like to run this code? (y/n)

instead of

{"name": "execute", "arguments":{"language": "python", "code":
"computer.browser.setup(headless=False)\ncomputer.browser.go_to_url('https://www.cnn.com')"}}
Open Interpreter version
Version: 0.4.3
Python version
Python 3.11.11
Operating System name and version
Windows 11
Additional context
No response