Merge pull request #1472 from MikeBirdTech/more-cookbooks
More cookbooks
Showing 4 changed files with 474 additions and 0 deletions.
# Add a Custom Tool to your Instance

You can create custom tools for your instance of Open Interpreter. This is extremely helpful for adding new functionality in a reliable way.

First, create a profile and configure your instance:

```python
# Configure Open Interpreter
from interpreter import interpreter

interpreter.llm.model = "claude-3-5-sonnet-20240620"
interpreter.computer.import_computer_api = True
interpreter.llm.supports_functions = True
interpreter.llm.supports_vision = True
interpreter.llm.context_window = 100000
interpreter.llm.max_tokens = 4096
```

Then define your custom tool by writing valid Python code within a string. This example searches the AWS documentation using Perplexity:

```python
custom_tool = """
import os
import requests

def search_aws_docs(query):
    url = "https://api.perplexity.ai/chat/completions"

    payload = {
        "model": "llama-3.1-sonar-small-128k-online",
        "messages": [
            {
                "role": "system",
                "content": "Be precise and concise."
            },
            {
                "role": "user",
                "content": query
            }
        ],
        "temperature": 0.2,
        "top_p": 0.9,
        "return_citations": True,
        "search_domain_filter": ["docs.aws.amazon.com"],
        "return_images": False,
        "return_related_questions": False,
        # "search_recency_filter": "month",
        "top_k": 0,
        "stream": False,
        "presence_penalty": 0,
        "frequency_penalty": 1
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('PPLX_API_KEY')}",
        "Content-Type": "application/json"
    }

    response = requests.post(url, json=payload, headers=headers)

    print(response.text)

    return response.text
"""
```

Finally, add the tool to your instance's computer:

```python
interpreter.computer.run("python", custom_tool)
```

> Note: You can define and set multiple tools in a single instance.
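The mechanism behind `interpreter.computer.run("python", custom_tool)` can be illustrated in plain Python: executing a source string in a persistent namespace registers a new function that later code can call by name. This is only a minimal sketch of the idea (the `add_numbers` tool and `session_namespace` dict are illustrative, not part of the Open Interpreter API):

```python
# A tool defined as a string of valid Python code, as in the cookbook above.
custom_tool = """
def add_numbers(a, b):
    return a + b
"""

# A dict standing in for the interpreter's long-lived execution namespace.
session_namespace = {}
exec(custom_tool, session_namespace)

# The function defined inside the string is now callable by name.
result = session_namespace["add_numbers"](2, 3)
print(result)  # 5
```

Because the namespace persists, every tool added this way stays available for the rest of the session.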
# Jan Computer Control

We love Jan as an AI inference server. It also has a chat interface for chatting with LLMs. But did you know that you can use this same chat interface as a computer control interface? Read on!

[View on YouTube](https://www.youtube.com/watch?v=1l3B0AzbbjQ)

Install and set up Jan:

https://jan.ai/

Install Open Interpreter:

https://docs.openinterpreter.com/getting-started/introduction

Run the Open Interpreter OpenAI-compatible server:

`interpreter --server`

Add flags to set the `--model`, `--context_window`, or any other [setting](https://docs.openinterpreter.com/settings/all-settings) you want.

Edit Jan's OpenAI settings to point to the local server:

Settings => OpenAI => Chat Completion endpoint: `http://127.0.0.1:8000/openai/chat/completions`

Jan requires you to set a dummy OpenAI API key.

Go to Jan's chat window and start a new thread.

Set `Model` to an OpenAI model.

Start controlling your computer!
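Once Jan is pointed at the local server, each chat message becomes an HTTP request to that endpoint. The sketch below builds such a request in plain Python, assuming the standard OpenAI chat-completions payload shape; the model name and prompt are illustrative, and no request is actually sent:

```python
import json

# Endpoint from the Jan settings above.
ENDPOINT = "http://127.0.0.1:8000/openai/chat/completions"

def build_chat_request(prompt, model="gpt-4o"):
    # The model name is illustrative; the server uses whatever you
    # passed to `interpreter --server` via the --model flag.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Open my browser")
print(json.dumps(payload, indent=2))
```

Any client that speaks this payload shape, not just Jan, can drive the Open Interpreter server the same way.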
# Organize your photos with Open Interpreter

You can use Open Interpreter with a vision model to organize photos based on their contents. Results are limited by the capability of the LLM as well as by how the photo directories are organized.

> Note: It is always recommended to back up photos and files on a regular basis. Most models are intelligent enough to know the difference between `move` and `delete`, but on rare occasions files can be deleted during some operations. It is important to test on duplicated photos and to keep an eye on code written by an LLM.

Create a profile. This example uses GPT-4o, but you can use any vision model.

```python
"""
This is an Open Interpreter profile to organize a directory of photos.
"""

from interpreter import interpreter


# LLM settings
interpreter.llm.model = "gpt-4o"
# interpreter.llm.model = "ollama/codestral"
interpreter.llm.supports_vision = True
interpreter.llm.execution_instructions = False
interpreter.llm.max_tokens = 1000
interpreter.llm.context_window = 7000
interpreter.llm.load()  # Loads Ollama models

# Computer settings
interpreter.computer.import_computer_api = True

# Misc settings
interpreter.auto_run = False
```

The following custom instructions are intended for a directory containing one sub-directory of unorganized photos and multiple empty sub-directories named for the intended categories. Update the custom instructions to match your use case. This will take some trial and error, depending on the model used.

```python
# Custom instructions
interpreter.custom_instructions = """
    Recap the plan before answering the user's query!
    Your job is to organize photos. You love organizing photos.
    You will be given a parent directory with sub-directories.
    One sub-directory will contain unorganized photos.
    The other sub-directories are the categories that you move the photos into.
    Remember the sub-directories' names because they will be the categories for organizing.
    This is extremely important because these are the only options for where you move the photos.
    Loop through every photo in the unorganized photos directory.
    Skip over non-photo files by checking for common photo extensions (.jpg, .jpeg, .png, etc).
    In this loop you will determine the description of each image one at a time.
    Use `computer.vision.query()` to get a description of the image.
    `computer.vision.query()` takes a `path=` argument to know which photo to describe.
    Print out the description so you can get the full context.
    Determine which sub-directory the photo should go into.
    Every photo needs to go into one of the sub-directories.
    Make sure you actually move the photo.
    Your task is done when every photo in the unorganized photos directory has been moved to another directory.
    **Confirm that the unorganized photos directory has no more photos**.
    """
```

Save the profile with a descriptive name. Then run interpreter with:

`interpreter --profile <profileName.py>`

Then ask it to organize the directory:

`Please organize this directory: /path/to/directory`
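The loop the custom instructions describe can be sketched in plain Python. Here `categorize` is a hypothetical stand-in for the vision query (it takes a photo path and returns the name of an existing category sub-directory); the `unorganized` directory name and the extension set mirror the instructions above:

```python
import shutil
from pathlib import Path

# Common photo extensions, as in the custom instructions.
PHOTO_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".heic"}

def organize(parent, categorize):
    """Move each photo in parent/unorganized into the category
    sub-directory chosen by `categorize(path)`, a stand-in for the
    vision query. Only existing sub-directories are valid targets."""
    parent = Path(parent)
    unorganized = parent / "unorganized"
    categories = {d.name for d in parent.iterdir()
                  if d.is_dir() and d.name != "unorganized"}
    for photo in unorganized.iterdir():
        if photo.suffix.lower() not in PHOTO_EXTENSIONS:
            continue  # skip non-photo files
        category = categorize(photo)
        if category in categories:
            shutil.move(str(photo), str(parent / category / photo.name))
```

In the cookbook itself the model writes and runs this kind of loop on the fly, calling `computer.vision.query(path=...)` where `categorize` appears here.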