Fulcrum, a fork of the original Taxy AI project, harnesses the power of GPT-4 along with advanced vision models such as GPT4-Vision-Turbo and Gemini Pro to control your browser and perform a wide array of tasks. These enhancements allow Fulcrum to interpret and interact with both textual and visual content on web pages, offering a more dynamic and versatile automation experience. The Chrome extension, a key component of Fulcrum, now features screenshot functionality: it captures the currently visible portion of the page and integrates those screenshots into the prompts sent to the vision model, allowing for more accurate, context-aware automation based on visual information.
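As a rough sketch of how this might work (the helper name `buildVisionMessage` and the exact message shape are illustrative assumptions, not necessarily Fulcrum's actual code):

```ts
// Sketch only: capture the visible tab and attach it to a GPT-4-Vision-style
// chat message. Assumes an MV3 extension context with permission to capture tabs.
async function buildVisionMessage(instruction: string) {
  // captureVisibleTab resolves to a data URL of the active tab's visible area.
  const screenshot = await chrome.tabs.captureVisibleTab({ format: 'png' });

  // OpenAI-style multimodal message: a text part plus an image part.
  return {
    role: 'user' as const,
    content: [
      { type: 'text', text: instruction },
      { type: 'image_url', image_url: { url: screenshot } },
    ],
  };
}
```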
Fulcrum uses LLMs to control your browser and perform repetitive actions on your behalf. Currently it allows you to define ad-hoc text instructions.
Fulcrum is fully open source and requires users to bring their own API keys; ultimately we would like to move to hosting our own models using cogagent/llava.
Currently this extension is only available through this GitHub repo. We'll release it on the Chrome Web Store after adding features to increase its usability for a non-technical audience. To build and install the extension locally on your machine, follow the instructions below.
- Ensure you have Node.js >= 16.
- Clone this repository
- Run `yarn` to install the dependencies
- Run `yarn start` to build the package
- Load your extension on Chrome by doing the following:
  - Navigate to `chrome://extensions/`
  - Toggle `Developer mode`
  - Click on `Load unpacked extension`
  - Select the `build` folder that `yarn start` generated
- Once installed, the browser plugin will be available in two forms:
  - As a Popup. Activate by pressing `cmd+shift+y` on Mac or `ctrl+shift+y` on Windows/Linux, or by clicking the extension logo in your browser.
  - As a devtools panel. Activate by first opening the browser's developer tools, then navigating to the `Taxy AI` panel.
- The next step is to create a new OpenAI API key (or use an existing one) and paste it into the provided box. The key is stored locally in your browser and is never uploaded to a third party (a sketch of how such storage might look follows this list).
- Finally, navigate to a webpage you want Taxy to act upon (for instance the OpenAI playground) and start experimenting!
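As a rough illustration of the kind of storage involved (the key name `openAiApiKey` and the helper functions are assumptions, not necessarily the extension's actual code):

```ts
// Sketch only: persist the user's OpenAI API key in extension storage.
// chrome.storage.local keeps the value on the local machine; the extension
// itself does not upload it to any third party.
async function saveApiKey(key: string): Promise<void> {
  await chrome.storage.local.set({ openAiApiKey: key });
}

async function loadApiKey(): Promise<string | undefined> {
  const { openAiApiKey } = await chrome.storage.local.get('openAiApiKey');
  return openAiApiKey;
}
```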
- Taxy runs a content script on the webpage to pull the entire DOM. It simplifies the HTML it receives to include only interactive or semantically important elements, like buttons or text, and assigns an id to each interactive element. It then "templatizes" the DOM to reduce the token count even further (a rough sketch of this step appears after this list).
- Taxy sends the simplified DOM, along with the user's instructions, to a selected LLM (currently GPT-3.5 and GPT-4 are supported). Taxy informs the LLM of two methods to interact with the webpage (see the action-schema sketch below):
  - `click(id)` - click on the interactive element associated with that id
  - `setValue(id, text)` - focus on a text input, clear its existing text, and type the specified text into that input
- When Taxy gets a completion from the LLM, it parses the response for an action. The action cycle will end at this stage if any of the following conditions are met:
  - The LLM believes the task is complete. Instead of an action, the LLM can return an indication that it believes the user's task is complete based on the state of the DOM and the action history up to this point.
  - The user stopped the task's execution. The user can stop the LLM's execution at any time, without waiting for it to complete.
  - There was an error. Taxy's safety-first architecture causes it to automatically halt execution in the event of an unexpected response.
- Taxy executes the action using the `chrome.debugger` API (a sketch of this step appears below).
- The action is added to the action history, and Taxy cycles back to step 1 and parses the updated DOM. All prior actions are sent to the LLM as part of the prompt used to determine the next action. Taxy can currently complete a maximum of 50 actions for a single task, though in practice most tasks require fewer than 10 actions (the full action cycle is sketched below).
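The DOM-simplification step might look roughly like the following; the attribute name `data-taxy-id`, the selector list, and the output format are illustrative assumptions rather than Taxy's exact implementation:

```ts
// Sketch only: collect interactive elements, assign each a numeric id, and
// emit a stripped-down HTML representation to keep the prompt small.
const INTERACTIVE_SELECTOR = 'a, button, input, select, textarea, [role="button"]';

function simplifyDom(root: Document = document): string {
  const lines: string[] = [];
  root.querySelectorAll<HTMLElement>(INTERACTIVE_SELECTOR).forEach((el, id) => {
    // Tag the element so the id can be resolved again when an action runs.
    el.setAttribute('data-taxy-id', String(id));
    const tag = el.tagName.toLowerCase();
    const label =
      el.getAttribute('aria-label') ?? el.textContent?.trim().slice(0, 80) ?? '';
    lines.push(`<${tag} id=${id}>${label}</${tag}>`);
  });
  return lines.join('\n');
}
```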
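The two methods exposed to the model can be described with a small action schema like this one; the action names come from the text above, while the TypeScript shapes and the `finish` variant are assumptions:

```ts
// Sketch only: the actions the LLM may return, plus a way for it to signal
// that it believes the task is complete.
type ClickAction = { name: 'click'; args: { id: number } };
type SetValueAction = { name: 'setValue'; args: { id: number; text: string } };
type FinishAction = { name: 'finish' };
type Action = ClickAction | SetValueAction | FinishAction;
```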
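Putting the steps together, the action cycle can be sketched as a loop over the three stop conditions and the 50-action cap. Every helper declared here (`getSimplifiedDom`, `getCompletion`, `parseAction`, `executeAction`) is a placeholder, not a real Taxy function:

```ts
// Sketch only: the overall action cycle, using the Action type from the
// previous sketch and placeholder helpers declared here.
declare function getSimplifiedDom(): Promise<string>;
declare function getCompletion(
  instructions: string,
  dom: string,
  history: Action[]
): Promise<string>;
declare function parseAction(response: string): Action; // throws on an unexpected response
declare function executeAction(action: Action): Promise<void>;

const MAX_ACTIONS = 50;

async function runTask(instructions: string, userStopped: () => boolean): Promise<void> {
  const history: Action[] = [];
  for (let step = 0; step < MAX_ACTIONS; step++) {
    if (userStopped()) return;                // stop: the user halted execution
    const dom = await getSimplifiedDom();
    const response = await getCompletion(instructions, dom, history);
    const action = parseAction(response);     // stop: an unexpected response throws and halts
    if (action.name === 'finish') return;     // stop: the LLM believes the task is complete
    await executeAction(action);
    history.push(action);
  }
}
```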
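Executing a click through the `chrome.debugger` API could look roughly like this; the DevTools Protocol commands (`Runtime.evaluate`, `Input.dispatchMouseEvent`) are real, but the element lookup via `data-taxy-id` is an assumed simplification:

```ts
// Sketch only: dispatch a trusted click through the Chrome DevTools Protocol.
async function executeClick(tabId: number, id: number): Promise<void> {
  const target = { tabId };
  await chrome.debugger.attach(target, '1.3');
  try {
    // Evaluate a small expression in the page to find the element's center point.
    const { result } = (await chrome.debugger.sendCommand(target, 'Runtime.evaluate', {
      expression: `(() => {
        const r = document.querySelector('[data-taxy-id="${id}"]').getBoundingClientRect();
        return { x: r.x + r.width / 2, y: r.y + r.height / 2 };
      })()`,
      returnByValue: true,
    })) as { result: { value: { x: number; y: number } } };
    const { x, y } = result.value;

    // Press and release the left mouse button at that point.
    for (const type of ['mousePressed', 'mouseReleased'] as const) {
      await chrome.debugger.sendCommand(target, 'Input.dispatchMouseEvent', {
        type,
        x,
        y,
        button: 'left',
        clickCount: 1,
      });
    }
  } finally {
    await chrome.debugger.detach(target);
  }
}
```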
Technology currently used by this extension: