SootiAI is a multi-purpose large language model (LLM) agent designed to perform general tasks both on local machines and online. It is simple to use, highly flexible, and equipped with a range of tools to handle tasks like research, data analysis, local file operations, and more.
Researching Topics
- Search and scrape multiple sources to gather information.
- Generate summaries or detailed research papers with structured sections like Abstract and Results.
Data Handling and Visualization
- Gather data online and use it to create data sheets or plot charts using Python.
- Example: "Plot a graph of the weather in NYC, Chicago, and Houston for the next 3 days."
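For a request like this, the agent writes and runs a short Python script. A minimal sketch of the kind of plotting script it might generate, with made-up forecast numbers (a real run would first pull live weather data):

```python
# Sketch of an agent-generated plotting script (illustrative, made-up data).
import matplotlib
matplotlib.use("Agg")  # render to a file without needing a display
import matplotlib.pyplot as plt

# Hypothetical 3-day forecast highs in Fahrenheit.
forecast = {
    "NYC": [41, 45, 39],
    "Chicago": [33, 37, 30],
    "Houston": [62, 66, 64],
}

days = ["Day 1", "Day 2", "Day 3"]
for city, temps in forecast.items():
    plt.plot(days, temps, marker="o", label=city)

plt.ylabel("High (°F)")
plt.title("3-day forecast")
plt.legend()
plt.savefig("forecast.png")
```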
Local Machine Operations
- Execute tasks like creating folders, listing directory contents, or downloading files.
- Example: "Download the top 3 math PDFs to my home directory under 'math' and sort them by date."
Multi-Tasking
- Perform multiple tasks in a single command seamlessly.
User-Friendly Interfaces
- CLI: Ideal for terminal enthusiasts.
- WebUI: Includes a browser-based interface with local conversation context saving (until cleared). Multi-session save/load functionality is on the roadmap.
Existing agents often come with limitations such as:
- Complex setup processes.
- Lack of essential tools like scraping and searching.
- Dependence on paid APIs for basic functionalities.
- Inability to write and execute code effectively.
- Poor performance with smaller models or overly complex workflows for simple tasks.
SootiAI bridges these gaps by providing a streamlined, efficient, and flexible solution for users.
Clone the repository:
git clone https://github.com/sooti/sootiAI.git
Navigate to the project directory:
cd sootiAI
Set up and activate a virtual environment:
python3 -m venv .venv
source .venv/bin/activate
Install dependencies:
pip install -r requirements.txt
Configure the environment:
- Copy the example environment file:
cp .env.example .env
- Edit the .env file to customize the following:
- OpenAI Endpoint: Set the endpoint to local, remote, llama.cpp, or another compatible source.
- API Key: Add an API key if required (not needed for local models).
- Model Name: Specify the model name (e.g., required for MLX, not for llama.cpp).
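For orientation, the settings above might look like this. The variable names below are illustrative only; use the actual names defined in .env.example:

```
# Illustrative .env values - check .env.example for the real variable names.
OPENAI_ENDPOINT=http://localhost:8080/v1   # e.g. a local llama.cpp server
OPENAI_API_KEY=                            # leave empty for local models
MODEL_NAME=qwen2.5-14b-instruct            # required for MLX, not for llama.cpp
```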
Start the application:
- For WebUI (default port: 8080):
python main.py
- For CLI mode:
python main_cli.py
Research and Summarization
- "Research the history of quantum computing and summarize it in a research paper format."
Data Visualization
- "Plot a line graph showing the temperature trends in San Francisco over the past week."
Local File Operations
- "Create a folder named 'Projects' and move all files with '.py' extension into it."
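Under the hood, a request like this typically becomes a few lines of generated Python. A rough sketch of the equivalent script (the folder name is just the one from the example):

```python
# Sketch of the kind of script the agent might generate for this request.
import shutil
from pathlib import Path

def organize_py_files(root: str, folder: str = "Projects") -> list[str]:
    """Create `folder` under `root` and move all top-level .py files into it."""
    root_path = Path(root)
    target = root_path / folder
    target.mkdir(exist_ok=True)
    moved = []
    for py_file in root_path.glob("*.py"):
        shutil.move(str(py_file), str(target / py_file.name))
        moved.append(py_file.name)
    return moved
```

Calling `organize_py_files(".")` applies it to the current directory.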
Automated Data Collection
- "Scrape the latest stock prices for Apple, Google, and Tesla and save them in a CSV file."
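The final step of such a task is writing the collected data to disk. A sketch of that step, using placeholder prices (a real run would scrape live quotes first):

```python
# Sketch of saving collected stock prices to CSV.
# The prices here are placeholders, not real market data.
import csv

prices = [
    ("AAPL", 195.12),
    ("GOOGL", 142.33),
    ("TSLA", 248.50),
]

with open("prices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ticker", "price"])
    writer.writerows(prices)
```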
- Qwen-2.5 Instruct 14B - I've found this one to offer the best balance of speed, script writing, and instruction following.
- Qwen-2.5 Instruct 7B - Good for basic research and simple tasks, but not for complex programming requests.
- EXAONE 7.8B - Good for research, OK for programming tasks.
- Llama-3.1
- Hermes 2 and Hermes 3
- Gemma 9B - Mixed results; sometimes OK, but other times fails to follow instructions.
- Add support for multi-session save/load in the WebUI.
- Enhance CLI commands with more intuitive shortcuts.
- Expand compatibility with additional LLM backends and endpoints.
- Improve documentation and add community-contributed examples.
We welcome contributions! Feel free to open issues or submit pull requests to help improve SootiAI. Make sure to follow the contributing guidelines (to be added soon).
SootiAI is licensed under the MIT License.
Feel free to explore and enjoy the capabilities of SootiAI!