Welcome to OpenDevin, an open-source project aiming to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects. Through the power of the open-source community, this project aspires to replicate, enhance, and innovate upon Devin.
Devin represents a cutting-edge autonomous agent designed to navigate the complexities of software engineering. It leverages a combination of tools such as a shell, code editor, and web browser, showcasing the untapped potential of LLMs in software development. Our goal is to explore and expand upon Devin's capabilities, identifying both its strengths and areas for improvement, to guide the progress of open code models.
The OpenDevin project is born out of a desire to replicate, enhance, and innovate beyond the original Devin model. By engaging the open-source community, we aim to tackle the challenges faced by Code LLMs in practical scenarios, producing works that significantly contribute to the community and pave the way for future advancements.
OpenDevin is currently a work in progress, but you can already run the alpha version to see the end-to-end system in action. The project team is actively working on the following key milestones:
- UI: Developing a user-friendly interface, including a chat interface, a shell demonstrating commands, and a web browser.
- Architecture: Building a stable agent framework with a robust backend that can read, write, and run simple commands.
- Agent Capabilities: Enhancing the agent's abilities to generate bash scripts, run tests, and perform other software engineering tasks.
- Evaluation: Establishing a minimal evaluation pipeline that is consistent with Devin's evaluation criteria.
After completing the MVP, the team will focus on research in various areas, including foundation models, specialist capabilities, evaluation, and agent studies.
- OpenDevin is still an alpha project. It is changing very quickly and is unstable. We are working on getting a stable release out in the coming weeks.
- OpenDevin will issue many prompts to the LLM you configure. Most of these LLMs cost money, so be sure to set spending limits and monitor usage.
- OpenDevin runs `bash` commands within a Docker sandbox, so it should not affect your machine. However, your workspace directory will be attached to that sandbox, and files in the directory may be modified or deleted.
- Our default Agent is currently the MonologueAgent, which has limited capabilities but is fairly stable. We're working on other Agent implementations, including SWE Agent. You can read about our current set of agents here.
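Given that warning, one low-risk pattern is to point OpenDevin at a disposable copy of your project rather than the original. The directory names below are placeholders, not anything the project prescribes:

```shell
# Work on a disposable copy: the workspace directory is mounted read-write
# into the sandbox, so the agent can modify or delete anything inside it.
# "my-project" and "workspace" are placeholder names.
mkdir -p my-project && echo "v1" > my-project/file.txt   # stand-in project
cp -r my-project workspace
cat workspace/file.txt
```

If the agent deletes or corrupts files under `workspace`, the original `my-project` copy is untouched.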
Getting started with the OpenDevin project is incredibly easy. Follow these simple steps to set up and run OpenDevin on your system:
- Linux, macOS, or WSL on Windows
- Docker (on macOS, make sure to allow the default Docker socket to be used in Docker Desktop's advanced settings)
- Python >= 3.11
- NodeJS >= 18.17.1
- Poetry >= 1.8
Make sure you have all these dependencies installed before moving on to `make build`.
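If you want to sanity-check versions before building, a small POSIX-shell helper can compare dot-separated version numbers. This is a convenience sketch, not part of the project's Makefile, and it relies on `sort -V` (version sort, available in GNU coreutils):

```shell
# version_ge VERSION MINIMUM -> exit 0 when VERSION >= MINIMUM.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Report any missing tools, then check the version that matters most.
for tool in docker python3 node poetry; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done

if version_ge "$(python3 -c 'import platform; print(platform.python_version())')" 3.11; then
  echo "python ok"
else
  echo "python too old"
fi
```

The same `version_ge` check works for the NodeJS and Poetry minimums listed above.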
- Build the Project: Begin by building the project, which includes setting up the environment and installing dependencies. This step ensures that OpenDevin is ready to run smoothly on your system.
make build
OpenDevin supports a diverse array of Language Models (LMs) through the powerful litellm library. By default, we've chosen the mighty GPT-4 from OpenAI as our go-to model, but the world is your oyster! You can unleash the potential of Anthropic's suave Claude, the enigmatic Llama, or any other LM that piques your interest.
To configure the LM of your choice, follow these steps:
- Using the Makefile: The Effortless Approach
With a single command, you can have a smooth LM setup for your OpenDevin experience. Simply run:
make setup-config
This command will prompt you to enter the LLM API key and model name, ensuring that OpenDevin is tailored to your specific needs.
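For reference, `make setup-config` writes your answers to a `config.toml` at the repository root. A hand-written equivalent looks roughly like the following; the key names reflect the current alpha and may change, and the API key shown is a placeholder, not a real credential:

```shell
# Create config.toml manually instead of running `make setup-config`.
# LLM_MODEL / LLM_API_KEY are the key names used by the current alpha;
# the key value here is a placeholder.
cat > config.toml <<'EOF'
LLM_MODEL="gpt-4"
LLM_API_KEY="sk-your-key-here"
EOF
cat config.toml
```

Keep `config.toml` out of version control, since it holds your API key.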
Note on Alternative Models: Some alternative models may prove more challenging to tame than others. Fear not, brave adventurer! We shall soon unveil LLM-specific documentation to guide you on your quest. And if you've already mastered the art of wielding a model other than OpenAI's GPT, we encourage you to share your setup instructions with us.
For a full list of the LM providers and models available, please consult the litellm documentation.
There is also documentation for running with local models using ollama.
We are working on a guide for running OpenDevin with Azure.
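As a rough illustration, litellm identifies models with strings that are provider-prefixed for non-OpenAI backends. The identifiers below are examples only; check the litellm documentation for the authoritative list, and the OpenDevin docs for exactly where the value is read from:

```shell
# Example litellm-style model identifiers (illustrative; consult the
# litellm docs for the full, authoritative list):
LLM_MODEL="gpt-4"                   # OpenAI (the default)
LLM_MODEL="claude-3-opus-20240229"  # Anthropic Claude
LLM_MODEL="ollama/llama2"           # a local model served by ollama
echo "$LLM_MODEL"
```

Whichever identifier you choose is what you would supply when `make setup-config` prompts for the model name.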
- Run the Application: Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
make run
- Start the Backend Server: If you prefer, you can start the backend server independently to focus on backend-related tasks or configurations.
make start-backend
- Start the Frontend Server: Similarly, you can start the frontend server on its own to work on frontend-related components or interface enhancements.
make start-frontend
- Get Some Help: Need assistance or information on available targets and commands? The help command provides all the necessary guidance to ensure a smooth experience with OpenDevin.
make help
Fully replicating production-grade software engineering with LLMs is a complex endeavor. Our strategy involves:
- Core Technical Research: Focusing on foundational research to understand and improve the technical aspects of code generation and handling.
- Specialist Abilities: Enhancing the effectiveness of core components through data curation, training methods, and more.
- Task Planning: Developing capabilities for bug detection, codebase management, and optimization.
- Evaluation: Establishing comprehensive evaluation metrics to better understand and improve our models.
OpenDevin is a community-driven project, and we welcome contributions from everyone. Whether you're a developer, a researcher, or simply enthusiastic about advancing the field of software engineering with AI, there are many ways to get involved:
- Code Contributions: Help us develop the core functionalities, frontend interface, or sandboxing solutions.
- Research and Evaluation: Contribute to our understanding of LLMs in software engineering, participate in evaluating the models, or suggest improvements.
- Feedback and Testing: Use the OpenDevin toolset, report bugs, suggest features, or provide feedback on usability.
For details, please check this document.
We now have both a Slack workspace for collaborating on building OpenDevin and a Discord server for discussing anything related, e.g., this project, LLMs, agents, etc.
If you would like to contribute, feel free to join our community (there is no longer any need to fill in a form). Let's simplify software engineering together!
Code less, make more with OpenDevin.
OpenDevin is built using a combination of powerful frameworks and libraries, providing a robust foundation for its development. Here are the key technologies used in the project:
Please note that the selection of these technologies is in progress, and additional technologies may be added or existing ones may be removed as the project evolves. We strive to adopt the most suitable and efficient tools to enhance the capabilities of OpenDevin.
Distributed under the MIT License. See LICENSE for more information.