A toy project reimplementing open_deep_research with pydantic-ai. All code logic and prompts come from the original project; I'm just playing around and learning.
⚠️ This is just for fun and learning! If you need something reliable, please use the original project.
This project uses uv as the package manager. To install the dependencies, simply run:

```bash
uv sync
```
You'll need these API keys:

- Tavily API key (required for web search)
- At least one of the following, depending on which providers you pick (a quick startup check is sketched below):
  - OpenAI API key (if using OpenAI as a provider)
  - Anthropic API key (if using Anthropic as a provider)
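As a sanity check before running, here's a minimal sketch (my own illustration, not code from this repo) of how you might fail fast when keys are missing; the variable names match the `.env` described below:

```python
import os

def check_env() -> None:
    """Fail fast if required API keys are missing."""
    if not os.getenv("TAVILY_API_KEY"):
        raise SystemExit("TAVILY_API_KEY is required for web search")
    # At least one model provider must be configured.
    if not (os.getenv("OPENAI_API_KEY") or os.getenv("ANTHROPIC_API_KEY")):
        raise SystemExit("Set OPENAI_API_KEY or ANTHROPIC_API_KEY")

check_env()
```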
Copy `.env.example` to `.env` and configure your settings:
```env
# Required
TAVILY_API_KEY=your-tavily-key

# Choose your providers and fill in the corresponding API keys
PLANNER_PROVIDER=openai    # or anthropic
WRITER_PROVIDER=anthropic  # or openai

# If using OpenAI
OPENAI_API_KEY=sk-xxx
OPENAI_BASE_URL=           # optional, defaults to the official API

# If using Anthropic
ANTHROPIC_API_KEY=xxx
ANTHROPIC_BASE_URL=        # optional

# Optional settings
PLANNER_MODEL=o3-mini             # model for planning
WRITER_MODEL=claude-3.5-sonnet    # model for writing
NUMBER_OF_QUERIES=3               # searches per section
MAX_SEARCH_DEPTH=2                # max research iterations
MAX_RETRIES=3                     # API call retries
REPORT_STRUCTURE=                 # custom report template
```
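For reference, here's a minimal sketch of how settings like these get loaded from `.env` at startup; I'm assuming python-dotenv here, which may not be exactly what this repo does:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Read .env into the process environment, then pull out typed values.
load_dotenv()

planner_provider = os.getenv("PLANNER_PROVIDER", "openai")
writer_provider = os.getenv("WRITER_PROVIDER", "anthropic")
number_of_queries = int(os.getenv("NUMBER_OF_QUERIES", "3"))
max_search_depth = int(os.getenv("MAX_SEARCH_DEPTH", "2"))
```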
Run it:

```bash
python cli.py "your topic"
```
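The CLI contract is just one positional topic argument; a hypothetical skeleton of such an entry point (not the actual code in this repo) looks like:

```python
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate a research report on a topic")
    parser.add_argument("topic", help="what to research")
    args = parser.parse_args()
    # ...hand args.topic off to the planner and writer agents...
    print(f"Researching: {args.topic}")

if __name__ == "__main__":
    main()
```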
Here's what happens when you run it: all operations are logged to Logfire, where you can track the execution flow. The logs show:
- Search query generation
- Web search operations
- Section writing progress
- Model API calls
Well... pretty much everything 😂
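If you want the same visibility in your own experiments, here's a minimal Logfire setup sketch (assuming you have a Logfire account; `logfire.configure()` needs a `LOGFIRE_TOKEN` in the environment or a one-time `logfire auth` login, and which instrumentation helper you need depends on your provider):

```python
import logfire

# Send traces to Logfire (requires LOGFIRE_TOKEN or a prior `logfire auth`).
logfire.configure()

# Auto-instrument the OpenAI SDK so model API calls show up as spans.
logfire.instrument_openai()

# Custom spans/logs for the steps you care about.
with logfire.span("research {topic}", topic="your topic"):
    logfire.info("generating search queries")
```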
A few honest caveats:
- Quality is totally unpredictable
- Error handling is minimal
- Code is messy
- Test coverage? What's that?
Huge thanks to open_deep_research! This is just a learning exercise based on their amazing work.
MIT (do whatever you want, but don't blame me if it breaks 😉)