FactHarbor is an AI-powered platform that turns complex, contested information into structured breakdowns that make reasoning visible and verifiable.
An Evidence Model contains:
- Claims — Key assertions extracted from the source material
- Analysis Contexts — The frames and conditions under which claims may hold or fail
- Evidence — Supporting and opposing sources with quality ratings and reliability scores
- Verdicts — Conclusions with explicit confidence levels and cited evidence
- Full Transparency — Every assumption, algorithm, and data source is exposed
The result is not a single verdict, but an evidence landscape — showing where a claim holds up, where it fails, and where reasonable disagreement exists.
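The Evidence Model above can be pictured as a small typed structure. This is an illustrative sketch only; the field names, verdict labels, and the `evidenceBalance` helper are assumptions for explanation, not FactHarbor's actual schema.

```typescript
// Hypothetical shape of an Evidence Model entry; names are illustrative
// assumptions, not FactHarbor's real data model.
type Verdict = "supported" | "contested" | "refuted";

interface Evidence {
  source: string;                      // citation or URL
  stance: "supporting" | "opposing";
  reliability: number;                 // 0..1 reliability score
}

interface Claim {
  text: string;
  contexts: string[];                  // frames under which the claim is evaluated
  evidence: Evidence[];
  verdict: Verdict;
  confidence: number;                  // 0..1 confidence in the verdict
}

// Summarize the evidence landscape for one claim: positive means the
// supporting evidence outweighs the opposing evidence, by reliability.
function evidenceBalance(claim: Claim): number {
  return claim.evidence.reduce(
    (acc, e) => acc + (e.stance === "supporting" ? e.reliability : -e.reliability),
    0
  );
}

const claim: Claim = {
  text: "Policy X reduced emissions by 10%",
  contexts: ["2015-2020, national scope"],
  evidence: [
    { source: "Gov report 2021", stance: "supporting", reliability: 0.8 },
    { source: "NGO audit 2022", stance: "opposing", reliability: 0.5 },
  ],
  verdict: "contested",
  confidence: 0.6,
};

console.log(evidenceBalance(claim).toFixed(2)); // prints "0.30"
```

A balance near zero with evidence on both sides is exactly the "reasonable disagreement" region the landscape is meant to surface.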
Who benefits? Journalists, researchers, educators, policy analysts, and anyone navigating contested claims who wants to understand, not just believe.
Browse full documentation online — vision, architecture, methodology, and the complete project roadmap.
See CONTRIBUTING.md for prerequisites (including API keys), setup, and how to run the application locally.
Tech stack: Next.js + ASP.NET Core + LLM orchestration (Anthropic, OpenAI, Google, Mistral)
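Multi-provider LLM orchestration usually means hiding each vendor behind a common interface so providers are interchangeable. The sketch below is a minimal illustration of that idea; the `LlmProvider` interface, the round-robin `Orchestrator`, and the stub providers are hypothetical, not FactHarbor's actual orchestration code.

```typescript
// Hypothetical provider abstraction: every vendor adapter exposes the
// same complete() method, so the orchestrator can treat them uniformly.
interface LlmProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Trivial round-robin orchestrator; a real one might add retries,
// fallbacks, or per-task provider selection.
class Orchestrator {
  private next = 0;
  constructor(private providers: LlmProvider[]) {}

  async complete(prompt: string): Promise<string> {
    const provider = this.providers[this.next];
    this.next = (this.next + 1) % this.providers.length;
    return provider.complete(prompt);
  }
}

// Stub provider for demonstration; real adapters would call vendor SDKs.
const stub = (name: string): LlmProvider => ({
  name,
  complete: async (prompt) => `[${name}] ${prompt}`,
});

const orch = new Orchestrator([stub("anthropic"), stub("openai")]);
orch.complete("Extract claims").then(console.log); // prints "[anthropic] Extract claims"
```

Because each adapter satisfies the same interface, swapping Anthropic for OpenAI, Google, or Mistral is a configuration change rather than a code change.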
FactHarbor uses a multi-license model to maximize openness while protecting transparency:
| Content | License |
|---|---|
| Documentation | CC BY-SA 4.0 |
| Code (default) | MIT |
| Code (core engine) | AGPL-3.0 |
| Structured data | ODbL |
See LICENSE.md for full details.
FactHarbor — Making complex claims transparent through evidence, context, and open reasoning.