This repository contains the official website for RetentionLabs, an open-source project group focused on researching and developing AI memory solutions.
RetentionLabs is dedicated to overcoming the context-length limitations of self-attention mechanisms in large language models. Our mission is to enhance AI systems with better memory capabilities, creating more helpful, consistent, and personalized AI experiences.
Our first product is RetentionEngine, a PyTorch adapter that enhances language models with memory capabilities. Unlike retrieval-augmented generation (RAG) approaches, which fetch external documents at inference time, RetentionEngine stores information directly in the adapter weights, allowing for more efficient and integrated memory retention.
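To make the idea of weight-based memory concrete, here is a minimal, hypothetical sketch (plain NumPy, not RetentionEngine's actual API): a tiny linear "adapter" whose weight matrix is trained by gradient descent to memorize key-to-value associations, so that recall happens through the weights themselves rather than by retrieving stored documents as in RAG.

```python
import numpy as np

# Hypothetical illustration only — not RetentionEngine's real implementation.
# A linear adapter W is trained so that keys @ W reproduces the stored values;
# after training, the memory lives entirely in W.

rng = np.random.default_rng(0)
dim = 16
keys = rng.normal(size=(4, dim))    # stand-ins for hidden-state queries
values = rng.normal(size=(4, dim))  # the facts to remember

W = np.zeros((dim, dim))            # adapter weights = the memory store
lr = 0.05
for _ in range(1000):               # gradient descent on reconstruction error
    pred = keys @ W
    grad = keys.T @ (pred - values) / len(keys)
    W -= lr * grad

recalled = keys @ W                 # recall is just a forward pass through W
print(np.allclose(recalled, values, atol=1e-2))
```

A real adapter sits inside a transformer and is trained on model activations, but the principle is the same: new information is written into parameters, so no separate retrieval index is needed at inference time.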
- Bilingual support (English and Korean)
- Responsive design for all devices
- Dark/light mode toggle
- Interactive memory visualization
This website is built with HTML, CSS, and JavaScript. To run it locally:
- Clone this repository
- Open `index.html` in your browser
Contributions to both the website and our AI memory projects are welcome! Please feel free to submit pull requests or open issues.