🧠 GPT Memory

Memory-Driven Reasoning for Smarter AI Agents

GPT Memory is a library powered by Tovana that introduces a new approach to improving LLM reasoning through actionable insights (aka beliefs) derived from continuous interactions and long-term memory.


🤔 Why GPT Memory?

Current LLMs face significant limitations in their ability to learn and adapt from user-specific interactions over time. While LLMs excel at processing vast amounts of data, they struggle with ongoing personalization and context-aware learning. This gap restricts their ability to provide truly adaptive and evolving AI experiences.

Our memory manager aims to address these challenges by providing a comprehensive memory and belief management framework for AI agents. Its core concept revolves around converting experiences (events) into memories, which in turn shape beliefs. These beliefs then influence the agent's reasoning, responses, and actions.

By simulating human-like memory processes, GPT Memory enables more personalized, adaptive, and context-aware AI interactions. This framework bridges the gap between static knowledge bases and dynamic, experience-based learning, allowing AI agents to evolve their understanding and behavior over time.

🚀 Quick Start

1. Install Tovana:

```bash
pip install tovana
```

2. Use it in your project:

```python
from tovana import MemoryManager

business_description = "a commerce shopping assistant"

# Initialize with your preferred LLM provider and API key
# (refer to the documentation for the supported models)
memory_manager = MemoryManager(
    api_key="provider-api-key",
    provider="openai",
    business_description=business_description,
    include_beliefs=True,
)
```

3. Manage your LLM's memory with ongoing user conversation messages:

```python
user_id = "user123"
message = "I just moved from New York to Paris for work."

# Update user memory with any relevant information found in the message
memory_manager.update_user_memory(user_id=user_id, message=message)

# Fetch the updated memory
user_memory = memory_manager.get_memory(user_id=user_id)
print(user_memory)  # Output: {'location': 'Paris', 'previous_location': 'New York'}

# Get memory context for the LLM
context = memory_manager.get_memory_context(user_id=user_id)
print(context)  # Output: 'User Memory:\n location: Paris,\n previous_location: New York'

# Get beliefs
beliefs = memory_manager.get_beliefs(user_id=user_id)
print(beliefs)  # Output: {"beliefs": "- Provide recommendations for products shipping to Paris"}
```
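
Because the returned context and beliefs are plain strings, they can be dropped directly into a prompt. Below is a minimal sketch of that wiring, assuming the `openai` Python client; the prompt layout here is illustrative, not part of the Tovana API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # Inject the Tovana memory context and beliefs into the system prompt
        {
            "role": "system",
            "content": f"You are a commerce shopping assistant.\n\n{context}\n\n{beliefs}",
        },
        {"role": "user", "content": "Can you recommend a good winter coat?"},
    ],
)
print(response.choices[0].message.content)
```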

πŸ—οΈ Architecture

Screenshot 2024-08-21 at 9 04 07

🧠 Belief Generation

GPT Memory introduces a new approach to LLM reasoning: actionable beliefs generated from user memory. These beliefs provide personalized insights that can significantly enhance your agent's planning, reasoning and responses.

Examples

Input:

  • business_description: "a commerce site"
  • memory: {'pets': ['dog named charlie', 'horse named luna']}

Output:

{"beliefs": "- Suggest pet products for dogs and horses"}

Input:

  • business_description: "an AI therapist"
  • memory: {'pets': ['dog named charlie', 'horse named luna'], 'sleep_time': '10pm'}

Output:

{"beliefs": "- Suggest meditation at 9:30pm\n- Suggest spending time with Charlie and Luna for emotional well-being"}
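
The same flow in code, as a minimal sketch using the API from the Quick Start. Belief strings are LLM-generated, so the exact output will vary between runs.

```python
from tovana import MemoryManager

# Sketch of the "AI therapist" example above; exact belief text will vary
memory_manager = MemoryManager(
    api_key="provider-api-key",
    provider="openai",
    business_description="an AI therapist",
    include_beliefs=True,
)

memory_manager.update_user_memory(user_id="user123",
                                  message="I have a dog named Charlie and a horse named Luna.")
memory_manager.update_user_memory(user_id="user123",
                                  message="I usually go to bed at 10pm.")

print(memory_manager.get_beliefs(user_id="user123"))
# e.g. {"beliefs": "- Suggest meditation at 9:30pm\n- Suggest spending time with Charlie and Luna"}
```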

🌟 Features

| Feature | Status | Description |
|---------|--------|-------------|
| 🧠 Human-like Memory | ✅ | Transform interactions into lasting memories and actionable beliefs |
| 🔍 Smart Information Extraction | ✅ | Automatically capture and store relevant user details from conversations |
| 💡 Dynamic Belief Generation | ✅ | Create personalized, context-aware insights to guide AI responses |
| 🤖 LLM-Friendly Context | ✅ | Seamlessly integrate memory and beliefs into your AI's decision-making process |
| 🔌 Easy Integration | ✅ | Plug into your AI applications with a straightforward API |
| 🎭 Conflict Resolution | ✅ | Intelligently handle contradictions in user information |
| 🌐 Flexible Architecture | ✅ | Designed to work with various LLM providers and models |
| 📊 Memory Management | ✅ | Process events, store short-term and long-term memories, and manage beliefs |
| 🔗 Advanced Association Creation | ✅ | Form connections between memories and beliefs for more nuanced understanding |
| 🧵 Async Functionality | ✅ | Support for asynchronous operations to enhance performance in concurrent environments |
| ⛁ Persistent Database Support | 🔜 | Integration with persistent databases for long-term storage and retrieval of memory data |
| 🎛️ Custom Belief Generation | 🔜 | User-generated beliefs offering end-to-end flexibility in shaping the belief system reasoning |

πŸ› οΈ API Reference

MemoryManager

  • get_memory(user_id: str) -> JSON: Fetch user memory
  • delete_memory(user_id: str) -> bool: Delete user memory
  • update_memory(user_id: str, message: str) -> JSON: Update memory with any relevant information found in the message
  • batch_update_memory(user_id: str, messages: List[Dict[str, str]]) -> JSON: Update memory with any relevant information found in a batch of conversation messages
  • get_memory_context(user_id: str, message: Optional[str]) -> str: Get formatted memory context, general or message-specific
  • get_beliefs(user_id: str) -> str: Get actionable beliefs context
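
Passing the optional message to get_memory_context tailors the returned context to that message. A short sketch (the exact formatting of the returned string may differ):

```python
# General context: all stored memory, formatted for the LLM
general_context = memory_manager.get_memory_context(user_id="user123")

# Message-specific context: narrowed to details relevant to this message
specific_context = memory_manager.get_memory_context(
    user_id="user123",
    message="What should I pack for my move to Paris?",
)
```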

Batch Update Memory

Traditional per-message memory updates can be costly and inefficient, especially in longer conversations. They often miss crucial context, leading to suboptimal information retrieval.

Our batch memory update method addresses these challenges by processing entire conversations at once. This approach not only improves performance and reduces costs but also enhances the quality of extracted information. This results in a more coherent and accurate user memory, ultimately leading to better AI reasoning.

Example

```python
user_id = "user123"
messages = [
    {"role": "user", "content": "Hi, I'm planning a trip to Japan."},
    {"role": "assistant", "content": "That's exciting! When are you planning to go?"},
    {"role": "user", "content": "I'm thinking about next spring. I love sushi and technology."},
]

# Await from within an async context (see "Sync vs Async Updates" below)
await memory_manager.batch_update_memory(user_id, messages)
```
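
For comparison, the per-message equivalent below issues one update call per user message; batch_update_memory collapses these into a single pass over the conversation.

```python
# Per-message updates: one extraction call per user message,
# which batch_update_memory replaces with a single pass
for m in messages:
    if m["role"] == "user":
        memory_manager.update_memory(user_id, m["content"])
```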

Sync vs Async Updates

This library provides both synchronous and asynchronous update methods to cater to different use cases and application architectures:

  1. Asynchronous Updates (AsyncMemoryManager): Ideal for applications built on asynchronous frameworks like FastAPI or asynchronous Python scripts. This allows for non-blocking memory updates, improving overall application performance, especially when dealing with I/O-bound operations or high-concurrency scenarios.
  2. Synchronous Updates (MemoryManager): Suitable for traditional synchronous applications or when you need to ensure that memory updates are completed before proceeding with other operations. This can be useful in scripts or applications where the order of operations is critical.

By providing both options, our library offers the flexibility to choose the most appropriate method based on your specific application requirements and architecture.
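
A short sketch of the two styles; it assumes AsyncMemoryManager takes the same constructor arguments as MemoryManager and exposes the same methods as awaitable coroutines.

```python
import asyncio

from tovana import AsyncMemoryManager, MemoryManager

# Synchronous: blocks until the memory update completes
sync_manager = MemoryManager(api_key="provider-api-key", provider="openai",
                             business_description="a commerce shopping assistant")
sync_manager.update_user_memory(user_id="user123", message="I just moved to Paris.")


# Asynchronous: non-blocking, suited to FastAPI handlers and async scripts
# (assumes AsyncMemoryManager mirrors the MemoryManager API)
async def main() -> None:
    async_manager = AsyncMemoryManager(api_key="provider-api-key", provider="openai",
                                       business_description="a commerce shopping assistant")
    await async_manager.update_user_memory(user_id="user123", message="I just moved to Paris.")

asyncio.run(main())
```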

🤝 Contributing

We welcome contributions! Found a bug or have a feature idea? Open an issue or submit a pull request. Let's make Tovana even better together! 💪

📄 License

Tovana is Apache-2.0 licensed. See the LICENSE file for details.


Ready to empower your AI agents with memory-driven reasoning? Get started with GPT Memory! 🚀 If you find it useful, don't forget to star the repo! ⭐
