updated docs for mem0 architecture diagram #2185

Merged · 2 commits · Feb 1, 2025
90 changes: 90 additions & 0 deletions docs/faqs.mdx
@@ -0,0 +1,90 @@
---
title: FAQs
---


<AccordionGroup>
<Accordion title="How does Mem0 work?">
Mem0 utilizes a sophisticated hybrid database system to efficiently manage and retrieve memories for AI agents and assistants. Each memory is linked to a unique identifier, such as a user ID or agent ID, enabling Mem0 to organize and access memories tailored to specific individuals or contexts.

When a message is added to Mem0 via the `add` method, the system extracts pertinent facts and preferences, distributing them across various data stores: a vector database and a graph database. This hybrid strategy ensures that diverse types of information are stored optimally, facilitating swift and effective searches.

When an AI agent or LLM needs to access memories, it employs the `search` method. Mem0 conducts a comprehensive search across these data stores, retrieving relevant information from each.

The retrieved memories can be seamlessly integrated into the LLM's prompt as required, enhancing the personalization and relevance of responses.
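As a rough illustration, retrieved memories might be folded into a prompt like this (a minimal sketch — the helper and prompt format here are hypothetical, not part of the Mem0 API):

```python
# Hypothetical helper showing how retrieved memories could be merged
# into an LLM prompt; the format is illustrative, not Mem0's own.
def build_prompt(question: str, memories: list[str]) -> str:
    context = "\n".join(f"- {m}" for m in memories)
    return f"Relevant memories:\n{context}\n\nUser question: {question}"

prompt = build_prompt(
    "What should I cook tonight?",
    ["User is vegetarian", "User dislikes mushrooms"],
)
print(prompt)
```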
</Accordion>

<Accordion title="What are the key features of Mem0?">
- **User, Session, and AI Agent Memory**: Retains information across sessions and interactions for users and AI agents, ensuring continuity and context.
- **Adaptive Personalization**: Continuously updates memories based on user interactions and feedback.
- **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
- **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
- **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
- **Cost Savings**: Reduces costs by adding only relevant memories to the context window instead of complete transcripts
</Accordion>

<Accordion title="How is Mem0 different from traditional RAG?">
Mem0's memory implementation for Large Language Models (LLMs) offers several advantages over Retrieval-Augmented Generation (RAG):

- **Entity Relationships**: Mem0 can understand and relate entities across different interactions, unlike RAG which retrieves information from static documents. This leads to a deeper understanding of context and relationships.

- **Contextual Continuity**: Mem0 retains information across sessions, maintaining continuity in conversations and interactions, which is essential for long-term engagement applications like virtual companions or personalized learning assistants.

- **Adaptive Learning**: Mem0 improves its personalization based on user interactions and feedback, making the memory more accurate and tailored to individual users over time.

- **Dynamic Updates**: Mem0 can dynamically update its memory with new information and interactions, unlike RAG which relies on static data. This allows for real-time adjustments and improvements, enhancing the user experience.

These advanced memory capabilities make Mem0 a powerful tool for developers aiming to create personalized and context-aware AI applications.
</Accordion>


<Accordion title="What are the common use-cases of Mem0?">
- **Personalized Learning Assistants**: Long-term memory allows learning assistants to remember user preferences, strengths and weaknesses, and progress, providing a more tailored and effective learning experience.

- **Customer Support AI Agents**: By retaining information from previous interactions, customer support bots can offer more accurate and context-aware assistance, improving customer satisfaction and reducing resolution times.

- **Healthcare Assistants**: Long-term memory enables healthcare assistants to keep track of patient history, medication schedules, and treatment plans, ensuring personalized and consistent care.

- **Virtual Companions**: Virtual companions can use long-term memory to build deeper relationships with users by remembering personal details, preferences, and past conversations, making interactions more delightful.

- **Productivity Tools**: Long-term memory helps productivity tools remember user habits, frequently used documents, and task history, streamlining workflows and enhancing efficiency.

- **Gaming AI**: In gaming, AI with long-term memory can create more immersive experiences by remembering player choices, strategies, and progress, adapting the game environment accordingly.

</Accordion>

<Accordion title="Why aren't my memories being created?">
Mem0 uses a sophisticated classification system to determine which parts of text should be extracted as memories. Not all text content will generate memories, as the system is designed to identify specific types of memorable information.
There are several scenarios where Mem0 may return an empty list of memories:

- When users input definitional questions (e.g., "What is backpropagation?")
- For general concept explanations that don't contain personal or experiential information
- Technical definitions and theoretical explanations
- General knowledge statements without personal context
- Abstract or theoretical content

**Example Scenarios**

```
Input: "What is machine learning?"
No memories extracted - Content is definitional and does not meet memory classification criteria.

Input: "Yesterday I learned about machine learning in class"
Memory extracted - Contains personal experience and temporal context.
```

**Best Practices**

To ensure successful memory extraction:
- Include temporal markers (when events occurred)
- Add personal context or experiences
- Frame information in terms of real-world applications or experiences
- Include specific examples or cases rather than general definitions
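The distinction can be sketched with a toy heuristic (purely illustrative — Mem0's actual classification is LLM-based, not keyword-based):

```python
# Toy heuristic, NOT Mem0's real classifier (which uses an LLM):
# personal or temporal cues suggest text that can yield memories.
def looks_memorable(text: str) -> bool:
    cues = ("i ", "my ", "yesterday", "last week", "we ")
    return any(cue in text.lower() for cue in cues)

looks_memorable("What is machine learning?")              # False: definitional
looks_memorable("Yesterday I learned about ML in class")  # True: personal + temporal
```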
</Accordion>

</AccordionGroup>





38 changes: 0 additions & 38 deletions docs/features.mdx
Expand Up @@ -13,44 +13,6 @@ title: Features



## How does Mem0 work?

Mem0 leverages a hybrid database approach to manage and retrieve long-term memories for AI agents and assistants. Each memory is associated with a unique identifier, such as a user ID or agent ID, allowing Mem0 to organize and access memories specific to an individual or context.

When a message is added to Mem0 using the add() method, the system extracts relevant facts and preferences and stores them across data stores: a vector database, a key-value database, and a graph database. This hybrid approach ensures that different types of information are stored in the most efficient manner, making subsequent searches quick and effective.

When an AI agent or LLM needs to recall memories, it uses the search() method. Mem0 then performs a search across these data stores, retrieving relevant information from each source. The results are passed through a scoring layer that evaluates them based on relevance, importance, and recency, ensuring that only the most personalized and useful context is surfaced.

The retrieved memories can then be appended to the LLM's prompt as needed, making responses personalized and relevant.


## Common Use Cases

- **Personalized Learning Assistants**: Long-term memory allows learning assistants to remember user preferences, strengths and weaknesses, and progress, providing a more tailored and effective learning experience.

- **Customer Support AI Agents**: By retaining information from previous interactions, customer support bots can offer more accurate and context-aware assistance, improving customer satisfaction and reducing resolution times.

- **Healthcare Assistants**: Long-term memory enables healthcare assistants to keep track of patient history, medication schedules, and treatment plans, ensuring personalized and consistent care.

- **Virtual Companions**: Virtual companions can use long-term memory to build deeper relationships with users by remembering personal details, preferences, and past conversations, making interactions more delightful.

- **Productivity Tools**: Long-term memory helps productivity tools remember user habits, frequently used documents, and task history, streamlining workflows and enhancing efficiency.

- **Gaming AI**: In gaming, AI with long-term memory can create more immersive experiences by remembering player choices, strategies, and progress, adapting the game environment accordingly.

## How is Mem0 different from RAG?

Mem0's memory implementation for Large Language Models (LLMs) offers several advantages over Retrieval-Augmented Generation (RAG):

- **Entity Relationships**: Mem0 can understand and relate entities across different interactions, unlike RAG which retrieves information from static documents. This leads to a deeper understanding of context and relationships.

- **Contextual Continuity**: Mem0 retains information across sessions, maintaining continuity in conversations and interactions, which is essential for long-term engagement applications like virtual companions or personalized learning assistants.

- **Adaptive Learning**: Mem0 improves its personalization based on user interactions and feedback, making the memory more accurate and tailored to individual users over time.

- **Dynamic Updates**: Mem0 can dynamically update its memory with new information and interactions, unlike RAG which relies on static data. This allows for real-time adjustments and improvements, enhancing the user experience.

These advanced memory capabilities make Mem0 a powerful tool for developers aiming to create personalized and context-aware AI applications.

If you have any questions, please feel free to reach out to us using one of the following methods:

Expand Down
Binary file added docs/images/add_architecture.png
Binary file added docs/images/search_architecture.png
3 changes: 2 additions & 1 deletion docs/mint.json
Expand Up @@ -53,7 +53,8 @@
"pages": [
"overview",
"quickstart",
"features"
"playground",
"faqs"
]
},
{
Expand Down
91 changes: 51 additions & 40 deletions docs/overview.mdx
Expand Up @@ -8,8 +8,57 @@ title: Overview

[Mem0](https://mem0.dev/wd) (pronounced "mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. Mem0 remembers user preferences and traits and continuously updates over time, making it ideal for applications like customer support chatbots and AI assistants.

## Understanding Mem0

Mem0, described as "_The Memory Layer for your AI Agents_," leverages advanced LLMs and algorithms to detect, store, and retrieve memories from conversations and interactions. It identifies key information such as facts, user preferences, and other contextual details, and smartly updates memories over time by resolving contradictions, supporting the development of AI agents that evolve with user interactions. When needed, Mem0 employs a smart search system to find memories, ranking them by relevance, importance, and recency so that only the most useful information is surfaced.

Mem0 provides multiple endpoints through which users can interact with their memories. The two main endpoints are `add` and `search`. The `add` endpoint lets users ingest their conversations into Mem0, storing them as memories. The `search` endpoint handles retrieval, allowing users to query their set of stored memories.

### ADD Memories

<Frame caption="Architecture diagram illustrating the process of adding memories.">
<img src="images/add_architecture.png" />
</Frame>

When a user has a conversation, Mem0 uses an LLM to understand and extract important information. This model is designed to capture detailed information while maintaining the full context of the conversation.
Here's how the process works:

1. First, the LLM extracts two key elements:
* Relevant memories
* Important entities and their relationships
2. The system then compares this new information with existing data to identify contradictions, if present.
3. A second LLM evaluates the new information and decides whether to:
* Add it as new data
* Update existing information
* Delete outdated information
4. These changes are automatically made to two databases:
* A vector database (for storing memories)
* A graph database (for storing relationships)

This entire process happens continuously with each user interaction, ensuring that the system always maintains an up-to-date understanding of the user's information.
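The decision in step 3 can be sketched as follows — a simplified stand-in for the verdict the second LLM produces, with hypothetical names and logic:

```python
# Simplified stand-in for the add-phase decision normally made by an LLM:
# compare a newly extracted fact against the store and pick an action.
def reconcile(store: dict, topic: str, value: str) -> str:
    if topic not in store:
        store[topic] = value
        return "ADD"
    if store[topic] != value:
        store[topic] = value  # newer information wins the contradiction
        return "UPDATE"
    return "NONE"             # nothing new to record

store = {}
first = reconcile(store, "diet", "vegetarian")   # "ADD"
second = reconcile(store, "diet", "vegan")       # "UPDATE"
```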

### SEARCH Memories

<Frame caption="Architecture diagram illustrating the memory search process.">
<img src="images/search_architecture.png" />
</Frame>

When a user asks Mem0 a question, the system uses smart memory lookup to find relevant information. Here's how it works:

1. The user submits a question to Mem0
2. The LLM processes this question in two ways:
* It rewrites the question to search the vector database better
* It identifies important entities and their relationships from the question
3. The system then performs two parallel searches:
* It searches the vector database using the rewritten question and semantic search
* It searches the graph database using the identified entities and relationships using graph queries
4. Finally, Mem0 combines the results from both databases to provide a complete answer to the user's question

This approach ensures that Mem0 can find and return all relevant information, whether it's stored as memories in the vector database or as relationships in the graph database.
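The parallel lookup and merge can be sketched like this (an illustrative toy — real Mem0 uses semantic vector search and graph queries rather than string matching):

```python
# Illustrative merge of the two parallel searches; function and data
# shapes are hypothetical, not Mem0's internals.
def hybrid_search(question, vector_memories, graph_relations):
    words = set(question.lower().split())
    # "Vector" search stand-in: keep memories sharing a word with the query.
    vec_hits = [m for m in vector_memories
                if words & set(m.lower().split())]
    # "Graph" search stand-in: keep relations whose entity is mentioned.
    graph_hits = [rel for entity, rel in graph_relations.items()
                  if entity in words]
    merged, seen = [], set()
    for hit in vec_hits + graph_hits:   # de-duplicate, preserve order
        if hit not in seen:
            seen.add(hit)
            merged.append(hit)
    return merged

results = hybrid_search(
    "what does alice like",
    ["Alice likes hiking", "Bob plays chess"],
    {"alice": "Alice -> friend_of -> Bob"},
)
```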

## Getting Started
Mem0 offers two powerful ways to leverage our technology: our [managed platform](/platform/overview) and our [open source solution](/open-source/quickstart).


<CardGroup cols={3}>
<Card title="Quickstart" icon="rocket" href="/quickstart">
Expand All @@ -22,47 +71,9 @@ Mem0 offers two powerful ways to leverage our technology: our [managed platform]
See what you can build with Mem0
</Card>
</CardGroup>
## Key Features

- OpenAI-compatible API: Easily switch between OpenAI and Mem0
- Advanced memory management: Save costs by efficiently handling long-term context
- Flexible deployment: Choose between managed platform or self-hosted solution
<Card title="All Mem0 Features" icon="list" href="/features" horizontal="false">
</Card>

# Memory Classification in Mem0

Mem0 uses a sophisticated classification system to determine which parts of text should be extracted as memories. Not all text content will generate memories, as the system is designed to identify specific types of memorable information.

### When Memories Are Not Generated

There are several scenarios where Mem0 may return an empty list of memories:

- When users input definitional questions (e.g., "What is backpropagation?")
- For general concept explanations that don't contain personal or experiential information
- Technical definitions and theoretical explanations
- General knowledge statements without personal context
- Abstract or theoretical content

### Example Scenarios

```
Input: "What is machine learning?"
No memories extracted - Content is definitional and does not meet memory classification criteria.

Input: "Yesterday I learned about machine learning in class"
Memory extracted - Contains personal experience and temporal context.
```

### Best Practices

To ensure successful memory extraction:
- Include temporal markers (when events occurred)
- Add personal context or experiences
- Frame information in terms of real-world applications or experiences
- Include specific examples or cases rather than general definitions


## Need help?
If you have any questions, please feel free to reach out to us using one of the following methods:

<Snippet file="get-help.mdx"/>