---
description: LangChain Memory Nodes
---

# Memory
Memory allows you to chat with an AI as if the AI had a memory of previous conversations:
```
Human: hi i am bob
AI: Hello Bob! It's nice to meet you. How can I assist you today?
Human: what's my name?
AI: Your name is Bob, as you mentioned earlier.
```
Under the hood, these conversations are stored in arrays or databases and provided as context to the LLM. For example:
```
You are an assistant to a human, powered by a large language model trained by OpenAI.

Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.

Current conversation:
{history}
```
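To make the mechanics concrete, here is a minimal TypeScript sketch of a buffer-style memory. The names (`ChatMessage`, `buildPrompt`, `recordTurn`) are illustrative assumptions, not Flowise internals; the point is only that past turns are kept in an array and rendered into the `{history}` placeholder on every call:

```typescript
// Illustrative sketch of a buffer-style memory (not Flowise internals):
// past turns live in an array and are serialized into {history}.
interface ChatMessage {
  role: "Human" | "AI";
  content: string;
}

const history: ChatMessage[] = [];

// Render the stored turns into the prompt template's {history} slot,
// then append the new question.
function buildPrompt(template: string, question: string): string {
  const rendered = history
    .map((m) => `${m.role}: ${m.content}`)
    .join("\n");
  return template.replace("{history}", rendered) + `\nHuman: ${question}\nAI:`;
}

// After each exchange, store both sides so the next call sees them.
function recordTurn(question: string, answer: string): void {
  history.push({ role: "Human", content: question });
  history.push({ role: "AI", content: answer });
}
```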
Flowise provides the following memory nodes:

- Buffer Memory
- Buffer Window Memory
- Conversation Summary Memory
- Conversation Summary Buffer Memory
- DynamoDB Chat Memory
- MongoDB Atlas Chat Memory
- Redis-Backed Chat Memory
- Upstash Redis-Backed Chat Memory
- Zep Memory
By default, the UI and Embedded Chat automatically separate conversations from different users. This is done by generating a unique `chatId` for each new interaction; that logic is handled under the hood by Flowise.
You can separate the conversations for multiple users by specifying a unique `sessionId`:
- For every memory node, you should be able to see an input parameter called **Session ID**.
- In the POST body of a request to `/api/v1/prediction/{your-chatflowid}`, specify the `sessionId` in `overrideConfig`:
```json
{
  "question": "hello!",
  "overrideConfig": {
    "sessionId": "user1"
  }
}
```
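As a concrete illustration, here is a minimal sketch of calling the Prediction API from TypeScript. Only the endpoint path and body shape come from the docs above; the host, port, and chatflow ID are placeholder assumptions (Flowise listens on port 3000 by default):

```typescript
// Sketch: POST to the Prediction API with a per-user sessionId.
// Host and chatflow ID are placeholders for your own deployment.
const PREDICTION_URL =
  "http://localhost:3000/api/v1/prediction/your-chatflowid";

async function ask(question: string, sessionId: string): Promise<unknown> {
  const response = await fetch(PREDICTION_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // sessionId inside overrideConfig keys the memory to this user
    body: JSON.stringify({ question, overrideConfig: { sessionId } }),
  });
  return response.json();
}

// Each sessionId gets its own isolated conversation history:
ask("hi i am bob", "user1").then(console.log);
ask("what's my name?", "user2").then(console.log); // user2 has no memory of bob
```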
You can then retrieve or delete the messages of a session via the Chat Message API:

- GET `/api/v1/chatmessage/{your-chatflowid}`
- DELETE `/api/v1/chatmessage/{your-chatflowid}`

Both endpoints accept the following query parameters:
| Query Param | Type | Value |
| --- | --- | --- |
| sessionId | string | |
| sort | enum | ASC or DESC |
| startDate | string | |
| endDate | string | |
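For instance, a minimal sketch of reading and then clearing one session's history with these query parameters (host, port, and chatflow ID are again placeholders):

```typescript
// Sketch: list and delete one session's messages via the Chat Message API.
async function main(): Promise<void> {
  const base = "http://localhost:3000/api/v1/chatmessage/your-chatflowid";

  // Retrieve user1's messages, oldest first
  const query = new URLSearchParams({ sessionId: "user1", sort: "ASC" });
  const messages = await fetch(`${base}?${query}`).then((res) => res.json());
  console.log(messages);

  // Remove that session's history entirely
  await fetch(`${base}?sessionId=user1`, { method: "DELETE" });
}

main();
```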
All conversations can also be visualized and managed from the UI.
For the OpenAI Assistant, Threads are used to store conversations.