how to set up the memory #183
Replies: 2 comments
-
Hi @jonison-zhang, did you find a solution?
-
Hey @jonison-zhang! I've actually been using the langchain and langgraph frameworks for quite a while, and I came across the same requirement during the development of a product for my team. In all my experience with general software development, what I've realized is that a good library/framework almost always has the solution to a requirement somewhere in its documentation. So it did here. Basically:

```python
from typing import TypedDict, Annotated, Sequence
import operator
import uuid

from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
from langgraph.graph import StateGraph
# There are other history classes as well, but I like Postgres so I used that
from langchain_community.chat_message_histories import PostgresChatMessageHistory


class State(TypedDict):
    # operator.add makes this channel accumulate messages instead of overwriting
    chat_history: Annotated[Sequence[BaseMessage], operator.add]
    question: str
    output: str
    # ... your other state variables


graph = StateGraph(State)
# ... your graph node definitions
# ...
app = graph.compile()

session_id = str(uuid.uuid4())
user_input = input("Your question here...: ")

history = PostgresChatMessageHistory(
    session_id=session_id,
    connection_string=...,  # your connection string here
)

# Persist the user's message, then feed the stored history into the graph
history.add_message(HumanMessage(content=user_input))

result = app.invoke({
    "question": user_input,
    "chat_history": history.messages,
})

# Persist the model's reply so the next invocation sees it too
history.add_message(AIMessage(content=result["output"]))
```

It's not perfect code, but it'll do for understanding.

Explanation

Basically, in langgraph, state doesn't persist between invocations of the graph (meaning, it's stateless, I guess). So I thought: why not make it stateful by providing external memory as a state variable, which the LLM chain (one or more nodes) can pick up and use in its context? I searched the documentation for external memory storage and voilà, there it is, in the API documentation of langchain: https://api.python.langchain.com/en/latest/community_api_reference.html#module-langchain_community.chat_message_histories

Hope this helps! Let me know if you have any questions.
-
Hello, I am using your great framework (multiple agents) to build a scientific bot that answers scientists' questions. One feature of the application is memory for chat_history, so that previous chat records can be pulled up for future use. I am trying to define the memory in `def create_agent`, in particular in the agent executor, by passing `memory`. But when I execute the code, the memory either makes the agent get stuck in an iteration loop or blocks the human prompt from being passed to the first agent. I am a bit confused; would you please give me some ideas about how to set up the memory for chat_history? Many thanks