
Fixed message order for JDBC Chat Memory #2781


Open · linarkou wants to merge 4 commits into main from chat-memory-jdbc-fix-message-order

Conversation

linarkou
Contributor

This PR solves some problems with message ordering:

  1. JdbcChatMemory fetches rows in DESC order, and MessageChatMemoryAdvisor gets and adds all messages in that order, from last to first. So the message list needs to be reversed.
  2. After a batch insert without specifying the timestamp manually, all rows have the same timestamp (because the database sets current_timestamp by default). So after fetching rows from the database the message order is unpredictable, since they all share the same timestamp (see the sketch after this list).
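
To make the intent concrete, here is a minimal, hypothetical sketch of both points using Spring's JdbcTemplate. The table and column names (ai_chat_memory, "timestamp") and the method names are illustrative only, not the actual JdbcChatMemory code: each row of a batch gets an explicit, strictly increasing timestamp instead of relying on the column's current_timestamp default, and rows are read back in ascending order so no reversal is needed.

import java.sql.Timestamp;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

import org.springframework.jdbc.core.JdbcTemplate;

// Illustrative sketch only, not the actual JdbcChatMemory implementation.
class ChatMemoryOrderingSketch {

	private final JdbcTemplate jdbcTemplate;

	ChatMemoryOrderingSketch(JdbcTemplate jdbcTemplate) {
		this.jdbcTemplate = jdbcTemplate;
	}

	void saveAll(String conversationId, List<String> contents) {
		// One base instant per batch; each row gets base + its index (in millis),
		// so ORDER BY "timestamp" reproduces insertion order even within a batch.
		AtomicLong seq = new AtomicLong(Instant.now().toEpochMilli());
		this.jdbcTemplate.batchUpdate(
				"INSERT INTO ai_chat_memory (conversation_id, content, \"timestamp\") VALUES (?, ?, ?)",
				contents, contents.size(), (ps, content) -> {
					ps.setString(1, conversationId);
					ps.setString(2, content);
					ps.setTimestamp(3, new Timestamp(seq.getAndIncrement()));
				});
	}

	List<String> findAll(String conversationId) {
		// Ascending order: oldest message first, so callers don't need to reverse the list.
		return this.jdbcTemplate.queryForList(
				"SELECT content FROM ai_chat_memory WHERE conversation_id = ? ORDER BY \"timestamp\" ASC",
				String.class, conversationId);
	}
}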

@linarkou
Contributor Author

@leijendary @sobychacko Can you please take a look? As far as I can see, you were the main contributors/reviewers of this feature.

…r for batch inserts

Signed-off-by: linarkou <abzaltdinov@gmail.com>
@linarkou force-pushed the chat-memory-jdbc-fix-message-order branch from 6cba78c to 2b42f34 on April 17, 2025, 12:21
@leijendary
Contributor

This is good! It is probably better to remove the default value for the timestamp column to avoid unintentional current_timestamp since that generates inconsistencies.

Signed-off-by: linarkou <abzaltdinov@gmail.com>
@linarkou
Contributor Author

This is good! It is probably better to remove the default value for the timestamp column to avoid unintentional current_timestamp since that generates inconsistencies.

✅ Thanks for the suggestion! I've updated the code.

@linarkou
Contributor Author

I just found out that the Cassandra ChatMemory solves the same problem in a different, probably more efficient way:

final AtomicLong instantSeq = new AtomicLong(Instant.now().toEpochMilli());
messages.forEach(msg -> {
	if (msg.getMetadata().containsKey(CONVERSATION_TS)) {
		msg.getMetadata().put(CONVERSATION_TS, Instant.ofEpochMilli(instantSeq.getAndIncrement()));
	}
	add(conversationId, msg);
});

What do you think? Should we implement the same logic here?

@ThomasVitale
Contributor

ThomasVitale commented Apr 25, 2025

@linarkou thanks so much for this improvement! We have just delivered some changes to the ChatMemory API which include the introduction of a ChatMemoryRepository API to separate the two different concerns: memory management strategy and storage mechanism. The JdbcChatMemory has now been superseded by JdbcChatMemoryRepository.

The ChatMemoryRepository.saveAll() method is meant to replace the existing memory for a given conversationId, since the management is performed by a given ChatMemory implementation (e.g. removing older messages that exceed the window size). Still, there might be some issues regarding the timestamps of messages saved in a batch, so it would be great to get this change out before the next release. Thanks a lot for your contributions!

The "lastN" concept has been deprecated (so the DESC sorting is not needed anymore) in favour of strategy-specific ChatMemory implementation. By default, the MessageWindowChatMemory is used, which handles correctly ensuring the memory doesn't exceed a certain limit. The "lastN" option had the major issue of not considering the type of messages returned, but just returning the last N (e.g. SystemMessages might end up being excluded since they are usually the oldest, and therefore resulting in errors). I hope the new APIs can help bring more flexibility and robustness.

Upgrade Notes: https://docs.spring.io/spring-ai/reference/upgrade-notes.html#_chat_memory
New Chat Memory Docs: https://docs.spring.io/spring-ai/reference/api/chat-memory.html
PR with the change: #2890
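
For context, here is a rough sketch of how these pieces are meant to fit together. The builder and method names follow the linked documentation, but the exact signatures and package paths should be treated as assumptions, not a verified implementation.

import java.util.List;

import org.springframework.ai.chat.memory.ChatMemory;
import org.springframework.ai.chat.memory.ChatMemoryRepository;
import org.springframework.ai.chat.memory.MessageWindowChatMemory;
import org.springframework.ai.chat.memory.repository.jdbc.JdbcChatMemoryRepository;
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.jdbc.core.JdbcTemplate;

// Rough sketch; builder methods and package paths are assumptions based on the linked docs.
class NewChatMemorySketch {

	ChatMemory chatMemory(JdbcTemplate jdbcTemplate) {
		// Storage concern: where and how messages are persisted.
		ChatMemoryRepository repository = JdbcChatMemoryRepository.builder()
			.jdbcTemplate(jdbcTemplate)
			.build();

		// Management concern: which messages are kept; the sliding window replaces "lastN".
		return MessageWindowChatMemory.builder()
			.chatMemoryRepository(repository)
			.maxMessages(20)
			.build();
	}

	void usage(ChatMemory chatMemory) {
		String conversationId = "conv-1";
		chatMemory.add(conversationId, List.of(new UserMessage("Hello")));
		// Returns the managed window oldest-first; no DESC sorting or manual reversal needed.
		chatMemory.get(conversationId).forEach(System.out::println);
	}
}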
