This repository was archived by the owner on Jul 16, 2025. It is now read-only.
Add basic setup for memory injections to system prompt #387
Merged
Conversation
The pull request branch was updated from 41a68de to 7d9ba6b, then from 7d9ba6b to c6cec4f.
chr-hertel reviewed on Jul 13, 2025
chr-hertel reviewed on Jul 13, 2025
Love it - thanks @DZunke! I think adding it to the existing system prompt is the right way to go - not all providers support having multiple system prompts. This can already be seen in action with the bundle, using a custom MemoryProviderInterface implementation to inject user-specific data 👍
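For illustration only, a minimal self-contained sketch of such a user-specific provider could look like the following. `MemoryProviderSketch`, `loadMemory()`, and `UserProfileMemoryProvider` are placeholder names standing in for the real `MemoryProviderInterface` contract from this PR, whose actual namespace and method signature may differ:

```php
<?php

// Placeholder contract standing in for the library's MemoryProviderInterface;
// the real interface added in this PR may declare a different method signature.
interface MemoryProviderSketch
{
    /** Returns memory text to append to the system prompt, or null when there is none. */
    public function loadMemory(): ?string;
}

// Hypothetical provider that injects user-specific data, as described in the comment above.
final class UserProfileMemoryProvider implements MemoryProviderSketch
{
    public function __construct(
        private readonly string $userName,   // in a real app this would come from the security layer
        private readonly string $userLocale,
    ) {
    }

    public function loadMemory(): ?string
    {
        return sprintf(
            'The user you are assisting is %s and prefers answers in %s.',
            $this->userName,
            $this->userLocale,
        );
    }
}
```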
Co-authored-by: Christopher Hertel <mail@christopher-hertel.de>
The pull request branch was updated again from 912623a to b95883d.
chr-hertel approved these changes on Jul 14, 2025
Thanks @DZunke - merged here, but I'll have a look now at symfony/ai#117
chr-hertel added a commit to symfony/ai that referenced this pull request on Jul 16, 2025:

…prompt (DZunke)

This PR was merged into the main branch.

Discussion
----------

[Agent] Add basic setup for memory injections to system prompt

| Q            | A   |
| ------------ | --- |
| Bug fix?     | no  |
| New feature? | yes |
| Docs?        | yes |
| Issues       |     |
| License      | MIT |

See original contribution at php-llm/llm-chain#387. The commit message then quotes the pull request description, which is reproduced in full at the end of this conversation.

Commits
-------

d9b13f9 Add basic setup for memory injections to system prompt
symfony-splitter pushed a commit to symfony/ai-agent that referenced this pull request on Jul 16, 2025, with the same merge commit message as above.
This PR introduces a flexible memory system that allows the LLM to recall contextual information that is permanent for the conversation. Unlike tools, it is always utilized as long as the use_memory option is not disabled. It would be possible to fetch memory via a tool with a system instruction like "always call the tool foo_bar", but to me that feels like bad design, since it forces the model into a tool call without any real need.

Currently I have added just two memory providers to show the idea. I could also imagine a write layer to fill the memory alongside the read layer; that could work well with, for example, a graph database and tools for memory handling. But those are ideas for the future.

For this first draft I hope it is clear what I was aiming for. I decided to inject the memory into the system prompt, when memory is available, instead of adding a second system prompt to the message bag. That was a 50/50 decision; I tried both and they seem to work equally well, at least for OpenAI. I am open to changing it back.
What do you think?
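As a rough, self-contained sketch of the injection strategy described above (collect memory from the providers and append it to the existing system prompt rather than adding a second system message), something like the following could work. All class, method, and option names here are assumptions for illustration, not the code actually merged in this PR:

```php
<?php

// Illustrative sketch only: SystemPromptMemoryInjector, injectInto() and the
// $useMemory flag are placeholders, not the classes introduced by this PR.
// MemoryProviderSketch is the same placeholder interface as in the earlier sketch.
interface MemoryProviderSketch
{
    public function loadMemory(): ?string;
}

final class SystemPromptMemoryInjector
{
    /** @param MemoryProviderSketch[] $providers */
    public function __construct(
        private readonly array $providers,
        private readonly bool $useMemory = true, // stands in for the "use_memory" option
    ) {
    }

    public function injectInto(string $systemPrompt): string
    {
        if (!$this->useMemory) {
            return $systemPrompt;
        }

        $memories = [];
        foreach ($this->providers as $provider) {
            $memory = $provider->loadMemory();
            if (null !== $memory && '' !== $memory) {
                $memories[] = $memory;
            }
        }

        if ([] === $memories) {
            return $systemPrompt;
        }

        // Append the collected memory to the one existing system prompt
        // instead of adding a second system message to the message bag.
        return $systemPrompt."\n\n# Conversation Memory\n".implode("\n", $memories);
    }
}
```

Appending to the single existing system prompt keeps the message bag compatible with providers that only accept one system message, which matches the reasoning in the review comment above.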