This repository was archived by the owner on Jul 16, 2025. It is now read-only.

Add basic setup for memory injections to system prompt #387

Merged
merged 3 commits into php-llm:main from chain-memory-processor
Jul 14, 2025

Conversation

@DZunke (Contributor) commented Jul 11, 2025

This PR introduces a flexible memory system that allows the LLM to recall contextual information that is permanent for the conversation. In contrast to tools, it is always utilized as long as the use_memory option is not disabled. It would be possible to fetch memory via a tool with a system instruction like "always call the tool foo_bar", but to me this feels like bad design, as it forces the model to make a tool call without real need.

Currently I have added just two memory providers to illustrate the idea. I could also imagine a write layer to fill a memory alongside the read layer; this could, for example, work well with a graph database and tools for memory handling. But those are ideas for the future.

So for this first attempt I hope you get what I was aiming for. I decided to inject the memory into the system prompt, when it is available, instead of adding a second system prompt to the message bag. But that was a 50/50 decision: I tried both and they seem to work equally well, at least for OpenAI. I am open to changing it back.

What do you think?
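For context, here is a rough sketch of the idea described above. This is not the code from the PR itself: the interface and method names below are assumptions for illustration only (MemoryProviderInterface is the name used later in this thread), and the actual contract in llm-chain may differ.

```php
<?php

// Hypothetical sketch, not the PR's actual code: a provider supplies
// permanent contextual facts, and the chain appends them to the system
// prompt as long as the (assumed) use_memory option is not disabled.

interface MemoryProviderInterface
{
    /** Returns memory content to merge into the system prompt, or null if there is none. */
    public function loadMemory(): ?string;
}

final class StaticMemoryProvider implements MemoryProviderInterface
{
    /** @param list<string> $facts permanent facts for this conversation */
    public function __construct(private readonly array $facts)
    {
    }

    public function loadMemory(): ?string
    {
        if ([] === $this->facts) {
            return null;
        }

        return "## Permanent conversation context\n- ".implode("\n- ", $this->facts);
    }
}
```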

@DZunke force-pushed the chain-memory-processor branch from 41a68de to 7d9ba6b on July 11, 2025 13:53
@DZunke force-pushed the chain-memory-processor branch from 7d9ba6b to c6cec4f on July 11, 2025 14:01
@chr-hertel (Member)

Love it - thanks @DZunke!

I think adding it to the existing system prompt is the right way to go - not all providers support having multiple system prompts.

I can already see this in action with the bundle, using a custom MemoryProviderInterface implementation to inject user-specific data 👍
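To make that use case concrete, here is a minimal sketch of a user-specific provider, reusing the assumed interface shape from the sketch above; how the profile is resolved (security layer, session, database) is application-specific and only stubbed out here.

```php
// Hypothetical example of the use case above: injecting user-specific
// data into the system prompt. Reuses the assumed MemoryProviderInterface
// from the earlier sketch; the profile array is a stand-in for however
// the application resolves the current user.

final class UserProfileMemoryProvider implements MemoryProviderInterface
{
    /** @param array{name: string, locale: string}|null $profile */
    public function __construct(private readonly ?array $profile)
    {
    }

    public function loadMemory(): ?string
    {
        if (null === $this->profile) {
            return null;
        }

        return sprintf(
            'The current user is %s. Address them by name and answer in their locale (%s).',
            $this->profile['name'],
            $this->profile['locale'],
        );
    }
}
```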

DZunke and others added 2 commits July 14, 2025 08:39
@chr-hertel (Member)

Thanks @DZunke - merged here, but I'll have a look now at symfony/ai#117

@chr-hertel chr-hertel merged commit 5b9e952 into php-llm:main Jul 14, 2025
7 checks passed
chr-hertel added a commit to symfony/ai that referenced this pull request Jul 16, 2025
…prompt (DZunke)

This PR was merged into the main branch.

Discussion
----------

[Agent] Add basic setup for memory injections to system prompt

| Q             | A
| ------------- | ---
| Bug fix?      | no
| New feature?  | yes
| Docs?         | yes
| Issues        |
| License       | MIT

See original contribution at php-llm/llm-chain#387

> This PR introduces a flexible memory system that allows the LLM to recall contextual information that is permanent for the conversation. In contrast to tools, it is always utilized as long as the use_memory option is not disabled. It would be possible to fetch memory via a tool with a system instruction like "always call the tool foo_bar", but to me this feels like bad design, as it forces the model to make a tool call without real need.
>
> Currently I have added just two memory providers to illustrate the idea. I could also imagine a write layer to fill a memory alongside the read layer; this could, for example, work well with a graph database and tools for memory handling. But those are ideas for the future.
>
> So for this first attempt I hope you get what I was aiming for. I decided to inject the memory into the system prompt, when it is available, instead of adding a second system prompt to the message bag. But that was a 50/50 decision: I tried both and they seem to work equally well, at least for OpenAI. I am open to changing it back.
>
> What do you think?

Commits
-------

d9b13f9 Add basic setup for memory injections to system prompt
symfony-splitter pushed a commit to symfony/ai-agent that referenced this pull request Jul 16, 2025