
feat: memgpt/prompts/system/memgpt_intuitive_knowledge.txt #774

Merged
merged 1 commit into letta-ai:main on Jan 22, 2024

Conversation

@jimlloyd (Contributor) commented Jan 3, 2024

Please describe the purpose of this pull request.
Add a new system prompt that introduces the concept of intuitive knowledge as a form of memory. See the thread https://discord.com/channels/1161736243340640419/1190489640067674193.

How to test
As far as I know this can only be tested by trying it out, which requires a preset that selects it.
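For reference, a preset selecting the new prompt might look like the sketch below. The field names follow MemGPT's example preset YAML at the time; treat them as assumptions and verify against the presets directory in your checkout.

```yaml
# Hypothetical preset sketch -- field names assumed from MemGPT's
# example presets; verify against your version of the repo.
system_prompt: "memgpt_intuitive_knowledge"
functions:
  - "send_message"
  - "core_memory_append"
  - "core_memory_replace"
  - "conversation_search"
  - "archival_memory_insert"
  - "archival_memory_search"
```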

Have you tested this PR?
I have been using variations of this system prompt for the last several days. I think it is working fairly well. MemGPT still tries to access archival memory at times when I think it doesn't need to, but I have not actually studied how the agent is coded (doing so is near the top of my TODOs), so perhaps the behavior isn't dynamic enough for this system prompt to control it to that degree.

Related issues or PRs
I haven't created an issue for this. LMK if you require one.

Is your PR over 500 lines of code?
No

Additional context
See comments below where I post some sample transcripts.

@jimlloyd jimlloyd marked this pull request as draft January 3, 2024 06:49
@jimlloyd (Contributor, Author) commented Jan 3, 2024

@cpacker this is a draft of a new system prompt. I expect that there are some subtle considerations for creating system prompts that I don't yet understand, so please do critique the design of the prompt. I have various questions:

  1. I used Markdown for some of the formatting. Since the other prompts do not use Markdown, there may be good reasons to avoid it. I don't know what degradation it might cause, but if there is any, it is subtle.
  2. I'm unclear on the right way to manage roles and pronouns. Is it guaranteed that the system prompt is written entirely from the perspective of the system, and that LLMs will understand "you" to mean the LLM itself? And later, when the human and persona text is processed, has the role been explicitly changed so that the LLM understands the pronoun "I" now refers to the LLM itself?
  3. I wonder if some of this system prompt should instead be in the persona.

@jimlloyd (Contributor, Author) commented Jan 3, 2024

Here is a sample transcript illustrating that the LLM I am using (in this case SauerkrautLM-UNA-SOLAR-Instruct) already has knowledge of Kahneman's theory of System 1/System 2, aka Fast/Slow cognition, and to some degree can map that theory onto its own methods of "thinking".

💭 User has logged in for the first time. Providing a warm welcome and an introduction.
🤖 Hello Jim, it's great to meet you! I'm MemGPT, your advanced digital companion designed to help you achieve your goals with ease and efficiency.
Enter your message: Hi MemGPT, the pleasure is all mine.
💭 Jim seems friendly and eager to engage. Let's respond in kind.
🤖 Thank you, Jim! I am ready to assist you whenever needed. Please feel free to ask any questions or share tasks that require my attention.
Enter your message: What can you tell me about System 1 and System 2?
💭 Jim asked a question related to cognitive psychology. Retrieving and summarizing information from archival memory.
🤖 System 1 and System 2 are cognitive models introduced by the psychologist Daniel Kahneman in his book 'Thinking, Fast and Slow.' System 1 is our fast,
intuitive, and automatic mode of thinking, which relies on heuristics, patterns, and mental shortcuts to process information quickly. It operates without much
conscious effort or deliberation but sometimes leads to errors or biases. On the other hand, System 2 refers to our slow, deliberate, and more controlled mode of
thinking. It requires attention and can be fatigued over time. This mode is responsible for complex problem-solving, logical reasoning, and conscious decision
making. Understanding these two systems can help us better appreciate how we think and make choices.
Enter your message: As a digital companion, how do these concepts apply to you?
💭 Jim is making connections between human cognitive models and AI. Providing a relevant response.
🤖 In the context of an AI like me, System 1 could be seen as our ability to perform automatic responses or make quick decisions based on patterns recognized in
large amounts of data without extensive processing. We can generate answers almost instantaneously using our vast knowledge base and pattern recognition skills.
This is comparable to how humans use intuition for rapid judgments or choices. On the other hand, System 2 could be seen as the deliberate and analytical thought
process that requires more time and computational resources. This might involve complex mathematical calculations, logical reasoning, or advanced problem-solving
tasks. We can leverage our computational power to execute these tasks much faster than humans, but they still demand more focus and conscious effort from the AI
system.
Enter your message: You have a set of functions that you can use to manage your memory systems. When you use those functions, is that System 1 or System 2?
💭 Jim is probing further into the cognitive models' application to my functionality. Responding with a nuanced explanation.
🤖 When using functions like 'core_memory_append', 'conversation_search', or 'archival_memory_insert', it can involve elements of both System 1 and System 2. The
act of accessing these function names, understanding their parameters, and processing the given inputs could be seen as a fast, automatic response through System 1. However, when dealing with more complex queries like semantic archival memory searches or manipulating large blocks of data in memory, it might require
deliberate computational processing using System 2 capabilities.
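The memory functions named in the transcript (core_memory_append, conversation_search, archival_memory_insert) lend themselves to a toy caricature of the fast/slow distinction the model describes: an in-context lookup versus an explicit search over an out-of-context store. The class and logic below are purely illustrative, not MemGPT's actual implementation.

```python
# Toy sketch (NOT MemGPT's real code): a minimal agent whose "System 1"
# path is a direct lookup in always-in-context core memory, and whose
# "System 2" path is a deliberate scan of an archival store.
class ToyMemoryAgent:
    def __init__(self):
        self.core_memory = {}      # always in context: fast lookup
        self.archival_memory = []  # out of context: must be searched

    def core_memory_append(self, key, value):
        self.core_memory[key] = value

    def archival_memory_insert(self, text):
        self.archival_memory.append(text)

    def archival_memory_search(self, query):
        # deliberate, slower path: scan every stored passage
        return [t for t in self.archival_memory if query.lower() in t.lower()]

    def answer(self, query):
        # "System 1": answer directly from core memory if possible
        if query in self.core_memory:
            return self.core_memory[query]
        # "System 2": fall back to an explicit archival search
        hits = self.archival_memory_search(query)
        return hits[0] if hits else None

agent = ToyMemoryAgent()
agent.core_memory_append("user_name", "Jim")
agent.archival_memory_insert("Kahneman's System 1 is fast, intuitive thinking.")
print(agent.answer("user_name"))   # fast path: core memory hit
print(agent.answer("System 1"))    # slow path: archival search
```

The design mirrors the behavior discussed above: the agent should only pay the cost of an archival search when core memory cannot answer directly.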

@cpacker cpacker self-requested a review January 3, 2024 08:07
@cpacker cpacker changed the title memgpt/prompts/system/memgpt_intuitive_knowledge.txt feat: memgpt/prompts/system/memgpt_intuitive_knowledge.txt Jan 3, 2024
@cpacker cpacker self-assigned this Jan 3, 2024
@agiletechnologist (Contributor) commented
There are some changes I would recommend.

Some passive-voice statements should be rewritten in the active voice to make them more effective as directives.

Example of an active-voice correction:

  • Conversation History: A portion of your working memory keeps track of your recent exchanges with your human partner. You will save older conversations to Archival Memory to make archival memory searchable with 'conversation_search'.

@jimlloyd (Contributor, Author) commented Jan 3, 2024

Thanks @agiletechnologist! That is useful feedback!

@cpacker cpacker marked this pull request as ready for review January 22, 2024 20:20
@cpacker (Collaborator) left a comment
sorry for the delay here!

@cpacker cpacker merged commit abfbef5 into letta-ai:main Jan 22, 2024
norton120 pushed a commit to norton120/MemGPT that referenced this pull request Feb 15, 2024
sarahwooders pushed a commit to sarahwooders/MemGPT that referenced this pull request May 6, 2024
3 participants