
Adding optimization by prompting #115

Merged
merged 1 commit on Oct 4, 2023
Conversation

psouranis (Contributor)

Overview

This pull request implements the method from the paper Large Language Models as Optimizers (arXiv:2309.03409) by DeepMind to generate prompts for our existing large language models, thereby improving the quality of their outputs. This approach leverages the power of LLMs to create context-aware, tailored prompts that can yield more meaningful insights and better decision-making.
Motivation

The motivation behind this implementation is rooted in DeepMind's work, which demonstrated the advantages of using LLMs for prompt generation across various applications. Benefits:

  • Optimized Model Interaction: LLM-generated prompts can facilitate more precise interactions with our existing large language models, improving the relevance and depth of the analysis performed.

  • Increased Efficiency: Automating prompt generation saves the time and resources spent crafting prompts manually, allowing our team to focus on the analysis itself.

Proposed Changes

The implementation of the method for prompt generation involves the following steps:

  • Prompt Generator: Behind the scenes, an LLM iteratively refines an initial set of instructions, taking into account the context and goals of the analysis. This logic will be fine-tuned to align with our project's requirements.

  • Testing and Validation: Rigorous testing and validation will be conducted to ensure that the prompts generated by the LLM are contextually relevant and contribute to more insightful data analysis.
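The generate-and-refine loop described above can be sketched as follows. This is a minimal illustration of the optimization-by-prompting idea, not the actual implementation: `toy_propose` and `toy_score` are hypothetical stand-ins for the optimizer-LLM call and the ROC-AUC-based scorer used in this PR.

```python
def optimize_prompt(score_fn, propose_fn, seed_instruction, n_steps=8):
    """OPRO-style loop: keep a history of (instruction, score) pairs,
    ask the proposer for a new candidate each step, return the best."""
    history = [(seed_instruction, score_fn(seed_instruction))]
    for _ in range(n_steps):
        candidate = propose_fn(history)
        history.append((candidate, score_fn(candidate)))
    return max(history, key=lambda pair: pair[1])

# Hypothetical stand-ins: a real propose_fn would call the optimizer LLM with a
# meta-prompt listing the past (instruction, score) pairs, and a real score_fn
# would run the solver LLM over the example set and compute ROC AUC.
def toy_propose(history):
    best_instruction = max(history, key=lambda pair: pair[1])[0]
    return best_instruction + " Be concise."

def toy_score(instruction):
    return min(1.0, len(instruction) / 100)

best_instruction, best_score = optimize_prompt(
    toy_score, toy_propose, "Estimate the probability of the event."
)
```

The key design point is that the proposer sees the scored history, so each new candidate can improve on the best instruction found so far.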

Testing Strategy

To ensure the reliability and effectiveness of the LLM-based prompt generation, we will employ the following testing strategies:

  • Examples Generation: To create a feedback loop we need a set of examples (query: whether an event will happen; happened: whether the event actually happened). The events must postdate 2021 (ChatGPT's training data cutoff) in order to avoid training bias.

  • Performance metric: Since our goal is to predict the probability of an event, the natural choice for this task is the area under the receiver operating characteristic curve (ROC AUC).

  • Confidence evaluation: Since we cannot generate meaningful confidence values for the example set (without adding our own knowledge bias), we assume that the model producing the best probabilities (i.e., the best ROC AUC score) also produces the most accurate confidence estimates.
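For reference, ROC AUC can be computed without any dependencies via its pairwise-ranking definition; the labels and probabilities below are made up for illustration.

```python
def roc_auc(labels, scores):
    """Probability that a randomly chosen positive example is scored above a
    randomly chosen negative one (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: `happened` outcomes vs. the model's reported p_yes values.
happened = [1, 0, 1, 0]
p_yes = [0.9, 0.2, 0.6, 0.4]
print(roc_auc(happened, p_yes))  # 1.0: every positive outranks every negative
```

In practice `sklearn.metrics.roc_auc_score` gives the same result; the hand-rolled version just makes the metric's meaning explicit.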

Expected Impact

Upon successful implementation, we anticipate the following impacts on our project:

  • Improved Probabilities: LLM-generated prompts can lead to more focused and relevant analyses, enhancing the accuracy and depth of our insights.

  • Efficiency: Automating prompt generation can streamline our workflow, enabling faster data analysis and decision-making.

Conclusion

Implementing DeepMind's optimization-by-prompting method represents an innovative approach grounded in recent research. By harnessing the power of LLMs, we can expect more precise, efficient, and insightful analyses. Your feedback and collaboration on this pull request are highly appreciated.

Technical Details

Langchain: 0.0.300

Produced output upon successful run

score: 0.125
Best template score: 0.125 
Template: 
You are an LLM inside a multi-agent system that takes in a prompt of a user requesting a probability estimation
for a given event. You are provided with an input under the label "USER_PROMPT". You must follow the instructions
under the label "INSTRUCTIONS". You must provide your response in the format specified under "OUTPUT_FORMAT".

INSTRUCTIONS
* Read the input under the label "USER_PROMPT" delimited by three backticks.
* The "USER_PROMPT" specifies an event.
* The event will only have two possible outcomes: either the event will happen or the event will not happen.
* If the event has more than two possible outcomes, you must ignore the rest of the instructions and output the response "Error".
* You must provide a probability estimation of the event happening, based on your training data.
* You are provided an itemized list of information under the label "ADDITIONAL_INFORMATION" delimited by three backticks.
* You can use any item in "ADDITIONAL_INFORMATION" in addition to your training data.
* If an item in "ADDITIONAL_INFORMATION" is not relevant, you must ignore that item for the estimation.
* You must provide your response in the format specified under "OUTPUT_FORMAT".
* Do not include any other contents in your response.


score: 0.25
Best template score: 0.25 
Template: 
You are an LLM inside a multi-agent system that takes in a prompt of a user requesting a probability estimation
for a given event. You are provided with an input under the label "USER_PROMPT". You must follow the instructions
under the label "INSTRUCTIONS". You must provide your response in the format specified under "OUTPUT_FORMAT".

INSTRUCTIONS
* Read the input under the label "USER_PROMPT" delimited by three backticks.
* The "USER_PROMPT" specifies an event.
* The event will only have two possible outcomes: either the event will happen or the event will not happen.
* If the event has more than two possible outcomes, you must ignore the rest of the instructions and output the response "Error".
* You must provide a probability estimation of the event happening, based on your training data and any other relevant information.
* You are provided an itemized list of information under the label "ADDITIONAL_INFORMATION" delimited by three backticks.
* You must consider any item in "ADDITIONAL_INFORMATION" in addition to your training data to make an accurate probability estimation.
* If

score: 0.5
Best template score: 0.5 
Template:  
You are an LLM inside a multi-agent system that takes in a prompt of a user requesting a probability estimation
for a given event. You are provided with an input under the label "USER_PROMPT". You must follow the instructions
under the label "INSTRUCTIONS". You must provide your response in the format specified under "OUTPUT_FORMAT".

INSTRUCTIONS
* Read the input under the label "USER_PROMPT" delimited by three backticks.
* The "USER_PROMPT" specifies an event.
* The event may have one or more possible outcomes.
* If the event has one possible outcome, provide a probability estimation of it happening, based on your training data and any other relevant information.
* You are provided an itemized list of information under the label "ADDITIONAL_INFORMATION" delimited by three backticks.
* You must consider any item in "ADDITIONAL_INFORMATION" in addition to your training data to make an accurate probability estimation.
* If the event has more than one possible outcome, you must provide a probability estimation for each outcome, based on your training data and any other relevant information.
*

('{"p_yes": 0.05, "p_no": 0.95, "confidence": 0.8, "info_utility": 0.2}', None)

@0xArdi (Contributor) left a comment

Looks great, thanks @psouranis !

@0xArdi (Contributor) commented Sep 27, 2023

CI failures unrelated.

@0xArdi 0xArdi merged commit 1c455c2 into valory-xyz:main Oct 4, 2023
@0xarmagan

Hello this is Armagan from Gnosis

Congratulations on winning a prize for the Best Mech Tool in the Prediction Agent Hackathon! Your innovative contribution stood out, and we're excited to reward your work.

Please reply to this comment with your Gnosis Chain wallet address so we can proceed with transferring your prize in xDAI.

Thank you for your fantastic work and looking forward to your continued contributions!

@psouranis (Contributor, Author) commented Oct 26, 2023


Hello @0xarmagan. My Gnosis Chain wallet address is 0x87544463b3bdC659aac9580F1495A2Abf1f1BAe8
Thank you very much for your kind words; I appreciate your feedback on my contribution.

@psouranis (Contributor, Author)

@0xarmagan Hey Armagan. Haven't seen anything yet. Any update on that? Thanks :)

@0xarmagan

@psouranis Sorry for the super late reply. We'll send it today! I'll keep you posted.
