
How to custom prompt in agent type CHAT_CONVERSATIONAL_REACT_DESCRIPTION #10721

Closed
easontsai opened this issue Sep 18, 2023 · 6 comments
Labels
Ɑ: agent Related to agents module 🤖:question A specific question about the codebase, product, project, or how to use a feature

Comments

@easontsai

agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    return_intermediate_steps=True,
    handle_parsing_errors=True,
    memory=memory
)

How do I overwrite the PREFIX, SUFFIX, and FORMAT_INSTRUCTIONS when using CHAT_CONVERSATIONAL_REACT_DESCRIPTION and creating the agent with the initialize_agent function?

@dosubot dosubot bot added Ɑ: agent Related to agents module 🤖:question A specific question about the codebase, product, project, or how to use a feature labels Sep 18, 2023
@dosubot

dosubot bot commented Sep 18, 2023

🤖

Hello,

To overwrite the PREFIX and SUFFIX when using the CHAT_CONVERSATIONAL_REACT_DESCRIPTION agent type in the LangChain framework, you can pass them as the system_message and human_message arguments to the from_llm_and_tools method of the ConversationalChatAgent class. FORMAT_INSTRUCTIONS is supplied by the output parser, so overriding it requires a custom output parser.

Here is an example of how you can do this:

from langchain.agents.conversational_chat.base import ConversationalChatAgent
from langchain.agents import Tool

# Define your tools (BaseTool is abstract, so use concrete Tool instances)
tools = [
    Tool(name="tool1", func=lambda q: "result1", description="description1"),
    Tool(name="tool2", func=lambda q: "result2", description="description2"),
]

# Define your language model
llm = ...

# Overwrite PREFIX and SUFFIX
new_prefix = "Your new prefix text here."
new_suffix = "Your new suffix text here."

# Now, you can call the from_llm_and_tools method
agent = ConversationalChatAgent.from_llm_and_tools(
    llm=llm,
    tools=tools,
    system_message=new_prefix,
    human_message=new_suffix,
)

Please replace "Your new prefix text here." and "Your new suffix text here." with your desired text. Note that the suffix should keep the {format_instructions} placeholder that the create_prompt method fills in.

This will overwrite the default PREFIX and SUFFIX of the ConversationalChatAgent class. To overwrite FORMAT_INSTRUCTIONS, subclass ConvoOutputParser, override its get_format_instructions() method, and pass an instance as the output_parser argument.

In the LangChain framework, the PREFIX, SUFFIX, and FORMAT_INSTRUCTIONS in the CHAT_CONVERSATIONAL_REACT_DESCRIPTION agent type serve the following roles:

  • PREFIX: This is a string that briefly describes the Assistant, introducing it and explaining its capabilities. It is prepended to the conversation to set the context, and it is used as the system message in the create_prompt method.

  • FORMAT_INSTRUCTIONS: This is a string that tells the model how to format its responses. It explains the two possible formats: one for when the Assistant wants to use a tool, and another for when it responds directly to the human. It is used in the create_prompt method to build the human message.

  • SUFFIX: This is a string that provides additional context: it lists the tools the Assistant can use to look up information and reminds the model to respond with a markdown code snippet of a JSON blob with a single action. It is appended to the conversation, and it is used as the human message in the create_prompt method.

I hope this helps! Let me know if you have any other questions.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


If you want to continue the conversation, start your reply with @dosu-bot.

@guidorietbroek

We've created a subclass of the ConvoOutputParser; that way it is possible to override the get_format_instructions() method to load your custom format instructions.
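A minimal sketch of that subclass approach. The base class here is a hypothetical stand-in for langchain's ConvoOutputParser (which lives in langchain.agents.conversational_chat.output_parser and returns a module-level FORMAT_INSTRUCTIONS constant); all names below are illustrative, not the library's actual code:

```python
# Hypothetical stand-in for langchain's ConvoOutputParser, which
# returns a module-level FORMAT_INSTRUCTIONS constant as-is.
FORMAT_INSTRUCTIONS = "default format instructions"

class ConvoOutputParser:
    """Stand-in base parser: format instructions are a module constant."""

    def get_format_instructions(self) -> str:
        return FORMAT_INSTRUCTIONS

# The subclass overrides get_format_instructions() so the agent's
# prompt is built with custom instructions instead of the constant.
CUSTOM_FORMAT_INSTRUCTIONS = "my custom format instructions"

class CustomConvoOutputParser(ConvoOutputParser):
    """Subclass that loads custom format instructions."""

    def get_format_instructions(self) -> str:
        return CUSTOM_FORMAT_INSTRUCTIONS

print(CustomConvoOutputParser().get_format_instructions())
# prints: my custom format instructions
```

With the real library, an instance of such a subclass would presumably be handed to the agent via agent_kwargs={"output_parser": CustomConvoOutputParser()} when calling initialize_agent.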

@xiaoyaoyang

Same question. It looks like you have to define a custom agent; I'm wondering if there is a way to define it similarly to AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION.

@kamalkech

@spartan-minhbui format_instructions is not working; the agent does not take this property into consideration.

@hdnh2006
Contributor

hdnh2006 commented Feb 2, 2024

We've created a subclass of the ConvoOutputParser and that way it is possible to edit the get_format_instructions() method to load your custom format instructions.

Yes, but you cannot set your own format instructions without defining a new class, because a module-level variable is imported and returned as-is, as done here:

    def get_format_instructions(self) -> str:
        """Returns formatting instructions for the given output parser."""
        return FORMAT_INSTRUCTIONS

One way is to make this parameter settable in the init, as is done for other parameters in agents and other functions:

# Define a class that parses output for conversational agents
class ConvoOutputParser(AgentOutputParser):
    """Output parser for the conversational agent."""
    format_instructions: str = FORMAT_INSTRUCTIONS

And another way to do it directly in the agent, in the create_prompt function:

    def create_prompt(
        cls,
        tools: Sequence[BaseTool],
        system_message: str = PREFIX,
        human_message: str = SUFFIX,
        input_variables: Optional[List[str]] = None,
        output_parser: Optional[BaseOutputParser] = None,
        format_instructions: Optional[str] = None
    ) -> BasePromptTemplate:
... # continues...

        format_instructions = human_message.format(
            format_instructions=_output_parser.get_format_instructions() if format_instructions is None else format_instructions
        )

In particular, the second option will allow you to define the format_instructions template when you define the agent, just as the other parameters are already available through agent_kwargs:

agent = initialize_agent(
    agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    #agent = AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    tools = tools,
    #tools = load_knowledge_tools(),
    llm = llm_agent,
    verbose = True,
    max_iterations = 3,
    early_stopping_method = 'generate',
    memory = b_w_memory,
    handle_parsing_errors = True,
    agent_kwargs={
        'system_message':PREFIX,
        'human_message':SUFFIX,
        'template_tool_response':TEMPLATE_TOOL_RESPONSE,
        }
)

Maybe a combination of both solutions would be great.

I'll try a PR. Any thoughts on this @guidorietbroek ?

@hdnh2006
Contributor

hdnh2006 commented Feb 2, 2024

format_instructions=custom_format_instructions

This doesn't work: if you pass format_instructions while initializing the agent, the template remains the same:


FORMAT_INSTRUCTIONS = "These are my new format_instructions"

agent = initialize_agent(
    agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools = tools,
    llm = llm_agent,
    verbose = True,
    max_iterations = 3,
    early_stopping_method = 'generate',
    memory = b_w_memory,
    handle_parsing_errors = True,
    format_instructions = FORMAT_INSTRUCTIONS
)

print(agent.agent.output_parser.get_format_instructions())
RESPONSE FORMAT INSTRUCTIONS
----------------------------

When responding to me, please output a response in one of two formats:

**Option 1:**
Use this if you want the human to use a tool.
Markdown code snippet formatted in the following schema:

json
{{{{
    "action": string, \\ The action to take. Must be one of {tool_names}
    "action_input": string \\ The input to the action
}}}}


**Option #2:**
Use this if you want to respond directly to the human. Markdown code snippet formatted in the following schema:

json
{{{{
    "action": "Final Answer",
    "action_input": string \\ You should put what you want to return to use here
}}}}

baskaryan added a commit that referenced this issue Feb 6, 2024
…able (#16945)

**Description:**
With this modification, users can customize the `FORMAT_INSTRUCTIONS`
template, allowing them to create their own prompts

As described in
[this](#10721) issue,
the `FORMAT_INSTRUCTIONS` is not customizable for the output parser
unless you create your own `ConvoOutputParser` class. To avoid this, a
`format_instructions` variable was added that users can customize
easily after initializing the agent.

For example:
```
agent = initialize_agent(
    agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools = tools,
    llm = llm_agent,
    verbose = True,
    max_iterations = 3,
    early_stopping_method = 'generate',
    memory = b_w_memory,
    handle_parsing_errors = True,
    agent_kwargs={
        'system_message':PREFIX,
        'human_message':SUFFIX,
        'template_tool_response':TEMPLATE_TOOL_RESPONSE,
        }
)
agent.agent.output_parser.format_instructions = "MY CUSTOM FORMAT INSTRUCTIONS"
print(agent.agent.output_parser.get_format_instructions())
MY CUSTOM FORMAT INSTRUCTIONS
```

Other parameters like `system_message`, `human_message`, or
`template_tool_response` are already customizable and with this PR, the
last parameter `FORMAT_INSTRUCTIONS` in
`langchain.agents.conversational_chat.prompt` can be modified.


**Issue:**
#10721

**Dependencies:**
No new dependencies required for this change

**Twitter handle:**
My GitHub user is enough. Thanks.

I hope you accept my PR.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
hwchase17 pushed a commit that referenced this issue Feb 7, 2024
…custom `FORMAT_INSTRUCTIONS` (#17168)

- **Description:** 
This PR standardizes the `output_parser.py` file across all agent types
to ensure a uniform parsing mechanism is implemented. It introduces a
cohesive structure and common interface for output parsing, facilitating
easier modifications and extensions by users. The standardized approach
enhances maintainability and scalability of the codebase by providing a
consistent pattern for output parsing, which can be easily understood
and utilized across different agent types.

This PR builds upon the foundation set by a previously merged PR, which
focused exclusively on standardizing the `output_parser.py` for the
`conversational_agent` ([PR
#16945](#16945)). With
this new update, I extend the standardization efforts to encompass
`output_parser.py` files across all agent types. This enhancement not
only unifies the parsing mechanism across the board but also introduces
the flexibility for users to incorporate custom `FORMAT_INSTRUCTIONS`.

  - **Issue:** 
#10721
#4044

  - **Dependencies:**
No new dependencies required for this change

  - **Twitter handle:**
My GitHub user is enough. Thanks.

I hope you accept my PR.
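The field-based pattern these PRs introduce can be sketched standalone. This is a simplified stand-in, not the actual langchain implementation (which exposes the field on a pydantic model); the class and constant names are assumptions for illustration:

```python
# Simplified stand-in for the pattern from PRs #16945/#17168: the
# parser's format instructions become an instance attribute that
# defaults to the module-level constant but can be reassigned.
FORMAT_INSTRUCTIONS = "RESPONSE FORMAT INSTRUCTIONS (default template)"

class ConvoOutputParser:
    """Output parser whose format instructions are customizable."""

    def __init__(self, format_instructions: str = FORMAT_INSTRUCTIONS):
        self.format_instructions = format_instructions

    def get_format_instructions(self) -> str:
        # Return the instance value instead of the module constant,
        # so reassigning the attribute changes the prompt template.
        return self.format_instructions

parser = ConvoOutputParser()
parser.format_instructions = "MY CUSTOM FORMAT INSTRUCTIONS"
print(parser.get_format_instructions())
# prints: MY CUSTOM FORMAT INSTRUCTIONS
```

This mirrors the `agent.agent.output_parser.format_instructions = ...` usage shown in the PR description above: because the getter reads an instance attribute, no subclass is needed.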
@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label May 3, 2024
@dosubot dosubot bot closed this as not planned May 10, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label May 10, 2024