Replies: 22 comments 8 replies
-
Sorry, not exactly what you asked for, but I've had luck using a dumping ground for wayward outputs.
I've found that this gives agents a chance to reprocess the action and try again. Generally, I've seen the LLM parsing error occur with custom tools where I haven't quite got the prompt down. This somewhat hacky method has reduced my parsing issues significantly. Although, sometimes the agent just does what it wants 😄
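One way to implement that "dumping ground" idea, sketched without LangChain (the function names, the ReAct-style regex, and the `dumping_ground` tool name are my own assumptions here, not the library's actual parser):

```python
import re

def parse_agent_output(text: str) -> dict:
    """Parse a ReAct-style completion into an action dict.

    Raises ValueError when the expected 'Action:' / 'Action Input:'
    pattern is missing, mimicking an OutputParserException.
    """
    match = re.search(r"Action:\s*(.+?)\nAction Input:\s*(.+)", text, re.DOTALL)
    if not match:
        raise ValueError(f"Could not parse LLM output: {text!r}")
    return {"action": match.group(1).strip(), "input": match.group(2).strip()}

def parse_with_dumping_ground(text: str, dump: list) -> dict:
    """Route unparseable output to a 'dumping ground' tool instead of failing,
    so the agent gets the error fed back and can try again."""
    try:
        return parse_agent_output(text)
    except ValueError:
        dump.append(text)  # keep the wayward output for later inspection
        return {"action": "dumping_ground",
                "input": "Your last reply was not in Action/Action Input format."}
```

The fallback action's observation can then be a corrective message, which in practice often nudges the model back into the expected format on the next turn.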
-
I've seen a number of cases of this. Sorry, don't have a small repro case at the moment.
-
When using a StructuredAgent with a PythonREPL tool, the agent tends to stop producing text before taking an action or providing an answer.
Output
Thought: I can generate a random fact using the random module in Python.
Observation: Did you know that the largest ocean on Earth is the Pacific Ocean?
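The hallucinated `Observation:` in that transcript is what ReAct stop sequences normally prevent: the model is supposed to stop generating before it invents a tool result. A minimal sketch of trimming the completion at the first self-generated `Observation:` (the helper name is hypothetical; real agents typically pass `Observation:` as a stop token to the LLM instead):

```python
def truncate_at_observation(completion: str) -> str:
    """Keep only the text before a hallucinated 'Observation:' line,
    mimicking a stop sequence for backends that ignore stop tokens."""
    head, _sep, _tail = completion.partition("\nObservation:")
    return head

llm_output = (
    "Thought: I can generate a random fact using the random module in Python.\n"
    "Observation: Did you know that the largest ocean on Earth is the Pacific Ocean?"
)
print(truncate_at_observation(llm_output))  # only the Thought line survives
```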
-
I've noticed I get the Could not parse LLM output error a ton using the canned CSV Agent toolkit when I pass in OpenAI model_name='gpt-3.5-turbo' or 'gpt-4'. I do not get it when I just let it default to (I believe) davinci.
-
#langchain-0.0.169
from langchain.agents import load_tools

Entering new AgentExecutor chain...
Traceback (most recent call last):
-
langchain-0.0.168
import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain
from langchain.agents import load_tools, initialize_agent

llm = GooseAI(model_name='gpt-neo-20b')
tools_final = load_tools(['google-serper', "llm-math", "python_repl"], llm=llm)
agent = initialize_agent(tools_final, llm, agent="zero-shot-react-description", verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
agent.run(question)
-
#langchain-0.0.170 => exception: output parser error
-
The LLM's output does not follow the Agent's template. There needs to be a way to instruct the LLM to output a specific format.
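A rough illustration of that idea, using a hand-rolled format-instruction string plus a validator rather than LangChain's actual format-instructions machinery (all names here are hypothetical):

```python
import re

# Instructions appended to the prompt so the model knows the expected layout.
FORMAT_INSTRUCTIONS = """Use the following format exactly:
Thought: what you are thinking
Action: the tool to use
Action Input: the input to the tool"""

def follows_format(completion: str) -> bool:
    """Check that a completion matches the Thought/Action/Action Input
    layout the agent's parser expects."""
    return re.search(r"Thought:.*\nAction:.*\nAction Input:.*", completion) is not None
```

Validating before parsing lets you re-prompt (with the format instructions repeated) instead of raising an exception on the first malformed reply.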
-
#langchain 0.0.170
OutputParserException
-
Langchain version:
Code Snippet:
-
I'm getting a parsing exception with a similar but not exact message; could it be related? Going through the agents tutorial, it seems to arrive at an answer but then fails while interpreting its own output. Sorry if I've misunderstood. Most recent version of langchain. The code I used:
The output with the error:
-
Hi @mendhak |
-
Hey @ps97-kundankumar, I didn't find a proper solution; I switched to the AzureChatOpenAI class instead, and it was easier to work with.
-
I'm getting "Could not parse LLM output" as well.
Root cause: the agent uses triple backticks
In my
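A sketch of one workaround for the triple-backtick case: strip Markdown code fences from the completion before handing it to the output parser (the helper name is my own; this is not LangChain's built-in behaviour):

```python
import re

def strip_code_fences(text: str) -> str:
    """Remove Markdown ``` fences (with an optional language tag) so the
    inner JSON or text can be passed to the agent's output parser."""
    return re.sub(r"```[a-zA-Z]*\n?", "", text).strip()
```

This is useful with chat models that were tuned to wrap structured answers in fenced blocks even when the prompt asks for bare JSON.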
-
version: 0.0.242
llm = AzureChatOpenAI(
-
Encountered the issue in this scenario: when querying the same questions that already existed earlier in the message history.
-
Facing the same issue (output parser exception) with LangChain 0.0.246 with AzureChatOpenAI (GPT-3.5-turbo, 0301/0613). We mostly use LangChain for statistical and mathematical operations with csv_agent/pandas_dataframe_agent, so most of the time it calls the python_repl_ast tool while processing the request.
-
Facing the issue with every GPT4All language model.
Error:
raise OutputParserException(
How I initialized the agent:
from langchain.agents import create_pandas_dataframe_agent
llm = GPT4All(model="ggml-stable-vicuna-13B.q4_2.bin", max_tokens=2048)
agent = create_pandas_dataframe_agent(
print(agent.run("How many entries have more than 5000 employees?"))
I already tried to implement a custom regex parser in output_parser.py, because I noticed that the action_input is missing in the response, but it didn't help.
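For the missing-`action_input` case, one lenient stand-alone parser sketch defaults the input to an empty string instead of raising (this is illustrative only, not the library's `output_parser.py`):

```python
import re

def lenient_parse(text: str):
    """Parse 'Action:' / 'Action Input:' lines, but tolerate a missing
    'Action Input:' by defaulting it to an empty string rather than
    raising an OutputParserException."""
    action_match = re.search(r"Action:\s*(.+)", text)
    if action_match is None:
        raise ValueError(f"Could not parse LLM output: {text!r}")
    input_match = re.search(r"Action Input:\s*(.+)", text)
    action = action_match.group(1).strip()
    action_input = input_match.group(1).strip() if input_match else ""
    return action, action_input
```

Whether an empty input is safe depends on the tool; for python_repl_ast it usually just produces an empty execution rather than a crash.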
-
For me I am getting parsing error if I initialized an agent as:
But no error if I use ConversationalChatAgent instead:
-
FWIW, I encountered this issue frequently in my agents when using I hope this helps others!
-
I'm still getting the error.
The error I got:
-
We've heard a lot of issues around parsing LLM output for agents. We want to fix this. Step one in this is gathering a good dataset to benchmark against, and we want your help with that!
Specifically, we need examples of where current parsing fails. If you've run into a parsing error, please add a snippet of code to reproduce that error below. This should involve:
Thanks in advance! We look forward to trying to solve these issues.