BooleanOutputParser expected output value error #11408
Comments
🤖 Hello, The error you're encountering is due to the output of your language model (LLM) not matching the expected values "YES" or "NO" that the `BooleanOutputParser` requires. Here's the relevant code that's causing the error:

```python
cleaned_text = text.strip()
if cleaned_text.upper() not in (self.true_val.upper(), self.false_val.upper()):
    raise ValueError(
        f"BooleanOutputParser expected output value to either be "
        f"{self.true_val} or {self.false_val}. Received {cleaned_text}."
    )
```

This code checks whether the cleaned, uppercased output of your LLM (`cleaned_text`) is exactly `true_val` ("YES") or `false_val` ("NO"). To resolve this issue, you need to ensure that the output from your LLM is exactly "YES" or "NO". If your LLM produces different outputs, you might need to adjust your prompt or create a custom output parser that can handle your LLM's specific outputs. I hope this helps! If you have any other questions, feel free to ask. Best, Dosu
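To make the failure concrete, here is a minimal sketch assuming LangChain 0.0.308, where `BooleanOutputParser` is importable from `langchain.output_parsers.boolean` and compares the stripped, uppercased text for exact equality:

```python
from langchain.output_parsers.boolean import BooleanOutputParser

parser = BooleanOutputParser()  # defaults: true_val="YES", false_val="NO"

print(parser.parse("  yes "))  # -> True: stripping and uppercasing make this match
print(parser.parse("NO"))      # -> False

# Anything that is not exactly YES/NO after cleanup raises the ValueError above:
parser.parse("Yes, the context is relevant to the question")  # ValueError
```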
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
@ankur287 Looks like the output parser doesn't retry when the LLM output isn't as expected. We should make it more robust by retrying. I can try fixing this issue if that's ok with you.
@agola11 Can you please assign this issue to me? I'm unable to do it myself.
I fixed the issue temporarily by adding a rule: if "YES" is in the output, just return "YES". However, we need this change upstream, as we cannot expect the LLM to always say just yes or no. In my case the model said, 'Yes answer is available in context'.
@ankur287 Do you mind posting in detail how you got around this issue, since LangChain itself hasn't really fixed it? If not, is there an issue tracking this problem? I have found a quick workaround: implementing my own boolean output parser that defaults to YES and checks whether YES/NO appears in the output instead of strict matching. I am happy to make a PR to address this problem.
I posted above how I fixeded it. See my last comment.
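For anyone landing here before an upstream fix, here is a minimal sketch of the workaround described in the comments above (the class name `LenientBooleanOutputParser` is illustrative, not part of LangChain):

```python
from langchain.output_parsers.boolean import BooleanOutputParser


class LenientBooleanOutputParser(BooleanOutputParser):
    """Illustrative subclass: substring matching instead of strict equality,
    defaulting to YES when neither value is found (per the workaround above)."""

    def parse(self, text: str) -> bool:
        upper = text.strip().upper()
        if self.true_val.upper() in upper:
            return True
        if self.false_val.upper() in upper:
            return False
        return True  # default to YES when the model answers with neither value


parser = LenientBooleanOutputParser()
print(parser.parse("Yes, the answer is available in the context"))  # -> True
```

Note that bare substring matching has its own pitfall: "NO" also matches inside words like "NOT" or "NOW", which is what the later fix in #20064 addresses.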
…responses (#17810)
- **Description:** I encountered this error when I tried to use `LLMChainFilter`. Even if the message differs only slightly, like `Not relevant (NO)`, it results in an error. It has already been reported here: https://github.com/langchain-ai/langchain/issues/. This change hopefully makes the parser more robust.
- **Issue:** #11408
- **Dependencies:** No
- **Twitter handle:** dokatox
…#20064)
- **Description**: fixes `BooleanOutputParser` detecting sub-words ("NOW this is likely (YES)" -> `True`, not `AmbiguousError`)
- **Issue(s)**: fixes #11408 (follow-up to #17810)
- **Dependencies**: None
- **GitHub handle**: @casperdcl
- [x] **Add tests and docs**
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines: https://python.langchain.com/docs/contributing/

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
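The sub-word problem this commit describes amounts to matching YES/NO only as whole words; a rough sketch of that idea (not the exact merged code):

```python
import re


def parse_boolean(text: str, true_val: str = "YES", false_val: str = "NO") -> bool:
    """Match the values as whole words, so 'NOW' no longer counts as 'NO'."""
    hits = {
        m.upper()
        for m in re.findall(
            rf"\b({re.escape(true_val)}|{re.escape(false_val)})\b",
            text,
            flags=re.IGNORECASE,
        )
    }
    if hits == {true_val.upper()}:
        return True
    if hits == {false_val.upper()}:
        return False
    # Neither value found, or both found: the response is ambiguous.
    raise ValueError(f"Ambiguous response: expected {true_val} or {false_val}, got {text!r}")


print(parse_boolean("NOW this is likely (YES)"))  # -> True; 'NOW' is not matched as 'NO'
print(parse_boolean("Not relevant (NO)"))         # -> False; 'Not' is not matched as 'NO'
```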
System Info
Hi, I am using LLMChainFilter.from_llm(llm), but while running it I get this error:
ValueError: BooleanOutputParser expected output value to either be YES or NO. Received Yes, the context is relevant to the question as it provides information about the problem in the.
How do I resolve this error?
Langchain version: 0.0.308
Who can help?
@agola11
Information
Related Components
Reproduction
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor, LLMChainFilter

# SageMakerEndpointModel and faiss_retriever are defined elsewhere in the
# user's setup: a SageMaker-hosted LLM and an existing FAISS retriever.
llm = SageMakerEndpointModel
_filter = LLMChainFilter.from_llm(llm)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=_filter, base_retriever=faiss_retriever
)
compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson?"
)
```
Expected behavior
The call should return the filtered documents instead of raising a ValueError.