Search Refinement #393
Conversation
…hecking input query with llm model
@Eyobyb . I'm good with these changes. Please also take a look at them. |
Have you encountered hallucinations with this? It adds unwanted details and, rather than rearranging the documents, it elaborates on them. Take this example: it returns all of them in the same sequence, but elaborates on each document to make it fit the question. |
This is an interesting observation. In general, if the answer is relevant, I'm fine with the LLM elaborating on it. But we should probably fix the prompt so that it filters out documents that are not relevant (e.g. returns an empty string). For now, it seems the LLM gives a description of why the document is not relevant instead of giving an empty string. The prompt can also be configured by the user, but we should try to provide a solid baseline. |
Also, note that this feature is about extracting relevant information from a document, rather than reranking documents.
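The filtering step suggested above could be sketched as follows; `filter_refined_answers` is a hypothetical helper name, and it assumes the prompt has been fixed so the LLM returns an empty string for irrelevant documents:

```python
def filter_refined_answers(refined: list[str]) -> list[str]:
    """Drop documents the LLM marked as irrelevant, i.e. empty or whitespace-only outputs."""
    return [answer for answer in refined if answer.strip()]


# Only non-empty refinements survive the filter.
print(filter_refined_answers(["Paris is the capital of France.", "", "   "]))
```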
…educed hallucination
Hi @Eyobyb thank you so much for your comment. Improved prompting to reduce hallucinations and added a function to remove irrelevant answers. |
One possible implementation for the prompt could be to rank each item as not relevant (return an empty string), partially relevant (extract only the relevant sentences as they are), or directly relevant (return the answer as is). This semi chain-of-thought will likely help the model do a better job in the refinement. Plus, we could use the tag to filter out anything that's not relevant, for example.
|
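The three-tier ranking above could be sketched roughly like this; the prompt wording and the `parse_refinement` helper are illustrative assumptions, not the actual implementation in this PR:

```python
# Hypothetical prompt: the model tags each document, then gives its response.
REFINE_PROMPT = """Given the question and the document below, classify the document as one of:
- NOT_RELEVANT: respond with an empty string.
- PARTIALLY_RELEVANT: respond with only the relevant sentences, copied verbatim.
- DIRECTLY_RELEVANT: respond with the document unchanged.

Write the tag on the first line, then the response on the following lines.

Question: {question}
Document: {document}
"""


def parse_refinement(llm_output: str) -> str:
    """Use the tag on the first line to filter out anything that's not relevant."""
    tag, _, body = llm_output.partition("\n")
    if tag.strip() == "NOT_RELEVANT":
        return ""
    return body.strip()
```

The tag line doubles as the filter signal, so irrelevant documents reduce to empty strings that can be dropped downstream.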
@Bubbletea98 Let's move ahead with the new refinery so we can move this PR forward! |
…t will only come from original input sentences: 1. tokenize and index sentences; 2. let the LLM pick the relevant sentences' index numbers. Updated the test case in refinement.py as well.
Hi Team, updated the refinement function:
Please let me know if there are any additional conditions we need to include. :) |
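The index-based approach described in the commit above guarantees the output contains only original input sentences. A minimal sketch, assuming a simple regex sentence splitter and with `refine_by_index` standing in for the LLM step (the function names are hypothetical):

```python
import re


def tokenize_sentences(document: str) -> list[str]:
    """Step 1: split a document into indexed sentences."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]


def refine_by_index(document: str, picked: list[int]) -> str:
    """Step 2: given the sentence indices the LLM picked, return only those
    original sentences, so no new text can be hallucinated into the answer."""
    sentences = tokenize_sentences(document)
    return " ".join(sentences[i] for i in picked if 0 <= i < len(sentences))
```

Because the LLM only emits index numbers, any hallucinated content is structurally impossible in the refined output.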
Description
Resolve #363
Added a refinement function that extracts key answers by running an LLM over the input query.
Type of change
Related issues
Checklists
To speed up the review process, please follow these checklists:
Development
- Format and lint checks pass locally (make format && make lint)
- Tests pass locally (make test)
- See the testing guidelines for help on tests, especially those involving web services.
Code review
💔 Thank you for submitting a pull request!