[FIX] fix hf output bug (current output contain user prompt which cause logical error in entity extraction phase) #138
The current `response_text` in `hf_model_if_cache` contains the user prompt, as shown below:

Even though `response_text` is stripped in `operate.py`, which makes the final response look normal, the user prompt is not stripped during the entity extraction phase (when the knowledge graph and vector database are built). As a result, `PROMPTS["entity_extraction"]` from `prompt.py` is treated as part of the assistant response, and the entities from its few-shot examples get extracted even though they never appear in the source text (the pure-text database); they are just example entities.

What this PR does: remove the user prompt from the response, as shown below: