[Security Solution] Some OSS LLMs do not respond well and return corrupted output #204261
Labels: bug, Team:Security Generative AI, Team: SecuritySolution, triage_needed
This issue was discovered during testing of this PR:
For some reason, `llama3.2:latest` returns a corrupted answer. Maybe our prompts do not work well with that model? When I send a simple `hello`, it returns a malformed response, which is then converted into `"[object Object]"` in the output result.

Here is an example of such behaviour: https://smith.langchain.com/o/b739bf24-7ba4-4994-b632-65dd677ac74e/projects/p/b9ebe1df-f5ad-4c26-bc57-e5e65994b91e?timeModel=%7B%22duration%22%3A%227d%22%7D&runtab=0&peek=5f3962c7-18fa-46de-9be6-57f56f8760e4
Note: `llama3.1` works well.