How to view and debug logs - no LLM response shown and test case failure #2004
Hey @kevinmessiaen - do you have an idea of what could be wrong?
Hello @ClarkKentIsSuperman, it seems that your LLM client (the one used for generating adversarial inputs) returned output in an invalid JSON format: it contains a phrase after the dict.
This usually happens when the LLM used is not a powerful or recent model. Could you share which LLM client and model you are using as the default one? By the way, we recommend using GPT-4o whenever possible.
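For context, this failure mode (a valid dict followed by extra chatter) can be salvaged on the caller side with the standard library's `json.JSONDecoder.raw_decode`, which parses the first JSON value and ignores whatever trails it. A minimal sketch; `extract_json` is a hypothetical helper, not part of Giskard:

```python
import json

def extract_json(raw: str) -> dict:
    """Parse the first JSON object in `raw`, ignoring any trailing text
    (e.g. a model appending an explanation after the dict)."""
    decoder = json.JSONDecoder()
    start = raw.index("{")  # locate the opening brace of the dict
    obj, _end = decoder.raw_decode(raw, start)
    return obj

# A malformed LLM reply: a valid dict followed by a stray phrase
reply = '{"input": "test"} Hope this helps!'
print(extract_json(reply))  # → {'input': 'test'}
```

Switching to a stronger model is still the more reliable fix, since this only patches over one symptom of poor JSON compliance.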
Also, I'm fine using Mistral - would I add the
I'll close it and reach out directly if I have more issues.
Nevermind, I see it here:

```python
oc = OpenAIClient(model="mistral:latest", client=_client, json_mode=True)
```
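For anyone landing here, a fuller version of that setup might look like the sketch below. It assumes Ollama serving `mistral:latest` on its default OpenAI-compatible endpoint; the `base_url` and `set_default_client` wiring are assumptions to adapt to your deployment, not a verified recipe:

```python
from openai import OpenAI
from giskard.llm import set_default_client
from giskard.llm.client.openai import OpenAIClient

# Assumption: Ollama exposes an OpenAI-compatible API at this URL;
# the api_key is required by the client but ignored by Ollama.
_client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# json_mode=True asks the model for JSON-only responses, which helps
# avoid the "phrase after the dict" parsing failure discussed above.
oc = OpenAIClient(model="mistral:latest", client=_client, json_mode=True)
set_default_client(oc)
```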