Bot Responding with 'Internal Error' When Using OpenAI for llama_guard Model in NeMoGuardrails #693
Unanswered · abhijeet-adarsh asked this question in Q&A · Replies: 1 comment, 2 replies
-
Hi @abhijeet-adarsh! The first parameter for … I'm not sure what you're trying to achieve. Why do you need the …
-
I'm facing an issue where my bot responds with "Internal Error" when using the llama_guard model integrated with the OpenAI API. The error seems to stem from a NoneType object being accessed when trying to invoke the OpenAI API.

Configuration Details

config.yml:

config.py:
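For context, a typical Llama Guard setup in NeMo Guardrails looks roughly like the sketch below, modeled on the project's documented Llama Guard example. The engine, endpoint URL, and model names here are illustrative placeholders, not my exact values:

```yaml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

  # Llama Guard served separately (e.g. via a vLLM OpenAI-compatible server);
  # the endpoint and model name below are placeholders.
  - type: llama_guard
    engine: vllm_openai
    parameters:
      openai_api_base: "http://localhost:5123/v1"
      model_name: "meta-llama/LlamaGuard-7b"

rails:
  input:
    flows:
      - llama guard check input
  output:
    flows:
      - llama guard check output
```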
Problem Description

When I run the bot, I get the following error message, and the bot only responds with "Internal Error":

I was expecting my bot to correctly use the llama_guard model to check the input for unsafe content and respond accordingly, without throwing an internal error.

Request for Assistance
Could you please help me identify what might be causing this issue and how to resolve it?
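My guess at the failure pattern, as a purely hypothetical sketch (this is not NeMo Guardrails' actual internals): if no model of type llama_guard ends up registered, a lookup for it returns None, and the later attempt to invoke it fails with a NoneType error like the one I'm seeing:

```python
# Hypothetical sketch of the suspected failure mode: a model-registry
# lookup that has no "llama_guard" entry returns None, and calling
# the result raises a TypeError mentioning NoneType.
registered_models = {"main": "gpt-3.5-turbo"}  # llama_guard entry missing

llama_guard_llm = registered_models.get("llama_guard")  # -> None

try:
    llama_guard_llm("check this input")  # invoking None fails
except TypeError as exc:
    print(f"NoneType error: {exc}")
```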