Completion vs. Chat - clarification request #96
Comments
+1.
In theory, the same guardrails configuration should work with both completion and chat models. The completion mode works better (e.g. […]). That being said, can you share some quick examples here? I'd like to debug this a bit with you if you have time. Just the config + the dialog is enough. And just in case you're not doing this already, you should use […]. Thanks!
Hi @drazvan! In the meantime, here is an example of Nemo breaking out of a flow after generating user intent:
Just run it with gpt-3.5-turbo and then switch to text-davinci-003. Regarding the context message and register_prompt_context, I've tested these in an isolated environment and they seem to work just fine. I need to conduct further testing on the issue within our infrastructure; I'll keep you posted if anything comes up. We are familiar with the debugging options and use them quite often, but our backend includes some other core-logic components between Nemo and the client, so in some cases we simply can't employ that approach. :) Thank you again for your help!
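The example file referenced above is not preserved in this thread. A minimal Colang flow of the kind being described (the intent and message names below are illustrative, not the author's) might look like:

```
define user ask capabilities
  "What can you do?"
  "What are you able to help with?"

define bot inform capabilities
  "I can answer questions about our internal documentation."

define flow
  user ask capabilities
  bot inform capabilities
```

With a completion model such as text-davinci-003, the predefined "inform capabilities" message is returned verbatim; the behavior reported here is that a chat model matches the user intent but then lets the LLM generate a free-form reply instead of the canned bot message.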
+1, I'm having the same issue. |
Hi @drazvan, hope you are doing well. […] Also in that discussion, you told us about the state feature that you are working on. Thank you again for all your work; let us know if there is anything you need from us or if we can help in any way. Bar.
Hi, does setting […]?
I get 2x "Parameter temperature does not exist for OpenAIChat" when trying to use chat models like gpt-4 (openai==0.28.1). My config.yaml is: […]
This returns: […]
I'm trying to use guardrails to prevent the bot from discussing politics, as it's a common default objective. Solved by bypassing the NeMo yaml by adding: […]
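The snippet the commenter added is not preserved. A common way to bypass the `models:` section of the NeMo yaml (a sketch, assuming the `nemoguardrails` and `langchain` packages; this is not necessarily the commenter's exact code) is to construct the LLM yourself and pass it to LLMRails:

```python
from langchain.chat_models import ChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

# Load the rails (flows, prompts) from the config directory as usual.
config = RailsConfig.from_path("./config")

# Build the chat model directly so temperature is set on the LLM itself,
# instead of being forwarded from config.yml parameters.
llm = ChatOpenAI(model_name="gpt-4", temperature=0.0)

# The pre-built LLM is used in place of the one declared in the yaml.
app = LLMRails(config, llm=llm)
```

This sidesteps the parameter-forwarding path that emits the "Parameter temperature does not exist for OpenAIChat" warning, since the temperature is applied when the LangChain model is instantiated.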
@Johnnil: can you check again with […]?
Hey,
I wanted to start off by saying a huge thanks for the awesome work your team has done, it is really an amazing project.
I've been tinkering around with Nemo, and I have a question I'm hoping you can help me with.
During my usage of Nemo with chat models (such as GPT-3.5/4), I have encountered an issue where the process deviates from the intended flow during bot message generation. Although the user intent is correctly identified, the bot message is then generated freely by the language model (LLM) as a more general response instead of adhering to the predefined conversational flow.
When using Nemo with a completion model (specifically davinci-003), I've observed that it faithfully follows the conversation flows. However, it appears to lack the creativity and capabilities of the language model when generating bot messages.
Could you please clarify the distinctions between running Nemo with a chat model vs. a completion model? Additionally, if it's not too much trouble, I would appreciate guidance on the necessary adjustments when using each of these models. (Reading the documentation, I couldn't figure out how to make both work properly.)
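For context, the model a NeMo Guardrails app uses is chosen in config.yml, so switching between the two modes being compared here is a one-line change (the values below are illustrative):

```yaml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo        # chat model
    # model: text-davinci-003   # completion model, to compare behavior
```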
Getting some insights on this would be a game-changer for me.
Thanks again for the hard work,
Ari