Conversations are not working at the moment #168
Replies: 12 comments 19 replies
-
Thank you for the notice. I just tried them and found that neither the original model nor the other one works any more; it seems that OpenAI shut down those endpoints.
-
It's a little bit weird: no matter which model I use in revChatGPT, I get the same result, and I'm on the latest revChatGPT version.
-
Actually I found a good way to pass the model name to revChatGPT! In your src/responses.py, modify line 18 so the model is read from your config, and in your config.json add a new line with the model name. Whenever you want to update the model in use, just replace it inside config.json. No changes on the revChatGPT side are necessary, as the init of the Chatbot class takes "engine" as a parameter.
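For anyone following along, here is a rough sketch of what that could look like. The import path, the config key names, and the file layout below are my assumptions, not the exact code from src/responses.py:

```python
import json

from revChatGPT.Official import Chatbot  # import path assumed; adjust to the backend you use

# Assumed config.json layout -- key names are placeholders, not necessarily the repo's:
# {
#     "openai_api_key": "sk-...",
#     "engine": "text-davinci-003"
# }
with open("config.json", encoding="utf-8") as f:
    config = json.load(f)

# Pass the model name from config.json instead of hard-coding it; per the comment
# above, Chatbot.__init__ accepts an "engine" parameter.
chatbot = Chatbot(api_key=config["openai_api_key"], engine=config["engine"])
```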
-
I made it work, thanks a lot. I'll update it immediately.
-
Hey @Cellenseres & @Zero6992, I'm running the latest version, 0.8.1, and conversations are not working for me. Is there anything special that should be done when prompting the bot?
-
Am I correct in assuming that switching to the revChatGPT method will bring along the censorship that comes with ChatGPT? I'd like to add conversations to my bot, but not at the cost of censorship.
-
Your understanding is mostly correct. Since version 0.9.1, it sends requests directly to ChatGPT (the website), but it doesn't pass through OpenAI's front-end moderation API. Differences in responses may be due to ChatGPT (the website) having a prepended prompt, but you can use prompt engineering to ignore those restrictions once again.
-
Using version 0.9.2, I get this in the logs: openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=. If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://onboard.openai.com for details, or email support@openai.com if you have any questions. That variable no longer exists in the .env file.
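In case it helps anyone hitting the same error: the openai module looks for the key in openai.api_key or in the OPENAI_API_KEY environment variable, so putting the key back under that name (or setting it in code) should clear the AuthenticationError. A minimal sketch, assuming the key lives in .env under OPENAI_API_KEY (the variable name your bot actually expects may differ):

```python
import os

import openai
from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # reads .env from the working directory into the environment
openai.api_key = os.environ.get("OPENAI_API_KEY")  # or export OPENAI_API_KEY before launching
```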
-
Here's the error: 2023-03-01 06:10:05,583 - discord.app_commands.tree - ERROR - on_error - Ignoring exception in command 'chat' The above exception was the direct cause of the following exception: Traceback (most recent call last):
-
If I run it via python3 main.py, it works perfectly. If I run it via Docker, I get an error.
-
I get this error on 0.9.3: The above exception was the direct cause of the following exception: Traceback (most recent call last):
-
So everyone knows: switching to the GPT-3 model temporarily breaks conversations, as it uses the OpenAI library again instead of revChatGPT, which is currently the only library supporting conversations.
@Zero6992, for everyone who wants to work with conversations, wouldn't it be possible to use revChatGPT again?
I'm not very familiar with either Python or the libraries, but in revChatGPT's Official.py I found this:
```python
ENGINE = os.environ.get("GPT_ENGINE") or "text-chat-davinci-002-20221122"
ENCODER = tiktoken.get_encoding("gpt2")
```
Maybe you can pass your own model to it, since it looks for os.environ.get("GPT_ENGINE")?
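If it really reads GPT_ENGINE from the environment, something like this might work; the model name below is just a placeholder and I haven't tested it:

```python
# Untested sketch: GPT_ENGINE has to be set before revChatGPT is imported,
# because Official.py reads it into the module-level ENGINE at import time.
import os

os.environ["GPT_ENGINE"] = "text-davinci-003"  # placeholder model name

from revChatGPT.Official import Chatbot  # import only after the variable is set
```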