
[Bug] Large messages sent to o1-preview are flagged as violating the usage policy #5554

Open
johnnymo87 opened this issue Jan 22, 2025 · 4 comments
Labels
🐛 Bug Something isn't working | 缺陷

Comments

@johnnymo87 (Contributor) commented Jan 22, 2025

📦 Environment

Vercel

📌 Version

v1.47.15

💻 Operating System

macOS

🌐 Browser

Firefox

🐛 Bug Description

Hello, I've been using lobe-chat since April 2024, and it's amazing; thank you very much for all the work you've put into it.

When o1-preview came out in September 2024, I noticed an issue with using it via lobe-chat: messages over a certain length are rejected by OpenAI as violating their usage policy. Here's a screenshot of the problem in action.

[Screenshot showing the error response below]
{
  "message": "Invalid prompt: your prompt was flagged as potentially violating our usage policy. Please try again with a different prompt: https://platform.openai.com/docs/guides/reasoning#advice-on-prompting",
  "provider": "openai",
  "errorType": "ProviderBizError"
}

Sorry, I don't know exactly how long a message needs to be to trigger this problem; somewhere in the 2.5k-5k token range, I think.

I have a workaround, a personal CLI for sending one-shot messages to o1-preview, which relies on the ruby-openai gem to do the heavy lifting.

When I hit this problem in lobe-chat, I switch to that CLI instead. This tells me there's no actual usage policy violation occurring, because o1-preview responds fine when I send the same message that way. I also don't see any discussion of this problem in the ruby-openai gem's GitHub issues.
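
For reference, the CLI boils down to a single chat-completions call. A minimal TypeScript equivalent is sketched below (my actual tool is Ruby, and the prompt here is a placeholder; this assumes the official openai npm package and an OPENAI_API_KEY in the environment):

    // One-shot call to o1-preview, equivalent to what my CLI does.
    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const response = await client.chat.completions.create({
      model: "o1-preview",
      // o1-preview takes a plain user message; no system role or sampling tweaks
      messages: [{ role: "user", content: "<the same ~5.5k-token message>" }],
    });

    console.log(response.choices[0].message.content);

If the same text succeeds here but fails through lobe-chat, the difference has to be in what lobe-chat adds to or changes in the request.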

I wanted to raise this issue here, in case other people have noticed it.

📷 Recurrence Steps

  1. Get a certain number of tokens by downloading the complete works of Shakespeare and then copying the first 500 lines to your clipboard (assumes macOS; this yields ~5.5k tokens, which you can verify with the sketch after these steps).
    wget https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt
    head -n 500 t8.shakespeare.txt | pbcopy
    
  2. Paste into a new chat with o1-preview, send the message.
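
If you want to check the token count before pasting, here is a quick sketch, assuming the gpt-tokenizer npm package (its default encoding approximates, but may not exactly match, o1-preview's tokenizer):

    // Rough token count for the text produced in step 1.
    import { readFileSync } from "node:fs";
    import { encode } from "gpt-tokenizer";

    // e.g. head -n 500 t8.shakespeare.txt > first500.txt
    const text = readFileSync("first500.txt", "utf8");
    console.log(`${encode(text).length} tokens`);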

🚦 Expected Behavior

I would expect o1-preview to accept messages from lobe-chat if it accepts the same messages from my personal CLI.

📝 Additional Information

Details about my lobe-chat instance.

  • It is deployed to Vercel, with PostgreSQL, S3, and Clerk.
  • It is "server-side", i.e. requests to AI service providers don't happen in the browser; they happen on the server.
    • I vaguely recall that things differ between calling from the browser versus calling from the server, because prior to upgrading to server-side calls, my browser-based calls to o1-preview would time out.

Details about my o1-preview assistant.

  • Message threshold: 2.
  • Limit History Message Count: disabled.
johnnymo87 added the 🐛 Bug Something isn't working | 缺陷 label Jan 22, 2025
github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Jan 22, 2025
@lobehubbot (Member)

👀 @johnnymo87

Thank you for raising an issue. We will look into the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


dosubot bot commented Jan 22, 2025

Hey @johnnymo87, I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this issue together!

The issue you're experiencing with messages being flagged as violating OpenAI's usage policy when using o1-preview in lobe-chat might be related to how messages are processed or altered before being sent to OpenAI. In lobe-chat, there is middleware or server-side logic that can modify messages, potentially affecting their content or length [1].

Here are a few things to consider:

  1. Message Processing: The processMessages function in lobe-chat can alter messages by adding system roles or handling content types, which might inadvertently change the message in a way that triggers OpenAI's usage policy [1].

  2. Token Calculation: Ensure that the token count is accurately calculated before sending messages. Miscalculations could lead to messages being longer than expected, potentially triggering policy violations [2][3].

  3. Differences in Request Handling: There might be differences in how requests are handled between your personal CLI and lobe-chat, such as headers or payload structure, which could affect how OpenAI interprets the messages. Unfortunately, specific differences in headers or authentication methods between lobe-chat and your CLI are not documented in the available information.

To troubleshoot further, you might want to:

  • Review the processMessages function to see if any modifications could be causing the issue.
  • Compare the request payloads and headers between lobe-chat and your CLI to identify any discrepancies (one way to capture them is sketched below this list).
  • Check if there are any logging or debugging options in lobe-chat that could provide more insights into why messages are flagged.
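
One low-tech way to capture those payloads is to point each client's OpenAI base URL (if the client lets you override it) at a tiny local logger and diff what they actually send. A minimal sketch of such a diagnostic tool, not part of lobe-chat:

    // Logs every incoming request's method, URL, headers, and body.
    import { createServer } from "node:http";

    createServer((req, res) => {
      const chunks: Buffer[] = [];
      req.on("data", (chunk: Buffer) => chunks.push(chunk));
      req.on("end", () => {
        console.log(`\n${req.method} ${req.url}`);
        console.log(JSON.stringify(req.headers, null, 2));
        console.log(Buffer.concat(chunks).toString("utf8"));
        // Stub response; we only care about inspecting the request.
        res.writeHead(200, { "content-type": "application/json" });
        res.end(JSON.stringify({ ok: true }));
      });
    }).listen(8080, () => console.log("request logger on :8080"));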

If the issue persists, consider adjusting the message content or length to see if it affects the outcome.


dosubot bot commented Jan 22, 2025

The issue you're experiencing with messages being flagged as violating the usage policy when using the o1-preview model in lobe-chat might be related to how the payload is being processed or the headers used in the API requests.

  1. Payload Processing: Lobe-chat modifies the message content before sending it to OpenAI's o1-preview model using the pruneO1Payload function. This function sets frequency_penalty and presence_penalty to 0, changes the role of messages from 'system' to 'user' if applicable, and sets temperature and top_p to 1 [1] (see the sketch after this list). This processing might differ from your CLI setup, potentially affecting how OpenAI interprets the message.

  2. Headers and Authentication: Lobe-chat uses specific headers in its API requests, such as X-lobe-chat-access-code and X-openai-api-key, which might differ from those used in your CLI [2][3]. Differences in headers or authentication methods could lead to different handling of requests by OpenAI.

  3. Token Count Calculation: Ensure that the token count calculation in lobe-chat is accurate. Lobe-chat uses the useTokenCount hook to calculate the token count for different components of the message [4]. If there's a discrepancy in how tokens are counted compared to your CLI, it might lead to messages being flagged.
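
Based on that description, a pruneO1Payload-style transform would look roughly like the sketch below (paraphrased from point 1; the real implementation lives in lobe-chat's source and may differ):

    // Sketch of the o1-preview payload pruning described in point 1
    // (paraphrased; not lobe-chat's actual code).
    interface ChatMessage {
      role: "system" | "user" | "assistant";
      content: string;
    }

    interface ChatPayload {
      model: string;
      messages: ChatMessage[];
      temperature?: number;
      top_p?: number;
      frequency_penalty?: number;
      presence_penalty?: number;
    }

    const pruneO1Payload = (payload: ChatPayload): ChatPayload => ({
      ...payload,
      frequency_penalty: 0,
      presence_penalty: 0,
      temperature: 1,
      top_p: 1,
      // per the description above, system messages are downgraded to user
      messages: payload.messages.map((m) =>
        m.role === "system" ? { ...m, role: "user" as const } : m
      ),
    });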

To troubleshoot further, you might want to compare the exact payload and headers sent by lobe-chat and your CLI to identify any differences. Additionally, reviewing the error handling mechanisms in lobe-chat could provide insights into how these errors are being processed [5].


johnnymo87 changed the title from "[Bug] Larges messages sent to o1-preview are flagged as violating the usage policy." to "[Bug] Large messages sent to o1-preview are flagged as violating the usage policy" Jan 22, 2025
@syntax-error-1

Same issue, any update?
