Hi, we may need the input sequences for reproduction.
Sorry, it's not convenient to share the original text at this moment. We noticed that this issue specifically occurs when the input text length exceeds approximately 13,000 characters. Interestingly, we haven't observed such behavior with the Qwen-2 model, and other models don't have this issue.
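Since the failure appears correlated with input length, one quick sanity check is whether the prompt is approaching the model's context window. A minimal stdlib-only sketch, under two stated assumptions: roughly 1.5 characters per token for mixed Chinese/English text (the real ratio depends on the Qwen2.5 tokenizer, so this is only a heuristic), and a 32768-token context limit (qwen2.5-7b-instruct's advertised length; verify against your serving config, e.g. vLLM's `max_model_len`):

```python
# Heuristic check: is a prompt likely to exceed the context window?
# Assumption: ~1.5 characters per token on average for mixed CJK/English
# text. For an exact count, use the model's own tokenizer instead.

CHARS_PER_TOKEN = 1.5   # rough heuristic, not the real tokenizer ratio
CONTEXT_LIMIT = 32768   # assumed qwen2.5-7b-instruct limit; check your config

def estimated_tokens(text: str) -> int:
    """Rough token estimate from character count."""
    return int(len(text) / CHARS_PER_TOKEN)

def near_context_limit(prompt: str, reserved_for_output: int = 2048) -> bool:
    """True if the prompt plus a reserved output budget may not fit."""
    return estimated_tokens(prompt) + reserved_for_output > CONTEXT_LIMIT
```

Note that by this estimate a 13,000-character input is only on the order of 9,000 tokens, well under a 32k limit, so simple context overflow would not obviously explain the corruption; counting with the actual tokenizer would settle that.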
This issue has been automatically marked as inactive due to lack of recent activity. Should you believe it remains unresolved and warrants attention, kindly leave a comment on this thread.
Model Series
Qwen2.5
What are the models used?
qwen2.5-7b-instruct
What is the scenario where the problem happened?
While generating JSON with vLLM inference, the first half of the output was normal JSON, but the second half was garbled.
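To help investigation, it may be useful to pinpoint exactly where the generated output stops being valid JSON and where garbled characters begin. A minimal stdlib-only sketch (the helper names and the choice of "garbled" character classes are my own assumptions, not part of the original report):

```python
import json
import unicodedata

def locate_json_break(text: str) -> int:
    """Index where JSON parsing of the output fails, or -1 if it parses cleanly.

    raw_decode() parses the leading JSON value; on failure, the exception's
    .pos attribute points at the offending character.
    """
    try:
        _, end = json.JSONDecoder().raw_decode(text)
        return -1 if text[end:].strip() == "" else end
    except json.JSONDecodeError as exc:
        return exc.pos

def first_garbled_index(text: str) -> int:
    """Index of the first replacement char or unassigned/private-use
    code point, or -1 if none. Adjust the check to what you actually observe."""
    for i, ch in enumerate(text):
        if ch == "\ufffd" or unicodedata.category(ch) in ("Cn", "Co"):
            return i
    return -1
```

Comparing these two indices across several bad outputs (e.g. does corruption always start at a similar token offset?) could narrow down whether this is a decoding, sampling, or serving issue.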
Is this badcase known, and can it be solved using available techniques?
Information about environment
OS Version:
Ubuntu 22.04.5 LTS
Python Version:
Python 3.10.12
GPU Information:
NVIDIA GeForce RTX 4080
NVIDIA GeForce RTX 4080
NVIDIA Driver Version:
550.120
CUDA Compiler Version:
1.5
PyTorch Version:
2.5.1+cu124
Description
Steps to reproduce
This happens to qwen2.5-7b-instruct
The badcase can be reproduced with the following steps:
The following example output can be used:
Expected results
In theory, every dict should contain well-formed keys and values; instead, garbled characters appear partway through the output.
Attempts to fix
I have tried several ways to fix this, including:
Anything else helpful for investigation
I find that this problem also happens to ...