Incorrect results reported for TensorRT-LLM #187
Comments
Hey, sorry for the late reply. Upon seeing that, I did cross-check: note that the device I am using is an A100 80 GB, and I also maxed out GPU utilization (with a larger KV cache), which is why the throughput is so high. As for the code, I also printed out that line and it seems fine; otherwise, the output you see here would have been very different from the ground truth.
Hey, thanks for looking into this, and no worries. And btw thanks for the really nicely reproducible code, it was super helpful, as trt-llm is showing really good scaling with larger batch sizes for us. I think there is a way to see that something strange is going on without any code changes -- observe how the reported generation speed changes with different prompts, which have different generation lengths. For simplicity I used only int4 models, on an L4 GPU.
I expect that an A100 would have better absolute performance but similar relative performance -- when generation is short, the tokens/s as measured by the current code is off the charts (7600 t/s), and when it's long, it's much slower (176 t/s). With the fix, the generation speed is slightly lower when generation is short (due to kernel launch overhead) and consistent with other measurements. This is happening because the measured token count includes the trailing EOS tokens rather than only the tokens that were actually generated. Curious if you see a similar discrepancy in reported generation speed with TensorRT-LLM across different prompts? Could be an issue with my build, sorry if that's the case.
I think extra EOS tokens at the end would be ignored during decoding, so the output is still fine; it's only the way we measure the number of generated tokens that is off.
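To make the effect concrete, here is a toy sketch (none of the numbers or token ids below come from the benchmark; they just assume an EOS-padded output of fixed length, which is what the thread describes):

```python
# Toy illustration: if the runner always returns max_new_tokens ids and pads the
# tail with EOS (id 2 for Mistral), counting the whole output overstates tokens/s
# whenever the real answer is short.
eos_id = 2
max_new_tokens = 200
elapsed_s = 1.0  # hypothetical wall-clock time for one request

# Pretend the model actually generated 50 tokens and the rest is EOS padding.
output_tokens = [1000 + i for i in range(50)] + [eos_id] * (max_new_tokens - 50)

reported_tps = len(output_tokens) / elapsed_s         # 200.0 -- counts the padding
actual_tps = output_tokens.index(eos_id) / elapsed_s  # 50.0 -- tokens really generated
print(reported_tps, actual_tps)
```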
Wow, 7K tokens/sec, this is concerning. Well, I saw some discrepancies where the tokens/sec differed by about 100 (sometimes getting 400 and sometimes 300), but one way to analyse this is to print all 10 generations and see whether they all output the same thing or not. Also, I see you are using two envs, can you show me what you changed in the
Thanks
Makes sense, thank you! If you can print the
Sure, here is a mistral-specific fix:
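A sketch of the kind of change this refers to, based on the `output_tokens.index(2)` approach described in the issue body below (the toy list and the exact placement around bench.py line 101 are assumed):

```python
# Mistral-specific fix: count only the tokens produced before the first EOS
# (id 2 is Mistral's </s>) rather than the full, EOS-padded output.
# output_tokens / num_output_tokens are the names used in the thread.
EOS_ID = 2
output_tokens = [422, 29071, 12711, EOS_ID, EOS_ID, EOS_ID]  # toy stand-in

num_output_tokens = (
    output_tokens.index(EOS_ID) if EOS_ID in output_tokens else len(output_tokens)
)
assert num_output_tokens == 3
```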
Thanks @lopuhin, will update you very soon.
I was able to run float16 and int4 trt-llm benchmarks with mistral 7B on an L4 GPU (GCP). The reported performance is 40.96 ± 0.37 t/s in float16 and 166.02 ± 0.52 t/s with int4, which is significantly faster than both exllamav2 and vllm at batch size 1 on llama 3 8B (also int4), and also 2x higher than theoretically possible given the available memory bandwidth.
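For context, a rough back-of-the-envelope version of that bandwidth argument (the L4 bandwidth figure and the weight sizes below are approximations, not measurements from this benchmark):

```python
# Batch-1 decoding is memory-bound: every generated token streams the full set of
# weights from GPU memory, so tokens/s is roughly bandwidth / weight bytes.
l4_bandwidth_gb_s = 300.0   # NVIDIA L4 spec (GDDR6), approximate
params_billion = 7.2        # Mistral 7B, approximate

weights_gb_fp16 = params_billion * 2.0   # ~14.4 GB at 16 bits per weight
weights_gb_int4 = params_billion * 0.5   # ~3.6 GB at 4 bits per weight

print(l4_bandwidth_gb_s / weights_gb_fp16)  # ~21 t/s ceiling vs ~41 t/s reported
print(l4_bandwidth_gb_s / weights_gb_int4)  # ~83 t/s ceiling vs ~166 t/s reported
```

Both reported numbers land roughly 2x above these ceilings, consistent with the "2x higher than theoretically possible" estimate above.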
I did some debugging and believe that the reported results are incorrect in terms of the number of generated tokens. E.g. after this line (benchmarks/bench_tensorrtllm/bench.py, line 101 in 0710037), if I instead compute `num_output_tokens` as `output_tokens.index(2)` (which is obviously not a general solution but works for now for mistral), then I get values which are much closer to vllm, and I also get the same generation speed in the speed test and in the subsequent quality test.
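A more general variant of the same idea would read the EOS id from the model's tokenizer instead of hardcoding Mistral's 2. This is only a sketch: the helper name is made up, and it assumes a Hugging Face tokenizer is available in the benchmark, which the thread does not show.

```python
def count_generated_tokens(output_tokens: list[int], eos_id: int) -> int:
    """Tokens produced before the first EOS; full length if no EOS was emitted."""
    try:
        return output_tokens.index(eos_id)
    except ValueError:
        return len(output_tokens)

# Hypothetical usage in bench.py (tokenizer object and checkpoint name assumed):
#   tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
#   num_output_tokens = count_generated_tokens(output_tokens, tokenizer.eos_token_id)
```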