Are you using an H100-PCIe or an H100-SXM? The A100-SXM and the H100-PCIe both have roughly 2 TB/s of memory bandwidth, so their LLM inference performance is similar (LLM inference is memory-bandwidth bound).
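To illustrate why the two cards can land on nearly the same number, here is a back-of-the-envelope roofline estimate (a minimal sketch; the 13B fp16 model size and the spec-sheet bandwidth figures are assumptions for illustration, not measurements from this issue):

```python
# Rough bandwidth-bound estimate of single-stream decode speed.
# Assumptions (not measurements): a 13B model in fp16 (~26 GiB of weights)
# and published peak HBM bandwidths for each card.
GiB = 1024 ** 3
model_bytes = 26 * GiB  # ~13B params * 2 bytes (fp16) -- assumed example model

bandwidth_gbps = {
    "A100-SXM (80GB)": 2039,  # GB/s, spec-sheet peak
    "H100-PCIe":       2000,  # GB/s, spec-sheet peak
    "H100-SXM":        3350,  # GB/s, spec-sheet peak
}

for gpu, bw in bandwidth_gbps.items():
    # Each decoded token has to stream (roughly) every weight once from HBM,
    # so tokens/s is bounded above by bandwidth / model size.
    tokens_per_s = bw * 1e9 / model_bytes
    print(f"{gpu}: ~{tokens_per_s:.0f} tokens/s upper bound")
```

Under this model the A100-SXM and H100-PCIe come out essentially identical, while an H100-SXM would be roughly 1.6x faster, which is why the exact H100 variant matters.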
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
Thank you for your hard work.
The performance difference between the A100 and the H100 is not significant. I used the official vLLM 0.2.4 image from Docker Hub.
With the prompt and completion lengths both set to 500 tokens, the A100 and the H100 each take about 19 seconds.
Are there any settings to optimize performance on the H100?
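For reference, a minimal sketch of the kind of fixed-length benchmark described above, using vLLM's offline API (the model name and prompt are placeholders, and I have not verified that the 0.2.4 image behaves identically to current releases):

```python
import time
from vllm import LLM, SamplingParams

# Placeholder model; swap in the model actually being benchmarked.
llm = LLM(model="meta-llama/Llama-2-13b-hf", dtype="float16")

# Roughly 500 prompt tokens (approximated here by repeating a word) and
# exactly 500 completion tokens, mirroring the 500/500 setup in the report.
prompt = "hello " * 500
params = SamplingParams(max_tokens=500, ignore_eos=True)

start = time.time()
outputs = llm.generate([prompt], params)
elapsed = time.time() - start

generated = len(outputs[0].outputs[0].token_ids)
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

Running the same script on both machines (same model, same dtype, same prompt) would make it easier to tell whether the 19-second result is a benchmarking artifact or a genuine bandwidth-bound ceiling.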