Any optimization options for H100? #2107

Open
Archmilio opened this issue Dec 14, 2023 · 2 comments
Labels: performance (Performance-related issues), stale

Comments

@Archmilio
Thank you for your hard work.

The performance difference between the A100 and the H100 is not significant. I used the official vLLM 0.2.4 image from Docker Hub.

I set the prompt and completion lengths to 500 tokens each, and both the A100 and the H100 take about 19 seconds.

Are there any settings to optimize performance on H100?
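
For reference, a minimal sketch of how such a timing run might look with vLLM's offline Python API. The model name, the rough 500-token prompt, and the timing harness are assumptions for illustration, not the reporter's actual script:

```python
import time

from vllm import LLM, SamplingParams

# Assumed model; the issue only states "prompt and completion set to 500".
llm = LLM(model="meta-llama/Llama-2-13b-hf")

# Force ~500 output tokens so completion length matches the reported setup.
params = SamplingParams(max_tokens=500, ignore_eos=True)

# Rough placeholder for a ~500-token prompt.
prompt = "word " * 500

start = time.perf_counter()
llm.generate([prompt], params)
print(f"elapsed: {time.perf_counter() - start:.1f} s")
```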

@Tan-YiFan
Are you using the H100-PCIe or the H100-SXM? The memory bandwidths of the A100-SXM and the H100-PCIe are both about 2 TB/s, so their LLM-inference performance is similar (LLM inference is sensitive to memory bandwidth).
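
As a rough illustration of that point, here is a back-of-the-envelope roofline sketch: if decoding is memory-bandwidth-bound, each generated token streams roughly all the model weights from HBM, so per-token latency is approximately weight bytes divided by bandwidth. The 13B model size, fp16 weights, and bandwidth figures are assumptions for illustration:

```python
# Hedged roofline sketch: per-token decode latency ~= weight_bytes / memory_bandwidth
# when generation is memory-bandwidth-bound.

def decode_ms_per_token(num_params: float, bytes_per_param: float, bandwidth_bytes_per_s: float) -> float:
    return num_params * bytes_per_param / bandwidth_bytes_per_s * 1e3

# Assumed example: a 13B-parameter model in fp16 (2 bytes per parameter).
num_params, bytes_per_param = 13e9, 2

for name, bw in [
    ("A100-SXM (~2.0 TB/s)", 2.0e12),
    ("H100-PCIe (~2.0 TB/s)", 2.0e12),
    ("H100-SXM (~3.35 TB/s)", 3.35e12),
]:
    print(f"{name}: ~{decode_ms_per_token(num_params, bytes_per_param, bw):.1f} ms per decoded token")
```

Under these assumptions the A100-SXM and H100-PCIe land at essentially the same per-token latency, while the H100-SXM's higher bandwidth is what would actually move the number.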

@hmellor added the performance (Performance-related issues) label on Mar 25, 2024
@github-actions bot

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

@github-actions bot added the stale label on Oct 30, 2024