
Add hip support #330

Merged 2 commits into main from add_hip_support on Oct 4, 2024

Conversation

dacorvo (Collaborator) commented Oct 4, 2024

What does this PR do?

This is a rebase of #280.

This has been tested on an AMD Instinct MI250X/MI250.

Some tests with the qfloat8 Linear are failing because the mismatch between the quantized and non-quantized outputs is wider on this hardware, but this is a good first step.

FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-qfloat8-e4m3-w-qint8-fp16-bias-10-256-1] - ValueError: Alignment 0.95751953 deviates too much from 1.0 with atol=0.005, rtol=0.001
FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-qfloat8-e4m3-w-qint8-fp16-bias-10-256-10] - ValueError: Alignment 0.95556641 deviates too much from 1.0 with atol=0.005, rtol=0.001
FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-qfloat8-e4m3-w-qint8-fp16-no-bias-10-256-1] - ValueError: Alignment 0.95458984 deviates too much from 1.0 with atol=0.005, rtol=0.001
FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-qfloat8-e4m3-w-qint8-fp16-no-bias-10-256-10] - ValueError: Alignment 0.95507812 deviates too much from 1.0 with atol=0.005, rtol=0.001
FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-float8-e4m3-uz-w-qint8-fp16-bias-10-256-1] - ValueError: Alignment 0.99267578 deviates too much from 1.0 with atol=0.005, rtol=0.001
FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-float8-e4m3-uz-w-qint8-fp16-bias-10-256-10] - ValueError: Alignment 0.99316406 deviates too much from 1.0 with atol=0.005, rtol=0.001
FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-float8-e4m3-uz-w-qint8-fp16-no-bias-10-256-1] - ValueError: Alignment 0.99267578 deviates too much from 1.0 with atol=0.005, rtol=0.001
FAILED test/nn/test_qlinear.py::test_quantize_linear_float16_activations_float8[cuda-a-float8-e4m3-uz-w-qint8-fp16-no-bias-10-256-10] - ValueError: Alignment 0.99267578 deviates too much from 1.0 with atol=0.005, rtol=0.001
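For context, each failure above compares an alignment score (how closely the quantized layer's outputs track the non-quantized reference, with 1.0 meaning a perfect match) against absolute and relative tolerances. A minimal sketch of such a check, using a cosine-similarity-style metric; the function names and the use of `math.isclose` are illustrative, not quanto's actual implementation:

```python
import math

def alignment(reference, quantized):
    """Cosine similarity between reference and quantized outputs.

    Returns 1.0 for a perfect match; values below 1.0 indicate
    quantization error.
    """
    dot = sum(r * q for r, q in zip(reference, quantized))
    norm_ref = math.sqrt(sum(r * r for r in reference))
    norm_q = math.sqrt(sum(q * q for q in quantized))
    return dot / (norm_ref * norm_q)

def check_alignment(score, atol=0.005, rtol=0.001):
    """Raise if the score deviates too much from 1.0."""
    if not math.isclose(score, 1.0, abs_tol=atol, rel_tol=rtol):
        raise ValueError(
            f"Alignment {score:.8f} deviates too much from 1.0 "
            f"with atol={atol}, rtol={rtol}"
        )
```

With the tolerances from the log, a score of 0.9575 (the worst qfloat8 case) fails the check, while the e4m3-uz scores around 0.9927 miss the threshold by a much smaller margin, which is consistent with the failures being a hardware-dependent precision gap rather than a logic bug.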

@dacorvo dacorvo mentioned this pull request Oct 4, 2024
3 tasks
@dacorvo dacorvo merged commit 843b793 into main Oct 4, 2024
16 checks passed
@dacorvo dacorvo deleted the add_hip_support branch October 4, 2024 16:40

2 participants