
Defaulting HIP_PLATFORM to amd on Windows #3466

Merged — 1 commit merged into develop on Sep 25, 2024
Conversation

apwojcik
Collaborator

The HIP SDK on Windows does not detect the platform correctly, so HIP_PLATFORM needs to be defaulted to "amd" before calling find_package(hip ...).
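The change described above can be sketched in CMake. This is a minimal illustration, not the PR's exact diff: it assumes the default is applied in the top-level CMakeLists.txt, and the guard that respects a user-supplied value (cache variable or environment variable) is an assumption about how such a default would typically be written.

```cmake
# Sketch: on Windows, default HIP_PLATFORM to "amd" before locating HIP,
# since the HIP SDK does not detect the platform correctly there.
# The guard leaves any value the user already set (e.g. via
# -DHIP_PLATFORM=amd or an environment variable) untouched.
if(WIN32 AND NOT DEFINED HIP_PLATFORM AND NOT DEFINED ENV{HIP_PLATFORM})
    set(ENV{HIP_PLATFORM} "amd")
    set(HIP_PLATFORM "amd" CACHE STRING "HIP platform to target (amd or nvidia)")
endif()

# With the default in place, HIP's CMake package can be located as usual.
find_package(hip REQUIRED)
```

Seeding both the cache variable and the environment variable covers the two places HIP's CMake package config may look for the platform.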

@apwojcik apwojcik added Windows Related changes for Windows Environments UAI labels Sep 21, 2024
@apwojcik apwojcik requested a review from causten as a code owner September 21, 2024 17:11

codecov bot commented Sep 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 92.04%. Comparing base (ab5da3c) to head (47308a2).
Report is 3 commits behind head on develop.

Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #3466   +/-   ##
========================================
  Coverage    92.04%   92.04%           
========================================
  Files          506      506           
  Lines        20872    20872           
========================================
  Hits         19212    19212           
  Misses        1660     1660           
Flag Coverage Δ
92.04% <ø> (ø)

Flags with carried forward coverage won't be shown.


@migraphx-bot
Collaborator

Test	Batch	Rate new (commit 47308a)	Rate old (commit 32a84a)	Diff
torchvision-resnet50 64 3,247.97 3,250.93 -0.09%
torchvision-resnet50_fp16 64 6,992.03 6,986.29 0.08%
torchvision-densenet121 32 2,428.93 2,433.76 -0.20%
torchvision-densenet121_fp16 32 4,100.59 4,110.78 -0.25%
torchvision-inceptionv3 32 1,637.75 1,635.58 0.13%
torchvision-inceptionv3_fp16 32 2,742.12 2,737.29 0.18%
cadene-inceptionv4 16 779.39 775.61 0.49%
cadene-resnext64x4 16 808.62 808.58 0.00%
slim-mobilenet 64 7,455.26 7,452.87 0.03%
slim-nasnetalarge 64 208.16 208.18 -0.01%
slim-resnet50v2 64 3,433.62 3,434.69 -0.03%
bert-mrpc-onnx 8 1,148.05 1,150.44 -0.21%
bert-mrpc-tf 1 311.79 311.37 0.13%
pytorch-examples-wlang-gru 1 421.11 426.16 -1.19%
pytorch-examples-wlang-lstm 1 393.60 386.06 1.95%
torchvision-resnet50_1 1 769.50 816.95 -5.81% 🔴
cadene-dpn92_1 1 403.35 401.62 0.43%
cadene-resnext101_1 1 381.50 380.63 0.23%
onnx-taau-downsample 1 344.33 345.07 -0.21%
dlrm-criteoterabyte 1 35.04 35.06 -0.07%
dlrm-criteoterabyte_fp16 1 58.02 58.01 0.01%
agentmodel 1 7,988.31 7,975.82 0.16%
unet_fp16 2 57.90 58.07 -0.29%
resnet50v1_fp16 1 1,031.52 921.74 11.91% 🔆
resnet50v1_int8 1 967.66 953.21 1.52%
bert_base_cased_fp16 64 1,152.84 1,152.80 0.00%
bert_large_uncased_fp16 32 355.91 355.84 0.02%
bert_large_fp16 1 211.86 211.95 -0.05%
distilgpt2_fp16 16 2,152.14 2,157.57 -0.25%
yolov5s 1 537.38 528.76 1.63%
tinyllama 1 43.40 43.37 0.07%
vicuna-fastchat 1 169.02 168.85 0.10%
whisper-tiny-encoder 1 418.95 418.42 0.13%
whisper-tiny-decoder 1 432.18 425.66 1.53%

This build is not recommended to merge 🔴

@migraphx-bot
Collaborator


     ✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance

     ✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance

     ✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance

     ✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance

     ✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance

     ✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance

     ✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance

     ✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance

     ✅ agentmodel: PASSED: MIGraphX meets tolerance

     ✅ unet: PASSED: MIGraphX meets tolerance

     ✅ resnet50v1: PASSED: MIGraphX meets tolerance

     ✅ bert_base_cased_fp16: PASSED: MIGraphX meets tolerance

     🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output


     ✅ bert_large: PASSED: MIGraphX meets tolerance

     ✅ yolov5s: PASSED: MIGraphX meets tolerance

     ✅ tinyllama: PASSED: MIGraphX meets tolerance

     ✅ vicuna-fastchat: PASSED: MIGraphX meets tolerance

     ✅ whisper-tiny-encoder: PASSED: MIGraphX meets tolerance

     ✅ whisper-tiny-decoder: PASSED: MIGraphX meets tolerance

     ✅ distilgpt2_fp16: PASSED: MIGraphX meets tolerance

@pfultz2
Collaborator

pfultz2 commented Sep 21, 2024

This should be fixed in HIP; it is not a MIGraphX bug. As a workaround, you can just pass -DHIP_PLATFORM=amd on the cmake command line until HIP fixes it.

@causten causten merged commit 9f01876 into develop Sep 25, 2024
51 checks passed
@causten causten deleted the hip_sdk_6.2_windows branch September 25, 2024 03:04
@pfultz2
Collaborator

pfultz2 commented Sep 25, 2024

@causten Why was this merged? My comments were not even addressed. We shouldn't put HIP fixes in our repo; they should be fixed in HIP. And there is a simple workaround that can already be used. I think I am going to revert this.

pfultz2 added a commit that referenced this pull request Sep 25, 2024
pfultz2 added a commit that referenced this pull request Sep 25, 2024
@Dnonmi

Dnonmi commented Sep 26, 2024

> @causten Why was this merged? My comments was not even addressed. We shouldn't put hip fixes in out repo, they should be fixed in hip. And there is a simple workaround that can already be used. I think I am going to revert this.

My question is: why create a delay for fixes that were already deemed fine for other branches of the same software stack, namely MIOpen (ROCm/MIOpen#3263)? The same argument would seem to apply there.

With this lack of consistency, it's not reassuring that the HIP side of things will be fixed soon so other people can move on with their jobs.

Labels: UAI, Windows (Related changes for Windows Environments)
5 participants