Re-enable vision MPS builds #8485

Merged 10 commits on Jun 10, 2024
Changes from 7 commits
2 changes: 1 addition & 1 deletion .github/workflows/tests.yml
@@ -68,7 +68,7 @@ jobs:
export GPU_ARCH_TYPE=cpu
export GPU_ARCH_VERSION=''

-./.github/scripts/unittest.sh
+${CONDA_RUN} ./.github/scripts/unittest.sh
@huydhn (Contributor, Author) commented on Jun 9, 2024:

This is not related to the issue at hand, but I found it during the investigation. This is the correct way to invoke a script when using pytorch/test-infra/.github/workflows/macos_job.yml@main. Otherwise, the system conda is used instead of the one set up by macos_job, and the ci env created by the script won't be cleaned up correctly.

A member replied:

Thanks @huydhn, does that mean we should use that for the rest of the jobs as well (including Linux and Windows)?

@huydhn (Contributor, Author) replied:

We don't need this on Linux because the script is run inside Docker, so cleaning things up is not an issue there, and you can use your own conda installation if needed.

On the other hand, this might be needed on Windows, but I haven't seen any disk space issue there before, so maybe it's not that important there.


unittests-windows:
strategy:
13 changes: 3 additions & 10 deletions setup.py
@@ -5,7 +5,6 @@
import shutil
import subprocess
import sys
-import warnings

import torch
from pkg_resources import DistributionNotFound, get_distribution, parse_version
@@ -139,6 +138,7 @@ def get_extensions():
+ glob.glob(os.path.join(extensions_dir, "ops", "cpu", "*.cpp"))
+ glob.glob(os.path.join(extensions_dir, "ops", "quantized", "cpu", "*.cpp"))
)
+source_mps = glob.glob(os.path.join(extensions_dir, "ops", "mps", "*.mm"))

print("Compiling extensions with following flags:")
force_cuda = os.getenv("FORCE_CUDA", "0") == "1"
@@ -204,15 +204,8 @@ def get_extensions():
define_macros += [("WITH_HIP", None)]
nvcc_flags = []
extra_compile_args["nvcc"] = nvcc_flags

-# FIXME: MPS build breaks custom ops registration, so it was disabled.
-# See https://github.com/pytorch/vision/issues/8456.
-# TODO: Fix MPS build, remove warning below, and put back commented-out elif block.
-if force_mps:
-    warnings.warn("MPS build is temporarily disabled!!!!")
-# elif torch.backends.mps.is_available() or force_mps:
-# source_mps = glob.glob(os.path.join(extensions_dir, "ops", "mps", "*.mm"))
-# sources += source_mps
+elif torch.backends.mps.is_available() or force_mps:
+    sources += source_mps

if sys.platform == "win32":
define_macros += [("torchvision_EXPORTS", None)]
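As a side note (not part of the PR), here is a minimal sketch of how one might sanity-check that the re-enabled MPS build actually registers the custom ops, assuming torchvision was built from this branch on an Apple Silicon machine with MPS available; torchvision.ops.nms is used only as a representative MPS-backed op:

import torch
import torchvision

# Run a torchvision custom op on the "mps" device. If the MPS kernels were not
# compiled and registered, this call is expected to fail with a missing-kernel
# error rather than return a result.
if torch.backends.mps.is_available():
    boxes = torch.tensor(
        [[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]], device="mps"
    )
    scores = torch.tensor([0.9, 0.8], device="mps")
    keep = torchvision.ops.nms(boxes, scores, iou_threshold=0.5)
    # These two boxes overlap with IoU ~0.68 > 0.5, so the lower-score box is
    # suppressed and keep should be tensor([0]).
    print(keep.cpu())
else:
    print("MPS is not available on this machine; skipping the check.")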