
Conversation

Cunxiao2002 (Contributor) commented Oct 13, 2025

I discovered that PyTorch versions below 2.4 do not provide the torch.mps.is_available() interface, only torch.backends.mps.is_available(). To support older PyTorch versions, we need to switch to the latter.
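
A minimal sketch of a version-tolerant check, assuming the IS_MPS flag name from tilelang/utils/device.py (the getattr guard and helper variable are illustrative, not the exact code):

    import torch

    # torch.backends.mps.is_available() exists on much older PyTorch releases than
    # torch.mps.is_available(); the guard also tolerates builds without an MPS backend.
    _mps_backend = getattr(torch.backends, "mps", None)
    IS_MPS = _mps_backend is not None and _mps_backend.is_available()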

Summary by CodeRabbit

  • Bug Fixes
    • Improved detection of Apple Metal (MPS) support on macOS, resulting in more reliable automatic device selection. This reduces incorrect CPU/GPU fallbacks and applies MPS-specific behavior only when supported, improving stability and reducing errors for users with compatible hardware. CUDA and CPU behavior remains unchanged.

github-actions bot commented

👋 Hi! Thank you for contributing to the TileLang project.

Please remember to run pre-commit run --all-files in the root directory of the project to ensure your changes are properly linted and formatted. This will help ensure your contribution passes the format check.

We appreciate you taking this step! Our team will review your contribution, and we look forward to your awesome work! 🚀

coderabbitai bot commented Oct 13, 2025

Walkthrough

Updated MPS availability detection in tilelang/utils/device.py to use torch.backends.mps.is_available() instead of torch.mps.is_available(), affecting the IS_MPS flag used by get_current_device. No changes to CUDA handling or return value logic.
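
For context, a sketch of the kind of selection logic such a flag typically feeds is shown below; the CUDA-then-MPS-then-CPU priority and the function body are assumptions for illustration, not the actual tilelang/utils/device.py implementation:

    import torch

    IS_MPS = torch.backends.mps.is_available()

    def get_current_device() -> torch.device:
        # Illustrative priority: CUDA if present, then MPS on Apple hardware, else CPU.
        if torch.cuda.is_available():
            return torch.device("cuda", torch.cuda.current_device())
        if IS_MPS:
            return torch.device("mps")
        return torch.device("cpu")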

Changes

Cohort / File(s): Device selection logic (tilelang/utils/device.py)
Change Summary: Replaced the MPS availability check with torch.backends.mps.is_available() and updated the IS_MPS initialization accordingly; no other logic modified.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

I twitch my ears at backend news,
A tiny hop, a subtle choose—
Backends.MPS, we now consult,
The device decides without tumult.
I thump in logs, then swiftly glide,
To CUDA, CPU, or Apple’s side. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
Title Check ✅ Passed: The title succinctly indicates a bugfix to the torch MPS availability check, directly reflecting the switch from torch.mps.is_available() to torch.backends.mps.is_available(), so it clearly summarizes the main intent of the pull request.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tilelang/utils/device.py (1)

4-4: LGTM! Correct fix for PyTorch version compatibility.

Using torch.backends.mps.is_available() is the right approach as it's the stable API available since PyTorch 1.12 (when MPS support was introduced), whereas torch.mps.is_available() was only added in PyTorch 2.4. This change ensures backward compatibility while maintaining forward compatibility.

Optionally, consider adding a brief comment explaining the API choice for future maintainers:

+# Use torch.backends.mps for compatibility with PyTorch < 2.4
 IS_MPS = torch.backends.mps.is_available()
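
If in doubt about which API a given PyTorch build exposes, a quick probe such as the following (purely illustrative) prints what is actually present:

    import torch

    print("PyTorch version:", torch.__version__)
    print("torch.backends.mps.is_available present:",
          hasattr(getattr(torch.backends, "mps", None), "is_available"))
    print("torch.mps.is_available present:",
          hasattr(getattr(torch, "mps", None), "is_available"))
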
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d89ba5b and e657410.

📒 Files selected for processing (1)
  • tilelang/utils/device.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-test-metal

oraluben (Contributor) commented Oct 13, 2025

Currently the MPS backend relies on torch APIs that only exist since 2.7. A refactor to support lower versions is planned, but I don't have a specific timeline yet. One solution is to add something like torch>=2.7; platform='Darwin' to the requirements.
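
For reference, such a platform-conditional pin is normally written with a PEP 508 environment marker; a possible requirements entry might look like the sketch below (the version bound and marker choice are illustrative, not a decided TileLang requirement):

    # requirements sketch: only pin torch>=2.7 on macOS
    torch>=2.7; platform_system == "Darwin"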

Cunxiao2002 (Contributor, Author) commented

@oraluben Thank you for your response. I understand now. I will close this PR and look forward to the refactoring!
