Conversation

@LJC00118 LJC00118 (Collaborator) commented Sep 26, 2025

Summary by CodeRabbit

  • New Features
    • Overlapping input tensors are now accepted during kernel execution; prior validation that could raise errors has been removed, allowing operations to proceed.
    • This reduces unexpected interruptions and improves compatibility with workflows that produce aliased or strided tensors, enabling more tensor configurations to run without manual adjustments.
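
For a concrete picture (a minimal sketch with made-up tensors, not code from this PR), these are the kinds of strided or aliased views the summary refers to:

    import torch

    x = torch.randn(4, 8)
    strided = x[:, ::2]            # every other column: shape (4, 4), strides (8, 2)
    print(strided.is_contiguous()) # False, yet every element has its own storage slot

    base = torch.zeros(4, 1)
    aliased = base.expand(4, 8)    # shape (4, 8), strides (1, 0): columns share storage
    print(aliased.is_contiguous()) # False, and here elements really do alias each other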

@coderabbitai coderabbitai bot (Contributor) commented Sep 26, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

The change disables the overlapping-tensor validation in tilelang/jit/adapter/cython/cython_wrapper.pyx by converting the check into a triple-quoted string, leaving input preparation logic otherwise intact.

Changes

Cohort / File(s): Cython wrapper input preparation (tilelang/jit/adapter/cython/cython_wrapper.pyx)
Summary: Commented out (triple-quoted) the runtime check for overlapping input tensors, removing the ValueError path while keeping the rest of the preparation flow unchanged.

Sequence Diagram(s)

sequenceDiagram
  participant Caller
  participant CythonWrapper as Cython Wrapper
  participant Kernel as Kernel

  Caller->>CythonWrapper: prepare_inputs(args)
  %% Previously performed validation (now disabled)
  rect rgba(200,200,200,0.2)
    note over CythonWrapper: Overlapping-tensor validation (removed)
  end
  CythonWrapper->>CythonWrapper: Build/prep tensor views
  CythonWrapper->>Kernel: launch(args_prepared)
  Kernel-->>Caller: result

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I nudge the kernels, ears perked high,
The overlap guard hops gently by—now shy.
No clatter, no clang, just a silent skip,
Inputs align, we launch the trip.
In fields of bytes, I bound with glee,
One less fence, more run-time free. 🐇

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.
  • Title Check: ✅ Passed. The title “[Cython] Remove an incorrect check” directly reflects the primary change of disabling the overlapping-tensor validation in the Cython wrapper and clearly conveys the context and intent of the patch.

@github-actions github-actions bot commented

👋 Hi! Thank you for contributing to the TileLang project.

Please remember to run bash format.sh in the root directory of the project to ensure your changes are properly linted and formatted. This will help ensure your contribution passes the format check.

We appreciate you taking this step! Our team will review your contribution, and we look forward to your awesome work!

🚀

@gemini-code-assist gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @LJC00118, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue with an existing validation check for torch.Tensor inputs. By commenting out the if condition that flagged non-contiguous, potentially overlapping tensors, the PR prevents spurious validation failures in the CythonKernelWrapper and lets such inputs proceed to kernel execution.

Highlights

  • Tensor Validation: Removed an if condition that checked for non-contiguous, overlapping torch.Tensor inputs within the CythonKernelWrapper, as this check was identified as incorrect or unnecessary.
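
To see why the condition misfired (a rough sketch with hypothetical tensors; the exact error text depends on the PyTorch version): the removed code re-strided the view's base with the view's strides, so it inspected a layout that does not correspond to the actual input and could fail outright on an ordinary slice.

    import torch

    x = torch.randn(4, 8)
    view = x[:, ::2]               # legitimate non-contiguous slice, no self-aliasing
    # The removed check rebuilt a tensor from the *base* using the view's strides:
    try:
        probe = view._base.as_strided(view._base.shape, view.stride())
        print(torch._debug_has_internal_overlap(probe))
    except RuntimeError as err:
        # Base shape (4, 8) with the view's strides (8, 2) would need more storage
        # than the base actually owns, so the check errors out on a safe input.
        print("check failed:", err)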

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request removes a check for overlapping tensors by commenting it out. While disabling the check seems correct based on the PR title, the method used (a multi-line string) is not ideal. I've suggested removing the code block and the associated TODO comment entirely to improve code clarity and maintainability.

Comment on lines 199 to +207
# TODO(chenggang): remove this check or rewrite by ourselves?
'''
if isinstance(tensor, torch.Tensor) and tensor._base is not None and not tensor.is_contiguous():
    base_tensor = tensor._base.as_strided(tensor._base.shape, tensor.stride())
    if torch._debug_has_internal_overlap(base_tensor):
        raise ValueError(f"Cannot use an overlapping tensor"
                         f"(shape={tensor.shape}, strides={tensor.stride()}, "
                         f"overlap={torch._debug_has_internal_overlap(base_tensor)}) as the kernel input")
'''
Severity: medium

Instead of commenting out this code using a multi-line string, it's better to remove it entirely, along with the now-obsolete TODO comment. This improves code clarity and avoids clutter. If the code needs to be restored later, version control history can be used.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aa0b109 and 1d2ff3a.

📒 Files selected for processing (1)
  • tilelang/jit/adapter/cython/cython_wrapper.pyx (1 hunks)

Comment on lines +200 to +207
'''
if isinstance(tensor, torch.Tensor) and tensor._base is not None and not tensor.is_contiguous():
    base_tensor = tensor._base.as_strided(tensor._base.shape, tensor.stride())
    if torch._debug_has_internal_overlap(base_tensor):
        raise ValueError(f"Cannot use an overlapping tensor"
                         f"(shape={tensor.shape}, strides={tensor.stride()}, "
                         f"overlap={torch._debug_has_internal_overlap(base_tensor)}) as the kernel input")
'''

⚠️ Potential issue | 🔴 Critical

Restore the overlapping-tensor guard

Commenting out this block lets overlapping views (e.g., results of expand, as_strided, certain transposes) reach the kernel. Those views alias the same storage, so once the kernel writes, threads race and results become undefined. The underlying check was flawed because of the as_strided call on _base, but we still need a guard; we should just check the tensor directly and treat failures conservatively. Please rewrite instead of removing.

-            '''
-            if isinstance(tensor, torch.Tensor) and tensor._base is not None and not tensor.is_contiguous():
-                base_tensor = tensor._base.as_strided(tensor._base.shape, tensor.stride())
-                if torch._debug_has_internal_overlap(base_tensor):
-                    raise ValueError(f"Cannot use an overlapping tensor"
-                                     f"(shape={tensor.shape}, strides={tensor.stride()}, "
-                                     f"overlap={torch._debug_has_internal_overlap(base_tensor)}) as the kernel input")
-            '''
+            if (
+                isinstance(tensor, torch.Tensor)
+                and tensor._base is not None
+                and not tensor.is_contiguous()
+            ):
+                try:
+                    has_overlap = torch._debug_has_internal_overlap(tensor)
+                except RuntimeError as err:
+                    raise ValueError(
+                        f"Cannot use an overlapping tensor "
+                        f"(shape={tensor.shape}, strides={tensor.stride()}) as the kernel input"
+                    ) from err
+                if has_overlap:
+                    raise ValueError(
+                        f"Cannot use an overlapping tensor "
+                        f"(shape={tensor.shape}, strides={tensor.stride()}, "
+                        f"overlap={has_overlap}) as the kernel input"
+                    )
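
As a quick sanity check of the proposed guard (a sketch with hypothetical tensors; in current PyTorch builds torch._debug_has_internal_overlap returns 0 for no overlap, 1 for definite overlap, and 2 when it cannot decide cheaply), a genuinely aliased view produced by expand is flagged when the tensor itself is inspected:

    import torch

    base = torch.zeros(4, 1)
    overlapping = base.expand(4, 8)   # strides (1, 0): all 8 columns share storage

    assert overlapping._base is not None and not overlapping.is_contiguous()
    print(torch._debug_has_internal_overlap(overlapping))  # 1, so the guard above would raise
    # With the guard removed entirely, a tensor like this reaches the kernel and
    # concurrent writes to the shared elements become a data race.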

@LeiWang1999 LeiWang1999 changed the title Remove an incorrect check [Cython] Remove an incorrect check Sep 26, 2025
@LeiWang1999 LeiWang1999 merged commit 6f6ef7a into tile-ai:main Sep 26, 2025
2 checks passed
@LeiWang1999 LeiWang1999 (Member) commented

LGTM, Merged.
