[Reland] Fix tensor.data_ptr() representation overflow (pytorch#135567)
# Motivation
Fixes pytorch#135550.
In PyTorch, [`tensor.data_ptr()`](https://github.com/pytorch/pytorch/blob/e889252493558a56263618faae9a9ef421c2a47d/tools/autograd/templates/python_variable_methods.cpp#L204) is wrapped as a [signed int64](https://github.com/pytorch/pytorch/blob/e889252493558a56263618faae9a9ef421c2a47d/torch/csrc/autograd/utils/wrap_outputs.h#L50), so any address in the upper half of the 64-bit address space overflows and is reported as a negative number, for example:
```python
import torch
a = torch.randn(2).to('xpu')
a.data_ptr()
# one possible output is
-23453392437248
# this is inconsistent with storage.data_ptr()
a.untyped_storage().data_ptr()
# one possible output is
18446720620317114368
```
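The two outputs above refer to the same address: -23453392437248 + 2^64 = 18446720620317114368, so the negative number is just the two's-complement reading of the unsigned 64-bit pointer. A minimal standalone C++ sketch (not PyTorch code; the address is the hypothetical value from the output above) illustrates the effect:
```cpp
// Sketch: how an address above 2^63 prints when read as a signed int64
// versus as an unsigned value (assumes a two's-complement platform, which
// all mainstream compilers/targets are).
#include <cstdint>
#include <cstdio>

int main() {
  // Hypothetical device pointer from the example output above.
  uint64_t address = 18446720620317114368ULL;

  // Old path: the pointer bits are reinterpreted as a signed 64-bit integer,
  // which is effectively what wrap(void*) did via reinterpret_cast<intptr_t>.
  auto as_signed = static_cast<int64_t>(address);
  std::printf("signed int64 : %lld\n", static_cast<long long>(as_signed));
  // prints: signed int64 : -23453392437248

  // New path: keep the value unsigned, which is what PyLong_FromVoidPtr does.
  std::printf("unsigned     : %llu\n", static_cast<unsigned long long>(address));
  // prints: unsigned     : 18446720620317114368
  return 0;
}
```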
This PR fixes the representation overflow so that `tensor.data_ptr()` is consistent with [`tensor.untyped_storage().data_ptr()`](https://github.com/pytorch/pytorch/blob/c0d2f991b14d50f8081d788d4a3dc6584ee15502/torch/csrc/StorageMethods.cpp#L62). With this PR, the output becomes:
```python
import torch
a = torch.randn(2).to('xpu')
a.data_ptr()
# one possible output is
18446720620317114368
# this is consistent with storage.data_ptr()
a.untyped_storage().data_ptr()
# one possible output is
18446720620317114368
```

# Solution
Use `PyLong_FromVoidPtr` to prevent the overflow and preserve the semantics of `wrap`.
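As a rough illustration of the change, here is a sketch against the plain CPython C API (not the actual PyTorch helpers; `THPUtils_packInt64` is approximated by `PyLong_FromLongLong`, which shares its signed-int64 semantics):
```cpp
// Sketch of the old vs. new wrap(void*) strategy using only Python.h.
#include <Python.h>
#include <cstdint>

// Old behavior (approximation): the pointer is first squeezed through a
// signed intptr_t, so addresses with the top bit set become negative ints.
static PyObject* wrap_signed(void* value) {
  return PyLong_FromLongLong(
      static_cast<long long>(reinterpret_cast<intptr_t>(value)));
}

// New behavior: PyLong_FromVoidPtr converts via an unsigned integer type,
// so the resulting Python int always equals the raw, non-negative address.
static PyObject* wrap_unsigned(void* value) {
  return PyLong_FromVoidPtr(value);
}
```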

# Additional Context
This PR was previously reverted (revert commit pytorch@2e8d431) and is relanded here unchanged. The revert was needed because the change to `tensor.data_ptr()` has to be synced to the Intel XPU Triton side (see [pytorch#2192](intel/intel-xpu-backend-for-triton#2192)), so the XPU Triton commit pin must be updated together with this PR.
Pull Request resolved: pytorch#135567
Approved by: https://github.com/dvrogozh, https://github.com/EikanWang, https://github.com/albanD
guangyey authored and pobin6 committed Dec 5, 2024
1 parent 32f1b59 commit a4feb2f
Showing 2 changed files with 2 additions and 5 deletions.
torch/csrc/StorageMethods.cpp (1 addition & 4 deletions)

```diff
@@ -50,17 +50,14 @@ static PyObject* THPStorage_nbytes(PyObject* self, PyObject* noargs) {
 
 static PyObject* THPStorage_dataPtr(PyObject* self, PyObject* noargs) {
   HANDLE_TH_ERRORS
-  // PyLong_FromVoidPtr should not need to mutate the pointer in order
-  // to extract a new long object from it.
-
   auto self_ = THPStorage_Unpack(self);
   // See Note [Invalid Python Storages]
   auto invalid = self_.data() == nullptr &&
       self_.device_type() != c10::DeviceType::Meta && self_.sym_nbytes() != 0;
   TORCH_CHECK(
       !invalid,
       "Attempted to access the data pointer on an invalid python storage.")
-  return PyLong_FromVoidPtr(self_.mutable_data());
+  return torch::autograd::utils::wrap(self_.mutable_data());
   END_HANDLE_TH_ERRORS
 }
```
torch/csrc/autograd/utils/wrap_outputs.h (1 addition & 1 deletion)

```diff
@@ -47,7 +47,7 @@ inline PyObject* wrap(c10::complex<double> value) {
 }
 
 inline PyObject* wrap(void* value) {
-  return THPUtils_packInt64(reinterpret_cast<intptr_t>(value));
+  return PyLong_FromVoidPtr(value);
 }
 
 inline PyObject* wrap(THPDtype* dtype) {
```
