【Hackathon 7th No.25】Add `is_coalesced` to Paddle -part #68334

Merged: 18 commits, Oct 28, 2024
60 changes: 60 additions & 0 deletions paddle/fluid/pybind/eager_method.cc
@@ -2735,6 +2735,62 @@ static PyObject* tensor_method_to_sparse_csr(TensorObject* self,
EAGER_CATCH_AND_THROW_RETURN_NULL
}

PyDoc_STRVAR(tensor_is_coalesced__doc__, // NOLINT
R"DOC(is_coalesced($self, /)
--

Check whether the Tensor is a coalesced SparseCooTensor. If it is not, False is returned.
Any Tensor among DenseTensor/SparseCooTensor/SparseCsrTensor is supported.

Notes:
It always returns False for a newly created SparseCooTensor.

Args:
x (Tensor): The input tensor. It can be DenseTensor/SparseCooTensor/SparseCsrTensor.

Returns:
bool: True if the Tensor is a coalesced SparseCooTensor, and False otherwise.

Examples:

.. code-block:: python

>>> import paddle

>>> indices = [[0, 0, 1], [1, 1, 2]]
>>> values = [1.0, 2.0, 3.0]
>>> x = paddle.sparse.sparse_coo_tensor(indices, values)

>>> x.is_coalesced()
False
>>> x = x.coalesce()
>>> x.is_coalesced()
True

>>> x = paddle.to_tensor([[1., 2., 3.]])
>>> x.is_coalesced()
False

>>> x = x.to_sparse_csr()
>>> x.is_coalesced()
False

)DOC"); // NOLINT

static PyObject* tensor_method_is_coalesced(TensorObject* self,
PyObject* args,
PyObject* kwargs) {
EAGER_TRY
if (self->tensor.is_sparse_coo_tensor()) {
auto sparse_coo_tensor =
std::dynamic_pointer_cast<phi::SparseCooTensor>(self->tensor.impl());
return ToPyObject(sparse_coo_tensor->coalesced());
} else {
return ToPyObject(false);
Review comment (Member):

Just a question: in the case where a tensor of the wrong type is passed in, wouldn't raising an error be better than returning False? Was this considered in the original design?

Reply (Contributor, Author):

This follows PyTorch's design.

Reply (Member):

Python 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.as_tensor([[1., 2., 3.]])
>>> x.is_coalesced()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: is_coalesced expected sparse coordinate tensor layout but got Strided

But PyTorch seems to raise an error?

Reply (Contributor, Author):

OK, I'll revise it.

}
EAGER_CATCH_AND_THROW_RETURN_NULL
}
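The branch above can be modeled in a few lines of plain Python. `SparseCoo`, `Dense`, and the free function `is_coalesced` here are illustrative stand-ins for the Paddle internals, not real classes; the point is only the dispatch: report the stored flag for a COO tensor, False for everything else.

```python
# Minimal Python model of the is_coalesced check (illustrative, not Paddle code).
class Dense:
    pass

class SparseCoo:
    def __init__(self, indices, values, coalesced=False):
        self.indices = indices      # list of (row, col) pairs
        self.values = values
        self.coalesced = coalesced  # set by a coalesce() pass; False on creation

def is_coalesced(tensor):
    # Mirrors the C++ branch: report the flag for COO, False otherwise.
    if isinstance(tensor, SparseCoo):
        return tensor.coalesced
    return False

x = SparseCoo([(0, 1), (0, 1), (1, 2)], [1.0, 2.0, 3.0])
print(is_coalesced(x))        # a newly created COO tensor reports False
print(is_coalesced(Dense()))  # a non-COO tensor also reports False
```

Note that under this model a dense input silently returns False, which is exactly the behavior the review thread above questions.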

PyDoc_STRVAR(tensor_is_same_shape__doc__, // NOLINT
R"DOC(is_same_shape($self, y, /)
--
@@ -3503,6 +3559,10 @@ PyMethodDef variable_methods[] = { // NOLINT
(PyCFunction)(void (*)())tensor_method_to_sparse_csr,
METH_VARARGS | METH_KEYWORDS,
tensor_to_sparse_csr__doc__},
{"is_coalesced",
Review comment (Member):

The type for the new Tensor API needs to be added to the python/paddle/tensor/tensor.prototype.pyi stub.

Reply (Contributor, Author):

OK, added.

(PyCFunction)(void (*)())tensor_method_is_coalesced,
METH_VARARGS | METH_KEYWORDS,
tensor_is_coalesced__doc__},
/***the method of sparse tensor****/
{"element_size",
(PyCFunction)(void (*)())tensor_method_element_size,
9 changes: 9 additions & 0 deletions paddle/fluid/pybind/pir.cc
@@ -1379,6 +1379,15 @@ void BindValue(py::module *m) {
[](Value &self, TensorDistAttribute dist_attr) {
self.set_type(dialect::CvtToPirDistType(self.type(), dist_attr));
})
.def("is_coalesced",
[](Value self) {
auto sparse_coo_tensor_type =
self.type().dyn_cast<SparseCooTensorType>();
if (sparse_coo_tensor_type) {
return sparse_coo_tensor_type.coalesced();
}
return false;
})
.def_property_readonly("process_mesh", [](Value &self) -> py::object {
auto type = self.type();
if (auto dist_type = type.dyn_cast<DistTypeInterface>()) {
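In the PIR static graph there is no runtime tensor to inspect, so the binding above answers from the value's static type via `dyn_cast`. A rough Python analog of that type-based dispatch, with all names illustrative rather than real PIR classes:

```python
# Illustrative analog of the PIR Value.is_coalesced binding (not real PIR types).
class DenseTensorType:
    pass

class SparseCooTensorType:
    def __init__(self, coalesced):
        self.coalesced = coalesced  # the flag is part of the static type

def value_is_coalesced(value_type):
    # dyn_cast analog: succeeds only when the static type is SparseCooTensorType.
    if isinstance(value_type, SparseCooTensorType):
        return value_type.coalesced
    return False

print(value_is_coalesced(SparseCooTensorType(coalesced=True)))  # type carries the flag
print(value_is_coalesced(DenseTensorType()))                    # any other type: False
```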
4 changes: 3 additions & 1 deletion paddle/phi/kernels/sparse/cpu/coalesce_kernel.cc
@@ -117,6 +117,8 @@ PD_REGISTER_KERNEL(coalesce_coo,
uint8_t,
int16_t,
int,
-  int64_t) {
+  int64_t,
+  phi::dtype::complex<float>,
+  phi::dtype::complex<double>) {
kernel->InputAt(0).SetDataLayout(phi::DataLayout::SPARSE_COO);
}
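These registrations extend `coalesce_coo` to complex dtypes. The core of a coalesce pass is dtype-agnostic (sort entries by index, then sum duplicates), which this pure-Python sketch illustrates; it is a model of the operation, not the phi kernel:

```python
def coalesce(indices, values):
    """Merge duplicate COO indices by summing their values.

    indices: list of (row, col) tuples; values: a parallel list of numbers.
    Works for any value type that supports '+', including complex.
    """
    merged = {}
    for idx, val in zip(indices, values):
        merged[idx] = merged.get(idx, 0) + val
    out = sorted(merged.items())  # canonical (sorted) index order
    return [i for i, _ in out], [v for _, v in out]

# Duplicate index (0, 1) is merged; complex values sum like any other dtype.
idx, val = coalesce([(0, 1), (0, 1), (1, 2)], [1 + 2j, 2 - 1j, 3.0])
print(idx)  # [(0, 1), (1, 2)]
print(val)  # [(3+1j), 3.0]
```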
4 changes: 3 additions & 1 deletion paddle/phi/kernels/sparse/gpu/coalesce_kernel.cu
@@ -196,6 +196,8 @@ PD_REGISTER_KERNEL(coalesce_coo,
uint8_t,
int16_t,
int,
-  int64_t) {
+  int64_t,
+  phi::dtype::complex<float>,
+  phi::dtype::complex<double>) {
kernel->InputAt(0).SetDataLayout(phi::DataLayout::SPARSE_COO);
}