Move tensor api to cpython part3 #8342

Merged
66 commits merged on Jun 7, 2022
Changes from 62 commits

Commits (66)
6031351
add tensor_functions
marigoold May 20, 2022
92d17ac
Merge branch 'master' into move_tensor_api_to_cpython
marigoold May 20, 2022
2610b22
Merge remote-tracking branch 'origin/master' into move_tensor_api_to_…
marigoold May 21, 2022
8ed6387
concat py methods
marigoold May 23, 2022
603684e
add hash, restore tensor.py
marigoold May 24, 2022
0b81125
Merge branch 'master' into move_tensor_api_to_cpython
marigoold May 24, 2022
879e462
Merge branch 'master' into move_tensor_api_to_cpython
marigoold May 24, 2022
1ecd73e
check replacement
marigoold May 25, 2022
b26c8ce
refine code, remove commented tensor.py
marigoold May 25, 2022
63a09ec
refine code
marigoold May 25, 2022
05fff5c
move some api
marigoold May 26, 2022
4772aa0
add cpu and cuda api
marigoold May 26, 2022
7955c61
add triu tril norm and etc.
marigoold May 27, 2022
5c1dd97
merge master
marigoold May 27, 2022
76fa597
remove tensor_functions.h
marigoold May 27, 2022
e6e22e8
move more api
marigoold May 27, 2022
cac0e24
move more api, refine size
marigoold May 27, 2022
c16bc8d
fix typo
marigoold May 27, 2022
ba679d3
format code, remove useless include
marigoold May 27, 2022
bb61005
refine code
marigoold May 30, 2022
c48e537
merge master
marigoold May 30, 2022
400d43c
refine code, fix typo
marigoold May 30, 2022
ebac1b0
align .cuda to python
marigoold May 30, 2022
0e03a60
refine code
marigoold May 30, 2022
4b0f227
split some api to part3 for review
marigoold May 30, 2022
6ff3648
remove positional only arguments of argmax and argmin
marigoold May 30, 2022
29c4b34
remove arguments parse
marigoold May 30, 2022
0b9b0b6
modify arguments name in matmul and floor_divide
marigoold May 31, 2022
52e58a9
rename BINARY_FUNC to DIRECT_PASS_FUNC, modify some functions
marigoold May 31, 2022
2b8d78e
refine code, format code
marigoold May 31, 2022
005cd6d
add inplace /=, add comments
marigoold May 31, 2022
67e9d37
remove name in macros
marigoold May 31, 2022
f139060
remove python api
marigoold May 31, 2022
ac74978
remove redundant include
marigoold May 31, 2022
6174148
remove cout
marigoold May 31, 2022
dce09af
format code
marigoold May 31, 2022
8bf1782
Merge branch 'master' into move_tensor_api_to_cpython_part2
marigoold May 31, 2022
67f7cb5
refactor tensor.size by directly call shape.at, refactor tensor.sub_ …
marigoold May 31, 2022
50a1a84
remove redundant code
marigoold May 31, 2022
80a9f44
Merge branch 'master' into move_tensor_api_to_cpython_part2
marigoold May 31, 2022
5815c46
auto format by CI
oneflow-ci-bot May 31, 2022
6ec91f5
fix typo, fix wrong call
marigoold May 31, 2022
fa0051d
modify idx datatype from int32 to int64 in tensor.size
marigoold Jun 1, 2022
9e9920f
add some DIRECT_PASS_FUNC
marigoold Jun 1, 2022
d30502b
merge part2
marigoold Jun 1, 2022
28600e4
add cpu cuda var pow and etc.
marigoold Jun 1, 2022
1723d74
add masked_fill any all
marigoold Jun 1, 2022
8556fd9
merge master
marigoold Jun 1, 2022
fa11e6d
merge master
marigoold Jun 1, 2022
9221c05
make REDUCE_FUNC macro, add reduce_* functions
marigoold Jun 1, 2022
a39b38a
add 0dim check in ReduceSumWhole, refine yaml
marigoold Jun 2, 2022
60c4b5e
fix bug
marigoold Jun 2, 2022
0367588
restore add add_ sub sub_
marigoold Jun 2, 2022
32715a2
add unittest for tensor.half tensor.add tensor.add_
marigoold Jun 2, 2022
3622f68
refine code
marigoold Jun 2, 2022
0f2fa6f
refine code
marigoold Jun 2, 2022
6ed2ae7
Merge branch 'master' into move_tensor_api_to_cpython_part3
marigoold Jun 2, 2022
a428c7b
fix typo
marigoold Jun 2, 2022
c77742f
Merge branch 'master' into move_tensor_api_to_cpython_part3
marigoold Jun 6, 2022
b07e070
fix bug of tensor.std()
marigoold Jun 7, 2022
5bdd6b2
refactor var std and cuda, using c++ functional api
marigoold Jun 7, 2022
f120f00
Merge branch 'master' into move_tensor_api_to_cpython_part3
marigoold Jun 7, 2022
82d51ac
add beta and threshold in softplus
marigoold Jun 7, 2022
1f1e1a7
auto format by CI
oneflow-ci-bot Jun 7, 2022
0944e95
Merge branch 'master' into move_tensor_api_to_cpython_part3
marigoold Jun 7, 2022
063d3e5
Merge branch 'master' into move_tensor_api_to_cpython_part3
mergify[bot] Jun 7, 2022
218 changes: 202 additions & 16 deletions oneflow/api/python/framework/tensor_functions.cpp
@@ -241,7 +241,6 @@ DIRECT_PASS_FUNC(PyTensorObject_div, functional::div)
DIRECT_PASS_FUNC(PyTensorObject_div_, functional::div_)
DIRECT_PASS_FUNC(PyTensorObject_mul, functional::mul)
DIRECT_PASS_FUNC(PyTensorObject_mul_, functional::mul_)
DIRECT_PASS_FUNC(PyTensorObject_sub, functional::sub)
DIRECT_PASS_FUNC(PyTensorObject_fmod, functional::fmod)
DIRECT_PASS_FUNC(PyTensorObject_logical_and, functional::logical_and)
DIRECT_PASS_FUNC(PyTensorObject_logical_or, functional::logical_or)
@@ -253,8 +252,36 @@ DIRECT_PASS_FUNC(PyTensorObject_bmm, functional::batch_matmul)
DIRECT_PASS_FUNC(PyTensorObject_argmax, functional::argmax)
DIRECT_PASS_FUNC(PyTensorObject_argmin, functional::argmin)
DIRECT_PASS_FUNC(PyTensorObject_amin, functional::amin)
DIRECT_PASS_FUNC(PyTensorObject_amax, functional::amax)
DIRECT_PASS_FUNC(PyTensorObject_addcmul, functional::addcmul)
DIRECT_PASS_FUNC(PyTensorObject_addcmul_, functional::addcmul_)
DIRECT_PASS_FUNC(PyTensorObject_clip, functional::clip)
DIRECT_PASS_FUNC(PyTensorObject_clip_, functional::clip_)
DIRECT_PASS_FUNC(PyTensorObject_clamp, functional::clamp)
DIRECT_PASS_FUNC(PyTensorObject_clamp_, functional::clamp_)
DIRECT_PASS_FUNC(PyTensorObject_flatten, functional::flatten)
DIRECT_PASS_FUNC(PyTensorObject_in_top_k, functional::in_top_k)
DIRECT_PASS_FUNC(PyTensorObject_index_select, functional::index_select)
DIRECT_PASS_FUNC(PyTensorObject_maximum, functional::maximum)
DIRECT_PASS_FUNC(PyTensorObject_minimum, functional::minimum)
marigoold (Contributor, author) commented on Jun 2, 2022:

The argument names in tensor.py were wrong here: they were `self, y`, while functional_api.yaml matches PyTorch with `input, other`.

https://github.com/pytorch/pytorch/blob/4858c56334aa2b09b1ba10d0a3547ef01edda363/aten/src/ATen/native/native_functions.yaml#L8119

DIRECT_PASS_FUNC(PyTensorObject_tril, functional::tril)
DIRECT_PASS_FUNC(PyTensorObject_triu, functional::triu)
DIRECT_PASS_FUNC(PyTensorObject_softmax, functional::softmax)
DIRECT_PASS_FUNC(PyTensorObject_log_softmax, functional::log_softmax)
DIRECT_PASS_FUNC(PyTensorObject_roll, functional::roll)
DIRECT_PASS_FUNC(PyTensorObject_unbind, functional::unbind)
DIRECT_PASS_FUNC(PyTensorObject_squeeze, functional::squeeze)
DIRECT_PASS_FUNC(PyTensorObject_swapaxes, functional::swapaxes)
DIRECT_PASS_FUNC(PyTensorObject_swapdims, functional::swapdims)
DIRECT_PASS_FUNC(PyTensorObject_unfold, functional::unfold_tensor)
DIRECT_PASS_FUNC(PyTensorObject_unsqueeze, functional::unsqueeze)
DIRECT_PASS_FUNC(PyTensorObject_max, functional::max)
DIRECT_PASS_FUNC(PyTensorObject_min, functional::min)
DIRECT_PASS_FUNC(PyTensorObject_median, functional::median)
DIRECT_PASS_FUNC(PyTensorObject_pow, functional::pow)
marigoold (Contributor, author) commented:

The argument name here also didn't match PyTorch: tensor.py used `b`, PyTorch uses `exponent`, but functional_api.yaml was already correct, so it was changed directly.
https://github.com/pytorch/pytorch/blob/4858c56334aa2b09b1ba10d0a3547ef01edda363/aten/src/ATen/native/native_functions.yaml#L8302-L8339

DIRECT_PASS_FUNC(PyTensorObject_chunk, functional::chunk)
marigoold (Contributor, author) commented on Jun 2, 2022:

The default argument values in tensor.py were wrong here: they were `self, chunks=None, dim=None`, while functional_api.yaml matches PyTorch with `self, int chunks, int dim=0`.

https://github.com/pytorch/pytorch/blob/4858c56334aa2b09b1ba10d0a3547ef01edda363/aten/src/ATen/native/native_functions.yaml#L1197

DIRECT_PASS_FUNC(PyTensorObject_narrow, functional::narrow)
marigoold (Contributor, author) commented:

The argument names in tensor.py were wrong here: they were `self, dimension, start, length`, while functional_api.yaml matches PyTorch with `self, dim, start, length`.

https://github.com/pytorch/pytorch/blob/4858c56334aa2b09b1ba10d0a3547ef01edda363/aten/src/ATen/native/native_functions.yaml#L3476

A reviewer (Contributor) commented:

It looks like quite a few interfaces still weren't properly aligned.

DIRECT_PASS_FUNC(PyTensorObject_masked_fill, functional::masked_fill)
marigoold (Contributor, author) commented:

The argument names in tensor.py were wrong here: they were `self, mask, fill_value`, while functional_api.yaml matches PyTorch with `self, mask, value`.

https://github.com/pytorch/pytorch/blob/4858c56334aa2b09b1ba10d0a3547ef01edda363/aten/src/ATen/native/native_functions.yaml#L6316
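
Taken together, these comments rename the Python-facing keywords to follow PyTorch. An illustrative set of calls under that assumption (hypothetical tensors; the keyword forms are sketches, not taken from the PR's tests):

    import oneflow as flow

    x, y = flow.randn(3, 3), flow.randn(3, 3)
    mask = x > 0

    x.maximum(other=y)                    # keyword was previously y
    x.pow(exponent=2)                     # previously b
    x.chunk(chunks=3, dim=0)              # chunks is required, dim defaults to 0
    x.narrow(dim=0, start=0, length=2)    # previously dimension
    x.masked_fill(mask=mask, value=0.0)   # previously fill_value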


// functions that parsing at Python C api layer
static PyObject* PyTensorObject_byte(PyObject* self, PyObject* unused) {
@@ -370,19 +397,6 @@ static PyObject* PyTensorObject_matmul(PyObject* self, PyObject* args, PyObject*
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_sub_(PyObject* self, PyObject* args, PyObject* kwargs) {
HANDLE_ERRORS
PyObject* other = NULL;
static const char* keywords[2] = {"other", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O:sub_", const_cast<char**>(keywords), &other)) {
return NULL;
}
PyObject* result = PyTensorObject_nb_inplace_sub(self, other);
if (PyErr_Occurred()) { throw py::error_already_set(); }
return result;
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_reshape(PyObject* self, PyObject* args, PyObject* kwargs) {
HANDLE_ERRORS
PyObject* shape = args;
@@ -411,6 +425,139 @@ static PyObject* PyTensorObject_reshape_as(PyObject* self, PyObject* args, PyObj
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_cpu(PyObject* self, PyObject* unused) {
HANDLE_ERRORS
Optional<std::string> device = "cpu";
return PyTensor_New(ASSERT_PTR(functional::To(PyTensor_Unpack(self), device, NullOpt, false)));
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_cuda(PyObject* self, PyObject* args, PyObject* kwargs) {
HANDLE_ERRORS
PyObject* device_obj = Py_None;
static const char* keywords[2] = {"device", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|O:cuda", const_cast<char**>(keywords),
&device_obj)) {
return NULL;
}
auto tensor = PyTensor_Unpack(self);
if (functional::PyDeviceCheck(device_obj)) {
Optional<Symbol<Device>> device = functional::PyUnpackDevice(device_obj);
return PyTensor_New(ASSERT_PTR(functional::To(tensor, device, NullOpt, false)));
}
Optional<std::string> device_str;
if (device_obj == Py_None) {
device_str = "cuda";
} else if (PyLong_Check(device_obj)) {
device_str = "cuda:" + std::to_string(PyLong_AsLongLong(device_obj));
}
return PyTensor_New(ASSERT_PTR(functional::To(tensor, device_str, tensor->dtype(), false)));
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_var(PyObject* self, PyObject* args, PyObject* kwargs) {
HANDLE_ERRORS
PyObject* dim_obj = Py_None;
PyObject* unbiased_obj = Py_True;
PyObject* keepdim_obj = Py_False;
static const char* keywords[4] = {"dim", "unbiased", "keepdim", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OO!O!:var", const_cast<char**>(keywords),
&dim_obj, &PyBool_Type, &unbiased_obj, &PyBool_Type,
&keepdim_obj)) {
return NULL;
}
bool unbiased = unbiased_obj == Py_True;
bool keepdim = keepdim_obj == Py_True;
CHECK_OR_THROW(dim_obj == Py_None || PyLong_Check(dim_obj)
|| functional::PyLongSequenceCheck(dim_obj))
<< Error::TypeError() << "var(): argument 'dim' must be int32 list, not "
<< functional::PyStringAsString(PyObject_Str((PyObject*)Py_TYPE(dim_obj)));
auto tensor = PyTensor_Unpack(self);
if (dim_obj == Py_None) {
return PyTensor_New(ASSERT_PTR(functional::Variance(tensor, NullOpt, unbiased, keepdim)));
}
std::vector<int32_t> dim;
if (PyLong_Check(dim_obj)) {
dim.emplace_back(static_cast<int32_t>(PyLong_AsLong(dim_obj)));
return PyTensor_New(ASSERT_PTR(functional::Variance(tensor, dim, unbiased, keepdim)));
}
dim = functional::PyUnpackLongSequence<int32_t>(dim_obj);
return PyTensor_New(ASSERT_PTR(functional::Variance(tensor, dim, unbiased, keepdim)));
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_std(PyObject* self, PyObject* args, PyObject* kwargs) {
HANDLE_ERRORS
PyObject* dim_obj = Py_None;
PyObject* unbiased_obj = Py_True;
PyObject* keepdim_obj = Py_False;
static const char* keywords[4] = {"dim", "unbiased", "keepdim", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OO!O!:std", const_cast<char**>(keywords),
&dim_obj, &PyBool_Type, &unbiased_obj, &PyBool_Type,
&keepdim_obj)) {
return NULL;
}
bool unbiased = unbiased_obj == Py_True;
bool keepdim = keepdim_obj == Py_True;
CHECK_OR_THROW(dim_obj == Py_None || PyLong_Check(dim_obj)
|| functional::PyLongSequenceCheck(dim_obj))
<< Error::TypeError() << "std(): argument 'dim' must be int32 list, not "
<< functional::PyStringAsString(PyObject_Str((PyObject*)Py_TYPE(dim_obj)));
auto tensor = PyTensor_Unpack(self);
if (dim_obj == Py_None) {
return PyTensor_New(
ASSERT_PTR(functional::StandardDeviation(tensor, NullOpt, unbiased, keepdim)));
}
std::vector<int32_t> dim;
if (PyLong_Check(dim_obj)) {
dim.emplace_back(static_cast<int32_t>(PyLong_AsLong(dim_obj)));
return PyTensor_New(ASSERT_PTR(functional::StandardDeviation(tensor, dim, unbiased, keepdim)));
}
dim = functional::PyUnpackLongSequence<int32_t>(dim_obj);
return PyTensor_New(ASSERT_PTR(functional::StandardDeviation(tensor, dim, unbiased, keepdim)));
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_softplus(PyObject* self, PyObject* unused) {
HANDLE_ERRORS
PyObjectPtr concat_args(PyTuple_Pack(1, self));
PyObject* result = functional::softplus(NULL, concat_args.get(), NULL);
if (PyErr_Occurred()) { throw py::error_already_set(); }
return result;
END_HANDLE_ERRORS
}
marigoold (Contributor, author) commented:

In tensor.py this function takes no extra arguments, but functional_api.yaml additionally has `beta` and `threshold`, so it is not registered through DIRECT_PASS_FUNC.

A reviewer (Contributor) replied:

I'd still prefer to add `beta` and `threshold` here.
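
For reference, a hypothetical call shape once `beta` and `threshold` are accepted (commit 82d51ac, "add beta and threshold in softplus", is part of this PR but not of the 62-commit view shown here); the keyword names follow PyTorch and are an assumption:

    import oneflow as flow

    x = flow.tensor([-1.0, 0.0, 2.0])
    y0 = x.softplus()                          # current no-argument form
    y1 = x.softplus(beta=2.0, threshold=10.0)  # assumed form after 82d51ac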


static PyObject* PyTensorObject_relu(PyObject* self, PyObject* unused) {
HANDLE_ERRORS
return PyTensor_New(ASSERT_PTR(functional::Relu(PyTensor_Unpack(self), false)));
END_HANDLE_ERRORS
}

static PyObject* PyTensorObject_relu_(PyObject* self, PyObject* unused) {
HANDLE_ERRORS
return PyTensor_New(ASSERT_PTR(functional::Relu(PyTensor_Unpack(self), true)));
END_HANDLE_ERRORS
}

#define REDUCE_FUNC(func_name, bind_func, whole_func) \
static PyObject* func_name(PyObject* self, PyObject* args, PyObject* kwargs) { \
HANDLE_ERRORS \
if ((args == NULL || PyTuple_Size(args) == 0) \
&& (kwargs == NULL || PyDict_Size(kwargs) == 0)) { \
marigoold (Contributor, author) commented:

The reason for all these checks is that during testing, even a plain tensor.sum() call could leave kwargs non-NULL, so extra conditions were added. The `args == NULL` check may not be strictly necessary, since args should always be a tuple? (A dispatch sketch follows the REDUCE_FUNC definitions below.)

A reviewer (Contributor) replied:

Better to keep the `args == NULL` check.

return PyTensor_New(ASSERT_PTR(whole_func(PyTensor_Unpack(self)))); \
} \
PyObjectPtr concat_args(concat_self(self, args)); \
PyObject* result = bind_func(NULL, concat_args.get(), kwargs); \
if (PyErr_Occurred()) { throw py::error_already_set(); } \
return result; \
END_HANDLE_ERRORS \
}

REDUCE_FUNC(PyTensorObject_any, functional::reduce_any, functional::ReduceAnyWhole)
REDUCE_FUNC(PyTensorObject_all, functional::reduce_all, functional::ReduceAllWhole)
REDUCE_FUNC(PyTensorObject_sum, functional::reduce_sum, functional::ReduceSumWhole)
REDUCE_FUNC(PyTensorObject_mean, functional::reduce_mean, functional::ReduceMeanWhole)
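
To illustrate the two dispatch paths discussed above, a minimal usage sketch (illustrative values only):

    import oneflow as flow

    x = flow.randn(2, 3)
    x.sum()                     # no args and no kwargs -> whole-tensor path (ReduceSumWhole)
    x.sum(dim=1, keepdim=True)  # otherwise forwarded to functional::reduce_sum
    x.mean(dim=[0, 1])          # same pattern for mean/any/all via REDUCE_FUNC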

#define DATATYPE_FUNC(func_name, dtype) \
static PyObject* func_name(PyObject* self, PyObject* unused) { \
HANDLE_ERRORS \
@@ -421,6 +568,7 @@ static PyObject* PyTensorObject_reshape_as(PyObject* self, PyObject* args, PyObj

DATATYPE_FUNC(PyTensorObject_int, DType::Int32());
DATATYPE_FUNC(PyTensorObject_long, DType::Int64());
DATATYPE_FUNC(PyTensorObject_half, DType::Float16());
DATATYPE_FUNC(PyTensorObject_float, DType::Float());
DATATYPE_FUNC(PyTensorObject_double, DType::Double());

@@ -499,12 +647,23 @@ PyMethodDef PyTensorObject_extra_methods[] = {
{"diagonal", (PyCFunction)PyTensorObject_diagonal, METH_VARARGS | METH_KEYWORDS, NULL},
{"addcmul", (PyCFunction)PyTensorObject_addcmul, METH_VARARGS | METH_KEYWORDS, NULL},
{"addcmul_", (PyCFunction)PyTensorObject_addcmul_, METH_VARARGS | METH_KEYWORDS, NULL},
{"sub_", (PyCFunction)PyTensorObject_sub_, METH_VARARGS | METH_KEYWORDS, NULL},
{"matmul", (PyCFunction)PyTensorObject_matmul, METH_VARARGS | METH_KEYWORDS, NULL},
{"int", PyTensorObject_int, METH_NOARGS, NULL},
{"long", PyTensorObject_long, METH_NOARGS, NULL},
{"half", PyTensorObject_half, METH_NOARGS, NULL},
{"float", PyTensorObject_float, METH_NOARGS, NULL},
{"double", PyTensorObject_double, METH_NOARGS, NULL},
{"cpu", PyTensorObject_cpu, METH_NOARGS, NULL},
{"cuda", (PyCFunction)PyTensorObject_cuda, METH_VARARGS | METH_KEYWORDS, NULL},
{"var", (PyCFunction)PyTensorObject_var, METH_VARARGS | METH_KEYWORDS, NULL},
{"std", (PyCFunction)PyTensorObject_std, METH_VARARGS | METH_KEYWORDS, NULL},
{"softplus", PyTensorObject_softplus, METH_NOARGS, NULL},
{"relu", PyTensorObject_relu, METH_NOARGS, NULL},
{"relu_", PyTensorObject_relu_, METH_NOARGS, NULL},
{"all", (PyCFunction)PyTensorObject_all, METH_VARARGS | METH_KEYWORDS, NULL},
{"any", (PyCFunction)PyTensorObject_any, METH_VARARGS | METH_KEYWORDS, NULL},
{"sum", (PyCFunction)PyTensorObject_sum, METH_VARARGS | METH_KEYWORDS, NULL},
{"mean", (PyCFunction)PyTensorObject_mean, METH_VARARGS | METH_KEYWORDS, NULL},

// macro DIRECT_PASS_FUNC
{"floor_divide", (PyCFunction)PyTensorObject_floor_divide, METH_VARARGS | METH_KEYWORDS, NULL},
@@ -515,7 +674,6 @@ PyMethodDef PyTensorObject_extra_methods[] = {
{"div_", (PyCFunction)PyTensorObject_div_, METH_VARARGS | METH_KEYWORDS, NULL},
{"mul", (PyCFunction)PyTensorObject_mul, METH_VARARGS | METH_KEYWORDS, NULL},
{"mul_", (PyCFunction)PyTensorObject_mul_, METH_VARARGS | METH_KEYWORDS, NULL},
{"sub", (PyCFunction)PyTensorObject_sub, METH_VARARGS | METH_KEYWORDS, NULL},
{"fmod", (PyCFunction)PyTensorObject_fmod, METH_VARARGS | METH_KEYWORDS, NULL},
{"logical_and", (PyCFunction)PyTensorObject_logical_and, METH_VARARGS | METH_KEYWORDS, NULL},
{"logical_or", (PyCFunction)PyTensorObject_logical_or, METH_VARARGS | METH_KEYWORDS, NULL},
@@ -524,6 +682,34 @@ PyMethodDef PyTensorObject_extra_methods[] = {
{"ne", (PyCFunction)PyTensorObject_ne, METH_VARARGS | METH_KEYWORDS, NULL},
{"lt", (PyCFunction)PyTensorObject_lt, METH_VARARGS | METH_KEYWORDS, NULL},
{"le", (PyCFunction)PyTensorObject_le, METH_VARARGS | METH_KEYWORDS, NULL},
{"clip", (PyCFunction)PyTensorObject_clip, METH_VARARGS | METH_KEYWORDS, NULL},
{"clip_", (PyCFunction)PyTensorObject_clip_, METH_VARARGS | METH_KEYWORDS, NULL},
{"clamp", (PyCFunction)PyTensorObject_clamp, METH_VARARGS | METH_KEYWORDS, NULL},
{"clamp_", (PyCFunction)PyTensorObject_clamp_, METH_VARARGS | METH_KEYWORDS, NULL},
{"flatten", (PyCFunction)PyTensorObject_flatten, METH_VARARGS | METH_KEYWORDS, NULL},
{"in_top_k", (PyCFunction)PyTensorObject_in_top_k, METH_VARARGS | METH_KEYWORDS, NULL},
{"index_select", (PyCFunction)PyTensorObject_index_select, METH_VARARGS | METH_KEYWORDS, NULL},
{"maximum", (PyCFunction)PyTensorObject_maximum, METH_VARARGS | METH_KEYWORDS, NULL},
{"minimum", (PyCFunction)PyTensorObject_minimum, METH_VARARGS | METH_KEYWORDS, NULL},
{"tril", (PyCFunction)PyTensorObject_tril, METH_VARARGS | METH_KEYWORDS, NULL},
{"triu", (PyCFunction)PyTensorObject_triu, METH_VARARGS | METH_KEYWORDS, NULL},
{"softmax", (PyCFunction)PyTensorObject_softmax, METH_VARARGS | METH_KEYWORDS, NULL},
{"log_softmax", (PyCFunction)PyTensorObject_log_softmax, METH_VARARGS | METH_KEYWORDS, NULL},
{"roll", (PyCFunction)PyTensorObject_roll, METH_VARARGS | METH_KEYWORDS, NULL},
{"unbind", (PyCFunction)PyTensorObject_unbind, METH_VARARGS | METH_KEYWORDS, NULL},
{"squeeze", (PyCFunction)PyTensorObject_squeeze, METH_VARARGS | METH_KEYWORDS, NULL},
{"swapaxes", (PyCFunction)PyTensorObject_swapaxes, METH_VARARGS | METH_KEYWORDS, NULL},
{"amax", (PyCFunction)PyTensorObject_amax, METH_VARARGS | METH_KEYWORDS, NULL},
{"swapdims", (PyCFunction)PyTensorObject_swapdims, METH_VARARGS | METH_KEYWORDS, NULL},
{"unfold", (PyCFunction)PyTensorObject_unfold, METH_VARARGS | METH_KEYWORDS, NULL},
{"unsqueeze", (PyCFunction)PyTensorObject_unsqueeze, METH_VARARGS | METH_KEYWORDS, NULL},
{"max", (PyCFunction)PyTensorObject_max, METH_VARARGS | METH_KEYWORDS, NULL},
{"min", (PyCFunction)PyTensorObject_min, METH_VARARGS | METH_KEYWORDS, NULL},
{"median", (PyCFunction)PyTensorObject_median, METH_VARARGS | METH_KEYWORDS, NULL},
{"pow", (PyCFunction)PyTensorObject_pow, METH_VARARGS | METH_KEYWORDS, NULL},
{"chunk", (PyCFunction)PyTensorObject_chunk, METH_VARARGS | METH_KEYWORDS, NULL},
{"narrow", (PyCFunction)PyTensorObject_narrow, METH_VARARGS | METH_KEYWORDS, NULL},
{"masked_fill", (PyCFunction)PyTensorObject_masked_fill, METH_VARARGS | METH_KEYWORDS, NULL},

// macro UNARY_METHOD
{"abs", PyTensorObject_abs, METH_NOARGS, NULL},
6 changes: 4 additions & 2 deletions oneflow/core/functional/functional_api.yaml
@@ -292,8 +292,10 @@
bind_python: True

- name: "reduce_mean"
signature: ["Tensor (Tensor x, Int32List[1] dim, Bool keepdim=False) => ReduceMean",
"Tensor (Tensor x) => ReduceMeanWhole"]
signature: [
"Tensor (Tensor x, Int32List[1] dim, Bool keepdim=False) => ReduceMean",
"Tensor (Tensor x) => ReduceMeanWhole"
]
bind_python: True

- name: "reduce_all"
8 changes: 7 additions & 1 deletion oneflow/core/functional/impl/math_functor.cpp
@@ -458,6 +458,7 @@ class ReduceSumWholeFunctor {
Maybe<Tensor> operator()(const std::shared_ptr<one::Tensor>& x) const {
MutableAttrMap attrs;
const int32_t naxis = x->ndim();
if (naxis == 0) { return x; } // for 0-dim Tensor
marigoold (Contributor, author) commented:

test_cuda_0dim raised an error during backward, so a special case for 0-dim tensors was added here.
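
A minimal sketch of the scenario being special-cased (values and device are assumptions based on the test name above):

    import oneflow as flow

    x = flow.tensor(2.0, device="cuda", requires_grad=True)  # 0-dim tensor
    y = x.sum()     # with the check, the 0-dim tensor is returned as-is
    y.backward()    # previously this backward pass raised an error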

std::vector<int32_t> axis(naxis);
std::iota(axis.begin(), axis.end(), 0);
JUST(attrs.SetAttr<std::vector<int32_t>>("axis", axis));
@@ -1952,7 +1953,12 @@ class StandardDeviationFunctor {
Maybe<Tensor> operator()(const std::shared_ptr<Tensor>& input,
const Optional<std::vector<int32_t>>& dim,
const Optional<bool>& unbiased, const Optional<bool>& keepdim) const {
std::vector<int32_t> axis = *JUST(CheckAxis(*JUST(dim), input->ndim()));
std::vector<int32_t> axis;
if (!dim) {
for (int i = 0; i < input->ndim(); i++) { axis.emplace_back(i); }
} else {
axis = *JUST(CheckAxis(*JUST(dim), input->ndim()));
}
marigoold (Contributor, author) commented:

Previously, calling tensor.std() would crash on this check; this fixes that bug.
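
A minimal check of the fixed call path (illustrative values; keyword names follow the PyTensorObject_std binding above):

    import oneflow as flow

    x = flow.randn(3, 4)
    x.std()                                      # dim=None now reduces over all axes
    x.std(dim=1, unbiased=False, keepdim=True)   # explicit-dim path, unchanged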

bool unbias = true;
bool keepdims = false;
if (unbiased.has_value()) { unbias = JUST(unbiased); }