This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Numpy-compatible Infra #15581

Merged 46 commits on Aug 8, 2019

Changes from all commits

Commits (46)
2e09926  [Do not review] [Do not merge] New numpy-compatible sum (#14739) - haojin2, Apr 21, 2019
c8127ae  [numpy] Infra for supporting numpy ops in imperative mode and Gluon A… - reminisce, May 3, 2019
3a437ad  Enable np op compat check with name prefix (#14897) - reminisce, May 6, 2019
2657af1  [numpy] Numpy dot (#14831) - haojin2, May 8, 2019
6c461f2  numpy-compatible mean (#14859) - haojin2, May 9, 2019
162eaa7  [numpy] Some np ops for d2l (#14924) - reminisce, May 10, 2019
aeaeb46  [numpy] Refactor np modules (#14989) - reminisce, May 18, 2019
846a335  [numpy] Refactor np module (example runs through) (#15055) - reminisce, May 27, 2019
fbd0a3b  Change np_compat to np_shape - reminisce, May 27, 2019
131dbe3  Temporarily disable test_amp - reminisce, May 27, 2019
4fe6cad  Numpy-compatible stack (#15027) - haojin2, May 31, 2019
25277a5  Numpy Unary Ops (#15010) - haojin2, Jun 2, 2019
9dc5e0a  [numpy] Fix np branch after rebase (#15086) - reminisce, Jun 2, 2019
5d8f125  numpy concatenate (#15104) - haojin2, Jun 4, 2019
b4716a9  [WIP][numpy] Fix for D2L Chapters 2/3/4 (#15139) - reminisce, Jun 5, 2019
049ded2  [numpy] Fix d2l performance regression (#15173) - reminisce, Jun 7, 2019
a584326  Fix (#15188) - reminisce, Jun 9, 2019
0d0f284  fix for chapter6 conv nn (#15224) - haojin2, Jun 12, 2019
1d47418  [numpy] Fix d2l chapter8 (#15237) - reminisce, Jun 13, 2019
8a2b41f  fix for ch11 (#15244) - haojin2, Jun 14, 2019
820752f  Numpy-compatible split (#15049) - haojin2, Jun 17, 2019
21be6f8  [numpy] [DO NOT MERGE] Fix d2l chapters 9 and 13 (#15246) - reminisce, Jun 17, 2019
a54a3f2  [numpy] Fix d2l chapter 5 (#15264) - reminisce, Jun 18, 2019
6bd552f  Numpy compatible max (#15161) - stu1130, Jun 19, 2019
42d6760  Numpy compatible multinomial (#15219) - stu1130, Jun 20, 2019
9f8d4a4  Numpy compatible linspace (#15256) - stu1130, Jun 20, 2019
12aab7a  numpy-compatible cumsum (#15309) - haojin2, Jun 23, 2019
9cc355f  [numpy] Misc fix for other chapters (#15332) - reminisce, Jun 23, 2019
78c541f  [numpy] Change d2l chapters cv and gan to use numpy (#15368) - reminisce, Jun 27, 2019
5218a09  add doc for multinomial, dot, cumsum, clip, abs, exp, arctan (#15386) - hzfan, Jun 28, 2019
99a9b0a  [numpy] Fix several places in numpy (#15398) - reminisce, Jun 28, 2019
7caacd8  [numpy] fix cython (#15418) - hzfan, Jul 2, 2019
a8869b6  fix after rebase - haojin2, Jul 24, 2019
33f1cfb  get rid of coverage in clang60 mkldnn - haojin2, Jul 28, 2019
56ac957  fix lint issues - haojin2, Jul 29, 2019
ba54b26  fix flaky test and get rid of extra print - haojin2, Jul 30, 2019
6b84d53  remove numpy examples - haojin2, Jul 30, 2019
7e8deab  revert #15309 #15256 #15219 #15161 - haojin2, Jul 31, 2019
0b6f2c8  remove numpy docs - haojin2, Jul 31, 2019
b2e48d9  remove changes to contrib/text/embedding.py - haojin2, Jul 31, 2019
7ede112  remove numpy changes to gluon peripherals - haojin2, Jul 31, 2019
024bedb  Revert "remove numpy docs" - haojin2, Jul 31, 2019
0e955a3  get rid of most operators - haojin2, Jul 31, 2019
3262591  Revert "get rid of coverage in clang60 mkldnn" - haojin2, Jul 31, 2019
275b063  remove np-compatible from mxnet.image mxnet.initializer - haojin2, Aug 1, 2019
345c522  address comments - haojin2, Aug 1, 2019
4 changes: 3 additions & 1 deletion include/mxnet/base.h
@@ -451,7 +451,9 @@ inline int32_t Context::GetGPUCount() {
}
  int32_t count;
  cudaError_t e = cudaGetDeviceCount(&count);
-  if (e == cudaErrorNoDevice) {
+  // TODO(junwu): Remove e == cudaErrorInsufficientDriver
+  // This is a workaround for the wheel build system running with an older CUDA driver.
+  if (e == cudaErrorNoDevice || e == cudaErrorInsufficientDriver) {
    return 0;
  }
  CHECK_EQ(e, cudaSuccess) << " CUDA: " << cudaGetErrorString(e);
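Note: the user-visible effect of this workaround surfaces through the Python GPU discovery API. A minimal sketch (assuming a CUDA wheel running against an older driver):

import mxnet as mx

# With the change above, a machine that reports cudaErrorInsufficientDriver
# (or that has no CUDA device at all) is treated as having zero GPUs instead
# of failing the CHECK_EQ, so CPU-only use of the wheel keeps working.
print(mx.context.num_gpus())  # -> 0 on such machines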
12 changes: 12 additions & 0 deletions include/mxnet/c_api.h
@@ -2902,6 +2902,18 @@ MXNET_DLL int MXEnginePushSync(EngineSyncFunc sync_func, void* func_param,
                               EngineVarHandle mutable_vars_handle, int num_mutable_vars,
                               EngineFnPropertyHandle prop_handle DEFAULT(NULL),
                               int priority DEFAULT(0), const char* opr_name DEFAULT(NULL));
/*!
 * \brief Create an NDArray from source sharing the same data chunk.
 * \param src source NDArray
 * \param out new NDArray sharing the same data chunk with src
 */
MXNET_DLL int MXShallowCopyNDArray(NDArrayHandle src, NDArrayHandle* out);
Review thread:

Contributor: suggest to add a space

Author: Can you be more specific? Where should the space go?

/*!
 * \brief Create a Symbol from source sharing the same graph structure.
 * \param src source Symbol
 * \param out new Symbol sharing the same graph structure with src
 */
MXNET_DLL int MXShallowCopySymbol(SymbolHandle src, SymbolHandle* out);

/*!
* \brief Push an asynchronous operation to the engine.
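A hedged sketch of exercising the new shallow-copy entry point from Python via ctypes, in the style the frontend uses for C API calls (the array here is illustrative):

import ctypes
import mxnet as mx
from mxnet.base import _LIB, check_call, NDArrayHandle

a = mx.nd.ones((2, 3))
out = NDArrayHandle()
# The new handle shares the same data chunk with `a`; no memory is copied.
check_call(_LIB.MXShallowCopyNDArray(a.handle, ctypes.byref(out)))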
15 changes: 15 additions & 0 deletions include/mxnet/tuple.h
@@ -272,6 +272,14 @@ class Tuple {
      is.get();
      if (ch == '(' || ch == '[') break;
      if (!isspace(ch)) {
        if (ch == 'N') {
          std::string tmp_val;
          is >> tmp_val;
          if (tmp_val == "one") {  // is stores "None"
            t.SetDim(-1);
            return is;
          }
        }
        is.setstate(std::ios::failbit);
        return is;
      }

Review thread:

Contributor: can't this be on the same line using short circuit?

Author: Please make your comments easier to understand. I usually give a simplified snippet to illustrate how I want the code to look in my code reviews.

Contributor: Sure. I find this would be more readable and less nested:

    if (!isspace(ch) && ch == 'N') {  // look for "None"
@@ -653,6 +661,13 @@ inline bool shape_is_known(const TShape& x) {
  return true;
}

inline bool shape_is_known(const std::vector<TShape>& shapes) {
  for (const TShape& shape : shapes) {
    if (!shape_is_known(shape)) return false;
  }
  return true;
}

Review thread:

Contributor: why inline?

Author: Removing inline here actually would result in a linker error.

Contributor (larroy, Aug 5, 2019): I see there's no tuple implementation file. I guess it's not worth it to add it for a small function. Feel free to resolve.
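These helpers back the NumPy-style shape semantics in which -1 marks an unknown dimension (serialized as "None", which the istream parser above now accepts). A minimal Python-side sketch, assuming the np_shape scope exported by this PR:

import mxnet as mx

with mx.np_shape():  # NumPy semantics: -1 denotes an unknown dimension
    x = mx.sym.var('x', shape=(2, -1, 3))  # stringified as "(2, None, 3)"
    # shape_is_known() on the backend stays false until every dim is resolved
    print(x.infer_shape_partial())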

/*! \brief helper function to cast type of container elements */
template<typename SrcIter, typename DstIter>
inline DstIter ShapeTypeCast(const SrcIter begin,
5 changes: 5 additions & 0 deletions python/mxnet/__init__.py
@@ -25,10 +25,15 @@
from . import engine
from .base import MXNetError
from .util import is_np_shape, set_np_shape, np_shape, use_np_shape
from .util import is_np_array, np_array, use_np_array, use_np
from . import base
from . import contrib
from . import ndarray
from . import ndarray as nd
from . import numpy
from . import numpy_extension
from . import numpy as np
from . import numpy_extension as npx
from . import name
# use mx.sym as short for symbol
from . import symbol as sym
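The net effect of these imports is that the NumPy-style frontend becomes reachable under short aliases. A small usage sketch (op coverage at this stage of the PR is intentionally minimal, so the ops shown are illustrative):

import mxnet as mx

a = mx.np.array([[1, 2], [3, 4]])  # mx.np mirrors the NumPy API
print(type(a))                     # an mxnet.numpy ndarray, not mx.nd.NDArray
# mx.npx collects MXNet-specific extensions that have no NumPy equivalent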
36 changes: 24 additions & 12 deletions python/mxnet/_ctypes/ndarray.py
@@ -55,14 +55,22 @@ def __reduce__(self):


_ndarray_cls = None
_np_ndarray_cls = None


def _set_ndarray_class(cls):
    """Set the ndarray class to be cls."""
    global _ndarray_cls
    _ndarray_cls = cls


-def _imperative_invoke(handle, ndargs, keys, vals, out):
+def _set_np_ndarray_class(cls):
+    """Set the numpy ndarray class to be cls."""
+    global _np_ndarray_cls
+    _np_ndarray_cls = cls
+
+
+def _imperative_invoke(handle, ndargs, keys, vals, out, is_np_op):
    """ctypes implementation of imperative invoke wrapper"""
    if out is not None:
        original_output = out
@@ -91,23 +99,27 @@ def _imperative_invoke(handle, ndargs, keys, vals, out):
        c_str_array([str(s) for s in vals]),
        ctypes.byref(out_stypes)))

+    create_ndarray_fn = _np_ndarray_cls if is_np_op else _ndarray_cls
    if original_output is not None:
        return original_output
    if num_output.value == 1:
-        return _ndarray_cls(ctypes.cast(output_vars[0], NDArrayHandle),
-                            stype=out_stypes[0])
+        return create_ndarray_fn(ctypes.cast(output_vars[0], NDArrayHandle),
+                                 stype=out_stypes[0])
    else:
-        return [_ndarray_cls(ctypes.cast(output_vars[i], NDArrayHandle),
-                             stype=out_stypes[i])
-                for i in range(num_output.value)]
+        return [create_ndarray_fn(ctypes.cast(output_vars[i], NDArrayHandle),
+                                  stype=out_stypes[i]) for i in range(num_output.value)]


class CachedOp(object):
    """Cached operator handle."""
-    __slots__ = ["handle"]
+    __slots__ = ["handle", "is_np_sym"]

    def __init__(self, sym, flags=()):
        self.handle = CachedOpHandle()

+        from ..symbol.numpy._symbol import _Symbol
+        self.is_np_sym = bool(isinstance(sym, _Symbol))
+
        check_call(_LIB.MXCreateCachedOpEx(
            sym.handle,
            len(flags),
@@ -151,10 +163,10 @@ def __call__(self, *args, **kwargs):

        if original_output is not None:
            return original_output
+        create_ndarray_fn = _np_ndarray_cls if self.is_np_sym else _ndarray_cls
        if num_output.value == 1:
-            return _ndarray_cls(ctypes.cast(output_vars[0], NDArrayHandle),
-                                stype=out_stypes[0])
+            return create_ndarray_fn(ctypes.cast(output_vars[0], NDArrayHandle),
+                                     stype=out_stypes[0])
        else:
-            return [_ndarray_cls(ctypes.cast(output_vars[i], NDArrayHandle),
-                                 stype=out_stypes[i])
-                    for i in range(num_output.value)]
+            return [create_ndarray_fn(ctypes.cast(output_vars[i], NDArrayHandle),
+                                      stype=out_stypes[i]) for i in range(num_output.value)]
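A sketch of the user-visible consequence of this dispatch (exact class paths are illustrative; they assume the numpy creation ops are registered):

import mxnet as mx

legacy = mx.nd.ones((2, 3))   # legacy op: output built by _ndarray_cls
np_arr = mx.np.ones((2, 3))   # _np_* op: output built by _np_ndarray_cls
print(type(legacy))           # mxnet.ndarray.ndarray.NDArray
print(type(np_arr))           # mxnet.numpy ndarray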
13 changes: 11 additions & 2 deletions python/mxnet/_ctypes/symbol.py
@@ -26,7 +26,9 @@
from ..base import SymbolHandle
from ..base import check_call

# The symbol class to be used (Cython or Ctypes)
_symbol_cls = None
_np_symbol_cls = None

class SymbolBase(object):
    """Symbol is symbolic graph."""
@@ -115,7 +117,13 @@ def _set_symbol_class(cls):
    _symbol_cls = cls


-def _symbol_creator(handle, args, kwargs, keys, vals, name):
+def _set_np_symbol_class(cls):
+    """Set the numpy-compatible symbolic class to be cls"""
+    global _np_symbol_cls
+    _np_symbol_cls = cls
+
+
+def _symbol_creator(handle, args, kwargs, keys, vals, name, is_np_op):
    sym_handle = SymbolHandle()
    check_call(_LIB.MXSymbolCreateAtomicSymbol(
        ctypes.c_void_p(handle),
@@ -128,7 +136,8 @@ def _symbol_creator(handle, args, kwargs, keys, vals, name):
        raise TypeError(
            'Operators with variable length input can only accept input '
            'Symbols either as positional or keyword arguments, not both')
-    s = _symbol_cls(sym_handle)
+    create_symbol_fn = _np_symbol_cls if is_np_op else _symbol_cls
+    s = create_symbol_fn(sym_handle)
    if args:
        s._compose(*args, name=name)
    elif kwargs:
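On the symbolic side the same pattern applies: a graph built from numpy symbols is constructed through _np_symbol_cls, which in turn is what CachedOp.is_np_sym (in _ctypes/ndarray.py above) keys on. A hedged sketch; treating as_np_ndarray as the Symbol-to-numpy-symbol converter of this PR series is an assumption:

import mxnet as mx

x = mx.sym.var('x').as_np_ndarray()  # assumed helper: legacy Symbol -> numpy _Symbol
y = x * 2                            # graph nodes now created via _np_symbol_cls
print(type(y))                       # mxnet.symbol.numpy._Symbol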
54 changes: 54 additions & 0 deletions python/mxnet/_numpy_op_doc.py
@@ -0,0 +1,54 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

# pylint: skip-file

"""Doc placeholder for numpy ops with prefix _np."""


def _np_ones_like(a):
    """Return an array of ones with the same shape and type as a given array.

    Parameters
    ----------
    a : ndarray
        The shape and data-type of `a` define these same attributes of
        the returned array.

    Returns
    -------
    out : ndarray
        Array of ones with the same shape and type as `a`.
    """
    pass


def _np_zeros_like(a):
    """Return an array of zeros with the same shape and type as a given array.

    Parameters
    ----------
    a : ndarray
        The shape and data-type of `a` define these same attributes of
        the returned array.

    Returns
    -------
    out : ndarray
        Array of zeros with the same shape and type as `a`.
    """
    pass
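These placeholder functions never execute; _init_np_op_module (in base.py below) copies their docstrings onto the generated frontend ops. Assuming _np_ones_like is registered in the build, the result looks like:

import mxnet as mx

a = mx.np.array([[1., 2.], [3., 4.]])
print(mx.np.ones_like(a))  # [[1. 1.] [1. 1.]]
help(mx.np.ones_like)      # shows the docstring defined above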
118 changes: 115 additions & 3 deletions python/mxnet/base.py
@@ -16,7 +16,7 @@
# under the License.

# coding: utf-8
-# pylint: disable=invalid-name, no-member, trailing-comma-tuple, bad-mcs-classmethod-argument, unnecessary-pass, wrong-import-position
+# pylint: disable=invalid-name, no-member, trailing-comma-tuple, bad-mcs-classmethod-argument, unnecessary-pass, too-many-lines, wrong-import-position
"""ctypes library of mxnet and helper functions."""
from __future__ import absolute_import

@@ -598,7 +598,9 @@ def _init_op_module(root_namespace, module_name, make_op_func):
                                     ctypes.byref(plist)))
    op_names = []
    for i in range(size.value):
-        op_names.append(py_str(plist[i]))
+        op_name = py_str(plist[i])
+        if not _is_np_op(op_name):
+            op_names.append(op_name)

    module_op = sys.modules["%s.%s.op" % (root_namespace, module_name)]
    module_internal = sys.modules["%s.%s._internal" % (root_namespace, module_name)]
@@ -692,7 +694,9 @@ def write_all_str(module_file, module_all_list):
                                     ctypes.byref(plist)))
    op_names = []
    for i in range(size.value):
-        op_names.append(py_str(plist[i]))
+        op_name = py_str(plist[i])
+        if not _is_np_op(op_name):
+            op_names.append(op_name)

    module_op_file = get_module_file("%s.%s.op" % (root_namespace, module_name))
    module_op_all = []
@@ -735,7 +739,115 @@ def write_all_str(module_file, module_all_list):
ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p


from .runtime import Features
if Features().is_enabled("TVM_OP"):
    _LIB_TVM_OP = libinfo.find_lib_path("libtvmop")
    check_call(_LIB.MXLoadTVMOp(c_str(_LIB_TVM_OP[0])))


_NP_OP_PREFIX = '_np_'
_NP_OP_SUBMODULE_LIST = ['_random_', '_linalg_']

_NP_EXT_OP_PREFIX = '_npx_'
_NP_EXT_OP_SUBMODULE_LIST = ['_image_']

_NP_INTERNAL_OP_PREFIX = '_npi_'


def _is_np_op(op_name):
    return op_name.startswith(_NP_OP_PREFIX) or op_name.startswith(_NP_EXT_OP_PREFIX) \
        or op_name.startswith(_NP_INTERNAL_OP_PREFIX)
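Illustrative classifications (the op names here are hypothetical):

_is_np_op('_np_sum')    # True:  public numpy op
_is_np_op('_npx_relu')  # True:  numpy-extension op
_is_np_op('_npi_add')   # True:  internal numpy op
_is_np_op('dot')        # False: legacy op, excluded from the np namespaces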


def _get_op_submodule_name(op_name, op_name_prefix, submodule_name_list):
    """Get the submodule name of a specific op"""
    assert op_name.startswith(op_name_prefix)
    for submodule_name in submodule_name_list:
        if op_name[len(op_name_prefix):].startswith(submodule_name):
            return submodule_name
    return ""


def _init_np_op_module(root_module_name, np_module_name, mx_module_name, make_op_func):
    """
    Register numpy operators in namespaces `mxnet.numpy`, `mxnet.ndarray.numpy`
    and `mxnet.symbol.numpy`. They are used in imperative mode, Gluon APIs w/o hybridization,
    and Gluon APIs w/ hybridization, respectively. Essentially, operators with the same name
    registered in the three namespaces share the same functionality in the C++ backend.
    Different namespaces are needed for dispatching operator calls in Gluon's `HybridBlock` by `F`.

    Parameters
    ----------
    root_module_name : str
        Top level module name, `mxnet` in the current cases.
    np_module_name : str
        Second level module name, `numpy` or `numpy_extension` in the current case.
    mx_module_name : str
        Third level module name, `ndarray` or `symbol` in the current case.
    make_op_func : function
        Function for creating op functions.
    """
    from . import _numpy_op_doc as _np_op_doc
    if np_module_name == 'numpy':
        op_name_prefix = _NP_OP_PREFIX
        submodule_name_list = _NP_OP_SUBMODULE_LIST
    elif np_module_name == 'numpy_extension':
        op_name_prefix = _NP_EXT_OP_PREFIX
        submodule_name_list = _NP_EXT_OP_SUBMODULE_LIST
    elif np_module_name == 'numpy._internal':
        op_name_prefix = _NP_INTERNAL_OP_PREFIX
        submodule_name_list = []
    else:
        raise ValueError('unsupported np module name {}'.format(np_module_name))

    plist = ctypes.POINTER(ctypes.c_char_p)()
    size = ctypes.c_uint()
    check_call(_LIB.MXListAllOpNames(ctypes.byref(size), ctypes.byref(plist)))
    op_names = []
    for i in range(size.value):
        name = py_str(plist[i])
        if name.startswith(op_name_prefix):
            op_names.append(name)

    if mx_module_name is None:
        # register np/npx ops for imperative programming
        op_module_name = "%s.%s._op" % (root_module_name, np_module_name)  # e.g. mxnet.numpy._op
        op_submodule_name = "%s.%s" % (root_module_name, np_module_name)  # e.g. mxnet.numpy.random
    elif mx_module_name in ('ndarray', 'symbol'):
        # register numpy internal ops and np/npx ops for use in Gluon
        # np internal ops are registered in mxnet.ndarray/symbol.numpy._internal
        # np ops are registered in mxnet.ndarray/symbol.numpy._op
        # npx ops are registered in mxnet.ndarray/symbol.numpy_extension._op
        op_module_name = "%s.%s.%s" % (root_module_name, mx_module_name, np_module_name)
        if op_name_prefix != _NP_INTERNAL_OP_PREFIX:
            op_module_name += '._op'
        # e.g. mxnet.symbol.numpy.random
        op_submodule_name = "%s.%s.%s" % (root_module_name, mx_module_name, np_module_name)
    else:
        raise ValueError('unsupported mxnet module {}'.format(mx_module_name))
    op_submodule_name += '.%s'

    op_module = sys.modules[op_module_name]
    submodule_dict = {}
    for submodule_name in submodule_name_list:
        submodule_dict[submodule_name] = sys.modules[op_submodule_name % submodule_name[1:-1]]
    for name in op_names:
        hdl = OpHandle()
        check_call(_LIB.NNGetOpHandle(c_str(name), ctypes.byref(hdl)))
        submodule_name = _get_op_submodule_name(name, op_name_prefix, submodule_name_list)
        if len(submodule_name) > 0:
            func_name = name[(len(op_name_prefix) + len(submodule_name)):]
            cur_module = submodule_dict[submodule_name]
            module_name_local = op_submodule_name % submodule_name[1:-1]
        else:
            func_name = name[len(op_name_prefix):]
            cur_module = op_module
            module_name_local = \
                op_module_name[:-len('._op')] if op_module_name.endswith('._op') else op_module_name

        function = make_op_func(hdl, name, func_name)
        function.__module__ = module_name_local
        setattr(cur_module, function.__name__, function)
        cur_module.__all__.append(function.__name__)

        if hasattr(_np_op_doc, name):
            function.__doc__ = getattr(_np_op_doc, name).__doc__
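Putting the three registrations together: the same Gluon code can resolve an op either imperatively or symbolically, because identically named modules exist under mxnet.ndarray and mxnet.symbol. A hedged sketch of that dispatch (the attribute path used to reach the np ops through F follows the module layout above and is an assumption):

from mxnet.gluon import HybridBlock

class Summer(HybridBlock):
    def hybrid_forward(self, F, x):
        # Before hybridize(): F is mxnet.ndarray, so F.numpy.sum runs eagerly.
        # After hybridize():  F is mxnet.symbol, and the same call builds graph nodes.
        return F.numpy.sum(x)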