[Caffe Frontend] supporting group > 1 cases for Deconv op #8125

Closed
wants to merge 141 commits
Changes from 12 commits
Commits
141 commits
fbcfd5d
[Caffe Frontend] adding Reduction op
zotanika May 11, 2021
655d9ef
reformatting Reduction op test script
zotanika May 11, 2021
f80ba3e
reformatting Reduction test script
zotanika May 11, 2021
8d8af41
[Caffe frontend] Reduction op
zotanika May 18, 2021
be443dd
linting test script
zotanika May 18, 2021
d361160
linting
zotanika May 18, 2021
657c2af
Merge branch 'apache:main' into zotanika
zotanika May 25, 2021
43e25e5
[Caffe Frontend] Supporting multiple grouped(channel-wise) Deconv op
zotanika May 25, 2021
7da97b9
[TVMC] Add support for the MLF to 'compile' command (#8086)
gromero May 25, 2021
6f82e98
[Relay][PRNG] Support generating data of any shape in threefry_genera…
zhuzilin May 25, 2021
aefa0c8
[Relay][dismantler] Added handling of packed func (#8004)
d-smirnov May 25, 2021
dc5fc68
[METAL] Split kernels and compile them separately (#7980)
echuraev May 25, 2021
03c8a6f
[TensorIR][M2a] Structural Error Reporting (#8121)
junrushao May 25, 2021
2a008f3
Fix typos and format in comments (#8132)
gromero May 26, 2021
69e56c6
Fix typo in a comment (#8129)
gromero May 26, 2021
c02cafb
[Vulkan] Add device capabilities to Target, use in codegen (#8127)
Lunderberg May 26, 2021
31e21a2
[CUBLAS] Remove deprecated CUBLAS_TENSOR_OP_MATH flag (#8130)
tkonolige May 26, 2021
4344540
[FastMath] Add fast_softmax support in fast_math pass (#8138)
jcf94 May 26, 2021
f4dce24
[Codegen][CUDA] Fix make_int4x cuda codegen vectorize (#8137)
wyc-ruiker May 26, 2021
23a95f6
[lint] Fix black whitespace errors (#8124)
ekalda May 26, 2021
95f71f9
[Cuda][Codegen] Check for cuda include dir in /usr/include. (#8135)
Lunderberg May 26, 2021
2cde3dc
[COMMUNITY] New committer -- trevor-m (#8141)
tqchen May 27, 2021
dad59be
[microTVM] AOT Demo (#8075)
mehrdadh May 28, 2021
f0aedc4
Pin black version (#8139)
NicolaLancellotti May 28, 2021
ece644c
[IR][Pass][Instrument] Pass instrument framework (#7952)
zackcquic May 28, 2021
0e73035
[Vulkan][Refactor] Split out vulkan.cc into separate distinct functio…
Lunderberg May 29, 2021
a4fb12d
[CI] Cleanup stale logs for auto-tuning (#8160)
tqchen May 29, 2021
d78cd07
[Docs] Added developer documentation for DeviceAPI and Target. (#8082)
Lunderberg May 29, 2021
2e2dea7
rev jenkins containers for #7995 (#8155)
areusch May 29, 2021
27e44ee
[Relay] Support dynamic indices size in gather_nd and scatter_nd (#8105)
masahi May 30, 2021
e26990f
[AutoTVM][AutoScheduler] Add workaround to alter op layout bug in tas…
May 30, 2021
8b5d843
Fix tvmc tuner for cases when uTVM is not enabled (#8153)
elvin-n May 30, 2021
e535ec8
[VM] Avoid round-trip Target->str->Target conversions (#8161)
Lunderberg May 30, 2021
1fe9f8d
[CMake][Minor] Update CMake warning flags (#8152)
junrushao May 30, 2021
4bbbfe8
[Fix] Fix conv2d HWNC type strategy (#8147)
wyc-ruiker May 30, 2021
7316a38
[CI] Fix the CI after image update. (#8164)
tqchen May 30, 2021
713de0c
[CI][DOCKER] Fix cuda11 nvidia-docker support for non-Tesla gpus (#8163)
tqchen May 31, 2021
eebd5a9
[FastMath] Add cuda & x86 schedules for fast_softmax (#8150)
jcf94 May 31, 2021
bd4b14d
Update auto_tuning_with_python.py (#8158)
jiangjiajun May 31, 2021
06a466c
allow libbacktrace to be used when cross compiling the runtime (#7917)
mherkazandjian Jun 1, 2021
86fea5f
[Caffe Frontend] reverting codes related Reduction for splitting PR
zotanika Jun 1, 2021
e26846f
instant fix against docker format error
zotanika Jun 1, 2021
9fb8d71
Revert "instant fix against docker format error"
zotanika Jun 1, 2021
20878fa
instant fix against docker format error, only on 'frontend/caffe'
zotanika Jun 1, 2021
106c331
[microTVM] make RVM memory and number of cores variable (#8154)
mehrdadh Jun 1, 2021
6baccc1
[ONNX] [Relay] Update unique operator to match ONNX output (1D only) …
electriclilies Jun 1, 2021
bc785de
Add function attribute for shape func for profiling (#8148)
masahi Jun 1, 2021
bb3e772
[Vulkan][Docs] Minor updates following Vulkan target query. (#8151)
Lunderberg Jun 2, 2021
0c83fe8
[Vulkan] Remove dependency on Target from -from_device functionality.…
Lunderberg Jun 2, 2021
b7c98b8
[Strategy] Add group_conv2d_nchw_int8 in cuda strategy (#8167)
wyc-ruiker Jun 2, 2021
cbe3dca
[Relay, TOPI] Refactor strided_slice and add axes argument (#8165)
masahi Jun 2, 2021
cc3d60e
[BYOC][TensorRT] Reuse TRT engines based on max_batch_size for dynami…
Jun 3, 2021
155f669
[TVMC] Fix tvmc compile to extract target and target_host from --targ…
leandron Jun 3, 2021
b753772
fix UTF (#8185)
mehrdadh Jun 3, 2021
dd09bbb
[TensorIR][M2a] ComputeInline,ReverseComputeInline (#8170)
junrushao Jun 4, 2021
7c99d83
[Vulkan][UnitTests] Compatibility fix for test_vulkan_unique(). (#8186)
Lunderberg Jun 4, 2021
aca48d6
[Vulkan] Corrected typo in Vulkan capability error messages. (#8187)
Lunderberg Jun 4, 2021
ae4a3be
[Vulkan][Refactor] Pull out vulkan initialization into VulkanInstance…
Lunderberg Jun 4, 2021
c7f1b45
Onnx eyelike (#8191)
CircleSpin Jun 4, 2021
0429c63
Complete register op from python (#8079)
xqdan Jun 4, 2021
a74d0fe
[Codegen] Use "target.build.$TARGET_KIND" for all codegen functions. …
Lunderberg Jun 4, 2021
c9db3d0
[METAL] Fix the rest memory leaks in Metal runtime (#8175)
echuraev Jun 4, 2021
82cf197
Fix prelu bug in pytorch frontend (#8192)
YuhengHuang42 Jun 4, 2021
aa9974f
[TE/TIR] Fix create_prim_func to properly handle rank 0 tensors. (#8128)
tkonolige Jun 4, 2021
3e34e11
[CMake] Add compile-time check that libtvm_runtime.so has no undefine…
Lunderberg Jun 4, 2021
a769ece
[AOT] Initial implementation of --unpacked-api (#8023)
Mousius Jun 4, 2021
a1cd6d5
fix py files (#8194)
mehrdadh Jun 4, 2021
e0baf80
Run ONNX Node Tests on available targets (#8189)
Jun 4, 2021
f4ec5fd
[Relay, TF] Support converting TF combined_nms using Relay all_class_…
masahi Jun 4, 2021
010d11b
[Texture support][Part 0] Device API and runtime support (#7711)
csullivan Jun 5, 2021
5b37b4a
Fix typo (#8197)
zxybazh Jun 5, 2021
43387d0
fix bug in dense_nopack if dynamic input shape (#8166)
lygztq Jun 5, 2021
2cca934
[RUNTIME][REFACTOR] Re-organize Containers into SubFolders (#8183)
ZihengJiang Jun 6, 2021
cc9d5cf
update python code style to 3.6 (#8199)
Jun 6, 2021
f4b5e76
[CI][DOCS] Fix the sphinx doc style for sphinx4 (#8198)
tqchen Jun 6, 2021
072a3d2
Fix incorrect device name in TVMC. (#8181)
mdw-octoml Jun 6, 2021
3ab4a6b
Add thread_warp_size for Metal device in default target attributes (#…
elvin-n Jun 7, 2021
51bbd63
Fix conv2d_nchw for opencl intel graphics (#8201)
elvin-n Jun 7, 2021
364bc1b
[QEMU] Add number of cores, target list for build (#8156)
mehrdadh Jun 7, 2021
2c67d71
[FIX] Allow tokenizer to parse numbers greater than INT_MAX. (#8120)
tkonolige Jun 7, 2021
64a8e81
[Frontend, Tensorflow2] Adding TF2 frontend code with support for con…
rohanmukh Jun 8, 2021
9be0f4f
[Relay] Convert a fake quantized or QAT graph into QNN ops (#8126)
Jun 8, 2021
d1e2e0d
[Fix][microTVM] QEMU RPC issue (#8021)
mehrdadh Jun 8, 2021
f1486ef
[Docker] Add external directory mount (#8144)
mehrdadh Jun 8, 2021
bd0f5bc
Support dequantizing scalar inputs (#8207)
Jun 9, 2021
f646048
use an empty module for fold_constant (#8208)
Jun 9, 2021
5e006e0
[TIR] Fix data dependent indexing when lowering TE to TIR (#8217)
tkonolige Jun 9, 2021
685ebda
[VM] Better error messages (#8218)
hypercubestart Jun 9, 2021
9899f1e
Auto-tuning a Convolutional Network for ARM CPU (tutorial error, bug …
cbswj Jun 9, 2021
55459e7
[TVMSCRIPT] Add tir.min node in tvm script (#8219)
Beya2019 Jun 9, 2021
5dc9627
[Metal] Remove matching Metal to OpenCL in tophub (#8211)
echuraev Jun 9, 2021
8a04efa
Graph executor: remove unnecessary unique_ptr, NFC (#8214)
Jun 9, 2021
53e4c60
[DOC] Improve "Getting Started with TVM" tutorials and fix warnings (…
merrymercy Jun 9, 2021
1f2ca06
Expose list of PassContext configurations to the Python APIs (#8212)
leandron Jun 9, 2021
4d9bc9b
[RUNTIME] ShapeTuple Container (#8200)
ZihengJiang Jun 9, 2021
34e9a4f
[Frontend, Tensorflow, Tensorflow2] Tensorflow frontend op refactor (…
rohanmukh Jun 10, 2021
d767659
Fix use of wrong variable (#8227)
serkm Jun 10, 2021
a468f08
Add metadata information to the listing of PassContext configuration …
leandron Jun 10, 2021
d97d8d3
fake quantization to integer (#8228)
AndrewZhaoLuo Jun 10, 2021
b93e56e
[CuBLAS] Support implicit broadcast in batch_matmul (#8229)
comaniac Jun 10, 2021
089bfe7
[Caffe Frontend] adding Reduction op
zotanika May 11, 2021
1afe7a8
reformatting Reduction op test script
zotanika May 11, 2021
792423e
reformatting Reduction test script
zotanika May 11, 2021
ef4d076
[Caffe frontend] Reduction op
zotanika May 18, 2021
924f3cf
linting test script
zotanika May 18, 2021
5536589
linting
zotanika May 18, 2021
fc6aa3a
[Caffe Frontend] Supporting multiple grouped(channel-wise) Deconv op
zotanika May 25, 2021
8e913a8
[Caffe Frontend] reverting codes related Reduction for splitting PR
zotanika Jun 1, 2021
db96cd6
instant fix against docker format error
zotanika Jun 1, 2021
9d821dc
Revert "instant fix against docker format error"
zotanika Jun 1, 2021
552b1a8
instant fix against docker format error, only on 'frontend/caffe'
zotanika Jun 1, 2021
1c8c505
[Caffe Frontend] adding Reduction op
zotanika May 11, 2021
b75bc53
reformatting Reduction op test script
zotanika May 11, 2021
fc87889
reformatting Reduction test script
zotanika May 11, 2021
847b3b8
[Caffe frontend] Reduction op
zotanika May 18, 2021
400edce
linting test script
zotanika May 18, 2021
f3fad0d
linting
zotanika May 18, 2021
72637ef
[Caffe Frontend] reverting codes related Reduction for splitting PR
zotanika Jun 1, 2021
2214813
instant fix against docker format error
zotanika Jun 1, 2021
0321ad3
Revert "instant fix against docker format error"
zotanika Jun 1, 2021
37824f4
instant fix against docker format error, only on 'frontend/caffe'
zotanika Jun 1, 2021
b895f2e
[COMMUNITY] Egor Churaev -> reviewer (#8231)
merrymercy Jun 10, 2021
4079ffd
[LLVM] Fix CodeGenLLVM::LinkParameters (#8213)
Jun 10, 2021
217555f
[AutoTVM] Added @functools.wraps to function decorators (#8237)
Lunderberg Jun 10, 2021
8ea6a30
[Metal] Reduce number of threads for reduction layers (#8206)
echuraev Jun 10, 2021
4e9760b
support matching attributes with more complext objects (#8240)
Jun 11, 2021
c29301e
[µTVM] Zephyr: Fix missing board-specific config file in build dir (#…
gromero Jun 11, 2021
657af3a
Fix compile time and runtime errors of EdgeTPURuntime (#8133)
akmaru Jun 11, 2021
938c1f6
Merge branch 'frontend-caffe-deconv' of https://github.com/zotanika/i…
zotanika Jun 11, 2021
f906fa8
[Vulkan][Refactor] Move ownership of per-CPU-thread objects to Vulkan…
Lunderberg Jun 11, 2021
8a0472f
[BYOC][ACL] Prevent dilated pooling (#8149)
d-smirnov Jun 11, 2021
d69011d
[ETHOSN] Removed support for 20.08 version of the driver stack. (#7858)
tristan-arm Jun 11, 2021
959e39a
[microTVM] Add QEMU build to RVM image (#8190)
mehrdadh Jun 11, 2021
ab16685
[TOPI][batch_matmul] Allow cblas batch_matmul implicit batch_size bro…
ymwangg Jun 12, 2021
3972c29
doc: fixes to dataflow_pattern (#8247)
Jun 12, 2021
9dd1286
Unify Python and C++ TIR lower API (#8110)
CircleSpin Jun 12, 2021
f4b95ab
Move Micro TVM top level page (#8249)
Jun 12, 2021
90fb626
[CI] [ComputeLibrary] Use pre-built binaries instead of compiled (#8245)
d-smirnov Jun 14, 2021
1c251f5
Fix build break in android_rpc (#8252)
euntaik Jun 14, 2021
24c2f5c
make simplify inference iterative (#8246)
Jun 14, 2021
af998e4
Merge remote-tracking branch 'remotes/origin/frontend-caffe-deconv' i…
zotanika Jun 15, 2021
109 changes: 81 additions & 28 deletions python/tvm/relay/frontend/caffe.py
@@ -33,7 +33,7 @@


class OperatorConverter(object):
""" Operator Converted for converting Caffe ops to Relay ops """
"""Operator Converted for converting Caffe ops to Relay ops"""

def __init__(self, init_layer_dict, predict_layer, exp_tab):
self.init_layer_dict = init_layer_dict
@@ -66,7 +66,7 @@ def __init__(self, init_layer_dict, predict_layer, exp_tab):
}

def convert_flatten(self, op):
""" Convert Flatten layer """
"""Convert Flatten layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])

@@ -77,7 +77,7 @@ def convert_flatten(self, op):
return out

def convert_eltwise(self, op):
""" Convert Eltwise layer """
"""Convert Eltwise layer"""
inputs = op.bottom
assert len(inputs) == 2, "input tensors length should be 2"

@@ -115,7 +115,7 @@ def convert_eltwise(self, op):
return out

def _parse_conv_params(self, op):
""" Parse the parameters of Convolution and Deconvolution layer """
"""Parse the parameters of Convolution and Deconvolution layer"""
nonzone = lambda val, pos, dflt: val[pos] if pos < len(val) else dflt

conv_params = op.convolution_param
@@ -160,7 +160,7 @@ def _parse_conv_params(self, op):
return params

def convert_batch_norm(self, op):
""" Convert BatchNorm layer """
"""Convert BatchNorm layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])
n, c, h, w = _infer_shape(in_expr)
@@ -215,7 +215,7 @@ def convert_batch_norm(self, op):
return out[0]

def convert_scale(self, op):
""" Convert Scale layer """
"""Convert Scale layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])
weight_bias_blobs = self.init_layer_dict[op.name].blobs
@@ -243,7 +243,7 @@ def convert_scale(self, op):
return out

def convert_concat(self, op):
""" Convert Concat layer """
"""Convert Concat layer"""
inputs = op.bottom
in_expr = (self.exp_tab.get_expr(inputs[i]) for i in range(len(inputs)))

@@ -254,7 +254,7 @@ def convert_concat(self, op):
return out

def convert_reshape(self, op):
""" Convert Reshape layer """
"""Convert Reshape layer"""
inputs = op.bottom
input_name = inputs[0]

@@ -294,7 +294,7 @@ def convert_reshape(self, op):
return out

def convert_softmax(self, op):
""" Convert Softmax layer """
"""Convert Softmax layer"""
inputs = op.bottom
assert len(inputs) == 1, "input tensors length should be 1"

@@ -309,7 +309,7 @@ def convert_softmax(self, op):
return out

def convert_conv(self, op):
""" Convert Convolution layer """
"""Convert Convolution layer"""
params = self._parse_conv_params(op)
weight_bias_blobs = self.init_layer_dict[op.name].blobs
conv_params = op.convolution_param
@@ -339,7 +339,7 @@ def convert_conv(self, op):
return out

def convert_pooling(self, op):
""" Convert Pooling layer """
"""Convert Pooling layer"""
inputs = op.bottom
input_name = inputs[0]

@@ -400,7 +400,7 @@ def convert_pooling(self, op):
return out

def convert_lrn(self, op):
""" Convert LRN layer """
"""Convert LRN layer"""
inputs = op.bottom
input_name = inputs[0]

@@ -416,7 +416,7 @@ def convert_lrn(self, op):
return out

def convert_innerproduct(self, op):
""" Convert InnerProduct layer """
"""Convert InnerProduct layer"""
inputs = op.bottom
weight_bias_blobs = self.init_layer_dict[op.name].blobs
dense_params = op.inner_product_param
@@ -457,7 +457,7 @@ def convert_innerproduct(self, op):
return out

def convert_dropout(self, op):
""" Convert Dropout layer """
"""Convert Dropout layer"""
inputs = op.bottom
input_name = inputs[0]

@@ -471,7 +471,7 @@ def convert_dropout(self, op):
return out

def convert_relu(self, op):
""" Convert ReLU layer """
"""Convert ReLU layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])
negative_slope = op.relu_param.negative_slope
@@ -483,7 +483,7 @@ def convert_relu(self, op):
return out

def convert_prelu(self, op):
""" Convert PReLU layer """
"""Convert PReLU layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])

@@ -495,7 +495,7 @@ def convert_prelu(self, op):
return out

def convert_deconv(self, op):
""" Convert Deconvolution layer """
"""Convert Deconvolution layer"""
params = self._parse_conv_params(op)
weight_bias_blobs = self.init_layer_dict[op.name].blobs
conv_params = op.convolution_param
@@ -511,23 +511,76 @@ def convert_deconv(self, op):
if weight:
kh, kw = params["kernel_size"]
weight_shape = [-1, conv_params.num_output, kh, kw]
weight_value = np.asarray(weight.data, np.float32)
if not weight.data:
if conv_params.weight_filler:
_filler = conv_params.weight_filler.value
weight_value = np.full(weight.shape.dim, _filler, np.float32)
else:
raise tvm.error.OpAttributeInvalid("At least weight_filler must be given")
else:
weight_value = np.asarray(weight.data, np.float32)
weight_value = np.reshape(weight_value, weight_shape)
else:
raise Exception("No weight value of layer {} in caffemodel".format(op.name))
raise tvm.error.OpAttributeRequired(
"No weight value of layer {} in caffemodel".format(op.name)
)

weight_expr = self.exp_tab.new_const(weight_value, dtype="float32")
in_expr = self.exp_tab.get_expr(inputs[0])
out = _op.nn.conv2d_transpose(data=in_expr, weight=weight_expr, **params)
if bias:

groups = params["groups"]
channels = params["channels"]

if bias:
bias_value = np.asarray(bias.data, np.float32)
bias_expr = self.exp_tab.new_const(bias_value, dtype="float32")
out = _op.nn.bias_add(out, bias_expr)

if groups > channels:
raise tvm.error.OpAttributeInvalid(
"Groups cannot be larger than the number of input channels"
)

if groups == channels:
inputs_expr = _op.split(in_expr, groups, axis=1)
weights_expr = _op.split(weight_expr, groups, axis=1)
# Prevent creating a Concat layer with too many tensors (> 16)
q = groups >> 4
r = groups % 16

params["groups"] = 1
params["channels"] = 1
out = []
for lc in range(q):
_outputs = []
_inputs = [inputs_expr[i] for i in range(lc << 4, (lc << 4) + 16)]
_weights = [weights_expr[i] for i in range(lc << 4, (lc << 4) + 16)]
for (i, w) in zip(_inputs, _weights):
_out = _op.nn.conv2d_transpose(data=i, weight=w, **params)
if bias:
_out = _op.nn.bias_add(_out, bias_expr)
_outputs.append(_out)
out.append(_op.concatenate(_outputs, axis=1))
if r != 0:
_outputs = []
_inputs = [inputs_expr[i] for i in range(groups - r, groups)]
_weights = [weights_expr[i] for i in range(groups - r, groups)]
for (i, w) in zip(_inputs, _weights):
_out = _op.nn.conv2d_transpose(data=i, weight=w, **params)
if bias:
_out = _op.nn.bias_add(_out, bias_expr)
_outputs.append(_out)
out.append(_op.concatenate(_outputs, axis=1))
out = _op.concatenate(out, axis=1)
elif groups == 1:
out = _op.nn.conv2d_transpose(data=in_expr, weight=weight_expr, **params)
if bias:
out = _op.nn.bias_add(out, bias_expr)
else:
raise tvm.error.OpAttributeInvalid("Unable to handle.")
return out

def convert_slice(self, op):
""" Convert Slice layer """
"""Convert Slice layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])

@@ -545,21 +598,21 @@ def convert_slice(self, op):
return out

def convert_sigmoid(self, op):
""" Convert Sigmoid layer """
"""Convert Sigmoid layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])
out = _op.sigmoid(in_expr)
return out

def convert_tanh(self, op):
""" Convert TanH layer """
"""Convert TanH layer"""
inputs = op.bottom
in_expr = self.exp_tab.get_expr(inputs[0])
out = _op.tanh(in_expr)
return out

def convert_crop(self, op):
""" Convert Crop layer """
"""Convert Crop layer"""
inputs = op.bottom
assert len(inputs) == 2, "Need two inputs of Crop layer"
in_expr_a = self.exp_tab.get_expr(inputs[0])
@@ -615,7 +668,7 @@ def check_unsupported_ops(self):
raise tvm.error.OpNotImplemented(msg.format(ops))

def fuse_op(self, layers):
""" Fusing the BatchNorm and Scale layer """
"""Fusing the BatchNorm and Scale layer"""
bn, scale = layers["bn"], layers["scale"]

# bn params
@@ -641,7 +694,7 @@ def fuse_op(self, layers):
return bn

def op_fuse(self):
"""fuse bn and scale """
"""fuse bn and scale"""
new_layers = []
temp_layers = {}
changed_layers = {}
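For reference, the core idea of the `convert_deconv` change can be sketched outside the converter: when `group` equals the number of channels, the transposed convolution is decomposed into one single-channel `conv2d_transpose` per group, and the per-group results are concatenated back along the channel axis, chunked so that no single concatenate takes more than 16 inputs. The snippet below is a minimal illustration under those assumptions — the helper name, the variable shapes, and the pass-through of remaining attributes are invented for the example and are not part of the PR.

```python
from tvm import relay


def grouped_deconv_as_split_concat(data, weight, groups, chunk=16, **conv_attrs):
    """Illustrative rewrite of a channel-wise (group == channels) deconvolution.

    Splits data and weight per group, applies a single-channel
    conv2d_transpose to each slice, and concatenates the results back
    along the channel axis. `chunk` caps how many tensors feed one
    concatenate, mirroring the 16-input guard in the converter above.
    """
    data_slices = relay.split(data, groups, axis=1)
    weight_slices = relay.split(weight, groups, axis=1)

    outputs = []
    for start in range(0, groups, chunk):
        block = [
            relay.nn.conv2d_transpose(
                data_slices[i], weight_slices[i], groups=1, channels=1, **conv_attrs
            )
            for i in range(start, min(start + chunk, groups))
        ]
        outputs.append(relay.concatenate(block, axis=1))
    return outputs[0] if len(outputs) == 1 else relay.concatenate(outputs, axis=1)


# Illustrative usage: a 1x16x32x32 input with 16 depthwise groups and a
# 2x2, stride-2 kernel, along the lines of the first new test case below.
data = relay.var("data", shape=(1, 16, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(1, 16, 2, 2), dtype="float32")
out = grouped_deconv_as_split_concat(
    data, weight, groups=16, kernel_size=(2, 2), strides=(2, 2), padding=(0, 0)
)
func = relay.Function(relay.analysis.free_vars(out), out)
```

With 16 groups only one concatenate is needed; the 100-group test case below would produce seven chunks (6 × 16 plus a remainder of 4) joined by a final concatenate, matching the quotient/remainder loop in the diff.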
27 changes: 27 additions & 0 deletions tests/python/frontend/caffe/test_forward.py
@@ -452,6 +452,33 @@ def test_forward_Deconvolution():
bias_filler=dict(type="xavier"),
),
)
_test_deconvolution(
data,
convolution_param=dict(
num_output=16,
bias_term=False,
pad=0,
kernel_size=2,
stride=2,
dilation=1,
group=16,
weight_filler=dict(type="xavier"),
bias_filler=dict(type="xavier"),
),
)
data = np.random.rand(1, 100, 32, 32).astype(np.float32)
_test_deconvolution(
data,
convolution_param=dict(
num_output=100,
bias_term=False,
pad=0,
kernel_size=2,
stride=2,
dilation=1,
group=100,
),
)


#######################################################################
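As a rough cross-check on the new test cases (not something the test suite computes), the expected output spatial size follows from the usual transposed-convolution arithmetic; the helper below is only illustrative.

```python
# Standard transposed-convolution output-size formula (an assumption of this
# note, not something defined in the PR).
def deconv_out_size(in_size, kernel, stride, pad, dilation=1, output_padding=0):
    return (in_size - 1) * stride - 2 * pad + dilation * (kernel - 1) + 1 + output_padding


# For the 1x100x32x32 case above (kernel=2, stride=2, pad=0, group=100),
# each of the 100 channel groups should produce a 64x64 map, so the final
# concatenated output is expected to be 1x100x64x64.
assert deconv_out_size(32, kernel=2, stride=2, pad=0) == 64
```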