Make convolution operator fully work with oneDNN v2.4+ #20847
Conversation
Hey @bartekkuncer , Thanks for submitting the PR
CI supported jobs: [windows-gpu, windows-cpu, centos-cpu, website, centos-gpu, edge, unix-gpu, sanity, unix-cpu, miscellaneous, clang]
Compare: 8067872 to 3aa4649
@mxnet-bot run ci [centos-gpu, unix-cpu]
Jenkins CI successfully triggered: [unix-cpu, centos-gpu]
Co-authored-by: bgawrych <bartlomiej.gawrych@intel.com>
@mxnet-bot run ci [centos-gpu]
Jenkins CI successfully triggered: [centos-gpu]
LGTM
Description
After the upgrade of oneDNN to version 2.4+, tests/python/dnnl/subgraphs/test_conv_subgraph.py::test_pos_conv_add[True-data_shape1] started failing. Investigation showed that the failure affects only cases where the number of input channels is lower than 4, and that switching away from the primitive using the weight dnnl::format_tag ABcd4b16a4b fixes the issue. This change implements that switch in MXNet and restores the original shape in the failing test (adjusted here: #20662).
This is only a temporary solution until the full fix arrives.
Problem tracking issue: #20826.
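The workaround described above can be sketched in isolation. The snippet below is a minimal illustration of the selection logic, not MXNet's actual implementation: the function name, the "any" fallback value, and the threshold constant are assumptions for demonstration; only the problematic tag name (ABcd4b16a4b) and the channel condition (< 4) come from the PR description.

```python
# Hypothetical sketch of the fallback described in this PR: when a
# convolution has fewer than 4 input channels, avoid the oneDNN weight
# layout ABcd4b16a4b and let the library pick a different one instead.

PROBLEMATIC_TAG = "ABcd4b16a4b"  # weight format_tag that fails for small channel counts
MIN_CHANNELS = 4                 # threshold reported in the PR description


def select_weight_format(requested_tag: str, in_channels: int) -> str:
    """Return the weight format tag to use for a convolution primitive.

    Falls back to "any" (i.e. let oneDNN choose) when the requested tag
    is the known-problematic one and the input channel count is below
    the threshold.
    """
    if requested_tag == PROBLEMATIC_TAG and in_channels < MIN_CHANNELS:
        return "any"
    return requested_tag


if __name__ == "__main__":
    # 3 input channels: problematic tag is rejected.
    print(select_weight_format("ABcd4b16a4b", 3))
    # 4 input channels: requested tag is kept.
    print(select_weight_format("ABcd4b16a4b", 4))
```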
Checklist
Changes