
Optimize nchw MaxPooling #7426

Merged: 21 commits from optimize_nchw_pool into master, Feb 21, 2022
Conversation

@MARD1NO (Contributor) commented Feb 7, 2022

Test platform:

A100, CUDA 11.4

| pool type | shape | OneFlow time | PyTorch time |
| --- | --- | --- | --- |
| maxpool1d forward | 32 × 64 × (112 × 112) | 196.19us | 253.92us |
| maxpool1d backward | 32 × 64 × (112 × 112) | 275.3us | 694.69us |
| maxpool2d forward | 32 × 64 × 112 × 112 | 204.03us | 206.34us |
| maxpool2d backward | 32 × 64 × 112 × 112 | 211.68us | 851.74us |
| maxpool3d forward | 32 × 32 × 64 × 32 × 64 | 1210us | 1450us |
| maxpool3d backward | 32 × 32 × 64 × 32 × 64 | 747.3us | 718.75us |

On the backward pass, PyTorch uses its own reduce-based implementation for maxpool1d/2d and falls back to atomic_add for maxpool3d. We use atomic_add in all three cases, which is why the gap is small for 3d.
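
For reference, a minimal sketch of an atomic_add-style maxpool backward as described above (simplified flat-index case; the kernel name, signature, and index layout are illustrative, not the actual OneFlow kernel):

```cuda
// Illustrative atomic-add max-pool backward (simplified, flat indices).
// The forward pass records, for each output element i, the flat input
// offset of the max element in `indice[i]`; the backward pass scatters
// dy[i] to that offset. Overlapping windows can hit the same input
// element from different threads, hence atomicAdd. `dx` must be
// zero-initialized before launch.
__global__ void MaxPoolBackwardAtomic(const float* dy, const int64_t* indice,
                                      float* dx, int64_t y_elem_num) {
  for (int64_t i = blockIdx.x * blockDim.x + threadIdx.x; i < y_elem_num;
       i += static_cast<int64_t>(gridDim.x) * blockDim.x) {
    atomicAdd(dx + indice[i], dy[i]);
  }
}
```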

TODO: NHWC optimization

@MARD1NO MARD1NO linked an issue Feb 9, 2022 that may be closed by this pull request
@MARD1NO MARD1NO marked this pull request as ready for review February 9, 2022 03:49
```
@@ -289,20 +294,39 @@ class MaxPool2dKernel final : public user_op::OpKernel {
  const MaxPoolingParams3D& params_3d = pooling_cache->GetParams3D();

  const int64_t elem_num = y->shape().elem_cnt();
  // const int32_t elem_num = y->shape().elem_cnt();
```
Contributor: Is this commented-out line still needed?

@MARD1NO (author): Removed.
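
For context, the commented-out int32 line matches the experiment visible in the commit history below ("use int32 to describe x shape", then "revert back to use int64_t"): 32-bit index math is cheaper on GPUs but overflows once a tensor exceeds 2^31 − 1 elements. A minimal sketch of the usual dispatch pattern; `LaunchPoolKernel` is a hypothetical helper, not OneFlow's API:

```cpp
#include <cstdint>
#include <limits>

// Hypothetical dispatch helper: 32-bit integer division/modulo is
// markedly cheaper than 64-bit on NVIDIA GPUs, but int32 offsets
// overflow past 2^31 - 1 elements, so the safe pattern is to pick
// the index type per launch based on the element count.
template<typename IndexT>
void LaunchPoolKernel(int64_t elem_num /*, ... */);

void DispatchPoolKernel(int64_t elem_num) {
  if (elem_num < std::numeric_limits<int32_t>::max()) {
    LaunchPoolKernel<int32_t>(elem_num);  // fast 32-bit index math
  } else {
    LaunchPoolKernel<int64_t>(elem_num);  // safe for very large tensors
  }
}
```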

```
@@ -50,6 +54,12 @@ struct DeviceAdd {
};
};

#ifdef WITH_CUDA

OF_DEVICE_FUNC int32_t device_min(int32_t a, int32_t b) { return a <= b ? a : b; }
```
Contributor: Is this function used in this PR?

Contributor: device_min doesn't seem to be used?

@MARD1NO (author): Removed.

@github-actions (Contributor) commented:
Speed stats:
GPU Name: GeForce GTX 1080 

✔️ OneFlow resnet50 time: 128.5ms (= 12852.3ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 139.3ms (= 13928.0ms / 100, input_shape=[16, 3, 224, 224])
✔️ Relative speed: 1.08 (= 139.3ms / 128.5ms)

✔️ OneFlow resnet50 time: 78.0ms (= 7802.0ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 84.3ms (= 8431.8ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.08 (= 84.3ms / 78.0ms)

OneFlow resnet50 time: 51.9ms (= 10387.4ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 54.0ms (= 10801.9ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.04 (= 54.0ms / 51.9ms)

OneFlow resnet50 time: 43.1ms (= 8617.1ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 47.0ms (= 9396.0ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 1.09 (= 47.0ms / 43.1ms)

OneFlow resnet50 time: 39.6ms (= 7927.2ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 38.4ms (= 7682.3ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 0.97 (= 38.4ms / 39.6ms)

✔️ OneFlow resnet50 time: 140.8ms (= 14084.2ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 160.3ms (= 16034.0ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.14 (= 160.3ms / 140.8ms)

OneFlow resnet50 time: 88.4ms (= 8844.0ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 102.4ms (= 10243.7ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.16 (= 102.4ms / 88.4ms)

OneFlow resnet50 time: 62.2ms (= 12446.2ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 76.2ms (= 15232.0ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.22 (= 76.2ms / 62.2ms)

OneFlow resnet50 time: 51.9ms (= 10387.2ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 63.2ms (= 12638.9ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.22 (= 63.2ms / 51.9ms)

OneFlow resnet50 time: 47.7ms (= 9544.2ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 58.9ms (= 11785.0ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.23 (= 58.9ms / 47.7ms)

@oneflow-ci-bot oneflow-ci-bot merged commit 95983ff into master Feb 21, 2022
@oneflow-ci-bot oneflow-ci-bot deleted the optimize_nchw_pool branch February 21, 2022 17:57
marigoold pushed a commit that referenced this pull request Mar 15, 2022
* first debug

* fix maxpool

* fix bug

* remove redundant code

* remove redundant read

* remove redundant data_ptr offset

* use int32 to describe x shape

* Fix cuda input params for maxpool2d

* just for debug

* just for profile

* reduce div

* use int32_t indice

* revert back to use int64_t

* fix maxpool1d 3d

* optimize backward

* fix all optimize. TODO: NHWC

* fix comment

Co-authored-by: oneflow-ci-bot <69100618+oneflow-ci-bot@users.noreply.github.com>
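
The "reduce div" and "use int32 to describe x shape" commits point at a standard NCHW indexing trick: decompose a flat offset with as few integer divisions as possible, since division is expensive on GPUs and 32-bit division is cheaper than 64-bit. A hedged sketch of the idea, not the actual OneFlow helper:

```cuda
// Illustrative flat-offset -> (n, c, h, w) decomposition that reuses
// each quotient instead of re-dividing by precomputed strides, so one
// lookup costs three div/mod pairs. int32_t keeps the integer division
// cheap, which is safe while n*c*h*w stays below 2^31.
__device__ void OffsetToNchw(int32_t offset, int32_t c, int32_t h, int32_t w,
                             int32_t* n_idx, int32_t* c_idx, int32_t* h_idx,
                             int32_t* w_idx) {
  *w_idx = offset % w;
  const int32_t hw = offset / w;  // reuse the quotient for the next step
  *h_idx = hw % h;
  const int32_t chw = hw / h;
  *c_idx = chw % c;
  *n_idx = chw / c;
}
```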

Successfully merging this pull request may close these issues:

Optimize Pooling NCHW Kernel