
Feat support host memory #9928

Merged: 24 commits into master on Mar 29, 2023
Conversation

@clackhan (Contributor) commented Mar 2, 2023

Implements a HostMemoryInput mechanism: an op input can be declared as a HostMemoryInput as follows:

REGISTER_OP_HOST_MEMORY_INPUT("host_scalar_add_by_tensor", "scalar", 0);

When an input is declared as a HostMemoryInput, its data can be accessed directly inside the kernel's host function body.
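A minimal sketch of how such an input could then be consumed, assuming the standard OneFlow user-op kernel API (user_op::OpKernel, KernelComputeContext::Tensor4ArgNameAndIndex, Tensor::dptr); the kernel class, element type, and the omitted op/kernel registration are illustrative only:

#include "oneflow/core/framework/framework.h"

namespace oneflow {

class HostScalarAddByTensorKernel final : public user_op::OpKernel {
 public:
  HostScalarAddByTensorKernel() = default;
  ~HostScalarAddByTensorKernel() override = default;

 private:
  void Compute(user_op::KernelComputeContext* ctx) const override {
    // "scalar" was registered as a HostMemoryInput above, so its buffer is
    // host-resident and can be dereferenced here without a device-to-host
    // copy or stream synchronization.
    const user_op::Tensor* scalar = ctx->Tensor4ArgNameAndIndex("scalar", 0);
    const float scalar_value = *scalar->dptr<float>();
    // ... launch the actual elementwise add using scalar_value ...
    (void)scalar_value;
  }
  bool AlwaysComputeWhenAllOutputsEmpty() const override { return false; }
};

}  // namespace oneflow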

@clackhan clackhan requested a review from liujuncheng March 2, 2023 08:59
@clackhan clackhan changed the title from "Feat support host memory in lazy mode" to "Feat support host memory" on Mar 9, 2023
@clackhan clackhan marked this pull request as ready for review March 9, 2023 03:09
@clackhan clackhan requested a review from oneflow-ci-bot March 9, 2023 03:10
Comment on lines +173 to +176
Symbol<ParallelDesc> dst_parallel_desc =
is_host_input
? JUST(ReplaceDeviceType(infered_input_meta->parallel_desc(), DeviceType::kCPU))
: infered_input_meta->parallel_desc();
clackhan (Contributor, Author) commented:

When an op input is of HostMemory type, the device type of the boxing output parallel_desc is set to cpu.

Comment on lines +99 to +102
const auto& host_input = JUST(functional::To(
inputs.at(i), Optional<Symbol<Device>>(JUST(GetDefaultCpuDevice())), NullOpt, false));
input_eager_blob_objects.at(i) = JUST(host_input->eager_blob_object());
host_inputs.emplace_back(host_input);
clackhan (Contributor, Author) commented:

Extend the lifetime of host_input to prevent it from being destroyed prematurely.
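A standalone sketch (not OneFlow code) of the lifetime concern behind this change: the temporary CPU copy produced by functional::To must outlive the queued work that reads its buffer, so a reference is kept in host_inputs until the instruction has run; all names and types below are illustrative.

#include <functional>
#include <memory>
#include <vector>

struct HostTensor {
  std::vector<float> data;  // stands in for the eager blob object's host buffer
};

int main() {
  std::vector<std::shared_ptr<HostTensor>> host_inputs;  // keeps temporaries alive
  std::vector<std::function<float()>> pending;           // stands in for queued instructions

  {
    auto host_input = std::make_shared<HostTensor>(HostTensor{{1.f, 2.f, 3.f}});
    // Without this copy the shared_ptr would be destroyed at the end of this
    // scope, and the queued lambda below would read freed memory.
    host_inputs.emplace_back(host_input);
    pending.emplace_back([raw = host_input.get()] { return raw->data[0]; });
  }

  return pending.front()() == 1.f ? 0 : 1;  // safe: host_inputs still owns the buffer
}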

@clackhan clackhan requested review from oneflow-ci-bot and removed request for oneflow-ci-bot March 9, 2023 03:48
@github-actions (bot)

Speed stats:
GPU Name: GeForce GTX 1080 

❌ OneFlow resnet50 time: 140.9ms (= 14086.1ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 143.9ms (= 14387.3ms / 100, input_shape=[16, 3, 224, 224])
❌ Relative speed: 1.02 (= 143.9ms / 140.9ms)

OneFlow resnet50 time: 80.5ms (= 8049.0ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 84.2ms (= 8424.3ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.05 (= 84.2ms / 80.5ms)

OneFlow resnet50 time: 49.0ms (= 9801.2ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 54.1ms (= 10817.2ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.10 (= 54.1ms / 49.0ms)

OneFlow resnet50 time: 32.6ms (= 6526.4ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 44.1ms (= 8818.8ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 1.35 (= 44.1ms / 32.6ms)

OneFlow resnet50 time: 25.3ms (= 5056.1ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 40.6ms (= 8128.3ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.61 (= 40.6ms / 25.3ms)

OneFlow swin dataloader time: 0.239s (= 47.702s / 200, num_workers=1)
PyTorch swin dataloader time: 0.150s (= 30.030s / 200, num_workers=1)
Relative speed: 0.630 (= 0.150s / 0.239s)

OneFlow swin dataloader time: 0.066s (= 13.242s / 200, num_workers=4)
PyTorch swin dataloader time: 0.041s (= 8.271s / 200, num_workers=4)
Relative speed: 0.625 (= 0.041s / 0.066s)

OneFlow swin dataloader time: 0.043s (= 8.638s / 200, num_workers=8)
PyTorch swin dataloader time: 0.022s (= 4.486s / 200, num_workers=8)
Relative speed: 0.519 (= 0.022s / 0.043s)

❌ OneFlow resnet50 time: 152.4ms (= 15241.0ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 161.3ms (= 16128.9ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
❌ Relative speed: 1.06 (= 161.3ms / 152.4ms)

OneFlow resnet50 time: 91.1ms (= 9106.8ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 108.1ms (= 10807.6ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.19 (= 108.1ms / 91.1ms)

OneFlow resnet50 time: 59.0ms (= 11793.4ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 78.0ms (= 15598.3ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.32 (= 78.0ms / 59.0ms)

OneFlow resnet50 time: 42.3ms (= 8459.8ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 72.5ms (= 14502.5ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.71 (= 72.5ms / 42.3ms)

OneFlow resnet50 time: 36.5ms (= 7305.6ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 72.2ms (= 14437.6ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.98 (= 72.2ms / 36.5ms)

@github-actions (bot)

Speed stats:
GPU Name: GeForce GTX 1080 

❌ OneFlow resnet50 time: 141.2ms (= 14115.2ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 144.2ms (= 14416.5ms / 100, input_shape=[16, 3, 224, 224])
❌ Relative speed: 1.02 (= 144.2ms / 141.2ms)

OneFlow resnet50 time: 82.0ms (= 8200.7ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 88.1ms (= 8812.9ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.07 (= 88.1ms / 82.0ms)

OneFlow resnet50 time: 51.3ms (= 10261.3ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 60.5ms (= 12101.2ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.18 (= 60.5ms / 51.3ms)

OneFlow resnet50 time: 34.4ms (= 6887.3ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 46.6ms (= 9324.2ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 1.35 (= 46.6ms / 34.4ms)

OneFlow resnet50 time: 26.6ms (= 5322.1ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 44.3ms (= 8851.7ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.66 (= 44.3ms / 26.6ms)

OneFlow swin dataloader time: 0.256s (= 51.141s / 200, num_workers=1)
PyTorch swin dataloader time: 0.151s (= 30.152s / 200, num_workers=1)
Relative speed: 0.590 (= 0.151s / 0.256s)

OneFlow swin dataloader time: 0.073s (= 14.637s / 200, num_workers=4)
PyTorch swin dataloader time: 0.045s (= 8.906s / 200, num_workers=4)
Relative speed: 0.608 (= 0.045s / 0.073s)

OneFlow swin dataloader time: 0.041s (= 8.261s / 200, num_workers=8)
PyTorch swin dataloader time: 0.023s (= 4.617s / 200, num_workers=8)
Relative speed: 0.559 (= 0.023s / 0.041s)

❌ OneFlow resnet50 time: 153.5ms (= 15350.4ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 165.6ms (= 16562.1ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
❌ Relative speed: 1.08 (= 165.6ms / 153.5ms)

OneFlow resnet50 time: 93.1ms (= 9311.7ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 103.4ms (= 10341.1ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.11 (= 103.4ms / 93.1ms)

OneFlow resnet50 time: 61.2ms (= 12231.0ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 79.1ms (= 15827.8ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.29 (= 79.1ms / 61.2ms)

OneFlow resnet50 time: 42.8ms (= 8564.4ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 67.2ms (= 13434.3ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.57 (= 67.2ms / 42.8ms)

OneFlow resnet50 time: 37.1ms (= 7428.0ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 68.8ms (= 13761.9ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.85 (= 68.8ms / 37.1ms)

@github-actions (bot)

View latest API docs preview at: https://staging.oneflow.info/docs/Oneflow-Inc/oneflow/pr/9928/

@mergify mergify bot merged commit b305117 into master Mar 29, 2023
@mergify mergify bot deleted the feat_support_host_memory_in_lazy_mode branch March 29, 2023 17:36