Make output buffers for argument inputs to GPU operators pinned. #3728

Merged
2 commits, merged on Mar 10, 2022
Changes from all commits
24 changes: 20 additions & 4 deletions dali/pipeline/executor/executor.h
@@ -1,4 +1,4 @@
// Copyright (c) 2017-2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
// Copyright (c) 2017-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -640,13 +640,29 @@ std::vector<int> Executor<WorkspacePolicy, QueuePolicy>::GetTensorQueueSizes(con
template <typename WorkspacePolicy, typename QueuePolicy>
void Executor<WorkspacePolicy, QueuePolicy>::PrepinData(
    std::vector<tensor_data_store_queue_t> &tensor_to_store_queue, const OpGraph &graph) {
  // We only pin what we need
  // We only pin what we need:
  // The inputs of mixed ops are potentially used for H2D copies...
  for (int i = 0; i < graph.NumOp(OpType::MIXED); i++) {
Contributor
Now it will apply this also to decoders. I'm not sure if we want this (in some cases they don't need the input to be pinned, in others we copy to a staging buffer - like https://github.com/NVIDIA/DALI/blob/main/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h#L867).

Contributor Author
Do you think it's a big problem? I could specifically exclude decoders by name, although it seems a bit ugly.

Contributor
I was rather curious if it was a conscious decision. Maybe you can remove this https://github.com/NVIDIA/DALI/blob/main/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h#L867 if you are going to pin the input anyway.
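
For context on the trade-off discussed here, a minimal standalone sketch (not DALI code; UploadSample, its parameters, and the staging buffer are illustrative) of the two upload paths: a pageable input has to be copied into a pinned staging buffer before an asynchronous H2D transfer, whereas an input that is already pinned can be handed to cudaMemcpyAsync directly.

// Illustrative only: contrasts the staging-buffer path with a direct copy from a
// pinned input. cudaMemcpyAsync overlaps with other work only when the host
// pointer refers to pinned (page-locked) memory.
#include <cuda_runtime.h>
#include <cstring>

void UploadSample(const void *host_src, size_t nbytes, bool src_is_pinned,
                  void *dev_dst, void *pinned_staging, cudaStream_t stream) {
  if (src_is_pinned) {
    // Input buffer is already pinned: issue the async H2D copy straight from it.
    cudaMemcpyAsync(dev_dst, host_src, nbytes, cudaMemcpyHostToDevice, stream);
  } else {
    // Pageable input: stage it in a pinned buffer first, then copy asynchronously.
    std::memcpy(pinned_staging, host_src, nbytes);
    cudaMemcpyAsync(dev_dst, pinned_staging, nbytes, cudaMemcpyHostToDevice, stream);
  }
}

If the executor pins the decoder's input up front, the pageable branch (and the extra host-side memcpy it implies) becomes unnecessary, which is what the linked staging-buffer code could then drop.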

    auto &node = graph.Node(OpType::MIXED, i);
    for (int j = 0; j < node.spec.NumRegularInput(); ++j) {
    if (node.spec.name().find("decoders__") == 0)
      continue;  // don't pin inputs to decoders
    for (int j = 0; j < node.spec.NumInput(); ++j) {
      auto tid = node.parent_tensors[j];
      // Use pinned memory only when it is useful
      if (node.spec.name() == "MakeContiguous" && node.spec.NumOutput() == 1) {
        auto &parent_tensor_queue =
            get_queue<OpType::CPU, StorageDevice::CPU>(tensor_to_store_queue_[tid]);
        for (auto &tensor : parent_tensor_queue) {
          tensor->set_pinned(node.spec.OutputDevice(0) == "gpu" && !RestrictPinnedMemUsage());
        }
      }
    }
Comment on lines +649 to +658
Contributor
bool pinned = node.spec.OutputDevice(0) == "gpu" && !RestrictPinnedMemUsage();
for (int j = 0; j < node.spec.NumInput(); ++j) {
  ...
  for (auto &tensor : parent_tensor_queue) {
    tensor->set_pinned(pinned);
  }

You can extract the condition outside of both loops.
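
Below is a compilable toy of that refactor. All of the types and the RestrictPinnedMemUsage() stand-in are invented for illustration; only the shape of the loops mirrors the diff above.

// A standalone toy (types and names are made up, not DALI's) showing the shape of
// the suggested refactor: the pinning decision depends only on the node, so it can
// be computed once and reused for every tensor in every input queue.
#include <memory>
#include <string>
#include <vector>

struct Tensor {
  bool pinned = false;
  void set_pinned(bool p) { pinned = p; }
};

struct Node {
  std::string output_device;                                 // "gpu" or "cpu"
  std::vector<std::vector<std::shared_ptr<Tensor>>> inputs;  // one queue per input
};

bool RestrictPinnedMemUsage() { return false; }  // stand-in for the real check

void PrepinNodeInputs(Node &node) {
  // Hoisted out of both loops: the value is the same for every tensor below.
  const bool pinned = node.output_device == "gpu" && !RestrictPinnedMemUsage();
  for (auto &queue : node.inputs) {
    for (auto &tensor : queue) {
      tensor->set_pinned(pinned);
    }
  }
}

In the actual executor the inner queue would come from get_queue<OpType::CPU, StorageDevice::CPU>(tensor_to_store_queue_[tid]), as in the diff above; the toy only illustrates where the condition can live.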


  // ...as are CPU inputs of GPU ops (e.g. argument inputs)
  for (int i = 0; i < graph.NumOp(OpType::GPU); i++) {
    auto &node = graph.Node(OpType::GPU, i);
    for (int j = 0; j < node.spec.NumInput(); ++j) {
      auto tid = node.parent_tensors[j];
      if (graph.Tensor(tid).producer.storage_device == StorageDevice::CPU) {
        auto &parent_tensor_queue =
            get_queue<OpType::CPU, StorageDevice::CPU>(tensor_to_store_queue_[tid]);
        for (auto &tensor : parent_tensor_queue) {
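
The truncated hunk above pins CPU-produced buffers that feed GPU operators (the argument inputs from the PR title). As background, a small self-contained timing sketch (sizes and iteration count are arbitrary, not taken from DALI) of why page-locked host memory matters for these host-to-device copies:

// Standalone sketch comparing H2D copies from pageable and from pinned host memory;
// pinned transfers are typically faster and, with cudaMemcpyAsync, can overlap with
// other work on a stream.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

static float TimeCopies(const void *host_src, void *dev_dst, size_t nbytes, int iters) {
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaEventRecord(start);
  for (int i = 0; i < iters; i++)
    cudaMemcpy(dev_dst, host_src, nbytes, cudaMemcpyHostToDevice);
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);
  float ms = 0.f;
  cudaEventElapsedTime(&ms, start, stop);
  cudaEventDestroy(start);
  cudaEventDestroy(stop);
  return ms;
}

int main() {
  const size_t nbytes = 64 << 20;  // 64 MiB per copy
  const int iters = 20;

  void *dev = nullptr;
  cudaMalloc(&dev, nbytes);

  void *pageable = std::malloc(nbytes);  // ordinary, pageable host memory
  void *pinned = nullptr;
  cudaMallocHost(&pinned, nbytes);       // page-locked (pinned) host memory

  std::printf("pageable: %.2f ms\n", TimeCopies(pageable, dev, nbytes, iters));
  std::printf("pinned:   %.2f ms\n", TimeCopies(pinned, dev, nbytes, iters));

  cudaFreeHost(pinned);
  std::free(pageable);
  cudaFree(dev);
  return 0;
}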