Make output buffers for argument inputs to GPU operators pinned. #3728
Conversation
Signed-off-by: Michał Zientkiewicz <mzient@gmail.com>
CI MESSAGE: [4109799]: BUILD STARTED
@@ -640,13 +640,27 @@ std::vector<int> Executor<WorkspacePolicy, QueuePolicy>::GetTensorQueueSizes(con
 template <typename WorkspacePolicy, typename QueuePolicy>
 void Executor<WorkspacePolicy, QueuePolicy>::PrepinData(
     std::vector<tensor_data_store_queue_t> &tensor_to_store_queue, const OpGraph &graph) {
-  // We only pin what we need
+  // We only pin what we need:
+  // The inputs of mixed ops are potentially used for H2D copies...
   for (int i = 0; i < graph.NumOp(OpType::MIXED); i++) {
Now this will also apply to decoders. I'm not sure we want that: in some cases they don't need the input to be pinned, and in others we copy to a staging buffer anyway, like https://github.com/NVIDIA/DALI/blob/main/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h#L867.
Do you think it's a big problem? I could specifically exclude decoders by name, although it seems a bit ugly.
I was rather curious whether it was a conscious decision. Maybe you can remove this staging copy (https://github.com/NVIDIA/DALI/blob/main/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h#L867) if you are going to pin the input anyway.
CI MESSAGE: [4109799]: BUILD PASSED
for (int j = 0; j < node.spec.NumInput(); ++j) {
  auto tid = node.parent_tensors[j];
  // Use pinned memory only when it is useful
  if (node.spec.name() == "MakeContiguous" && node.spec.NumOutput() == 1) {
    auto &parent_tensor_queue =
        get_queue<OpType::CPU, StorageDevice::CPU>(tensor_to_store_queue_[tid]);
    for (auto &tensor : parent_tensor_queue) {
      tensor->set_pinned(node.spec.OutputDevice(0) == "gpu" && !RestrictPinnedMemUsage());
    }
  }
}
bool pinned = node.spec.OutputDevice(0) == "gpu" && !RestrictPinnedMemUsage();
for (int j = 0; j < node.spec.NumInput(); ++j) {
...
for (auto &tensor : parent_tensor_queue) {
tensor->set_pinned(pinned);
}
You can extract the condition outside of both loops.
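To make the reviewer's point concrete, here is a minimal, self-contained sketch of the hoisting refactor. The `Node`/`Tensor` types and `PrepinInputs` are hypothetical, simplified stand-ins for DALI's real classes, used only to show that the condition depends on neither loop variable and can therefore be evaluated once before both loops.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-ins for DALI's node/tensor types (illustration only).
struct Tensor {
  bool pinned = false;
  void set_pinned(bool p) { pinned = p; }
};

struct Node {
  std::string output_device;                // e.g. "gpu" or "cpu"
  std::vector<std::vector<Tensor>> inputs;  // one queue of tensors per input
};

bool RestrictPinnedMemUsage() { return false; }  // assumed global switch

// The pinning condition is loop-invariant, so compute it once up front,
// as suggested in the review.
void PrepinInputs(Node &node) {
  bool pinned = node.output_device == "gpu" && !RestrictPinnedMemUsage();
  for (auto &queue : node.inputs)
    for (auto &tensor : queue)
      tensor.set_pinned(pinned);
}
```

Besides avoiding repeated string comparisons, hoisting makes it obvious at a glance that every tensor in every queue receives the same pinning decision.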
Signed-off-by: Michał Zientkiewicz <mzient@gmail.com>
CI MESSAGE: [4117733]: BUILD STARTED
CI MESSAGE: [4117733]: BUILD PASSED
…DIA#3728) * Make output buffers for argument inputs to GPU operators pinned. * Pin GPU operators' CPU inputs and all mixed operators' inputs (except decoders) Signed-off-by: Michał Zientkiewicz <mzient@gmail.com>
Signed-off-by: Michał Zientkiewicz mzient@gmail.com
Category:
Other Performance optimization
Description:
Until now, argument inputs were not treated as lying on a stage boundary, so they were not pinned. This change widens the scope of input-buffer pinning to avoid H2D copies from non-pinned memory.
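The policy this change extends can be sketched as follows. The types and the `ShouldPin` helper are simplified, hypothetical stand-ins, not DALI's real API; the point is only the decision rule: a CPU buffer is worth pinning iff some consumer will copy it to the device, and with this PR that now includes argument inputs of GPU operators.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical, simplified model of a buffer's consumers (illustration only).
struct Consumer {
  std::string device;  // "cpu", "gpu", or "mixed"
};

// Pin a CPU output buffer iff at least one consumer performs an H2D copy.
// After this change the rule also covers GPU operators' argument inputs.
bool ShouldPin(const std::vector<Consumer> &consumers) {
  for (const auto &c : consumers)
    if (c.device == "gpu" || c.device == "mixed")
      return true;
  return false;
}
```

Pinning only where a device copy actually occurs matters because pinned (page-locked) host memory is a limited resource, while copies from pageable memory force an extra staging step inside the CUDA driver.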
Additional information:
Affected modules and functionalities:
Key points relevant for the review:
Checklist
Tests
Pinnedness is not observable from the test suite; I verified locally in a debugger that the buffers are now pinned.
Documentation
DALI team only
Requirements
REQ IDs: N/A
JIRA TASK: DALI-2649