Simplify system_allocator and fix GPU_INFO #6653
Conversation
if (size > usable) return nullptr;
cudaError_t result = cudaMallocHost(&p, size);
Why remove cudaMallocHost?
Most of the memory required for training deep recurrent networks is used to store activations through each layer for use by back propagation, not to store the parameters of the network. For example, storing the weights for a 70M parameter network with 9 layers requires approximately 280 MB of memory, but storing the activations for a batch of 64 seven-second utterances requires 1.5 GB of memory. TitanX GPUs include 12 GB of GDDR5 RAM, and sometimes very deep networks can exceed the GPU memory capacity when processing long utterances. This can happen unpredictably, so it is desirable to avoid a catastrophic failure when it occurs.
The combination of fast memory allocation with a fallback mechanism that allows us to slightly overflow available GPU memory in exceptional cases makes the system significantly simpler, more robust, and more efficient.
This memory can be accessed directly by the GPU by forwarding individual memory transactions over PCIe at reduced bandwidth, and it allows a model to continue to make progress even after encountering an outlier.
Deep Speech 2 section 4.3, https://arxiv.org/pdf/1512.02595.pdf
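The mechanism the paper describes can be sketched roughly as follows (illustrative only, not this PR's code; the function name is hypothetical): try device memory first, and only on failure fall back to pinned, mapped host memory that the GPU can address directly over PCIe.

#include <cstddef>
#include <cuda_runtime.h>

// Illustrative fallback allocation. Assumes a 64-bit platform with
// unified addressing, where pinned host memory is directly addressable
// by GPU kernels (at PCIe bandwidth rather than GDDR5 bandwidth).
void* alloc_with_fallback(size_t size) {
  void* p = nullptr;
  if (cudaMalloc(&p, size) == cudaSuccess) {
    return p;  // fast path: device memory
  }
  cudaGetLastError();  // clear the out-of-memory error before falling back
  // Overflow path: pinned, mapped host memory lets training make progress
  // after an outlier batch instead of failing catastrophically.
  if (cudaHostAlloc(&p, size, cudaHostAllocMapped) == cudaSuccess) {
    return p;
  }
  return nullptr;  // truly out of memory
}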
From the Zen of Python:
Explicit is better than implicit.
Simple is better than complex.
In the previous implementation, cudaMallocHost was invoked IMPLICITLY when the GPU ran out of memory. Performance is very poor when cudaMallocHost is used to allocate the memory that GPU kernels run against, and since the fallback was implicit, it was hard to debug.
It may be better to fail fast when out of memory. It is explicit and simple.
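A minimal fail-fast sketch, assuming a cudaMalloc-backed allocator (hypothetical code, not necessarily what this PR does verbatim):

// Fail fast: surface out-of-memory at the call site instead of
// silently switching to pinned host memory.
void* CUDASystemAllocator::alloc(size_t size) {
  void* p = nullptr;
  if (cudaMalloc(&p, size) != cudaSuccess) {
    return nullptr;  // caller decides how to handle OOM
  }
  return p;
}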
If this feature is needed, we can also implement it as a separate decorator rather than combining the two pieces of logic. For example:
#include <cuda_runtime.h>  // cudaMallocHost, cudaSuccess

class CUDAFallbackAllocator {
 public:
  explicit CUDAFallbackAllocator(CUDASystemAllocator* allocator)
      : allocator_(allocator) {}

  // Try the wrapped GPU allocator first; fall back to pinned host memory.
  void* alloc(size_t size) {
    void* ptr = allocator_->alloc(size);
    if (ptr == nullptr) {
      // cudaMallocHost returns an error code; the allocated pointer
      // comes back through its first argument.
      if (cudaMallocHost(&ptr, size) != cudaSuccess) {
        return nullptr;
      }
    }
    return ptr;
  }

 private:
  CUDASystemAllocator* allocator_;
};
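Call sites would then opt into the fallback explicitly, e.g. (sketch):

CUDASystemAllocator system_allocator;
CUDAFallbackAllocator fallback(&system_allocator);
void* p = fallback.alloc(1 << 20);  // pinned host memory only if the GPU is full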
I see, thanks.
Fix #6651