
Implemented kDLCPUPinned (cudaMallocHost) #4985

Merged 4 commits on Mar 10, 2020

Conversation

jmorrill (Contributor) commented Mar 4, 2020

Data allocated via cudaMallocHost is supposed to transfer faster to/from a CUDA device, but this was not implemented in the TVM runtime.
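To illustrate why pinned allocation matters (a hedged sketch, not code from this PR, and it requires a CUDA-capable machine to run): `cudaMemcpyAsync` can only overlap with other work when the host buffer is page-locked; with pageable memory the driver falls back to a staging copy through an internal pinned buffer.

```cuda
#include <cuda_runtime.h>

int main() {
  const size_t nbytes = 1 << 20;
  float* host = nullptr;
  float* dev = nullptr;

  // Page-locked (pinned) host allocation: DMA-able, so host-to-device
  // copies run at full bus speed and can be truly asynchronous.
  cudaMallocHost(reinterpret_cast<void**>(&host), nbytes);
  cudaMalloc(reinterpret_cast<void**>(&dev), nbytes);

  cudaStream_t stream;
  cudaStreamCreate(&stream);
  // Asynchronous only because `host` is pinned; a malloc'd buffer here
  // would make this copy effectively synchronous.
  cudaMemcpyAsync(dev, host, nbytes, cudaMemcpyHostToDevice, stream);
  cudaStreamSynchronize(stream);

  cudaFree(dev);
  cudaFreeHost(host);  // pinned memory must be freed with cudaFreeHost
  cudaStreamDestroy(stream);
  return 0;
}
```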

The DeviceAPIs treat each DLDeviceType as its own device, so kDLCPUPinned felt a little out of place: it is essentially kDLCPU (host memory), but the allocation is really owned by kDLGPU (the CUDA API).

I felt the least complicated path was to register "device_api.cpu_pinned" as an alias for "device_api.gpu" and implement the kDLCPUPinned logic in CUDADeviceAPI.

Some small checks also needed to be modified. Not sure if I missed any.

Open to suggestions if my implementation is way off.

@tqchen tqchen merged commit fd39c5c into apache:master Mar 10, 2020
tqchen (Member) commented Mar 10, 2020

Thanks @jmorrill! This is merged.

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Apr 16, 2020
* implement kDLCPUPinned

* Fix line endings

* Fix whitespace for linter

* cleanup up allocdataspace method
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Apr 17, 2020