Is it possible to run inference on CPU? In the forward function of roi_Xconv1fc_gn_head_panet in fast_rcnn_heads.py, it relies on the GPU version of RoI Align. How can this issue be solved?
The basic change I had to make was replacing all of the .cuda() calls that push tensors to devices with tensor.to(torch.device(device_id)):
# Pick the target device based on where the input tensors live.
if blobs_in[0].is_cuda:
    device_id = blobs_in[0].get_device()  # CUDA device index, e.g. 0
else:
    device_id = 'cpu'
# .to() is not in-place, so keep the returned tensor.
X = X.to(torch.device(device_id))
This allows you to set the device to CPU. I also had to adjust data_parallel so that it can handle CPU devices. In the repo above I created a datasingular that has the changes.
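For context, here is a minimal sketch of the kind of device-agnostic wrapping described above; the helper name wrap_for_device and the build_model call are illustrative assumptions, not part of the PANet codebase:

import torch
import torch.nn as nn

def wrap_for_device(model, device_id):
    # Hypothetical helper: move the model to the requested device and only
    # wrap it in DataParallel when CUDA devices are actually available.
    device = torch.device(device_id)
    model = model.to(device)
    if device.type == 'cuda' and torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)  # DataParallel assumes CUDA tensors
    return model

# Usage for CPU inference (build_model is a placeholder):
# model = wrap_for_device(build_model(), 'cpu')
# with torch.no_grad():
#     outputs = model(inputs.to('cpu'))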