This repository has been archived by the owner on Jan 26, 2022. It is now read-only.

Inference on CPU #222

Open
ashnair1 opened this issue May 21, 2019 · 1 comment

Comments

@ashnair1

Is it possible to run inference on the CPU? In the forward function of `roi_Xconv1fc_gn_head_panet` in `fast_rcnn_heads.py`, it relies on the GPU version of RoIAlign. How can this issue be solved?
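One possible workaround, not discussed in this thread, is to swap the repo's custom CUDA RoIAlign for `torchvision.ops.roi_align`, which ships both CPU and CUDA kernels (assuming a torchvision recent enough to include `torchvision.ops`, i.e. 0.3+). A minimal sketch; the shapes, scale, and sampling ratio below are illustrative, not taken from this repo:

```python
import torch
from torchvision.ops import roi_align  # has both CPU and CUDA kernels

# Illustrative inputs: one feature map and two RoIs.
features = torch.randn(1, 256, 50, 50)            # (N, C, H, W)
# RoIs given as (batch_index, x1, y1, x2, y2) in input-image coordinates.
rois = torch.tensor([[0., 10., 10., 100., 100.],
                     [0., 20., 30., 150., 120.]])

# spatial_scale maps box coordinates onto the feature map (e.g. 1/16 for a
# stride-16 backbone); sampling_ratio=2 is a common Detectron-style default.
pooled = roi_align(features, rois, output_size=(7, 7),
                   spatial_scale=1.0 / 16, sampling_ratio=2)
print(pooled.shape)  # torch.Size([2, 256, 7, 7])
```

Because this op runs on whatever device its inputs live on, the same head code can serve both CPU and GPU inference.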

@jmills09

@Ash1995

I'm running a fork of this repository that has CPU compatibility for just the inference script.

Unfortunately, I have already modified the entire repo beyond the point of it likely being useful to you, because I'm using a non-COCO dataset. (It's https://github.com/NuTufts/Detectron.pytorch/tree/cpu_train, but I don't recommend trying to use it.)

The basic change I had to make was replacing all of the `.cuda()` calls that push tensors to devices with `tensor.to(torch.device(device_id))`:

```python
# Pick the device from the input blob: reuse its GPU if it has one,
# otherwise fall back to the CPU.
if blobs_in[0].is_cuda:
    device_id = blobs_in[0].get_device()  # CUDA ordinal (an int)
else:
    device_id = 'cpu'
# Note: .to() returns a new tensor; it does not modify X in place.
X = X.to(torch.device(device_id))
```

This allows you to set the device to CPU. I also had to adjust `data_parallel` so that it could handle CPU devices; in the repo above I created a `datasingular` module that has those changes.
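As a rough illustration of that `data_parallel` adjustment (this is not the actual `datasingular` code, just a minimal sketch of the idea): wrap the model in `nn.DataParallel` only when CUDA devices exist, and otherwise run it bare on the CPU.

```python
import torch
import torch.nn as nn

def maybe_data_parallel(model):
    """Hypothetical helper: wrap in DataParallel only when GPUs exist.

    nn.DataParallel assumes CUDA devices, so on a CPU-only machine we
    return the bare model and let forward() run on a single device.
    """
    if torch.cuda.is_available() and torch.cuda.device_count() > 0:
        return nn.DataParallel(model.cuda())
    return model  # CPU path: no wrapper, no .cuda() calls

# Usage: model = maybe_data_parallel(build_model())
```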
