model_utils.py's make_and_restore_model fails without cuda #15
Comments
The following change successfully avoids the runtime error:

But a subsequent error (…) still occurs. Did I miss something in the README or requirements.txt saying that CUDA is required?
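A minimal sketch of this kind of fix, assuming it comes down to passing map_location to the torch.load call in model_utils.py (the checkpoint path below is a placeholder, not a path from this issue):

```python
import torch

# Map CUDA tensors in the checkpoint onto the CPU instead of their original
# device, so a GPU-trained checkpoint can be restored on a CPU-only install.
checkpoint = torch.load('path/to/checkpoint.pt',
                        map_location=torch.device('cpu'))
```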
Same assertion error when trying to generate an adversarial image. Stack trace:
Hello, currently CUDA is required and CPU support is not on our timeline, as training models on CPU tends to be extremely impractical for the vast majority of use cases. I've updated the README to reflect this as well.
@andrewilyas, I understand not wanting to train models on CPU, but what about using models on CPU? This seems quite reasonable.
Looking at the …
Both classes require gradient access to the model (backprop), which takes significantly longer on a CPU. Given that the common use case is using a GPU for inference, we don't plan to implement CPU support. That being said, you can fork the project and make the necessary adjustments (it should not be too much work, just not something we have the bandwidth to do at the moment).
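For anyone who does fork, a minimal sketch of the kind of adjustment involved, assuming the blockers are unconditional .cuda() calls and the default device used by torch.load (the helper names below are illustrative, not the upstream code):

```python
import torch

def resolve_device():
    # Prefer the GPU when one is available; otherwise fall back to the CPU.
    return torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def load_checkpoint(resume_path):
    # Remap checkpoint tensors onto the resolved device, so a checkpoint
    # saved from a GPU run can still be restored on a CPU-only machine.
    return torch.load(resume_path, map_location=resolve_device())

def prepare_model(model):
    # Replace hard-coded model.cuda() calls with a device-aware move.
    return model.to(resolve_device())
```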
I'm running PyTorch version '1.2.0+cpu'. I'm trying to get the introductory example with pretrained models to work with the CIFAR10 L2-norm (ResNet50) ε = 0.0 model, but I receive the following error:

It appears that the appropriate place to modify torch.load() is line 71 of model_utils.py.
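For reference, the introductory example being run looks roughly like the following, with the data and checkpoint paths as placeholders and the keyword names taken from the robustness package documentation:

```python
from robustness import model_utils
from robustness.datasets import CIFAR

# Placeholder paths; the checkpoint is the pretrained CIFAR10 L2-norm
# (ResNet50) eps = 0.0 model referred to above.
ds = CIFAR('/path/to/cifar')
model, _ = model_utils.make_and_restore_model(
    arch='resnet50',
    dataset=ds,
    resume_path='/path/to/cifar_l2_eps0.pt',
)
model.eval()

# On a CPU-only build such as torch 1.2.0+cpu, the restore step fails inside
# the torch.load call (around line 71 of model_utils.py, as noted above),
# because the checkpoint is loaded back onto a CUDA device by default.
```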