chore: add use_gpu for cifar finetuning #882
Conversation
Thanks, but you need to use the new CML GPU API instead of the Concrete one.
use_case_examples/cifar/cifar_brevitas_finetuning/cifar_utils.py (outdated, resolved)
use_case_examples/cifar/cifar_brevitas_finetuning/PerrorImpactOnFMNIST.ipynb (outdated, resolved)
Force-pushed from 094545e to e57809c
@kcelia Can you please confirm you can run these notebooks on your machine?
use_case_examples/cifar/cifar_brevitas_finetuning/FromImageNetToCifar.ipynb (outdated, resolved)
use_case_examples/cifar/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb (outdated, resolved)
use_case_examples/cifar/cifar_brevitas_finetuning/CifarInFheWithSmallerAccumulators.ipynb (outdated, resolved)
use_case_examples/cifar/cifar_brevitas_finetuning/CifarInFhe.ipynb (outdated, resolved)
Force-pushed from fc37f69 to 7fb522d
Force-pushed from 7fb522d to 46b831e
- model, images, n_bits, rounding_threshold_bits=None, fhe_mode="disable", use_gpu=False
+ model,
+ images,
+ n_bits,
+ rounding_threshold_bits=None,
+ fhe_mode="disable",
+ compilation_device="cpu",
I would have preferred use_gpu, or just device=... to do the same as torch.
@jfrery, use_device might sound like a boolean type, but in the compilation functions it's actually a string that can be either 'cpu' or 'gpu'. The reason it's not simply named device is that the PyTorch device and the CML one may differ. This distinction helps differentiate between the device used for PyTorch training/evaluation and the one used for CML compilation.
@jfrery the API was decided some time ago.
device="cuda|cpu" is perfect, yes, but here we have compilation_device.

> The reason it's not simply named device is because the PyTorch device and the CML one may differ.

Well, as far as I know, we try to stay close to the torch API. We did it for sklearn and the others; I don't see why we would want to differ now. device seems the right choice here.
We assume that the compile_device may differ from the device used to train the model.
Additionally, on standard machines, setting compile_device="cpu" is currently faster than compile_device="cuda". So, it's better to keep them separate.
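The distinction being argued here can be sketched as follows. This is a minimal, hypothetical illustration, not the real Concrete ML code: DummyModel, the function body, and the returned dict are stand-ins; only the signature mirrors the one shown in the diff above.

```python
class DummyModel:
    """Stand-in for a torch model; `device` mimics where its weights live."""
    def __init__(self, device="cuda"):
        self.device = device

def get_compiled_circuit(model, images, n_bits,
                         rounding_threshold_bits=None,
                         fhe_mode="disable",
                         compilation_device="cpu"):
    # compilation_device is a string ("cpu" or "cuda"), not a boolean flag,
    # and may legitimately differ from the device the model was trained on.
    if compilation_device not in ("cpu", "cuda"):
        raise ValueError(f"Unsupported compilation_device: {compilation_device!r}")
    # A real implementation would call the CML compilation function here.
    return {"model_device": model.device,
            "compilation_device": compilation_device}

# Model trained on GPU, circuit compiled on CPU: the two knobs stay separate.
cfg = get_compiled_circuit(DummyModel(device="cuda"), images=[], n_bits=4)
print(cfg["model_device"], cfg["compilation_device"])
```

With a single device argument, the "train on cuda, compile on cpu" combination above could not be expressed, which is the crux of the disagreement in this thread.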
Force-pushed from a7b37a4 to 218c666
One small error remains.
use_case_examples/cifar/cifar_brevitas_training/evaluate_torch_cml.py (outdated, resolved)
use_case_examples/cifar/cifar_brevitas_finetuning/CifarQuantizationAwareTraining.ipynb (outdated, resolved)
I tested on my machine; all the scripts are working.
Force-pushed from 40bc329 to 1bf1cad
Coverage failed ❌
Force-pushed from 1bf1cad to f4f7c30
Force-pushed from f4f7c30 to e4934c8
No description provided.