This repository has been archived by the owner on Sep 18, 2024. It is now read-only.
Describe the bug:
As far as I know, most post-training quantization (PTQ) methods do not require retraining, but NNI's current PTQ implementation seems to require running the training process. Retraining would cost too much time. Is there any way to run NNI's PTQ without training?
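For context on what "PTQ without training" means in practice: static PTQ only needs a calibration phase, i.e. forward passes over a small dataset to observe activation ranges, with no backward pass or optimizer step. Below is a minimal, framework-free sketch of that calibration math (affine min/max quantization); the helper names are hypothetical and this is not NNI's API, just an illustration of why no training loop is needed.

```python
def calibrate(samples, qmin=0, qmax=255):
    """Observe the dynamic range over calibration batches (forward-only).

    `samples` is a list of activation batches (lists of floats).
    Returns the affine quantization parameters (scale, zero_point).
    """
    lo = min(min(batch) for batch in samples)
    hi = max(max(batch) for batch in samples)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - lo / scale)
    return scale, max(qmin, min(qmax, zero_point))


def quantize(xs, scale, zero_point, qmin=0, qmax=255):
    """Map floats to integers: q = clamp(round(x / scale + zp))."""
    return [max(qmin, min(qmax, round(x / scale + zero_point))) for x in xs]


def dequantize(qs, scale, zero_point):
    """Map integers back to floats: x = (q - zp) * scale."""
    return [(q - zero_point) * scale for q in qs]
```

Because `calibrate` only reads activation statistics, the whole pipeline is inference-only; frameworks that fuse PTQ into a training loop do so for API uniformity, not out of mathematical necessity.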
Environment:
NNI version:
Training service (local|remote|pai|aml|etc):
Python version:
PyTorch version:
CPU or CUDA version:
Reproduce the problem
Code|Example:
How to reproduce: