This repo provides the implementation of the paper "LOT: Layer-wise Orthogonal Training on Improving ℓ2 Certified Robustness".
Follow the instructions in the SOC repository to set up the environment.
To train a model, run:

```
python train_robust.py --conv-layer lot --activation ACT --block-size BLOCKS --dataset DATASET --gamma GAMMA --opt-level O0 --residual
```
- ACT: maxmin or hh1.
- BLOCKS: number of blocks, an integer from 1 to 8.
- DATASET: cifar10 or cifar100.
- GAMMA: coefficient for the certificate regularization.
- LOT does not support the O2 optimization level, as it requires high floating-point precision.
- Use --lln to enable last layer normalization.
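For instance, a CIFAR-10 run with the options above filled in might look like the following (the gamma value here is an arbitrary placeholder, not a recommended setting):

```shell
python train_robust.py --conv-layer lot --activation maxmin --block-size 4 \
    --dataset cifar10 --gamma 0.5 --opt-level O0 --residual --lln
```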