I figured out the issue: I believe it's due to different statistics being used in the BN layers of the subnetworks, which can produce very different activation maps. E.g., if you do the forward pass in training mode but sample backward in evaluation mode, the recreated input can be orders of magnitude different in scale. Likewise if you do the forward pass with a full batch in training mode but sample backward with a sub-batch.
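To illustrate the diagnosis, here is a minimal PyTorch sketch (not the repo's code; the variable names are my own). It inverts a train-mode BatchNorm forward pass twice: once with the batch statistics that were actually used, and once with the running (eval-mode) statistics. Only the first recovers the input:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(4, affine=False)  # no learnable scale/shift, to keep the inverse simple

# A batch whose statistics are far from the default running stats (mean 0, var 1)
x = torch.randn(8, 4) * 3 + 5

bn.train()
y = bn(x)  # forward pass normalizes with *batch* mean/var

# Correct inverse: undo the normalization with the same batch statistics
mu, var = x.mean(0), x.var(0, unbiased=False)
x_rec = y * (var + bn.eps).sqrt() + mu

# Wrong inverse: undo it with the running (eval-mode) statistics instead
x_bad = y * (bn.running_var + bn.eps).sqrt() + bn.running_mean

print((x_rec - x).abs().max())  # ~0: exact reconstruction
print((x_bad - x).abs().max())  # large: eval-mode stats don't match the forward pass
```

The same mismatch arises between a full batch and a sub-batch in training mode, since the batch statistics themselves change.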
Have you tried using something like InstanceNorm instead, given that BN effectively becomes non-invertible when the forward and backward passes use different modes (eval vs. training) or differently sized batches in training mode?
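A quick sketch of why that might help (assumes PyTorch; not tested against this repo): InstanceNorm normalizes each sample independently, so its output doesn't depend on the batch size or on train/eval mode, unlike BatchNorm:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 3, 5, 5)

inorm = nn.InstanceNorm2d(3)
full = inorm(x)      # statistics computed per sample, per channel
sub = inorm(x[:2])   # a sub-batch produces identical activations

print(torch.allclose(full[:2], sub, atol=1e-6))  # True
```

Of course, the per-sample statistics would still need to be recoverable during the inverse pass, so this is only a direction, not a drop-in fix.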
Anyway, I'll leave this issue open for now, in case my diagnosis is incorrect. Feel free to close. Any insights into the impact of BN would also be welcome - thanks!
I noticed that the sampling function seems to be out of date:
`ibinn_imagenet/ibinn_imagenet/model/classifiers/invertible_imagenet_classifier.py` (line 171, commit `04b2ab9`)
However, I'm having trouble doing a proper reverse sampling - for instance, if I use the following code snippet, I am unable to recreate the input from the output:
`x` will be in the range [0, 1], but `rev_ims` will be in the range of roughly [-100, 100]. Is there something I'm doing trivially wrong here?
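For reference, here is a self-contained sketch of the reconstruction check I'm expecting to pass (the `AdditiveCoupling` layer and its method names are my own, not this repo's API). A coupling layer is exactly invertible, so forward-then-inverse recreates the input to numerical precision, provided no mode-dependent layer like BatchNorm changes its statistics between the two passes:

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Splits the input in two halves; the second half is shifted by a
    function of the first, which can be subtracted back out exactly."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, dim // 2), nn.Tanh(),
                                 nn.Linear(dim // 2, dim // 2))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.net(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.net(y1)], dim=1)

torch.manual_seed(0)
layer = AdditiveCoupling(8).eval()  # a single mode for both directions
x = torch.rand(4, 8)                # inputs in [0, 1], as in the issue
with torch.no_grad():
    x_rec = layer.inverse(layer(x))
print((x_rec - x).abs().max())      # ~0, not a [-100, 100] blow-up
```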