When debugging vision models, it is often useful to be able to map predicted bounding boxes, segmentation masks, or keypoints back onto the original image. To do this conveniently, each transformation should know how to invert itself. A discussion about this can be found in this thread. While useful, it was deemed a lower priority than adding general support for non-image input types in the prototype transforms. However, from the preliminary discussions, inverting transformations seems not to conflict with the proposal and thus can be added later.
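To make the idea concrete, here is a minimal sketch of what a self-inverting transformation could look like. All names (`HorizontalFlip`, `inverse`) are hypothetical and not part of the torchvision API; boxes are plain `(x1, y1, x2, y2)` tuples for illustration.

```python
# Hypothetical sketch of a transform that knows its own inverse.
# Not torchvision API; names and signatures are illustrative only.

class HorizontalFlip:
    """Flip a bounding box (x1, y1, x2, y2) across a canvas of given width."""

    def __init__(self, width):
        self.width = width

    def __call__(self, box):
        x1, y1, x2, y2 = box
        return (self.width - x2, y1, self.width - x1, y2)

    def inverse(self):
        # A horizontal flip is its own inverse.
        return HorizontalFlip(self.width)


flip = HorizontalFlip(width=100)
box = (10, 20, 30, 40)
predicted = flip(box)                 # box in the transformed frame
restored = flip.inverse()(predicted)  # mapped back onto the original image
```

A deterministic transform like this can expose `inverse()` statically; the complications discussed below arise once random parameters enter the picture.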
Apart from the thread linked above, there were some discussions without written notes. They are listed here so they don’t get lost:
- While some transformations can be statically inverted, transformations with random elements can only be inverted for a specific sampled parameter set. In the thread linked above, this parameter set would need to be returned by the forward transformation and used for the inverse. Instead of passing the parameter set around, the transformation could also save it from the last call and use that for inversion.
- Some transformations are only pseudo-invertible. For example, while cropping is the true inverse of padding, the same is not true the other way around: cropping discards information that padding cannot restore, so padding is only the pseudo-inverse of cropping. The inversion functionality should have a strict flag that, if set, disallows pseudo-inverses.
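The two points above can be sketched together: a random crop that saves its last sampled parameters, whose inversion pads the result back to the original canvas, and a `strict` flag that rejects this pseudo-inverse. Everything here (`RandomCrop`, `invert`, `strict`, the list-of-lists "image") is a hypothetical illustration, not torchvision API.

```python
import random


class RandomCrop:
    """Crop a square from a 2D 'image' (list of rows of numbers).

    The sampled offsets are saved from the last call so that the crop can
    be pseudo-inverted by padding.  All names here are hypothetical.
    """

    def __init__(self, size):
        self.size = size
        self._last_params = None  # (top, left, orig_height, orig_width)

    def __call__(self, img):
        h, w = len(img), len(img[0])
        top = random.randint(0, h - self.size)
        left = random.randint(0, w - self.size)
        # Save the sampled parameter set instead of returning it to the caller.
        self._last_params = (top, left, h, w)
        return [row[left:left + self.size] for row in img[top:top + self.size]]

    def invert(self, img, strict=False):
        if strict:
            # Padding is only a pseudo-inverse: the cropped-away pixels are gone.
            raise ValueError("RandomCrop has no exact inverse")
        top, left, h, w = self._last_params
        # Pad with zeros back to the original canvas size.
        out = [[0] * w for _ in range(h)]
        for i, row in enumerate(img):
            for j, value in enumerate(row):
                out[top + i][left + j] = value
        return out


img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
crop = RandomCrop(size=2)
cropped = crop(img)
padded_back = crop.invert(cropped)         # original size, zeros where data was lost
# crop.invert(cropped, strict=True) would raise, since only a pseudo-inverse exists
```

Note how `invert` only works for the last sampled parameter set; inverting before any forward call, or after a second forward call, would use the wrong (or no) parameters, which is exactly the trade-off of saving state versus passing the parameter set around explicitly.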