Pooled texture model #284
base: main
Conversation
…onalVision/plenoptic into pooled_texture
Several steps all in here together:
- store the n_autocorrs as a hidden attribute, for easy reference
- rename n_shifts to n_autocorrs throughout
- fix the autocorr calculation: the output of torch.gather has the same shape as the *indices*. This meant we were doing something weird when computing the autocorrs of the magnitudes, which have an orientation dimension as well; changed how those indices are created in order to fix this
- fix shape_dict and the necessary-stats mask to work with the new shapes: there are now only 4 pixel stats, the autocorrs dropped a dimension, and we no longer have any unnecessary stats
- get compute_cross_correlation working. This also required changing the shape of the variance returned by the autocorr function, which we then reshape afterwards
- get plot_representation callable. This still needs more work, but right now it runs, creating a bunch of images
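The gather behavior mentioned above is easy to verify in isolation; a minimal sketch (illustrative values, unrelated to the model's actual indices):

```python
import torch

# torch.gather's output takes its shape from the *index* tensor, not the input.
x = torch.arange(12.).reshape(3, 4)
idx = torch.zeros(2, 4, dtype=torch.int64)  # fewer rows than x along dim 0
out = torch.gather(x, 0, idx)
print(out.shape)  # torch.Size([2, 4]) -- matches idx, not x
```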
change back to using stem plot, gets update_plot working on GPU
Two changes here:
- use register_buffer for masks and autocorr_rolls, so that we can move the model to GPU correctly
- when downsampling the masks, scale the pixel values up so that the sum remains approximately the same across scales
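A minimal sketch of both changes (the class name MaskedModel is just for illustration, not plenoptic's actual class): buffers registered with register_buffer move with the module under .to(), and multiplying an average-pooled mask by the pooling area keeps its sum constant across scales:

```python
import torch
from torch import nn

class MaskedModel(nn.Module):
    def __init__(self, masks: torch.Tensor):
        super().__init__()
        # a plain tensor attribute would be left behind by model.to("cuda");
        # a registered buffer moves with the module (and is saved in state_dict)
        self.register_buffer("masks", masks)

masks = torch.rand(2, 8, 8)
model = MaskedModel(masks)

# downsample by 2 and scale pixel values up by the pooling area (2*2 = 4),
# so each mask's sum stays (approximately) the same across scales
small = torch.nn.functional.avg_pool2d(model.masks, 2) * 4
print(torch.allclose(small.sum((-2, -1)), model.masks.sum((-2, -1))))  # True
```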
…ilar magnitude as other stats
With this:
- use blur_downsample rather than shrink to downsample the masks across scales (avoids aliasing)
- check that the mask has no negative values (which mess up the computation of, say, the variance)
- clip the downsampled masks so that they have no negative values, for the same reason as above
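Why negative mask values are a problem: with a negative weight, a weighted "variance" can come out negative. A small torch-only sketch with made-up values:

```python
import torch

x = torch.tensor([0., 1.])
w = torch.tensor([-0.5, 1.5])                  # a mask with a negative value
mean = (w * x).sum() / w.sum()
var_neg = (w * (x - mean) ** 2).sum() / w.sum()
print(var_neg)                                 # tensor(-0.7500): a negative "variance"

w_clipped = w.clamp(min=0)                     # clip, as in the commit above
mean = (w_clipped * x).sum() / w_clipped.sum()
var_clipped = (w_clipped * (x - mean) ** 2).sum() / w_clipped.sum()
print(var_clipped >= 0)                        # tensor(True)
```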
Two changes, intermixed here:
- add _division_epsilon and _sqrt_epsilon attributes, which are used to make the sqrt and division operations more stable, in order to avoid NaNs or very large numbers. Still needs work to determine what values are reasonable
- remove some remaining redundant autocorrs: we were including the "other side" of the diagonal, as well as the "zero shift" term itself. Got rid of those, and then added the extra shifts for even spatial_corr_width
… into pooled_texture
Instead of separate epsilons for division and sqrt, we now use a single epsilon. This works as long as the windows aren't *too* small, which can be checked by taking their sum. This also removes the unstable_locs function, which, with the above change, was no longer necessary.
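A sketch of the stabilization, with an epsilon value assumed purely for illustration (the commit notes that the right value depends on the window sums):

```python
import torch

EPS = 1e-6  # illustrative value, not the one chosen in the PR

def safe_sqrt(x, eps=EPS):
    # plain sqrt has an infinite gradient at 0, which produces inf/NaN in training
    return torch.sqrt(x + eps)

def safe_div(num, den, eps=EPS):
    # avoids inf/NaN when the denominator underflows to 0
    return num / (den + eps)

v = torch.tensor(0.0, requires_grad=True)
safe_sqrt(v).backward()
print(torch.isfinite(v.grad))  # tensor(True); plain sqrt would give inf here
```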
Adding thoughts to questions/comments inline below:

> Currently, both of these approaches are supported -- should they be? Or should we require the user to pass two masks that we multiply together? I can't think of any case in which someone would pass a list with three or more mask tensors, but that would work as well. Even if we allow a list of masks, should we allow more than two?

At least for me, I don't see a need for more than two mask tensors, or more than two masks more generally, although I do think that, for simplicity, allowing either format of mask input may be useful. I guess large images are where using the product of the two masks will be important: I can imagine a world where I will want to measure statistics on larger images, where this will matter.

> Before moving on, another question: how to handle mask normalization? The model works best if the individual masks (as shown above) sum to approximately 1 because, otherwise, some of the statistics end up being much larger than the others and optimization is hard. Currently, we're not normalizing within the model code, but should we?

I would normalize within the model code; it's one of the things I could imagine a first-time user forgetting even if you make it very explicit. At the very least, raise a warning when the masks are not normalized, reminding the user that this is an issue.

> For the pooling regions, you want to avoid aliasing. This is an interaction between the sampling distance and the function used to generate the regions, and can be checked by seeing if they're equivariant to translation (if the windows are Cartesian) or rotation / dilation (if the windows are polar). I don't think the model should check this, but we can show an example of how to check this in our documentation and point out that this is important.

This seems reasonable to me.

> The foveated pooling windows are not currently in plenoptic and are a pain to implement yourself. Pytorch implementations exist (as well as ways to get existing windows into pytorch), including one that I wrote. I can show examples making use of some of these, is that sufficient? None of them are python packages (so they can't be installed with pip), but I could at least package the one I wrote. I've hesitated because it's definitely research code, but it would simplify the process for people.

I have always found using the pooling-windows repo straightforward, so I think it is sufficient. You could package it, but cloning has always been easy to get running for me.
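A sketch of the in-model normalization being discussed (a hypothetical helper, not plenoptic's actual API), for a (n_masks, height, width) tensor:

```python
import torch

def normalize_masks(masks: torch.Tensor) -> torch.Tensor:
    # divide each mask by its spatial sum so that every mask sums to 1
    sums = masks.sum(dim=(-2, -1), keepdim=True)
    return masks / sums

masks = normalize_masks(torch.rand(3, 8, 8))
print(torch.allclose(masks.sum((-2, -1)), torch.ones(3)))  # True
```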
Documentation built by flatiron-jenkins at http://docs.plenoptic.org/docs//pulls/284
With this commit, we now have a single PS class which accepts a weights argument. If None (the default), then we compute stats across the whole image; else, we use them as weighting regions. In both cases, the output is now 4d, with the third dimension holding the number of weighting regions (a singleton if weights=None). To accomplish this, we make use of the WeightedAverage class, as well as new helper modules, StatsComputer and WeightedComputer, which handle the computations for pixel_stats, autocorr, cross-corr, and skew/kurt_recon. Docstrings need to be updated / written, everything needs to be tested a bit more (and new tests need to be written), and it's possible things can be made simpler / more efficient, but I believe the basics are now working.
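A hypothetical sketch of the WeightedAverage idea described above (the name, signature, and shapes here are assumptions for illustration, not plenoptic's actual API): each weighting region turns a spatial map into one number, so a (batch, channel, height, width) input becomes (batch, channel, n_regions):

```python
import torch
from torch import nn

class WeightedAverage(nn.Module):
    def __init__(self, weights: torch.Tensor):
        super().__init__()
        # weights: (n_regions, H, W), each region summing to ~1
        self.register_buffer("weights", weights)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, channel, H, W) -> (batch, channel, n_regions)
        return torch.einsum("bchw,nhw->bcn", x, self.weights)

wa = WeightedAverage(torch.ones(3, 4, 4) / 16)  # three uniform regions
out = wa(torch.rand(2, 1, 4, 4))
print(out.shape)  # torch.Size([2, 1, 3])
```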
This pull request adds the pooled texture statistic model from Freeman and Simoncelli, 2011 and elsewhere. In our implementation, the statistics are computed with user-defined `masks`, a list of 3d tensors of shape `(masks, height, width)`. The list may contain any number of tensors, and they will be multiplied together to create the final pooling regions. The intended use case is to define the regions as two-dimensional and separable, either in x and y, or in polar angle and eccentricity.

This pull request contains a notebook, `pooled_texture_model`, which demonstrates the usage of the model with different images and types of windows.

This pull request is not yet ready; the following questions still need to be addressed:
Math / model details:

- Picking the epsilon value used to make sure the `sqrt` and division operations in the model don't return NaNs or very large numbers. This has worked for the models and windows I've tested it with, assuming that the above normalization has been done (I noticed that if the window sums are too small, a larger epsilon is needed). But it feels fairly arbitrary; I can try to figure out the relationship between the window sum and the epsilon value and then do one of the following: use the window sum to determine the epsilon; raise an error if the window sum looks wrong; normalize the windows myself.

Software design:

- The autocorrelations are currently computed using `torch.gather` at run-time. This works well on the GPU, but not on the CPU, and isn't very memory-efficient. I'm not sure if there's a better way here.

User interface:

- The `masks` argument is a list containing some number of 3d tensors, which are all multiplied together. In practice, I think the majority of users will pass two tensors here (x/y or angle/eccentricity), and I can't imagine a use case for three or more. Should I just require two tensors as the input? Or keep the list (allowing one or two), but raise an exception if more than two are passed? Allowing a single mask lets me test the pooled version against the original (see below).
- The foveated pooling windows are not currently in plenoptic and are a pain to implement yourself. Pytorch implementations exist, but none of them are python packages (so they can't be installed with `pip`); I could at least package the one I wrote. I've hesitated because it's definitely research code, but it would simplify the process for people.

Testing / relationship to other models:
- What tests to write? Regression tests, like the regular model: cache its output, metamer result, and `_representation_scales`.
- Create a model with a single image-wide mask and compare it against the regular model. The following works:
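The core reason this comparison should work can be sketched in plain torch (illustrative, not the actual test): with a single image-wide mask whose pixels sum to 1, the weighted average reduces to the global mean:

```python
import torch

img = torch.rand(1, 1, 16, 16)
# one image-wide mask whose pixels sum to 1
mask = torch.ones(1, 16, 16) / (16 * 16)
pooled = (mask * img).sum(dim=(-2, -1))   # weighted "pooled" statistic
global_ = img.mean(dim=(-2, -1))          # the regular model's global statistic
print(torch.allclose(pooled, global_))    # True
```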
- I would like to do something similar but with a "folded image", like so:
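One plausible construction of such a "folded image" (a guess for illustration only): the image is mirrored into four quadrants, with one quadrant-selecting mask each:

```python
import torch

im = torch.rand(1, 1, 8, 8)
# reflect the image into four mirror-image quadrants
top = torch.cat([im, im.flip(-1)], dim=-1)
folded = torch.cat([top, top.flip(-2)], dim=-2)  # (1, 1, 16, 16)

# four masks, one per quadrant, each summing to 1
masks = torch.zeros(4, 16, 16)
masks[0, :8, :8] = 1 / 64
masks[1, :8, 8:] = 1 / 64
masks[2, 8:, :8] = 1 / 64
masks[3, 8:, 8:] = 1 / 64
```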
However, this doesn't work. In the regular model run on the "folded image" above, the steerable pyramid coefficients, their magnitudes, and the reconstructed lowpass images will have zero mean within each of the four regions, whereas in the pooled version they only have a global zero mean: zero across the whole image, but not within each region. This ends up being fine for the purposes of synthesis, but means the outputs are quite different.
- Maybe I can just synthesize the two metamers and (after rearrangement) check that each version of the model agrees it's a good metamer?
- It would be nice to better understand the relationship between my implementation and at least Freeman's implementation and Brown et al., 2023's. The actual implementation details are different enough that getting the model outputs to match seems unlikely, but maybe check that metamers for those models are decent metamers for mine?
- Any other tests?