Avoid unnecessary computation of magnitude variance in Portilla-Simoncelli model #281
Conversation
turns out the variance of the magnitudes was already being computed when we took their autocorrelations, so this restructures `_compute_cross_correlation` to reuse that instead of recomputing it
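(For context, a minimal sketch of the identity this reuse relies on; illustration only, not plenoptic's code. For a mean-centered signal, the zero-lag element of the autocorrelation is exactly the variance, so once the autocorrelation is in hand the variance comes for free.)

```python
import torch

# A stand-in for one mean-centered magnitude band.
x = torch.randn(32, 32)
x = x - x.mean()

# Autocorrelation via the Wiener-Khinchin theorem: inverse FFT of the power spectrum.
autocorr = torch.fft.ifft2(torch.fft.fft2(x).abs() ** 2).real / x.numel()

# The zero-lag element of the autocorrelation is the variance,
# so it can be read off instead of recomputed.
assert torch.allclose(autocorr[0, 0], x.var(unbiased=False))
```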
in the previous commit, both of these tensors had orientations along the final dimension and scales along the second-to-last; this commit swaps those two. this leaves mags_std the same as before these changes, but swaps the final two dimensions of auto_corr_mag. worth doing because it makes auto_corr_mag consistent with the other statistics, which always have scales along the final dim
because we changed how we compute the magnitude variance, the output of synthesis has changed (though in a way that is perceptually undetectable).
for these regression tests, the swapping of the autocorr mags dimensions caused them to fail. this fixes that by swapping them back for the tests
right now, the GPU and CPU synthesis are giving the same output. that's weird, but alright then.
now it's part of the defaults
A question: because of the dimension swap, I had to update the tests for Portilla-Simoncelli as well. For the matlab tests, this makes sense: the statistics are ordered differently in the two implementations, so the test should include swapping the matlab representation's dimensions as appropriate. For the regression tests against earlier versions of plenoptic, that feels a little odder to me. You can see these changes in …
Jenkins was running out of CUDA memory with the 80 workers that `-n auto` spawns
Codecov Report: All modified and coverable lines are covered by tests ✅
Ah, actually, looking back at those test files, I think I would need to regenerate them; making sure I rearranged them correctly would be difficult. I think that's fine: the fact that the tests pass now, with the old test files plus the rearranging in the test functions, means the rearranging would be the only change.
About the tests: if you plan to merge this PR before any others, I would change the test files once and for all, and fix the other open PRs by merging this in. If other PRs have priority, I would keep your current changes and open an issue.
I would probably recommend the first option, since this PR seems small enough.
```python
coeff_other_var = coeff_var
if coeffs_var is None:
    # First, compute the variances of each coeff
    coeff_var = einops.einsum(coeff, coeff,
```
why is this a variance? are `coeff` mean-centered?
Good eye. Yes, they are. `coeff` is either the demeaned magnitude of the steerable pyramid coefficients or the real component of those coefficients (both computed in `_compute_intermediate_representations`). The magnitude is explicitly mean-centered, as noted, and the steerable pyramid coefficients all have mean 0, because they're the output of a bandpass filter.
This is not generally true, as you point out, which is why this is a method of this class instead of a more general function.
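To make that concrete, a toy check under those assumptions (hypothetical shapes, not plenoptic's actual einsum pattern): for a demeaned tensor, the einsum of the tensor with itself, averaged over the spatial dimensions, is exactly its variance.

```python
import einops
import torch

# coeff stands in for a demeaned magnitude band: (batch, channel, height, width).
coeff = torch.randn(2, 4, 16, 16)
coeff = coeff - coeff.mean(dim=(-2, -1), keepdim=True)

# With a zero mean, the average of coeff * coeff over space is the variance,
# which is what the einsum in the diff above computes (up to normalization).
var = einops.einsum(coeff, coeff, "b c h w, b c h w -> b c") / (16 * 16)
assert torch.allclose(var, coeff.var(dim=(-2, -1), unbiased=False))
```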
Yes, that makes sense. I'll go ahead and update those files and then merge this.
reverts the test_ps_torch_output and test_portilla_simoncelli_scales tests back to their original versions, as the test files themselves have now been updated to reflect the changes in this PR
… into portilla_simoncelli_vars
I updated the test files, as discussed, and also added some helper functions in …
I realized that `PortillaSimoncelli._compute_cross_correlation` was always computing the variance itself (for normalizing the covariance to a cross-correlation), but this was unnecessary: in two of the three calls, it was computing the variance of each magnitude band, which had already been computed while taking the auto-correlation of those magnitude bands. This refactor makes use of that fact, computing the variance only when necessary.

In this model, the above change does not noticeably affect the total forward time. It will, however, help the pooled Portilla-Simoncelli model.
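The shape of the refactor, as a rough sketch: the variances become optional arguments, computed only when the caller can't supply them. Names, shapes, and the einsum pattern here are hypothetical, not the actual method signature.

```python
import einops


def cross_correlation_sketch(x, y, x_var=None, y_var=None):
    """Normalize a covariance to a cross-correlation, reusing variances.

    Hypothetical signature; assumes zero-mean inputs of shape
    (batch, channel, height, width). Not plenoptic's actual method.
    """
    if x_var is None:
        # Only computed when the caller can't supply it, e.g. when it
        # didn't already fall out of the autocorrelation computation.
        x_var = einops.einsum(x, x, "b c h w, b c h w -> b c")
    if y_var is None:
        y_var = einops.einsum(y, y, "b c h w, b c h w -> b c")
    covar = einops.einsum(x, y, "b c h w, b c h w -> b c")
    # For zero-mean inputs, the unnormalized sums of squares cancel correctly here.
    return covar / (x_var * y_var).sqrt()
```

In two of the three call sites, the magnitude bands' variances are already available from the autocorrelation step and can simply be passed in.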
Additionally, I swapped the final two dimensions of the magnitude bands' auto-correlations, changing their shape from `(batch, channel, spatial_corr_width, spatial_corr_width, n_scales, n_orientations)` to `(batch, channel, spatial_corr_width, spatial_corr_width, n_orientations, n_scales)`. This makes their shape more consistent with the other stats, which always have `n_scales` as the final dimension, and as a result makes the representation plot a bit easier to interpret (see the sketch below).

The first change above does mean that the metamer output is slightly, but not perceptually, different from before, so I needed to update the contents of the test files. While doing so, I realized that the GPU and CPU synthesis outputs are now identical. I doubt that's related to these changes, but I'm unsure when that happened.
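The swap itself is just a transpose of the trailing two dimensions; a minimal sketch with made-up sizes:

```python
import einops
import torch

# Made-up sizes: batch=1, channel=1, spatial_corr_width=9, n_scales=4, n_orientations=3.
before = torch.randn(1, 1, 9, 9, 4, 3)

# Swap the trailing two dimensions so n_scales comes last,
# consistent with the other statistics.
after = einops.rearrange(before, "b c w1 w2 scales orients -> b c w1 w2 orients scales")
assert after.shape == (1, 1, 9, 9, 3, 4)
```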
Finally, I also added `-n auto` to our pytest configuration in `pyproject.toml`, so that pytest takes advantage of parallelization by default (this won't affect the CI, which had already added that arg manually).
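For reference, a sketch of what that configuration might look like; the exact contents of our `pyproject.toml` may differ, and `-n` requires the pytest-xdist plugin:

```toml
# Hedged sketch of the relevant pyproject.toml section, not the file's
# actual contents. The -n flag comes from the pytest-xdist plugin.
[tool.pytest.ini_options]
addopts = "-n auto"
```

A CI job can still pass an explicit worker count on the command line (as the Jenkins fix above does) when `auto` would spawn too many workers for the available memory.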