
Selection response #51

Open
b-remy opened this issue Jun 1, 2022 · 6 comments

b-remy commented Jun 1, 2022

I'm opening this issue to start discussing the selection response with autometacal.

b-remy added the investigation label and self-assigned this issue on Jun 1, 2022

b-remy commented Jun 1, 2022

A first development can be found here: notebook

So far the selection response computation is written as

import tensorflow as tf

# get_moment_T, generate_mcal_image and get_moment_ellipticities
# are the helper functions defined in the notebook linked above.

# Smooth, differentiable step function: ~1 when x is above the
# threshold t, ~0 below, with transition width set by sigma.
def selection(x, t, sigma):
  return .5 * (1 + tf.math.erf((tf.math.log(x) - tf.math.log(t)) / sigma))

def get_selection_amc_response(gal_images, psf_images, size_cut):
  gal_images = tf.convert_to_tensor(gal_images, dtype=tf.float32)
  psf_images = tf.convert_to_tensor(psf_images, dtype=tf.float32)
  g = tf.zeros([1, 2])

  # PSF moment size, used to normalise the galaxy size cut
  Tpsf = get_moment_T(psf_images, scale=0.2, fwhm=1.2)

  with tf.GradientTape() as tape:
    tape.watch(g)
    m_images = generate_mcal_image(gal_images, psf_images, psf_images, g)

    # galaxy moment sizes measured on the metacal images
    T = get_moment_T(m_images, scale=0.2, fwhm=1.2)

    # smooth selection weight based on the size ratio T/Tpsf
    w = selection(T / Tpsf, size_cut, 0.01)

    # block the shear dependence of the images themselves, so only
    # the selection dependence on g enters the gradient below
    m_images = tf.stop_gradient(m_images)
    e = tf.reduce_mean(
        get_moment_ellipticities(m_images, scale=0.2, fwhm=1.2) * tf.transpose(w),
        axis=0)

  R = tape.jacobian(e, g)
  return e, R

but it does not seem to recover the right R11... (eaa0e9a):

finitediff metacal:  +0.0027
autometacal:         -0.0012
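
For context, here is a minimal sketch of how the finite-difference selection response is usually estimated in metacal: select on the sheared images, measure ellipticities on the unsheared ones, and difference the two selected means. This is not necessarily the exact code behind the finitediff number above; delta_g = 0.01 is an assumed step, and the autometacal helpers are assumed to return sizes of shape [batch] and ellipticities of shape [batch, 2].

import tensorflow as tf

def selection_response_finitediff(gal_images, psf_images, size_cut, delta_g=0.01):
  gal_images = tf.convert_to_tensor(gal_images, dtype=tf.float32)
  psf_images = tf.convert_to_tensor(psf_images, dtype=tf.float32)

  Tpsf = get_moment_T(psf_images, scale=0.2, fwhm=1.2)

  # ellipticities measured on the unsheared (g = 0) metacal images
  noshear = generate_mcal_image(gal_images, psf_images, psf_images, tf.zeros([1, 2]))
  e = get_moment_ellipticities(noshear, scale=0.2, fwhm=1.2)

  def mean_e_selected(g1):
    # hard size cut evaluated on the images sheared by g1
    g = tf.constant([[g1, 0.]], dtype=tf.float32)
    sheared = generate_mcal_image(gal_images, psf_images, psf_images, g)
    T = get_moment_T(sheared, scale=0.2, fwhm=1.2)
    keep = tf.reshape(T / Tpsf, [-1]) > size_cut
    return tf.reduce_mean(tf.boolean_mask(e, keep), axis=0)

  e_plus = mean_e_selected(+delta_g)
  e_minus = mean_e_selected(-delta_g)
  return (e_plus[0] - e_minus[0]) / (2. * delta_g)   # R11_sel

The smooth erf-based selection in the snippet above is the differentiable counterpart of the hard cut used here.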

martinkilbinger commented

Not that I understand much of the tf code...

If I remember correctly, for R_selection we make a selection on the sheared images and apply this selection mask to the unsheared galaxies.

How do you make sure that, in the line where you compute the mean, the m_images dependence on g is not used in the gradient, only the one coming from the selection?


b-remy commented Jun 2, 2022

Good point. I added this line:

m_images = tf.stop_gradient(m_images)

so that autodiff stops tracking the m_images dependence on g when computing the average right after.
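
For the record, a standalone toy example (not autometacal code) of what tf.stop_gradient does in that line: the value still flows forward, but autodiff treats the wrapped tensor as a constant, so only the branch outside stop_gradient contributes to the gradient.

import tensorflow as tf

x = tf.constant(2.0)

with tf.GradientTape() as tape:
  tape.watch(x)
  a = x ** 2                       # depends on x
  a_blocked = tf.stop_gradient(a)  # same value, but treated as a constant by autodiff
  y = a_blocked * x                # only the explicit factor of x is differentiated

# dy/dx = a_blocked = 4.0; the x**2 inside stop_gradient contributes nothing
print(tape.gradient(y, x))         # tf.Tensor(4.0, shape=(), dtype=float32)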


b-remy commented Jun 2, 2022

After discussing with @andrevitorelli and @EiffL this morning, we think that the problem might come from too much shape noise in the sample, so I have to try with many more galaxies. I only considered a batch of 800 above.
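
A rough order-of-magnitude check of the shape-noise argument, assuming an intrinsic ellipticity dispersion of about 0.26 per component (this number is an assumption, not something measured in the notebook):

import numpy as np

sigma_e = 0.26   # assumed intrinsic ellipticity dispersion per component
n_gal = 800      # batch size used above
delta_g = 0.01   # assumed finite-difference shear step

# statistical error on the mean ellipticity of the sample
err_mean_e = sigma_e / np.sqrt(n_gal)             # ~ 9e-3

# if the two selected means were built from independent noise, the error
# on R_sel = (e_plus - e_minus) / (2 * delta_g) would be of order
err_R = np.sqrt(2) * err_mean_e / (2 * delta_g)   # ~ 0.65

print(err_mean_e, err_R)

Both numbers are far above a selection response of order 0.01, so unless most of the noise cancels between the two selections (same galaxies, or rotated pairs as suggested further down), a batch of 800 is indeed small.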


b-remy commented Jun 7, 2022

Even with a limited number of galaxies, we should actually get the same result from the finite-difference and autodiff methods; the difference should only come from the error of approximating the gradients with finite differences.

In the last commit 8f9a4cb, the finite and auto differentiation methods give almost the same results for the selection response.

finite diff:  R11 = 0.010942272
auto diff:    R11 = 0.009751715

Now we want to compare to a working ngmix example of the selection response computation, like metacal_select.py.

andrevitorelli commented

I was talking about the issue of shape noise and the number of galaxies with @aguinot today, and the requirements for it to average out are really big. So one thing to try is to always generate pairs of galaxies in which the shape noise cancels out (i.e. pairs with one galaxy rotated by 90 degrees with respect to the other), as sketched below. The other option is not very clear to me yet, but it comes from Pujol et al. 2020.
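
A minimal sketch of the rotated-pair idea, assuming the stamps come as a [batch, npix, npix] tensor: rotating a galaxy by 90 degrees flips the sign of both intrinsic ellipticity components, so the shape noise of each pair averages out.

import tensorflow as tf

def make_orthogonal_pairs(gal_images):
  # gal_images: [batch, npix, npix]; add a channel axis so tf.image.rot90
  # can rotate each stamp by 90 degrees, then drop it again
  rotated = tf.image.rot90(gal_images[..., tf.newaxis])[..., 0]
  # stack the original and rotated stamps into one batch of pairs
  return tf.concat([gal_images, rotated], axis=0)

In a real test the rotation would be applied to the intrinsic galaxy profile before PSF convolution and noise (e.g. when drawing the stamps in GalSim), not to the observed image; the snippet only illustrates the pairing.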
