metacal bias for crazy WCS #72

Open
beckermr opened this issue Aug 24, 2019 · 45 comments

Comments

@beckermr
Collaborator

I am seeing some biases in metacalibration when I use a WCS Jacobian that differs a lot from a simple pixel scale.

Here is a basic test with a simple pixel scale

$ python run.py 10 0.0 0.0
# of sims: 10
wcs_g1   : 0.000000
wcs_g2   : 0.000000
dudx     : 0.263000
dudy     : -0.000000
dvdx     : -0.000000
dvdy     : 0.263000
m [1e-3] : 4.628283 +/- 4.470836
c [1e-4] : -0.508443 +/- 1.581002

Now if I introduce a net g1 distortion in the WCS, I get

$ python run.py 10 0.1 0.0
# of sims: 10
wcs_g1   : 0.100000
wcs_g2   : 0.000000
dudx     : 0.237892
dudy     : -0.000000
dvdx     : -0.000000
dvdy     : 0.290757
m [1e-3] : -197.869263 +/- 5.030609
c [1e-4] : -0.769631 +/- 1.390868
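
For reference, the Jacobian above comes directly from galsim.ShearWCS; a quick sketch that reproduces those numbers (the formulas in the comments are inferred from the values printed above):

import galsim

# reproduce the Jacobian printed above: 0.263 arcsec/pixel with a g1=0.1 shear
jac = galsim.ShearWCS(0.263, galsim.Shear(g1=0.1, g2=0.0)).jacobian()
print(jac.dudx, jac.dudy, jac.dvdx, jac.dvdy)
# dudx = 0.263 * (1 - 0.1) / sqrt(1 - 0.1**2) ~ 0.237892
# dvdy = 0.263 * (1 + 0.1) / sqrt(1 - 0.1**2) ~ 0.290757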

Here is the test code

import sys
import numpy as np
import tqdm

import ngmix
import galsim
import numba

from ngmix import metacal
from metadetect.fitting import Moments


def run_metacal(*, n_sims, wcs_g1, wcs_g2):
    """Run metacal and measure m and c.

    The resulting m and c are printed to STDOUT.

    Parameters
    ----------
    n_sims : int
        The number of objects to simulate.
    wcs_g1 : float
        The shear on the 1-axis of the WCS Jacobian.
    wcs_g2 : float
        The shear on the 2-axis of the WCS Jacobian.
    """
    jc = galsim.ShearWCS(0.263, galsim.Shear(g1=wcs_g1, g2=wcs_g2)).jacobian()

    jacobian_dict = {
        'dudx': jc.dudx,
        'dudy': jc.dudy,
        'dvdx': jc.dvdx,
        'dvdy': jc.dvdy
    }

    swap_g1g2 = False

    res = _run_metacal(
        n_sims=n_sims,
        rng=np.random.RandomState(seed=10),
        swap_g1g2=swap_g1g2,
        **jacobian_dict)

    g1 = np.array([r['noshear']['g'][0] for r in res])
    g2 = np.array([r['noshear']['g'][1] for r in res])
    g1p = np.array([r['1p']['g'][0] for r in res])
    g1m = np.array([r['1m']['g'][0] for r in res])
    g2p = np.array([r['2p']['g'][1] for r in res])
    g2m = np.array([r['2m']['g'][1] for r in res])

    g_true = 0.02
    step = 0.01
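    # note: on the axis carrying the true shear, the responsivity below is
    # multiplied by g_true so that _jack_est returns m = <g>/(R*g_true) - 1,
    # i.e. the multiplicative bias relative to the true shear of 0.02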

    if swap_g1g2:
        R11 = (g1p - g1m) / 2 / step
        R22 = (g2p - g2m) / 2 / step * g_true

        m, merr, c, cerr = _jack_est(g2, R22, g1, R11)
    else:
        R11 = (g1p - g1m) / 2 / step * g_true
        R22 = (g2p - g2m) / 2 / step

        m, merr, c, cerr = _jack_est(g1, R11, g2, R22)

    print("""\
# of sims: {n_sims}
wcs_g1   : {wcs_g1:f}
wcs_g2   : {wcs_g2:f}
dudx     : {dudx:f}
dudy     : {dudy:f}
dvdx     : {dvdx:f}
dvdy     : {dvdy:f}
m [1e-3] : {m:f} +/- {msd:f}
c [1e-4] : {c:f} +/- {csd:f}""".format(
        n_sims=len(g1),
        wcs_g1=wcs_g1,
        wcs_g2=wcs_g2,
        **jacobian_dict,
        m=m/1e-3,
        msd=merr/1e-3,
        c=c/1e-4,
        csd=cerr/1e-4), flush=True)


def _run_metacal(*, n_sims, rng, swap_g1g2, dudx, dudy, dvdx, dvdy):
    """Run metacal on an image composed of stamps w/ constant noise.

    Parameters
    ----------
    n_sims : int
        The number of objects to run.
    rng : np.random.RandomState
        An RNG to use.
    swap_g1g2 : bool
        If True, set the true shear on the 2-axis to 0.02 and 1-axis to 0.0.
        Otherwise, the true shear on the 1-axis is 0.02 and on the 2-axis is
        0.0.
    dudx : float
        The du/dx Jacobian component.
    dudy : float
        The du/dy Jacobian component.
    dvdx : float
        The dv/dx Jacobian component.
    dvdy : float
        The dv/dy Jacobian component.

    Returns
    -------
    result : dict
        A dictionary with each of the metacal catalogs.
    """

    stamp_size = 33
    psf_stamp_size = 33

    cen = (stamp_size - 1) / 2
    psf_cen = (psf_stamp_size - 1)/2

    noise = 1
    flux = 64000

    galsim_jac = galsim.JacobianWCS(
        dudx=dudx,
        dudy=dudy,
        dvdx=dvdx,
        dvdy=dvdy)

    if swap_g1g2:
        g1 = 0.0
        g2 = 0.02
    else:
        g1 = 0.02
        g2 = 0.0

    gal = galsim.Exponential(
        half_light_radius=0.5
    ).withFlux(
        flux
    ).shear(
        g1=g1, g2=g2)

    psf = galsim.Gaussian(fwhm=0.9).withFlux(1)

    data = []
    for ind in tqdm.trange(n_sims):
        ################################
        # make the obs

        # psf
        psf_im = psf.drawImage(
            nx=psf_stamp_size,
            ny=psf_stamp_size,
            wcs=galsim_jac).array
        psf_noise = np.sqrt(np.sum(psf_im**2)) / 500
        wgt = np.ones_like(psf_im) / psf_noise**2
        psf_jac = ngmix.Jacobian(
            x=psf_cen,
            y=psf_cen,
            dudx=dudx,
            dudy=dudy,
            dvdx=dvdx,
            dvdy=dvdy)
        psf_obs = ngmix.Observation(
            image=psf_im,
            weight=wgt,
            jacobian=psf_jac)

        # now render object
        obj = galsim.Convolve(gal, psf)
        offset = rng.uniform(low=-0.5, high=0.5, size=2)
        im = obj.drawImage(
            nx=stamp_size,
            ny=stamp_size,
            wcs=galsim_jac,
            offset=offset).array
        jac = ngmix.Jacobian(
            x=cen+offset[0],
            y=cen+offset[1],
            dudx=dudx,
            dudy=dudy,
            dvdx=dvdx,
            dvdy=dvdy)
        wgt = np.ones_like(im) / noise**2
        nse = rng.normal(size=im.shape) * noise
        im += (rng.normal(size=im.shape) * noise)
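        # note: nse is an independent realization from the noise added to im;
        # it is attached to the Observation below and, with
        # use_noise_image=True in the metacal settings, is used by ngmix to
        # cancel the correlated noise introduced by the metacal shearing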
        obs = ngmix.Observation(
            image=im,
            weight=wgt,
            noise=nse,
            bmask=np.zeros_like(im, dtype=np.int32),
            ormask=np.zeros_like(im, dtype=np.int32),
            jacobian=jac,
            psf=psf_obs
        )

        # build the mbobs
        mbobs = ngmix.MultiBandObsList()
        obslist = ngmix.ObsList()
        obslist.append(obs)
        mbobs.append(obslist)

        mbobs.meta['id'] = ind+1
        # these settings do not matter that much I think
        mbobs[0].meta['Tsky'] = 1
        mbobs[0].meta['magzp_ref'] = 26.5
        mbobs[0][0].meta['orig_col'] = ind+1
        mbobs[0][0].meta['orig_row'] = ind+1

        ################################
        # run the fitters
        try:
            res = _run_metacal_fitter(mbobs, rng)
        except Exception as e:
            print(e)
            res = None

        if res is not None:
            data.append(res)

    if len(data) > 0:
        res = data
    else:
        res = None

    return res


@numba.njit
def _jack_est(g1, R11, g2, R22):
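    # delete-one jackknife for the multiplicative bias m and additive bias c:
    # each sample drops one object, recomputes the mean shears and mean
    # responsivities, and forms m_i = <g1>_i/<R11>_i - 1, c_i = <g2>_i/<R22>_i;
    # the quoted errors are the standard jackknife standard errors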
    g1bar = np.mean(g1)
    R11bar = np.mean(R11)
    g2bar = np.mean(g2)
    R22bar = np.mean(R22)
    n = g1.shape[0]
    fac = n / (n-1)
    m_samps = np.zeros_like(g1)
    c_samps = np.zeros_like(g1)

    for i in range(n):
        _g1 = fac * (g1bar - g1[i]/n)
        _R11 = fac * (R11bar - R11[i]/n)
        _g2 = fac * (g2bar - g2[i]/n)
        _R22 = fac * (R22bar - R22[i]/n)
        m_samps[i] = _g1 / _R11 - 1
        c_samps[i] = _g2 / _R22

    m = np.mean(m_samps)
    c = np.mean(c_samps)

    m_err = np.sqrt(np.sum((m - m_samps)**2) / fac)
    c_err = np.sqrt(np.sum((c - c_samps)**2) / fac)

    return m, m_err, c, c_err


def _fit_psf(psf):
    runner = ngmix.bootstrap.PSFRunner(
        psf,
        'gauss',
        1.0,
        {'maxfev': 2000, 'ftol': 1.0e-5, 'xtol': 1.0e-5}
    )
    runner.go(ntry=2)

    psf_fitter = runner.fitter
    res = psf_fitter.get_result()
    psf.update_meta_data({'fitter': psf_fitter})

    if res['flags'] == 0:
        gmix = psf_fitter.get_gmix()
        psf.set_gmix(gmix)
    else:
        from ngmix.gexceptions import BootPSFFailure
        raise BootPSFFailure("failed to fit psfs: %s" % str(res))


def _run_metacal_fitter(mbobs, rng):
    # fit the PSF
    _fit_psf(mbobs[0][0].psf)

    metacal_pars = {
        'psf': 'fitgauss',
        'types': ['noshear', '1p', '1m', '2p', '2m'],
        'use_noise_image': True,
        'step': 0.01
    }
    moments_pars = {'bmask_flags': 2**30, 'weight': {'fwhm': 1.2}}

    obs_dict = metacal.get_all_metacal(mbobs, **metacal_pars)

    # overall flags, or'ed from each moments fit
    res = {'mcal_flags': 0}
    for key in sorted(obs_dict):
        try:
            fitter = Moments(moments_pars, rng)
            fres = fitter.go([obs_dict[key]])
        except Exception as err:
            print(err)
            fres = {'flags': np.ones(1, dtype=[('flags', 'i4')])}

        res['mcal_flags'] |= fres['flags'][0]
        tres = {}
        for name in fres.dtype.names:
            no_wmom = name.replace('wmom_', '')
            tres[no_wmom] = fres[name][0]
        tres['flags'] = fres['flags'][0]  # make sure this is moved over
        res[key] = tres

    return res


if __name__ == '__main__':
    if len(sys.argv) > 2:
        wcs_g1 = float(sys.argv[2])
    else:
        wcs_g1 = 0.0

    if len(sys.argv) > 3:
        wcs_g2 = float(sys.argv[3])
    else:
        wcs_g2 = wcs_g1

    run_metacal(n_sims=int(sys.argv[1]), wcs_g1=wcs_g1, wcs_g2=wcs_g2)

It only needs the Moments fitter from the metadetect repo.

I have a copy in git here: https://github.com/beckermr/misc/blob/simple-des-y3/work/sheared_wcs_wl_sims/run.py

@beckermr changed the title from "metcal bias for crazy WCS" to "metacal bias for crazy WCS" on Aug 24, 2019
@beckermr
Collaborator Author

@esheldon for viz

@esheldon
Owner

You should always add noise to the PSF image when you will fit using LM (which the code is doing internally for psf: 'fitgauss'). Are you seeing PSF fit failures? It can fail more often when there is no noise.

Maybe try with psf: gauss
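
For concreteness, a minimal sketch of that change in the repro script above, reusing the psf_noise value it already computes for the weight map (only the line marked "new" is added):

psf_im = psf.drawImage(
    nx=psf_stamp_size,
    ny=psf_stamp_size,
    wcs=galsim_jac).array
psf_noise = np.sqrt(np.sum(psf_im**2)) / 500
psf_im += rng.normal(size=psf_im.shape) * psf_noise  # new: add noise to the psf image
wgt = np.ones_like(psf_im) / psf_noise**2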

@beckermr
Collaborator Author

I am not seeing psf fit failures but I can try this.

@beckermr
Collaborator Author

Still biased with psf noise:

(anl) localhost:sheared_wcs_wl_sims Matt$ python run.py 100 0.1 0.0
# of sims: 100
wcs_g1   : 0.100000
wcs_g2   : 0.000000
dudx     : 0.237892
dudy     : -0.000000
dvdx     : -0.000000
dvdy     : 0.290757
m [1e-3] : -219.291718 +/- 13.767538
c [1e-4] : 2.694665 +/- 3.193834

and biased with psf: gauss

(anl) localhost:sheared_wcs_wl_sims Matt$ python run.py 100 0.1 0.0
# of sims: 100
wcs_g1   : 0.100000
wcs_g2   : 0.000000
dudx     : 0.237892
dudy     : -0.000000
dvdx     : -0.000000
dvdy     : 0.290757
m [1e-3] : -386.300928 +/- 2.035341
c [1e-4] : -0.380396 +/- 0.380571

@beckermr
Collaborator Author

Hey @rmjarvis! I’ve found a weird bug in metacal that has both me and @esheldon stumped. We’ve had some discussion offline on this and are looking for your help.

@rmjarvis

I'd guess something to do with the col/row <-> x,y or u,v stuff. I always find that confusing.

Have you checked whether objects that are actually round come out measured as round when the WCS has a large g1 component? That seems easier to diagnose if it also shows the problem.
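
A quick sketch of that check, assuming the ngmix GaussMom interface used in the script later in this thread (and that its result carries the ellipticity under 'e'):

import numpy as np
import galsim
import ngmix

# draw a round Gaussian through the sheared WCS and measure weighted moments;
# drawn with no_pixel so the sheared pixel response does not itself add a
# small ellipticity; the measured shape should be consistent with zero
wcs = galsim.ShearWCS(0.263, galsim.Shear(g1=0.1, g2=0.0)).jacobian()
im = galsim.Gaussian(fwhm=0.9).drawImage(
    nx=33, ny=33, wcs=wcs, method='no_pixel').array
cen = (33 - 1) / 2
obs = ngmix.Observation(
    image=im,
    weight=np.ones_like(im),
    jacobian=ngmix.Jacobian(x=cen, y=cen, wcs=wcs),
)
res = ngmix.gaussmom.GaussMom(fwhm=1.2).go(obs)
print(res['e'])  # expect ~[0, 0] for a round object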

@rmjarvis

Another thought -- if it's just a bug in the wcs handling, it should be insensitive to the size of the galaxy. Could check with hlr=3 instead of 0.5.

If that works well, but 0.5 doesn't, then it's more likely something subtle with the relative pixel sizes in the two directions. In which case, going even smaller, but much higher S/N might be instructive.

@beckermr
Collaborator Author

So an object with hlr=3 is still biased

(anl) localhost:sheared_wcs_wl_sims Matt$ python run.py 100 0.1 0.0
# of sims: 100
wcs_g1   : 0.100000
wcs_g2   : 0.000000
dudx     : 0.237892
dudy     : -0.000000
dvdx     : -0.000000
dvdy     : 0.290757
m [1e-3] : -44.600671 +/- 1.311836
c [1e-4] : 0.399745 +/- 0.243934

@beckermr
Collaborator Author

An object with hlr=0.1 is even more biased

(anl) localhost:sheared_wcs_wl_sims Matt$ python run.py 100 0.1 0.0
# of sims: 100
wcs_g1   : 0.100000
wcs_g2   : 0.000000
dudx     : 0.237892
dudy     : -0.000000
dvdx     : -0.000000
dvdy     : 0.290757
m [1e-3] : -2915.698134 +/- 7.954447
c [1e-4] : 1.218687 +/- 1.759858

@beckermr
Collaborator Author

beckermr commented Aug 26, 2019

Oh crap. This is the moments fitter.

(anl) localhost:sheared_wcs_wl_sims Matt$ python check_moments_fitter.py 100 0.1 0.1
# of sims: 100
wcs_g1   : 0.100000
wcs_g2   : 0.100000
dudx     : 0.239103
dudy     : -0.026567
dvdx     : -0.026567
dvdy     : 0.292237
g1 [1e-3] : -1.706751 +/- 0.000117
g2 [1e-3] : -1.695389 +/- 0.000106

@beckermr
Collaborator Author

Here is a Gaussian max like fit

(anl) localhost:sheared_wcs_wl_sims Matt$ python check_ml_fitter.py 100 0.1 0.1
# of sims: 100
wcs_g1   : 0.100000
wcs_g2   : 0.100000
dudx     : 0.239103
dudy     : -0.026567
dvdx     : -0.026567
dvdy     : 0.292237
g1 [1e-3] : 127.453853 +/- 8.383264
g2 [1e-3] : 190.071506 +/- 8.213148

An earlier version of my shear test above showed metacal to be biased with this as well, which makes sense given the result above.

So I think there is a bug in the Jacobian or pixel handling in ngmix somewhere. :/

@beckermr
Collaborator Author

Scripts that run these tests are in the same spot as above.

@beckermr
Collaborator Author

I'm going to step away for now. I will probably start in on unit tests for ngmix tomorrow.

@rmjarvis

rmjarvis commented Aug 27, 2019

Sounds like the first unit test to write is checking that the Gaussian fit and moments give consistent answers when the WCS is sheared.
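
A rough sketch of what that test could look like (pytest style; fit_gauss_ml and measure_adaptive_moments are hypothetical placeholders for the ngmix maximum likelihood fit and adaptive moments calls, and the tolerances follow the numbers quoted in the next comment):

import numpy as np
import galsim
import pytest


@pytest.mark.parametrize('wcs_g1,wcs_g2', [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)])
def test_gauss_fit_and_moments_with_sheared_wcs(wcs_g1, wcs_g2):
    wcs = galsim.ShearWCS(0.263, galsim.Shear(g1=wcs_g1, g2=wcs_g2)).jacobian()
    gal = galsim.Gaussian(fwhm=0.9).shear(g1=0.02, g2=0.0)
    # no PSF or pixel effects, matching the tests described below
    im = gal.drawImage(nx=33, ny=33, wcs=wcs, method='no_pixel').array

    g_ml = fit_gauss_ml(im, wcs)               # placeholder: ML Gaussian fit
    g_mom = measure_adaptive_moments(im, wcs)  # placeholder: adaptive moments

    # both should recover the true shape, and agree with each other,
    # independent of the WCS shear
    assert np.allclose(g_ml, [0.02, 0.0], atol=5e-4)
    assert np.allclose(g_mom, g_ml, atol=2e-3)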

@beckermr
Collaborator Author

beckermr commented Aug 27, 2019

OK. I started in on some unit tests for key parts. #73

A few results:

  • The admom fitter is generally only good to ~1e-3 in shear or so.
  • The maximum likelihood fitter for a simple Gaussian case is good to ~4e-4 in shear.

Edit: both of these tests are without PSF or pixel effects; that is next.

@esheldon
Owner

I did the main results of Sheldon & Huff with admom, so I'm a bit surprised.

@beckermr
Collaborator Author

Well, metacalibration can calibrate anything. :)

@beckermr
Collaborator Author

Also for certain cases, like no WCS distortions, it is closer to 2e-4.

@esheldon
Owner

Oh, I thought you meant a 0.001 bias with metacal.

@beckermr
Collaborator Author

Ahhhh sorry! Yes, this is just in the shape that comes out.

@esheldon
Owner

where do we stand on this one?

@beckermr
Collaborator Author

Likely still a bug.

@beckermr
Collaborator Author

To expand a bit: we have tests for almost all of the relevant WCS-handling code and did not find any serious bugs. So I think there is a methodological error somewhere.

@esheldon
Owner

esheldon commented Oct 3, 2023

Here is an updated script from Anna Niemiec that shows the issue. Note that changing the psf to 'fitgauss' removes the bias.

The issue is somewhere in the MetacalGaussPSF code for determining the reconvolution PSF.

@rmjarvis is the original author of that.

import numpy as np
import ngmix
import galsim


def main(seed, psf='gauss', noise=1.e-6, ntrial=100):

    print("ngmix version:", ngmix.__version__)

    wcs = galsim.JacobianWCS(
        -0.00105142719975775,
        0.16467706437987895,
        0.15681099855148395,
        -0.0015749298342502371
    )

    print("WCS:", wcs.getDecomposition())
    print()
    print()

    shear_true = [0.02, 0.00]
    rng = np.random.RandomState(seed)
    # We will measure moments with a fixed gaussian weight function
    weight_fwhm = 1.2
    fitter = ngmix.gaussmom.GaussMom(fwhm=weight_fwhm)
    psf_fitter = ngmix.gaussmom.GaussMom(fwhm=weight_fwhm)
    # these "runners" run the measurement code on observations
    psf_runner = ngmix.runners.PSFRunner(fitter=psf_fitter)
    runner = ngmix.runners.Runner(fitter=fitter)
    # this "bootstrapper" runs the metacal image shearing as well as both psf
    # and object measurements
    boot = ngmix.metacal.MetacalBootstrapper(
        runner=runner, psf_runner=psf_runner,
        rng=rng,
        psf=psf,
        types=['noshear', '1p', '1m', '2p', '2m'],
    )
    dlist = []
    for i in progress(ntrial, miniters=10):
        im, psf_im, obs = make_data(
            rng=rng, noise=noise, shear=shear_true, wcs=wcs
        )
        resdict, obsdict = boot.go(obs)
        for stype, sres in resdict.items():
            st = make_struct(res=sres, obs=obsdict[stype], shear_type=stype)
            dlist.append(st)
    print()
    data = np.hstack(dlist)
    # selections performed separately on each shear type
    w = select(data=data, shear_type='noshear')
    w_1p = select(data=data, shear_type='1p')
    w_1m = select(data=data, shear_type='1m')
    w_2p = select(data=data, shear_type='2p')
    w_2m = select(data=data, shear_type='2m')

    g = data['g'][w].mean(axis=0)
    gerr = data['g'][w].std(axis=0) / np.sqrt(w.size)
    g1_1p = data['g'][w_1p, 0].mean()
    g1_1m = data['g'][w_1m, 0].mean()
    # g2_1p = data['g'][w_1p, 1].mean()
    # g2_1m = data['g'][w_1m, 1].mean()
    # g1_2p = data['g'][w_2p, 0].mean()
    # g1_2m = data['g'][w_2m, 0].mean()
    g2_2p = data['g'][w_2p, 1].mean()
    g2_2m = data['g'][w_2m, 1].mean()
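    # finite-difference responsivities: (g_+ - g_-) / (2 * step) with the
    # default metacal step of 0.01, hence the 0.02 denominator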
    R11 = (g1_1p - g1_1m)/0.02
    R22 = (g2_2p - g2_2m)/0.02
    # R12 = (g1_2p - g1_2m)/0.02
    # R21 = (g2_1p - g2_1m)/0.02
    shear = np.array([g[0] / R11, g[1]/R22])
    shear_err = gerr / R11
    m = np.linalg.norm(shear)/np.linalg.norm(shear_true)-1
    merr = np.linalg.norm(shear_err)/np.linalg.norm(shear_true)

    s2n = data['s2n'][w].mean()
    print('S/N: %g' % s2n)
    print('R11: %g' % R11)
    print('m: %g +/- %g (99.7%% conf)' % (m, merr*3))
    print('c: %g +/- %g (99.7%% conf)' % (shear[1], shear_err[1]*3))

    print('shear 1 = %g +/- %g' % (shear[0], shear_err[0]))
    print('shear 2 = %g +/- %g' % (shear[1], shear_err[1]))

    return sres, im, psf_im


def make_struct(res, obs, shear_type):
    """
    make the data structure
    Parameters
    ----------
    res: dict
        With keys 's2n', 'e', and 'T'
    obs: ngmix.Observation
        The observation for this shear type
    shear_type: str
        The shear type
    Returns
    -------
    1-element array with fields
    """
    dt = [
        ('flags', 'i4'),
        ('shear_type', 'U7'),
        ('s2n', 'f8'),
        ('g', 'f8', 2),
        ('T', 'f8'),
        ('Tpsf', 'f8'),
    ]
    data = np.zeros(1, dtype=dt)
    data['shear_type'] = shear_type
    data['flags'] = res['flags']
    if res['flags'] == 0:
        data['s2n'] = res['s2n']
        # for moments we are actually measuring e, the ellipticity
        data['g'] = res['e']
        data['T'] = res['T']
    else:
        data['s2n'] = np.nan
        data['g'] = np.nan
        data['T'] = np.nan
        data['Tpsf'] = np.nan
    # we only have one epoch and band, so we can get the psf T from the
    # observation rather than averaging over epochs/bands
    data['Tpsf'] = obs.psf.meta['result']['T']
    return data


def select(data, shear_type):
    """
    select the data by shear type and size
    Parameters
    ----------
    data: array
        The array with fields shear_type and T
    shear_type: str
        e.g. 'noshear', '1p', etc.
    Returns
    -------
    array of indices
    """
    # raw moments, so the T is the post-psf T.  Thus the
    # selection is > 1.2 rather than something smaller like 0.5
    # used for pre-psf T from one of the maximum likelihood fitters
    wtype, = np.where(
        (data['shear_type'] == shear_type) &
        (data['flags'] == 0)
    )
    w, = np.where(data['T'][wtype]/data['Tpsf'][wtype] > 1.2)
    print('%s kept: %d/%d' % (shear_type, w.size, wtype.size))
    w = wtype[w]
    return w


def make_data(rng, noise, shear, wcs):
    """
    simulate an exponential object with moffat psf
    the hlr of the exponential is drawn from a gaussian
    with mean 0.4 arcseconds and sigma 0.2
    Parameters
    ----------
    rng: np.random.RandomState
        The random number generator
    noise: float
        Noise for the image
    shear: (g1, g2)
        The shear in each component
    wcs: galsim WCS
        The local WCS to use when drawing the images
    Returns
    -------
    im, psf_im, obs
        The object image, the psf image and the ngmix.Observation
    """
    psf_noise = 1.0e-8
    stamp_size = 91
    # psf_stamp = 71
    # scale = 0.263

    psf_fwhm = 0.9
    gal_hlr = 0.5
    psf = galsim.Moffat(
        beta=2.5, fwhm=psf_fwhm,
    ).shear(
        g1=0.02,
        g2=-0.01,
    )
    obj0 = galsim.Exponential(
        half_light_radius=gal_hlr,
    ).shear(
        g1=shear[0],
        g2=shear[1],
    )
    obj = galsim.Convolve(psf, obj0)
    psf_im = psf.drawImage(nx=stamp_size, ny=stamp_size, wcs=wcs).array
    im = obj.drawImage(nx=stamp_size, ny=stamp_size, wcs=wcs).array
    psf_im += rng.normal(scale=psf_noise, size=psf_im.shape)
    im += rng.normal(scale=noise, size=im.shape)
    cen = (np.array(im.shape)-1.0)/2.0
    psf_cen = (np.array(psf_im.shape)-1.0)/2.0
    jacobian = ngmix.Jacobian(
        x=cen[1], y=cen[0], wcs=wcs.jacobian(
            image_pos=galsim.PositionD(cen[1], cen[0])
        ),
    )
    psf_jacobian = ngmix.Jacobian(
        x=psf_cen[1], y=psf_cen[0], wcs=wcs.jacobian(
            image_pos=galsim.PositionD(psf_cen[1], psf_cen[0])
        ),
    )
    wt = im*0 + 1.0/noise**2
    psf_wt = psf_im*0 + 1.0/psf_noise**2
    psf_obs = ngmix.Observation(
        psf_im,
        weight=psf_wt,
        jacobian=psf_jacobian,
    )
    obs = ngmix.Observation(
        im,
        weight=wt,
        jacobian=jacobian,
        psf=psf_obs,
    )
    return im, psf_im, obs


def progress(total, miniters=1):
    last_print_n = 0
    last_printed_len = 0
    sl = str(len(str(total)))
    mf = '%'+sl+'d/%'+sl+'d %3d%%'
    for i in range(total):
        yield i
        num = i+1
        if i == 0 or num == total or num - last_print_n >= miniters:
            meter = mf % (num, total, 100*float(num) / total)
            nspace = max(last_printed_len-len(meter), 0)
            print('\r'+meter+' '*nspace, flush=True, end='')
            last_printed_len = len(meter)
            if i > 0:
                last_print_n = num
    print(flush=True)


if __name__ == '__main__':
    main(42, psf='fitgauss', noise=1.0e-8)

@esheldon
Owner

esheldon commented Oct 3, 2023

Here is the current version of that. I probably introduced a bug recently; the original by @rmjarvis worked for distorted PSFs.

class MetacalGaussPSF(MetacalDilatePSF):

@esheldon
Owner

esheldon commented Oct 3, 2023

If I do not reconvolve the inferred Gaussian PSF by the pixel, it works.

@rmjarvis

rmjarvis commented Oct 3, 2023

I was summoned. But I'm not sure, do you need me to look at this?

@esheldon
Owner

esheldon commented Oct 3, 2023

Thanks Mike. I'm finding that the algorithm you developed is failing for the above case. However, it works if I don't reconvolve by the pixel. This is odd because in the code as originally developed the pixel is reconvolved:

https://github.com/GalSim-developers/GalSim/blob/a1f0a11a24690a279abd7027b116ec3e4986643d/tests/test_metacal.py#L149

@esheldon
Owner

esheldon commented Oct 3, 2023

(And we do expect that reconvolving by the pixel is needed)

@esheldon
Owner

esheldon commented Oct 3, 2023

I just copy-pasted your code except for using dk = psf.stepk/4, which I checked is not the issue:

def _get_gauss_target_psf(psf, flux):

@esheldon
Owner

esheldon commented Oct 3, 2023

I should clarify: the algorithm isn't failing. We find a bias in the recovered shear with a complex WCS.

And looking over your original tests, they didn't actually include a shear recovery test; they were about noise corrections. So it may be that this is not a regression; we just never tested this algorithm with a distorted WCS.

@rmjarvis

rmjarvis commented Oct 3, 2023

And moreover, the GalSim test you got this from (testing noise propagation) never worked properly.
cf. GalSim-developers/GalSim#720
There are multiple parts of that test that are either commented out or guarded with if False because they didn't work the way we expected. So I'm not hugely confident about any parts of that, really.

@rmjarvis

rmjarvis commented Oct 3, 2023

Still, metacal should really work with a moderately distorted WCS, so this is worth tracking down. I just wouldn't trust the code you got from me necessarily.

@beckermr
Collaborator Author

beckermr commented Oct 3, 2023

@esheldon You have not touched the code in question in a significant way in two years or more. I wonder if this has always been there...

@esheldon
Owner

esheldon commented Oct 3, 2023

Yeah, I expect it's not a regression. I've not touched it. It's just that we didn't test it in this regime.

Note that the code wrapping Mike's algorithm shares all its other moving parts with the fitgauss method, which works fine.

@esheldon
Owner

esheldon commented Oct 3, 2023

I propose to modify the algorithm to instead work on the pixel-convolved PSF rather than the pre-pixel PSF.

@esheldon
Owner

esheldon commented Oct 3, 2023

(which works for this case)

@beckermr
Collaborator Author

beckermr commented Oct 3, 2023

That seems fine. Or we can make the algorithm more conservative and have it expand the gaussian more. I suppose those two are ultimately the same thing.

@esheldon
Owner

esheldon commented Oct 3, 2023

I tried expanding; that doesn't work.

@beckermr
Collaborator Author

beckermr commented Oct 3, 2023

Huh? How can that be? Running it on the PSF with a pixel should be roughly the same as expanding the PSF. Are you sure there isn't something else going on?

@esheldon
Owner

esheldon commented Oct 3, 2023

I didn't say I understand it. But when I put the pixel back in I get a bias.

If I don't put it back, even if I do no dilation at all, it works fine.

@beckermr
Collaborator Author

beckermr commented Oct 3, 2023

Did you try different PSF sizes?

@esheldon
Owner

esheldon commented Oct 3, 2023

Yep

@esheldon
Owner

esheldon commented Oct 3, 2023

More evidence

What I'm doing is the following.

I always draw the new psf image with method="no_pixel". This is because for all algorithms other than psf='gauss' the new psf already has the pixel in it.

For psf='gauss' I needed to put the pixel back in for this to work, and that is where I get the bias.

However, if I instead draw the PSF without setting method='no_pixel', this works fine.

So it is some issue with the following sequence:

  • create pre pixel psf
  • get model for new psf
  • convolve model with pixel
  • draw method='no_pixel'

But this is OK (see the sketch after this list):

  • create pre pixel psf
  • get model for new psf
  • draw without no_pixel
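
For concreteness, a minimal GalSim sketch of the two paths (not the actual ngmix code; the profile and WCS here are stand-ins):

import galsim

# stand-ins for the new (round, dilated) reconvolution PSF model and the WCS
new_psf = galsim.Gaussian(fwhm=1.0)
wcs = galsim.JacobianWCS(0.237892, 0.0, 0.0, 0.290757)
pixel = wcs.toWorld(galsim.Pixel(1.0))  # the (sheared) pixel in world coords

# path reported above to show the bias: reconvolve the pre-pixel model by the
# pixel, then draw with method='no_pixel' so the pixel is not applied twice
im_biased = galsim.Convolve(new_psf, pixel).drawImage(
    nx=33, ny=33, wcs=wcs, method='no_pixel').array

# path reported to work: draw the model and let drawImage apply the pixel
im_ok = new_psf.drawImage(nx=33, ny=33, wcs=wcs).array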

@esheldon
Owner

esheldon commented Oct 3, 2023

It may be interesting to find out why this is happening.

But there is actually no reason to be working with a pre-pixel PSF in this case. We are just trying to make a round PSF that is large enough, and that is best done with the pixel in.
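
A rough sketch of that idea (not the actual ngmix change; the helper name, the size estimate, and the dilation factor are all placeholders):

import galsim


def make_round_target_psf(psf_with_pixel, dilation=1.05):
    # hypothetical helper: size up the pixel-convolved PSF and return a round
    # Gaussian somewhat larger, so the reconvolution PSF is at least as large
    # as the real PSF including the pixel
    fwhm = psf_with_pixel.calculateFWHM()
    return galsim.Gaussian(fwhm=fwhm * dilation)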
