Validate code against CCL predictions #61 (Open)

damonge opened this issue Jul 6, 2020 · 59 comments

@damonge (Owner) commented Jul 6, 2020

We should validate the code in a realistic user scenario:

  • 3D and skewers were already validated in James' paper.
  • Generate an LSST-like simulation and validate density, shear and magnification for a couple of redshift bins.
  • Intensity mapping (generate an SKA-like simulation and validate for a couple of frequency bins, both C_ells and Pk^1D - see e.g. https://arxiv.org/abs/1609.00019)
@cramirezpe (Collaborator) commented:

Hi @damonge,

Following the last discussion, we have been working on the tests for the Cls using theoretical predictions made with CCL.

The script used for this (with some modifications) is cl_test_ccl.py.

We have some doubts/problems that we were not able to solve ourselves, and we wanted to share them with you in case you have an idea of what is happening.

The first thing we noted is on line 122 of that script:

cl_dd_t = np.array([ccl.angular_cl(cosmo, tr_d[p1], tr_d[p2], larr, p_of_k_a=pk2d_mm)
                    for p1, p2 in pairs])

Here pk2d_mm, extracted from the CoLoRe output, is used as the power spectrum to project. The question we ask ourselves is whether we should use pk2d_dd instead, since here we are working with the galaxy-galaxy Cls.

Our understanding is that pk2d_mm should be paired with lensing in the same way that pk2d_dd is paired with galaxies. Is that right?

Another question related to the script concerns line 114:

tr_d = [ccl.NumberCountsTracer(cosmo, False, (z_nz, nz_tot[i]), bias=(z_nz, np.ones_like(z_nz)))
        for i in range(nbins)]

Here a ones-like array is used as the bias. We understand this is because we use the galaxies themselves as the tracer objects, so they already include the bias. Is that right?

Moving to the other quantities (dm, mm): we modified the script to allow computing them. However, the results do not look convincing to us:
[figure]

Here 0,1 are the two redshift bins considered ((0,0.15) and (0.15,inf)).

The results for the galaxy-galaxy part are good (as they were in the original script), even with the line 122 changed (it does not affect this plot by much).

To compute the other quantities (dm,mm) the script was modified as follows:

Data

In the script cl_test_ccl.py data is computed as:

cl_dd_d = np.array([hp.anafast(dmap[p1], dmap[p2]) for p1, p2 in pairs])

We used healpy's anafast in a similar way to how it is used in the script cl_test.py, as it allows us to compute all the needed quantities at the same time. For a given pair (p1, p2) of bins:

d_values = hp.anafast(np.asarray([dmap[p1], e1map[p1], e2map[p1]]),
                      np.asarray([dmap[p2], e1map[p2], e2map[p2]]), pol=True)

This numpy array d_values will contain the Cls in the order given in the anafast documentation (with T -> D): DD, EE, BB, DE, EB, DB.

For the plots we considered E to be the lensing part. That is we used the equivalence:

  • dd -> DD
  • dm -> DE
  • mm-> EE

Theoretical predictions

In cl_test_ccl.py for the dd case it is computed as:

cl_dd_t = np.array([ccl.angular_cl(cosmo, tr_d[p1], tr_d[p2], larr, p_of_k_a=pk2d_mm)
                    for p1, p2 in pairs])

For our theoretical predictions we used:

cl_dd_t = np.array([ccl.angular_cl(cosmo, tr_d[p1], tr_d[p2], larr, p_of_k_a=pk2d_dd)
                    for p1, p2 in pairs])
cl_dm_t = np.array([ccl.angular_cl(cosmo, tr_d[p1], tr_l[p2], larr, p_of_k_a=pk2d_dm)
                    for p1, p2 in pairs])
cl_mm_t = np.array([ccl.angular_cl(cosmo, tr_l[p1], tr_l[p2], larr, p_of_k_a=pk2d_mm)
                    for p1, p2 in pairs])

Note again the change pk2d_mm -> pk2d_dd in the first line.
I also assumed here that the angular power spectrum coming from the WeakLensingTracer tr_l is the E-mode shear power spectrum; this is why I selected the E mode above for everything. Is that right?
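
(For reference, a minimal sketch of how the lensing tracers tr_l could be built with CCL, assuming they reuse the same N(z) arrays (z_nz, nz_tot) as the density tracers and a cosmology matching the param.cfg quoted later in this thread; this is an illustration, not the exact code used here:)

import numpy as np
import pyccl as ccl

# Cosmology roughly matching the CoLoRe configuration (Omega_c = omega_M - omega_B)
cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.7, sigma8=0.8, n_s=0.96)

# One lensing tracer per redshift bin, sharing the N(z) of the density tracers.
# For two such tracers, ccl.angular_cl returns the E-mode shear power spectrum.
tr_l = [ccl.WeakLensingTracer(cosmo, dndz=(z_nz, nz_tot[i])) for i in range(nbins)]

# z_nz, nz_tot, nbins, larr, pairs, pk2d_mm as defined in cl_test_ccl.py
cl_mm_t = np.array([ccl.angular_cl(cosmo, tr_l[p1], tr_l[p2], larr, p_of_k_a=pk2d_mm)
                    for p1, p2 in pairs])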

Other computations

Different size

We also tried a larger simulation to check whether a different behaviour could tell us what is happening. The quantities that were changed are:

  • z_max: 0.4 -> 1.5
  • r_smooth: 5. -> 2.0
  • n_grid: 512 -> 4096

These quantities are similar to the ones used in the LSST simulations (except for z_max, which was 2.5 in the LSST case). The bias model was changed to a ones-like array by modifying the Bz_test.txt file accordingly.

This simulation was run with the New shear method and the result is the following:
[figure]

The output was downsampled evenly across the sky and redshifts to save memory (10% downsampling). The results are quite confusing and I don't think they give much information. However, maybe they are showing something relevant that we are not understanding.

Thanks!

@fjaviersanchez (Collaborator) commented:

Hi @cramirezpe! Thanks a lot for the detailed analysis. I think you are right that you should use pk2d_dd rather than pk2d_mm, as you mention (since you set bias=1 in the prediction, the bias should be folded into the pk). I'm pretty surprised by the last results that you show in the post above. Did you change https://github.com/damonge/CoLoRe/blob/master/examples/cl_test/cl_test_ccl.py#L58 to accommodate the larger redshift range? (So instead of range=[0, 0.5] put [0, 1.6] -- and maybe you also want to increase nz_h.)
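
(A rough sketch of that change, with illustrative variable names rather than the exact ones in cl_test_ccl.py:)

import numpy as np

nz_h = 100                      # more histogram bins for the wider redshift range
z_range = (0.0, 1.6)            # was (0.0, 0.5) in the original script

# z_gal: galaxy redshifts; bin_masks: one boolean mask per tomographic bin (assumed names)
nz_tot = []
for mask in bin_masks:
    n, edges = np.histogram(z_gal[mask], bins=nz_h, range=z_range)
    nz_tot.append(n)
z_nz = 0.5 * (edges[1:] + edges[:-1])   # N(z) sampling points at the bin centres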

@cramirezpe (Collaborator) commented:

Hi @fjaviersanchez ,

You are totally right, that's a silly error on my side. I corrected it while keeping the value of nz_h at 50.

Now the results look more sensible:
[figure]

I'm playing with other values of nz_h and also setting the z_split at other points, but I'm getting odd results, so I'll save them for another update.

Thanks for your help!

@fjaviersanchez (Collaborator) commented:

Thanks a lot! I'm still puzzled by the large difference in the panels on the right. Yes, please keep us updated :)

@cramirezpe (Collaborator) commented:

Hi @damonge , @fjaviersanchez

I continued the analysis with the big test adding more features. I use the following parameters:

  • zbins:
    • 0: [0, 0.5)
    • 1: [0.5, 1)
    • 2: [1, inf)
  • range=[0,2.5]
  • nz_h = 100
  • downsampling (to save computing time): 1 % (I expect it to only add noise).

I also included error bars estimated from the standard deviation in the rebinning (is there a way to obtain errors from pyccl?).

But the problem with the mm and dm plots can be seen clearly when going from the logarithmic plot to the linear one:

More specifically when computing the ratio between the data and the theoretical prediction:

There is a factor-of-2 bias affecting the lensing part (therefore a factor of 2 for the dm part and 4 for the mm part).

It can be corrected by hardcoding:

cl_mm_d = cl_mm_d/4
cl_md_d = cl_md_d/2

and then everything looks reasonable:

The problem is knowing where this bias is coming from. I've changed the input bias in CoLoRe and the ratio stays the same, so I'm quite sure the problem is not coming from there.

Summarizing, we have two questions:

  1. Is it possible to obtain error bars from the CCL software? (I was not able to find this in the documentation.)
  2. We see a factor-of-2 bias affecting the lensing part. Do you have an idea of where it is coming from?

@andreufont (Contributor) commented:

Hi @damonge - we had a factor of 2 issue in CoLoRe's lensing a few months ago, might this be related?

@fjaviersanchez (Collaborator) commented:

Hi @cramirezpe! This looks beautiful! Great agreement (modulo the typical factors of 2 that we have to figure out).

  1. CCL doesn't know about covariances, but you can probably just use the mode-counting error bars: ΔC_ℓ = sqrt(2 / (2ℓ + 1)) · C_ℓ
  2. I'm with Andreu, it might be one of those factors of 2 that we were seeing a few months ago. Either that or some convention mismatch between the shear from CCL and the measurements from anafast (there are always these factors of 2 floating around with different conventions).

@andreufont (Contributor) commented:

Hi @cramirezpe - Could you redo the measurement using a larger fraction of galaxies? 10% should be enough for now, to reduce a bit the noise in the plots. Is the limitation speed, or memory?

It would also be good to try to use the mode-counting errorbar that @fjaviersanchez suggested, instead of the ones you have now.

@damonge - Would you be able to dig into the code to chase the missing factor of 2? Is there a test that we could run to help you identify where it is?

@cramirezpe (Collaborator) commented:

Hi @andreufont.

I've downsampled the number of galaxies at two stages:

  • In the CoLoRe simulation itself: here I applied a 10% downsampling directly to the nz file (the original is the one coming from the LSST-like simulation). This number can be increased if I increase the resources; however, for the LSST comparison between Old and New lensing I used at most this 10%.
  • In the script that computes the plots for this thread: here it is just a matter of time. I ran it once without downsampling and these are the results. The reduction in noise is small, maybe too small for a sample 100 times larger.

In this plot the error bars are computed following Javier's suggestion.

@andreufont (Contributor) commented:

Thinking about it, the equation from @fjaviersanchez is probably only valid in the absence of shot noise, i.e., when we are dominated by cosmic variance. Is this true, Javi? The density of galaxies does not appear in his equation.

They clearly underpredict the scatter in some plots, like in d1-d2, don't you think?

@cramirezpe , could you redo the plot using again your old errorbars? Also, do you mind correcting for the factors of 2,4 that we discussed?

@fjaviersanchez (Collaborator) commented:

Yes, that's right; you have to add the shot-noise term, which you already calculated here: https://github.com/damonge/CoLoRe/blob/master/examples/cl_test/cl_test_ccl.py#L131. So the total error would be sqrt((Delta C_ell)^2 + n_l)

@andreufont (Contributor) commented:

mmm... the cross-terms should not have shot-noise contributions, right? How does the low density of tracers affect the error bars in a cross-correlation of galaxy densities at different bins? And how does it affect the lensing correlations, since these should not have shot noise per se either, right? Sorry if these are basic questions, I'm not used to thinking about these tomographic measurements.

@cramirezpe (Collaborator) commented:

Here are the plots with the errors computed as in Javier's last update.

But as Andreu says, shot noise is only applied to d0d0, d1d1 and d2d2 (and the effect is tiny). If it should also be included in the mm and dm parts, I will need to make some changes to the script.

[figures]

@andreufont (Contributor) commented:

It looks to me that the shot noise should not affect the signal in cross-bins like d1-d2 or d0-d2, but it should affect their error bars...

We could always get the errorbars by looking at the scatter in 10 boxes or so...

@damonge (Owner, Author) commented Aug 4, 2020

Hi all,

sorry for the delay, you caught me coming back from holidays and catching up with things.

  1. Regarding the factor of 2: I think this is fine, let me check with the previous plotting scripts, but this is just the convention for going from ellipticities to shear (there's an equation in Dodelson). I'll get back to you.

  2. Regarding errors: this is the formula you should use for the covariance between the power spectrum of tracers (a,b) and the power spectrum of tracers (c,d) (the standard mode-counting expression):

    Cov(C_ell^{ab}, C_ell^{cd}) = (C_ell^{ac} C_ell^{bd} + C_ell^{ad} C_ell^{bc}) / ((2 ell + 1) f_sky)

    so, if you want the error bar for C_ell^{ab}:

    sigma(C_ell^{ab}) = sqrt[ (C_ell^{aa} C_ell^{bb} + (C_ell^{ab})^2) / ((2 ell + 1) f_sky) ]

In this formula, the auto-correlations should contain shot noise (and that's how even cross-correlations are affected by shot noise in the uncertainties).
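
(A minimal sketch of that error bar in code, assuming full-sky maps with f_sky = 1 and unbinned multipoles; names are illustrative:)

import numpy as np

def knox_error(ell, cl_aa, cl_bb, cl_ab, n_a=0.0, n_b=0.0, f_sky=1.0, delta_ell=1):
    """Gaussian (mode-counting) error on C_ell^{ab}.
    n_a, n_b are the shot-noise levels added to the auto-spectra (0 for lensing)."""
    var = ((cl_aa + n_a) * (cl_bb + n_b) + cl_ab**2) / ((2.0 * ell + 1.0) * delta_ell * f_sky)
    return np.sqrt(var)

# usage: err_ab = knox_error(larr, cl_aa, cl_bb, cl_ab, n_a=shot_a, n_b=shot_b)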

@damonge (Owner, Author) commented Aug 4, 2020

I'm pretty sure the error bars you're computing are fine though.

@cramirezpe (Collaborator) commented:

Hi @damonge , @fjaviersanchez

Ignoring for a moment the factor of 2 (which could be explained by the convention issue): we still see some bad behaviour in some plots (especially in d1d1 and d2d2 around l ≈ 200, but the problem is in the same place for other plots). Do you think the results are good enough to continue, or should we try to find a solution for it?

In case we are good to go, we may need the help of @fjaviersanchez to run the analysis in bigger boxes.

@fjaviersanchez (Collaborator) commented:

I think this is good to go, actually. Would you agree @damonge? The large-scale "problems" aren't necessarily significant, so I think it's now on my plate to run larger sims. I'll report here as soon as I have results. @cramirezpe, would you be able to run the same tests once the sims are done?

Also, just a suggestion for the ratio plots: for the next round, could you subtract 1 from the ratio so the plots are centered at 0, please? (That way it's easier to spot biases by eye.)

@cramirezpe (Collaborator) commented:

Yes, @fjaviersanchez. I do have everything set up to do the analysis, but the CoLoRe boxes are too large for me to run from scratch. Just save them in a place where I can access them.

And you're right, it is better to set the ideal value to 0. Thanks for the suggestion!

@damonge (Owner, Author) commented Aug 7, 2020

@cramirezpe

Regarding the ell>200 stuff: these should be smoothing effects (due to pixelization, 3D cell size, and interpolation) that are completely ignored in the script I gave you originally. I'm pretty sure we can correct for this accurately enough.

I can confirm that the factor 2 is fine, it's just convention. Just multiply cross-correlations by 2 and lensing auto-correlations by 4.

When you say "bigger boxes" do you mean the current box but with 100% of the sources? Because I thought for the previous plots you had already been using the LSST setup (but with fewer sources).

@cramirezpe (Collaborator) commented:

Yes. The first plot of the thread corresponds to the test example (which is smaller and has fewer sources). The following plots correspond to the LSST-like simulation but with fewer sources.

@damonge (Owner, Author) commented Aug 10, 2020

Right, so, to correct for all the different smoothings, @cramirezpe can you send me the param file and the command-line output of one of the simulations? Just wanna come up with a prescription for you to apply the right beams in the right places.

@cramirezpe (Collaborator) commented:

Hi damonge,

I used a slight modification from the LSST example param.cfg. The content of the param.cfg file is the following:

global:
{
  prefix_out= "out";
  output_format= "FITS";
  output_density= false
  pk_filename= "Pk_CAMB_test.dat"
  z_min= 0.001
  z_max= 2.5
  seed= 1001
  write_pred=true
  just_write_pred=false
  pred_dz=0.1
}

field_par:
{
  r_smooth= 2.0
  smooth_potential= true
  n_grid= 4096
  dens_type= 0
  lpt_buffer_fraction= 0.6
  lpt_interp_type= 1
  output_lpt= 0
}

cosmo_par:
{
  omega_M= 0.3
  omega_L= 0.7
  omega_B= 0.05
  h= 0.7
  w= -1.0
  ns= 0.96
  sigma_8= 0.8
}

srcs1:
{
  nz_filename= "NzBlue.txt"
  bias_filename= "Bz_test.txt"
  include_shear= true
  store_skewers= false
}

kappa:
{
  z_out= [0.38]
  nside= 128
}

isw:
{
  z_out= [0.38]
  nside= 128
}

shear:
{
  nside= 1024
  n_shear= 50
  spacing_type= "log(1+z)"
  write= true
 }

The bias is set to 1 everywhere.

The output from CoLoRe is the following:

MPI task 10, OMP thread count starts at 640
MPI task 6, OMP thread count starts at 384
MPI task 19, OMP thread count starts at 1216
MPI task 7, OMP thread count starts at 448
MPI task 5, OMP thread count starts at 320
MPI task 20, OMP thread count starts at 1280
MPI task 31, OMP thread count starts at 1984
MPI task 21, OMP thread count starts at 1344
MPI task 18, OMP thread count starts at 1152
MPI task 30, OMP thread count starts at 1920
MPI task 9, OMP thread count starts at 576
MPI task 23, OMP thread count starts at 1472
MPI task 26, OMP thread count starts at 1664
MPI task 28, OMP thread count starts at 1792
MPI task 22, OMP thread count starts at 1408
MPI task 13, OMP thread count starts at 832
MPI task 17, OMP thread count starts at 1088
MPI task 25, OMP thread count starts at 1600
MPI task 29, OMP thread count starts at 1856
MPI task 16, OMP thread count starts at 1024
MPI task 27, OMP thread count starts at 1728
MPI task 24, OMP thread count starts at 1536
MPI task 2, OMP thread count starts at 128
MPI task 3, OMP thread count starts at 192
MPI task 12, OMP thread count starts at 768
MPI task 0 has 64 OMP threads
MPI task 1 has 64 OMP threads
MPI task 2 has 64 OMP threads
MPI task 3 has 64 OMP threads
MPI task 4 has 64 OMP threads
MPI task 5 has 64 OMP threads
MPI task 6 has 64 OMP threads
MPI task 7 has 64 OMP threads
MPI task 8 has 64 OMP threads
MPI task 9 has 64 OMP threads
MPI task 10 has 64 OMP threads
MPI task 11 has 64 OMP threads
MPI task 12 has 64 OMP threads
MPI task 13 has 64 OMP threads
MPI task 14 has 64 OMP threads
MPI task 15 has 64 OMP threads
MPI task 16 has 64 OMP threads
MPI task 17 has 64 OMP threads
MPI task 18 has 64 OMP threads
MPI task 19 has 64 OMP threads
MPI task 20 has 64 OMP threads
MPI task 21 has 64 OMP threads
MPI task 22 has 64 OMP threads
MPI task 23 has 64 OMP threads
MPI task 24 has 64 OMP threads
MPI task 25 has 64 OMP threads
MPI task 26 has 64 OMP threads
MPI task 27 has 64 OMP threads
MPI task 28 has 64 OMP threads
MPI task 29 has 64 OMP threads
MPI task 30 has 64 OMP threads
MPI task 31 has 64 OMP threads
MPI task 0, OMP thread count starts at 0
MPI task 14, OMP thread count starts at 896
MPI task 11, OMP thread count starts at 704
MPI task 8, OMP thread count starts at 512
MPI task 15, OMP thread count starts at 960
MPI task 4, OMP thread count starts at 256
MPI task 1, OMP thread count starts at 64
 MPIThreadsOK = 1

|-------------------------------------------------|
|                      CoLoRe                     |
|-------------------------------------------------|

The cosmological model is:
 O_M=0.300 O_L=0.700 O_K=0.000
 O_B=0.050 w=-1.000 h=0.700
 Flat universe, standard cosmological constant

 Time of equality: a_eq=0.75416
 Particle horizon: chi_H(0)=9.908E+03 Mpc/h
 Present growth factor: D_0=0.779

Reading P_k from file: Pk_CAMB_test.dat
  Original sigma8=0.803913
  Sigma_Gauss should be 1.123973
Node 0 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 1 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 2 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 3 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 4 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 5 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 6 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 7 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 8 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 9 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 10 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 11 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 12 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 13 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 14 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 15 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 16 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 17 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 18 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 19 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 20 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 21 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 22 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 23 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 24 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 25 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 26 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 27 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 28 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 29 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 30 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]
Node 31 will allocate 31.873 GB [24.199 GB (Gaussian), 7.548 GB (srcs), 0.047 MB (kappa), 128.168 MB (shear), 0.047 MB (isw)]

Run parameters:
  0.001 < z < 2.500
  2.997 < r/(Mpc/h) < 4079.279
  L_box = 8162.542 Mpc/h, N_grid = 4096
  Scales resolved: 7.698E-04 < k < 1.576E+00 h/Mpc
  Fourier-space resolution: dk = 7.698E-04 h/Mpc
  Real-space resolution: dx = 1.993E+00 Mpc/h
  Density field pre-smoothed on scales: x_s = 2.000E+00 Mpc/h
  1 galaxy populations
  1 lensing source planes
  50 lensing source planes
  1 ISW source planes
  Will include lensing shear in source catalog

Seed : 1001
*** Writing predictions
Writing predictions of redshift 0:
       Population 0, bias 1.
Writing predictions of redshift 0.1:
       Population 0, bias 1.
Writing predictions of redshift 0.2:
       Population 0, bias 1.
Writing predictions of redshift 0.3:
       Population 0, bias 1.
Writing predictions of redshift 0.4:
       Population 0, bias 1.
Writing predictions of redshift 0.5:
       Population 0, bias 1.
Writing predictions of redshift 0.6:
       Population 0, bias 1.
Writing predictions of redshift 0.7:
       Population 0, bias 1.
Writing predictions of redshift 0.8:
       Population 0, bias 1.
Writing predictions of redshift 0.9:
       Population 0, bias 1.
Writing predictions of redshift 1:
       Population 0, bias 1.
Writing predictions of redshift 1.1:
       Population 0, bias 1.
Writing predictions of redshift 1.2:
       Population 0, bias 1.
Writing predictions of redshift 1.3:
       Population 0, bias 1.
Writing predictions of redshift 1.4:
       Population 0, bias 1.
Writing predictions of redshift 1.5:
       Population 0, bias 1.
Writing predictions of redshift 1.6:
       Population 0, bias 1.
Writing predictions of redshift 1.7:
       Population 0, bias 1.
Writing predictions of redshift 1.8:
       Population 0, bias 1.
Writing predictions of redshift 1.9:
       Population 0, bias 1.
Writing predictions of redshift 2:
       Population 0, bias 1.
Writing predictions of redshift 2.1:
       Population 0, bias 1.
Writing predictions of redshift 2.2:
       Population 0, bias 1.
Writing predictions of redshift 2.3:
       Population 0, bias 1.
Writing predictions of redshift 2.4:
       Population 0, bias 1.
*** Creating Gaussian density field
Creating Fourier-space density and Newtonian potential
>    Relative time ellapsed 5651.0 ms
Transforming density and Newtonian potential
>    Relative time ellapsed 21521.7 ms
Normalizing density and Newtonian potential
>    Relative time ellapsed 656.0 ms
 <d>=-4.942E-12, <d^2>=1.184E+00

*** Creating physical matter density
Lognormalizin'
>    Relative time ellapsed 2094.8 ms

*** Computing normalization of density field
z=0.000E+00, <d^2_0>=1.046E+00, 00000000000
z=3.745E-02, <d^2_0>=1.046E+00, 00001727216
z=8.026E-02, <d^2_0>=9.981E-01, 00011614608
z=1.282E-01, <d^2_0>=9.906E-01, 00030079144
z=1.773E-01, <d^2_0>=9.951E-01, 00055759376
z=2.269E-01, <d^2_0>=9.981E-01, 00087348624
z=2.766E-01, <d^2_0>=1.000E+00, 00123649256
z=3.264E-01, <d^2_0>=1.002E+00, 00163566056
z=3.762E-01, <d^2_0>=1.001E+00, 00206110176
z=4.262E-01, <d^2_0>=1.003E+00, 00250460176
z=4.761E-01, <d^2_0>=1.000E+00, 00295788672
z=5.261E-01, <d^2_0>=9.978E-01, 00341531952
z=5.761E-01, <d^2_0>=1.000E+00, 00387109024
z=6.260E-01, <d^2_0>=1.001E+00, 00432094856
z=6.760E-01, <d^2_0>=1.000E+00, 00476101944
z=7.260E-01, <d^2_0>=1.001E+00, 00518860696
z=7.761E-01, <d^2_0>=9.984E-01, 00560143424
z=8.261E-01, <d^2_0>=9.993E-01, 00599813592
z=8.761E-01, <d^2_0>=9.994E-01, 00637690920
z=9.261E-01, <d^2_0>=9.995E-01, 00673752224
z=9.762E-01, <d^2_0>=9.999E-01, 00707919688
z=1.026E+00, <d^2_0>=1.001E+00, 00740163224
z=1.076E+00, <d^2_0>=9.990E-01, 00770556368
z=1.126E+00, <d^2_0>=9.978E-01, 00799055240
z=1.176E+00, <d^2_0>=1.001E+00, 00825716904
z=1.226E+00, <d^2_0>=1.002E+00, 00850586592
z=1.276E+00, <d^2_0>=9.991E-01, 00873751472
z=1.326E+00, <d^2_0>=1.001E+00, 00895196464
z=1.376E+00, <d^2_0>=1.001E+00, 00915069184
z=1.426E+00, <d^2_0>=9.999E-01, 00933437872
z=1.477E+00, <d^2_0>=9.998E-01, 00950277576
z=1.527E+00, <d^2_0>=9.991E-01, 00965773096
z=1.577E+00, <d^2_0>=1.000E+00, 00979907408
z=1.627E+00, <d^2_0>=9.992E-01, 00992803712
z=1.677E+00, <d^2_0>=9.994E-01, 01004525216
z=1.727E+00, <d^2_0>=9.998E-01, 01015158088
z=1.777E+00, <d^2_0>=1.000E+00, 01024666384
z=1.827E+00, <d^2_0>=1.000E+00, 01033183392
z=1.877E+00, <d^2_0>=9.998E-01, 01040799560
z=1.927E+00, <d^2_0>=1.000E+00, 01047523824
z=1.977E+00, <d^2_0>=1.000E+00, 01053408192
z=2.027E+00, <d^2_0>=9.997E-01, 01058517600
z=2.077E+00, <d^2_0>=9.997E-01, 01062928744
z=2.127E+00, <d^2_0>=1.001E+00, 01066606072
z=2.177E+00, <d^2_0>=9.991E-01, 01069644056
z=2.227E+00, <d^2_0>=1.000E+00, 01072089848
z=2.277E+00, <d^2_0>=1.000E+00, 01074034032
z=2.327E+00, <d^2_0>=1.000E+00, 01075365792
z=2.377E+00, <d^2_0>=9.998E-01, 01076259184
z=2.427E+00, <d^2_0>=1.000E+00, 01076631560
z=2.477E+00, <d^2_0>=1.000E+00, 01076685784
z=2.502E+00, <d^2_0>=1.000E+00, 01060356120
>    Relative time ellapsed 1851.5 ms

*** Getting point sources
 0-th galaxy population
   Poisson-sampling
   There will be 5896412334 objects in total
   Assigning coordinates
>    Relative time ellapsed 28655.9 ms

*** Re-distributing sources across nodes
>    Relative time ellapsed 17377.6 ms

*** Getting LOS information
Communication 0, MPI task 0 is now MPI task 31
Communication 1, MPI task 0 is now MPI task 30
Communication 2, MPI task 0 is now MPI task 29
Communication 3, MPI task 0 is now MPI task 28
Communication 4, MPI task 0 is now MPI task 27
Communication 5, MPI task 0 is now MPI task 26
Communication 6, MPI task 0 is now MPI task 25
Communication 7, MPI task 0 is now MPI task 24
Communication 8, MPI task 0 is now MPI task 23
Communication 9, MPI task 0 is now MPI task 22
Communication 10, MPI task 0 is now MPI task 21
Communication 11, MPI task 0 is now MPI task 20
Communication 12, MPI task 0 is now MPI task 19
Communication 13, MPI task 0 is now MPI task 18
Communication 14, MPI task 0 is now MPI task 17
Communication 15, MPI task 0 is now MPI task 16
Communication 16, MPI task 0 is now MPI task 15
Communication 17, MPI task 0 is now MPI task 14
Communication 18, MPI task 0 is now MPI task 13
Communication 19, MPI task 0 is now MPI task 12
Communication 20, MPI task 0 is now MPI task 11
Communication 21, MPI task 0 is now MPI task 10
Communication 22, MPI task 0 is now MPI task 9
Communication 23, MPI task 0 is now MPI task 8
Communication 24, MPI task 0 is now MPI task 7
Communication 25, MPI task 0 is now MPI task 6
Communication 26, MPI task 0 is now MPI task 5
Communication 27, MPI task 0 is now MPI task 4
Communication 28, MPI task 0 is now MPI task 3
Communication 29, MPI task 0 is now MPI task 2
Communication 30, MPI task 0 is now MPI task 1
Communication 31, MPI task 0 is now MPI task 0
>    Relative time ellapsed 205665.8 ms

*** Writing kappa source maps
>    Relative time ellapsed 629.3 ms

*** Writing shear source maps
>    Relative time ellapsed 45362.9 ms

*** Writing isw source maps
>    Relative time ellapsed 8.0 ms

*** Writing source catalogs
 0-th population (FITS)
>    Relative time ellapsed 16684.3 ms


|-------------------------------------------------|

>    Total time ellapsed 351403.6 ms

Here it is important to note that I've used the New shear method to compute shears.


I also included the new error bars in the prediction vs data plots:
[figure]

@damonge (Owner, Author) commented Aug 11, 2020

OK, I just remembered that most of the smoothing should already be taken into account by the P(k) you're using, so we need to understand why the galaxy-galaxy plots seem overpredicted. I might worry that the lognormal transformation prediction coded by Anze could be numerically unstable, so let's check that: Can you take one of the Pk files (e.g. out_pk_srcs_pop0_z0.300.txt) and plot together pdd and pdm**2/pmm? Just wanna check how much of an effect the lognormal transformation has on the large ks.

Just to clarify: I think you said you're not using all the ~5-billion objects generated in the sims for these plots, right? If that's the case, what's the limitation? Memory?
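
(A quick sketch of the P(k) check requested above; the column layout of the out_pk prediction files is assumed here, so adjust it to the file's actual header:)

import numpy as np
import matplotlib.pyplot as plt

# assumed column order: k, P_dd, P_dm, P_mm
k, pdd, pdm, pmm = np.loadtxt("out_pk_srcs_pop0_z0.300.txt", unpack=True)[:4]

plt.loglog(k, pdd, label="P_dd")
plt.loglog(k, pdm**2 / pmm, "--", label="P_dm^2 / P_mm")
plt.xlabel("k [h/Mpc]")
plt.ylabel("P(k)")
plt.legend()
plt.show()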

@cramirezpe (Collaborator) commented:

Here is the plot you asked for:
[figure]

Regarding the limitation on the sim: I was hitting the memory limit when trying to use all the galaxies; that's why I used 10% of them (and I also set the nside to 1024). That is strange because for the other issue I was able to run at least one case with 100% of the galaxies and nside set to 2048. Now I'm trying to run it again and apparently it works (but to show the results I need to wait for the analysis script, which will take quite long).

Related to this, the script I'm using (a modification of the one in https://github.com/damonge/CoLoRe/blob/master/examples/cl_test/cl_test_ccl.py) is very slow when computing the power spectrum using anafast.

What I'm doing is the following:

d_values = np.array([hp.anafast(np.asarray([dmap[p1],e1map[p1],e2map[p1]]),np.asarray([dmap[p2],e1map[p2],e2map[p2]]), pol=True) for p1,p2 in pairs])

and then:

cl_dd_d = d_values[:,0]
cl_dm_d = d_values[:,3]
cl_mm_d = d_values[:,1]

I know that doing this I compute components that I don't need. Is there a faster way of doing it? Or maybe the possibility of calling it using multiple cores?
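
(One possible speed-up, sketched under the assumption that the maps and pairs are the same objects as in the snippet above: do the expensive spherical-harmonic transform once per redshift bin with map2alm, then build every pair's spectra cheaply with alm2cl instead of calling anafast per pair.)

import numpy as np
import healpy as hp

# One SHT per bin: map2alm with pol=True returns the (T, E, B) alms of each bin
alms = [hp.map2alm(np.asarray([dmap[i], e1map[i], e2map[i]]), pol=True)
        for i in range(nbins)]

# alm2cl on two sets of (T, E, B) alms returns the same ordering as anafast:
# TT, EE, BB, TE, EB, TB
d_values = np.array([hp.alm2cl(np.asarray(alms[p1]), np.asarray(alms[p2]))
                     for p1, p2 in pairs])

cl_dd_d = d_values[:, 0]
cl_dm_d = d_values[:, 3]
cl_mm_d = d_values[:, 1]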

@cramirezpe (Collaborator) commented:

Okay, there was an error in my analysis that made me think I was using downsampling when I was not. The last plot I showed uses all the objects from the CoLoRe output; that is, the NzBlue.txt is exactly the same as https://github.com/damonge/CoLoRe/blob/master/examples/LSST/NzBlue.txt

@damonge (Owner, Author) commented Aug 18, 2020

Sorry, slowly getting back to this. @cramirezpe can you remind me whether you were using any of the -D_BIAS_MODEL flags in the Makefile? I.e. these lines:

#Use normalized bias model

@fjaviersanchez (Collaborator) commented:

Also, I have one realization of 4096 full-LSST number density (with one population only though) here: /global/cscratch1/sd/jsanch87/CoLoRe_2020. @cramirezpe could you please take a look? Also please let me know if you'd want me to generate more realizations.

@cramirezpe (Collaborator) commented:

@damonge Both lines with -D_BIAS_MODEL are commented out.
@fjaviersanchez I cannot access that directory. I see it belongs to the group lsst (but I'm only a member of desi); probably this is the problem.

Apart from this, I will take a closer look at the last error bars, as we believe I applied the error formula incorrectly (and that's why they look so bad).

@fjaviersanchez (Collaborator) commented:

Sorry @cramirezpe (I had the automatic mode while changing the group). Now it's readable by desi, please let me know if you still are unable to access the files. Thanks!

@damonge (Owner, Author) commented Aug 19, 2020

Hi @cramirezpe

OK, I have a couple of ideas of what could be going wrong, but I'd like to try them out rather than go back-and-forth here ad infinitum. To speed things up, if you send me the following, I could then try these out:

  • The measured C_ells. If you can, do this for nside=1024 (or as close to that as you get your code to run).
  • The corresponding N(z)s of the different samples used.
  • The P_k prediction files from CoLoRe as well as the original linear P_k (or just a confirmation that it's the file currently in the repository).

I think I'd be able to take it from there. You can send these by email.

@cramirezpe (Collaborator) commented:

Hi @damonge @javier

I was able to run the ccl analysis for nside 1024 both on my simulations and the one by Javier (always considering the n_grid=4096 version).

I added to the plot the md combination and also corrected the error bars.

My simulations were run with the new shear method and options:
- n_shear = 50
- nside = 1024

Now the plots of the Cls look like this. For my simulation:
[figures: mine_cls, mine_cls_ratio]

For Javier's simulation:
[figures: javier_cls, javier_cls_ratio]

Apparently the differences we could see have now been reduced, but they are still there.

I sent you the things you asked for. There are two files, corresponding to the simulations run by me and by Javier. They contain:

  • Measured C_ells: I include all the outputs from the script inside .dat files that can be read easily through values = np.loadtxt('filename.dat').

    The shape of these arrays is (3, 3*1024), where the first 3 corresponds to the possible pairs ((0,0), (0,1), (1,1)) and 3*1024 to the multipoles returned by anafast for the nside used (1024). This is the same nomenclature you used for your script.

    These files are included under the folder ccl_data.

  • N(z) used (NzBlue.txt from here)

  • P(k) prediction files, as well as the P(k) used as input (although this is just the Pk_CAMB_test.dat file that appears in some of the examples).

  • I also inserted the param.cfg used in case it is needed.

For Javier's simulation I don't have much information because the N(z) and P(k) input files were missing, but I assume he used the same ones as me (judging by the naming that appears in the param.cfg file).

@fjaviersanchez (Collaborator) commented:

@cramirezpe Thanks a lot for running these diagnostics. I'm also using NzBlue.txt and Pk_CAMB_test.dat. I also use the default galaxy bias model (i.e., I don't add any bias flags at compilation time so I think it is (1+delta)**b) with the file BzBlue.txt.

@damonge (Owner, Author) commented Aug 25, 2020

This is great @cramirezpe

I think I know what is happening, but to make sure I'd need the following:

  1. As I said in an email, if you can send me the B-mode power spectra that'd be great. Basically, I wanna compare that with a small extra-power I get at high-ell on EE.
  2. Right now the second bin is super noisy because it has very few galaxies. Can you modify the edge that separates both bins so both have more or less the same number of galaxies? I think if you move it from 1.5 (which I think is what you're using now) to something between 0.8 and 1 it should do the trick.

We're getting very close now!

@cramirezpe (Collaborator) commented:

Hi @damonge

I currently don't have the B-modes because I didn't save them. However they are given by anafast so I will send them to you once I compute them again. (It shouldn't take long).

However I don't know how to compute the theoretical values. With the others I used:

    cl_dd_t = np.array([ccl.angular_cl(cosmo, tr_d[p1], tr_d[p2], larr, p_of_k_a=pk2d_dd)
                        for p1, p2 in pairs])
    cl_dm_t = np.array([ccl.angular_cl(cosmo, tr_d[p1], tr_l[p2], larr, p_of_k_a=pk2d_dm)
                        for p1, p2 in pairs])
    cl_mm_t = np.array([ccl.angular_cl(cosmo, tr_l[p1], tr_l[p2], larr, p_of_k_a=pk2d_mm)
                        for p1, p2 in pairs])
    cl_md_t = np.array([ccl.angular_cl(cosmo, tr_d[p1], tr_l[p2], larr, p_of_k_a=pk2d_dm)
                        for p2, p1 in pairs])

Which tracer should I use now? (If any).

Regarding bins: I'm worried that the redshift limits could be affecting the results. In the example cl_test that you wrote, the redshift limits are set to [0, 0.5] (here) while the nz input file for that example has values in the range [0, 1].

In the LSST example, the nz input file has data in the range [0, 3] and I also set the redshift limits of the script to [0, 3]. Is it possible that this is affecting the final result? Should I lower this value?

@damonge (Owner, Author) commented Aug 25, 2020

No theory for the B-modes (they're only generated by numerical noise - and shape noise, which we don't have in these sims), so don't worry about that.

About binning: the redshift limit shouldn't affect the results as long as the theory is calculated with the right corresponding N(z)s. Where exactly do you set redshift limits in the script?

@cramirezpe (Collaborator) commented:

Well, there are two places.

For the data: the usual split of galaxies in two (here). This is like a [0, inf] range (?)
For the theoretical values: we make the tracers using an nz array. This nz is built here, with a range [0, 0.5] in the case of the cl_test_ccl script.

My question was whether having these two ranges with different values could be problematic.

@damonge (Owner, Author) commented Aug 25, 2020

Oh, I see! Right, yes, the first bit (the data part) is agnostic to the outer edges, just the middle one. For building the nz, yes, just use [0, 3]. That should be enough to capture both redshift bins. Any parts of the nz arrays that are zero (or very close to zero) are automatically taken care of by CCL.

@andreufont (Contributor) commented:

Hi @cramirezpe - what is the status of the tests above? Did you send the b-modes power to David?

@cramirezpe (Collaborator) commented:

Hi @damonge

I was having problems with the binning and I wanted to check that everything was running OK.

I've computed the B-modes and they look okay for the cross-correlations b-m and b-d, but not for b-b at low ls (plots below). The B-modes are obtained from the anafast function (it returns TT, EE, BB, TE, EB, TB, which I identified as dd, mm, bb, dm, mb, db).

I also set the split point at z=0.82 (this is the point where both sides have roughly the same number of galaxies). With this change some of the plots improve but others worsen; I show here only Javier's simulation results:

The ratio between the power spectrum from the data and the theoretical power spectrum:
[figure: javier_cls_ratio]
I only show the ratio since the full plot is difficult to distinguish from the last results I sent.

Then the B-modes:
[figure: javier_bmodes]

@damonge (Owner, Author) commented Sep 1, 2020

@cramirezpe thanks a lot for this. I've got this mostly working, but there are still a couple of things it'd be good to pin down. Essentially, what is happening is that we need to account for two extra sources of smoothing:
A. The smoothing associated with the interpolation from the 3D grid to lines of sight that the shear field is subjected to.
B. The smoothing associated with the pixel window function in healpix.
Once you account for these, I can get pretty reasonable plots. See below:
[figure: dm]
I've added a couple of modifications on predictions.c to account for the first source of smoothing.
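
(A sketch of how these two smoothings could be applied to the theory curves; the healpix pixel window comes straight from healpy, while the effective Gaussian beam width sigma_beam is a placeholder to be derived from the grid/interpolation scale, not a CoLoRe output:)

import numpy as np
import healpy as hp

nside = 1024
ell = np.arange(3 * nside)            # assuming the theory is evaluated at these multipoles

# healpix pixel windows for temperature-like (density) and polarization-like (shear) maps
pw_T, pw_P = hp.pixwin(nside, pol=True)
pw_T, pw_P = pw_T[:ell.size], pw_P[:ell.size]

sigma_beam = np.radians(0.05)         # placeholder angular smoothing scale
beam = np.exp(-0.5 * ell * (ell + 1) * sigma_beam**2)

cl_dd_t_sm = cl_dd_t * (pw_T * beam)**2               # density auto
cl_dm_t_sm = cl_dm_t * (pw_T * beam) * (pw_P * beam)  # density x shear
cl_mm_t_sm = cl_mm_t * (pw_P * beam)**2               # shear E-mode auto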

One associated problem is the non-flat B-modes you're seeing. These are due to numerical errors, and can come from two places:

  1. Inaccuracies in the "fast_shear" method. If you move the ellipticities of galaxies around a little bit, you generate B-modes.
  2. The fact that the C_ells you're computing from anafast are in fact a bit dodgy (since you don't really have a homogeneous mask, but a mask that traces the abundance of galaxies at each pixel). If this is important enough, it will cause E-B mixing, which would result in what you're seeing.
    These "numerical noise" B-modes should also appear in the E-mode power spectrum, and they actually give it extra power on high-ell, which I need to remove before applying the smoothings above (I'm doing that by just subtracting the B-modes, that's why I asked for them).

So, to make sure we understand what's happening, here are a few questions:

  • Were these simulations run with the "fast_shear" method? If so, could we run one with the "slow" method in order to test whether some of the B-modes are generated due to this? And also to check if the effective smoothing (A above) of both methods is very different.
  • Could you upload the script you're using to compare with CCL somewhere in the repo? I'd like to introduce a couple of modifications to account for the potential E-B leakage and see if the large-scale B-modes come from that.
  • To make sure the impact of the healpix window function is consistent, could you send over the same power spectra computed for nside=512?

@damonge (Owner, Author) commented Sep 1, 2020

One extra question (probably for both @cramirezpe and @fjaviersanchez ): could we run one version of these simulations with a smoothing scale of 5 Mpc/h? The problem with the current sim is that the smoothing (2 Mpc/h) is practically the same as the cell size, and accounting for the smoothing of the cell size is a bit tricky. If we compared with a 5 Mpc/h smoothing we'd have a much more solid theory prediction.

@fjaviersanchez (Collaborator) commented:

Sure, I'll run with a larger smoothing. Do you also want one with the full LSST density with the "slow" shear?

@cramirezpe (Collaborator) commented:

  • I thought the simulations run by Javier were run using the "slow" shear. In my case I always used the "faster_shear" method for the large simulations.
  • I've uploaded the script I use to compute the cls to CoLoRe/examples/cl_test/ccl_. There are some things to note with respect to this:
    • The function that computes the cls is compute_all_cls. The others are functions that I used to store data. etc...
    • I switched from zsplit to bins, where the zbins variable defines the limits. To reproduce the behaviour of zsplit=0.15, zbins should be set to [-inf, 0.15, inf] (where inf is a large number).
  • I made the computations for nside=512. I show the main plots here:
    [figures: javier_cls, javier_cls_ratio, javier_bmodes]

And I also sent the cls through email.

@damonge (Owner, Author) commented Sep 2, 2020

@fjaviersanchez if possible, having one slow LSST sim would be very useful (unless you can confirm the sim you gave @cramirezpe used the slow method already)

@damonge (Owner, Author) commented Sep 2, 2020

@cramirezpe
The nside=512 power spectra were very useful to disentangle this, thanks a lot. Now let's figure out the source of the B-modes. To do that, we need two things:
1- A sim with the slow method (if possible, see previous comment)
2- Independent of the slow method, we also need a proper calculation of the power spectra accounting for the inhomogeneity of the shear mask. To do this, I've modified your script. As you'll see here, it uses a library called namaster that you'll have to install. It should be as simple as pip install namaster --user, but let me know if you run into trouble. Also, I have not tested that the script works because I don't have the data, so don't be surprised if it throws errors all over the place. If (when) it does, just copy them here and I'll help you debug them.
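
(For orientation, a rough sketch of what the NaMaster computation looks like for a spin-0 density field and a spin-2 shear field with an inhomogeneous weight map; the field and mask names are illustrative, not the modified script itself:)

import numpy as np
import pymaster as nmt

nside = 512
b = nmt.NmtBin.from_nside_linear(nside, 16)        # bandpowers of width 16 multipoles

# weight_map: e.g. the per-pixel galaxy counts used to build the shear maps (assumed name)
f_d = nmt.NmtField(weight_map, [dmap[0]])                # spin-0 density field
f_s = nmt.NmtField(weight_map, [e1map[0], e2map[0]])     # spin-2 shear field

cl_dm = nmt.compute_full_master(f_d, f_s, b)   # returns [C_dE, C_dB]
cl_mm = nmt.compute_full_master(f_s, f_s, b)   # returns [C_EE, C_EB, C_BE, C_BB]
ell_eff = b.get_effective_ells()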

@fjaviersanchez (Collaborator) commented:

Hi @damonge, @cramirezpe! The realization I sent around is for the "fast_shear" method. I just generated another one (with a larger r_s=5) here: /global/cscratch1/sd/jsanch87/CoLoRe_2020/out_4095_rs5.0_full_*. I'll generate a "slow" one too :)

@damonge (Owner, Author) commented Sep 2, 2020

Awesome, thanks so much @fjaviersanchez !

@cramirezpe (Collaborator) commented:

Hi @damonge ,

I tried to use the script to produce the power spectrum results. I needed to change some lines to make it work, but it does not give sensible results. The Cls computed using it look like this:

[figure]

(The theoretical curve matches the one from the last result.)

Apparently the modifications I made to the dd part are working well, since I'm recovering roughly the same result as with the previous script. I tried to follow the documentation to check the other parts of the code, but I can't find what is wrong (if there is something to find).

Could you help me with that? I've created a new branch to push the changes into master, then it will be easier for you to see what has been changed.

Also @fjaviersanchez , I cannot access the r_smooth=5 simulation due to permissions (they are not set to desi group 😅)

@fjaviersanchez (Collaborator) commented Sep 14, 2020

Hi everyone! Good news! I managed to finish one "slow" simulation with the full LSST density. It took 36 hours and 55 minutes 🤣 running on 16 Haswell nodes. The data lives here: /global/cscratch1/sd/jsanch87/CoLoRe_2020_slow

@fjaviersanchez (Collaborator) commented Sep 14, 2020

More info about the charges:

Charged hours:82,721.96
Raw hours:590.87
Machine hours:20,409.51
CF:1
Secs:32,801
Cpu count:1,024
Alloc nodes:16
Alloc tres:billing=1024,cpu=1024,mem=1888G,node=16

And from the log:

*** Creating Gaussian density field 
Creating Fourier-space density and Newtonian potential 
>    Relative time ellapsed 72210.8 ms
Transforming density and Newtonian potential
>    Relative time ellapsed 233245.1 ms
Normalizing density and Newtonian potential 
>    Relative time ellapsed 2102.5 ms
 <d>=-7.598E-12, <d^2>=6.556E-01

*** Creating physical matter density
Lognormalizin'
>    Relative time ellapsed 41867.3 ms
*** Computing normalization of density field
>    Relative time ellapsed 60795.2 ms

*** Getting point sources
 0-th galaxy population
   Poisson-sampling
   There will be 5896681484 objects in total 
   Assigning coordinates
>    Relative time ellapsed 108992.3 ms

*** Re-distributing sources across nodes
>    Relative time ellapsed 66032.6 ms
*** Getting LOS information
>    Relative time ellapsed 127968042.1 ms

*** Writing kappa source maps
>    Relative time ellapsed 4231531.6 ms

*** Writing source catalogs
 0-th population (FITS)
>    Relative time ellapsed 111753.5 ms


|-------------------------------------------------|

>    Total time ellapsed 132914008.8 ms

@andreufont (Contributor) commented:

Hi @fjaviersanchez , @damonge - is it possible to change the settings of this folder, such that everyone can access the simulation? This way, both @cramirezpe and @damonge would be able to run their analysis script and compare results.

@fjaviersanchez (Collaborator) commented:

I just changed the permissions so it's public for everyone (I did the same for the previous "fast" version as well). Please let me know if this works. If not, I can try putting the files at some other location.

@cramirezpe (Collaborator) commented:

Nice to have the huge simulation available, I'll see how it compares with the faster shear simulation.

@damonge , you were asking for the d, e1, e2 maps. I've shared the folder where I have all the analysis. Concretely, here:
/global/cscratch1/sd/cramirez/CoLoRe_CCL/sims/javier/n_4096/ccl_data/20200913_150932/
you can find results for nside=1024. You can load the different outputs using np.loadtxt(filename) (maybe not the most efficient way of storing large data).


Apart from that, I continued the analysis of what I had so far. I compared the r_smooth=5 simulation with the r_smooth=2 one.

The first thing to note here is that the r_smooth=5 simulation has n_grid=4095 vs n_grid=4096 for the other one. Is there a reason for that @javier?

Comparing both simulations at the same time we obtain (Sim 5 is the sim with r_smooth=5 and Sim 2 the r_smooth=2 one)

And the ratio sim/prediction (I restricted the y axis to make it easier to see the low-ell regions):

In general the new r_smooth=5 simulation behaves better than the previous r_smooth=2 in low ell regions (as expected) but worse at high ell regions (also as expected).

Then the B-modes:

where we see again that the new r_smooth=5 performs better.


I also made a comparison between the faster shear and the slow shear (for fewer galaxies than the sim by Javier). But I'm observing non-zero B-modes at high ell that I don't see in other sims. This could be an error on my side, so I will share the plots once I'm confident about them.

@fjaviersanchez (Collaborator) commented:

@cramirezpe there's no reason it's 4095 except that I don't know how to write... 🤦 I can rerun if needed. Apologies!

@fjaviersanchez (Collaborator) commented:

Another option is to make the fast version with n_grid=4095 too :p

@andreufont (Contributor) commented:

@fjaviersanchez - I wasn't aware you could do an FFT with an odd number of cells. Is it possible that the code is secretly rounding it to 4096 or 4094?

@fjaviersanchez (Collaborator) commented:

I think that this is not the case anymore. You may lose a bit of performance (I think it might not even be the case with the newer processors) but it will still compute it with an odd number of cells (as far as I know).

@fjaviersanchez (Collaborator) commented:

Anyway, I'm rerunning with 4096 (I got some extra hours at NERSC so let's just use them).
