
Output P(k) #38

Open

londumas opened this issue Dec 21, 2018 · 12 comments

@londumas (Contributor)

It would be very useful to have a function in https://github.com/igmhub/picca, called something like apply_pk_lyacolore(), in https://github.com/igmhub/picca/blob/master/py/picca/fitter2/pk.py.
It would transform the linear power spectrum into the linear power spectrum expected from the LyaCoLoRe mocks, similarly to how transfer functions are used in cosmology.
Here are the different effects I see (a rough sketch of such a function is given after this list):

  • resolution of the mocks, i.e. the size of the cells
  • the effect of the threshold used to place quasars.
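
A minimal sketch of what such a function could look like, purely as an illustration (this is not an existing picca function): only the cell resolution is modeled here, approximated as an isotropic Gaussian window, and the quasar-threshold effect is left out.

import numpy as np

def apply_pk_lyacolore(k, pk_lin, tracer1, tracer2, **kwargs):
    """
    Hypothetical sketch only: multiply the linear P(k) by an effective
    transfer function for the LyaCoLoRe mocks. Only the cell resolution
    is modeled, as an isotropic Gaussian window; the quasar-threshold
    effect would need its own term.
    """
    # Gaussian-equivalent width of a top-hat cell of size kwargs['cell_size']
    sigma_cell = kwargs['cell_size']/np.sqrt(12.)
    window2 = np.exp(-k**2*sigma_cell**2)   # |W(k)|^2 suppression of P(k)
    return pk_lin*window2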
@londumas (Contributor, Author)

Here is the status of the chi2 for the auto- and cross-correlations when fitting with r in [10, 180],
for v4.0.<*>/quick-2.0/, for the 10 realizations.

[Figure: cf_chi2_vs_index]

[Figure: xcf_chi2_vs_index]

@londumas (Contributor, Author)

Adding the model from PR igmhub/picca#504, i.e. a Gaussian smoothing of the P(k), improves the chi2 on the stack of the 10 auto-correlations by 1085 and by 233 for the cross-correlation. However, it does not seem to capture 100% of the effect; for example, the region around the BAO peak is not well modeled:

[Figures: wedge_0, wedge_1, wedge_2, wedge_3]

[Figures: xcf_wedge_0, xcf_wedge_1, xcf_wedge_2, xcf_wedge_3]

@fjaviersanchez (Collaborator)

@londumas, what values are you choosing for the smoothing radii? Are you leaving them as free parameters or setting them beforehand?

@londumas (Contributor, Author)

londumas commented Jan 22, 2019

I leave the parameters free; here is what I get on the stack of the 10 correlations:

for the auto-correlation:

  • par_sigma_smooth = 3.116 +/- 0.019
  • per_sigma_smooth = 3.570 +/- 0.032

for the cross-correlation:

  • par_sigma_smooth = 1.73 +/- 0.27
  • per_sigma_smooth = 3.290 +/- 0.036

where the following function multiplies the input CAMB × Kaiser P(k):

import scipy as sp
# muk is assumed to be the module-level grid of mu_k = k_parallel/k used in picca/fitter2/pk.py

def pk_gauss_smoothing(k, pk_lin, tracer1, tracer2, **kwargs):
    """
    Apply a Gaussian smoothing to the full correlation function.
    """
    kp  = k*muk                     # k along the line of sight
    kt  = k*sp.sqrt(1.-muk**2)      # k transverse to the line of sight
    st2 = kwargs['per_sigma_smooth']**2
    sp2 = kwargs['par_sigma_smooth']**2
    return sp.exp(-(kp**2*sp2+kt**2*st2)/2.)
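
For reference, here is a minimal standalone version of the same kernel (numpy instead of picca's module-level scipy/muk grid; the k grid and the single muk value are only illustrative, and the sigma values are the fitted auto-correlation numbers above):

import numpy as np

k = np.logspace(-3, 1, 200)            # illustrative wavenumber grid
muk = 0.5                              # illustrative mu_k = k_parallel/k value

kp = k*muk                             # component along the line of sight
kt = k*np.sqrt(1. - muk**2)            # component transverse to the line of sight
sp2 = 3.116**2                         # par_sigma_smooth**2 from the auto fit
st2 = 3.570**2                         # per_sigma_smooth**2 from the auto fit

smoothing = np.exp(-(kp**2*sp2 + kt**2*st2)/2.)
# pk_smoothed = pk_camb_kaiser*smoothing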

@fjaviersanchez (Collaborator)

fjaviersanchez commented Jan 22, 2019

Thanks a lot! In CoLoRe, the input isotropic power spectrum from CAMB is first smoothed as:

new_Pk = CAMB_Pk*(exp(-0.5*k**2*r_sm**2))

where r_sm is the input value from the CoLoRe parameter file. The problem is that the final product will have this smoothing plus the smoothing within a cell (which is not strictly Gaussian, but can be approximated as one, so it is a good idea to leave the r_sm parameter floating as you did).

After this, the RSD are applied from the velocity field. I think the order won't make a big difference, but I am not 100% sure. Is there an easy way to check this?
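
A rough standalone sketch of the combined suppression of P(k) under these assumptions: the exp(-0.5*k**2*r_sm**2) input smoothing quoted above, plus a Gaussian approximation of the cell-assignment window with sigma_cell = cell_size/sqrt(12). The default values are only placeholders, and this is not code from CoLoRe itself.

import numpy as np

def expected_pk_suppression(k, r_sm=2.0, cell_size=2.4):
    """
    Hypothetical sketch: total multiplicative suppression of the CAMB P(k)
    from the CoLoRe input smoothing plus an approximately Gaussian
    cell-assignment window.
    """
    input_smoothing = np.exp(-0.5*k**2*r_sm**2)   # applied by CoLoRe to the CAMB P(k)
    sigma_cell = cell_size/np.sqrt(12.)           # Gaussian-equivalent width of a top-hat cell
    cell_window2 = np.exp(-k**2*sigma_cell**2)    # |W(k)|^2 of the Gaussian approximation
    return input_smoothing*cell_window2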

@londumas (Contributor, Author)

@fjaviersanchez or @jfarr03, very good. What was the input r_sm in v4.0.<*>?

@fjaviersanchez (Collaborator)

@londumas, sorry for the slow reply. I don't know the answer. I looked for the parameter file(s) in the mocks directory but I didn't see anything. I saw that in the repo the script run_process_colore_multi_node.sh sets r_sm to 2 Mpc/h. @jfarr03 do you have this information on hand?

@jfarr03 (Collaborator)

jfarr03 commented Jan 24, 2019

Sorry, yes, the smoothing radius is set to 2 Mpc/h for all CoLoRe runs at the moment!

@andreufont (Collaborator)

Hi @londumas - Note that you don't expect to have a perfect match, since (as Javier said) there is also the pixelization effect, similar to the "binsize" smoothing in the picca fitter. For most runs James is using a cell size of 2.4 Mpc/h. But there is another thing missing in the modelling, and that is the lognormal transformation applied to the Gaussian field in the mocks.

When we transform the Gaussian field into a lognormal field, to generate what we call the "physical density", the large-scale clustering is unchanged (the lognormal field has bias=1 and beta=0), but on small scales the correlation function is changed. By how much at a given scale, I don't know, but it should be easy to compute with equations similar to appendix A of arXiv:1205.2018.
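
For concreteness, a minimal sketch of the standard lognormal mapping of the correlation function (my summary of the kind of relation in appendix A of arXiv:1205.2018, not code from the mocks):

import numpy as np

def xi_lognormal(xi_gauss):
    """
    If xi_gauss is the correlation function of the Gaussian field, the
    lognormal 'physical density' field has 1 + xi_ln = exp(xi_gauss),
    i.e. xi_ln = exp(xi_gauss) - 1: unchanged on large scales
    (xi_gauss << 1), boosted on small scales.
    """
    return np.expm1(xi_gauss)   # exp(x) - 1, numerically stable for small xi_gauss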

@londumas (Contributor, Author)

Here is the status of this ticket as of v6.0/v6.0.<*>/eboss-0.0/:

  • the chi2 is good for both the fit to the stack of the 10 auto-correlations and the fit to the stack of the 10 cross-correlations

  • maybe we have to work on the model of the BAO peak. What are the expected effects on it?

  • the distortion matrix seems to do its job correctly

  • the sigma_smooth parameters of the power-spectrum smoothing are quite different between the auto and the cross. This produces a very bad chi2 for the combined fit to the stack of the auto and the stack of the cross.

  • fit of the stack of the 10 auto-correlations:

zeff                        2.31        
bias_LYA            
bias_eta_LYA          -0.159 +/- 0.001  
beta_LYA              1.198 +/- 0.014   
bias_eta_QSO        
beta_QSO            
ap                    1.024 +/- 0.013   
at                    0.986 +/- 0.015   
par_sigma_smooth      3.307 +/- 0.039   
per_sigma_smooth      3.422 +/- 0.065   
chi2/(ndata-npar)     1612.9/(1590-6)   
probability                 0.3 

[Figure: auto_stack]

  • fit of the stack of the 10 cross-correlations:
zeff                        2.31        
bias_LYA            
bias_eta_LYA          -0.155 +/- 0.002  
beta_LYA              1.123 +/- 0.023   
bias_eta_QSO            1.0 +/- 0.0     
beta_QSO               0.262 +/- 0.0    
ap                    1.025 +/- 0.015   
at                    0.984 +/- 0.013   
drp_QSO               0.134 +/- 0.027   
par_sigma_smooth      -0.003 +/- 0.759  
per_sigma_smooth      3.051 +/- 0.062   
sigma_velo_lorentz_Q  -1.369 +/- 0.13   
chi2/(ndata-npar)     3211.1/(3180-8)   
probability                 0.31  
  • combined fit to both stacks:
zeff                        2.31        
bias_LYA            
bias_eta_LYA          -0.154 +/- 0.001  
beta_LYA              1.162 +/- 0.011   
bias_eta_QSO            1.0 +/- 0.0     
beta_QSO              0.245 +/- 0.001   
ap                     1.025 +/- 0.01   
at                     0.985 +/- 0.01   
drp_QSO               0.123 +/- 0.026   
par_sigma_smooth      2.723 +/- 0.036   
per_sigma_smooth      3.381 +/- 0.044   
sigma_velo_lorentz_Q   0.0 +/- 1.627    
chi2/(ndata-npar)     5451.0/(4770-9)   
probability                 0.0   

[Figure: cross_stack]

[Figure: sigma_smooth]

@londumas (Contributor, Author)

@jfarr03 and @andreufont, here is the status of this ticket as of v8.0/v8.0.[|0,9|]:

  1. the auto-correlation of the raw mocks: the stack of the 10 boxes and one fit of the stack,
    similar to Bautista2018 (https://arxiv.org/pdf/1702.00176.pdf, figure 11)
  2. the cross-correlation of the raw mocks: the stack of the 10 boxes and one fit of the stack,
    similar to dMdB2018 (https://arxiv.org/pdf/1708.02225.pdf, figure 11)

Here are some remarks:

  • the peak looks a bit sharper in the auto-correlation
  • the cross-correlation does not drop off as expected at large scales. Maybe the shuffling is not perfect, or it comes from the --no-remove-mean-lambda-obs option?

These are things to look at, since it would be nice to fit the broadband correlation well. In DR12 it was not perfect either.

[Figure: auto_raw_stack]

[Figure: cross_raw_stack]

@londumas (Contributor, Author)

Interestingly, the fit is more pleasing to the eye when not using the small scales, see below. This tells me that the P(k) at small scales (high k) is not perfectly understood. Do you have a prediction for it, or a measured P(k)?
[Figure: fit_cf_raw-different-range]
