
cosmosis's People

Contributors

amanzotti, angelachen626, annis, bstoelzner, devonhollowood, dodelson, itrharrison, jbkowalkowski, jhod0, joezuntz, johannesulf, marcpaterno, mdschneider, minaskar, ntessore, sebastianbocquet, soares-santos, tilmantroester, williamjameshandley

cosmosis's Issues

Demo after installation

Hello, I just successfully installed CosmoSIS on my Intel Mac. I tried running demo 1, but I keep getting this error: OSError: unable to open configuration file 'demos/demo1.ini'

Fast calculation for large SN likelihoods

Rick Kessler suggested a speed trick that's useful for many-sample SN likelihoods: calculate a diagonal-covariance likelihood first, and only if the result is below a certain threshold compute the likelihood with the full dense covariance matrix.
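A minimal numpy sketch of the idea (the function name and threshold value are illustrative, not the interface of any existing CosmoSIS SN module):

import numpy as np

def sn_loglike(residual, cov, chi2_threshold=2000.0):
    """Two-stage SN likelihood: cheap diagonal chi^2 first, full dense
    covariance only for points that look promising (illustrative sketch)."""
    # Stage 1: diagonal-only chi^2, O(N) instead of O(N^2)
    chi2_diag = np.sum(residual**2 / np.diag(cov))
    if chi2_diag > chi2_threshold:
        # Far from the data: return the cheap estimate as-is
        return -0.5 * chi2_diag
    # Stage 2: full dense covariance via a Cholesky solve (in practice the
    # factorisation would be done once at setup, not on every call)
    chol = np.linalg.cholesky(cov)
    x = np.linalg.solve(chol, residual)
    return -0.5 * np.dot(x, x)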

MPI runs produce "normal" stderr and stdout and log info but empty output files

When I run CosmoSIS (with the proper MPI flag) on our HPC, either interactively or remotely via Slurm, the program executes and produces stderr and stdout files containing the usual log. The log even states that the output is being saved to the output file. For example, there are lines like this:

Saving output -> nauti_des-y3_kids-1000_chain_nauti_des-y3_and_kids-1000.txt

However, the output files above were created and then left untouched, i.e. there was no write (or read) after their creation at the start of the program. I tried two different samplers, polychord and nautilus, to verify that the issue is with CosmoSIS I/O itself.

I would appreciate any hint on how to proceed with debugging.

Setting prior on derived parameter

Hi!

I am trying to compare likelihood analysis results between a fiducial pipeline and a modified version of CLASS that I have developed. However, there is a small difference: in the fiducial pipeline, h0 is a directly sampled parameter with a set prior, while in my version of CLASS it is a derived parameter.

Hence I'd like to ask whether it is possible, somehow, to set a prior condition on the derived H0, for example by choosing, in my CLASS module after H0 has been calculated, to discard the point if it falls outside the fiducial prior range. That way I can ensure that my pipeline behaves equivalently to the fiducial pipeline (one possible workaround is sketched after this message).

Thank you!

Lisa
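One possible workaround for the question above (a hedged sketch of a user-side module, not official CosmoSIS guidance; the range and the failure convention are assumptions): check the derived H0 inside the module's execute function and return a non-zero status, so the sampler treats the point as failed, mimicking a top-hat prior on the derived parameter.

from cosmosis.datablock import names

H0_MIN, H0_MAX = 55.0, 91.0   # illustrative prior range on the derived H0

def execute(block, config):
    # ... run the modified CLASS and store the derived H0 in the block ...
    h0 = block[names.cosmological_parameters, "h0"]
    if not (H0_MIN <= h0 <= H0_MAX):
        # A non-zero return marks the point as failed; with fatal_errors=F
        # the sampler then discards it rather than crashing.
        return 1
    return 0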

default halo model parameters for mead2020?

I'm working on setting up a test pipeline using HMcode2020 for the nonlinear matter power spectrum and want to make sure I understand the input parameters.

Based on my reading of the pycamb code, setting the nonlinear power spectrum to 'mead2020' in CAMB specifies the dark-matter-only version of HMcode, while mead2020_feedback specifies the version with baryonic feedback. The baryonic feedback version needs three parameters, A, eta, and logT, which have default values specified.

The cosmosis camb interface looks like it will require A and eta0 to be specified as part of halo_model_parameters in the values file whether or not _feedback is part of the nonlinear model name. Because of this, even when I just specify mead2020 I get errors for not having those parameters, even though (I think) they wouldn't be used.

If I'm understanding this structure correctly, it might be preferable to edit the camb_interface in CosmoSIS to require those feedback parameters only when the nonlinear model is specifically mead2020_feedback, or at least to set up default values matching the CAMB defaults so that they can be optional in CosmoSIS runs.
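For reference, in plain pycamb (outside CosmoSIS) the two HMcode-2020 variants can be selected as below, and only the feedback variant takes baryon parameters, which is the behaviour the report suggests the interface should mirror. Treat the exact keyword names as assumptions about current CAMB rather than CosmoSIS options.

import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.12)
pars.set_matter_power(redshifts=[0.0], kmax=10.0)

# Dark-matter-only HMcode-2020: no baryon feedback parameters needed
pars.NonLinearModel.set_params(halofit_version='mead2020')

# HMcode-2020 with feedback: baryon parameters (e.g. logT_AGN) apply,
# with defaults supplied by CAMB if they are not given
pars.NonLinearModel.set_params(halofit_version='mead2020_feedback',
                               HMCode_logT_AGN=7.8)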

Adding prior for derived parameters (parameters not sampled in)

Hi, when I run CosmoSIS I do a computation that I write to the chains, but I would like to add a prior on this parameter. I tried including the parameter name under [cosmological_parameters] in the priors ini file, but it appears the prior was ignored. Is there a way to include a prior on such a parameter?

Thank you!

Problems with using wmap9 likelihood

For a presentation I wanted to run the simple test sampler with only the cosmological parameters h0, omega_m, omega_b, omega_lambda and omega_k. I installed CosmoSIS through Python, but the standard-library WMAP version seems to be Fortran. I installed cfitsio and lapack and ran the Makefile in the wmap9 folder to create the wmap_interface.so file. Now when I reference it in the .ini file I want to run (the full pipeline is consistency → camb → wmap9) I get:

Traceback (most recent call last):
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/runtime/module.py", 
     line 287, in load_library library = ctypes.cdll.LoadLibrary(filepath)   
   File "/home/hstemmler/env/lib/python3.9/ctypes/__init__.py", 
     line 452, in LoadLibrary return self._dlltype(name)
   File "/home/hstemmler/env/lib/python3.9/ctypes/__init__.py", 
     line 374, in __init__ self._handle = _dlopen(self._name, mode)
OSError: /home/hstemmler/cosmosis-standard-library/./likelihood/wmap9/wmap_interface.so: 
  undefined symbol: __cosmosis_modules_MOD_datablock_get_double_array_1d

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
   File "/home/hstemmler/env/bin/cosmosis", line 4, 
     in <module> status = cosmosis.main.main()
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/main.py", 
     line 389, in main return run_cosmosis(args)
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/main.py", 
     line 193, in run_cosmosis pipeline = LikelihoodPipeline(ini, override=args.variables, 
     values=values, only=args.only)
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/runtime/pipeline.py", 
     line 735, in __init__ super().__init__(arg=arg, load=load, modules=modules)
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/runtime/pipeline.py", 
     line 383, in __init__ self.modules = [
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/runtime/pipeline.py", 
     line 384, in <listcomp> module.Module.from_options(module_name,self.options,
     self.root_directory)
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/runtime/module.py", 
     line 372, in from_options m = cls(module_name, filename,
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/runtime/module.py", 
     line 103, in __init__ self.library, language = self.load_library(filename)
   File "/home/hstemmler/env/lib/python3.9/site-packages/cosmosis/runtime/module.py", 
     line 291, in load_library raise SetupError("You specified a path %s for a module. 
    "cosmosis.runtime.module.SetupError: You specified a path 
    /home/hstemmler/cosmosis-standard-library/./likelihood/wmap9/wmap_interface.so 
    for a module. File exists, but could not be opened. 
    Error was /home/hstemmler/cosmosis-standard-library/./likelihood/wmap9/wmap_interface.so: 
    undefined symbol:  __cosmosis_modules_MOD_datablock_get_double_array_1

To me it looks like a compatibility problem between wmap9 being a Fortran package and my having installed CosmoSIS through Python, but I'm not well versed in coding so I really don't know what to do here.

Thank you in advance!

Keeping count of module calls

I would like to know whether there is a way in CosmoSIS to keep track of how many times a module has been called, for instance when running a sampler. Is there perhaps an iteration number stored somewhere in the data block?

I am asking because I would like to call part of an execute function only at certain times, as it is computationally very costly to do so at every step of the sampler. The number of calls would be useful in that case, since a part of the code could then simply be evaluated every n steps and a cached result used otherwise.
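One lightweight pattern that works today (a sketch of a user-level workaround, not a built-in CosmoSIS counter; expensive_calculation is a hypothetical placeholder) is to keep the count in the mutable config object that setup() returns and every execute() call receives. Note that under MPI each process keeps its own count.

def setup(options):
    # The returned config dict is handed back to every execute() call,
    # so it can carry persistent state such as a call counter and a cache.
    return {"n_calls": 0, "every": 10, "cached": None}

def execute(block, config):
    config["n_calls"] += 1
    if config["cached"] is None or config["n_calls"] % config["every"] == 1:
        # Expensive part, evaluated only every `every` calls
        config["cached"] = expensive_calculation(block)   # hypothetical helper
    result = config["cached"]
    # ... write `result` to the block as usual ...
    return 0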

trouble installing cosmosis

I am following the instructions on:

https://cosmosis.readthedocs.io/en/latest/intro/installation.html

under the section "Conda-Forge (from scratch) on Linux and Intel Macs" to do an installation from scratch on SL7 linux, but I am getting the following errors:

Retrieving notices: ...working... failed
Traceback (most recent call last):
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/exceptions.py", line 1129, in __call__
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/cli/main.py", line 86, in main_subshell
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/cli/conda_argparse.py", line 93, in do_call
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/notices/core.py", line 75, in wrapper
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/notices/core.py", line 39, in display_notices
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/notices/http.py", line 36, in get_notice_responses
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/notices/http.py", line 39, in <genexpr>
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/concurrent/futures/_base.py", line 458, in result
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/concurrent/futures/thread.py", line 58, in run
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/notices/http.py", line 42, in <lambda>
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/notices/cache.py", line 37, in wrapper
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/notices/http.py", line 58, in get_channel_notice_response
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/requests/sessions.py", line 600, in get
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/requests/adapters.py", line 460, in send
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/requests/adapters.py", line 263, in cert_verify
OSError: Could not find a suitable TLS CA certificate bundle, invalid path: /data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/certifi/cacert.pem

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/bin/conda", line 15, in <module>
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/cli/main.py", line 129, in main
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/exceptions.py", line 1429, in conda_exception_handler
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/exceptions.py", line 1132, in __call__
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/exceptions.py", line 1172, in handle_exception
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/exceptions.py", line 1183, in handle_unexpected_exception
  File "/data/des91.b/data/mwang/cosmosisdesy6kp/env/lib/python3.10/site-packages/conda/exceptions.py", line 1245, in print_unexpected_error_report
ModuleNotFoundError: No module named 'conda.cli.main_info'

However, if I change the Python version from 3.9 to 3.10 in the following command:

conda install -y python=3.9 cosmosis cosmosis-build-standard-library

The installation completes successfully.

Constant extrapolation of matter power spectrum at large scales

Hi, I have observed that extending the kmin value below 1e-4 when calling CAMB for the matter power spectrum leads to a constant extrapolation using the value of the spectrum at k=1e-4. To reproduce the issue, one option is to set kmin=1e-5 in examples/des-y3.ini instead of kmin=1e-4. I have tested decreasing kmin with an external version of CAMB and there the extrapolation seems to follow a power law. Might this constant extrapolation be coming from camb_interface.py?

Thanks in advance

defining H(z) in class interface

Hi,
I need to use the bossdr12 likelihood for BAO, which requires 'h' in the distances section, and at the moment this is not defined in the class interface. I have already tried multiple ways of defining this quantity, but unfortunately nothing has worked.
I would really appreciate your help.
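As a stop-gap while the class interface lacks this output, one option is a tiny extra module, placed after class in the pipeline, that fills in distances/h itself. The sketch below is an assumption-laden illustration: it assumes a flat LCDM background, that the class module has already written a redshift grid z to the distances section, and that the likelihood expects H(z) in units of 1/Mpc (the convention I believe the camb interface uses, but worth verifying against what bossdr12 expects).

import numpy as np
from cosmosis.datablock import names

C_KM_S = 299792.458  # speed of light in km/s

def setup(options):
    return {}

def execute(block, config):
    z = block[names.distances, "z"]
    h0 = block[names.cosmological_parameters, "h0"]        # H0 / (100 km/s/Mpc)
    omega_m = block[names.cosmological_parameters, "omega_m"]
    # Flat LCDM: H(z) = H0 * sqrt(Omega_m (1+z)^3 + (1 - Omega_m))
    H_kms_per_mpc = 100.0 * h0 * np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))
    # Convert to 1/Mpc (assumed unit convention for the distances section)
    block[names.distances, "h"] = H_kms_per_mpc / C_KM_S
    return 0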

Missing "fastpt" module when trying to run project_2d

The fast-pt python module seems to be missing in the default conda environment, which gives an error when trying to run project_2d. The issue is solved by installing the fast-pt module in the conda environment.

modifications to class_interface.c

Hi @joezuntz

I noticed that you have now replaced the old class_v2.7 with the latest class_v3.2. That's really great; I had previously done it myself, so it's very nice that it has been replaced consistently.

I have a few suggestions and a question about some modifications to class interface that may be helpful for some users, as I needed them myself.

  1. Add the calculation of the matter transfer function for a given (k, z) to class_interface.py. To do this, one needs to make the following modifications:
  • in boltzmann/class/class_v3.2.0/python/classy.pxd, add the declaration of the new function of the perturbations.c module in class_v3.2, perturbations_sources_at_k_and_z(), which calculates the transfer function for a choice of (k, z);
  • in boltzmann/class/class_v3.2.0/python/classy.pyx, include a call to the above function, let's say called tk_lin() in analogy to pk_lin();
  • include a call to tk_lin() in boltzmann/class/class_interface.py.
  2. Make the calculation of the CMB optional. I found this important when running MCMC for galaxy clustering statistics only, since calculating the Cls makes the MCMC too slow at each step in the chain. So it would be nice for the output list of class_interface.py to have the option of only calculating the matter power spectrum and transfer function, 'mPk mTk', as one can do when running class directly.
  3. Would it be possible to have an extrapolation of the transfer function similar to what you have for the power spectrum? This would be helpful when calculating loops in galaxy statistics due to primordial non-Gaussianity using FFTLog, which requires the kernels (which include the transfer function) at low and high k values. In my version of CosmoSIS, I added a function to do this using the extrapolated P(k): dividing it by the primordial power spectrum and then building an interpolator for the transfer function using init_interp_2d_akima_grid(), which extends the k range covered by the pk() extrapolator (roughly the approach sketched after this list). But this seems too convoluted. Is there perhaps a better way of doing this?
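A rough Python version of the P(k)-based route mentioned in point 3 (purely illustrative; it uses the textbook single-redshift parametrisation P_lin(k) = A k^n_s T(k)^2, and normalisation conventions for T(k) vary):

import numpy as np

def transfer_from_pk(k, p_lin, n_s):
    """Recover a transfer function from an (extrapolated) linear P(k),
    normalised so that T(k) -> 1 on the largest available scale."""
    t2 = p_lin / k**n_s
    return np.sqrt(t2 / t2[0])   # assumes k[0] is deep in the T ~ 1 regime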

Installation error - 'omp.h' file not found

I'm getting this error when trying to install using the instructions under 'Conda-Forge (from scratch) on M1 (Silicon) Macs':

cd /Users/annaporredon/Codes/cosmosis_v2/cosmosis-standard-library/boltzmann/class/class_v3.2.0/build;clang -O3  -g -fPIC -O3 -g -fPIC -fopenmp -I/Users/annaporredon/Codes/cosmosis_v2/env/lib/python3.9/site-packages/cosmosis/ -std=c99  -std=gnu99 -D__CLASSDIR__='"/Users/annaporredon/Codes/cosmosis_v2/cosmosis-standard-library/boltzmann/class/class_v3.2.0"' -DHYREC -I../include -I../external/RecfastCLASS -I../external/heating -I../external/HyRec2020 -c ../tools/growTable.c -o growTable.o
In file included from ../tools/growTable.c:5:
In file included from ../include/growTable.h:8:
../include/common.h:12:10: fatal error: 'omp.h' file not found
#include "omp.h"
         ^~~~~~~
1 error generated.
make[2]: *** [growTable.o] Error 1

Segmentation Fault when running polychord sampler with MPI

I keep getting a segmentation fault when I try to run the DES Y3 likelihood using the polychord sampler and MPI. CosmoSIS runs without any issue if I either (a) use another sampler or (b) run without MPI, so I cannot pinpoint where exactly the issue arises.

I attach the output with the --segfaults flag below. The error message suggests that the MPI check in initialise_mpi subroutine of polychord fails.

Thank you for your help!

Setting up module 2pt_like
---------------------------
Doing point-mass marginalization: True
Using sigma_crit_inv factors in pm-marg: True
Doing small-scale marginalization: False
Using a single grade of parameter speeds in polychord.
Polychord num_repeats = 60  (from parameter file)
PolyChord: MPI is already initilised, not initialising, and will not finalize
##################################################################################

Your program crashed with an error signal: 11

This the trace of C functions being called
(the first one or two may be part of the error handling):
##################################################################################

/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/datablock/cosmosis_py/../libcosmosis.so(cosmosis_segfault_handler+0x1f)[0x151f686491df]
/lib64/libc.so.6(+0x4eb20)[0x151f74da2b20]
/home/nguyenmn/.conda/envs/cosmosis-gnu/lib/python3.9/site-packages/mpi4py/../../../libmpi.so.40(PMPI_Comm_rank+0x37)[0x151f6767ead7]
/sw/pkgs/arc/intel/2022.1.2/mpi/2021.5.1/lib/libmpifort.so.12(mpi_comm_rank_+0xb)[0x151f431d1d4b]
/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/samplers/polychord/polychord_src/libchord_mpi.so(__random_module_MOD_initialise_random+0x3e)[0x151f43ee356e]
/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/samplers/polychord/polychord_src/libchord_mpi.so(__interfaces_module_MOD_run_polychord_full+0x5c7)[0x151f43f18067]
/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/samplers/polychord/polychord_src/libchord_mpi.so(polychord_c_interface+0x83f)[0x151f43f188af]
/home/nguyenmn/.conda/envs/cosmosis-gnu/lib/python3.9/lib-dynload/../../libffi.so.7(+0x69ed)[0x151f6f52e9ed]
/home/nguyenmn/.conda/envs/cosmosis-gnu/lib/python3.9/lib-dynload/../../libffi.so.7(+0x6077)[0x151f6f52e077]
/home/nguyenmn/.conda/envs/cosmosis-gnu/lib/python3.9/lib-dynload/_ctypes.cpython-39-x86_64-linux-gnu.so(+0x13df7)[0x151f6f547df7]
/home/nguyenmn/.conda/envs/cosmosis-gnu/lib/python3.9/lib-dynload/_ctypes.cpython-39-x86_64-linux-gnu.so(+0x1437c)[0x151f6f54837c]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyObject_MakeTpCall+0x316)[0x55690b8a6ba6]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyEval_EvalFrameDefault+0x535b)[0x55690b944dbb]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyFunction_Vectorcall+0x19a)[0x55690b90088a]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyEval_EvalFrameDefault+0x609)[0x55690b940069]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyFunction_Vectorcall+0x19a)[0x55690b90088a]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyEval_EvalFrameDefault+0x609)[0x55690b940069]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyFunction_Vectorcall+0x19a)[0x55690b90088a]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyEval_EvalFrameDefault+0x3bc)[0x55690b93fe1c]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(+0x138550)[0x55690b899550]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyFunction_Vectorcall+0x336)[0x55690b900a26]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyEval_EvalFrameDefault+0x11e7)[0x55690b940c47]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyFunction_Vectorcall+0x19a)[0x55690b90088a]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyEval_EvalFrameDefault+0x4c84)[0x55690b9446e4]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(+0x138550)[0x55690b899550]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(_PyEval_EvalCodeWithName+0x47)[0x55690b980047]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(PyEval_EvalCodeEx+0x39)[0x55690b980089]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(PyEval_EvalCode+0x1b)[0x55690b9800ab]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(+0x251909)[0x55690b9b2909]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(+0x28c3a4)[0x55690b9ed3a4]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(+0x118d33)[0x55690b879d33]
/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/python3.9(PyRun_SimpleFileExFlags+0x19c)[0x55690b9f783c]
##################################################################################


And here is the python faulthandler report and trace:

Fatal Python error: Segmentation fault

Current thread 0x0000151f75ee2740 (most recent call first):
  File "/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/samplers/polychord/polychord_sampler.py", line 266 in sample
  File "/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/samplers/polychord/polychord_sampler.py", line 204 in worker
  File "/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/main.py", line 84 in sampler_main_loop
  File "/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/main.py", line 417 in run_cosmosis
  File "/nfs/turbo/lsa-nguyenmn/cosmosis/cosmosis/main.py", line 543 in main
  File "/home/nguyenmn/.conda/envs/cosmosis-gnu/bin/cosmosis", line 4 in <module>
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node gl-login2 exited on signal 11 (Segmentation fault).

error no module named consistency_interface

Hi @joezuntz! I have tried to install CosmoSIS on my laptop and got the error "no module named consistency_interface" when I tried to run cosmosis planck.ini.
Could you help me fix this problem?

What is lock file?

Hi Joe,

I run CosmoSIS on a supercomputer and get an error like this:

Another CosmoSIS process was trying to use the same output file (output/output_3x2pt_SR_BOSS_hsc_case6.txt). 
This means one of three things:
1) you were trying to use MPI but left out the --mpi flag
2) you have another CosmoSIS run going trying to use the same filename
3) your file system cannot cope with file locks properly.  
In the last case you can set lock=F in the [output] section to disable this feature.

I did use MPI, and this ran fine on another supercomputer; on the new supercomputer it does not work. I was also not running another CosmoSIS job with the same filename. So I tried setting lock=F and it works. What does lock=F actually do, and is the output from such a run still trustworthy?

Undefined symbol error with ACT DR4 likelihood

Hi @joezuntz ,

I have just installed the new cosmosis version and included the ACT likelihood (which runs fine), but when I try to run the test sampler on the ACT likelihood, I get

File "/home/margaret/env/lib/python3.9/site-packages/cosmosis/runtime/module.py", line 103, in __init__
    self.library, language = self.load_library(filename)
  File "/home/margaret/env/lib/python3.9/site-packages/cosmosis/runtime/module.py", line 291, in load_library
    raise SetupError("You specified a path %s for a module. "
cosmosis.runtime.module.SetupError: You specified a path /home/margaret/cosmosis-standard-library/likelihood/actpolfull_dr4.01/actpol.so for a module. File exists, but could not be opened. Error was /home/margaret/cosmosis-standard-library/likelihood/actpolfull_dr4.01/actpol.so: undefined symbol: __cosmosis_modules_MOD_datablock_put_double

Could you please tell me what I'm doing wrong?

Thank you!

des-y3 with class

Hi Joe,

I am trying to test des-y3 with class rather than camb. I am starting from your example ini file, which uses camb, and setting it up to use class instead, but when I run the code I get errors due to the format of the P(k)s.

Do you have an example ini file that uses class for des-y3?

Cheers,
Boris

Compiler error on Ubuntu installation

Hi. I'm trying to install CosmoSIS in a conda environment, but I keep getting the same error when running make in the standard library. I have tried changing the Python version and I still get the same error. Here is the log:

+ make
make: Entering directory '/home/eduardo/firecrown/cosmosis-standard-library'
make[1]: Entering directory '/home/eduardo/firecrown/cosmosis-standard-library/boltzmann'
make[2]: Entering directory '/home/eduardo/firecrown/cosmosis-standard-library/boltzmann/isitgr'
cd camb_Jan12_isitgr && make libcamb.so
make[3]: Entering directory '/home/eduardo/firecrown/cosmosis-standard-library/boltzmann/isitgr/camb_Jan12_isitgr'
/home/eduardo/miniconda3/envs/firecrown_developer/bin/x86_64-conda-linux-gnu-gfortran -O3 -g -fPIC -fopenmp -I/home/eduardo/miniconda3/envs/firecrown_developer/lib/python3.11/site-packages/cosmosis/datablock -std=gnu -ffree-line-length-none -I. -c constants.f90
/home/eduardo/miniconda3/envs/firecrown_developer/bin/x86_64-conda-linux-gnu-gfortran -O3 -g -fPIC -fopenmp -I/home/eduardo/miniconda3/envs/firecrown_developer/lib/python3.11/site-packages/cosmosis/datablock -std=gnu -ffree-line-length-none -I. -c utils.F90
f951: internal compiler error: in cpp_diagnostic_at, at libcpp/errors.c:41
0x7f5f04842082 __libc_start_main
        ../csu/libc-start.c:308
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://github.com/conda-forge/ctng-compilers-feedstock/issues/new/choose> for instructions.

I appreciate the help.

C++ modules cause Cosmosis to complain about .ini parameters not being used

When we compile a C++ module into CosmoSIS, at run time it presents warnings of the form "Parameter '' in the [] section never used!". The C++ modules clearly do use the parameters, as they run correctly. See the cut-and-paste of the stream below.

This would merely be annoying, but it interacts poorly with a separate CosmoSIS bug with C++: when a parameter really is not found, an opaque and unhelpful message is emitted, and one has to dig to work out which parameter was not being read. A C++ debugger is invaluable here, which gives a sense of the scale of the issue.

One thing at a time: this issue is about the warnings that CosmoSIS generates in error.

=======
nid008549:y3_buzzard % srun -n 1 cosmosis --mpi buzzard.ini
DEBUG:root:CosmoSIS verbosity set to 40
Found 10 samples and 101 bins in redshift in file cluster_power_nz.txt
Calculating Limber: Kernel 1 = ('N', 'sample'), Kernel 2 = ('N', 'sample'), P_3D = MatterPower3D --> Output: galaxy_cl
Will project these spectra into 2D:
- galaxy_cl
**** WARNING: Parameter 'algorithm' in the [numberCounts] section never used!

**** WARNING: Parameter 'do_cartesian_product_of_bins' in the [numberCounts] section never used!

**** WARNING: Parameter 'eps_abs' in the [numberCounts] section never used!

**** WARNING: Parameter 'eps_rel' in the [numberCounts] section never used!

**** WARNING: Parameter 'lnm_high' in the [numberCounts] section never used!

**** WARNING: Parameter 'lnm_low' in the [numberCounts] section never used!

**** WARNING: Parameter 'lo_high' in the [numberCounts] section never used!

**** WARNING: Parameter 'lo_low' in the [numberCounts] section never used!
etc....

Have Importance sampler retain "nsample" metadata from input chain

As it's currently set up, the importance sampler output has the same number of rows as the input chain, and as far as I can tell it doesn't retain the nsample metadata that's reported at the end of polychord and multinest chains.

It might be useful for the importance sampler to retain that metadata, either as part of its own metadata or via a step that removes any excess samples from the original chain before running likelihoods. If the original chain output has more rows than should be retained for inferring the posterior, we'd want to make sure to remove those from the IS output as well.

N.B. That being said, this may be nice to have but unnecessary: as I was writing this up, I looked at line counts for a couple of different polychord and multinest outputs and found that the number of rows in the chain file is equal to nsample, so it's not necessary to trim any lines before plotting posteriors. If this is always the case (is it?), the IS posteriors will be fine without tracking nsample.

Installation issues on cluster using conda-forge

Just leaving this here as it might help others.

I ran into the following two issues when attempting to install on the Edinburgh Cuillin cluster using the "Conda-Forge (existing installation)" instructions:

  1. The first error occurred when I attempted to run "source cosmosis-configure" which resulted in the following error:

configure.py: error: unrecognized arguments: viminfo=NONE

This was solved by running the command "source cosmosis-configure --source SOURCE".

  2. The second error related to compiling EuclidEmulator2.

src/euclidemu2.cpp:488:62: error: invalid use of incomplete type 'PyFrameObject' {aka 'struct _frame'}

This was solved by downgrading Python from 3.11 to 3.10 in the "conda create -p ./env -c conda-forge cosmosis cosmosis-build-standard-library "numpy<1.24"" command:

conda create -p ./env -c conda-forge python=3.10 cosmosis cosmosis-build-standard-library "numpy<1.24"

Hope this helps someone :)

Installation errors on a linux cluster: "undefined reference to `memcpy@GLIBC_2.14`"

I've been trying to install on a Linux cluster running Red Hat Enterprise Linux Server 7.9. I could build the CosmoSIS environment successfully using the instructions for Conda-Forge (from scratch) on Linux, but I'm getting some errors when trying to run make in cosmosis-standard-library.

make[2]: Entering directory '/users/PCON0003/porredon/cosmosisv2/cosmosis-standard-library/shear/limber'
/users/PCON0003/porredon/cosmosisv2/env/bin/x86_64-conda-linux-gnu-cc -O3 -g -fPIC  -I/users/PCON0003/porredon/cosmosisv2/env/lib/python3.9/site-packages/cosmosis/ -std=c99 -I /users/PCON0003/porredon/cosmosisv2/env/include  -o test_limber limber.o interp2d.o utils.o  test_limber.o -L/users/PCON0003/porredon/cosmosisv2/env/lib -lgsl -lgslcblas -lcosmosis -lm -L/users/PCON0003/porredon/cosmosisv2/env/lib/python3.9/site-packages/cosmosis/datablock -Wl,-rpath,/users/PCON0003/porredon/cosmosisv2/env/lib/python3.9/site-packages/cosmosis/datablock  -L. -llimber
/users/PCON0003/porredon/cosmosisv2/env/bin/ld: warning: libz.so.1, needed by /apps/gnu/8.4.0/lib64/libgfortran.so.5, not found (try using -rpath or -rpath-link)
/users/PCON0003/porredon/cosmosisv2/env/bin/ld: /apps/gnu/8.4.0/lib64/libgfortran.so.5: undefined reference to `memcpy@GLIBC_2.14'
/users/PCON0003/porredon/cosmosisv2/env/bin/ld: /users/PCON0003/porredon/cosmosisv2/env/lib/python3.9/site-packages/cosmosis/datablock/libcosmosis.so: undefined reference to `_gfortran_os_error_at@GFORTRAN_10'
/users/PCON0003/porredon/cosmosisv2/env/bin/ld: /apps/gnu/8.4.0/lib64/libstdc++.so.6: undefined reference to `aligned_alloc@GLIBC_2.16'
/users/PCON0003/porredon/cosmosisv2/env/bin/ld: /apps/gnu/8.4.0/lib64/libgfortran.so.5: undefined reference to `secure_getenv@GLIBC_2.17'
/users/PCON0003/porredon/cosmosisv2/env/bin/ld: /apps/gnu/8.4.0/lib64/libgfortran.so.5: undefined reference to `clock_gettime@GLIBC_2.17'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:23: test_limber] Error 1
make[2]: Leaving directory '/users/PCON0003/porredon/cosmosisv2/cosmosis-standard-library/shear/limber'
make[1]: *** [/users/PCON0003/porredon/cosmosisv2/env/lib/python3.9/site-packages/cosmosis/config/subdirs.mk:11: all] Error 2
make[1]: Leaving directory '/users/PCON0003/porredon/cosmosisv2/cosmosis-standard-library/shear'
make: *** [/users/PCON0003/porredon/cosmosisv2/env/lib/python3.9/site-packages/cosmosis/config/subdirs.mk:11: all] Error 2
make: Leaving directory '/users/PCON0003/porredon/cosmosisv2/cosmosis-standard-library'

Cosmosis doesn't look for the right theory predictions

Hi,

I would like to use CosmoSIS to fit projected measurements (wg+/wgg).
I created a new theory block (tested with the test module) and modified type_table.txt and twopoint.py to load the correct datasets from the .fits input.
I have defined in twopoint.py:
galaxy_position_red = "GPS"
galaxy_shear_plus_red = "G+S"
for redshift space quantities, which are passed to type_table as :
galaxy_position_red galaxy_shear_plus_red section_name theta bin_{0}_{1}

Then I call the module likelihood/2pt/2pt_like.py.
This module does give me the data and covariance measurements that I want,
but it does not give the correct theory predictions. I have tried giving the 2pt module the options
x_name, y_name, x_section, y_section (with x_section = y_section = the section_name defined in type_table.txt), but it looks like CosmoSIS doesn't use them at all. In order to perform the fit,
I had to change the Python function extract_theory_point from the gaussian_likelihood.py module by hand.

Should I pass the options x_name, y_name, x_section, y_section to another CosmoSIS module to make it work?
Thanks,

Romain

CAMB interface: accurate_massive_neutrinos argument name mismatch

I was attempting to set the optional camb setting of accurate_massive_neutrinos to T, but got the error

TypeError: set_matter_power() got an unexpected keyword argument 'accurate_massive_neutrinos'

Looking through the CAMB code that this gets passed to, it looks like the relevant function in camb.py takes an argument with a slightly different name, accurate_massive_neutrino_transfers, and things seem to work if I change the name of the optional parameter in camb_interface.py and the ini files to match.
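For reference, the keyword that pycamb itself appears to accept is the longer one reported above; an illustrative direct call (the argument values are placeholders):

import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.12, mnu=0.06)
# Note the longer keyword name compared with the CosmoSIS ini option:
pars.set_matter_power(redshifts=[0.0], kmax=10.0,
                      accurate_massive_neutrino_transfers=True)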

Manual installation on cluster

Hello,
I am manually installing cosmosis2 on a cluster, following these directions. When doing pip install cosmosis, LAPACK_LINK seemed not to be taken into account, so LAPACK wasn't found; instead another variable called LAPACKLIB was pointing somewhere else (in conda, I think, where I didn't have LAPACK). We managed to get around this by adding LDFLAGS pointing to the correct LAPACK path, but it might be worth fixing.

There is also a typo, "source cosomosis-configure", which might be an issue if someone copies and pastes it without checking!

developer installation for the new version ?

Hi Joe,

I extended the old version of CosmoSIS to compute the galaxy power spectrum and bispectrum. I wanted to upgrade to the new version, but I noticed that in the new version the main cosmosis repository is not downloaded. For my extensions of CosmoSIS, I need to add C functions that read and write specific formats to the data block. In the new version, how can I get access to the c_datablock.cc file? Should I just download the main cosmosis repository from GitHub in addition to the cosmosis-standard-library, or is there a developer installation version?

Thanks in advance,
Azadeh

Try to turn on cobaya but got an error

Dear Joezuntz,
I met an error when running a metropolis sampler with CosmoSIS.
I installed CosmoSIS using conda, following the steps you present at https://github.com/joezuntz/cosmosis, and I installed cobaya using pip install cobaya --upgrade after activating ./conda-env.
Then I tried to run des-y1.ini, but changed "sampler = test" to "sampler = metropolis", and in the [metropolis] section I set "cobaya = T". But I get this response:

Using default covariance 1% of param widths
Using the Cobaya proposal
Pipeline ran okay.
Likelihood 2pt = 5237.371107905489
Likelihood total = 5237.371107905489
Traceback (most recent call last):
  File "/home/saberofqft/conda-env/bin/cosmosis", line 4, in <module>
    status = cosmosis.main.main()
  File "/home/saberofqft/conda-env/lib/python3.9/site-packages/cosmosis/main.py", line 360, in main
    return run_cosmosis(args,pool)
  File "/home/saberofqft/conda-env/lib/python3.9/site-packages/cosmosis/main.py", line 293, in run_cosmosis
    sampler.config()
  File "/home/saberofqft/conda-env/lib/python3.9/site-packages/cosmosis/samplers/metropolis/metropolis_sampler.py", line 78, in config
    self.sampler = metropolis.MCMC(start, posterior, covmat,
  File "/home/saberofqft/conda-env/lib/python3.9/site-packages/cosmosis/samplers/metropolis/metropolis.py", line 48, in __init__
    self.proposal = cobaya_proposal.CobayaProposalWrapper(
TypeError: __init__() missing 1 required positional argument: 'random_state'

Can you help me? Thank you very much!

Error using polychord on nersc

Hi everyone!

I am trying to run some CosmoSIS chains on NERSC using the polychord sampler, but I have run into this error message:

/global/cfs/cdirs/des/zuntz/cosmosis-global/env-1/lib/python3.9/site-packages/cosmosis/samplers/polychord/polychord_src/libchord_mpi.so: cannot open shared object file: No such file or directory PolyChord could not be loaded.

In fact, there is no libchord_mpi.so file in that directory, only libchord.so.

Could anyone help me with this issue, please?

PS: I have used the command source $CFS/des/zuntz/cosmosis-global/setup-cosmosis-nersc to setup the env, according to the documentation: https://cosmosis.readthedocs.io/en/latest/intro/installation.html?highlight=nersc#nersc

liblimber.so not found

Hi Joe,

Following the instructions you gave in #15, I managed to successfully install my modified version of CosmoSIS (without conda) on a Linux cluster. However, when trying to run the code, I get the error OSError: liblimber.so: cannot open shared object file: No such file or directory. Checking the limber directory, the liblimber.so file is generated, so I don't understand the origin of the error. To give some context: in my added modules, I link to liblimber to access the 2D interpolation functions (as described in limber). Strangely enough, on my laptop (macOS) the code runs fine, and I also didn't have any issue running the old version on the cluster, so I am quite puzzled by what is going on.

Is there anything additional needed on the cluster to use the limber module in the new version of cosmosis?

Thanks.

Problem Installing Cosmosis

Hi Joe,

I am currently working on installing CosmoSIS on my machine. I was able to install it and the standard library, and I can check the CosmoSIS version (2.0.8), but moving forward, trying to work on the demos, I was not able to run them, especially "cosmosis examples/planck.ini":
I am getting "permission denied". I don't know if I have done something wrong, but I must say I have been following the installation procedure from the start.

CMB shift parameter

The old camb interface calculated the CMB shift parameter. We should reinstate that.
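For reference, the usual definition is R = sqrt(Omega_m H0^2) * D_C(z_*) / c, with D_C the comoving distance to recombination. A hedged sketch of how a reinstated module might compute it from quantities assumed to be in the data block (the output name, the fixed z_*, and the requirement that the background grid reaches z_* are all assumptions):

import numpy as np
from cosmosis.datablock import names

C_KM_S = 299792.458  # km/s

def execute(block, config):
    z = block[names.distances, "z"]
    d_a = block[names.distances, "d_a"]            # angular diameter distance, Mpc
    h0 = block[names.cosmological_parameters, "h0"]
    omega_m = block[names.cosmological_parameters, "omega_m"]
    z_star = 1090.0                                # or read CAMB's z_* if stored
    # Comoving distance to recombination, assuming flat geometry and that the
    # background grid extends beyond z_star
    d_c_star = np.interp(z_star, z, (1.0 + z) * d_a)
    # Shift parameter R = sqrt(Omega_m) * H0 * D_C(z_*) / c (dimensionless)
    r_shift = np.sqrt(omega_m) * (100.0 * h0) * d_c_star / C_KM_S
    block[names.distances, "cmbshift"] = r_shift   # assumed output name
    return 0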

Installation error on nersc - missing gsl?

I also tried to install on nersc, using the latest version of the conda env:
conda create -p ./env --clone $CFS/des/zuntz/cosmosis-global/env-latest

I got this error when trying to make cosmosis-standard-library:

gcc -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /global/cscratch1/sd/porredon/cosmosisv2/env/include -fPIC -O2 -isystem /global/cscratch1/sd/porredon/cosmosisv2/env/include -fPIC -Isrc -I/global/common/software/spackecp/cori/e4s-22.02/software/cray-cnl7-haswell/gcc-11.2.0/gsl-2.7-dnj35q4arliuhq6wm7lucflfn3hqknaf/include -I../src/ -I/global/cscratch1/sd/porredon/cosmosisv2/env/include/python3.9 -c src/euclidemu2.cpp -o build/temp.linux-x86_64-cpython-39/src/euclidemu2.o -std=c++11 "-D PRINT_FLAG=0" "-D PATH_TO_EE2_DATA_FILE=\"/global/cscratch1/sd/porredon/cosmosisv2/cosmosis-standard-library/structure/EuclidEmulator2/ee2_bindata.dat\""
g++ -shared -Wl,--allow-shlib-undefined -Wl,-rpath,/global/cscratch1/sd/porredon/cosmosisv2/env/lib -Wl,-rpath-link,/global/cscratch1/sd/porredon/cosmosisv2/env/lib -L/global/cscratch1/sd/porredon/cosmosisv2/env/lib -Wl,--allow-shlib-undefined -Wl,-rpath,/global/cscratch1/sd/porredon/cosmosisv2/env/lib -Wl,-rpath-link,/global/cscratch1/sd/porredon/cosmosisv2/env/lib -L/global/cscratch1/sd/porredon/cosmosisv2/env/lib build/temp.linux-x86_64-cpython-39/src/cosmo.o build/temp.linux-x86_64-cpython-39/src/emulator.o build/temp.linux-x86_64-cpython-39/src/euclidemu2.o -lgsl -lgslcblas -o build/lib.linux-x86_64-cpython-39/euclidemu2.cpython-39-x86_64-linux-gnu.so "-L /global/common/software/spackecp/cori/e4s-22.02/software/cray-cnl7-haswell/gcc-11.2.0/gsl-2.7-dnj35q4arliuhq6wm7lucflfn3hqknaf/lib -lgslcblas -lgsl"
/usr/bin/ld: cannot find -lgsl
/usr/bin/ld: cannot find -lgslcblas
collect2: error: ld returned 1 exit status

But I managed to solve it by doing conda install -c conda-forge gsl

Fisher Matrix Project

Following up with @Gabriel-Rodrigues1

It is a clustering and weak lensing survey; it's actually J-PAS. Since I am interested in the impact that massive neutrinos have, especially on the growth of structure, I am looking at the matter power spectrum. From what I understand, the code can provide both the linear and nonlinear regimes; is that correct?
And also, can I choose the Fisher matrix parameters?

Yes - the camb module will generate linear and nonlinear matter power spectra. You have a few choices about which options to use, in the parameter file. You could have a look at the LSST example for some starting points, and documentation on the readthedocs page.

You can choose which parameters to make the FM over by changing the values file. You can fix any parameters you don't want included.

Fisher matrices can sometimes be difficult with neutrino mass because the fiducial value is usually very close to the edge of the parameter space at zero. You could try, though you might need to switch to a different starting value.

class temporary folder error on "make"

I am trying to install CosmoSIS on a cluster computer following the instructions outlined here. When I get to the "make" step, I receive the following error:

ERROR: Command errored out with exit status 1:
 command: venv-directory/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-wrnr0iwm/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-wrnr0iwm/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-w1x9c6v9
     cwd: /tmp/pip-req-build-wrnr0iwm/
Complete output (5 lines):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-req-build-wrnr0iwm/setup.py", line 34, in <module>
    with open(os.path.join(include_folder, 'common.h'), 'r') as v_file:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-req-build-wrnr0iwm/../include/common.h'
----------------------------------------

Does anyone know what this means and how I can fix it?

Thank you!

Possible error in implementation of dynesty and nautilus

I believe both dynesty and nautilus may not be correctly implemented/interfaced: dynesty seems to be passed the posterior function when it expects the likelihood function. I then modeled the implementation of nautilus in CosmoSIS after dynesty's, likely copying this mistake. Sorry for not catching that earlier.
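For context, dynesty's documented interface takes the log-likelihood and a separate unit-cube prior transform, so handing it the full posterior effectively applies the prior twice; a minimal standalone example of the expected split:

import numpy as np
import dynesty

def loglike(theta):
    # Log-likelihood ONLY -- the prior must not be added here
    return -0.5 * np.sum(theta**2)

def prior_transform(u):
    # Map the unit cube to the prior, e.g. uniform on [-10, 10] per parameter
    return 20.0 * u - 10.0

ndim = 3
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim)
sampler.run_nested()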

Out of memory on supercomputer

Hi, Joe!

I tried to run an MCMC in CosmoSIS with this configuration:

[runtime]
sampler = emcee

[test]
save_dir = output/gammat_des_case5
fatal_errors=T

[output]
format=text
filename=output/output_gammat_des_case5.txt

[emcee]
walkers = 50
samples = 10000
nsteps = 2


; This parameter is used several times in this file, so is
; put in the DEFAULT section and is referenced below as %(2PT_FILE)s
[DEFAULT]
;2PT_FILE = likelihood/des-y3/2pt_NG_final_2ptunblind_02_24_21_wnz_covupdate.v2.fits
;2PT_FILE = /Users/nemas/Documents/PhD/task5_shear_ratio_module/make_input/cosmosis_input_desy3.fits
2PT_FILE = /fred/oz073/nimas/cosmosis1/cosmosis-standard-library/phd/data/cosmosis_input_desy3.fits

[pipeline]
modules =  consistency 
           camb 
           sigma8_rescale 
           fast_pt
           fits_nz 
           source_photoz_bias
           IA 
           pk_to_cl
           add_intrinsic
           2pt_gal_shear
           shear_m_bias 
           2pt_like

quiet=F
timing=F
debug=F
;priors = examples/des-y3-priors.ini
priors = phd/prior_shear_des_case4.ini
values = phd/values_shear_des_case5.ini
extra_output = cosmological_parameters/Omega_m cosmological_parameters/S8 cosmological_parameters/sigma_8 cosmological_parameters/sigma_12 data_vector/2pt_chi2


; It's worth switching this to T when sampling using multinest, polychord,
; or other samplers that can take advantage of differences in calculation speeds between
; different parameters.
fast_slow = F
first_fast_module = shear_m_bias
; For some use cases this might be faster:
;first_fast_module=lens_photoz_width


[consistency]
file = utility/consistency/consistency_interface.py

[camb]
file = boltzmann/camb/camb_interface.py
mode = all
lmax = 2500          ;max ell to use for cmb calculation
feedback=3         ;amount of output to print
AccuracyBoost=1.1 ;CAMB accuracy boost parameter
do_tensors = T
do_lensing = T
NonLinear = pk
halofit_version = takahashi
zmin_background = 0.
zmax_background = 4.
nz_background = 401
kmin=1e-4
kmax = 50.0
kmax_extrapolate = 500.0
nk=700

[sigma8_rescale]
file = utility/sample_sigma8/sigma8_rescale.py


[fits_nz]
file = number_density/load_nz_fits/load_nz_fits.py
nz_file = %(2PT_FILE)s
data_sets = lens source
prefix_section = T
prefix_extension = T



[source_photoz_bias]
file = number_density/photoz_bias/photoz_bias.py
mode = additive
sample = nz_source
bias_section = wl_photoz_errors
interpolation = linear

[fast_pt]
file = structure/fast_pt/fast_pt_interface.py
do_ia = T
k_res_fac = 0.5
verbose = F

[IA]
file = intrinsic_alignments/tatt/tatt_interface.py
sub_lowk = F
do_galaxy_intrinsic = F
ia_model = tatt

[pk_to_cl_gg]
file = structure/projection/project_2d.py
lingal-lingal = lens-lens
do_exact = lingal-lingal
do_rsd = True
ell_min_linspaced = 1
ell_max_linspaced = 4
n_ell_linspaced = 5
ell_min_logspaced = 5.
ell_max_logspaced = 5.e5
n_ell_logspaced = 80
limber_ell_start = 200
ell_max_logspaced=1.e5
auto_only=lingal-lingal
sig_over_dchi_exact = 3.5

[pk_to_cl]
file = structure/projection/project_2d.py
ell_min_logspaced = 0.1
ell_max_logspaced = 5.0e5
n_ell_logspaced = 100 
shear-shear = source-source  ;uncomment
shear-intrinsic = source-source
intrinsic-intrinsic = source-source
intrinsicb-intrinsicb=source-source
lingal-shear = lens-source
lingal-intrinsic = lens-source
; lingal-magnification = lens-lens
; magnification-shear = lens-source
; magnification-magnification = lens-lens
; magnification-intrinsic = lens-source 
verbose = F
get_kernel_peaks = F
sig_over_dchi = 20. 
shear_kernel_dchi = 10. 


[add_intrinsic]
file=shear/add_intrinsic/add_intrinsic.py
;shear-shear=T
position-shear=T
perbin=F


[2pt_gal_shear]
file = shear/cl_to_xi_fullsky/cl_to_xi_interface.py
ell_max = 40000
xi_type='02'
theta_file=%(2PT_FILE)s
bin_avg = T

[shear_m_bias]
file = shear/shear_bias/shear_m_bias.py
m_per_bin = True
cl_section = shear_xi_plus shear_xi_minus
cross_section = galaxy_shear_xi
verbose = F

[add_point_mass]
file=shear/point_mass/add_gammat_point_mass.py
add_togammat = False
use_fiducial = True
sigcrit_inv_section = sigma_crit_inv_lens_source

[2pt_like]
file = likelihood/2pt/2pt_point_mass/2pt_point_mass.py
;do_pm_marg = True
;do_pm_sigcritinv = True
;sigma_a = 10000.0
;no_det_fac = False
include_norm = False
data_file = %(2PT_FILE)s
data_sets = gammat
make_covariance=F
covmat_name=COVMAT

angle_range_gammat_1_1 = 30.00 300.0
angle_range_gammat_1_2 = 30.00 300.0
angle_range_gammat_1_3 = 30.00 300.0
angle_range_gammat_1_4 = 30.00 300.0
;angle_range_gammat_1_5 = 30.00 300.0
angle_range_gammat_2_1 = 15.76 300.0
angle_range_gammat_2_2 = 15.76 300.0
angle_range_gammat_2_3 = 15.76 300.0
angle_range_gammat_2_4 = 15.76 300.0
;angle_range_gammat_2_5 = 15.76 300.0
angle_range_gammat_3_1 = 11.07 300.0
angle_range_gammat_3_2 = 11.07 300.0
angle_range_gammat_3_3 = 11.07 300.0
angle_range_gammat_3_4 = 11.07 300.0
;angle_range_gammat_3_5 = 11.07 300.0
angle_range_gammat_4_1 = 8.75 300.0
angle_range_gammat_4_2 = 8.75 300.0
angle_range_gammat_4_3 = 8.75 300.0
angle_range_gammat_4_4 = 8.75 300.0
;angle_range_gammat_4_5 = 8.75 300.0

; we put these in a separate file because they are long
;%include examples/des-y3-scale-cuts.ini

; we put these in a separate file because they are long
;%include examples/des-y3-scale-cuts.ini

[shear_ratio_like]
file = likelihood/des-y3/shear_ratio/shear_ratio_likelihood.py
;data_file = /Users/nemas/Documents/PhD/task5_shear_ratio_module/make_input/shear_ratio_desy3.pkl
data_file = /fred/oz073/nimas/cosmosis1/cosmosis-standard-library/phd/data/shear_ratio_desy3.pkl
theta_min_1 = 6.0 3.2 2.2 1.7
theta_min_2 = 6.0 3.2 2.2 1.7
theta_min_3 = 6.0 3.2 2.2 1.7
theta_max = 60.0 31.5 22.1 17.5
include_norm = F

and on the supercomputer I set up the sbatch file like this:

#!/bin/bash
#
#SBATCH --job-name=cosmosis_des_gammat_case5
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=168:00:00
#SBATCH --mem=100g
#SBATCH --mail-type=FAIL [email protected]
#SBATCH --chdir=/fred/oz073/nimas/cosmosis1/


source ./env1/bin/activate
source cosmosis-configure
cd cosmosis-standard-library



srun time cosmosis phd/gammat_des_case5.ini



but after 3-4 days I got an "out of memory" notification. What actually happened there, and did I get something wrong in my setup?

Nautilus and blobs

Thanks so much for helping to implement nautilus. I tested it on some DES likelihoods and it works fine for the most part. There is one minor, non-urgent issue in the implementation: if only one blob is returned with the likelihood, i.e. only the prior, nautilus.posterior(return_blobs=True) returns a one-dimensional array for the blobs rather than a two-dimensional one. This leads to a crash, since CosmoSIS implicitly assumes the array is two-dimensional.
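A possible defensive fix on the CosmoSIS side (a sketch, not the actual patch; it assumes sampler is the nautilus Sampler object and that posterior(return_blobs=True) returns the blob array as the last element, as described above):

import numpy as np

*rest, blobs = sampler.posterior(return_blobs=True)
blobs = np.asarray(blobs)
# With only one blob per sample (e.g. just the prior), nautilus hands back a
# 1-D array of length n_samples; reshape to (n_samples, 1) so downstream
# code can always index blobs[:, i]
if blobs.ndim == 1:
    blobs = blobs.reshape(-1, 1)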

Issue with initial make

I'm attempting to install cosmosis on Midway as part of DES.

Installing via conda seems to have worked fine, with all the necessary packages installed; the issue comes from the make all command. The boltzmann/class/class_v3.2.0/Makefile has the command cd external/distortions && gunzip --keep --force Greens_data.dat.gz, which produces the following error:

gzip: unrecognized option '--keep'
Try `gzip --help' for more information.

This can be fixed by replacing that command with: cd external/distortions && gunzip --force -c Greens_data.dat.gz > Greens_data.dat.

However, we now get the following error:

make[2]: Entering directory '/project2/rkessler/PRODUCTS/COSMOSIS/cosmosis-standard-library/shear/limber'
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/x86_64-conda-linux-gnu-cc -O3 -g -fPIC  -I/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/ -std=c99 -I /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -c -o limber.o limber.c
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/x86_64-conda-linux-gnu-cc -O3 -g -fPIC  -I/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/ -std=c99 -I /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -c -o interp2d.o interp2d.c
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/x86_64-conda-linux-gnu-cc -O3 -g -fPIC  -I/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/ -std=c99 -I /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -c -o utils.o utils.c
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/x86_64-conda-linux-gnu-cc -shared -o /project2/rkessler/PRODUCTS/COSMOSIS/cosmosis-standard-library/shear/limber/liblimber.so -L/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib -lgsl -lgslcblas -lcosmosis -lm -L/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/datablock  limber.o interp2d.o utils.o
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/x86_64-conda-linux-gnu-cc -O3 -g -fPIC  -I/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/ -std=c99 -I /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -c -o test_limber.o test_limber.c
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/x86_64-conda-linux-gnu-cc -O3 -g -fPIC  -I/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/ -std=c99 -I /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/include  -o test_limber limber.o interp2d.o utils.o  test_limber.o -L/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib -lgsl -lgslcblas -lcosmosis -lm -L/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/datablock -L. -llimber
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/../lib/gcc/x86_64-conda-linux-gnu/9.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: /lib64/libpthread.so.0: undefined reference to `memcpy@GLIBC_2.14'
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/../lib/gcc/x86_64-conda-linux-gnu/9.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: /lib64/libpthread.so.0: undefined reference to `__h_errno@GLIBC_PRIVATE'
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/../lib/gcc/x86_64-conda-linux-gnu/9.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: /lib64/libpthread.so.0: undefined reference to `__mktemp@GLIBC_PRIVATE'
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/../lib/gcc/x86_64-conda-linux-gnu/9.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: /lib64/libpthread.so.0: undefined reference to `__libc_secure_getenv@GLIBC_PRIVATE'
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/../lib/gcc/x86_64-conda-linux-gnu/9.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: /lib64/libpthread.so.0: undefined reference to `__madvise@GLIBC_PRIVATE'
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/../lib/gcc/x86_64-conda-linux-gnu/9.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: /lib64/libpthread.so.0: undefined reference to `__getrlimit@GLIBC_PRIVATE'
/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/../lib/gcc/x86_64-conda-linux-gnu/9.4.0/../../../../x86_64-conda-linux-gnu/bin/ld: /lib64/libpthread.so.0: undefined reference to `__ctype_init@GLIBC_PRIVATE'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:23: test_limber] Error 1
make[2]: Leaving directory '/project2/rkessler/PRODUCTS/COSMOSIS/cosmosis-standard-library/shear/limber'
make[1]: *** [/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/config/subdirs.mk:11: all] Error 2
make[1]: Leaving directory '/project2/rkessler/PRODUCTS/COSMOSIS/cosmosis-standard-library/shear'
make: *** [/project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/lib/python3.9/site-packages/cosmosis/config/subdirs.mk:11: all] Error 2
make: Leaving directory '/project2/rkessler/PRODUCTS/COSMOSIS/cosmosis-standard-library'

which ld returns /project2/rkessler/PRODUCTS/miniconda/envs/cosmosis/bin/ld (which is the cosmosis conda environment).

ld --version returns:

GNU ld (GNU Binutils) 2.36.1
Copyright (C) 2021 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) a later version.
This program has absolutely no warranty.
