
simonsobs / lat_mflike


Multifrequency Likelihood for SO Large Aperture Telescope

Home Page: https://lat-mflike.readthedocs.io

License: Other

Language: Python 100.00%

Topics: cmb, cobaya, likelihood

lat_mflike's People

Contributors

adrien-laposta, beringueb, cmbant, damonge, dependabot[bot], mabitbol, mgerbino, paganol, sgiardie, thibautlouis, timothydmorton, umbnat92, xgarrido


lat_mflike's Issues

About cl_cib_Choi2020.dat file

Hi,

I hope you are all well and safe. I am trying to run a likelihood analysis with v0.7, and the likelihood keeps asking for the cl_cib_Choi2020.dat file, which I can't find anywhere (it is not part of fgspectra). Is this file public? If not, is it possible to run any SO analysis (with the v0.7 sims)?

Best
Vivian

Planck likelihood

Just some thoughts on mflike-plik:

  • I needed to add python_path to the yaml to get it to work (there doesn't seem to be a setup.py to actually install the module).

  • You could inherit the likelihood from InstallableLikelihood and attach the .zip bundle to a release as an asset; it could then be unzipped automatically as part of installation. This is how e.g. the latest CAMspec module did it (see the sketch at the end of this issue).

  • Speed reported by the test yaml seems very slow (about the same as CAMB; it should be at least 100x faster!).

  • No unit test to check that the result is actually as expected.

  • It would be helpful to add attributes to PlikMFLike to avoid apparently undefined variables (this would be the cobaya built-in way of doing something like expected_params, which looks as though it's not actually used at the moment).

  • "radio" appears to be calculated but not used.

  • params_TT and PlikMFLike.yaml seem to repeat the parameter settings, but with different values. I'm not sure fixed refs as defaults in PlikMFLike.yaml are a very good idea (e.g. if they are simply inherited rather than overridden, MPI runs may start all chains at the same point in parameter space).

  • Including varying polarization parameters in PlikMFLike.yaml probably means a TT-only likelihood cannot be run efficiently without explicitly redefining them all as fixed (though I don't know whether you actually aim to support TT only?).

  • PowerSpectraAndCorrelation seems to refer to self.power_spectra, which appears not to exist, and argss is undefined (both reported immediately by PyCharm).

It would be great if it were as fast as the clik likelihood and could be used instead of it! (e.g. it would work on Windows and avoid clik install nightmares). You could also consider making alternative data files based on the clik data that exactly reproduce the clik results?
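
Something like the following minimal sketch could implement the InstallableLikelihood idea above; the base-class import path depends on the cobaya version, and the release URL and data path are placeholders rather than real assets:

```python
# Hypothetical sketch: attach the data bundle to a GitHub release and let
# cobaya's installer fetch and unpack it. URL and paths below are placeholders.
from cobaya.likelihoods.base_classes import InstallableLikelihood


class PlikMFLike(InstallableLikelihood):
    install_options = {
        "download_url": "https://github.com/simonsobs/LAT_MFLike/releases/download/vX.Y/plik_bundle.zip",
        "data_path": "plik_mflike",
    }

    def initialize(self):
        # data files can then be read from the packages path set up by cobaya-install
        ...
```

With something like this in place, cobaya-install (or installing from the input yaml) would download and unzip the bundle automatically.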

unit test failure

Just checking out the updates to this to see if I can run the test, and I get the following:

(cobaya) Timothys-MacBook-Pro:tests tmorton$ pytest -v test_mflike.py
============================================ test session starts ============================================
platform darwin -- Python 3.7.5, pytest-5.2.4, py-1.8.1, pluggy-0.13.1 -- /Users/tmorton/miniconda3/envs/cobaya/bin/python
cachedir: .pytest_cache
rootdir: /Users/tmorton/repositories/LAT_MFLike
plugins: flaky-3.6.1
collected 2 items

test_mflike.py::MFLikeTest::test_cobaya FAILED                                                        [ 50%]
test_mflike.py::MFLikeTest::test_mflike FAILED                                                        [100%]

================================================= FAILURES ==================================================
__________________________________________ MFLikeTest.test_cobaya ___________________________________________

self = <test_mflike.MFLikeTest testMethod=test_cobaya>

    def test_cobaya(self):
        info = {"likelihood":
                    {"mflike.MFLike":
                         {"input_file": pre + "00000.fits",
                          "cov_Bbl_file": pre + "w_covar_and_Bbl.fits"}},
                "theory":
                    {"camb":
                         {"extra_args":
                              {"lens_potential_accuracy": 1}}},
                "params": cosmo_params,
                "modules": packages_path}
        from cobaya.model import get_model
>       model = get_model(info)

test_mflike.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/model.py:108: in get_model
    stop_at_error=info.get("stop_at_error", False))
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/model.py:145: in __init__
    packages_path=packages_path, timing=timing)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/likelihood.py:281: in __init__
    name=name))
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/likelihood.py:77: in __init__
    standalone=standalone)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/theory.py:62: in __init__
    standalone=standalone)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/component.py:92: in __init__
    self.initialize()
../mflike.py:52: in initialize
    self.prepare_data()
../mflike.py:92: in prepare_data
    s = sacc.Sacc.load_fits(input_fname)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/sacc/sacc.py:768: in load_fits
    tracers = BaseTracer.from_tables(tracer_tables)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/sacc/tracers.py:150: in from_tables
    tracers.update(subcls.from_tables(subcls_table_list))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

cls = <class 'sacc.tracers.NuMapTracer'>
table_list = [<Table length=2>
  nu  bpss_nu
int64  int64
----- -------
  149     149
  150     150, <Table length=10000>
 ell  be...     1.0
 9993      1.0
 9994      1.0
 9995      1.0
 9996      1.0
 9997      1.0
 9998      1.0
 9999      1.0, ...]

    @classmethod
    def from_tables(cls, table_list):
        tracers = {}

        # Collect beam and bandpass tables describing the same tracer
        tr_tables = {}
        for table in table_list:
            # Read name and table type
            name = table.meta['SACCNAME']
            quantity = table.meta.get('SACCQTTY', 'generic')
            tabtyp = table.meta['EXTNAME'].split(':')[-1]
            if tabtyp not in ['bandpass', 'beam']:
>               raise KeyError("Unknown table type " + table.meta['EXTNAME'])
E               KeyError: 'Unknown table type tracer:NuMap:LAT_93_s0:bpss'

../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/sacc/tracers.py:457: KeyError
------------------------------------------- Captured stdout call --------------------------------------------

================================================================================
likelihood:mflike.MFLike
================================================================================


[model] *WARNING* Ignored blocks/options: ['modules']
[tools] *WARNING* *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
[camb] Importing *local* CAMB from /Users/tmorton/cosmology/modules/code/CAMB
[tools] *WARNING* *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
[tools] *WARNING* *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
[mflike.mflike] Initialising.
--------------------------------------------- Captured log call ---------------------------------------------
WARNING  model:model.py:97 Ignored blocks/options: ['modules']
WARNING  tools:tools.py:944 *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
INFO     camb:camb.py:225 Importing *local* CAMB from /Users/tmorton/cosmology/modules/code/CAMB
WARNING  tools:tools.py:944 *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
WARNING  tools:tools.py:944 *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
INFO     mflike.mflike:mflike.py:31 Initialising.
__________________________________________ MFLikeTest.test_mflike ___________________________________________

self = <test_mflike.MFLikeTest testMethod=test_mflike>

    def test_mflike(self):
        import camb
        camb_cosmo = cosmo_params.copy()
        camb_cosmo.update({"lmax": 9000, "lens_potential_accuracy": 1})
        pars = camb.set_params(**camb_cosmo)
        results = camb.get_results(pars)
        powers = results.get_cmb_power_spectra(pars, CMB_unit="muK")
        cl_dict = {k: powers["total"][:, v]
                   for k, v in {"tt": 0, "ee": 1, "te": 3}.items()}
        for select, chi2 in chi2s.items():
            from mflike import MFLike
            my_mflike = MFLike({"packages_path": packages_path,
                                "input_file": pre + "00000.fits",
                                "cov_Bbl_file": pre + "w_covar_and_Bbl.fits",
                                "defaults": {"polarizations":
                                                 select.upper().split("-"),
                                             "scales": {"TT": [2, 6002],
                                                        "TE": [2, 6002],
                                                        "ET": [2, 6002],
                                                        "EE": [2, 6002]},
>                                            "symmetrize": False}})

test_mflike.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/likelihood.py:77: in __init__
    standalone=standalone)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/theory.py:62: in __init__
    standalone=standalone)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/cobaya/component.py:92: in __init__
    self.initialize()
../mflike.py:52: in initialize
    self.prepare_data()
../mflike.py:92: in prepare_data
    s = sacc.Sacc.load_fits(input_fname)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/sacc/sacc.py:768: in load_fits
    tracers = BaseTracer.from_tables(tracer_tables)
../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/sacc/tracers.py:150: in from_tables
    tracers.update(subcls.from_tables(subcls_table_list))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

cls = <class 'sacc.tracers.NuMapTracer'>
table_list = [<Table length=2>
  nu  bpss_nu
int64  int64
----- -------
  149     149
  150     150, <Table length=10000>
 ell  be...     1.0
 9993      1.0
 9994      1.0
 9995      1.0
 9996      1.0
 9997      1.0
 9998      1.0
 9999      1.0, ...]

    @classmethod
    def from_tables(cls, table_list):
        tracers = {}

        # Collect beam and bandpass tables describing the same tracer
        tr_tables = {}
        for table in table_list:
            # Read name and table type
            name = table.meta['SACCNAME']
            quantity = table.meta.get('SACCQTTY', 'generic')
            tabtyp = table.meta['EXTNAME'].split(':')[-1]
            if tabtyp not in ['bandpass', 'beam']:
>               raise KeyError("Unknown table type " + table.meta['EXTNAME'])
E               KeyError: 'Unknown table type tracer:NuMap:LAT_93_s0:bpss'

../../../../miniconda3/envs/cobaya/lib/python3.7/site-packages/sacc/tracers.py:457: KeyError
------------------------------------------- Captured stdout call --------------------------------------------
[install] Installing external packages at '/var/folders/xg/b1jjg34910d6ysqb6rmkk_680000gq/T/LAT_packages'

================================================================================
likelihood:mflike.MFLike
================================================================================


[install] External component already installed.
[install] Doing nothing.
[install] The installation path has been written in the global config file.
[tools] *WARNING* *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
[mflike.mflike] Initialising.
--------------------------------------------- Captured log call ---------------------------------------------
INFO     install:install.py:46 Installing external packages at '/var/folders/xg/b1jjg34910d6ysqb6rmkk_680000gq/T/LAT_packages'
INFO     install:install.py:93 External component already installed.
INFO     install:install.py:99 Doing nothing.
INFO     install:install.py:143 The installation path has been written in the global config file.
WARNING  tools:tools.py:944 *DEPRECATION*: The env var 'COBAYA_MODULES' will be deprecated in favor of 'COBAYA_PACKAGES_PATH' in the next version. Please, use that one instead.
INFO     mflike.mflike:mflike.py:31 Initialising.
============================================= warnings summary ==============================================
mflike/tests/test_mflike.py::MFLikeTest::test_cobaya
  /Users/tmorton/miniconda3/envs/cobaya/lib/python3.7/site-packages/ipykernel/iostream.py:14: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
    from imp import lock_held as import_lock_held

-- Docs: https://docs.pytest.org/en/latest/warnings.html
======================================= 2 failed, 1 warnings in 4.38s =======================================

Undefined variables

I think someone previously mentioned the cobaya "feature" that variables defined in the yaml appear undefined in Python (e.g. in PyCharm/linters). There is quite a good solution to this, which is to use Python annotations; I've added these to the base code for a forthcoming update, and they are also easy to add to external likelihoods. See e.g. https://github.com/CobayaSampler/cobaya/blob/new_speed_blocking/cobaya/likelihoods/_base_classes/_des_prototype.py#L153
(in this branch you can also just define the variables as class attributes; however, to avoid confusion, you can't have both class attributes and yaml in the same class)
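
For illustration, a minimal sketch of the annotation pattern applied to an external likelihood; the attribute names below just mirror options that already appear in the MFLike yaml/tests and are not exhaustive:

```python
# Hypothetical sketch: annotate yaml-defined options as class attributes so that
# linters know they exist; cobaya still fills in their values from the yaml.
from typing import Optional

from cobaya.likelihood import Likelihood


class MFLike(Likelihood):
    # names mirror options appearing in MFLike.yaml / the tests
    input_file: Optional[str]
    cov_Bbl_file: Optional[str]
    data_folder: Optional[str]

    def initialize(self):
        # the attributes above are already populated from the yaml at this point
        self.log.info("Reading data from %s", self.input_file)
```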

Expected calibration parameters cause trouble with multiple experiments

Running MFLike on a dataset with multiple experiments will not work if those experiments observe the same frequencies. This is because of a minor issue that propagates through MFLike: it saves the frequencies of the experiments in the MFLike.freqs list (line 349 of mflike.py), but does so per experiment. When it then populates the calibration parameter list (lines 86-93 of mflike.py), it does not check whether a frequency appears multiple times, and it creates multiple entries for such frequencies, causing an error.

The real solution would be to label the calibration parameters per experiment: e.g. SO LAT operating at 150 GHz should have a calibration parameter calT_LAT_150, whereas ACT operating at the same frequency should have a calibration parameter calT_ACT_150 (the current setup does not allow for something like this). However, from a quick glance, this looks like it needs some work in the TheoryForge to accommodate creating different theories for different experiments(?).
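
For illustration only, a small sketch of how per-experiment calibration names could be generated; the helper below is hypothetical, not current mflike code:

```python
# Hypothetical sketch: key calibration parameters on (experiment, frequency),
# so LAT and ACT at 150 GHz get distinct parameters instead of clashing.
def calibration_param_names(experiments):
    """experiments: dict mapping experiment name -> list of frequencies in GHz."""
    names = []
    for exp, freqs in experiments.items():
        for nu in freqs:
            names.append(f"calT_{exp}_{nu}")
            names.append(f"calE_{exp}_{nu}")
    return names


print(calibration_param_names({"LAT": [93, 145], "ACT": [150]}))
# ['calT_LAT_93', 'calE_LAT_93', 'calT_LAT_145', 'calE_LAT_145', 'calT_ACT_150', 'calE_ACT_150']
```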

For now, a quick and easy fix could be to remove duplicates from the MFLike.freqs list - something as simple as replacing line 349 of mflike.py with

self.freqs = np.array(list(set([bands[b] for b in self.bands])))

is sufficient for the current system to work. For simple models where you don't care about having separate calibration/nuisance parameters per experiment, this is enough.

Likelihoods organization for different TT, TE, EE, TTTEEE modes

So far the MFLike likelihood has only been tested in TTTEEE mode, and the associated MFLike.yaml file is filled in for this mode (all the nuisance parameters are set, for instance).

We could organize likelihoods for TT, TE, EE and TTTEEE in the same way as for Planck 2018 (https://github.com/CobayaSampler/cobaya/tree/master/cobaya/likelihoods/planck_2018_highl_plik), where each likelihood just inherits from a base class that actually does the job. The current MFLike likelihood could play the same role, but we would then need to create independent likelihoods that inherit from it, and also define several yaml files with the corresponding nuisance parameters (as well as the corresponding proposals). Any thoughts? A sketch of this organization is shown below.
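
A minimal sketch of that layout, with hypothetical class names; each subclass would ship its own yaml (named after the class, following the cobaya convention) containing only the relevant nuisance parameters and proposals:

```python
# Hypothetical sketch: thin per-mode subclasses of the existing MFLike base class.
# Class names and the yaml files they would read are illustrative only.
from mflike import MFLike


class MFLikeTT(MFLike):
    """TT-only likelihood; defaults read from MFLikeTT.yaml."""


class MFLikeTE(MFLike):
    """TE-only likelihood; defaults read from MFLikeTE.yaml."""


class MFLikeTTTEEE(MFLike):
    """Full TT+TE+EE likelihood; defaults read from MFLikeTTTEEE.yaml."""
```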

Running test out of the box gives -inf priors forever

Hi -- I'm trying to run chains on test_mflike.yaml (no edits to the file as cloned), and the chains are not moving past their initialization, seemingly because the priors for all the proposed points are -inf. Here's a sample from the debug output:

2019-12-09 12:18:29,418 [3 : prior] Got logpriors = [-inf]
 2019-12-09 12:18:29,418 [3 : model] Posterior to be computed for parameters {'cosmomc_theta': 0.010484665433904548, 'logA': 3.072009238551706, 'ns': 1.0652250072601754, 'ombh2': 0.018800597607373643, 'omch2': 0.13909441242591114, 'Alens': 1.42590362113841, 'tau': 0.05262073918784727, 'a_tSZ': 3.0352069880212724, 'a_kSZ': 1.4381075879466305, 'a_p': 6.351253625825865, 'beta_p': 2.156042337771019, 'a_c': 4.456510050811277, 'beta_c': 2.3853734997906573, 'n_CIBC': 0.9762930812370438, 'a_s': 3.084028346702068, 'T_d': 9.357633010848831}
 2019-12-09 12:18:29,418 [3 : prior] Evaluating prior at array([0.01048467, 3.07200924, 1.06522501, 0.0188006 , 0.13909441,
       1.42590362, 0.05262074, 3.03520699, 1.43810759, 6.35125363,
       2.15604234, 4.45651005, 2.3853735 , 0.97629308, 3.08402835,
       9.35763301])
 2019-12-09 12:18:29,418 [3 : prior] Got logpriors = [-inf]
 2019-12-09 12:18:29,419 [3 : model] Posterior to be computed for parameters {'cosmomc_theta': 0.010484665433904548, 'logA': 3.072009238551706, 'ns': 1.0652250072601754, 'ombh2': 0.018800597607373643, 'omch2': 0.13909441242591114, 'Alens': 1.42590362113841, 'tau': 0.05262073918784727, 'a_tSZ': 3.2173958213321896, 'a_kSZ': 1.593669538939377, 'a_p': 6.145119221346361, 'beta_p': 2.1465505176398483, 'a_c': 4.864986398268181, 'beta_c': 2.4816065029525665, 'n_CIBC': 1.100816825435299, 'a_s': 3.080330509295114, 'T_d': 9.518145987411813}
 2019-12-09 12:18:29,419 [3 : prior] Evaluating prior at array([0.01048467, 3.07200924, 1.06522501, 0.0188006 , 0.13909441,
       1.42590362, 0.05262074, 3.21739582, 1.59366954, 6.14511922,
       2.14655052, 4.8649864 , 2.4816065 , 1.10081683, 3.08033051,
       9.51814599])
 2019-12-09 12:18:29,419 [3 : prior] Got logpriors = [-inf]
 2019-12-09 12:18:29,420 [3 : model] Posterior to be computed for parameters {'cosmomc_theta': 0.010484665433904548, 'logA': 3.072009238551706, 'ns': 1.0652250072601754, 'ombh2': 0.018800597607373643, 'omch2': 0.13909441242591114, 'Alens': 1.42590362113841, 'tau': 0.05262073918784727, 'a_tSZ': 2.903884368256298, 'a_kSZ': 1.6118481984479405, 'a_p': 6.34518933026812, 'beta_p': 2.188081681028017, 'a_c': 4.4351923879061115, 'beta_c': 2.430863199895848, 'n_CIBC': 1.0904170546822312, 'a_s': 3.0891121379660675, 'T_d': 9.197080046037364}
 2019-12-09 12:18:29,420 [3 : prior] Evaluating prior at array([0.01048467, 3.07200924, 1.06522501, 0.0188006 , 0.13909441,
       1.42590362, 0.05262074, 2.90388437, 1.6118482 , 6.34518933,
       2.18808168, 4.43519239, 2.4308632 , 1.09041705, 3.08911214,
       9.19708005])
 2019-12-09 12:18:29,420 [3 : prior] Got logpriors = [-inf]
 2019-12-09 12:18:29,420 [3 : model] Posterior to be computed for parameters {'cosmomc_theta': 0.010484665433904548, 'logA': 3.072009238551706, 'ns': 1.0652250072601754, 'ombh2': 0.018800597607373643, 'omch2': 0.13909441242591114, 'Alens': 1.42590362113841, 'tau': 0.05262073918784727, 'a_tSZ': 2.99638758959611, 'a_kSZ': 1.4268033051797486, 'a_p': 6.21016213147142, 'beta_p': 2.1517868192660976, 'a_c': 4.469622202499272, 'beta_c': 2.387853490882928, 'n_CIBC': 0.9887253973463992, 'a_s': 3.0911808001547674, 'T_d': 8.322494208373788}
 2019-12-09 12:18:29,420 [3 : prior] Evaluating prior at array([0.01048467, 3.07200924, 1.06522501, 0.0188006 , 0.13909441,
       1.42590362, 0.05262074, 2.99638759, 1.42680331, 6.21016213,
       2.15178682, 4.4696222 , 2.38785349, 0.9887254 , 3.0911808 ,
       8.32249421])
 2019-12-09 12:18:29,421 [3 : prior] Got logpriors = [-inf]
 2019-12-09 12:18:29,421 [3 : model] Posterior to be computed for parameters {'cosmomc_theta': 0.010484665433904548, 'logA': 3.072009238551706, 'ns': 1.0652250072601754, 'ombh2': 0.018800597607373643, 'omch2': 0.13909441242591114, 'Alens': 1.42590362113841, 'tau': 0.05262073918784727, 'a_tSZ': 3.0224097423185388, 'a_kSZ': 1.4127849822351908, 'a_p': 6.3393019757341325, 'beta_p': 2.149714199199823, 'a_c': 4.5072141054699, 'beta_c': 2.3885005666501886, 'n_CIBC': 1.0202959750401777, 'a_s': 3.1040564577165664, 'T_d': 9.143228794431089}
 2019-12-09 12:18:29,421 [3 : prior] Evaluating prior at array([0.01048467, 3.07200924, 1.06522501, 0.0188006 , 0.13909441,
       1.42590362, 0.05262074, 3.02240974, 1.41278498, 6.33930198,
       2.1497142 , 4.50721411, 2.38850057, 1.02029598, 3.10405646,
       9.14322879])
 2019-12-09 12:18:29,422 [3 : prior] Got logpriors = [27.476231036469883]
 2019-12-09 12:18:29,422 [3 : model] Got input parameters: OrderedDict([('cosmomc_theta', 0.010484665433904548), ('As', 2.15852290082278e-09), ('ns', 1.0652250072601754), ('ombh2', 0.018800597607373643), ('omch2', 0.13909441242591114), ('Alens', 1.42590362113841), ('tau', 0.05262073918784727), ('a_tSZ', 3.0224097423185388), ('a_kSZ', 1.4127849822351908), ('a_p', 6.3393019757341325), ('beta_p', 2.149714199199823), ('a_c', 4.5072141054699), ('beta_c', 2.3885005666501886), ('n_CIBC', 1.0202959750401777), ('a_s', 3.1040564577165664), ('T_d', 9.143228794431089)])
 2019-12-09 12:18:29,422 [3 : camb] Got parameters {'cosmomc_theta': 0.010484665433904548, 'As': 2.15852290082278e-09, 'ns': 1.0652250072601754, 'ombh2': 0.018800597607373643, 'omch2': 0.13909441242591114, 'Alens': 1.42590362113841, 'tau': 0.05262073918784727}
 2019-12-09 12:18:29,422 [3 : camb] Re-using computed results

etc. Any suggestions? Is there some additional setup I need to run this test?

Fix potential bug in theoryforge

Include cls=self.requested_cls in the call to rot and remove it from the initialization of Rotation_alm (one line above).

Note that the current implementation does not result in a bug, because, in syslib: (i) an additional unused key in Rotation_alm is just redundant; (ii) requested_cls only includes the tt, te and ee polarization spectra, which are rotated by default by Rotation_alm.

A bug would be introduced if requested_cls also included bb and/or eb and/or tb.

Update version requirements

Summary: Python 3.7 was deprecated last month, but MFLike still allows versions as early as 3.5 to install it, which can cause issues down the dependency chain.

tables-io has imposed a py>=3.8 restriction, and it is a requirement for newer versions of sacc (0.8 and newer, as far as I can see). The classifiers for MFLike already say it supports Python 3.8 and newer, but it only imposes the hard restriction py>=3.5, which has long been deprecated by now.

Earlier, I tried pip-installing mflike on Python 3.7 and ran into an issue where mflike and its direct dependencies claim to support my Python version, but dependencies of those dependencies do not, so installing mflike fails.

As a side note, the required dependencies of mflike are also not kept up to date, and it might be beneficial to review these (a possible setup.py sketch is given after the list):

  • cobaya has released version 3.3 (required version 3.1 is 2 years old)
  • sacc has released version 0.9 (required version 0.4.2 is 3 years old)
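
For illustration, a possible tightening of the packaging metadata as a setup.py fragment; the minimum versions below are indicative only and would need checking against what mflike actually requires:

```python
# Hypothetical sketch of updated version requirements; pins are illustrative.
from setuptools import find_packages, setup

setup(
    name="mflike",
    packages=find_packages(),
    python_requires=">=3.8",   # drop long-EOL Python 3.5-3.7
    install_requires=[
        "cobaya>=3.3",         # currently required: >=3.1, ~2 years old
        "sacc>=0.9",           # currently required: >=0.4.2, ~3 years old
        "fgspectra",
    ],
)
```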

Issue with TE and ET cross-frequencies spectra

@xgarrido, @sgiardie and I found that the way mflike deals with TE and ET for cross-frequency spectra is not clear and possibly not correct at the moment.

  • ET cross-frequency spectra are not stored during the theory construction; instead they are defined as TE spectra with the "hasYX_xsp" argument set to True.
  • In theoryforge.get_modified_theory(), cmbfg_dict is constructed with all the different frequency and spectrum combinations, since there is a loop for exp1, exp2 in product(self.experiments, self.experiments): and one for s in self.requested_cls:. So for 3 frequencies with TT, TE and EE, cmbfg_dict has 12 entries.
  • cmbfg_dict is then calibrated and rotated, and this should work fine, because it contains all the crosses we are interested in.
  • But when cmbfg_dict is transferred to dls_dict, the entry dls_dict["te", m["t1"], m["t2"]] gets written twice: once for the actual ["te", m["t1"], m["t2"]], and a second time when hasYX_xsp == True, when it gets cmbfg_dict["te", m["t2"], m["t1"]].
  • These two entries have the same CMB+FG but might have different calibrations (a toy illustration of the overwrite is given after the list of possible fixes).

Possible fixes include:

  • include an extra key "hasXY_xsp" in dls_dict
  • build the self.spec_meta dictionary using "pol" = TT/TE/ET etc and not the camb keys tt/te/ee
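
A toy illustration of the overwrite described above, using a simplified dictionary keyed on (spectrum, freq1, freq2); this is not the actual mflike code, just the failure mode:

```python
# Hypothetical illustration: two differently calibrated entries collapse onto one key.
cmbfg_dict = {
    ("te", 93, 145): 1.00,   # TE, 93x145 (one calibration)
    ("te", 145, 93): 1.02,   # ET stored as TE with frequencies swapped (different calibration)
}

dls_dict = {}
for (spec, f1, f2), val in cmbfg_dict.items():
    # both entries are mapped back onto the same ("te", 93, 145) key ...
    key = (spec, min(f1, f2), max(f1, f2))
    dls_dict[key] = val      # ... so the second write silently overwrites the first

print(dls_dict)              # {('te', 93, 145): 1.02} -- the 1.00 entry is lost
```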

unit tests don't generically work

Trying to debug #7; starting by trying to run unit tests.

First, this line:

modules_path = os.environ.get("COBAYA_MODULES_PATH") or "/tmp/modules"

My understanding is that cobaya looks for and uses a $COBAYA_MODULES env var by default, so any unit tests here should probably imitate that.
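
For example, the tests could fall back through the environment variables cobaya itself recognizes (COBAYA_PACKAGES_PATH is the newer name and COBAYA_MODULES the deprecated one, per the warnings in the traceback above); a minimal sketch:

```python
# Hypothetical sketch: resolve the packages path the way cobaya does, checking
# the current and the deprecated environment variable names before the fallback.
import os

packages_path = (
    os.environ.get("COBAYA_PACKAGES_PATH")
    or os.environ.get("COBAYA_MODULES")
    or "/tmp/modules"
)
```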

Second, the cobaya documentation suggests not to install camb generically at the environment level, which means this won't generically work:

Bandpasses

Include bandpass integration in the likelihood.
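
A minimal sketch of what bandpass integration of a foreground SED could look like, using a trapezoid rule over a tabulated transmission curve; the function and variable names are illustrative, not mflike's implementation:

```python
# Hypothetical sketch: average a frequency-dependent SED over a bandpass instead
# of evaluating it at a single effective frequency.
import numpy as np


def bandpass_integrate(sed, nu, transmission):
    """Average sed(nu) over the bandpass `transmission`, both sampled at `nu` [GHz]."""
    weights = transmission / np.trapz(transmission, nu)  # normalize the bandpass
    return np.trapz(weights * sed(nu), nu)


# toy example: a power-law SED and a top-hat band centred on 145 GHz
nu = np.linspace(130.0, 160.0, 121)
transmission = np.ones_like(nu)
print(bandpass_integrate(lambda f: (f / 145.0) ** 2.2, nu, transmission))
```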

Foregrounds

Make the foregrounds part a bit more efficient (a caching sketch is given after the list):

  • Don't load fgspectra every time you evaluate foregrounds
  • Don't compute foregrounds you don't need
  • Don't compute foregrounds in tt, te and ee if you're not going to use all of those spectra.
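
A minimal caching sketch along these lines; the class and helper names are hypothetical, and the fgspectra model construction is left as a placeholder since the exact constructors depend on the fgspectra version:

```python
# Hypothetical sketch: build foreground models once at initialization and only
# evaluate the spectra/components that were actually requested.
class ForegroundCache:
    def __init__(self, requested_cls, components):
        self.requested_cls = requested_cls      # e.g. ["tt", "te", "ee"]
        self.components = components            # e.g. {"tt": ["cibc", "tsz", "ksz"], "te": [], "ee": []}
        self._models = self._build_fg_models()  # placeholder: construct fgspectra objects here, once

    def _build_fg_models(self):
        # left empty on purpose; in practice this is where the fgspectra
        # cross-spectrum objects would be instantiated (once, not on every call)
        return {}

    def evaluate(self, ell, fg_params):
        fg = {}
        for spec in self.requested_cls:                  # skip spectra not requested
            for comp in self.components.get(spec, []):   # skip components not used for this spectrum
                model = self._models.get((spec, comp))
                if model is not None:
                    fg[spec, comp] = model(ell, **fg_params)
        return fg
```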

unit tests failing with different chisq

I am testing the new v0.6 version, and getting a test failure with small differences in chisq:

============================================================================================================ short test summary info ============================================================================================================
FAILED mflike/tests/test_mflike.py::MFLikeTest::test_cobaya - AssertionError: 2413.3696293249977 != 2412.9275 within 2 places (0.4421293249979499 difference)
FAILED mflike/tests/test_mflike.py::MFLikeTest::test_mflike - AssertionError: 1384.95286887142 != 1384.5669 within 2 places (0.385968871419891 difference)
========================================================================================================= 2 failed in 282.36s (0:04:42) =========================================================================================================
