brinckmann / montepython_public
This project forked from baudren/montepython_public
Public repository for the Monte Python Code
License: MIT License
I'm testing some extended teleparallel theories (f(T)) that approach Lambda CDM in some limit, using Planck 2015 + JLA + WiggleZ likelihoods. Is it normal to get such a high value for chi-square (in the range of 10000)? Or do I simply need to compare that value with the standard model's under the same likelihoods? In various papers I see a minimum chi-square in the range 700-1000.
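As a side note, the absolute chi-square depends on how many data points enter each likelihood and on normalization constants dropped from ln(L), so it is not comparable across setups; the quantity papers usually quote is the difference against a ΛCDM run on the same likelihoods. A minimal sketch of that comparison, reading the `min(-loglike)` entries Monte Python writes to its `.log` summary (the file names and numerical values below are hypothetical):

```python
import re

def min_neg_loglike(log_line):
    """Parse a 'min(-loglike) = X' entry from a Monte Python .log summary line."""
    m = re.search(r"min\(-loglike\)\s*=\s*([-+\d.eE]+)", log_line)
    if m is None:
        raise ValueError("no min(-loglike) entry found")
    return float(m.group(1))

# Hypothetical values: chi^2_eff = 2 * min(-loglike); only the *difference*
# between models run on the same likelihoods is meaningful.
chi2_fT   = 2 * min_neg_loglike("chain__1.txt ... min(-loglike) = 5021.3")
chi2_lcdm = 2 * min_neg_loglike("chain__1.txt ... min(-loglike) = 5017.8")
delta_chi2 = chi2_fT - chi2_lcdm  # this difference is what papers compare
```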
Hi all,
How do I plot 3σ contours in the 'info' analysis, and how do I modify the analysis code?
Thanks!
Hello, I'm analyzing the output of CosmoHammer for my modified gravity model. CLASS works fine with my modification, and I didn't touch anything in Monte Python except cosmo_hammer.py (as explained in baudren#105). I just added some new parameters for my model. First I typed
python montepython/MontePython.py run -N 1000000 -p input/asdf.param -o chains/asdf --method CH
and I got
/!\ Appending to an existing folder: using the log.param instead of
input/asdf.param
Running Monte Python v3.0.1
with CLASS v2.7.0
Testing likelihoods for:
-> Planck_highl, Planck_lowl
----
clik version 723c1a4b0580
smica
Checking likelihood '/home/cosmos/plc-2.0/plc_2.0/hi_l/plik/plik_dx11dr2_HM_v18_TT.clik' on test data. got -380.979 expected -380.979 (diff -8.6809e-09)
----
----
clik version 723c1a4b0580
gibbs_gauss 1478fb2d-28fa-49ac-a8ae-677dbdc3600a
Checking likelihood '/home/cosmos/plc-2.0/plc_2.0/low_l/commander/commander_rc2_v1.1_l2_29_B.clik' on test data. got -7.32304 expected -7.32304 (diff -2.52096e-10)
----
Sampler: <class 'cosmoHammer.MpiCosmoHammerSampler.MpiCosmoHammerSampler'>
configuration:
Params: [0.00000e+00 0.00000e+00 2.22530e+00 1.19190e-01 1.04180e+00 3.07530e+00
9.62290e-01 9.46300e-02 6.10000e+01 1.30000e-01 6.86000e+00 2.22900e+02
3.80000e+01 3.52000e+01 1.02600e+02 0.00000e+00 6.75000e+00 9.41000e+00
1.92800e+01 8.17000e+01 9.98590e+02 9.95890e+02 1.00028e+02]
Burnin iterations: 10
Samples iterations: 30
Walkers ratio: 50
Reusing burn in: False
init pos generator: SampleBallPositionGenerator
stop criteria: IterationStopCriteriaStrategy
storage util: SampleFileUtil
likelihoodComputationChain:
Core Modules:
Data
Class
function
Likelihood Modules:
Planck_highl
Planck_lowl
start burn in
MPI Process rank 0 starts sampling
/!\ invalid value encountered in subtract
/!\ invalid value encountered in greater
Iteration finished:10
storing random state
burn in sampling done! Took: 2937.0032s
Mean acceptance fraction for burn in:0.0474
Reseting emcee sampler
start sampling after burn in
MPI Process rank 0 starts sampling
Iteration finished:10
Iteration finished:20
Iteration finished:30
sampling done! Took: 16738.5529s
Mean acceptance fraction:0.0808
and I typed
python montepython/MontePython.py info chains/asdf/CH/
finally I got
Running Monte Python v3.0.1
/!\ The content of the CH subfolder has been translated for Monte Python.
Please run an analysis of the entire folder now.
but when I tried to analyze the final output
python montepython/MontePython.py info chains/asdf/
I got this error:
Running Monte Python v3.0.1
--> Scanning file chains/asdf/chain_CH__sampling.txt : Removed everything: chain not converged
chain_CH__sampling-error_log.txt : Removed 0 non-markovian points, 0 points of burn-in, keep 23741 steps
--> Computing mean values
Traceback (most recent call last):
File "montepython/MontePython.py", line 40, in <module>
sys.exit(run())
File "/home/cosmos/montepython_public/montepython/run.py", line 31, in run
custom_command)
File "/home/cosmos/montepython_public/montepython/run.py", line 188, in safe_initialisation
cosmo, data, command_line, success = initialise(custom_command)
File "/home/cosmos/montepython_public/montepython/initialise.py", line 58, in initialise
analyze(command_line)
File "/home/cosmos/montepython_public/montepython/analyze.py", line 92, in analyze
convergence(info)
File "/home/cosmos/montepython_public/montepython/analyze.py", line 291, in convergence
compute_mean(mean, spam, total)
File "/home/cosmos/montepython_public/montepython/analyze.py", line 1873, in compute_mean
submean = np.sum(spam[j][:, 0]*spam[j][:, i+2])
IndexError: index 23 is out of bounds for axis 1 with size 23
It seems that something goes wrong in the translation step for Monte Python, but I'm new to Monte Python and don't know how to address it. Could you please help me?
Here is the file "asdf.log" generated by Monte Python from the CH output:
```
chain_CH__sampling.txt Number of steps:10759 Steps accepted:10759 acc = 1 min(-loglike) = 398.43
chain_CH__sampling-error_log.txt Number of steps:-63 Steps accepted:23741 acc = -3.8e+02 min(-loglike) = -0.38
```
and "asdf.log" from the CH folder:
```
2019-01-06 10:50:46,854 INFO:Using CosmoHammer 0.6.1
2019-01-06 10:50:46,854 INFO:Using emcee 2.2.1
2019-01-06 10:50:46,855 INFO:Sampler: <class 'cosmoHammer.MpiCosmoHammerSampler.MpiCosmoHammerSampler'>
configuration:
Params: [0.00000e+00 0.00000e+00 2.22530e+00 1.19190e-01 1.04180e+00 3.07530e+00
9.62290e-01 9.46300e-02 6.10000e+01 1.30000e-01 6.86000e+00 2.22900e+02
3.80000e+01 3.52000e+01 1.02600e+02 0.00000e+00 6.75000e+00 9.41000e+00
1.92800e+01 8.17000e+01 9.98590e+02 9.95890e+02 1.00028e+02]
Burnin iterations: 10
Samples iterations: 30
Walkers ratio: 50
Reusing burn in: False
init pos generator: SampleBallPositionGenerator
stop criteria: IterationStopCriteriaStrategy
storage util: SampleFileUtil
likelihoodComputationChain:
Core Modules:
Data
Class
function
Likelihood Modules:
Planck_highl
Planck_lowl
2019-01-06 10:50:46,855 INFO:start burn in
2019-01-06 10:50:46,889 INFO:MPI Process rank 0 starts sampling
2019-01-06 11:39:43,858 INFO:Iteration finished:10
2019-01-06 11:39:43,858 INFO:storing random state
2019-01-06 11:39:43,859 INFO:burn in sampling done! Took: 2937.0032s
2019-01-06 11:39:43,859 INFO:Mean acceptance fraction for burn in:0.0474
2019-01-06 11:39:43,859 INFO:Reseting emcee sampler
2019-01-06 11:39:43,859 INFO:start sampling after burn in
2019-01-06 11:39:43,860 INFO:MPI Process rank 0 starts sampling
2019-01-06 12:50:08,875 INFO:Iteration finished:10
2019-01-06 14:20:01,112 INFO:Iteration finished:20
2019-01-06 16:18:42,412 INFO:Iteration finished:30
2019-01-06 16:18:42,412 INFO:sampling done! Took: 16738.5529s
2019-01-06 16:18:42,412 INFO:Mean acceptance fraction:0.0808
```
and a few lines from "asdf.out" in the CH folder:
```
0.003846995504693862 -0.12502459553350692 2.2472903505163804 0.12384922398831993 1.0413070098001689 3.077978890000854 0.9650343111894872 0.10786681707903199 47.850073044578934 0.0977692201556215 7.801337795954111 223.70354878867923 36.94067386094072 25.08068105946423 108.80792223427453 1.653024298105409 6.864260841475928 10.340016984444906 19.73630764706021 91.23851821916166 997.4299237678556 996.3437130622725 100.04671829088544 nan nan nan nan nan nan
0.0037502633072408966 0.002374425341948726 2.2144240443299323 0.12026574523180777 1.0419245247212885 3.0714038249265174 0.9604753127145113 0.09067403660244437 67.06954239430483 -0.06643676684710167 5.781769815937481 218.72767402147934 32.24634270962544 46.89209986015958 104.68412499011586 -5.104588019739823 8.947236806690537 8.569814832412398 21.736981910986774 71.21979513590352 997.5526806918932 999.0325990082719 100.2331687374733 nan nan nan nan nan nan
0.007911644401904304 0.05376092715398769 2.2280456819592835 0.11711287516240178 1.0420832306015195 3.075734604467507 0.9670323081366486 0.09183857193790462 57.340733839015805 0.20183573353761264 5.561554818257315 211.5546764011501 33.711024397981966 36.01769865564616 111.34014449580111 2.6007899483778614 4.631616429170401 7.971067945851494 18.184706302522045 90.90070459350987 999.108612128633 996.1374430702206 100.18300000857732 2.16657918545e-09 68.41507 0.7007370261761605 0.2478014184984166 0.8147221776015532 11.152557373046875
0.0004886302944677685 -0.15350088209094193 2.2450364189046255 0.11609997253881224 1.0415313006197 3.0780513017013913 0.9707277754634048 0.0915613595326492 74.87468946031495 0.48537905150752575 5.896722621922505 169.76150769398998 49.22059760748415 39.36313408112218 91.36316687762344 5.418971426928258 11.125531652627771 9.866897236637664 16.60680450970747 65.61944183411443 998.5932225607319 997.7284201072526 100.27465610650705 nan nan nan nan nan nan
0.004840730144504865 -0.005499302920426844 2.207122730218392 0.115292252031806 1.0418926791593273 3.0756205481815475 0.9829321112890094 0.10951997689484101 70.16929948218126 0.3356244118400147 4.839288627956549 215.17789887946407 32.105442856743224 48.7147187658239 90.86318796323576 -9.459849891462653 8.1640039092423 9.332818120871984 16.962020678056746 90.7160059159369 997.9317869711858 995.7718029701946 100.41086525687074 nan nan nan nan nan nan
0.007873866966741165 0.11912113391300592 2.240708610096269 0.11743539880747099 1.041545069366967 3.0733947450257366 0.9659298606547433 0.09774459678180293 55.87765956552554 0.26652260367263075 4.558017462066843 207.17051589908192 39.5082690652956 44.24527327752238 88.26090476961922 -9.02445026174145 6.410133386015022 9.159864960334358 16.021892028204913 77.91974663097531 998.2226435051368 994.714078564709 100.09079586035725 nan nan nan nan nan nan
-0.020126562855181204 -0.10889409545086058 2.248130665439115 0.12145905662517498 1.0418063781235145 3.0742580972695968 0.9580713015123139 0.06252730051138866 65.38991939811847 0.20358287804530373 6.347775986956104 224.61868802962906 23.76244578073289 39.809070148689074 98.23658800298267 -5.611291611749822 6.887317117812717 9.702317835892357 23.254487311433355 84.42114480452626 998.9126225298736 997.195775199958 99.5216969371078 nan nan nan nan nan nan
-0.012444730998238188 0.061917315729992574 2.185352901902555 0.1173685169542292 1.0420755971662183 3.080471989726493 0.9592826920096886 0.07265599652971648 52.31859362096024 0.5593464762543845 4.098223270067347 203.28991486433193 21.497023937314435 36.672605612033166 102.06662052532097 -1.544833080528576 7.207222878080215 7.413069046030926 19.647068377245546 88.795211423061 998.8820925321946 995.5732890547073 99.59078136911464 nan nan nan nan nan nan
0.005100385378252516 -0.17463815313663636 2.2379218069040974 0.11712913516257391 1.0421913887923493 3.076440387425912 0.9687657951020978 0.11135277020384374 55.317302848784124 0.7165724065041246 7.025599734260181 229.92074540266327 27.412944323075557 46.21670770535092 114.34263030780473 3.0519453624367805 8.158124847796008 13.623341920239412 23.747111887567012 97.408060532173 999.1290542073439 995.3748038324145 100.10734906290101 nan nan nan nan nan nan
0.004790910646166329 -0.0612736832398382 2.243452956281451 0.11879876946789596 1.0417182804696 3.073774280616307 0.9550936470960631 0.08051719128565347 57.521079844316475 -0.22983871250032478 7.909806056582105 225.2448722731597 33.71008888030493 46.012027313067165 105.2745911648193 -1.9464789003279166 8.445014677412207 8.870532823807855 20.482991063351115 88.9585954597101 996.8394145519231 996.620580408743 100.15567461212146 nan nan nan nan nan nan
-0.024136309585230817 -0.00028583000427685176 2.233566238802488 0.11766283467746749 1.0416134534592416 3.0751171853458494 0.9706061258449273 0.06642667479520047 61.960604190763995 0.05783045216242434 5.747287203177584 256.87683834157303 38.65835718405072 28.585134872491555 112.47200004409224 6.406620367154805 5.598504289544604 6.170323165729334 19.48682293215828 85.58514838169134 999.5829010454643 993.8432600509541 100.14833929965675 nan nan nan nan nan nan
0.003815812565937198 0.009656257724668826 2.215810139435711 0.11702818636686624 1.041271319612017 3.0752315475329173 0.9602753576175292 0.09697978089932666 49.780545534507816 0.0590678120134801 7.885414260764911 267.9149835321906 38.07546258364491 28.465078830587647 101.15587919658162 0.04698176737235604 6.866325327252815 10.702195235714006 20.403714121976382 88.50890131468717 998.8488271457175 996.5274491871428 100.13871009795193 2.16548954687e-09 68.06308 0.6980806602971381 0.2477489889030199 0.8119210943165301 11.628448486328125
```
and "chain_CH__sampling.txt":
```
1.000000000000000000e+00 1.128373664323362618e+03 7.911644401904303903e-03 5.376092715398769067e-02 2.228045681959283542e+00 1.171128751624017778e-01 1.042083230601519528e+00 3.075734604467506905e+00 9.670323081366486440e-01 9.183857193790462281e-02 5.734073383901580456e+01 2.018357335376126382e-01 5.561554818257315347e+00 2.115546764011500898e+02 3.371102439798196571e+01 3.601769865564615714e+01 1.113401444958011126e+02 2.600789948377861371e+00 4.631616429170400728e+00 7.971067945851493697e+00 1.818470630252204501e+01 9.090070459350987164e+01 9.991086121286330126e+02 9.961374430702205700e+02 1.001830000085773236e+02 2.166579185450000074e-09 6.841507000000000005e+01 7.007370261761605157e-01 2.478014184984166091e-01 8.147221776015531880e-01 1.115255737304687500e+01
1.000000000000000000e+00 1.741987486276706932e+03 3.815812565937198170e-03 9.656257724668826370e-03 2.215810139435710813e+00 1.170281863668662381e-01 1.041271319612016910e+00 3.075231547532917276e+00 9.602753576175292460e-01 9.697978089932665635e-02 4.978054553450781583e+01 5.906781201348010324e-02 7.885414260764910921e+00 2.679149835321906039e+02 3.807546258364490654e+01 2.846507883058764676e+01 1.011558791965816226e+02 4.698176737235604250e-02 6.866325327252814859e+00 1.070219523571400622e+01 2.040371412197638179e+01 8.850890131468716504e+01 9.988488271457174505e+02 9.965274491871427927e+02 1.001387100979519289e+02 2.165489546869999843e-09 6.806307999999999936e+01 6.980806602971381114e-01 2.477489889030199122e-01 8.119210943165301453e-01 1.162844848632812500e+01
1.000000000000000000e+00 1.313163112403977721e+03 7.118635042115737874e-03 1.225288720076803534e-01 2.176968423103039729e+00 1.209586924391146540e-01 1.041624503120577083e+00 3.076864051924229404e+00 9.677031156263335987e-01 9.337927720397463938e-02 6.606523480309849106e+01 5.492406028225535475e-02 7.563747471626623309e+00 2.254236044718325616e+02 2.905275564040569236e+01 2.127322663238310696e+01 1.145726772748897417e+02 6.938723289933784955e+00 8.792232078826419439e+00 1.136112207643819794e+01 2.317358218741506093e+01 6.888182119098600253e+01 9.999540639437149139e+02 9.984927668594947363e+02 1.002819651594155772e+02 2.169027605220000147e-09 6.642723000000000866e+01 6.750000545611600788e-01 2.475812921151235024e-01 8.301517812284142650e-01 1.155123901367187500e+01
1.000000000000000000e+00 1.935167940573800252e+03 3.608993642508027261e-03 4.040579054579648832e-03 2.263780913914335002e+00 1.219054157984217762e-01 1.041672102799462207e+00 3.075568770204970548e+00 9.554840860765438926e-01 9.726029141872373496e-02 5.734529638547730457e+01 3.325415630187206206e-04 9.807925907894631479e+00 2.172276141534553631e+02 3.242391221834510873e+01 2.886204484009736149e+01 9.979613978429344456e+01 1.558084544967772711e-01 6.179074194029840150e+00 9.903037366660448981e+00 2.377631144755649473e+01 8.772584902356464909e+01 9.979025885416606343e+02 9.944325301410162865e+02 1.000472652290881257e+02 2.166219922179999811e-09 6.689997999999999934e+01 6.755220857044407223e-01 2.479533117059298242e-01 8.255504930576388345e-01 1.161827087402343750e+01
1.000000000000000000e+00 1.780481408755079201e+03 1.541905384037550990e-03 1.344484229445355034e-02 2.231326501523757777e+00 1.195806135545675608e-01 1.041839599475115508e+00 3.075618093721053103e+00 9.660970670735866683e-01 9.715425152946721710e-02 6.013838579822511576e+01 9.036379048022616378e-02 5.678815877730265527e+00 2.062385337651708141e+02 3.694636365662616839e+01 3.095491840856214338e+01 9.734748697610889678e+01 1.424537720949293096e+00 7.201873804060451967e+00 1.058831974839807089e+01 2.020824471577329362e+01 7.873392736392692370e+01 9.984761574672527331e+02 9.966897799179618005e+02 1.001830955362016482e+02 2.166326770399999923e-09 6.747535999999999490e+01 6.868513067300012764e-01 2.478154433642033583e-01 8.225809942482822379e-01 1.165792846679687500e+01
1.000000000000000000e+00 1.784817760038293954e+03 1.331989287368809307e-03 1.029311285609337406e-01 2.238523852814596982e+00 1.231702565533328020e-01 1.041708338952253454e+00 3.075964504433204905e+00 9.558879489241002281e-01 9.510461303001702116e-02 6.151040243116791828e+01 2.445276281834893872e-01 7.909615170747614776e+00 2.391541925393127315e+02 4.437892934548575141e+01 3.956719301070978645e+01 1.021670681716439617e+02 2.514407880452402821e-01 4.540489552153061936e+00 9.822354385119060893e+00 1.962374608280030230e+01 8.023471735594586107e+01 9.988320987787633385e+02 9.959012347147789797e+02 1.000447980099193614e+02 2.167077339189999997e-09 6.624706000000000472e+01 6.667880143001850834e-01 2.478461577681812245e-01 8.309306197888737255e-01 1.154983520507812500e+01
```
I am running parameter estimation using the Likelihood_mock_cmb likelihood class, and my input files include the TT, EE, and TE power spectra. Is there a quick way to turn off polarization and include only TT in the likelihood, without changing the input files?
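I don't know of a built-in switch for this (an assumption on my part), but one pragmatic route is to edit the likelihood's loglkl method so that only the TT term enters the sum. For an ideal full-sky, noise-free spectrum, the single-spectrum likelihood reduces per multipole to (2l+1)(Ĉ_l/C_l + ln(C_l/Ĉ_l) - 1). A self-contained sketch of that TT-only term (the function name and signature are mine, not Monte Python's):

```python
import math

def tt_only_chi2eff(cl_obs, cl_th, l_min=2, f_sky=1.0):
    """Effective chi^2 for a full-sky, noise-free, TT-only mock likelihood.

    Per multipole: chi2_l = f_sky * (2l+1) * (C_obs/C_th + ln(C_th/C_obs) - 1),
    which vanishes when the theory matches the 'observed' fiducial exactly.
    cl_obs, cl_th: sequences of C_l^TT starting at l = l_min.
    """
    chi2 = 0.0
    for i, (obs, th) in enumerate(zip(cl_obs, cl_th)):
        ell = l_min + i
        chi2 += f_sky * (2 * ell + 1) * (obs / th + math.log(th / obs) - 1.0)
    return chi2
```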
Hi,
I am running 5 parallel chains on a cluster, but unfortunately the job doesn't finish within the cluster's walltime. The paper https://arxiv.org/pdf/1804.07261.pdf mentions that "When chains are restarted in the same directory using the --restart command, this will be done automatically", but the code's documentation doesn't explain how to do this in practice.
I am running the chains using
srun --ntasks=5 --mpi=pmi2 python montepython/MontePython.py run \
--conf default.conf -j fast\
-p $params -o $output -c $covmat --superupdate 20\
-N $Nstep --update $updatestep --silent
and the output txt files look like this:
2018-10-26_100000__1.txt
2018-10-26_100000__2.txt
2018-10-26_100000__3.txt
2018-10-26_100000__4.txt
2018-10-26_100000__5.txt
Is there a way to quickly restart/continue the chain without using the output file names?
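As far as I can tell (please double-check against parser_mp.py for your version), Monte Python 3 takes a -r/--restart option pointing at one of the existing chain files, with the rest of the command unchanged. A command sketch under that assumption:

```shell
# Sketch: restart in the same output folder; with MPI each rank should pick up
# its own chain (_1.txt ... _5.txt). Flags other than -r are unchanged.
srun --ntasks=5 --mpi=pmi2 python montepython/MontePython.py run \
    --conf default.conf -p $params -o $output -c $covmat \
    --superupdate 20 -N $Nstep --update $updatestep --silent \
    -r $output/2018-10-26_100000__1.txt
```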
What are the (main) differences between MontePython and GetDist in the methods used to analyse chains?
As an experiment, I tried to analyse the same chains using both, and I get different results: the best-fit values differ.
Also, for some chains (where convergence is slow), where MontePython gives NaN values for the sigma bounds of the difficult parameters, GetDist gives normal numeric values.
Hi, I submitted this issue here before:
baudren#118
I haven't tried to solve the problem since then.
At that time I tried to install Monte Python on NERSC; I loaded the Anaconda distribution for Python 2.7+, so all the required packages were already in place.
But I got the following error when I used the command
code/montepython> python montepython/MontePython.py --help
[Thu Sep 27 01:38:16 2018] [unknown] Fatal error in PMPI_Init_thread: Other MPI error, error stack:
MPIR_Init_thread(537):
MPID_Init(246).......: channel initialization failed
MPID_Init(647).......: PMI2 init failed: 1
Aborted (core dumped)
Dear Developers,
I cannot find the PDF documentation mentioned in the description. Moreover, some of the things mentioned do not correspond to the current version of the code (e.g., the Planck likelihood).
Best,
Hi,
we are currently testing out the PolyChord extension for MontePython. In doing so, we stumbled over the time MontePython spends on the theory calculation, i.e. on the call to cosmo.compute() in sampler.py. We have compared the timing to single Class/Camb calls and to CosmoMC, and the cosmo.compute() call made by MontePython seems to take about 2 times longer. Is this expected, or must something be wrong in our setup?
We ran with the input from base2015.param with the three Planck likelihoods for high-l, low-l, and lensing. Below are the timings we measured:
Time for cosmo.compute: 6.8 s
Time for Planck_highl: 0.008 s
Time for Planck_lowl: 0.09 s
Time for Planck_lensing: 0.007 s
Time for PPS: 2.861022949218750E-006
Time for theory: 2.83309
Time for lowl_SMW_70_dx11d_2014_10_03_v5c_Ap: 9.755587577819824E-002
Time for smica_g30_ftl_full_pp: 4.398822784423828E-004
Time for plik_dx11dr2_HM_v18_TT: 5.431890487670898E-003
Likelihood calculation time (seconds) = 2.9676
Time for explanatory.ini: 3.6 s
Time for cosmo.compute(): 3.7 s
Time for cosmo.lensed_cl: 5 ms
Time for camb.get_results(pars): 3.7 s
Time for results.get_lensed_scalar_cls: 0.5 ms
using:
Python version 2.7.10
MontePython version 3.0.1
classy version 2.7.0
pycamb version 0.1.8.1
Many thanks,
Lukas
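One way to narrow down whether the factor of ~2 comes from Monte Python itself or from the precision settings the Planck likelihoods impose on Class (lensed spectra to high l, non-linear corrections) is to time cosmo.compute() in isolation with exactly the arguments recorded in log.param. A generic best-of-N wall-clock helper; the commented usage line assumes classy is installed and `cosmo` is a configured Class instance:

```python
import time

def best_wall_time(fn, repeat=3):
    """Return the best wall-clock time (seconds) of `fn()` over `repeat` calls."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

# Usage sketch (assumes classy; struct_cleanup() forces a full recomputation):
# t = best_wall_time(lambda: (cosmo.struct_cleanup(), cosmo.compute()))
```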
Using --update 500 does not produce the covariance matrix for me. Any help?
mpirun -np 8 python MontePython.py run -o ../chains/fT1_convergenceTest -p ../sN1a_plc_Bao.param --update 500 -N 750000 --conf ../default.conf -f 1.5
--> Scanning file ../chains/fT1_convergenceTest/2018-11-17_750000__1.txt : Removed 314 points of burn-in, and first 50 percent, keep 1372 steps
2018-11-17_750000__2.txt : Removed 129 points of burn-in, and first 50 percent, keep 1702 steps
2018-11-17_750000__3.txt : Removed everything: chain not converged
2018-11-17_750000__4.txt : Removed everything: chain not converged
2018-11-17_750000__5.txt : Removed 72 points of burn-in, and first 50 percent, keep 2277 steps
2018-11-17_750000__6.txt : Removed 70 points of burn-in, and first 50 percent, keep 2221 steps
2018-11-17_750000__7.txt : Removed everything: chain not converged
2018-11-17_750000__8.txt : Removed everything: chain not converged
--> Computing mean values
--> Computing variance
--> Computing convergence criterium (Gelman-Rubin)
-> R-1 is 0.006293 for omega_cdm
0.035296 for b
0.005224 for alpha
0.009577 for beta
0.005479 for M
0.017239 for Delta_M
0.006294 for Omega_m
0.006294 for H0
0.006295 for sigma8
--> Not computing covariance matrix
Dear all,
I am trying to use Monte Python 3.0 to make contour plots of the tensor-to-scalar ratio (r_T) versus the spectral index (n_s). I am having trouble assigning the parameter a value in the .param file, and with plotting it.
Any suggestions will be appreciated.
Thank you
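In case it helps, a parameter entry in a Monte Python .param file takes the form [mean, min, max, sigma, scale, role]. A sketch of the lines involved for n_s and r; the numerical values are illustrative only, and the CLASS tensor options should be double-checked for your version:

```python
# In the .param file (executed by Monte Python; `data` is provided by it):
data.parameters['n_s'] = [0.9619, None, None, 0.004, 1, 'cosmo']
data.parameters['r']   = [0.07,   0.0,  None, 0.03,  1, 'cosmo']

# CLASS needs tensor modes switched on for r to have any effect:
data.cosmo_arguments['modes'] = 's,t'
```

Running `info` on the resulting chains then produces the 1D/2D posteriors, including the r vs n_s panel of the triangle plot.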
What is the best way to run a model?
Should I run once with a high N and --update 50, or repeat the same run several times with increasing N, say 10000, 100000, 1000000?
Is there any difference between the two approaches? If so, which one will theoretically get me to the highest resolution fastest?
thank you
I'm trying to run Monte Python 3 with the Euclid_pk likelihood, which uses the method cosmo.pk_cb. This is not defined in CLASS, so I'm naturally getting an error:
AttributeError: 'classy.Class' object has no attribute 'pk_cb'
Is this a bug? Should it just be cosmo.pk, as in the previous version of the code? Or is the likelihood file written for some other version of CLASS, not publicly released?
Are the default data values in likelihoods such as Planck_SZ.data and CFHTLens.data the real data values from these probes? I could not find the values from these .data files in the related papers.
I'm trying to run MontePython with nested sampling for the Planck 2015 likelihood (base2015TTTEEE), using the parameter file provided. Since nested sampling requires hard priors, I used the values in the 2015 Planck analysis (from the .ranges file in the publicly released Planck chains).
I'm getting a series of errors which prevent the chain from being output correctly. They all seem to be related to the values of the cosmological parameters passed on to CLASS.
In the default parameter file, the prior ranges of the parameters in question (omega_b seems to be the cause of the problem) are set to None. Why should setting hard priors affect the initial and subsequent guesses? It shouldn't push the values into unphysical regions; if anything, it should prevent that. Am I missing something?
And is there a solution?
Here is a sample of the errors:
Error in Class: background_init(L:634) :condition (pba->shooting_failed == TRUE) is true; Shooting failed, try optimising input_get_guess(). Error message:
input_init(L:330) :error in input_find_root(&xzero, &fevals, &fzw, errmsg);
=>input_find_root(L:4006) :error in input_fzerofun_1d(x1, pfzw, &f1, errmsg);
=>input_fzerofun_1d(L:3543) :error in input_try_unknown_parameters(&input, 1, pfzw, output, error_message);
=>input_try_unknown_parameters(L:3728) :error in thermodynamics_init(&pr,&ba,&th);
=>thermodynamics_init(L:390) :error in thermodynamics_recombination(ppr,pba,pth,preco,pvecback);
=>thermodynamics_recombination(L:2611) :error in thermodynamics_recombination_with_recfast(ppr,pba,pth,preco,pvecback);
=>thermodynamics_recombination_with_recfast(L:3253) :error in generic_integrator(thermodynamics_derivs_with_recfast, zstart, zend, y, &tpaw, ppr->tol_thermo_integration, ppr->smallest_allowed_variation, &gi);
=>generic_integrator(L:112) :error in rkqs(&x, h, eps, &hdid, &hnext, derivs, parameters_and_workspace_for_derivs, pgi);
=>rkqs(L:156) :condition (xnew == *x) is true; stepsize underflow at x=nan
Error in Class: background_init(L:634) :condition (pba->shooting_failed == TRUE) is true; Shooting failed, try optimising input_get_guess(). Error message:
input_init(L:330) :error in input_find_root(&xzero, &fevals, &fzw, errmsg);
=>input_find_root(L:4006) :error in input_fzerofun_1d(x1, pfzw, &f1, errmsg);
=>input_fzerofun_1d(L:3543) :error in input_try_unknown_parameters(&input, 1, pfzw, output, error_message);
=>input_try_unknown_parameters(L:3728) :error in thermodynamics_init(&pr,&ba,&th);
=>thermodynamics_init(L:304) :error in thermodynamics_helium_from_bbn(ppr,pba,pth);
=>thermodynamics_helium_from_bbn(L:1218) :condition (omega_b > omegab[num_omegab-1]) is true; You have asked for an unrealistic high value omega_b = 5.304738e-02. The corresponding value of the primordial helium fraction cannot be found in the interpolation table. If you really want this value, you should fix YHe to a given value rather than to BBN
Hi,
How can I add an SNe Pantheon likelihood, modeled on the JLA likelihood, following the paper arxiv.org/abs/1710.00845?
When running Monte Python with the Planck_actspt likelihood I get the error:
The path to the .clik file for the likelihood Planck_actspt was not found where indicated: ...
However the .clik file (from http://pla.esac.esa.int/pla/aio/product-action?COSMOLOGY.FILE_ID=COM_Data_Likelihood-actspt_R1.10.tar) is in exactly the location listed by the error, one level up from the directory for the Planck likelihood code. I am using plc v2.0. Is this a bug in Monte Python or an incompatibility between the Planck_actspt likelihood and the 2015 version of the plc?
I want to run MontePython on a cluster. Since I have no superuser rights, I use a Python installation in my home directory rather than in /usr/. I (seemingly) installed Cython there successfully.
However when attempting to run ./waf configure --install_all_deps I get the following:
Checking python module 'cython' : ok
Checking python module 'cython' : ok
Checking cython version (>0.12) : 0.29
Checking for program 'cython' : not found
No suitable python distribution found
Cause : '[Errno 2] No such file or directory: '/home/nedelko/planck/plc-2.0/bin/cython''
Compilation will continue without it (but I strongly advise that you install it)
(everything else is fine and green)
So waf is searching for Cython inside the /plc-2.0/ folder instead of the Python installation, and I can't figure out why.
Now, it says that I may run ./waf install anyway, but that crashes with:
Waf: Entering directory `/home/nedelko/planck/plc-2.0/build'
add plik/component_plugin/rel2015/fg2015.c
add plik/component_plugin/rel2015/corrnoise.c
add plik/component_plugin/rel2015/leakage.c
Waf: Leaving directory `/home/nedelko/planck/plc-2.0/build'
Build failed
Traceback (most recent call last):
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Runner.py", line 150, in task_status
    return tsk.runnable_status()
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Tools/fc.py", line 46, in runnable_status
    ret=tsk.runnable_status()
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Tools/fc.py", line 40, in runnable_status
    return super(fc,self).runnable_status()
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Task.py", line 313, in runnable_status
    new_sig=self.signature()
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Task.py", line 302, in signature
    self.sig_implicit_deps()
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Task.py", line 386, in sig_implicit_deps
    if prev==self.compute_sig_implicit_deps():
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Task.py", line 424, in compute_sig_implicit_deps
    upd(k.get_bld_sig())
  File "/home/nedelko/planck/plc-2.0/.waf-1.8.8-eab538dea2e33915d3770ff3f393b18b/waflib/Node.py", line 459, in get_bld_sig
    self.cache_sig=ret=self.sig
AttributeError: sig
Hi, I'm learning how to use the Monte Python code through some tests; I'm really just getting started.
I ran the code with
~/research/software/monte_python/montepython_public$ python montepython/MontePython.py run -o sdss_lrg/ -p input/sdss_lrgDR7.param
and then tried to analyze and plot the result with
~/research/software/monte_python/montepython_public$ python montepython/MontePython.py info sdss_lrg/ --extra plot_files/example.plot
At first I got this error message:
Running Monte Python v3.0.1
Traceback (most recent call last):
File "montepython/MontePython.py", line 42, in <module>
sys.exit(run())
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/run.py", line 31, in run
custom_command)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/run.py", line 188, in safe_initialisation
cosmo, data, command_line, success = initialise(custom_command)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/initialise.py", line 58, in initialise
analyze(command_line)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/analyze.py", line 76, in analyze
info = Information(command_line)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/analyze.py", line 2091, in __init__
execfile(command_line.optional_plot_file, plot_file_vars)
File "plot_files/example.plot", line 46
info.force_limits = ('H0':[60:70],'z_reio':[5:15])
^
SyntaxError: invalid syntax
so I changed the example.plot line info.force_limits = ('H0':[60:70],'z_reio':[5:15])
to ('H0'==[60,70],'z_reio'==[5,15]).
Then a new error came up:
Running Monte Python v3.0.1
--> Scanning file test/2018-11-14_10000__1.txt : Removed 0 non-markovian points, 15 points of burn-in, keep 91 steps
/!\ Convergence computed for a single file
/|\ Transforming omega_cdm into (0.01*omega_b+omega_cdm)/(H0/100.)**2
/_o_\ The new variable will be called $\Omega_\mathrm{m}$
Traceback (most recent call last):
File "montepython/MontePython.py", line 42, in <module>
sys.exit(run())
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/run.py", line 31, in run
custom_command)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/run.py", line 188, in safe_initialisation
cosmo, data, command_line, success = initialise(custom_command)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/initialise.py", line 58, in initialise
analyze(command_line)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/analyze.py", line 92, in analyze
convergence(info)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/analyze.py", line 270, in convergence
info.remap_parameters(spam)
File "/home/sjc2/research/software/monte_python/montepython_public/montepython/analyze.py", line 2129, in remap_parameters
exec("spam[i][:, %i] = %s" % (index_to_change, value))
File "<string>", line 1, in <module>
NameError: name 'H0' is not defined
How can I solve this problem?
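For what it's worth, one possible reading of both errors is that force_limits expects a Python dict mapping parameter names to [min, max] lists. This is an unverified sketch (not checked against analyze.py); the stub class only makes the fragment runnable:

```python
# Sketch (assumption: force_limits is a dict of {parameter: [min, max]},
# which would explain both the colon syntax error and the '==' NameError).
class Info:            # stub standing in for the analyze-module object
    pass

info = Info()
info.force_limits = {'H0': [60, 70], 'z_reio': [5, 15]}
print(sorted(info.force_limits))
```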
How exactly do you extract the maximum of the likelihood from a run? I only seem to get the minimum. Help appreciated.
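A minimal sketch of how the best-fit point could be read off a chain, assuming the standard MontePython chain layout (column 0 = multiplicity, column 1 = -log(L), remaining columns = parameters). Note the maximum of the likelihood corresponds to the *minimum* of -log(L), so "receiving the min" is exactly what you want:

```python
import numpy as np

# Toy chain with the assumed layout: multiplicity, -log(L), params...
chain = np.array([
    [1.0, 3000.0, 0.50, 0.40],
    [2.0, 2900.0, 0.52, 0.42],
    [1.0, 2950.0, 0.51, 0.41],
])

# Maximum likelihood = minimum of -log(L):
best = chain[np.argmin(chain[:, 1])]
print(best[1], best[2:])
```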
Hi again,
I'm trying to analyze the test output of a run with example.param and the fake_planck likelihood.
But when I run the command I get an error about there not being enough chains:
montepython_public> montepython/MontePython.py info test_planck/fake_planck/
Running Monte Python v3.0.1
--> Scanning file test_planck/fake_planck/2018-11-20_5__2.txt : Removed 0 non-markovian points, 4 points of burn-in, keep 1 steps
2018-11-20_5__3.txt : Removed everything: chain not converged
2018-11-20_5__1.txt : Removed everything: chain not converged
Traceback (most recent call last):
File "montepython/MontePython.py", line 40, in <module>
sys.exit(run())
File "/global/u2/l/leeyounb/cori/software/code/montepython_public/montepython/run.py", line 31, in run
custom_command)
File "/global/u2/l/leeyounb/cori/software/code/montepython_public/montepython/run.py", line 188, in safe_initialisation
cosmo, data, command_line, success = initialise(custom_command)
File "/global/u2/l/leeyounb/cori/software/code/montepython_public/montepython/initialise.py", line 58, in initialise
analyze(command_line)
File "/global/u2/l/leeyounb/cori/software/code/montepython_public/montepython/analyze.py", line 92, in analyze
convergence(info)
File "/global/u2/l/leeyounb/cori/software/code/montepython_public/montepython/analyze.py", line 268, in convergence
spam = remove_bad_points(info)
File "/global/u2/l/leeyounb/cori/software/code/montepython_public/montepython/analyze.py", line 1849, in remove_bad_points
"Please wait a bit to analyze this folder")
io_mp.AnalyzeError:
Analyze Error:
/|\ No decently sized chain was found. Please wait a bit to analyze this
/_o_\ folder
Is it possible to get the velocity power spectrum for dark matter or baryons from the CLASS code? I need the velocity perturbations in terms of k.
In a few of my models I needed to amend CLASS to be able to input the Hubble constant at the present time. I simply copied the code handling H0 to implement this, but I just want to make sure I'm doing it correctly. Has anyone had previous experience with this? Thanks in advance.
Hi, I noticed that when we run MontePython it can automatically calculate a Fisher matrix to speed up the computation, right? However, my likelihood doesn't seem to be able to do that. Is there an option to turn this off?
Thanks a lot!
Hi all,
I'm usually using MultiNest (NS) as my default sampler and on a cluster I'm still working with MontePython v2.2.1. Once the NS run has finished, I use
montepython info /path/to/my/run/NS/
to convert the NS output to a regular MP chain (so that I can then run
montepython info /path/to/my/run/
in a second step to get all plots and files from the converted chain).
That procedure works fine with MP v2.2.1, but updating to v2.2.2 or now v3.0.1 results in various error messages when I execute the first command to convert the NS output to a chain.
MP v2.2.2 complains about a missing 'log.param' file in the 'NS' subfolder (which of course is one folder up, as it should be...) and MP v3.0.1 just doesn't find any decently sized chain in the NS subfolder (it's supposed to create it...).
To me it seems that the analyze.py module has lost its ability to deal properly with the NS subfolder from v2.2.2 onwards...
Cheers,
Fabian
Hi all,
I ran into the same issue as already posted and not yet resolved for MP v2.X when trying to set up the WMAP likelihood, and I just hope to get an answer here.
In short: setting up the likelihood ultimately fails because the 'pywlik' Python module is not created, although neither 'waf configure' / 'waf install' nor the sourcing of 'clik_profile.sh' returns any error messages...
Cheers,
Fabian
Hello,
I am running MontePython 2.2.2 with Planck 2015 (TT+lowP) + JLA + BAO BOSS DR12 to probe a dark energy parameterization. After running successfully, I tried to analyze the chains with the command:
python montepython/MontePython.py info chains/N2_nakawe/bao_jla_planck/nkw_1-3_cluster1_f1.7/ --want-covmat
I get the following error:
-> Computing histograms for omega_b
/!\ The 1D posterior could not be processed normally, probablydue to incomplete
or obsolete numpy and/or scipy versions.So the raw histograms will be
plotted.
I'm running in a Python 2.7 environment with all the expected packages; my numpy and scipy versions are:
Is this a negligible error/warning, or does it change how the output histograms are computed?
I've not had any problems while running Planck or any other likelihoods.
If anyone could lend a hand here, it'd be appreciated!
Thank you in advance,
Dear all,
I am a Master's student working on cosmology, doing some work on the constraints on the alpha parameters in Horndeski gravity using MontePython together with hi_class. Recently I found something that confused me.
The problem concerns the calculation of -log(Likelihood) in MontePython. When we start a chain, the code calculates and lists the value of -log(Likelihood) at each step. For example, if there are two parameters to constrain, the output will look like this:
No. of step    -log(Likelihood)    parameter1    parameter2
2 3000 0.50 0.40
5 2900 0.52 0.42
...
Theoretically, the calculation of -log(Likelihood) should involve the covariance matrix through
-ln(Likelihood) = 0.5 * d_a (C^{-1})_{ab} d_b
So theoretically, if the covariance matrix is different, the -log(Likelihood) for the same point in parameter space should be different. But when I run MontePython, I find that the value of -log(Likelihood) stays the same even when we choose different covariance matrices, which really confuses me.
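For concreteness, the quoted Gaussian formula can be evaluated directly. The sketch below uses toy numbers; note that the C in this formula is the *data* covariance used inside a likelihood, which is a different object from the proposal covariance passed to the sampler with -c:

```python
import numpy as np

# Toy evaluation of  -ln L = 0.5 * d_a (C^{-1})_{ab} d_b
d = np.array([0.1, -0.2])          # residuals: data - theory
C = np.array([[0.04, 0.00],
              [0.00, 0.01]])       # data covariance matrix of the likelihood
minus_lnL = 0.5 * d @ np.linalg.solve(C, d)
print(minus_lnL)
```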
My running step is as follows:
python montepython/Montepython.py run -p inputfile.param -o outputfolder -c convariancematrixNo1.covmat -f 0 -N 1
When we run this, the code calculates the -log(Likelihood) for the starting point. We then change nothing but the covariance matrix:
python montepython/Montepython.py run -p inputfile.param -o outputfolder -c convariancematrixNo2.covmat -f 0 -N 1
and the -log(Likelihood) we get is exactly the same value, even though the covariance matrices are totally different.
Did I do something wrong, or do I have a wrong understanding of the code? Thank you so much in advance.
Hi all,
I'm trying to convert PolyChord (PC) output to a standard Monte Python chain via:
montepython info /path/to/run/PC/
This yields the following error message:
Traceback (most recent call last):
File "/.../montepython_public_v3/montepython/MontePython.py", line 40, in <module>
sys.exit(run())
File "/.../montepython_public_v3/montepython/run.py", line 31, in run
custom_command)
File "/.../montepython_public_v3/montepython/run.py", line 190, in safe_initialisation
cosmo, data, command_line, success = initialise(custom_command)
File "/.../montepython_public_v3/montepython/initialise.py", line 58, in initialise
analyze(command_line)
File "/.../montepython_public_v3/montepython/analyze.py", line 82, in analyze
status = prepare(item, info)
File "/.../montepython_public_v3/montepython/analyze.py", line 222, in prepare
"is missing in the analyzed folder?")
io_mp.AnalyzeError:
Analyze Error:
/|\ The log param file PC/log.param is missing in the analyzed folder?
But of course, the log.param file is one folder up, in path/to/run/... and I'm also not sure why Monte Python would need that file just to convert the PC output to a standard Monte Python chain?!
If I remember correctly, this is also the error message I saw when trying to convert MultiNest output back in the day as described in issue #5 (although this issue was for v3.0.1 whereas the current one is for v3.1.0...).
Cheers,
Fabian
Is there a way to set a sensible physical prior on the combination of w0 and wa? For example, ensuring that w0 + wa <= 0.
I have set priors -3 < w0 < -0.3 and -2 < wa < 2 on the parameters individually, but since the 1-sigma bounds from current data are quite weak, this is not sufficient to stop the chains from wandering into unphysical regions of parameter space, where
w_fld >= 1./3.
or
w_fld_ini > 0
When this happens, CLASS rightly complains for many reasons, and sometimes the chains completely stall and are unable to get back to sensible regions.
I imagine this issue would be handled automatically with a good estimate of the proposal covariance matrix, as the correlations between parameters might stop them moving simultaneously in the wrong direction. But I need the chains to progress far enough to be able to estimate that covariance!
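As a generic illustration of the kind of joint prior check that would be needed (this is not MontePython's actual prior API, which only supports per-parameter bounds in the .param file as far as the question describes):

```python
# Generic sketch: a joint prior that rejects proposals violating w0 + wa <= 0
# on top of the individual flat bounds quoted above.
def in_prior(w0, wa):
    if not (-3.0 < w0 < -0.3):
        return False
    if not (-2.0 < wa < 2.0):
        return False
    return w0 + wa <= 0.0        # joint physical condition

print(in_prior(-1.0, 0.5), in_prior(-0.5, 1.0))
```

A sampler would treat a rejected point as having zero prior probability, so the chain stays at its current position instead of stalling inside CLASS.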
Hi,
I have a question regarding the SDSS DR4 LRG likelihood in MontePython. In Tegmark et al., 2006, the nonlinear power spectrum is modeled as
P_gal(k) = (1+Qk^2)/(1+Ak) P_lin(k)
When I look at the likelihood code for mPk, I cannot find this factor taken into account anywhere. Am I right, or am I missing something?
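For reference, the quoted correction is a simple multiplicative factor applied to the linear spectrum. In this sketch the Q and A values are placeholders, not the paper's fitted ones:

```python
# Sketch of the nonlinear modeling quoted from Tegmark et al. (2006):
#   P_gal(k) = (1 + Q k^2) / (1 + A k) * P_lin(k)
def p_gal(k, p_lin, Q=30.0, A=1.4):   # Q, A are illustrative placeholder values
    return (1.0 + Q * k**2) / (1.0 + A * k) * p_lin

print(round(p_gal(0.1, 1000.0), 2))
```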
Thank you very much for your help!
All the best,
Andrina
When I try with this command below,
montepython/MontePython.py info $SCRATCH/montepython_output/test_planck/high_l+lowl/ $SCRATCH/montepython_output/test_planck/planck_bao/ --extra plot_files/base2015TTTEEE.extra --want-covmat --bins 10
I got this error message, so I reduced the bin number, but the result was the same:

```
-> Computing histograms for n_s
/!\ could not derive minimum credible intervals for this multimodal posterior
/!\ please try running longer chains or reducing the number of bins with --bins
BINS (default: 20)
```
Hello, we added a new parameter "X" related to Dark Energy to CLASS. Now we want to constrain X using MontePython; how should we modify MontePython? I was wondering if it would be possible to share the specific instructions? @brinckmann
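As a hedged sketch of the usual pattern (verify against an existing .param file shipped with MontePython): new parameters are declared in the .param file as a list of [mean, min bound, max bound, 1-sigma proposal, scale, role], where role 'cosmo' forwards the parameter to CLASS under the same name. The stub class below only makes the fragment runnable; the bounds and step size are placeholders:

```python
class Data:                 # stub standing in for MontePython's data object
    def __init__(self):
        self.parameters = {}

data = Data()
# Hypothetical dark-energy parameter X, flat prior on [-1, 1]:
#                        mean  min   max  sigma scale role
data.parameters['X'] = [ 0.0, -1.0,  1.0,  0.1,   1, 'cosmo']
print(data.parameters['X'][-1])
```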
In order to use the likelihood "bao_fs_boss_dr12", should we reference both papers, arXiv:1607.03155 and arXiv:1708.09406? In arXiv:1804.07261 both references are mentioned for this likelihood, but I think only the first reference is relevant.
Dear all,
I am using the Planck likelihood and have already downloaded the plc-2.0 folder; I then installed it using the following commands:
./waf configure --install_all_deps --lapack_mkl=$MKLROOT
./waf install
The installation is successful and then I run the command:
source ~/test/planck/plc-2.0/bin/clik_profile.sh
(I put the plc-2.0 folder in a folder called "planck". This folder is in a folder called "test", together with the folders "hi-class" and "montepython_public-2.2".) The .conf file of MontePython is written as follows:
root = '~/test/hi_class/'
path['cosmo'] = root
path['clik'] = '~/test/planck/plc-2.0'
Everything is fine up to this point, but when I try to run MontePython, an error turns up:
Testing likelihoods for:
-> Planck_highl_TTTEEE
Cannot use clik wrapper (cause = 'No module named lkl')
Cannot use clik_lensing wrapper (cause = 'No module named lkl_lensing')
Traceback (most recent call last):
File "montepython/MontePython.py", line 40, in <module>
sys.exit(run())
File "/home/zhao/test/montepython_public-2.2/montepython/run.py", line 31, in run
custom_command)
File "/home/zhao/test/montepython_public-2.2/montepython/run.py", line 188, in safe_initialisation
cosmo, data, command_line, success = initialise(custom_command)
File "/home/zhao/test/montepython_public-2.2/montepython/initialise.py", line 66, in initialise
data = Data(command_line, path)
File "/home/zhao/test/montepython_public-2.2/montepython/data.py", line 340, in __init__
self.initialise_likelihoods(self.experiments)
File "/home/zhao/test/montepython_public-2.2/montepython/data.py", line 460, in initialise_likelihoods
elem, elem, folder, elem)
File "<string>", line 1, in <module>
File "/home/zhao/test/montepython_public-2.2/montepython/likelihood_class.py", line 861, in __init__
"and try again.")
montepython.io_mp.MissingLibraryError:
Missing Library Error:
/|\ You must first activate the binaries from the Clik distribution. Please
/_o_\ run :
]$ source /path/to/clik/bin/clik_profile.sh
and try again.
I did source that script, but it does not work. Could anyone help me find out where the problem is?
In any case, thank you for your help.
Hi, I want to apply the Parametrized Post-Friedmann (PPF) method to constrain the parameter space. But w0_fld is completely unconstrained in my current results. Can I change the LCDM model to a cosmological model under PPF by modifying something in CLASS or MontePython?
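In CLASS, a dynamical dark-energy fluid with a PPF perturbation scheme can in principle be enabled through the input settings alone. The fragment below follows the parameter names documented in CLASS's explanatory.ini (Omega_Lambda, use_ppf, w0_fld, wa_fld), but treat it as an unverified sketch to check against your CLASS version; the bounds and step sizes are placeholders:

```python
# Sketch of a MontePython .param fragment (names follow CLASS conventions;
# verify against explanatory.ini for your CLASS version before use):
data.cosmo_arguments['Omega_Lambda'] = 0.       # replace Lambda by the fld fluid
data.cosmo_arguments['use_ppf'] = 'yes'         # PPF scheme for w crossing -1
data.parameters['w0_fld'] = [-1.0, -3.0, 1.0, 0.1, 1, 'cosmo']
data.parameters['wa_fld'] = [ 0.0, -2.0, 2.0, 0.1, 1, 'cosmo']
```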
Hi Thejs, all,
Great to see that PolyChord is now available as a sampler choice, quite timely! While trying it out I ran into an "invalid index to scalar variable" error.
It is caused by line 360 in PolyChord.py:
logl = sampler.compute_lkl(cosmo, data)[0,0]
And resolved by naively removing the square bracket expression:
logl = sampler.compute_lkl(cosmo, data)
Cheers,
Fabian
Hi, I've been trying to reproduce the results of an article using Planck's 2013 likelihood. I followed the installation and usage steps, but I keep getting the following error:
python montepython/MontePython.py run -p input/planck+pantheon2.param -o chains/deltaw0 -N 1000
Running Monte Python v3.0.1
with CLASS v2.7.1
Testing likelihoods for:
-> Planck_actspt, Planck_highl, Planck_lowl, Planck_lensing, Pantheon, bao
Cannot use clik wrapper (cause = 'No module named lkl')
Cannot use clik_lensing wrapper (cause = 'No module named lkl_lensing')
Traceback (most recent call last):
File "montepython/MontePython.py", line 40, in <module>
sys.exit(run())
File "/home/dinorah/montepython_public/montepython/run.py", line 31, in run
custom_command)
File "/home/dinorah/montepython_public/montepython/run.py", line 188, in safe_initialisation
cosmo, data, command_line, success = initialise(custom_command)
File "/home/dinorah/montepython_public/montepython/initialise.py", line 66, in initialise
data = Data(command_line, path)
File "/home/dinorah/montepython_public/montepython/data.py", line 341, in __init__
self.initialise_likelihoods(self.experiments)
File "/home/dinorah/montepython_public/montepython/data.py", line 470, in initialise_likelihoods
elem, elem, folder, elem)
File "<string>", line 1, in <module>
File "/home/dinorah/montepython_public/montepython/likelihood_class.py", line 886, in __init__
except clik.lkl.CError:
AttributeError: 'module' object has no attribute 'lkl'
I don't understand what I did wrong, I have configured all the files correctly...
[Links back to issue raised on the old github page]
I'm getting segmentation faults when I use MontePython plus MultiNest - sometimes before any MultiNest output and sometimes after many lines of sample size/evidence values.
I'm using the KiDS450xGAMA likelihood plus data consistent with van Uitert et al., arXiv:1706.05004, cosmic shear only. It runs fine with Metropolis-Hastings.
Please let me know any other information which would help.
Hi,
I want to use only the Pantheon systematic errors. How should I modify the likelihood's __init__.py file?
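As a generic illustration only (not the actual Pantheon __init__.py, whose variable names may differ): supernova covariances are commonly assembled as a diagonal statistical part plus a dense systematic part, so "systematics only" would amount to dropping the diagonal statistical piece:

```python
import numpy as np

# Toy numbers; real Pantheon covariances are read from data files.
n = 3
stat = np.diag([0.01, 0.02, 0.015])   # per-SN statistical variances (diagonal)
sys_cov = np.full((n, n), 0.005)      # dense systematic covariance
cov_full = stat + sys_cov             # the usual total covariance
cov_sys_only = sys_cov.copy()         # "systematics only": drop the stat part
print(np.allclose(cov_full - stat, cov_sys_only))
```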
Hello,
In the get_cl() function in montepython/likelihood_class.py [ln 182-199], the normalization step fails to account for cl['tp'] needing to be scaled by only one factor of 1.0e6:

```python
for key in cl.iterkeys():
    # All quantities need to be multiplied by this factor, except the
    # phi-phi term, that is already dimensionless
    if key not in ['pp', 'ell']:
        cl[key] *= (T*1.e6)**2
```
As a result, when neglect_TD is turned off in a mock CMB code (likelihood_mock_cmb) the cl['tp'] term is 6 orders of magnitude too large.
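A sketch of the scaling the report implies, under the assumption that the temperature-phi cross spectrum mixes one dimensionful field with the dimensionless phi and should therefore carry a single factor of T*1e6 rather than the squared one (toy values; not a patch against the actual file):

```python
T = 2.7255                      # CMB temperature in K (illustrative)
cl = {'tt': 1.0, 'pp': 1.0, 'tp': 1.0, 'ell': 1.0}

for key in cl:
    if key in ['pp', 'ell']:
        continue                # phi-phi and ell are already dimensionless
    elif key == 'tp':
        cl[key] *= (T * 1.e6)   # one field carries temperature units
    else:
        cl[key] *= (T * 1.e6)**2  # both fields carry temperature units

print(cl['tp'], cl['tt'])
```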
Thanks!
I am finding that if I run multiple MontePython processes under SLURM on one cluster node, everything works fine. But if I split those same processes up over multiple nodes, MontePython fails with the following error:
```
io_mp.ConfigurationError:
Configuration Error:
/|\ You are running in a folder that was created following a non-successful
/o\ initialisation (wrong parameter name, wrong likelihood, etc...). If you
have solved the issue, you should remove completely the output folder,
and try again. Alternatively, there could be a problem with cosmo
```
Test scripts showing the behavior are below. Is this my problem, or is this a bug? Appreciate any feedback on this.
Will Kinney
This SLURM script works fine:
```sh
#!/bin/sh
#SBATCH --partition=debug
#SBATCH --qos=supporters
#SBATCH --time=00:05:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --job-name="Base2015"
#SBATCH --output=Base2015.out
#SBATCH --mail-user=my_mail@localhost
#SBATCH --mail-type=ALL
##SBATCH --requeue
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
module load intel
module load intel-mpi
module load mkl
module load python/anaconda
source /projects/academic/whkinney/PlanckLike/plc-2.0/bin/clik_profile.sh
echo "Launching MontePython"
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
export OMP_NUM_THREADS=4
srun -N 1 -n 2 python ./montepython/MontePython.py run --conf ./default.conf -o ./chains/Base2015/ -p ./input/base2015.param -c ./covmat/base2015.covmat -f 1.5 -N 5000
echo "All Done!"
```
But this SLURM script fails:
```sh
#!/bin/sh
#SBATCH --partition=debug
#SBATCH --qos=supporters
#SBATCH --time=00:05:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --job-name="Base2015"
#SBATCH --output=Base2015.out
#SBATCH --mail-user=my_mail@localhost
#SBATCH --mail-type=ALL
##SBATCH --requeue
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
module load intel
module load intel-mpi
module load mkl
module load python/anaconda
source /projects/academic/whkinney/PlanckLike/plc-2.0/bin/clik_profile.sh
echo "Launching MontePython"
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
export OMP_NUM_THREADS=4
srun -N 2 -n 2 python ./montepython/MontePython.py run --conf ./default.conf -o ./chains/Base2015/ -p ./input/base2015.param -c ./covmat/base2015.covmat -f 1.5 -N 5000
echo "All Done!"
```
I am trying to use the BK14 data to plot cosmological parameter contours, but instead I am getting an error:
RuntimeError: latex was not able to process the following string:
'$BB\\alphadust$'
Here is the full report generated by latex:
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=latex)
restricted \write18 enabled.
entering extended mode
(/home/mohit/.cache/matplotlib/tex.cache/4177339663be02fb03cef3e5448c3192.tex
LaTeX2e <2017-04-15>
Babel <3.18> and hyphenation patterns for 84 language(s) loaded.
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))
(/usr/share/texlive/texmf-dist/tex/latex/type1cm/type1cm.sty)
(/usr/share/texlive/texmf-dist/tex/latex/base/textcomp.sty
(/usr/share/texlive/texmf-dist/tex/latex/base/ts1enc.def))
(/usr/share/texlive/texmf-dist/tex/latex/geometry/geometry.sty
(/usr/share/texlive/texmf-dist/tex/latex/graphics/keyval.sty)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifpdf.sty)
(/usr/share/texlive/texmf-dist/tex/generic/oberdiek/ifvtex.sty)
(/usr/share/texlive/texmf-dist/tex/generic/ifxetex/ifxetex.sty)
Package geometry Warning: Over-specification in h'-direction.
width' (5058.9pt) is ignored.
Package geometry Warning: Over-specification in v'-direction.
height' (5058.9pt) is ignored.
) (./4177339663be02fb03cef3e5448c3192.aux)
(/usr/share/texlive/texmf-dist/tex/latex/base/ts1cmr.fd)
geometry driver: auto-detecting
geometry detected driver: dvips
! Undefined control sequence.
l.13 ...000000}{20.000000}{\sffamily $BB\alphadust
$}
No pages of output.
Transcript written on 4177339663be02fb03cef3e5448c3192.log.
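One generic workaround sketch, assuming the failure is simply the undefined control sequence \alphadust reaching matplotlib's LaTeX renderer (the helper name and the rewritten label are mine, not MontePython's API):

```python
# Hypothetical label sanitizer: rewrite the undefined \alphadust control
# sequence into valid LaTeX before any plotting call renders it.
def latex_safe(label):
    return label.replace(r'\alphadust', r'\alpha_\mathrm{dust}')

print(latex_safe(r'$BB\alphadust$'))
```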
Please Help!!!
Hi, I'm doing runs for a few different f(T) models using Planck 2015 + JLA + WiggleZ BAO, but a few of my main parameters seem to be producing non-Gaussian posteriors.
Are these simply intrinsic properties of the model (which I suppose they aren't, since other similar published papers show they should be Gaussian), or a problem with my .param file, or something that will smooth out with more points per chain?
There's a link in README.rst to the Planck likelihood download -
http://www.sciops.esa.int/index.php?project=planck&page=Planck_Legacy_Archive
This unfortunately returns a 404, so I'm guessing the link needs updating.
Dear MP team,
I want to check the lnL at a given parameter point and then stop the MCMC.
Is there any way to do this?
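One trick, grounded in the -f 0 -N 1 invocation quoted elsewhere in these issues: with the jump factor forced to zero and a single step requested, the run effectively just evaluates and logs -log(L) at the starting point defined in the .param file (paths below are examples):

```shell
python montepython/MontePython.py run -p input/model.param -o chains/lnl_check -f 0 -N 1
```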
Best,
Bnu
Is it possible to constrain cosmological models with Planck 2018 data (especially polarization data) using MontePython? I mean, does it require defining a new likelihood, given that for the Planck 2015 data the plc-2.0 package was available to use the corresponding data? And also, should we define new nuisance parameters (different from the 2013 or 2015 data)?
Dear all,
Do you also get a biased 100*theta_s when using the Planck likelihood?
Best,
Hi,
Could you please provide a source/paper for the likelihood used in the class Likelihood_mock_cmb(Likelihood)? I cannot find an equivalent of the det_mix term in Hamimeche & Lewis 2008.
Thanks.