
hmf's Introduction

hmf

The halo mass function calculator.

hmf is a Python package that provides a flexible and simple way to calculate the Halo Mass Function for a range of varying parameters. It is also the backend to HMFcalc, the online HMF calculator.

Full Documentation

Read the docs.

Features

  • Calculate mass functions and related quantities extremely easily.
  • Very simple to start using, but wide-ranging flexibility.
  • Caching system for optimal parameter updates, for efficient iteration over parameter space.
  • Support for all LambdaCDM cosmologies.
  • Focus on flexibility in models. Each "Component", such as fitting functions, filter functions, growth-factor models and transfer-function fits, is implemented as a generic class that can easily be altered by the user without touching the source code.
  • Focus on simplicity in frameworks. Each "Framework" mixes available "Components" to derive useful quantities -- all given as attributes of the Framework.
  • Comprehensive in terms of output quantities: access differential and cumulative mass functions, mass variance, effective spectral index, growth rate, cosmographic functions and more.
  • Comprehensive in terms of implemented Component models:
    • 5+ models of transfer functions including directly from CAMB
    • 4 filter functions
    • 20 hmf fitting functions
  • Includes models for Warm Dark Matter
  • Nonlinear power spectra via HALOFIT
  • Functions for sampling the mass function.
  • CLI scripts for producing any quantity included.
  • Python 3 compatible (see the note below)

Note

From v3.1, hmf supports Python 3.6+, and has dropped support for Python 2.

Quickstart

Once you have hmf installed, you can quickly generate a mass function by opening an interpreter (e.g. IPython/Jupyter) and doing:

>>> from hmf import MassFunction
>>> hmf = MassFunction()
>>> mass_func = hmf.dndlnm

Note that all parameters have (what I consider reasonable) defaults. In particular, this will return a Tinker (2008) mass function between 10^10 and 10^15 solar masses, at z=0 for the default PLANCK15 cosmology. Nevertheless, there are several parameters which can be input, either cosmological or otherwise. The best way to see these is to do:

>>> MassFunction.parameter_info()

We can also check which parameters have been set in our "default" instance:

>>> hmf.parameter_values

To change the parameters (cosmological or otherwise), one should use the update() method, if a MassFunction() object already exists. For example:

>>> hmf = MassFunction()
>>> hmf.update(cosmo_params={"Ob0": 0.05}, z=10) #update baryon density and redshift
>>> cumulative_mass_func = hmf.ngtm
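
Thanks to the caching system, update() recomputes only the quantities that depend on the changed parameters (in particular, the transfer function is computed once and rescaled by the growth factor), so sweeping a parameter is cheap:

>>> import numpy as np
>>> ngtm_at_z = {}
>>> for z in np.linspace(0, 2, 5):
...     hmf.update(z=z)
...     ngtm_at_z[z] = hmf.ngtm.copy()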

For a more involved introduction to hmf, check out the tutorials, or the API docs.

Using the CLI

You can also run hmf from the command-line. For basic usage, do:

hmf run --help

Configuration for the run can be specified on the CLI or via a TOML file (recommended). An example TOML file can be found in examples/example_run_config.toml. Any parameter specifiable in the TOML file can alternatively be specified on the command line after an isolated double-dash, e.g.:

hmf run -- z=1.0 hmf_model='SMT01'

Versioning

From v3.1.0, hmf will use strict semantic versioning: increases in the major version signal potential API-breaking changes, minor versions introduce new features, and patch versions fix bugs and other non-breaking internal changes.

If your package depends on hmf, set the dependent version like this:

hmf>=3.1,<4.0

Attribution

Please cite Murray, Power and Robotham (2013), Murray, Diemer, Chen, et al. (2021) and/or https://ascl.net/1412.006 (whichever is more appropriate) if you find this code useful in your research. Please also consider starring the GitHub repository.

hmf's People

Contributors

dependabot[bot], jlashner, liuxx479, liweitianux, mirochaj, pre-commit-ci[bot], steven-murray


hmf's Issues

From the script accuracy_test.py

What simple 'test everything works, plot a simple HMF for default conditions' script do you recommend cutting one's teeth on first?

I tried accuracy_test.py and it returned the following error.

File "accuracy_test.py", line 112, in main
accuracy_args.add_argument("--lnk-min", nargs="*", type=float,
help="the maximum wavenumber [default: %s]" %

                                                 h.transfer.lnk[0])

AttributeError: 'numpy.ndarray' object has no attribute 'lnk'
accuracy_test.py: AttributeError("'numpy.ndarray' object has no attribute 'lnk'",)
for help use --help

I tried to redo the same action in IPython and got a different error.


In [11]: h = MassFunction()

In [12]: h.transfer

TypeError Traceback (most recent call last)
/home/sjturnbu/ in ()
----> 1 h.transfer
/usr/local/lib/python2.7/dist-packages/hmf/_cache.pyc in _get_property(self)
---> 52 value = f(self)
/usr/local/lib/python2.7/dist-packages/hmf/transfer.pyc in transfer(self)
--> 456 return self._lnT_cdm
/usr/local/lib/python2.7/dist-packages/hmf/_cache.pyc in _get_property(self)
---> 52 value = f(self)
/usr/local/lib/python2.7/dist-packages/hmf/transfer.pyc in _lnT_cdm(self)
--> 401 self._unnormalised_lnT,
/usr/local/lib/python2.7/dist-packages/hmf/_cache.pyc in _get_property(self)
---> 52 value = f(self)
/usr/local/lib/python2.7/dist-packages/hmf/transfer.pyc in _unnormalised_lnT(self)
--> 377 return get_transfer(self.transfer_fit, self).lnt(self.lnk)
/usr/local/lib/python2.7/dist-packages/hmf/transfer.pyc in get_transfer(name, t)
---> 31 return getattr(sys.modules[name], name)(t)
TypeError: getattr(): attribute name must be string


It is clearly finding the /hmf/transfer.pyc code ... but fails after that.

pycamb error

Hi, I just installed camb and pycamb in Xubuntu 16.04, and I get an error at the moment of importing pycamb:

Python 2.7.12 |Anaconda 4.2.0 (32-bit)| (default, Jul  2 2016, 17:41:35) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import pycamb
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named pycamb
>>> 

and in the Jupyter notebook, when I tried to do this:



import sys, platform, os
#uncomment this if you are running remotely and want to keep in synch with repo changes
#if platform.system()!='Windows':
#    !cd $HOME/git/camb; git pull github master; git log -1
print('Using CAMB installed at '+ os.path.realpath(os.path.join(os.getcwd(),'..')))
sys.path.insert(0,os.path.realpath(os.path.join(os.getcwd(),'..')))
import camb

this is the error I got:

Using CAMB installed at /home/xoca/Desktop/tesis/Tesis

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-2-d725107ff633> in <module>()
      5 print('Using CAMB installed at '+ os.path.realpath(os.path.join(os.getcwd(),'..')))
      6 sys.path.insert(0,os.path.realpath(os.path.join(os.getcwd(),'..')))
----> 7 import camb

/home/xoca/anaconda2/lib/python2.7/site-packages/camb/__init__.py in <module>()
     10 __version__ = "0.1.2"
     11 
---> 12 from .baseconfig import dll_import
     13 from .camb import CAMBdata, MatterTransferData, get_results, get_transfer_functions, get_background, \
     14     get_age, get_zre_from_tau, set_z_outputs, set_feedback_level, set_params, get_matter_power_interpolator

/home/xoca/anaconda2/lib/python2.7/site-packages/camb/baseconfig.py in <module>()
     33     if not osp.isfile(CAMBL): sys.exit(
     34         '%s does not exist.\nPlease remove any old installation and install again.' % DLLNAME)
---> 35     camblib = ctypes.LibraryLoader(ifort_gfortran_loader).LoadLibrary(CAMBL)
     36 # camblib = ctypes.cdll.LoadLibrary(CAMBL)
     37 else:

/home/xoca/anaconda2/lib/python2.7/ctypes/__init__.pyc in LoadLibrary(self, name)
    438 
    439     def LoadLibrary(self, name):
--> 440         return self._dlltype(name)
    441 
    442 cdll = LibraryLoader(CDLL)

/home/xoca/anaconda2/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error)
    360 
    361         if handle is None:
--> 362             self._handle = _dlopen(self._name, mode)
    363         else:
    364             self._handle = handle

OSError: /home/xoca/anaconda2/bin/../lib/libgomp.so.1: version `GOMP_4.0' not found (required by /home/xoca/anaconda2/lib/python2.7/site-packages/camb/camblib.so)


So, can you help me to solve this please?

pycamb installation

Hi Steven,

I would like to download and use your code, and I have been fiddling around for a day or two trying to install it, but it seems nearly impossible. I cannot install the version of pycamb you suggested in the instructions, because it uses a CAMB version (January 2010) that I couldn't find on the web. I would really appreciate it if you could provide a link to that version, because when I compile other versions I also get error messages saying they are not compatible.

Thanks a lot and have a nice weekend.
Cheers.

Versioning/Release Strategy

The current release strategy is not working that well. A quick description of the current system:

Only master is protected. All new features and bugfixes and everything are pulled straight into master. On that pull, an action runs that determines (from commit messages) what version (if any) should be bumped, and creates a git tag, pushes that to GitHub, and triggers an upload to PyPI.

This is super nice and automated. The problems are:

  • Updates can't be lumped into a version. Version churn hurts users/libraries that depend on hmf.
  • The CHANGELOG is difficult to write/update properly, because you kinda don't know which version it will be bumped to.
  • Similar to the first point, breaking changes are all-or-nothing. They must send the code to the next major version and there's no real way back.

I can think of two alternatives that might alleviate some of this:

  1. Manually perform strict semantic versioning. In this case, we'd still use something like commit-analyzer to do the commit analysis and make the version bump as required, but the maintainer (@steven-murray) would do it locally whenever it seems right to do so. This means that as many things can be lumped together as we want before making an official release, and we can make rules about timescales for releases. Downsides are that it's not automated, which can be a bit of a problem when the maintainer has lots of different projects and tends to get back to this one in fits and starts. This could be alleviated by the maintainer having some repeated alert. Also, there's still no protected branch for future breaking versions, and master would not always correspond to a deployed version (this might not be that bad).

  2. Have a more sophisticated set of branches, like git-flow. In this case, develop would be a protected branch in which minor versions and patches could accumulate until it is decided that a release should be cut. Timescale rules can be placed on releases etc. We would also have a breaking branch into which breaking changes for the future version should be merged. Updates can be merged into the current develop and breaking. Downsides are the increased complexity here. Is this code really so complex that it requires all of this management?

For number (2) it is not even really clear if something like commit-analyzer would be required. Bugfix branches should always end up increasing the patch version, merges from develop to master should always end up increasing the minor version, and from breaking we'd increase the major version.

If we go with (1), it would probably be useful to use commitizen with some kind of auto-changelog.

docs of fitting_functions

Need a way to inherit properties' docstrings so that fsigma, cutmask have docstrings for each fit.
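
A minimal sketch of one way to do this (a hypothetical class decorator; hmf may well settle on a different mechanism):

def inherit_docs(cls):
    """Class decorator: fill in missing property docstrings from parent classes."""
    for name, attr in list(vars(cls).items()):
        if isinstance(attr, property) and attr.__doc__ is None:
            for base in cls.__mro__[1:]:
                parent = getattr(base, name, None)
                if isinstance(parent, property) and parent.__doc__:
                    # rebuild the property with the inherited docstring
                    setattr(cls, name, property(attr.fget, attr.fset,
                                                attr.fdel, parent.__doc__))
                    break
    return cls

Decorating each fitting-function subclass with @inherit_docs would then give fsigma and cutmask the base-class documentation.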

growth_factor needs to be better

There are several aspects of the GrowthFactor class that could/should be better:

  • consistency in call signature between classes (esp. growth_factor function)
  • better growth rate implementation for general case.

HR4+ Fits

Add fits from HR4 and Manera2010.

[Question] Mass Conversion

I am a bit unsure of the best way to do mass conversion. I can think of two ways. The first (as currently implemented) is to take the user's input masses in whatever definition they specify, calculate the HMF at those masses under the definition the fitting function uses, and then convert the mass function by multiplying by dm_old/dm_new.

The second way would be to change the input masses themselves on input to the new definition, then just calculate the mass function there. This has the undesirable effect of needing to change the whole mass array every time a new definition is chosen, and might make some comparisons harder.

I guess each way should give the same result, right? That might make for an interesting test...
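
A toy sketch of the first approach (the arrays m_old and m_new are assumed to be the same haloes' masses tabulated in the two definitions):

import numpy as np

def convert_dndm(m_old, m_new, dndm_old):
    """dn/dm_new = dn/dm_old * dm_old/dm_new, applied along the tabulated relation."""
    jac = np.gradient(m_old, m_new)  # dm_old/dm_new
    return dndm_old * jac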

v3.1

It's time for 3.1. This should:

  • Implement flake8 and black checking.
  • Convert to pytest
  • Fix all current testing errors
  • Convert to setuptools_scm
  • Make conda packages available
  • Convert to Github Actions
  • Ensure PyPI uploads are made appropriately.

pre-calculate optimal order of parameters

The function get_best_param_order computes the best parameter order, but this should not be necessary at runtime. Every version should ship with pre-computed, saved parameter orders to read in.

Also, we need a higher degree of transparency about whether attributes are parameters or derived quantities.

note that pycamb is optional in docs

Hi Steven, I was checking out your hmf package, which looks excellent.
One thing is the basic usage from the documentation:

>>> from hmf import MassFunction
>>> hmf = MassFunction()
>>> mass_func = hmf.dndlnm

This fails unless (the optional) pycamb is installed, and it's not immediately clear that the missing pycamb is the problem.

You might put in a line stating that you need to substitute

>>> hmf = MassFunction(transfer_fit="EH")

if pycamb is not present.
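
A defensive sketch of that substitution (using the transfer_fit name from the old API quoted above):

from hmf import MassFunction

try:
    import pycamb  # optional dependency in hmf versions of this era
    hmf = MassFunction()  # default transfer fit requires pycamb
except ImportError:
    hmf = MassFunction(transfer_fit="EH")  # analytic Eisenstein & Hu fit

mass_func = hmf.dndlnm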

Input mass unit

Hi Steven,
I have two bug reports for https://github.com/steven-murray/hmf/blob/master/hmf/hmf.py
and a recommendation for further development.

  1. By comparing with the units used everywhere else, I think the input mass unit is log10(Msun/h), whereas the code says log10(Solar Masses).
  2. The parameters in the comments are inconsistent with the values used in the following part.
  3. I would recommend a feature to calculate the mass function at several redshifts at once (also calling CAMB just once).

-Jiayi

[Feature Req] Consider using nu instead of m as integration variable

Is your feature request related to a problem? Please describe.
When integrating over the mass function, it is usually more robust to integrate over nu instead of m. Otherwise, it can be
hard to ensure convergence at either the lower or upper end.

Describe the solution you'd like
This is by no means clear. Typically, if you want dndm, you're kinda thinking in mass-space (I want dndm at mass m). So there needs to be a way to specify the vector based on mass. But when integrating over mass (eg. ngtm or any halo model quantity) we care more about converging at either end, and it would be more useful, probably, to be able to specify in terms of nu.

Potentially, we want a new framework that specifically takes care of integration over mass/nu. This class would then take care of ngtm and friends. You'd specify the limits in nu to this class, and mass would be a derived quantity. There may be some API kinds of problems here. What if the user wants ngtm at the same masses as dndm, and they want dndm at log-spaced masses? They'd have to interpolate ngtm and evaluate it at the masses of a different framework? Is this even something that anyone would want?

Also, there's the issue of how to re-use the existing MassFunction framework in the extended framework, so we don't have to implement calculation of mass given nu. Would the mass function be a subframework? But then would we kind of circuit it so that you use it to get nu(m), then invert to get m(nu), then pass those masses back into MassFunction? Seems convoluted and brittle. Or do we just add the option to specify the mass vector via nu in MassFunction? This seems somewhat simpler.

Describe alternatives you've considered
A possible alternative (or even something to try anyway) is to use backend tables and splines for all quantities. We could then provide functions that interpolate over nu and m. We could control the range of m/nu in the background and ensure it's large enough. This also means the user could call the mass function at irregularly-spaced m, or a single m. This is attractive, but not necessarily efficient or simple. Splines are headaches at the best of times. It's not direct -- calculations are happening that the user doesn't necessarily need (what if they only ever wanted to calculate dndm for a few masses... why then do it for all mass?).
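
A rough sketch of the simpler option, integrating in nu-space by a change of variables (attribute names m, nu and dndm are assumed from recent hmf versions):

import numpy as np
from hmf import MassFunction

h = MassFunction()
m, nu, dndm = h.m, h.nu, h.dndm

# change of variables: n(>m) = integral of (dn/dm)(dm/dnu) dnu, on the tabulated nu grid
dm_dnu = np.gradient(m, nu)
integrand = dndm * dm_dnu
ngtm = np.array([np.trapz(integrand[i:], nu[i:]) for i in range(len(nu))])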

nan values at low masses

Is there a way to avoid "nan"s at halo masses below a few 10^12 M_Sun/h for non-integer redshifts such as 0.1 or 0.3?

Sample Test in Docs Fails

I downloaded the latest hmf version (using "pip install hmf") and tried to run the sample script given in the help documents (http://hmf.readthedocs.org/en/latest/):

from hmf import MassFunction
hmf = MassFunction()
mass_func = hmf.dndlnm

The last line returns a type error that I cannot seem to diagnose; output below.


TypeError Traceback (most recent call last)
in ()
1 from hmf import MassFunction
2 hmf = MassFunction()
----> 3 mass_func = hmf.dndlnm
4 mass_variance = hmf.sigma

/Library/Python/2.7/site-packages/hmf/_cache.pyc in _get_property(self)
50 # If recalc is constructed, and it needs to be updated, recalculate
51 if getattr(self, recalc).get(name, True):
---> 52 value = f(self)
53 setattr(self, prop, value)
54

/Library/Python/2.7/site-packages/hmf/hmf.pyc in dndlnm(self)
372 The differential mass function in terms of natural log of M, len=len(M) [units :math:h^3 Mpc^{-3}]
373 """
--> 374 return self.M * self.dndm
375
376 @cached_property("M", "dndm")

/Library/Python/2.7/site-packages/hmf/_cache.pyc in _get_property(self)
50 # If recalc is constructed, and it needs to be updated, recalculate
51 if getattr(self, recalc).get(name, True):
---> 52 value = f(self)
53 setattr(self, prop, value)
54

/Library/Python/2.7/site-packages/hmf/hmf.pyc in dndm(self)
335 """
336 if self.z2 is None: # #This is normally the case
--> 337 dndm = self.fsigma * self.mean_dens * np.abs(self._dlnsdlnm) / self.M ** 2
338 if isinstance(self._fit, Behroozi):
339 ngtm_tinker = self._gtm(dndm)

/Library/Python/2.7/site-packages/hmf/_cache.pyc in _get_property(self)
50 # If recalc is constructed, and it needs to be updated, recalculate
51 if getattr(self, recalc).get(name, True):
---> 52 value = f(self)
53 setattr(self, prop, value)
54

/Library/Python/2.7/site-packages/hmf/hmf.pyc in fsigma(self)
318 The multiplicity function, :math:f(\sigma), for mf_fit. len=len(M)
319 """
--> 320 fsigma = self._fit.fsigma(self.cut_fit)
321
322 if np.sum(np.isnan(fsigma)) > 0.8 * len(fsigma):

/Library/Python/2.7/site-packages/hmf/_cache.pyc in _get_property(self)
50 # If recalc is constructed, and it needs to be updated, recalculate
51 if getattr(self, recalc).get(name, True):
---> 52 value = f(self)
53 setattr(self, prop, value)
54

/Library/Python/2.7/site-packages/hmf/hmf.pyc in _fit(self)
233 fit = self.mf_fit(self)
234 except:
--> 235 fit = get_fit(self.mf_fit, self)
236 return fit
237

/Library/Python/2.7/site-packages/hmf/fitting_functions.pyc in get_fit(name, h)
15 """
16 try:
---> 17 return getattr(sys.modules[name], name)(h)
18 except AttributeError:
19 raise AttributeError(str(name) + " is not a valid FittingFunction class")

TypeError: getattr(): attribute name must be string

dlna is too large for functions of redshift

It seems that the default dlna=1e-2 (in the growth factor) creates small-scale ripples in redshift-dependent(?) quantities, whereas dlna=1e-4 eliminates them. Need to work out the best default, performance-wise, and add docs explaining this.
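
In the meantime, a user-side sketch (assuming dlna is exposed through growth_params, as in recent hmf versions):

from hmf import MassFunction

# finer ln(a) step for the growth-factor integral smooths the ripples
h = MassFunction(z=0.3, growth_params={"dlna": 1e-4})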

tutorial docs

There should be better tutorial docs, especially on how to extend hmf and use its framework.

Results incorrect at finite redshift and w!=-1

The growth function used is not valid for wCDM. It is calculated by numerically integrating Equation 8 of Lukic+07. That equation is exactly 7.77 from Dodelson. Dodelson's textbook's errata page (http://home.fnal.gov/~dodelson/errata.html) points to a memo which provides the coupled PDEs to be solved. For Delta_w = 0.5, this causes a ~20% error in halo number density at z=0.6.

Possible solutions include correcting the growth function calculation, or obtaining the full P(k,z) from CAMB.
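
A minimal sketch of the first suggested fix: numerically integrate the standard linear growth ODE, D'' + (3/a + E'/E) D' = 3 Om0 D / (2 a^5 E^2), with E^2(a) = Om0 a^-3 + Ode a^(-3(1+w)). This assumes smooth dark energy with constant w and a flat universe; function names are illustrative:

import numpy as np
from scipy.integrate import solve_ivp

def growth_wcdm(a_eval, om0=0.3, w=-1.0):
    """Linear growth factor D(a)/D(1) for flat wCDM with constant w."""
    ode = 1.0 - om0  # flat universe: dark-energy density today

    def E2(a):  # dimensionless H^2(a) / H0^2
        return om0 * a ** -3 + ode * a ** (-3.0 * (1.0 + w))

    def dlnE_da(a):
        dE2 = -3.0 * om0 * a ** -4 - 3.0 * (1.0 + w) * ode * a ** (-3.0 * (1.0 + w) - 1.0)
        return dE2 / (2.0 * E2(a))

    def rhs(a, y):  # y = [D, dD/da]
        D, Dp = y
        Dpp = -(3.0 / a + dlnE_da(a)) * Dp + 1.5 * om0 * D / (a ** 5 * E2(a))
        return [Dp, Dpp]

    a_grid = np.linspace(1e-3, 1.0, 2000)
    # deep in matter domination D ~ a, so start with D = a, dD/da = 1
    sol = solve_ivp(rhs, (a_grid[0], 1.0), [a_grid[0], 1.0],
                    t_eval=a_grid, rtol=1e-8, atol=1e-10)
    D = sol.y[0] / sol.y[0][-1]  # normalise to D(a=1) = 1
    return np.interp(a_eval, a_grid, D)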

Fix logging handlers

Currently giving no errors if the user hasn't explicitly set a handler, which is bad. Need to fix this.
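
One common library-side sketch (an assumption about the eventual fix, not hmf's actual change): attach a default StreamHandler only if the user hasn't configured one, so errors are never silently dropped:

import logging

logger = logging.getLogger("hmf")
if not logger.handlers:  # don't clobber a user-configured setup
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(name)s %(levelname)s :: %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.WARNING)  # surface errors/warnings by default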

[Feature Req] Support for loading external components via CLI/TOML

Is your feature request related to a problem? Please describe.
If using the Python interpreter, externally-defined models can be used. However, these are not available if using
the CLI, or loading from TOML. This should be added.

Describe the solution you'd like
Probably best to convert all base components into ABCs that can register submodules, and provide a keyword for importing modules in the TOML. Then registered plugins can be automatically used.

Describe alternatives you've considered
Could just use importlib directly to load in any string that is given. But this is clunkier and prone to error.

Default values for sigma8 and n in hmf/transfer.py

In the initialisation of the Transfer class in hmf/transfer.py, the old Planck13 parameters are still used as default values for n_s (n=0.9624) and sigma_8 (sigma_8=0.8344).
It would be reasonable to update these, since the initialisation of the Cosmo class already uses Planck15 as the default parameter set for all other cosmological parameters.

Planck 2015 parameters are:
n = 0.9667
sigma_8 = 0.8159
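
Until the defaults change, they can be overridden with the existing parameters:

from hmf import MassFunction

# override the stale Planck13 defaults with the Planck15 values quoted above
h = MassFunction(n=0.9667, sigma_8=0.8159)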

NaNs with Bhattacharya mass function

The Bhattacharya mass function at z=0 with default cosmological parameters produces NaNs in the output via the HMFcalc interface, even with "Restrict mass range to fitted range?" checked. Here's the corresponding parameters.txt:

File Created On: 2019-08-09 00:12:09.914416
With version 1.0.5 of HMFcalc 
And version 1.7.0 of hmf (backend) 

SETS OF PARAMETERS USED 
=====================================================
   PLANCK-SMT 
=====================================================
Mmax: 15.0 
tau: 0.0925 
Mmin: 10.0 
transfer_options: {'fname': u'transfers/PLANCK_transfer.dat'} 
cs2_lam: 1 
cut_fit: True 
t_cmb: 2.725 
dlog10m: 0.05 
omegam: 0.2715 
N_nu: 3.04 
omegan: 0.0 
transfer_fit: FromFile 
omegak: 0.0005 
omegac: 0.226 
omegab: 0.0455 
nz: None 
z_reion: 11.35 
N_nu_massive: 0.0 
omegav: 0.728 
_fsig_params: {} 
delta_wrt: mean 
lnk_min: -15.0 
delta_h: 200.0 
wdm_mass: None 
z2: None 
delta_c: 1.686 
sigma_8: 0.81 
dlnk: 0.05 
h: 0.704 
H0: 70.4 
lnk_max: 15.0 
n: 0.967 
w: -1 
z: 0.0 
y_he: 0.24 
mf_fit: Bhattacharya 

The mass function is NaN up to about 6.0e11 Msun/h. Above that, the values are no longer NaN, but they also don't agree with my own implementation of the Bhattacharya mass function (I'm not saying my implementation is correct; I was actually trying to validate it against HMFcalc when I found this problem).

How can I get CAMB

Hi,

I am trying to download and install your halo mass function code on my machine. I am wondering how I can obtain CAMB, or what the correct address for it is, in order to run this command:
python setup.py install [--get=www.address-where-camb-code-lives.org]

On the other hand, I don't have root permission; how can I install the code in that case?

Thanks in advance.

No handlers could be found for logger "hmf"

Good morning Steven,

I was wondering if you had encountered an error:


cumulative_mass_func = hmf.ngtm
No handlers could be found for logger "hmf"


before?

I was following your sample code from "http://hmf.readthedocs.org/en/latest/index.html"
In [1]: from hmf import MassFunction
In [2]: hmf = MassFunction()
In [3]: mass_func = hmf.dndlnm
In [4]: mass_variance = hmf.sigma
In [5]: cumulative_mass_func = hmf.ngtm
and it returned that error.

Thank you for any time you can spare to assist.

I have a readout of the install, but the install readout is too long for this submission : "There was an error creating your Issue: body is too long. "

Stephen Turnbull
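
For reference, this message is Python 2's stock warning when a library logs before any handler has been configured; it disappears once the root logger has one:

import logging

logging.basicConfig()  # attach a default handler so hmf's log messages display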

py.test error after intall

Hi Steven,
I installed pycamb and hmf (release 1.8.0) from your GitHub.
There were no errors during installation, but when I run py.test in your ./test dir, I get this failure:

=================================== FAILURES ===================================
___________________________ TestFcoll.test_fcolls[0] ___________________________

self = <test_fcoll.TestFcoll object at 0x7f8175af95d0>
pert = <hmf.hmf.MassFunction object at 0x7f8175af9850>, fit = 'PS'

def check_fcoll(self, pert, fit):
    if fit == "PS":
        anl = fcoll_PS(np.sqrt(pert.nu))
        num = pert.rho_gtm / pert.mean_dens

    elif fit == "Peacock":
        anl = fcoll_Peacock(np.sqrt(pert.nu))
        num = pert.rho_gtm / pert.mean_dens

    err = np.abs((num - anl) / anl)
    print np.max(err)
    print num / anl - 1
  assert np.max(err) < 0.05

E assert 1.1693592892307125 < 0.05
E + where 1.1693592892307125 = <function amax at 0x7f817e4ab050>(array([ 2.46884309e-01, 2.46889912e-01, 2.46894728e-01,\n 2.468987... 1.09377128e+00, 1.11834393e+00,\n 1.14353402e+00, 1.16935929e+00]))
E + where <function amax at 0x7f817e4ab050> = np.max

Do you think it is OK? What else should I do?

Thanks!
Shuo

Additional factor 1e6 in hmf/cosmo.py:105

The additional factor of 1e6 in hmf/cosmo.py:105 could be removed if the prepended M (mega) in the unit conversion from grams to solar masses is removed as well:

>>> u.solMass.to(u.kg)
1.9891e+30

>>> u.MsolMass.to(u.kg)
1.9891e+36

Mass Definition

Changing mass definition (delta_h) should change all mass functions, not just those with explicit dependence.

use pathos' pool

Instead of using native Pool, use Pathos' Pool, so that pickling works better.
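
A sketch of the intended swap (pathos serialises with dill, so bound methods and lambdas survive pickling):

# before: from multiprocessing import Pool
from pathos.multiprocessing import ProcessingPool as Pool

pool = Pool(nodes=4)
results = pool.map(lambda z: z ** 2, [0.0, 0.5, 1.0])  # a lambda pickles fine under dill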

more intelligent setting of k bounds

At this point, the bounds of k are set statically by the user, and by default are quite conservative.

The default suggests that the typical application requires that k-integrals approximate (0,\infty) [though they are left free for other applications which approximate finite box size etc.].

Given that the requirement is an approximation for a given (set of) integral(s), it would perhaps be more useful to regard k-bounds as being set dynamically from a given user expectation of accuracy at given masses. This would allow the most efficient gridding, and less confusion.
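
For reference, the static bounds are ordinary parameters, set via the existing lnk_min/lnk_max/dlnk options:

from hmf import MassFunction

# generous k grid so that k-integrals approximate (0, infinity)
h = MassFunction(lnk_min=-18.0, lnk_max=18.0, dlnk=0.05)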

Changing power spectrum

I am not sure if this is the right place for the question. If not, please direct me accordingly.
Q: I am trying to create halo mass functions using a custom power spectrum. I am curious how I may be able to change the power spectrum such that the halo mass function is calculated using the new power spectrum.

Thank you for your help.
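
One plausible route in recent versions is the FromFile transfer model discussed two issues below; the exact parameter passing shown here is an assumption (see that issue for the pitfalls):

from hmf import MassFunction

# tabulated transfer function (k, T(k)), e.g. from CAMB; the file name is illustrative
h = MassFunction(transfer_model="FromFile",
                 transfer_params={"fname": "my_transfer.dat"})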

Bug in hmf/hmf/transfer_models.py

Hi Steven,

There seems to be a typo in hmf/hmf/transfer_models.py at line 288 of class EH_BAO: np.exp(-(k * self.cosmo.h / kSilk) should be replaced by np.exp(-(k / kSilk), since k is in Mpc^{-1} units. With this change you can match the original code from Hu's website (http://background.uchicago.edu/~whu/transfer/tf_fit.c; you can find the corresponding line by searching for exp(-pow(k/k_silk,1.4))). Can you verify this please? Thanks!

Best,
Yin

Error when using a transfer function from file- __init__() got multiple values for keyword argument 'cosmo'

I'm trying to make a MassFunction object with a transfer function from a file output by CAMB. I had this working with an old version of hmf (not sure which; it's old enough that it doesn't have hmf.__version__, and pip is being uncooperative when I ask for information with pip show), but I get an error with the new version. The old code was

import hmf
options={}
options['fname']="/path/planck_for_hmf_transfer_out.dat"
mf=hmf.MassFunction(transfer_fit="FromFile", transfer_options=options, z=0)

The new code I'm using is

import hmf 
options={}
options['fname']="planck_for_hmf_transfer_out.dat"
hmf_cosmo = hmf.cosmo.get_cosmo("Planck15")
mf = hmf.MassFunction(z=0, transfer_model = hmf.transfer_models.FromFile, 
                      transfer_params={'cosmo': hmf_cosmo, 'model_parameters':options})

This executes without a problem and I can grab delta_c = mf.delta_c, but when I ask for mf.radii, I get an error essentially complaining that I'm trying to give it the cosmology twice. I've attached the error message (from IPython) as with_cosmo.txt.

If I run instead

mf = hmf.MassFunction(z=0, transfer_model = hmf.transfer_models.FromFile, 
                      transfer_params={'model_parameters':options})

it also doesn't work, but gives a different error, which I've attached as without_cosmo.txt. Passing cosmo as a keyword argument to MassFunction() doesn't work because it doesn't expect cosmo as an argument. I've attached my CAMB output file as well (I had to change the file from .dat to .txt for GitHub to take it, but it's plain text either way, so I think it should be ok).

I'm not sure if this is really a bug or if I'm just doing it wrong; I couldn't find much in the documentation about how to read a transfer function from a file. Any help is appreciated. In case you need to know about dependencies, I've attached the info on the conda environment I'm running the new code in.

with_cosmo.txt
without_cosmo.txt
planck_for_hmf_transfer_out.txt
conda_environment_updated_hmf.txt

hmf_integral_gtm seems brittle

Doing the integral up to a hardcoded 10**18 is probably bad, and doesn't really make sense at higher redshift.

Since a spline is constructed anyway, the integration should use it instead.
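
A sketch of the suggested spline route (names are illustrative; hmf's actual helper may differ): fit a spline to dn/dlnm and integrate it analytically up to the top of the tabulated grid rather than a fixed 10**18:

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline as Spline

def ngtm_from_spline(m, dndm):
    """n(>m) by integrating a spline of dn/dlnm over the tabulated range."""
    lnm = np.log(m)
    spl = Spline(lnm, m * dndm, k=3)  # dn/dlnm = m * dn/dm
    return np.array([spl.integral(x, lnm[-1]) for x in lnm])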
