nnpdf / eko

Evolution Kernel Operators

Home Page: https://eko.readthedocs.io

License: GNU General Public License v3.0

Python 84.05% Makefile 0.01% TeX 0.63% Mathematica 7.82% MATLAB 2.51% Shell 0.09% Gnuplot 1.73% C++ 0.58% Nix 0.05% Rust 2.41% HTML 0.13%
physics hep-ph high-energy-physics python

eko's Introduction

EKO


EKO is a Python module to solve the DGLAP equations in N-space in terms of Evolution Kernel Operators in x-space.

Installation

EKO is available via

  • PyPI:
pip install eko
  • conda-forge:
conda install eko

Development

If you want to install from source you can run

git clone git@github.com:N3PDF/eko.git
cd eko
poetry install

To set up poetry and other development tools, see the Contribution Guidelines.

Documentation

  • The documentation is available at https://eko.readthedocs.io
  • To build the documentation from source, install graphviz and, in addition to the installation commands above, run
poe docs

Tests and benchmarks

  • To run the unit tests you can do
poe tests
  • Benchmarks of specific parts of the code, such as the running of the strong coupling or of the MSbar masses, can be run with
poe bench
  • The complete list of benchmarks against external codes is available through ekomark (see its documentation)

Citation policy

When using our code please cite

  • our DOI
  • our paper: arXiv

Contributing

  • Your feedback is welcome! If you want to report a (possible) bug or want to ask for a new feature, please raise a GitHub issue
  • If you need help with installation, usage, or anything related, feel free to open a new discussion in the "Support" section
  • Please follow our Code of Conduct and read the Contribution Guidelines

eko's People

Contributors

adrianneschauss, alecandido, andreab1997, dependabot[bot], felixhekhorn, giacomomagni, niclaurenti, pre-commit-ci[bot], scarlehoff, scarrazza, t7phy, tgiani


eko's Issues

Ekomark & Ekobox true unit tests

Unit tests are much faster to run, and they are extremely useful to spot quickly and in a precise way if there is anything broken.

ekobox (and ekomark, though see [1] for the latter) has no dedicated unit tests at the moment, so they should be written, after #79.

[1]: ekomark is quite coupled to banana, and in general really dedicated to benchmarks, so it might be harder to unit test

scale variations

allow renormalization scale and factorization scale to be varied independently

Minimize the amount of initial imports

Motivation: As soon as you install eko, you'll need to wait a while on first import. This is needed for the eko application, but unnecessary and completely counter-intuitive for the eko library.

At the moment importing eko means importing eko.runner, and also eko.output.

Then, as one would expect, importing runner means importing everything (see the dependency graph).

How to improve

From my point of view:

  • there is no way to improve on eko.runner: it's quite natural that it uses all (or almost all) of the library
  • but I'd say that we should not wait on the compilation of all the anomalous dimensions and so on, if you only need interpolation or strong coupling (or even masses, basis rotation, ...)

In particular, having to wait every time you install anew is already a bit unpleasant in yadism, but even more so in banana or runcards.

In practice

I'd say we just need to:

  • shuffle back run_dglap inside eko.runner
  • at this point we can keep the main __init__.py just for the version (no need for a separate module), as in the sketch below
  • update related docs
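A minimal sketch of the result (the version string and the call shown in the comment are placeholders, not the actual eko values):

# src/eko/__init__.py - nothing but the version, so that `import eko` stays cheap
__version__ = "0.1.0"  # placeholder value

# the heavy machinery is pulled in only when it is explicitly requested, e.g.
#     from eko import runner
#     output = runner.run_dglap(theory_card, operators_card)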

ekobox plots

  • import plot utils from a number of sources:
    • ekomark
    • IC paper
    • eko paper
  • always use eko paper style for plots
  • extend the mechanism of plot runcards:
    • functions should take dictionaries as inputs, not file paths (more general, wrappers might be provided, but they would be extremely thin)
    • provide as many defaults as possible: in principle one should give only a PDF set as input (better not to choose a default here, to remain a bit agnostic) and a plot should be produced
    • provide example runcards as a library (similar to matplotlib.style.library)
    • we should choose meaningful names for plots

Lazy loading for rust

We'd like PineAPPL to be able to consume an EKO one Q2 at a time as well, since otherwise evolving grids that request a huge number of Q2 points would require splitting the grid.

In order to do this, the easiest is to directly provide and maintain a Rust crate that is able to manage the EKO output format.
Indeed, PyO3 would make it possible to run Python code from Rust, but it is more of a handle on the interpreter itself than proper bindings in this direction (it might be used for this, but it can be painful).

So, the easiest thing is to make a standalone crate in this repository (unfortunately eko is already lost as a name, but we can still use something like get eko, i.e. geko).
In order to do this, we don't really need many ingredients:

  • a tar library, for which the one linked is an obvious candidate
  • a yaml library, that is another easy choice
  • an npy library, and here it becomes difficult; the alternatives are
    1. npyz, not much maintained, but it should be complete enough
    2. ndarray-npy, slightly better maintained recently, but it does not seem to have a complete feature set
    3. npy, which is mentioned by the previous one and which option 1. claims to be a fork of, but it looks untouched since 2018

#242 loader

In light of the release of the new runner, and the associated new internal format, I'd speed up the implementation of a first version of the loader.

First iteration (strictly required):

  • metadata loader
  • operators loader
  • operators headers syncer

Second iteration (nice-to-have):

  • load runcards

Alpha_s evolution bugged

At NLO the alpha_s object does not recover the reference value:

sc = StrongCoupling(..., alpha_ref, scale_ref, ...)
assert sc.a_s(scale_ref) == a_ref # this seems to fail

Added conda package

Added a conda recipe to conda-forge

conda-forge/staged-recipes#18289

Looks straightforward enough, so I guess it will be part of conda-forge as soon as it is reviewed (I haven't pinged anyone yet because I'm waiting for the tests; I only ran the Linux one in docker on my own computer).

Speed up first import

Compilation is needed only once per installation; nevertheless it is a bit annoying that the first time you try to run you don't get any feedback for a while.

There is an easy way to speed up compilation, since at the moment we are doing it on a single core. We can just add to the decorators:
https://github.com/N3PDF/eko/blob/47ad7d011bd853a86fe2cf7239931e28addb958b/src/eko/kernels/evolution_integrals.py#L18
the simple kwarg parallel=True.

https://numba.pydata.org/numba-doc/latest/user/threading-layer.html

If a code base does not use the threading or multiprocessing modules (or any other sort of parallelism) the defaults for the threading layer that ship with Numba will work well, no further action is required!
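For reference, this is what adding the kwarg looks like on a toy function (a stand-in, not the actual eko integrand):

import numba as nb
import numpy as np

@nb.njit(parallel=True)
def toy_kernel(n):
    # nb.prange lets numba distribute the loop over the available threads
    total = 0.0
    for i in nb.prange(n):
        total += np.exp(-i / n)
    return total

toy_kernel(1000)  # the first call triggers the compilation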

Travis does not run

It seems Travis always fails - this is very likely due to the test_dglap.py tests, which are indeed very time consuming; can we increase the time limit somehow? or what else can we do? (I set them to skip on my PC for exactly this reason ;-) )

Versioning data format

Since:

  • we're now putting a lot of effort in the output format (see #105)
  • multiple libraries will be able to read it (see #97)

we should definitely start to assign versions to it (see NNPDF/pineappl#118).

Versions should be:

  • written in all the files
  • easy to query (as easy as possible)
  • not overwritten by a library that generates new ones (even if it's able to read them)

Ideally, instead of reading old versions with a new version of the library (that might generate incompatibilities, or severely limit upgrades), the form of backward compatibility should be to allow the new library to convert an old format to the new one, but refuse direct reading of an old format (that might have undefined behavior in general).
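A minimal sketch of the reading side (file layout, key names and version numbers are all assumptions):

import yaml

FORMAT_VERSION = 2  # the version this library writes

def load_metadata(path):
    with open(path) as fd:
        meta = yaml.safe_load(fd)
    version = meta.get("version", 0)
    if version != FORMAT_VERSION:
        # refuse direct reading of an old (or newer) format: ask for an explicit conversion
        raise RuntimeError(
            f"output written with format v{version}, expected v{FORMAT_VERSION}; "
            "convert it first instead of reading it directly"
        )
    return meta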

Improve EKOs product

At the moment, the product is implemented in ekobox.utils, but it is rather naive:

  1. it does not update the metadata, which remains that of the initial object (since it is cloned)
  2. it does not do a proper job when paths collide: a -> b -> c is not used when a -> c already existed in the LHS

In order to address point 1. a stable output is needed (as requested in #77 and sketched in #105).

The second point is rather subtle, since one might think that the two are the same, and one might want to use the already existing one, so as not to accumulate more numerical error.
This is true only if the theory is the exact same, or somehow compatible. Otherwise, playing with the thresholds for nf, the two objects might be really distinct.

Since solving the problem of point 2. in general might be extremely complicated, we can keep doing what we are doing now, but implement strong consistency checks on the theories related to the operators.
We can still allow for some differences (e.g. in evolution method), but we need to nullify all the entries in the metadata that have no single consistent value (such that you have to read the components' metadata).

  • add proper checks
  • compute proper metadata for the product (including null ones), as sketched below
  • log operations into history.yaml and runcards to {n+1}.theory.yaml and {n+1}.operator.yaml
  • add product subcommand to the CLI
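A rough sketch of the metadata merge mentioned above (function name and dictionary layout are assumptions, not the actual ekobox code):

def product_metadata(meta_a, meta_b):
    """Entries with a single consistent value are kept, all the others are
    nullified, so the user has to go back to the components' metadata."""
    merged = {}
    for key in set(meta_a) | set(meta_b):
        value_a, value_b = meta_a.get(key), meta_b.get(key)
        merged[key] = value_a if value_a == value_b else None
    return merged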

Cuda in eko

As we discussed the other day (/cc @felixhekhorn @alecandido) I've tried to run eko on a GPU. It doesn't work out of the box right now, but the only things stopping the integrand from being a cuda kernel are the cern_polygamma function and the scipy zeta function. The rest seems to compile ok (besides some numpy functions that have to be massaged a bit, but nothing too dramatic).

If we wanted to run eko on a GPU we would just have to create a kernel for some of these, but nothing too complicated.

PS: then there are some problems that should not be there, such as math.gamma or np.exp(complex), which in theory is supported but, worst-case scenario, can also be turned into a kernel or changed to things like cmath.exp, which work fine.

`operator.py` name clash

Problem

If you try to open a plain python interpreter inside src/eko it crashes badly, and everything that uses python crashes as well.

Example stack trace:

Fatal Python error: initsite: Failed to import the site module
Traceback (most recent call last):
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/site.py", line 769, in <module>
    main()
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/site.py", line 747, in main
    paths_in_sys = addusersitepackages(paths_in_sys)
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/site.py", line 345, in addusersitepackages
    addsitedir(USER_SITE, known_paths)
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/site.py", line 202, in addsitedir
    addpackage(sitedir, name, known_paths)
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/site.py", line 170, in addpackage
    exec(line)
  File "<string>", line 1, in <module>
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/importlib/util.py", line 14, in <module>
    from contextlib import contextmanager
  File "/usr/lib/python3.7/contextlib.py", line 5, in <module>
    from collections import deque
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/collections/__init__.py", line 21, in <module>
    from operator import itemgetter as _itemgetter, eq as _eq
  File "/home/alessandro/projects/N3PDF/eko/src/eko/operator.py", line 9, in <module>
    import logging
  File "/usr/lib/python3.7/logging/__init__.py", line 26, in <module>
    import sys, os, time, io, traceback, warnings, weakref, collections.abc
  File "/usr/lib/python3.7/traceback.py", line 5, in <module>
    import linecache
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/linecache.py", line 8, in <module>
    import functools
  File "/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/functools.py", line 21, in <module>
    from collections import namedtuple
ImportError: cannot import name 'namedtuple' from 'collections' (/home/alessandro/projects/N3PDF/eko/env/lib/python3.7/collections/__init__.py)

I found out that it is because there is a name clash with a python standard library module (which for some reason is not protected in any way against this type of bug).

Conclusion

I really suggest changing the name of the module, because it is a potential source of unexpected bugs.

polarized setup

as I myself come from a polarized environment, I'd really like to sell it to this community too - meaning I'd like to implement it - it should be easy after all ... but for that very reason, we could also leave this to someone external, to test this claimed "easiness"

Documentation

concerning NNPDF/nnpdf#656
my naive idea was to write everything in here and then link the NNPDF wiki to our documentation, because after all the implementation is here - is this possible @scarrazza? or do I have to do a long version here and a short version there (or the other way round?)

NLO implementation

  • NLO splitting functions
  • NLO alpha_s -> different solvers become available
  • expanded vs. exponentiated solution

Doped PDF sets

JR on 20.08.21 08:34:

It is true that once EKO becomes public producing the doped PDF sets should be trivial

SF on 20.08.21 09:03:

For sure, EKO does allow for the number of flavours to be different in the beta function and in the evolution equation.

On the other hand, we never delivered doped sets in previous releases, and we don't even have an official paper on doped sets. In fact, the most official thing we have is probably Davide Napoletano's master thesis on the NNPDF website! [1]

So I would not worry too much about them.

If someone really needs them, we will provide them upon request.

[1] http://nnpdf.mi.infn.it/wp-content/uploads/2017/10/NapoletanoTesi.pdf

also discussed here, I guess: https://inspirehep.net/literature/1393268

Recover plot operators

Operators are not plotted correctly when benchmarking using the sandbox. The generated pdf file has a lot of blank pages and some of the plotted values are NaN.

Reproduce

  1. change plot_operator to True in sandbox.py
  2. Run python sandbox.py
  3. Look at the output in ...eko/benchmarks/data/apfel_bench/

Implement basic LO QCD solution

Steps:

  • encode the LO splitting functions
  • implement the beta function
  • keep the alpha_s running fixed for the time being
  • solve the linear system for the singlet and non-singlet sectors in N space
  • construct the final kernel operator in N space
  • invert the kernel to x space
  • benchmark results against APFEL

Evolved PDF object

At the moment, the only output of eko is the operator itself, and for the main package that is even fine.

In order to have a usable evolved and interpolated object, you need to build/load the operator, apply it, dump the PDF set in lhapdf format, and load again with lhapdf.

It is nice to have the lhapdf interface, but I'd say that it should not be mandatory in order to use an evolved set.

Proposal

Let's make:

  1. a shallow clone of lhapdf.PDFset (more or less as we are doing for ToyLH in banana)
  2. a function that takes an input PDF and an operator, and creates this object (so a constructor for the object above); a rough sketch follows below

The benefit will be that people can use eko alone, without the need for any extra non-trivial dependency.
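A rough sketch of what such an object could look like (all names are assumptions, and the interface only loosely mimics lhapdf):

import numpy as np

class EvolvedPDF:
    """Shallow, lhapdf-free container for an evolved set."""

    def __init__(self, xgrid, q2grid, values):
        # `values` maps a PDG id to an array of shape (len(q2grid), len(xgrid))
        self.xgrid = np.asarray(xgrid)
        self.q2grid = np.asarray(q2grid)
        self.values = values

    def xfxQ2(self, pid, x, q2):
        # nearest-point lookup, just to fix the interface;
        # a real implementation would interpolate on both grids
        ix = int(np.abs(self.xgrid - x).argmin())
        iq2 = int(np.abs(self.q2grid - q2).argmin())
        return self.values[pid][iq2, ix]

The constructor of point 2 would then apply the operator to the input PDF and fill `values` with the result.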

Of course, an extra method could be:

  • dump the object in lhapdf format

but this should be fairly simple, reusing the one that has already been implemented (the class should simply retain all the information that is currently consumed to dump).

Plots on website

A huge number of plots can be produced to extensively document eko options.

  • all solutions
  • several matching options
  • scale variations
  • all benchmarks
  • ...

The paper is not the proper place to show all of them, and neither is the documentation.

So a good option is to put them on the eko website (i.e. GitHub Pages).

Binary output

Since we're going to get huge output objects (in particular considering huge Q2 grids; currently tested only with a single Q2), instead of dumping a json/yaml file maybe it would be better to move towards a json/yaml header plus a binary blob (with instructions to decode it, like "row-major, (10, 10, 100, 100), float64").

Even better: dump with numpy, and the description may simply tell the method to use... (check how to do it with np.save)
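A sketch of the numpy route (the shape is the one from the example above):

import io
import numpy as np

operator = np.random.rand(10, 10, 100, 100)

buf = io.BytesIO()
np.save(buf, operator)  # the .npy header already records "row-major, (10, 10, 100, 100), float64"
blob = buf.getvalue()   # binary blob that can sit next to (or after) a small yaml/json header

recovered = np.load(io.BytesIO(blob))
assert np.array_equal(operator, recovered)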

Call anomalous mass dimension \gamma_m

To avoid confusion, let's call the anomalous mass dimension \gamma_m (in contrast to the anomalous PDF dimension that we keep at \gamma). This has to be changed:

  • in the docs
  • in the paper
  • in the code (if used anywhere ... I think, gamma function and anomalous dimension are fine as names ...)

genpdf error

calling genpdf generate -p NNPDF40_nnlo_as_01180 n4uonly 2 we get

AlphaS_Lambda5: 0.342207, AlphaS_Lambda5: 0.239
AlphaS_MZ: 0.1180024
AlphaS_OrderQCD: 2
[...]

which is not readable by LHAPDF

CI Benchmarks

No need to run them in parallel here, they'll take what they'll take (and I'm not sure the CI has multiple cores available).

numba everything up

I think the code is in a good enough state to start the numbification, which will render it much faster.

This will help a lot with debugging, trying new interpolators, etc, etc

@felixhekhorn I will start doing this in parallel to your PR so you can merge it and get all the goodies of a faster code. I'll keep rebasing whatever you do in #9 so that you don't have to worry about anything breaking.

Q2 jobs

The moment we close #96, we can start splitting jobs for each separate Q2.

At the end:

  • one job has to compute thresholds
  • all the others will be just one per Q2 value

And then we rejoin everything together, once we have the two pieces (thresholds and partial).

Output format revamped

In the eko presentation the topic of the output format came up again; it was already faced in #60.

The request was to have a more standard format, and at the same time to split the metadata from the actual data (@Zaharid).

We would like to accomplish the first one (the choice of yaml was to have a broadly supported format), and we don't dislike the second.
Nevertheless, when you combine these requests with our strict requirements, i.e.:

  • we want to store a multidimensional array (rank 4 or rank 5)
  • we want to store it in an as-minimal-as-possible way

you end up with a particularly restrictive range of options.

The proposal was to use some broadly supported format like Apache Parquet, very common in the big data community.
These and the other database-inherited formats are not suitable for our task, since they are optimized for tabular data, and so intrinsically two-dimensional (moreover, a few of the key points of Parquet are being appendable, readable in chunks, and columnar, and we don't get benefits from any of them).

The formats for multidimensional data available, broadly supported by the community (especially in science) are:

  • NetCDF, which is a general format but has an especially good library in python for managing the in-memory counterpart (i.e. xarray, closely connected to numpy and inspired by pandas)
  • HDF5, on which the former one is based, with its own python API

The first one is more specialized and preferable in general, but we don't need it either, because it supports so many features, while our goal is just to store a bare array of floats.

That's why our proposal is just to use the .npy format, coming from the numpy library, and to compress it ourselves (using lz4, as is done for PineAPPL grids); it has a very simple API in python (i.e. the numpy.save function and its partner numpy.load).

There also exists an implementation of an API in C++ (or rather, a couple of them), consisting of a very small codebase.
Many languages can interface directly with python (like Julia), and some explicitly support numpy with their own libraries (like ndarray), so we would go with the numpy solution, since it is going to be very flexible and at the same time the minimal thing required.
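A minimal sketch of the .npy plus lz4 combination (file name and operator shape are made up):

import io
import lz4.frame
import numpy as np

op = np.random.rand(14, 50, 14, 50)  # a rank-4 operator with made-up sizes

buf = io.BytesIO()
np.save(buf, op)  # numpy.save writes the self-describing .npy format
with open("operator.npy.lz4", "wb") as fd:
    fd.write(lz4.frame.compress(buf.getvalue()))  # compress it ourselves, as done for PineAPPL grids

with open("operator.npy.lz4", "rb") as fd:
    op_back = np.load(io.BytesIO(lz4.frame.decompress(fd.read())))  # numpy.load is the partner
assert np.array_equal(op, op_back)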

Memory saving output

We realized that most eko operations do not depend on multiple Q2 at the same time.

The full OperatorGrid is a rank-5 tensor; by using only one Q2 at a time we reduce the problem to a rank-4 tensor, with much less memory consumption.

In order to get a usable and flexible structure, a few capabilities are needed for the new structure:

  • separate Q2 storage: it will be {q2}.npy
  • lazy loading: load one Q2 (or a subset) at a time, and drop them once consumed (see the sketch below)
    • even the subset might be useful, for cases in which the consumer only accepts a rank-5 tensor
  • merge separately computed (but input compatible) outputs
  • split a single output into multiple ones
    • not strictly needed, but it's dual to the former one, and so nice to have

This new structure will be the upgraded version of the current Output object.
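A rough sketch of the lazy access pattern (the file layout and the names are assumptions):

import pathlib
import numpy as np

class LazyOperators:
    """Load one {q2}.npy at a time instead of keeping the whole rank-5 tensor in memory."""

    def __init__(self, folder):
        self.folder = pathlib.Path(folder)

    def q2grid(self):
        return sorted(float(path.stem) for path in self.folder.glob("*.npy"))

    def __getitem__(self, q2):
        # a single rank-4 operator is read only when requested,
        # and can be dropped as soon as the consumer is done with it
        return np.load(self.folder / f"{q2}.npy")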

Moreover, a few more things might be implemented as related, and later used to support separate computation of Q2 elements:

  • replace/upgrade OperatorGrid
    • we need a manager for the computation, but it should not hold the data, which has to be dumped as soon as possible
    • the easiest is to have the current OperatorGrid to hold the reference to a new Output object, and store everything in there
    • everything will include threshold operators, and partial Q2 results (those obtained before combining with threshold operators), together with full results
  • support separately threshold operators and partial Q2 in Output
    • they are also dumped on disk, with their own names: thresholds.npz (containing all the threshold elements) and {q2}.part.npy

The idea is that an object supporting these features can be computed separately, in a completely independent way (one process for the thresholds, plus one for each Q2, for example), and then merged together.
In order to make it easier to merge and compute the final one:

  • the thresholds.npz is never removed from the saved output
    • if other {q2}.part.npy come later, they can always be consumed
    • unless explicitly stated: provide an optimize() or clean() method, to get rid of it
  • everything can be merged together, with or without thresholds: it is simply a matter of checking the input compatibility and adding the arrays to the archive
  • if it contains both thresholds and partial objects, compute final ones
    • provide a combine() method
    • partial objects are removed after combination

Unify language on matching

the preferred name for matching in NNPDF seems to be "matching scale" (instead of "matching threshold") - we should check our documentation and adjust where needed

MSbar Masses Scale Variations

In principle, even the running of the masses is at fixed order, and the contribution from missing higher orders can be estimated through the running that follows from the RGE.

This will estimate MHOU in the masses' anomalous dimension, and it is yet another different scale from renormalization and factorization scales.

It might be argued (citation needed) that, if not varied on its own, it should be kept the same as the renormalization scale.

Vectorial integration

The problem

In order to compute the singlet operators eko needs four elements: S_qq, S_qg, S_gq, S_gg (in get_ker(k, l) only the k, l element is provided).

These elements will depend on the anomalous dimensions, but in order to compute each of them all the 4 anomalous dimensions should be evaluated, cf. get_gamma_singlet_0, called by get_Eigensystem_gamma_singlet_0.

But since the eigenvalue decomposition has to be performed N by N (in Mellin space), it is not possible to first integrate the 4 anomalous dimensions and then rotate to the operators; instead, a single evaluation for a specific N will involve the decomposition, and so the calculation of all 4 anomalous dimensions.

What happens then is that for a given N the anomalous dimensions are re-evaluated for each of the 4 integrations of S_qq, S_qg, S_gq, S_gg, and in the integration process evaluating the integrand takes most of the time (almost the total amount, we believe).

A significant improvement would be to integrate the four elements simultaneously: indeed, when the singlet-gluon sector is relevant (in fact: always) all four elements are needed, and in this way all four elements produced by get_Eigensystem_gamma_singlet_0 are used, instead of taking one and discarding 3 every time.

Considering also the time required for the non-singlet calculation this will lead to a speed up naïvely estimated by a factor of 3.4=(4*4+1)/(4+1), independently of the complexity of the kernel (LO, NLO, NNLO, ... with whatever method used).

Naïve meaning
Even if no other error is present, this estimate is rough mainly because of the different complexity of the integration of the different elements.
Indeed, if some relevant cancellation smooths some of the elements, their convergence will be much faster than for the other elements, because fewer evaluations are needed.
However, in the worst case there is no gain and no loss in performance: the time is spent mostly in integrating the most complex element, but all the evaluations done are already done by the current implementation, and in this way the other 3 entries will be used to improve the precision of the simplest elements.

Useless solution

The scipy.integrate subpackage has a function called scipy.integrate.quad_vec, but it is currently implemented in python (while scipy.integrate.quad, the one currently used, is a binding to the QUADPACK fortran implementation).

Since the latter is compiled, switching to quad_vec is likely to bring no performance benefit (its interface is nevertheless sketched below).
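Regardless of where it is implemented, the vector-valued interface is the relevant point; with a stand-in integrand (not the eko kernel) it looks like this:

import numpy as np
from scipy.integrate import quad_vec

def singlet_integrand(u):
    # the expensive, shared part (anomalous dimensions plus decomposition) is evaluated once ...
    shared = np.exp(-u)
    # ... and all four entries S_qq, S_qg, S_gq, S_gg are returned together
    return shared * np.array([[1.0, 0.5], [0.3, 2.0]])

result, error = quad_vec(singlet_integrand, 0.0, 10.0)  # `result` is the 2x2 matrix of integrals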

Proposed solutions

  • drop scipy.integrate.quad and fall back to a sum
    • this will reduce the speed up, but it is quick to implement and benchmark
  • implement scipy.integrate.quad_vec in a C/Fortran compiled module of eko
    • this will get the theoretical speed up, but we will increase the complexity of package management (so we will lose in development time)
  • implement scipy.integrate.quad_vec in a numba compiled function, inside eko
    • in principle can obtain the same speed of the former, but it will be easier to manage and distribute
  • implement scipy.integrate.quad_vec in C/Fortran and pull request into scipy
    • slower development for the function itself (not a familiar environment, external people slightly involved), optimal solution for not polluting eko and keeping the delegation to an external utility

Implement entry point test

Steps:

  • copy a LO theory from NNPDF
  • create a python test as dict
  • send this dict to a run_dglap function

If someone disagrees, please complain now.

QED roadmap

Let's actually make an issue to summarize the QED development (i.e. the actual implementation of QED evolution in EKO), while the old #23 will keep serving the purpose of collecting ideas and related items.

  • unified basis implementation #89
  • anomalous dimensions implementation #111
  • running of the couplings #115
    • specify orders in the runcards, proposal PTO: n -> order: {qcd: ns, qed: n} such that we have an unambiguous PineAPPL compatible name (instead of a weird acronym...)
  • DGLAP solutions #135
  • and document them #211
  • matching conditions upgrade
    • even if we have no QED matching conditions (to be verified at level of logs of mass over matching scale ratios) we still have to implement the existing ones in the unified basis
    • the easiest is to just apply the rotation after defining them in QCD evolution basis

Implement travis pipeline

Steps:

  • create a python setup.py
  • create a conda-recipe
  • enable travis for linux and osx
  • enable tests

Modules


  • Computational backend
    • Math functions
    • Hardware acceleration

In this module we restrict ourselves to C-style programming; this is due to the limitations imposed by numba and OpenCL.


  • Evolution module
    • DGLAP solver
    • Mellin Inversion

  • API
    • Runcard parser
    • Output tensor

Renaming output does not work

Describe the bug
Renaming the output file makes it unusable

To Reproduce
Steps to reproduce the behavior:

  1. save an output to a.tar
  2. rename a.tar to b.tar
  3. try to load b.tar
  4. See error: "... metadata.yaml not found ..."

Expected behavior
it should still work; since EKOs are expensive we want to be able to reuse them (if e.g. by mistake the name was miscomputed)

Additional context
this is coming from here;
maybe, if this fails, we should check whether there is a single directory and choose that one


  • add unit test for the future: dump -> rename -> attempt to load (see the sketch below)
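A sketch of such a regression test (pytest style; the fixture and the dump/load method names are assumptions):

import shutil

def test_load_after_rename(tmp_path, eko_output):
    # `eko_output` stands for any freshly computed output object (hypothetical fixture)
    src, dst = tmp_path / "a.tar", tmp_path / "b.tar"
    eko_output.dump_tar(src)
    shutil.move(str(src), str(dst))
    reloaded = type(eko_output).load_tar(dst)  # must not depend on the original file name
    assert reloaded is not None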

Intrinsic Charm

Seems like the presence of intrinsic charm will also affect the evolution.

So in apfel/Evolution/initPDFs.f, around line 238:

if(IntrinsicCharm.and.Nf_FF.lt.4)then
   do alpha=0,nin(igrid)
      do ifl=5,6
         f0ph(ifl,alpha)  = 0d0
         f0ph(-ifl,alpha) = 0d0
      enddo
      f0lep(3,alpha)  = 0d0
      f0lep(-3,alpha) = 0d0
   enddo

Exterminate `from_dict`

At the moment, we have from_dict methods all over the place, which expect a runcard with all the elements they need, without specifying which ones.
At LO they are recapped by runner (where most of these methods are used):
https://github.com/N3PDF/eko/blob/813bd656159cde5c5d176b54ca99139e2280cb44/src/eko/runner.py#L44-L55

The information that is thus passed down is very unclear, and changes in the runcard format might affect silently all these methods.

Since they always use a small number of parameters, I propose a close replacement, based on the idea of sections:

bfd = interpolation.InterpolatorDispatcher(**operators_card["interpolation"])

This is somewhat of a trade-off: it is not clear from the call which parameters are used (but it is clear that only one section is affected), and in any case they can be determined from the callee (the signature of the function called).

Optimize EKO in flavor

We can optimize the final EKO object for cases in which it is known that not all flavors are needed.

This corresponds to a similar optimization done at the level of PineAPPL FkTables: NNPDF/pineappl#79.
