MCNNTUNES documentation is available at mcnntunes.readthedocs.io.
If you use the package please cite the following references:
Tuning Monte Carlo generators with machine learning.
Home Page: https://mcnntunes.readthedocs.io/
License: GNU General Public License v3.0
This step is required in order to build a consistent dataset of MC distributions and parameter values, both for a reference tune performed with standard tools (e.g. Professor) and for future machine learning techniques which may require cross-validation.
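As a rough illustration of the cross-validation such a dataset should support, here is a minimal sketch that splits the MC runs into folds (the file names and array layout are hypothetical, not the actual mcnntunes interface):

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical arrays, one row per MC run (layout assumed for illustration):
# sampled parameter values and the corresponding binned MC distributions.
params = np.load("params.npy")          # shape (n_runs, n_parameters)
histograms = np.load("histograms.npy")  # shape (n_runs, n_bins)

# 5-fold cross-validation over the MC runs.
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(params):
    x_train, y_train = params[train_idx], histograms[train_idx]
    x_val, y_val = params[val_idx], histograms[val_idx]
    # ... fit the parametric model on the training folds, validate on the held-out fold
```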
This step requires the implementation of:
We should take #1 and apply Professor (https://professor.hepforge.org/) to it.
Compare the output of the current mcnntune code to #2.
We have to check what CMA-ES delivers as the nominal error for each parameter. Compared to Professor, the mcnntune errors are much smaller.
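For reference, a minimal sketch of how the per-parameter standard deviations can be read out from the cma package after a run (the chi2 function and starting point are placeholders, not the actual mcnntunes objective):

```python
import cma

# Placeholder chi2 function, for illustration only.
def chi2(params):
    return sum((p - 1.0) ** 2 for p in params)

# Placeholder starting point and initial step size.
es = cma.CMAEvolutionStrategy([1.5, 0.13, 1.0], 0.1)
es.optimize(chi2)

best_params = es.result.xbest   # best parameter values found
param_stds = es.result.stds     # per-parameter standard deviations at termination
print(best_params, param_stds)
```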
In order to make this repo public we have to:
We can follow the structure of Qibo. @mlazzarin, could you please have a look / give it a first try?
In the minimization step, after the execution of the CMA-ES algorithm, the covariance matrix is decomposed into eigenvalues and eigenvectors, and a 1-sigma eigenvector basis is then computed by adding to the best parameters each eigenvector multiplied by the square root of its eigenvalue.
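A minimal numpy sketch of that construction (the covariance matrix and best parameters are placeholder values, not the actual app.py variables):

```python
import numpy as np

# Placeholder inputs: best-fit parameters and a CMA-ES covariance matrix.
best_params = np.array([1.8, 0.1236, 0.5])
cov = np.diag([0.04, 0.0002, 0.08]) ** 2  # assumed diagonal covariance, for illustration

# Eigendecomposition of the (symmetric) covariance matrix.
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# 1-sigma eigenvector basis: shift the best point along each eigenvector
# by the square root of the corresponding eigenvalue.
one_sigma_basis = best_params + eigenvectors.T * np.sqrt(eigenvalues)[:, np.newaxis]
```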
I've noticed that `cov` is computed with the unscaled std `best_std`:
https://github.com/scarrazza/mcnntunes/blob/16c73bda6fdcc3d72eac19f54671ab521d703228/src/mcnntunelib/app.py#L298-L300
while the 1-sigma eigenvector basis is computed using `result[0]`, which I think is scaled. Then we unscale the basis (line 327), so `result[0]` gets mapped to `best_x`. However, the contribution of the covariance matrix is unscaled twice, isn't it?
In other words, I think that we should replace `result[0]` with `best_x` in line 324 and remove the unscale function call in line 327.
@scarrazza could you take a look and check whether I'm right or I misunderstood these lines?
All of this is in the master branch.
Following step 2 of issue #8 I recreated the MC runs for the AZ tune.
I used the following variation ranges for the tunable parameters:
| Parameter | Min | Max |
|---|---|---|
| intrinsicKT | 1 | 2.5 |
| asMZisr | 0.12 | 0.14 |
| pT0isr | 0.1 | 2.5 |
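For reproducibility, generating the runs amounts to sampling parameter points inside these ranges (e.g. uniformly); a minimal numpy sketch of such a sampling, not the exact command used to produce the runs:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Variation ranges from the table above.
ranges = {"intrinsicKT": (1.0, 2.5), "asMZisr": (0.12, 0.14), "pT0isr": (0.1, 2.5)}

n_runs = 256
# One row per MC run, columns ordered as in `ranges`.
samples = np.column_stack([rng.uniform(lo, hi, n_runs) for lo, hi in ranges.values()])
```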
In the original paper they used 0.5 as the lower bound for pt0isr, but when I did so during my thesis both Professor (third order) and Mcnntune suggested a lower value (though not Professor at fourth order).
I generated 256 runs, with half the errors of the previous datasets, and the resulting final tunes for Professor are:
| Parameter | Our results (Professor, order 3) | AZ (1406.3660, order 4) |
|---|---|---|
| intrinsicKT | 1.84 +- 0.04 | 1.71 +- 0.03 |
| asMZisr | 0.1236 +- 0.0002 | 0.1237 +- 0.0002 |
| pt0isr | left bound (0.1) | 0.59 +- 0.08 |
The observables and the bin weights are the same in both cases. Unfortunately, the prediction for pt0isr keeps getting lower, and I don't know if this makes sense. The line plot shows this behaviour:
With the previous dataset, where the pt0isr minimum value was 0.5, there was a minimum at 0.54 (but only with fourth-order interpolation). It was compatible with the AZ tune, but now it seems to be just a local minimum.
Then I performed the same tune with Mcnntune (without hyperparameter optimization, but using the best configuration found in my thesis):
| Parameter | Direct model | Inverse model | Inverse model (DA) |
|---|---|---|---|
| intrinsicKT | 1.76 | 1.79 +- 0.08 | 1.81 +- 0.06 |
| asMZisr | 0.1238 | 0.1233 +- 0.0004 | 0.1231 +- 0.0003 |
| pt0isr | 0.3 (nearly flat chi2) | 0.4 +- 0.4 | 0 +- 0.3 |
While the other parameters make some sense (even though they are not identical), both models struggle with pt0isr. The direct model presents a nearly flat chi2 near the minimum, while the inverse model prediction has a very large error.
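One way to make the flatness visible is to profile the chi2 along pt0isr while keeping the other parameters fixed at their best values; a minimal sketch with a toy chi2 (the real one would come from the trained model and the data histograms):

```python
import numpy as np

# Toy stand-in for the model-vs-data chi2; the real function would compare the
# trained parametrisation with the measured bins and weights.
def chi2(intrinsicKT, asMZisr, pt0isr):
    return ((intrinsicKT - 1.8) ** 2
            + ((asMZisr - 0.1235) / 0.001) ** 2
            + 0.01 * (pt0isr - 0.4) ** 2)

best = {"intrinsicKT": 1.76, "asMZisr": 0.1238}  # central values from the table above
scan = np.linspace(0.1, 2.5, 50)                 # pt0isr range used for the runs
profile = [chi2(best["intrinsicKT"], best["asMZisr"], x) for x in scan]
# Plotting `profile` against `scan` shows how shallow the chi2 is around its minimum.
```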
(now I'm doing some hyperparameter tuning to see what happens)
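For context, a generic hyperparameter scan of that kind could look like the following hyperopt sketch (the search space, ranges, and objective are illustrative assumptions, not the actual mcnntunes configuration):

```python
from hyperopt import fmin, tpe, hp

# Illustrative search space; the real scan would cover the model hyperparameters.
space = {
    "units": hp.quniform("units", 8, 64, 8),
    "layers": hp.quniform("layers", 1, 4, 1),
    "learning_rate": hp.loguniform("learning_rate", -7, -3),
}

def objective(config):
    # In a real scan this would train the model with `config` and return a
    # validation loss; a dummy value keeps the sketch runnable.
    return config["learning_rate"]

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=20)
print(best)
```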
Steps to perform in order to publish a paper: