I have some problems running the system with the provided datasets. In particular, I repeatedly get output reporting that all jobs failed:
~/AI-Descartes/symbolic-regression/run_emf$ java -cp ../emf/src/:../emf/yamlbeans-1.0.jar emf.Main opts-common.yaml opts-langmuir.yaml datasets/langmuir/sun_et_al/
Current relative path is: /home/gkronber/AI-Descartes/symbolic-regression/run_emf
YAML OPTS from dir: datasets/langmuir/sun_et_al/ [[outputDir: datasets/langmuir/sun_et_al//tmp], [infile: datasets/langmuir/sun_et_al//input.dat], [, opts-common.yaml], [, opts-langmuir.yaml], [, datasets/langmuir/sun_et_al//opts.yaml]]
YAML STRING infile: datasets/langmuir/sun_et_al//input.dat
YAML {infile=datasets/langmuir/sun_et_al//input.dat}
YAML opts from string infile: datasets/langmuir/sun_et_al//input.dat: {infile=datasets/langmuir/sun_et_al//input.dat}
YAML opts from opts-common.yaml: infile=... multitask=false nonconst_plusminus=false baron_exec=... baronOpts=... inputSampleSize=50 kill_existing_baron_jobs=false max_num_jobs=45 LicName=... printBaronFilesUpfront=true
YAML opts from opts-langmuir.yaml: infile=... multitask=false nonconst_plusminus=false baron_exec=... baronOpts=... inputSampleSize=50 kill_existing_baron_jobs=false max_num_jobs=45 LicName=... printBaronFilesUpfront=true dimensionVars={x=[1, 0], y=[0, 3]} use_dimensional_analysis=false relative_squared_error=false prod_exp_limit=6 add_noise_to_input=false max_num_consts=1 dimensionUnits=[bar, mm] max_tree_depth=3
WARNING: yaml key max_num_consts is being overridden in file datasets/langmuir/sun_et_al//opts.yaml.
YAML opts from datasets/langmuir/sun_et_al//opts.yaml: infile=... multitask=false nonconst_plusminus=false baron_exec=... baronOpts=... inputSampleSize=50 kill_existing_baron_jobs=false max_num_jobs=45 LicName=... printBaronFilesUpfront=true dimensionVars={x=[1, 0], y=[0, 3]} use_dimensional_analysis=false relative_squared_error=false prod_exp_limit=6 add_noise_to_input=false max_num_consts=4 dimensionUnits=[bar, mm] max_tree_depth=3 operators=[*, /, +] max_var_exp=2
UNOPS [*, /, +] [] []
The following yaml items have non-default values:
max_num_jobs: 45
multitask: false
max_num_consts: 4
prod_exp_limit: 6
printBaronFilesUpfront: true
inputSampleSize: 50
BOP
[MaxTime: 60;, PrTimeFreq: 1;, EpsA: 1e-4;, EpsR: 1e-3;]
outputDir: datasets/langmuir/sun_et_al//tmp
Operators: [*, /, +]
0: [0.07] -> 0.695
1: [0.11] -> 0.752
2: [0.2] -> 0.797
3: [0.31] -> 0.825
4: [0.56] -> 0.86
5: [0.8] -> 0.882
6: [1.07] -> 0.904
7: [1.46] -> 0.923
8: [3.51] -> 0.976
9: [6.96] -> 1.212
10: [12.06] -> 1.371
11: [17.26] -> 1.469
12: [27.56] -> 1.535
13: [41.42] -> 1.577
14: [55.2] -> 1.602
15: [68.95] -> 1.619
16: [86.17] -> 1.632
NUM TREES 31 with max depth 3
ArithNormTree Unnormalized (blank if same)
0 3: P
1 7: (P+P)
2 10: (P+P+P)
3 13: (P/(P+P))
4 13: (P+P+P+P)
5 16: (P/(P+P+P))
6 16: (P+P+P+P+P)
7 17: ((P+P)/(P+P))
8 17: (P+(P/(P+P)))
9 19: (P/(P+P+P+P))
10 19: (P+P+P+P+P+P)
11 20: ((P+P+P)/(P+P))
12 20: ((P+P)/(P+P+P))
13 20: (P+P+(P/(P+P)))
14 21: (P+((P+P)/(P+P)))
15 22: (P+P+P+P+P+P+P)
16 23: ((P+P+P)/(P+P+P))
17 23: ((P+P+P+P)/(P+P))
18 23: ((P+P)/(P+P+P+P))
19 23: (P+P+P+(P/(P+P)))
20 24: (P+P+((P+P)/(P+P)))
21 25: (P+P+P+P+P+P+P+P)
22 26: ((P+P+P)/(P+P+P+P))
23 26: ((P+P+P+P)/(P+P+P))
24 26: (P+P+P+P+(P/(P+P)))
25 27: (P+P+P+((P+P)/(P+P)))
26 27: ((P/(P+P))+(P/(P+P)))
27 29: ((P+P+P+P)/(P+P+P+P))
28 30: (P+P+P+P+((P+P)/(P+P)))
29 31: ((P/(P+P))+((P+P)/(P+P)))
30 35: (((P+P)/(P+P))+((P+P)/(P+P)))
-----------------------
IGNORING inputSampleSize, since too large (50 >= 17).
Printing all jobs upfront - ubound is 0.0
THESE JOBS FAILED: [0:P, 1:(P+P), 2:(P+P+P), 3:(P/(P+P)), 4:(P+P+P+P), 5:(P/(P+P+P)), 6:(P+P+P+P+P), 7:((P+P)/(P+P)), 8:(P+(P/(P+P))), 9:(P/(P+P+P+P)), 10:(P+P+P+P+P+P), 11:((P+P+P)/(P+P)), 12:((P+P)/(P+P+P)), 13:(P+P+(P/(P+P))), 14:(P+((P+P)/(P+P))), 15:(P+P+P+P+P+P+P), 16:((P+P+P)/(P+P+P)), 17:((P+P+P+P)/(P+P)), 18:((P+P)/(P+P+P+P)), 19:(P+P+P+(P/(P+P))), 20:(P+P+((P+P)/(P+P))), 21:(P+P+P+P+P+P+P+P), 22:((P+P+P)/(P+P+P+P)), 23:((P+P+P+P)/(P+P+P)), 24:(P+P+P+P+(P/(P+P))), 25:(P+P+P+((P+P)/(P+P))), 26:((P/(P+P))+(P/(P+P))), 27:((P+P+P+P)/(P+P+P+P)), 28:(P+P+P+P+((P+P)/(P+P))), 29:((P/(P+P))+((P+P)/(P+P))), 30:(((P+P)/(P+P))+((P+P)/(P+P)))]
RERUNNING
.still running: []
... the FAILED / RERUNNING output above repeats endlessly
I had to compile the source files myself, because the .jar file mentioned in the docs (https://github.com/IBM/AI-Descartes/blob/main/symbolic-regression/README.md) does not exist in the repository. I am running the demo license of BARON on my system, because I do not want to spend more than $200 a month on an academic license just to try out AI Descartes.
Please let me know how I can run one of the examples on my system.