digitalinblue / celero
C++ Benchmark Authoring Library/Framework
License: Other
Provide the ability to measure performance scaling in a multithreaded environment.
The Makefile generated by CMake has no "install" rule. It would be good to have one, since installing a library via "make install" is a convention.
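A rough sketch of what such a rule might look like in CMakeLists.txt (the target name "celero" and the include/ layout are assumptions based on the build output elsewhere in these reports, not the project's actual install configuration):

```cmake
# Hypothetical install rule sketch for the celero target.
install(TARGETS celero
        LIBRARY DESTINATION lib    # shared library (libcelero.so / .dylib)
        ARCHIVE DESTINATION lib    # static library, if built
        RUNTIME DESTINATION bin)   # DLLs on Windows

# Install the public headers alongside the library.
install(DIRECTORY include/celero DESTINATION include)
```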
Though we are using baseline measurements, it tends to be handy to know something about the hardware on which an experiment was run. This is especially true when using raw measurements vs. baseline-normalized measurements.
If no baseline case is defined, the program aborts. Update it to notify the user of the omission before exiting.
I wish to use Celero on a MacBook, but I am using clang and don't have Xcode. I hope to have an option for this case.
Statistics are computed but go nowhere, if I'm reading things right. This affects confidence in the benchmark: notably variance and skewness, as well as min/max.
These are pretty important to show for benchmarks, IMHO. Maybe provide a way to opt in, given the terminal width they would require.
Hello! Thank you for the support. Celero is being very helpful.
It would be nice if getExperimentValues() were a templated method. The problemSpace array could then be parameterized at compile time using templates.
Another option would be making the problemSpace some kind of map data structure, which maps an integer to a structure representing the input of the benchmark test.
In my opinion, this would give more flexibility in the setUp methods.
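A sketch of the map-based idea. The struct and fixture below are illustrative only: they are not Celero's current API, and the file name and field names are invented for the example.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical input descriptor for one benchmark case.
struct ProblemInput
{
    std::string patternFile;
    int32_t errors;
};

class MapProblemSpaceFixture
{
public:
    MapProblemSpaceFixture()
    {
        // Map an integer problem-space key to a rich input structure.
        problemSpace[0] = ProblemInput{"dna50.txt", 0};
        problemSpace[1] = ProblemInput{"dna50.txt", 3};
    }

    // SetUp receives only the key, but can recover the full input.
    void SetUp(int64_t key)
    {
        input = problemSpace.at(key);
    }

    std::map<int64_t, ProblemInput> problemSpace;
    ProblemInput input;
};
```

This keeps the framework's integer-valued problem space while letting setUp work with arbitrarily structured inputs.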
Hello there!
I ran a Celero benchmark and wrote the output to a CSV file.
This is what I obtained:
Group Experiment Problem Space Samples Iterations Baseline us/Iteration Iterations/sec Min (us) Mean (us) Max (us)
erastosthenes serial_erastosthenes 10 1 3 1 0.666667 1500000 2 2 2
erastosthenes serial_erastosthenes 100 1 3 1 0.666667 1500000 2 2 2
erastosthenes serial_erastosthenes 1000 1 3 1 2.66667 375000 8 8 8
erastosthenes serial_erastosthenes 10000 1 3 1 45.6667 21897.8 137 137 137
erastosthenes serial_erastosthenes 100000 1 3 1 847.667 1179.71 2543 2543 2543
erastosthenes serial_erastosthenes 1000000 1 3 1 7159.67 139.671 21479 21479 21479
erastosthenes serial_erastosthenes 10000000 1 3 1 152912 6.53971 458736 458736 458736
erastosthenes serial_erastosthenes 100000000 1 3 1 1665480 0.600427 4996442 4996440 4996442
erastosthenes serial_erastosthenes 1000000000 1 3 1 18366800 0.054446 55100430 55100400 55100430
In some cases Min (us) > Mean (us). Perhaps the columns are not labeled correctly.
I'm having a problem with the baseline code.
Although I add seven values to the ProblemSetValues array, the baseline code is run only once, with the first value.
The benchmark code runs normally, but after the first phase it is compared against an artificial baseline of one second. Hence, only the first run of the baseline code, using the first value from ProblemSetValues, is compared correctly with the other benchmark code.
I've set the number of samples to 1, and the number of iterations to 1.
Am I doing something wrong?
class LandauVishkinBenchmark : public celero::TestFixture
{
public:
    const static int NUMBER_OF_RUNS = 1; // Number of samples
    const static int NUMBER_OF_ITERATIONS = 1;

    LandauVishkinBenchmark()
    {
        this->ProblemSetValues.push_back(0);
        this->ProblemSetValues.push_back(1);
        this->ProblemSetValues.push_back(2);
        this->ProblemSetValues.push_back(3);
        this->ProblemSetValues.push_back(6);
        this->ProblemSetValues.push_back(10);
        this->ProblemSetValues.push_back(20);
    }

    virtual void SetUp(const int32_t errors)
    {
        _errors = errors;
    }

    virtual void TearDown()
    {
    }

    Text* createRandomPattern(Text* T, integer size);

    virtual ~LandauVishkinBenchmark() {}

    integer _errors;
};
Here is the output from Celero:
[==========]
[ CELERO ]
[==========]
[ STAGE ] Baselining
[==========]
[ RUN ] LV_DNA50_50.LV_DC_SE -- 1 run, 1 call per run.
[ DONE ] LV_DNA50_50.LV_DC_SE (4.328641 sec) [1 calls in 4.328641 sec] [4.328641 us/call] [0.023102 calls/sec]
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] LV_DNA50_50.LV_RMQ -- 1 run, 1 call per run.
[ DONE ] LV_DNA50_50.LV_RMQ (29.722101 sec) [1 calls in 29.722101 sec] [29.722101 us/call] [0.003364 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_RMQ 6.866382
[ RUN ] LV_DNA50_50.LV_DMIN -- 1 run, 1 call per run.
[ DONE ] LV_DNA50_50.LV_DMIN (13.512609 sec) [1 calls in 13.512609 sec] [13.512609 us/call] [0.007400 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DMIN 3.121675
[ RUN ] LV_DNA50_50.LV_DC -- 1 run, 1 call per run.
[ DONE ] LV_DNA50_50.LV_DC (0.517192 sec) [1 calls in 0.517192 sec] [0.517192 us/call] [0.193352 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC 0.119481
[ RUN ] LV_DNA50_50.LV_DC_NAV -- 1 run, 1 call per run.
[ DONE ] LV_DNA50_50.LV_DC_NAV (0.541809 sec) [1 calls in 0.541809 sec] [0.541809 us/call] [0.184567 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_NAV 0.125168
[ RUN ] LV_DNA50_50.LV_DC_PAR -- 1 run, 1 call per run.
[ DONE ] LV_DNA50_50.LV_DC_PAR (0.233204 sec) [1 calls in 0.233204 sec] [0.233204 us/call] [0.428809 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_PAR 0.053875
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] LV_DNA50_50.LV_RMQ -- 1 run, 1 call per run. Problem Set 1 of 6.
[ DONE ] LV_DNA50_50.LV_RMQ (45.937276 sec) [1 calls in 45.937276 sec] [45.937276 us/call] [0.002177 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_RMQ 45.937276
[ RUN ] LV_DNA50_50.LV_DMIN -- 1 run, 1 call per run. Problem Set 1 of 6.
[ DONE ] LV_DNA50_50.LV_DMIN (15.816064 sec) [1 calls in 15.816064 sec] [15.816064 us/call] [0.006323 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DMIN 15.816064
[ RUN ] LV_DNA50_50.LV_DC -- 1 run, 1 call per run. Problem Set 1 of 6.
[ DONE ] LV_DNA50_50.LV_DC (0.797697 sec) [1 calls in 0.797697 sec] [0.797697 us/call] [0.125361 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC 0.797697
[ RUN ] LV_DNA50_50.LV_DC_NAV -- 1 run, 1 call per run. Problem Set 1 of 6.
[ DONE ] LV_DNA50_50.LV_DC_NAV (1.028075 sec) [1 calls in 1.028075 sec] [1.028075 us/call] [0.097269 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_NAV 1.028075
[ RUN ] LV_DNA50_50.LV_DC_PAR -- 1 run, 1 call per run. Problem Set 1 of 6.
[ DONE ] LV_DNA50_50.LV_DC_PAR (0.301660 sec) [1 calls in 0.301660 sec] [0.301660 us/call] [0.331499 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_PAR 0.301660
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] LV_DNA50_50.LV_RMQ -- 1 run, 1 call per run. Problem Set 2 of 6.
[ DONE ] LV_DNA50_50.LV_RMQ (60.275921 sec) [1 calls in 60.275921 sec] [60.275921 us/call] [0.001659 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_RMQ 60.275921
[ RUN ] LV_DNA50_50.LV_DMIN -- 1 run, 1 call per run. Problem Set 2 of 6.
[ DONE ] LV_DNA50_50.LV_DMIN (17.519518 sec) [1 calls in 17.519518 sec] [17.519518 us/call] [0.005708 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DMIN 17.519518
[ RUN ] LV_DNA50_50.LV_DC -- 1 run, 1 call per run. Problem Set 2 of 6.
[ DONE ] LV_DNA50_50.LV_DC (1.136020 sec) [1 calls in 1.136020 sec] [1.136020 us/call] [0.088027 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC 1.136020
[ RUN ] LV_DNA50_50.LV_DC_NAV -- 1 run, 1 call per run. Problem Set 2 of 6.
[ DONE ] LV_DNA50_50.LV_DC_NAV (1.288852 sec) [1 calls in 1.288852 sec] [1.288852 us/call] [0.077588 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_NAV 1.288852
[ RUN ] LV_DNA50_50.LV_DC_PAR -- 1 run, 1 call per run. Problem Set 2 of 6.
[ DONE ] LV_DNA50_50.LV_DC_PAR (0.469037 sec) [1 calls in 0.469037 sec] [0.469037 us/call] [0.213203 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_PAR 0.469037
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] LV_DNA50_50.LV_RMQ -- 1 run, 1 call per run. Problem Set 3 of 6.
[ DONE ] LV_DNA50_50.LV_RMQ (75.057950 sec) [1 calls in 75.057950 sec] [75.057950 us/call] [0.001332 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_RMQ 75.057950
[ RUN ] LV_DNA50_50.LV_DMIN -- 1 run, 1 call per run. Problem Set 3 of 6.
[ DONE ] LV_DNA50_50.LV_DMIN (19.086265 sec) [1 calls in 19.086265 sec] [19.086265 us/call] [0.005239 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DMIN 19.086265
[ RUN ] LV_DNA50_50.LV_DC -- 1 run, 1 call per run. Problem Set 3 of 6.
[ DONE ] LV_DNA50_50.LV_DC (1.742060 sec) [1 calls in 1.742060 sec] [1.742060 us/call] [0.057403 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC 1.742060
[ RUN ] LV_DNA50_50.LV_DC_NAV -- 1 run, 1 call per run. Problem Set 3 of 6.
[ DONE ] LV_DNA50_50.LV_DC_NAV (1.428545 sec) [1 calls in 1.428545 sec] [1.428545 us/call] [0.070001 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_NAV 1.428545
[ RUN ] LV_DNA50_50.LV_DC_PAR -- 1 run, 1 call per run. Problem Set 3 of 6.
[ DONE ] LV_DNA50_50.LV_DC_PAR (0.532349 sec) [1 calls in 0.532349 sec] [0.532349 us/call] [0.187847 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_PAR 0.532349
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] LV_DNA50_50.LV_RMQ -- 1 run, 1 call per run. Problem Set 4 of 6.
[ DONE ] LV_DNA50_50.LV_RMQ (118.817409 sec) [1 calls in 118.817409 sec] [118.817409 us/call] [0.000842 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_RMQ 118.817409
[ RUN ] LV_DNA50_50.LV_DMIN -- 1 run, 1 call per run. Problem Set 4 of 6.
[ DONE ] LV_DNA50_50.LV_DMIN (24.625528 sec) [1 calls in 24.625528 sec] [24.625528 us/call] [0.004061 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DMIN 24.625528
[ RUN ] LV_DNA50_50.LV_DC -- 1 run, 1 call per run. Problem Set 4 of 6.
[ DONE ] LV_DNA50_50.LV_DC (2.905366 sec) [1 calls in 2.905366 sec] [2.905366 us/call] [0.034419 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC 2.905366
[ RUN ] LV_DNA50_50.LV_DC_NAV -- 1 run, 1 call per run. Problem Set 4 of 6.
[ DONE ] LV_DNA50_50.LV_DC_NAV (2.651918 sec) [1 calls in 2.651918 sec] [2.651918 us/call] [0.037709 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_NAV 2.651918
[ RUN ] LV_DNA50_50.LV_DC_PAR -- 1 run, 1 call per run. Problem Set 4 of 6.
[ DONE ] LV_DNA50_50.LV_DC_PAR (0.951985 sec) [1 calls in 0.951985 sec] [0.951985 us/call] [0.105044 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_PAR 0.951985
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] LV_DNA50_50.LV_RMQ -- 1 run, 1 call per run. Problem Set 5 of 6.
[ DONE ] LV_DNA50_50.LV_RMQ (172.621011 sec) [1 calls in 172.621011 sec] [172.621011 us/call] [0.000579 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_RMQ 172.621011
[ RUN ] LV_DNA50_50.LV_DMIN -- 1 run, 1 call per run. Problem Set 5 of 6.
[ DONE ] LV_DNA50_50.LV_DMIN (32.110696 sec) [1 calls in 32.110696 sec] [32.110696 us/call] [0.003114 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DMIN 32.110696
[ RUN ] LV_DNA50_50.LV_DC -- 1 run, 1 call per run. Problem Set 5 of 6.
[ DONE ] LV_DNA50_50.LV_DC (4.631579 sec) [1 calls in 4.631579 sec] [4.631579 us/call] [0.021591 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC 4.631579
[ RUN ] LV_DNA50_50.LV_DC_NAV -- 1 run, 1 call per run. Problem Set 5 of 6.
[ DONE ] LV_DNA50_50.LV_DC_NAV (4.179240 sec) [1 calls in 4.179240 sec] [4.179240 us/call] [0.023928 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_NAV 4.179240
[ RUN ] LV_DNA50_50.LV_DC_PAR -- 1 run, 1 call per run. Problem Set 5 of 6.
[ DONE ] LV_DNA50_50.LV_DC_PAR (1.290418 sec) [1 calls in 1.290418 sec] [1.290418 us/call] [0.077494 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_PAR 1.290418
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] LV_DNA50_50.LV_RMQ -- 1 run, 1 call per run. Problem Set 6 of 6.
[ DONE ] LV_DNA50_50.LV_RMQ (321.168932 sec) [1 calls in 321.168932 sec] [321.168932 us/call] [0.000311 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_RMQ 321.168932
[ RUN ] LV_DNA50_50.LV_DMIN -- 1 run, 1 call per run. Problem Set 6 of 6.
[ DONE ] LV_DNA50_50.LV_DMIN (51.033489 sec) [1 calls in 51.033489 sec] [51.033489 us/call] [0.001959 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DMIN 51.033489
[ RUN ] LV_DNA50_50.LV_DC -- 1 run, 1 call per run. Problem Set 6 of 6.
[ DONE ] LV_DNA50_50.LV_DC (7.547392 sec) [1 calls in 7.547392 sec] [7.547392 us/call] [0.013250 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC 7.547392
[ RUN ] LV_DNA50_50.LV_DC_NAV -- 1 run, 1 call per run. Problem Set 6 of 6.
[ DONE ] LV_DNA50_50.LV_DC_NAV (7.185174 sec) [1 calls in 7.185174 sec] [7.185174 us/call] [0.013918 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_NAV 7.185174
[ RUN ] LV_DNA50_50.LV_DC_PAR -- 1 run, 1 call per run. Problem Set 6 of 6.
[ DONE ] LV_DNA50_50.LV_DC_PAR (3.774900 sec) [1 calls in 3.774900 sec] [3.774900 us/call] [0.026491 calls/sec]
[ BASELINE ] LV_DNA50_50.LV_DC_PAR 3.774900
[==========]
[ STAGE ] Completed. 6 tests complete.
[==========]
Is there a FindCelero.cmake or CeleroConfig.cmake available for easy integration into other projects?
It would be very helpful to have Travis CI building Celero under multiple compilers/configurations and eventually running a Celero test suite. Fix the travis.yml file so that Celero actually builds under at least one compiler, and so that Travis stops reporting broken builds when, in reality, it is the travis.yml that is broken.
Implement a baseline macro that accepts a time in microseconds against which benchmark results can be compared.
This would be convenient when the baseline time is known and benchmarks need to be compared against that known time.
Example:
BASELINE_TIME(Group, BaselineName, MicroSeconds)
Achieve more precise measurements using Mach time units.
I tried to compile Celero with gcc version 4.7.3 (Ubuntu/Linaro 4.7.3-1ubuntu1)
and got the following error:
[ 69%] Building CXX object CMakeFiles/celeroDemoComparison.dir/examples/DemoComparison.cpp.o
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:72:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:72:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:72:1: note: (if you use ‘-fpermissive’ G++ will accept your code)
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:72:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:72:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:78:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:78:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:78:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:78:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:84:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:84:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:84:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:84:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:90:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:90:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:90:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:90:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:96:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:96:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:96:1: error: ‘<::’ cannot begin a template-argument list [-fpermissive]
/home/lisitsyn/Matters/oss/Celero/examples/DemoComparison.cpp:96:1: note: ‘<:’ is an alternate spelling for ‘[’. Insert whitespace between ‘<’ and ‘::’
make[2]: *** [CMakeFiles/celeroDemoComparison.dir/examples/DemoComparison.cpp.o] Error 1
make[1]: *** [CMakeFiles/celeroDemoComparison.dir/all] Error 2
Make it easier for others to contribute demo and example experiments. Develop boilerplate CMake to enforce some standardization among projects.
Here's the description of the Celero executable options:
https://github.com/DigitalInBlue/Celero#command-line
There is a -h option, but for me it doesn't print help.
Would it be possible to print a list of the available options (-xml, -g, and -o at the moment) and not execute the benchmarks when the user adds a -h or --help option?
I would like to point out that identifiers like "_CELERO_CONSOLE_H_" and "_CELERO_UTILITIES_H_" do not conform to the naming rules of the C++ language standard: names beginning with an underscore followed by an uppercase letter are reserved for the implementation.
Would you like to adjust your selection toward non-reserved names?
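For example, a guard spelled without the reserved leading-underscore pattern (and without double underscores) would conform; the demo function inside is only there to make the sketch self-contained:

```cpp
// Sketch: a conforming include guard. Names starting with an underscore
// followed by an uppercase letter, or containing "__", are reserved for
// the implementation; CELERO_CONSOLE_H avoids both patterns.
#ifndef CELERO_CONSOLE_H
#define CELERO_CONSOLE_H

// Example declaration protected by the guard.
inline int celeroGuardDemo() { return 42; }

#endif // CELERO_CONSOLE_H
```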
The Result class is currently embedded inside the Experiment class. While this is an acceptable relationship, it isn't very clean or maintainable. Break Result out into its own class.
In many benchmarks I write, the first run needs to be thrown away because of things like JIT compilation (of various kinds), which really skews the statistics the package computes.
Would it be possible to give all tests an evaluation pass before recording the benchmarks, so I can get more accurate statistics?
When building Celero (successfully) I get the following warning:
CMake Warning (dev):
Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run "cmake
--help-policy CMP0042" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
MACOSX_RPATH is not specified for the following targets:
celero
This warning is for project developers. Use -Wno-dev to suppress it.
I don't think this is something to be worried about though.
The timing information printed on the console is incorrect. Consider the example given in README.md:
[ RUN ] Complex1 [1 samples of 710000 calls each.]
[ DONE ] DemoSimple.Complex1 1.194630 sec. [1.682577e-006 us/call] [59432.628973 calls/sec]
The math works out as follows:
1.194630 sec / 710000 calls = 1.682577e-006 seconds/call (seconds, not "us", i.e. microseconds, as labeled)
and
710000 calls / 1.194630 sec = 594326.2767551459 calls/sec (the demo output is smaller by exactly 10x)
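The recomputation can be checked mechanically; this is a standalone sketch using the raw numbers above, not Celero code:

```cpp
// Raw numbers from the README example: 710000 calls in 1.194630 seconds.
constexpr double kTotalSec = 1.194630;
constexpr double kCalls = 710000.0;

// Seconds per call -- the console labels this same value "us/call".
constexpr double kSecPerCall = kTotalSec / kCalls;

// Calls per second -- the console prints a value exactly 10x smaller.
constexpr double kCallsPerSec = kCalls / kTotalSec;
```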
$ cat /etc/issue
Ubuntu 14.10 \n \l
$ gcc --version
gcc (Ubuntu 4.9.1-16ubuntu6) 4.9.1
$ mkdir build && cd build
$ cmake ..
$ cmake -DCMAKE_BUILD_TYPE=Debug ..
-- The C compiler identification is GNU 4.9.1
-- The CXX compiler identification is GNU 4.9.1
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- SYSTEM: Linux
CMAKE_CXX_COMPILER: /usr/bin/c++
-- Configuring done
-- Generating done
-- Build files have been written to: /home/gluttton/Projects/Celero/build
$ make
Scanning dependencies of target celero
--------8<--------
[ 17%] Building CXX object CMakeFiles/celero.dir/src/Console.cpp.o
/home/gluttton/Projects/Celero/src/Console.cpp:28:21: fatal error: curses.h: No such file or directory
#include <curses.h>
^
compilation terminated.
CMakeFiles/celero.dir/build.make:146: recipe for target 'CMakeFiles/celero.dir/src/Console.cpp.o' failed
--------8<--------
The error can be fixed by installing libncurses5-dev.
I propose adding:
find_package (Curses)
or:
set (CURSES_NEED_NCURSES TRUE)
find_package (Curses)
to CMakeLists.txt, and checking for the required files at configuration time (when cmake is run).
There is a wrong filename in CMakeLists.txt (the "u" of Unit must be uppercase). CMake reports this error:
CMake Error at CMakeLists.txt:229 (add_executable):
Cannot find source file:
examples/DemoSimpleJunit.cpp
$ cat /etc/issue
Ubuntu 14.10 \n \l
$ cmake --version
cmake version 2.8.12.2
$ cmake -DCELERO_ENABLE_EXPERIMENTS=ON .
-- The C compiler identification is GNU 4.9.1
-- The CXX compiler identification is GNU 4.9.1
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- SYSTEM: Linux
CMAKE_CXX_COMPILER: /usr/bin/c++
CMake Error at experiments/CMakeLists.txt:6 (ADD_SUBDIRECTORY):
The source directory
/home/gluttton/Projects/Celero/experiments/CMakeFiles
does not contain a CMakeLists.txt file.
-- Configuring incomplete, errors occurred!
See also "/home/gluttton/Projects/Celero/CMakeFiles/CMakeOutput.log".
This occurs because the script inside experiments/CMakeLists.txt tries to find a CMakeLists.txt inside the CMake build directory.
To prevent such errors, I propose adding something like:
if (${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR})
message (FATAL_ERROR "Prevented in-tree build. This is not allowed in experimental mode.")
endif (${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR})
to experiments/CMakeLists.txt.
The [notice] output would be used inside a fixture to print experiment data to the terminal without impacting the benchmark numbers.
Hello.
Celero is making my life much easier. 👍 👍 👍
:)
It would be nice if there were an option to measure in ns instead of us. When the code takes on the order of ns, Celero reports each call as 0.00000 us.
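Independent of what Celero reports, std::chrono can resolve nanoseconds; a sketch of the kind of measurement being requested (not Celero's API, just a standalone helper):

```cpp
#include <chrono>
#include <cstdint>

// Measure one invocation of a callable in nanoseconds using the
// monotonic steady clock.
template <typename F>
int64_t measureNs(F&& f)
{
    const auto start = std::chrono::steady_clock::now();
    f();
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
}
```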
I wanted to try out Celero and did this:
$ cmake CMakeLists.txt
$ make
Scanning dependencies of target celero
[ 6%] Building CXX object CMakeFiles/celero.dir/src/BenchmarkInfo.cpp.o
/Users/deil/code/Celero/src/BenchmarkInfo.cpp:21:4: warning: field 'resetCalls' will be initialized after field 'baselineUnit' [-Wreorder]
resetCalls(0),
^
/Users/deil/code/Celero/src/BenchmarkInfo.cpp:37:4: warning: field 'resetCalls' will be initialized after field 'baselineUnit' [-Wreorder]
resetCalls(calls),
^
/Users/deil/code/Celero/src/BenchmarkInfo.cpp:56:4: warning: field 'resetCalls' will be initialized after field 'baselineUnit' [-Wreorder]
resetCalls(other.pimpl->resetCalls),
^
3 warnings generated.
[ 13%] Building CXX object CMakeFiles/celero.dir/src/Celero.cpp.o
[ 20%] Building CXX object CMakeFiles/celero.dir/src/Console.cpp.o
[ 26%] Building CXX object CMakeFiles/celero.dir/src/Executor.cpp.o
[ 33%] Building CXX object CMakeFiles/celero.dir/src/JUnit.cpp.o
[ 40%] Building CXX object CMakeFiles/celero.dir/src/Print.cpp.o
[ 46%] Building CXX object CMakeFiles/celero.dir/src/ResultTable.cpp.o
[ 53%] Building CXX object CMakeFiles/celero.dir/src/TestVector.cpp.o
[ 60%] Building CXX object CMakeFiles/celero.dir/src/TestFixture.cpp.o
[ 66%] Building CXX object CMakeFiles/celero.dir/src/Timer.cpp.o
Linking CXX shared library libcelero.dylib
[ 66%] Built target celero
Scanning dependencies of target celeroDemoComparison
[ 73%] Building CXX object CMakeFiles/celeroDemoComparison.dir/examples/DemoComparison.cpp.o
Linking CXX executable celeroDemoComparison
[==========]
[ CELERO ]
[==========]
[ STAGE ] Baselining
[==========]
[ RUN ] StackOverflow.Baseline -- 100 samples, 5000000 calls per run.
[ DONE ] StackOverflow.Baseline (1.197661 sec) [5000000 calls in 1197661 usec] [0.239532 us/call] [4174804.055572 calls/sec]
[==========]
[ STAGE ] Benchmarking
[==========]
[ RUN ] StackOverflow.Compare -- 100 samples, 5000000 calls per run.
After a few minutes I aborted the StackOverflow.Compare test.
Is it intentional that such a long-running test runs when a user simply types make, or is there a bug and the test is hanging for some reason?
Hi,
I tried to create a static-library-only build with VS2015. I got the .lib file (around 14 MB), but when I tried to compile your runnable example, I received LNK2019 errors (log file attached). However, the same example, compiled against the dynamic libs (40 KB celero.lib and ~1 MB celero.dll), builds and runs without a problem.
Celero v1.1.0 does not run benchmarks which do not have problem spaces defined.
Improve the internal architecture to support more user features and better output reporting.
Create XML formatted output compatible with JUnit such that automated build systems can evaluate benchmark performance and report it just as they would with a unit testing framework.
template <class T> void DoNotOptimizeAwayExecuteLambda(T&& x)
{
    // Begin DoNotOptimizeAway
    volatile static T* xPrime = &x;
    xPrime = &x;
    x();
    // End DoNotOptimizeAway
}
I want to propose some changes via pull request, and the guidelines tell me to submit them to the develop branch. But after commit 4dca141, develop is behind master. Something is out of date: either the guidelines or the develop branch.
Hello!
I had a problem during compilation while using Ubuntu 15.04 with g++ 4.9.2.
When running the makefile generated by the CMakeLists.txt, this is the output:
Scanning dependencies of target celero
[ 3%] Building CXX object CMakeFiles/celero.dir/src/Archive.cpp.o
[ 6%] Building CXX object CMakeFiles/celero.dir/src/Benchmark.cpp.o
[ 10%] Building CXX object CMakeFiles/celero.dir/src/Callbacks.cpp.o
[ 13%] Building CXX object CMakeFiles/celero.dir/src/Celero.cpp.o
[ 17%] Building CXX object CMakeFiles/celero.dir/src/Console.cpp.o
[ 20%] Building CXX object CMakeFiles/celero.dir/src/Distribution.cpp.o
[ 24%] Building CXX object CMakeFiles/celero.dir/src/Executor.cpp.o
[ 27%] Building CXX object CMakeFiles/celero.dir/src/JUnit.cpp.o
[ 31%] Building CXX object CMakeFiles/celero.dir/src/Print.cpp.o
[ 34%] Building CXX object CMakeFiles/celero.dir/src/Experiment.cpp.o
[ 37%] Building CXX object CMakeFiles/celero.dir/src/Result.cpp.o
[ 41%] Building CXX object CMakeFiles/celero.dir/src/ResultTable.cpp.o
[ 44%] Building CXX object CMakeFiles/celero.dir/src/Statistics.cpp.o
[ 48%] Building CXX object CMakeFiles/celero.dir/src/TestVector.cpp.o
[ 51%] Building CXX object CMakeFiles/celero.dir/src/TestFixture.cpp.o
[ 55%] Building CXX object CMakeFiles/celero.dir/src/ThreadTestFixture.cpp.o
[ 58%] Building CXX object CMakeFiles/celero.dir/src/Timer.cpp.o
[ 62%] Building CXX object CMakeFiles/celero.dir/src/Utilities.cpp.o
Linking CXX shared library libcelero.so
[ 62%] Built target celero
Scanning dependencies of target celeroDemoMultithread
[ 65%] Building CXX object experiments/DemoMultithread/CMakeFiles/celeroDemoMultithread.dir/DemoMultithread.cpp.o
Linking CXX executable ../../celeroDemoMultithread
[ 65%] Built target celeroDemoMultithread
Scanning dependencies of target celeroDemoFileWrite
[ 68%] Building CXX object experiments/DemoFileWrite/CMakeFiles/celeroDemoFileWrite.dir/DemoFileWrite.cpp.o
Linking CXX executable ../../celeroDemoFileWrite
[ 68%] Built target celeroDemoFileWrite
Scanning dependencies of target celeroExperimentParameterPassing
[ 72%] Building CXX object experiments/ExperimentParameterPassing/CMakeFiles/celeroExperimentParameterPassing.dir/ExperimentParameterPassing.cpp.o
Linking CXX executable ../../celeroExperimentParameterPassing
[ 72%] Built target celeroExperimentParameterPassing
Scanning dependencies of target celeroExperimentSimpleComparison
[ 75%] Building CXX object experiments/ExperimentSimpleComparison/CMakeFiles/celeroExperimentSimpleComparison.dir/ExperimentSimpleComparison.cpp.o
Linking CXX executable ../../celeroExperimentSimpleComparison
[ 75%] Built target celeroExperimentSimpleComparison
Scanning dependencies of target celeroDemoSleep
[ 79%] Building CXX object experiments/DemoSleep/CMakeFiles/celeroDemoSleep.dir/DemoSleep.cpp.o
Linking CXX executable ../../celeroDemoSleep
[ 79%] Built target celeroDemoSleep
Scanning dependencies of target celeroExperimentCostOfPimpl
[ 82%] Building CXX object experiments/ExperimentCostOfPimpl/CMakeFiles/celeroExperimentCostOfPimpl.dir/ExperimentCostOfPimpl.cpp.o
Linking CXX executable ../../celeroExperimentCostOfPimpl
[ 82%] Built target celeroExperimentCostOfPimpl
Scanning dependencies of target celeroDemoSimpleJUnit
[ 86%] Building CXX object experiments/DemoSimpleJUnit/CMakeFiles/celeroDemoSimpleJUnit.dir/DemoSimpleJUnit.cpp.o
Linking CXX executable ../../celeroDemoSimpleJUnit
[ 86%] Built target celeroDemoSimpleJUnit
Scanning dependencies of target celeroDemoDoNotOptimizeAway
[ 89%] Building CXX object experiments/DemoDoNotOptimizeAway/CMakeFiles/celeroDemoDoNotOptimizeAway.dir/DemoDoNotOptimizeAway.cpp.o
Linking CXX executable ../../celeroDemoDoNotOptimizeAway
[ 89%] Built target celeroDemoDoNotOptimizeAway
Scanning dependencies of target celeroDemoTransform
[ 93%] Building CXX object experiments/DemoTransform/CMakeFiles/celeroDemoTransform.dir/DemoTransform.cpp.o
Linking CXX executable ../../celeroDemoTransform
[ 93%] Built target celeroDemoTransform
Scanning dependencies of target celeroExperimentSortingRandomInts
[ 96%] Building CXX object experiments/ExperimentSortingRandomInts/CMakeFiles/celeroExperimentSortingRandomInts.dir/ExperimentSortingRandomInts.cpp.o
/home/daniel/Desktop/Celero-master/experiments/ExperimentSortingRandomInts/ExperimentSortingRandomInts.cpp:125:48: error: wrong number of template arguments (0, should be 1)
template<class FwdIt, class Compare = std::less<>>
^
In file included from /usr/include/c++/4.9/string:48:0,
from /home/daniel/Desktop/Celero-master/include/celero/Benchmark.h:24,
from /home/daniel/Desktop/Celero-master/include/celero/Celero.h:43,
from /home/daniel/Desktop/Celero-master/experiments/ExperimentSortingRandomInts/ExperimentSortingRandomInts.cpp:1:
/usr/include/c++/4.9/bits/stl_function.h:367:12: error: provided for ‘template struct std::less’
struct less : public binary_function<_Tp, _Tp, bool>
^
/home/daniel/Desktop/Celero-master/experiments/ExperimentSortingRandomInts/ExperimentSortingRandomInts.cpp: In member function ‘virtual void CeleroUserBenchmark_SortRandInts_QuickSort::UserBenchmark()’:
/home/daniel/Desktop/Celero-master/experiments/ExperimentSortingRandomInts/ExperimentSortingRandomInts.cpp:139:58: error: no matching function for call to ‘quickSort(std::vector::iterator, std::vector::iterator)’
quickSort(std::begin(this->array), std::end(this->array));
^
/home/daniel/Desktop/Celero-master/experiments/ExperimentSortingRandomInts/ExperimentSortingRandomInts.cpp:139:58: note: candidate is:
/home/daniel/Desktop/Celero-master/experiments/ExperimentSortingRandomInts/ExperimentSortingRandomInts.cpp:126:6: note: template<class FwdIt, class Compare> void quickSort(FwdIt, FwdIt, Compare)
void quickSort(FwdIt first, FwdIt last, Compare cmp = Compare {})
^
/home/daniel/Desktop/Celero-master/experiments/ExperimentSortingRandomInts/ExperimentSortingRandomInts.cpp:126:6: note: template argument deduction/substitution failed:
experiments/ExperimentSortingRandomInts/CMakeFiles/celeroExperimentSortingRandomInts.dir/build.make:54: recipe for target 'experiments/ExperimentSortingRandomInts/CMakeFiles/celeroExperimentSortingRandomInts.dir/ExperimentSortingRandomInts.cpp.o' failed
make[2]: *** [experiments/ExperimentSortingRandomInts/CMakeFiles/celeroExperimentSortingRandomInts.dir/ExperimentSortingRandomInts.cpp.o] Error 1
CMakeFiles/Makefile2:605: recipe for target 'experiments/ExperimentSortingRandomInts/CMakeFiles/celeroExperimentSortingRandomInts.dir/all' failed
make[1]: *** [experiments/ExperimentSortingRandomInts/CMakeFiles/celeroExperimentSortingRandomInts.dir/all] Error 2
Makefile:117: recipe for target 'all' failed
make: *** [all] Error 2
ryan@DevPC-LX:~/stuff/Celero$ ./celeroDemoComparison
./celeroDemoComparison: symbol lookup error: ./celeroDemoComparison: undefined symbol: _ZN6celero16RegisterBaselineEPKcS1_mmSt10shared_ptrINS_7FactoryEE
ryan@DevPC-LX:~/stuff/Celero$ ldd ./celeroDemoComparison
linux-vdso.so.1 => (0x00007ffebb9cb000)
libcelero.so => /home/ryan/stuff/Celero/libcelero.so (0x00007fbb0164d000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fbb01314000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fbb0100e000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fbb00df8000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbb00a32000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fbb00814000)
/lib64/ld-linux-x86-64.so.2 (0x00007fbb01888000)
ryan@DevPC-LX:~/stuff/Celero$
Hi,
I get the following error on the latest master branch:
[sgwood@mstdev31 Celero (master)]cmake .
-- The C compiler identification is GNU 4.8.3
-- The CXX compiler identification is GNU 4.8.3
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- SYSTEM: Linux
CMAKE_CXX_COMPILER: /usr/bin/c++
CMake Error at experiments/CMakeLists.txt:6 (ADD_SUBDIRECTORY):
The source directory
/nis_home/sgwood/programs/Celero/experiments/CMakeFiles
does not contain a CMakeLists.txt file.
-- Configuring incomplete, errors occurred!
Exclude noise from the measured time.
I'd like to measure the time of some function. This function takes a variable of a user-defined type and modifies it, so for repeated calls I need to restore the input parameter before each iteration. As a result, the measured benchmark time is the time taken to execute the function (useful) plus the time taken to restore the input parameter (noise).
Is there a way to exclude this noise from the report?
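One workaround, pending framework support: time the restore step by itself and subtract it, which is the same idea as Celero's baseline normalization. A minimal chrono-based sketch follows; `modifyInput`, `timeIterations`, and `usefulMicroseconds` are illustrative names, not Celero API.

```cpp
#include <chrono>
#include <vector>

// Hypothetical workload: modifyInput mutates its argument, so each
// iteration must first restore it -- that restore is the "noise".
inline void modifyInput(std::vector<int>& v)
{
    for (auto& x : v) x += 1;
}

// Time `iterations` calls of `body`; returns elapsed microseconds.
template <class F>
double timeIterations(F&& body, int iterations)
{
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) body();
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::micro>(stop - start).count();
}

// Subtract the restore-only time from the full (restore + work) time
// to estimate the useful work by itself.
inline double usefulMicroseconds(const std::vector<int>& pristine, int iterations)
{
    const double full = timeIterations([&] {
        std::vector<int> input = pristine; // noise
        modifyInput(input);                // useful work
    }, iterations);
    const double noise = timeIterations([&] {
        std::vector<int> input = pristine; // noise by itself
    }, iterations);
    return full - noise;
}
```

In Celero terms, the same effect should be achievable by defining a BASELINE case that performs only the restore, so the benchmark's baseline-normalized score factors the noise out.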
The TestFixture class calls setUp once before the iterations' calls to UserBenchmark, and tearDown once after. But the state of the benchmark has to be cleared on each iteration. As a result, the experiment sortingRandomInts mentioned in the docs sorts a random array once and then sorts the previously sorted array 9999 more times for each of the 30 samples...
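Until the fixture offers a per-iteration hook, one workaround is to reset the state inside the measured body itself, accepting that the reset is then timed too. A minimal sketch (names are illustrative, not the Celero fixture API):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Re-shuffle inside the measured body so every call really sorts a
// random array; the shuffle cost is then included in the measurement.
void sortRandomIntsOnce(std::vector<int>& array, std::mt19937& rng)
{
    std::shuffle(array.begin(), array.end(), rng); // reset state each call
    std::sort(array.begin(), array.end());         // the work under test
}
```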
As far as I understand the code, the function onExperimentStart gets executed after starting the timer, not before each sample as illustrated in the general program flow (TestFixture.cpp:65).
So is there another onExperimentStart method, or is the flow example wrong?
Hello!
I noticed Celero prints an additional trailing comma while exporting data to a CSV file.
This can be annoying when plotting the data.
Title says all. Running different numbers of samples still works.
Thanks for the great project, first of all.
celero::DoNotOptimizeAway only cheats the compiler by calling putchar on the first char of the data. But the compiler (at least GCC 4.7) is smart enough to keep only the calculation of that first char and optimize the other parts away.
Example:
Vector3 u, v;
celero::DoNotOptimizeAway(u + v);
Here the compiler will only calculate u[0] + v[0] and ignore u[1] + v[1] and u[2] + v[2]; this can be verified in the generated asm code.
I have a dirty fix:

    #include <cstdio>     // putchar
    #ifdef WIN32
    #include <process.h>  // _getpid
    #else
    #include <unistd.h>   // getpid
    #endif

    // Write every byte of the datum to stdout, so the compiler has to
    // materialize the whole object, not just its first char.
    template <class T>
    void _dump_to_std(T&& datum)
    {
        char* p = static_cast<char*>(static_cast<void*>(&datum));
        for (size_t i = 0; i < sizeof(T); ++i)
        {
            putchar(*p);
            ++p;
        }
    }

    ///
    /// \func DoNotOptimizeAway
    ///
    /// \author Andrei Alexandrescu
    ///
    template <class T>
    void DoNotOptimizeAway(T&& datum)
    {
        // getpid() is never 1 for a user process, so the dump never
        // actually runs, but the compiler cannot prove that and must
        // keep the full computation of datum alive.
    #ifdef WIN32
        if (_getpid() == 1)
    #else
        if (getpid() == 1)
    #endif
        {
            _dump_to_std(datum);
        }
    }
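An alternative that avoids the putchar trick entirely, sketched here after the approach popularized by Google Benchmark: an empty inline-asm statement that claims to read the value, forcing the compiler to materialize all of it. This works on GCC and Clang but is not portable to MSVC; the name `DoNotOptimizeSketch` is illustrative, not Celero's API.

```cpp
// Empty asm that "reads" value and clobbers memory: the compiler must
// compute the whole object, yet no instruction is emitted. GCC/Clang only.
template <class T>
inline void DoNotOptimizeSketch(T const& value)
{
    asm volatile("" : : "r,m"(value) : "memory");
}
```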
Attempting to compile on FreeBSD fails due to use of putchar without including stdio.h
Confirmed that adding the include fixes the issue.
Modern GPUs do an excellent job with int32/fp32, but at least NVidia's (and possibly AMD's) underperform on int64, which has a significant impact on some applications, whether they involve fixed point, timekeeping, or workhorse types for algorithms such as the numeric types reproblas is built on. I can understand why fp64 does badly in some cases, but int64 has no excuse when int32 performs so many times faster.
Of course these guesses may be incorrect; I've only heard of them as hearsay.
This issue calls for each benchmark to be extended to support int32/int64, which may help people better understand which GPUs will work for which applications.
Maybe add an option for selecting which classes you care about, to preserve the existing subset of functionality (e.g., if you didn't want to run 2x as many tests...).
Celero/include/celero/Pimpl.h:52:32: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
const T* const operator->() const;
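The warning concerns the trailing `const` on the returned pointer: a top-level qualifier on a by-value return type is ignored by the language, so GCC flags it. A minimal sketch of the fix (a hypothetical stand-in, not Celero's actual Pimpl class):

```cpp
// Was `const T* const operator->() const;`. The second const is a
// top-level qualifier on the returned pointer value, which is ignored
// on by-value returns and triggers -Wignored-qualifiers. Drop it.
template <class T>
class PimplSketch
{
public:
    explicit PimplSketch(T* p) : impl(p) {}

    const T* operator->() const { return impl; } // fixed signature
    T* operator->() { return impl; }

private:
    T* impl;
};
```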
Support choosing individual test groups to be executed via command line. Pattern after googletest.
If Celero is compiled in Debug, it should print a notice on the terminal that the results are for debugging only, since any measurements made in a Debug build are likely not representative of Release results.
Currently https://github.com/DigitalInBlue/Celero only links to http://www.helleboreconsulting.com, which is not useful for getting started with Celero.
With Google I did find some documentation for Celero, e.g.:
http://www.codeproject.com/Articles/525576/Celero-A-Cplusplus-Benchmark-Authoring-Library
Would it be possible to link to all useful Celero documentation from the README, so that it appears on the front page: https://github.com/DigitalInBlue/Celero ?