Simple cross-platform NodeJS CLI tool to benchmark different programs
Go to the Release section of the GitHub repository and download `node-benchmark-cli-{version}.tgz`. Install it with

```sh
npm i node-benchmark-cli-{version}.tgz
```

You can optionally install it globally with

```sh
npm i --location=global node-benchmark-cli-{version}.tgz
```

where you have to replace `{version}` with the version number you downloaded (e.g. 0.1.0).

If you installed it locally you can run it with

```sh
npx node-benchmark-cli
```

If instead you installed it globally you can run it with

```sh
node-benchmark-cli
```
```
Usage: node-benchmark-cli [options] [dir]

Arguments:
  dir                 the root directory where the programs to benchmark are (default: ".")

Options:
  -v, --verbose       verbose output (default: false)
  -n, --num_runs <n>  number of runs of each configuration (default: 1)
  -h, --help          display help for command
```
- `dir`: the directory which contains the programs to benchmark (default: `.`)
- `-v, --verbose`: enable verbose output (default: false)
- `-n, --num_runs <n>`: specify how many times each benchmark should be run (default: 1)
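For instance, a hypothetical invocation combining these options (assuming your programs live under a `./benchmark` directory) might look like:

```shell
# Run every configuration found under ./benchmark 5 times, with verbose output
npx node-benchmark-cli -v -n 5 ./benchmark
```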
In order to benchmark a given program you should define a `.benchmark.(js|mjs|cjs)` file where you specify how to build, run and clean up your program. You can place this file wherever you want, as long as it is within the directory you pass to this program.

This program will scan the directory you specify through the CLI and look for every `.benchmark.(js|mjs|cjs)` file it can find.

NOTE: the directory scan will follow the `.gitignore` rules you specify, so keep that in mind when deciding where to create the `.benchmark.(js|mjs|cjs)` file: if it is within an ignored directory it won't be found.

Once it has found every `.benchmark.(js|mjs|cjs)` file within the directory and its subdirectories, it will start running every configuration the number of times you specified.

The `.benchmark.(js|mjs|cjs)` file is basically a NodeJS module which exports one or more configurations. Each configuration has the following structure:
```ts
{
  name: string;
  build?: string | CallableFunction;
  run: string;
  cleanup?: string | CallableFunction;
};
```
Every configuration must specify a `name` and how to `run` the program. Sometimes, though, you may need to build your program, and you may want to clean up the files generated by the build after the program has been benchmarked. For this reason you can optionally specify two additional properties: `build` and `cleanup`.

NOTE: while `run` can only be specified as a string, since it will run the program to benchmark as a child process, the `build` and `cleanup` properties can be either strings, if you want to run those as subprocesses, or JS functions which will be invoked directly by NodeJS. If you decide to use the function form, please define an `async` function, to ensure the CLI does not hang and keeps working smoothly.
Here is an example structure:

```
benchmark
├── c
│   ├── .benchmark.cjs
│   └── app.c
├── js
│   ├── .benchmark.js
│   ├── app2.mjs
│   └── app.js
├── cpp
│   ├── .benchmark.mjs
│   └── app.cpp
└── python
    ├── .benchmark.mjs
    └── app.py
```
As you can see, the program is able to load the configuration from any of the extensions `.js`, `.cjs` and `.mjs`. Ideally, every program you want to benchmark will be in its own directory with a `.benchmark.(js|mjs|cjs)` file associated with it.

Here are some examples of `.benchmark.(js|mjs|cjs)` files:
```js
// .benchmark.cjs
const { rm } = require("fs/promises");

const compiler = "gcc";
const name = "app.c";

module.exports = [
  {
    name: `${name} - not optimized`,
    build: `${compiler} app.c -o app0`,
    run: "./app0",
    cleanup: "rm app0",
  },
  {
    name: `${name} - optimized`,
    build: `${compiler} app.c -O3 -o app3`,
    run: "./app3",
    async cleanup() {
      await rm("app3");
    },
  },
];
```
```js
// .benchmark.mjs
const name = "app.js";

export default {
  name,
  run: "node app.js",
};
```
NOTE: you can use the `.js` extension, but I would suggest using either the `.cjs` extension, to tell Node that the file must be treated as a CommonJS module, or the `.mjs` extension, to tell Node that it is an ES6 module.
Each `.benchmark.(js|mjs|cjs)` file has to specify its commands and functions using paths relative to where the `.benchmark.(js|mjs|cjs)` file is, since each program/function will be run from the directory containing that file.
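For example, under the tree shown earlier, a hypothetical `cpp/.benchmark.mjs` would reference `app.cpp` relative to the `cpp` directory, not relative to the root directory you pass on the command line:

```javascript
// cpp/.benchmark.mjs -- hypothetical: all paths are relative to this file's directory
export default {
  name: "app.cpp",
  build: "g++ app.cpp -o app", // app.cpp, not cpp/app.cpp
  run: "./app",
  cleanup: "rm app",
};
```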
If you discover any bug or have any interesting ideas or suggestions on how to improve it, please feel free to open an issue.
This program measures resource consumption by running a function that retrieves the info from the OS every 40ms. Moreover, running time is measured using the `performance.now()` method, as the difference between when the child process has successfully been spawned and when it exits. This means that the results of this benchmark are not accurate enough for research purposes. The only objective of this CLI program is to provide an easy way to benchmark different algorithms implemented using different languages or different strategies.
The reason why resource consumption and time are measured this way is that this is the easiest way I came up with to ensure that this program could run on any platform without restrictions. More accurate results could have been obtained using some OS-specific code, but that falls outside the scope of this project.

So please, take this as a given and refrain from opening issues related to inaccuracy, unless the inaccuracy is caused by a bug in the code.