Python script to create a compiler workload and evaluate end-to-end system performance.
This script is intended to be used with MemFS to generate a workload for evaluating system performance. In the specified directory, the workload performs the following steps (a minimal sketch follows the list):
- Clone a repository
- Make/build in the repository
- Stat every file
- Report time and stats
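As a rough illustration, the four stages might look like the sketch below. The helper name run_workload, its arguments, and the timing structure are my own assumptions rather than the script's actual layout:

    import os
    import subprocess
    import time

    def run_workload(repo_url, workdir, build_cmd):
        """Clone, build, and stat a repository, timing each stage."""
        timings = {}

        # Stage 1: clone the repository into the working directory.
        start = time.monotonic()
        subprocess.run(["git", "clone", repo_url, workdir], check=True)
        timings["clone"] = time.monotonic() - start

        # Stage 2: run the project's build command (e.g. ["make"]).
        start = time.monotonic()
        subprocess.run(build_cmd, cwd=workdir, check=True)
        timings["build"] = time.monotonic() - start

        # Stage 3: stat every file under the tree.
        start = time.monotonic()
        nfiles = 0
        for root, _dirs, files in os.walk(workdir):
            for name in files:
                os.stat(os.path.join(root, name))
                nfiles += 1
        timings["stat"] = time.monotonic() - start

        # Stage 4: report times and file count to the caller.
        return timings, nfiles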
Because compiling a project differs from system to system, project-specific steps are embedded in the workload script. The following projects are supported (one plausible shape for that per-project table is sketched after the list):
- Redis (76MB, 591 files, 137,475 LOC)
- Postgres (422MB, 4,807 files, 910,948 LOC)
- Nginx (62MB, 440 files, 155,056 LOC)
- Apache Web Server (362MB, 4,059 files, 503,006 LOC)
- Ruby (197MB, 3,281 files, 918,052 LOC)
- Python 3 (382MB, 3,570 files, 931,814 LOC)
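Internally, the per-project steps could be kept in a simple registry; the repository URLs and build commands below are illustrative guesses, not taken from workload.py:

    # Hypothetical project registry; the URLs and build steps shown
    # here are assumptions for illustration only.
    PROJECTS = {
        "redis": {
            "url": "https://github.com/redis/redis.git",
            "build": [["make"]],
        },
        "nginx": {
            "url": "https://github.com/nginx/nginx.git",
            "build": [["auto/configure"], ["make"]],
        },
        "postgres": {
            "url": "https://github.com/postgres/postgres.git",
            "build": [["./configure"], ["make"]],
        },
    }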
Running a workload is as follows:
$ python3 workload.py -o results.csv -p nginx /tmp/testdir
This runs the workload on the Nginx repository, building in /tmp/testdir and appending the results to results.csv. The available options and their defaults can be inspected with:
$ python3 workload.py --help
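The interface implied by the invocation above could be wired up roughly as follows; beyond the -o and -p flags shown, the long option names, default values, and the append_result helper are my assumptions:

    import argparse
    import csv

    def parse_args():
        parser = argparse.ArgumentParser(
            description="Run a compiler workload and record timings.")
        parser.add_argument("directory",
                            help="directory to clone and build in")
        parser.add_argument("-p", "--project", default="redis",
                            help="project to build (e.g. nginx, redis)")
        parser.add_argument("-o", "--output", default="results.csv",
                            help="CSV file to append results to")
        return parser.parse_args()

    def append_result(path, row):
        # Open in append mode so repeated runs accumulate in one CSV;
        # the file is created on first use.
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow(row)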
Note that this script requires a number of system and build-toolchain dependencies to be available. Because they were already installed on my system, I don't have a complete list that I can expose through a requirements file. At a minimum, however, Git is required, as are the Xcode developer tools on a MacBook Pro.
For testing MemFS, I've created a simple script that runs through a single instance of the testing protocol (sketched below). Use it with care, however, as it is built for a specific system.
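In spirit, that driver is just a one-shot wrapper around workload.py; the mount point and project choice below are placeholders for my setup, not universal values:

    #!/usr/bin/env python3
    """Run one iteration of the testing protocol against a MemFS mount."""
    import subprocess

    MOUNT_POINT = "/tmp/memfs"   # assumed MemFS mount point; adjust per system
    RESULTS = "results.csv"

    # Invoke the workload once against the mounted filesystem.
    subprocess.run(
        ["python3", "workload.py", "-o", RESULTS, "-p", "nginx", MOUNT_POINT],
        check=True,
    )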