greensoftwarelab / energy-languages
The complete set of tools for energy consumption analysis of programming languages, using the Computer Language Benchmarks Game.
License: MIT License
This is tangentially related to #3.
The 2017 paper (linked today in an article in Finland's largest newspaper about the energy efficiency of IT) implies that TypeScript is about 16 times less energy-efficient with the fannkuch-redux benchmark.
It seems that this is down to the fact that the TypeScript implementation of the program itself is inefficient.
Running hyperfine against the JavaScript transpiled from the TypeScript program (which, naturally, is just JavaScript, and so should match the performance of the JavaScript entry exactly) shows this:
$ hyperfine "node TypeScript/fannkuch-redux/fannkuchredux.js 11" "node JavaScript/fannkuch-redux/fannkuchredux.node-4.js 11"
Benchmark 1: node TypeScript/fannkuch-redux/fannkuchredux.js 11
Time (mean ± σ): 4.518 s ± 0.079 s [User: 4.490 s, System: 0.016 s]
Range (min … max): 4.409 s … 4.630 s 10 runs
Benchmark 2: node JavaScript/fannkuch-redux/fannkuchredux.node-4.js 11
Time (mean ± σ): 2.516 s ± 0.031 s [User: 2.497 s, System: 0.012 s]
Range (min … max): 2.454 s … 2.559 s 10 runs
Summary
'node JavaScript/fannkuch-redux/fannkuchredux.node-4.js 11' ran
1.80 ± 0.04 times faster than 'node TypeScript/fannkuch-redux/fannkuchredux.js 11'
(I ran these with parameter 11 because I didn't have the patience to wait for hyperfine to finish all trials at parameter 12, as used in the scripts in the repository, but the performance gap between the two programs seems to widen as the parameter grows.)
Hi,
I can't pull the large resource files using git lfs pull.
Error message: This repository is over its data quota. Purchase more data packs to restore access.
Can these files be found in another location?
Hello,
I have decided to run the experiment myself, but I can't figure out what the data generated in the {language}.csv files represents. Mind helping me out?
I've managed to calculate the exact same "Time" values as those shown in SLE'17 Table 4, starting from these results tables:
https://sites.google.com/view/energy-efficiency-languages/results
So now I understand: each normalized Time value is the arithmetic mean of the times for a language —
(language1 time1 + language1 time2 + … + language1 timeN) / N
— divided by the arithmetic mean of the times for C.
In other words, "the ratio of averages": each language's average over C's average.
For example, because the fannkuch-redux programs run for roughly 20x longer than the reverse-complement programs, the un-normalized time fannkuch-redux contributes to the average has more weight than the time reverse-complement contributes.
That may be what was intended.
But if each program was intended to contribute equal weight to the Table 4 average, then Table 4 should show "the average of ratios" instead?
https://jlmc.medium.com/understanding-three-simple-statistics-for-data-visualizations-2619dbb3677a
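The difference between the two statistics can be sketched in a few lines of Python. The numbers below are made up for illustration only (they are not the paper's data); they just mimic one long-running benchmark and one short one:

```python
# Illustrative benchmark times in seconds (made-up numbers, not the paper's data).
# fannkuch-redux runs much longer than reverse-complement, so it dominates
# the "ratio of averages" but not the "average of ratios".
c_times = {"fannkuch-redux": 10.0, "reverse-complement": 0.5}
lang_times = {"fannkuch-redux": 40.0, "reverse-complement": 0.5}

benchmarks = list(c_times)

# Ratio of averages: mean of the language's times over mean of C's times.
ratio_of_averages = (
    sum(lang_times[b] for b in benchmarks) / len(benchmarks)
) / (sum(c_times[b] for b in benchmarks) / len(benchmarks))

# Average of ratios: per-benchmark ratio first, then the mean.
average_of_ratios = sum(
    lang_times[b] / c_times[b] for b in benchmarks
) / len(benchmarks)

print(ratio_of_averages)  # 40.5 / 10.5 ≈ 3.86 — dominated by fannkuch-redux
print(average_of_ratios)  # (4.0 + 1.0) / 2 = 2.5 — each benchmark weighted equally
```

With the long benchmark 4x slower and the short one identical, the two statistics disagree noticeably, which is exactly the weighting question raised above.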
Hi everyone,
it seems unfair to include TypeScript in these tools, because it is compiled to esnext (not es5). In that case performance, energy consumption, and memory usage depend on NodeJS (which runs the esnext code).
I guess that compiled TypeScript code is almost the same as JavaScript code written to solve the same problem (and should produce almost the same results as pure JavaScript), so one of these solutions should be adopted:
- compare TypeScript (compiled to es5) and JavaScript (esnext)
- compare JavaScript (es5) and TypeScript (compiled to es5, not esnext)
Hi,
I am running the energy measurement benchmark tests for Python. The tests are run on a Coffee Lake CPU (CPU model 158). Since this specific model is not among the 5 models supported in the rapl.c file, I overwrote the model read from my machine to force it to 60, i.e., HASWELL. Then I recompiled it to get the main binary.
In the result CSV file (Python.csv), all the metrics I got are zeros except the execution time, i.e., the last column, as shown below.
Does anyone get a similar issue? Could this be caused by the changes I made to the rapl.c file? If so and I have only the Coffee Lake CPU which is not in the supported CPUs, is there a way I can run the measurements anyway?
Thanks,
Aaron
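One thing worth trying on a CPU that rapl.c's model table doesn't know: recent Linux kernels expose the same RAPL counters through the powercap sysfs interface, which does not depend on a per-model table. A minimal sketch follows; the sysfs path is the usual one on Intel Linux boxes but may differ per machine, and the helper names (cpu_model, read_package_energy_uj) are mine, not from this repository:

```python
import re

def cpu_model(cpuinfo_text):
    """Parse the CPU model number from /proc/cpuinfo-style text.

    Matches the bare "model :" line, not "model name :"."""
    match = re.search(r"^model\s*:\s*(\d+)", cpuinfo_text, re.MULTILINE)
    return int(match.group(1)) if match else None

def read_package_energy_uj(path="/sys/class/powercap/intel-rapl:0/energy_uj"):
    """Read the package energy counter (microjoules) via the powercap
    sysfs interface; sample it before and after a run and subtract.
    Path is an assumption: the usual location on Intel Linux systems."""
    with open(path) as f:
        return int(f.read())

# Example: Coffee Lake desktop parts report model 158 (0x9E).
sample = "processor : 0\nmodel : 158\nmodel name : Intel(R) Core(TM) i7-8700\n"
print(cpu_model(sample))  # 158
```

Forcing model 158 to be treated as 60 may also silently read the wrong MSR layout, which would explain the zero columns, so cross-checking against powercap seems worthwhile.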
fasta implementation for C is broken; "Segmentation fault"
While converting the Python 2 scripts to Python 3 today I noticed that all of the compile_all.py scripts are very similar. However, a few of them have slightly different formatting. In each compile_all.py the code checks whether the action value is equal to measure. In the following files this check is indented an extra level:
Fortran/compile_all.py
Java/compile_all.py
Lua/compile_all.py
JRuby/compile_all.py
FSharp/compile_all.py
Perl/compile_all.py
Java-GraalVM/compile_all.py
compile_all.py
Chapel/compile_all.py
Ada/compile_all.py
Racket/compile_all.py
Go/compile_all.py
OCaml/compile_all.py
The effect is that the files in this list check whether the Makefile exists and only then check whether the action is measure. The files not in this list check whether the action is measure regardless of whether the Makefile exists.
When the action is equal to measure, the compile_all.py script sleeps for 5 seconds. I have not looked into how this affects the outcome, but the scripts should probably be consistent even if there's no effect on the end results. I'm assuming that the check should only happen when the Makefile exists.
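The two indentation variants described above boil down to the following control flow. This is a simplified skeleton with hypothetical helper names, not the repository's actual code; the steps are recorded in a list instead of executed so the difference is easy to see:

```python
import os

def compile_variant_nested(path, action):
    """Variant used by the files listed above: the 'measure' check is
    nested inside the Makefile-existence check."""
    steps = []
    if os.path.exists(os.path.join(path, "Makefile")):
        steps.append("make")
        if action == "measure":
            steps.append("sleep 5")  # only sleeps when a Makefile exists
    return steps

def compile_variant_flat(path, action):
    """Variant used by the other files: the 'measure' check runs
    whether or not a Makefile exists."""
    steps = []
    if os.path.exists(os.path.join(path, "Makefile")):
        steps.append("make")
    if action == "measure":
        steps.append("sleep 5")  # sleeps even when there is no Makefile
    return steps

# With no Makefile present, only the flat variant still sleeps:
print(compile_variant_nested("/no/such/dir", "measure"))  # []
print(compile_variant_flat("/no/such/dir", "measure"))    # ['sleep 5']
```

So the only observable divergence is whether the 5-second sleep happens for directories without a Makefile, which supports making all the scripts use one variant consistently.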
It is a performance-oriented, general-purpose language following the C style, with manual memory management.
Add LuaJIT (not Lua) to the benchmark.
It's very fast and should run at a similar speed to C.
Hey @States, following up on the conversation in a new thread. Where can the Julia community contribute results if this repo is for archive purposes only?
.chpl is the correct extension for Chapel programs. Changing the extensions will enable proper syntax highlighting on GitHub.
README.md shows 2 references to the benchmarks game website.
The reference at the top of README.md is correct.
The reference at the bottom of README.md is wrong.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/
The engine used for the project dates from 2017, which means there could since have been quite a few optimisations that would yield very different results.
The Free Pascal code uses inline functions, but inlining is off.
Here is the documentation about this feature:
By default, inline procedures are not allowed. Inline code must be enabled using the command-line switch -Si or {$inline on} directive.
So there are two possible decisions:
Hi. I have had some trouble running energy-languages. When I try to modprobe msr, I get a message that the module is not found. Which package do I need in order to run it?
Hi,
As far as I can see, the C# benchmarks are using .NET Core 1.1, which was released in 2017. There have been many upgrades to this version over the years. I think using .NET Core 1.1 to represent C# can be misleading, because .NET Core 1.1 can be considered an experimental version of cross-platform C#, and many improvements have been made to the .NET framework since. The latest is currently .NET 6. Performance is much better in the newer frameworks.
I think using .NET 5 or .NET 6 will better represent C# community here.
Thank you,
Dogac
Very interesting initiative!
About the Python benchmark: maybe it would be great to check the results using numpy and/or numba (also pypy), because nowadays no one would use pure Python (when possible) when performance is important.
Thanks!
It would be interesting to compare program complexity (of which LOC is some sort of measure) to the energy and time usage.
The JavaScript and TypeScript benchmarks use Node.js v7.9.0, which was released on 2017-04-11 and ships V8 version 5.5.372.43.
Is there a specific reason to use such an old Node release?
I think that using an outdated version of v8 might invalidate the benchmark results.
I think we should use the oldest maintenance LTS (Dubnium, v10.22.1, running V8 version 6.8.275.32), or maybe the active LTS (Erbium, v12.19.0, running V8 version 7.8.279.23).