greensoftwarelab / energy-languages

673 stars · 31 watchers · 111 forks · 1.71 MB

The complete set of tools for energy consumption analysis of programming languages, using the Computer Language Benchmarks Game

License: MIT License

Makefile 10.90% Python 5.33% C++ 3.67% C 18.09% C# 3.94% Dart 2.27% F# 1.80% Fortran 3.01% Go 2.35% PHP 1.87% Ruby 1.76% Java 11.11% Common Lisp 3.97% Lua 3.70% Pascal 15.34% Perl 1.42% Rust 3.27% Swift 2.85% JavaScript 1.65% TypeScript 1.69%
Topics: clbg, energy, programming-languages

energy-languages's People

Contributors

ben-albrecht, felipemoz, logankilpatrick, marcocouto, meehew, states, zenunomacedo


energy-languages's Issues

TypeScript fannkuch-redux implementation skews results in paper

This is tangentially related to #3.

The 2017 paper (linked today in an article in Finland's largest newspaper about the energy efficiency of IT) implies that TypeScript is about 16 times less energy-efficient with the fannkuch-redux benchmark.
It seems that this is down to the fact that the TypeScript implementation of the program itself is inefficient.

Running hyperfine against the transpiled JavaScript for the TypeScript program (which, naturally, is just JavaScript, so should exactly match the performance of JavaScript), shows this:

$ hyperfine "node TypeScript/fannkuch-redux/fannkuchredux.js 11" "node JavaScript/fannkuch-redux/fannkuchredux.node-4.js 11"
Benchmark 1: node TypeScript/fannkuch-redux/fannkuchredux.js 11
  Time (mean ± σ):      4.518 s ±  0.079 s    [User: 4.490 s, System: 0.016 s]
  Range (min … max):    4.409 s …  4.630 s    10 runs

Benchmark 2: node JavaScript/fannkuch-redux/fannkuchredux.node-4.js 11
  Time (mean ± σ):      2.516 s ±  0.031 s    [User: 2.497 s, System: 0.012 s]
  Range (min … max):    2.454 s …  2.559 s    10 runs

Summary
  'node JavaScript/fannkuch-redux/fannkuchredux.node-4.js 11' ran
    1.80 ± 0.04 times faster than 'node TypeScript/fannkuch-redux/fannkuchredux.js 11'

(I ran these with parameter 11 because I didn't have the patience to wait for hyperfine to finish all trials at parameter 12, as used in the scripts in the repository, but the performance gap between the two programs appears to grow with larger parameters.)

Resource LFS overquota

Hi,

I can't pull the large resource files with: git lfs pull

Error message: This repository is over its data quota. Purchase more data packs to restore access.

Can these files be found in another location?

"ratio of averages" weights SLE’17 "Table 4. Normalized global results"

  1. I've managed to calculate the exact same "Time" values as those shown in SLE’17 Table 4, starting from these results tables:

    https://sites.google.com/view/energy-efficiency-languages/results

  2. So now I understand those normalized Time values show the arithmetic mean of the times for each language —

(language1 time1 + language1 time2 + language1 time3 + ... + language1 timeN) / N

— divided by the arithmetic mean of the times for C language.

In other words, "the ratio of averages" for each language to the average for C.

  3. As far as I can tell, that calculation seems to "weight" each program measurement differently.

For example, because the fannkuch-redux programs run for ~20x longer than the reverse-complement programs, the un-normalized time that fannkuch-redux contributes to the average has more weight than the time reverse-complement contributes.

That may be what was intended.

  4. On the other hand, the intention may have been that each program - binary-trees, fannkuch-redux, fasta, reverse-complement - contributed the same weight to the Table 4 average.

If each program was intended to contribute the same weight to the Table 4 average, then should Table 4 show "the average of ratios"?

https://jlmc.medium.com/understanding-three-simple-statistics-for-data-visualizations-2619dbb3677a

https://dl.acm.org/doi/pdf/10.1145/5666.5673
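The distinction above can be made concrete with a small Python sketch. The times below are made up for illustration (they are not the paper's data); the point is only that the two aggregations can disagree substantially when benchmarks run for very different lengths of time:

```python
# Synthetic per-benchmark times in seconds (illustrative only, not SLE'17 data).
# fannkuch-redux dominates the sums because it runs far longer.
c_times    = {"binary-trees": 10.0, "fannkuch-redux": 200.0,
              "fasta": 12.0, "reverse-complement": 10.0}
lang_times = {"binary-trees": 30.0, "fannkuch-redux": 220.0,
              "fasta": 24.0, "reverse-complement": 40.0}

benchmarks = list(c_times)
n = len(benchmarks)

# Ratio of averages: average over programs first, then normalize by C's average.
# Long-running benchmarks carry more weight.
ratio_of_averages = (sum(lang_times[b] for b in benchmarks) / n) / \
                    (sum(c_times[b] for b in benchmarks) / n)

# Average of ratios: normalize each program by C first, then average the ratios.
# Every benchmark carries equal weight.
average_of_ratios = sum(lang_times[b] / c_times[b] for b in benchmarks) / n

print(f"ratio of averages: {ratio_of_averages:.2f}")
print(f"average of ratios: {average_of_ratios:.2f}")
```

With these numbers the ratio of averages is about 1.35 while the average of ratios is about 2.53, because fannkuch-redux (where the hypothetical language is only slightly slower than C) dominates the unweighted sums.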

TypeScript vs JavaScript

Hi everyone,

It seems that it is not fair to have TypeScript in these tools, because it compiles to esnext (not es5). In that case, performance, energy consumption, and memory usage depend on Node.js (which runs the esnext code).

I guess that compiled TypeScript code is almost the same as JavaScript code written to solve the same problem (and should have almost the same results as pure JavaScript), so we should have one of these solutions:

  1. JavaScript (es5) and JavaScript (esnext)
  2. JavaScript (es5) and TypeScript (compiled to es5 not esnext)

results of the energy measurements are all zeros

Hi,

I am running the energy measurement benchmark tests for Python. The tests run on a Coffee Lake CPU (CPU model 158). Since this model is not among the 5 models supported in the rapl.c file, I overrode the model read from my machine, forcing it to 60 (i.e., HASWELL), and then recompiled to get the main binary.

In the result CSV file (Python.csv), all the metrics I got are zeros except the execution time (the last column), as shown below.

[screenshot: Python.csv with zero-valued energy columns]

Does anyone get a similar issue? Could this be caused by the changes I made to the rapl.c file? If so and I have only the Coffee Lake CPU which is not in the supported CPUs, is there a way I can run the measurements anyway?
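One way to sanity-check whether the RAPL counters are readable at all on this CPU, independently of rapl.c's hard-coded model table, is the kernel's powercap sysfs interface, which supports many newer CPU models. This is a hypothetical helper, not part of this repository; the 32-bit counter range used for wraparound is an assumption (the real per-domain range is in max_energy_range_uj):

```python
# Hypothetical RAPL sanity check via /sys/class/powercap (Linux, Intel CPUs).
# Not part of this repository's tooling.
import glob
import time

def read_energy_uj():
    """Return {domain_name: energy_uj} for every readable powercap RAPL domain."""
    readings = {}
    for zone in glob.glob("/sys/class/powercap/intel-rapl:*"):
        try:
            with open(zone + "/name") as f:
                name = f.read().strip()
            with open(zone + "/energy_uj") as f:
                readings[name] = int(f.read())
        except OSError:
            pass  # domain not readable (often needs root)
    return readings

def energy_delta_uj(before, after, max_range_uj=2**32):
    """Microjoules consumed between two readings, handling counter wraparound.
    max_range_uj=2**32 is an assumed default; real hardware exposes the actual
    range in each domain's max_energy_range_uj file."""
    return {name: (after[name] - before[name]) % max_range_uj
            for name in before if name in after}

if __name__ == "__main__":
    start = read_energy_uj()
    time.sleep(1)
    joules = {k: v / 1e6 for k, v in energy_delta_uj(start, read_energy_uj()).items()}
    print(joules or "no readable RAPL domains (try running as root)")
```

If this prints non-zero joule values but the main binary still reports zeros, the problem is likely the forced model ID rather than the hardware.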

Thanks,

Aaron

Inconsistent indentation in compile_all.py scripts may have unintended consequences

While converting the Python 2 scripts to Python 3 today I noticed that all of the compile_all.py scripts are very similar. However, a few of them have slightly different formatting. In each compile_all.py the code checks to see if the action value is equal to measure. In the following files this code is indented an extra level:

Fortran/compile_all.py
Java/compile_all.py
Lua/compile_all.py
JRuby/compile_all.py
FSharp/compile_all.py
Perl/compile_all.py
Java-GraalVM/compile_all.py
compile_all.py
Chapel/compile_all.py
Ada/compile_all.py
Racket/compile_all.py
Go/compile_all.py
OCaml/compile_all.py

The effect of this is that the files in this list check whether the action is measure only after checking that the Makefile exists. The files that are not in this list check if the action is measure whether or not the Makefile exists.

When action is equal to measure, the compile_all.py script sleeps for 5 seconds. I have not looked into how this affects the outcome, but the scripts should probably be consistent even if there is no effect on the end results. I'm assuming that the check should only happen when the Makefile exists.
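The file names above are from the repository, but the snippet below is a reconstructed sketch (function and variable names are assumed, not taken from the actual scripts) showing how the extra indentation level changes the control flow:

```python
# Reconstructed sketch of the two compile_all.py shapes (names assumed).
import os
import time

def sketch_indented(path, action):
    # Variant in the files listed above: the measure-sleep only happens
    # when a Makefile exists in the benchmark directory.
    if os.path.exists(os.path.join(path, "Makefile")):
        # ... compile / measure steps would go here ...
        if action == "measure":
            time.sleep(5)

def sketch_unindented(path, action):
    # Variant in the remaining files: the measure-sleep happens for every
    # benchmark directory, whether or not a Makefile exists.
    if os.path.exists(os.path.join(path, "Makefile")):
        pass  # ... compile / measure steps ...
    if action == "measure":
        time.sleep(5)
```

In a directory without a Makefile, the first variant skips the 5-second sleep entirely while the second still sleeps, so the total wall-clock time of a measurement run differs between the two shapes.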

Update JavaScript engine

The engine used by the project dates from 2017, which means that since then there could have been quite a few optimisations that could yield very different results.

Free Pascal: inlining is not turned on

The Free Pascal code uses inline functions, but inlining is off.
Here is the documentation about this feature:

By default, inline procedures are not allowed. Inline code must be enabled using the command-line switch -Si or {$inline on} directive.

So there are two possible fixes:

  1. Add the {$inline on} directive at the beginning of the source code;
  2. Add the -Si switch to the makefile.

Trouble loading msr

Hi. I have had some trouble running energy-languages. When I try to modprobe msr, I get a message that the module is not found. Which package do I need to run it?

Use Better C# Framework

Hi,

As far as I can see, the C# benchmarks are using .NET Core 1.1, which was released in 2016. There have been many upgrades since then. I think using .NET Core 1.1 for C# can be misrepresentative, because it can be considered an experimental version of cross-platform C#, and many improvements have been made to the framework since; the latest is currently .NET 6. Performance is much better in the newer frameworks.

I think using .NET 5 or .NET 6 would better represent the C# community here.

Thank you,
Dogac

Python examples

Very interesting initiative!

About the Python benchmark, maybe it would be great to check the results using numpy and/or numba (and also PyPy), because nowadays no one would use pure Python (when possible) when performance is important.

Thanks!

Add 'Lines of Code' to listing

It would be interesting to compare program complexity (of which LOC is some sort of measure) to the energy and time usage.
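A rough LOC count could be collected with a small helper like the one below. This is a hypothetical sketch, not part of the repository; the comment prefixes are assumptions and block comments are not handled, so it is only a crude proxy for complexity:

```python
# Hypothetical LOC counter (not part of this repository): counts non-blank
# lines that don't start with a single-line comment prefix. Block comments
# (/* ... */, (* ... *), etc.) are deliberately not handled; this is a
# rough proxy, not a precise metric.

def count_loc(path, comment_prefixes=("#", "//", "--")):
    """Return a rough lines-of-code count for one source file."""
    with open(path, encoding="utf-8", errors="replace") as f:
        stripped = (line.strip() for line in f)
        return sum(1 for line in stripped
                   if line and not line.startswith(comment_prefixes))
```

Running this over each benchmark implementation would give one extra column to set beside the energy and time figures, though a token- or AST-based measure would be less sensitive to formatting style.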

Is JavaScript being benchmarked with an outdated version of Node.js?

The JavaScript and TypeScript benchmarks are using Node.js v7.9.0, which was released on 2017-04-11 and uses V8 version 5.5.372.43.
Is there a specific reason to use such an old Node release?
I think that using an outdated version of v8 might invalidate the benchmark results.

I think we should use the oldest maintenance LTS (Dubnium, v10.22.1, running V8 version 6.8.275.32), or maybe the active LTS (Erbium, v12.19.0, running V8 version 7.8.279.23).
