kostya / benchmarks


Some benchmarks of different languages

License: MIT License

C 7.44% Crystal 3.14% D 4.42% Go 5.83% JavaScript 3.68% Nim 4.06% Python 3.67% Ruby 7.13% Scala 4.15% Brainfuck 3.58% C++ 12.21% Rust 5.69% Java 6.10% Julia 3.22% Shell 0.39% C# 5.35% Perl 3.65% Makefile 12.72% Haskell 3.16% Clojure 0.41%
Topics: benchmarks, languages

benchmarks's People

Contributors

9il, akarin123, beached, cmcaine, dbohdan, dtolnay, gavr123456789, gohryt, goldenreign, jackstouffer, k-bx, kostya, lqdc, martinnowak, miloyip, nuald, orthoxerox, philnguyen, pmarcelll, proyb6, radszy, rap2hpoutre, ricvelozo, sfesenko, snadrus, tchaloupka, w-diesel, willabides, zapov, zhaozhixu

benchmarks's Issues

Swift

Hello

Can you add a Swift benchmark, please?

Json benchmark

Hi!

It would be interesting to see a benchmark comparing JSON parsing. You can try it with this big JSON file: https://github.com/zeMirco/sf-city-lots-json

We put a lot of effort into optimizing JSON parsing in Crystal, and we believe it might be one of the fastest out there. And, as usual, it's implemented in Crystal itself.

Here's some code you can try:

require "json"

text = File.read("citylots.json")
json = Json.parse(text) as Hash
puts json.length

Thanks!

Crystal flags for benchmarking.

Hello,

I don't know whether it's already being used, but here goes, per Crystal's own documentation:
"Make sure to always use --release for production-ready executables and when performing benchmarks."

Just my 2 cents.

What about PHP?

PHP is slow, we all know that, but it would be interesting to know by how much (it should probably be done with the PHP 7 CLI, I think).

EDIT: I could submit a PR if you want.

C# benchmark with coreclr

I know we have the benchmarks with Mono, but since CoreCLR was just released it would be great to get the benchmarks updated with it.

Update PyPy to latest.

The latest PyPy release is 5.6.0.
Speed will improve, but not by that much (it is, however, the newest release).

Brainfuck V2 implementations are broken

Many were broken by 387b17d.

The loop condition should test for zero or non-zero, not greater-than-zero.

This "Hello World!" program contains a relevant test case:

>++++++++[-<+++++++++>]<.>[][<-]>+>-[+]++>++>+++[>[->+++<<+++>]<<]>-----.
>->+++..+++.>-.<<+[>[+>+]>>]<--------------.>>.+++.------.--------.>+.>+.

NB: Some languages use an unsigned byte cell value, which would make the two condition types equivalent.
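
For illustration, here is a minimal, self-contained interpreter sketch (Python, not the repo's code) with the corrected test: '[' and ']' compare the current cell against zero instead of using a greater-than comparison.

import sys

def bf_run(src):
    code = [c for c in src if c in "+-<>.[]"]
    # Precompute matching-bracket jump targets.
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    tape, pos, ip = [0] * 30000, 0, 0
    while ip < len(code):
        c = code[ip]
        if c == "+":
            tape[pos] += 1
        elif c == "-":
            tape[pos] -= 1
        elif c == ">":
            pos += 1
        elif c == "<":
            pos -= 1
        elif c == ".":
            sys.stdout.write(chr(tape[pos] & 0xFF))
            sys.stdout.flush()
        elif c == "[" and tape[pos] == 0:    # skip the loop when the cell is zero
            ip = jump[ip]
        elif c == "]" and tape[pos] != 0:    # repeat while non-zero, not "> 0"
            ip = jump[ip]
        ip += 1

if __name__ == "__main__":
    bf_run(sys.stdin.read())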

UPDATES Required

NodeJS is now 5.7.0
Go is now 1.6

Both should have significant performance improvements

BF benchmark: Kotlin uses arrays while Java and C# use lists.

Hi. If you change C# to use int[] instead of List<int> for the tape and the program, it becomes much faster. Please align the implementations to use the same abstractions. If you want, I can submit a PR for C#, but I think it's better to change the Kotlin version.

Julia native result reported for Matmul is not from xtime

...or at least, I strongly suspect so.

I think you used its own self-reported time rather than the output of xtime.rb by mistake in this case. matmul-native.jl prints the time that it thinks it took. This isn't fair to the other benchmarks, because only the Julia-native code gets to ignore the overhead of the testing framework.

On my machine I get similar results for Rust and C, but Julia-native's time is way off: instead of something close to 0.15 s I get 0.75 s, about 5x slower than reported. On the other hand, the other languages are slightly faster, which makes sense since I'm using a newer i7 instead of an i5.

Consider using JMH to run JVM benchmarks

Hi.

Since the JVM (up to JDK 8) has several issues with its warm-up process, I would like to suggest using JMH for the JVM-related benchmarks (Kotlin, Scala, and Java itself). It will generate results closer to a production environment, where the JVM has already applied most of its JIT optimizations.

BTW, I'm glad to see your benchmark initiative. Good job!

Cheers!

BF2: Kotlin does not flush stdout after each character

Printing is handled on this line of the Kotlin program. Kotlin's .print(char) function calls directly into Java's print function, which flushes on newline. The Kotlin version should flush the output stream after each character is printed, to implement the behavior specified in the README: "stdout should be flushed after each symbol."

An alternative solution would be to not flush stdout in the other languages, instead leaving it up to the standard library's natural flow.

Update Kotlin

The benchmark uses a very out-of-date version of Kotlin (1.0.3). Please update Kotlin to the latest stable version (1.3.11).

Node.js UPDATE

Please kindly update the Node.js version, or at least add a new entry along the lines of 'JavaScript Node Latest'.

Almost all the other languages and implementations are using bleeding-edge versions, except JavaScript Node.js and JavaScript V8.

The latest stable Node.js is 5.0.0.

Matmul - Julia - Single-precision floats

You use single-precision (32-bit) floats for the Julia version of Matmul. That's kind of cheating compared to the other implementations that use double-precision (64-bit) floats.

A few runtime updates

Go -> 1.7
NodeJS -> 6.4
Python3 -> 3.5.2

Removal:
JXCore - unmaintained, defunct, and abandoned

Julia code runs in global scope

In Julia, running code outside of functions carries a heavy performance penalty. For instance, simply rewriting the matrix multiplication benchmark as follows yields a factor-of-3 performance improvement on my machine:

function matgen(n)
    tmp = 1.0 / n / n
    [ float32(tmp * (i - j) * (i + j - 2)) for i=1:n, j=1:n ]
end

function main()
    n = 100
    if length(ARGS) >= 1
        n = int(ARGS[1])
    end
    t = time()
    n = int(n / 2 * 2)
    a = matgen(n)
    b = matgen(n)
    c = a * b
    v = int(n/2) + 1
    println(c[v, v])
    println(time() - t)
end

main()
main()

The same goes for the other benchmarks. Technically, comprehensions are fairly slow too (compared to unrolled @simd/@inbounds annotated for loops), but the matrix generation doesn't particularly matter in this benchmark. Also note that the main() function is invoked twice here to show the kind of performance improvement the JIT produces (roughly 300 times on my machine). In general, it is good practice in Julia to first run performance sensitive functions on a tiny dataset to invoke the JIT, then run the actual computation.

P.S. Also note that this particular benchmark implementation essentially measures the performance of whatever OpenBLAS version you compiled Julia to use and virtually any language should be able to obtain similar results.
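
To illustrate that point outside Julia (a Python sketch, not part of the repo): a numpy matrix product of comparable size mostly measures whatever BLAS library numpy was built against, not the language itself.

import time
import numpy as np

n = 1500
a = np.random.rand(n, n)
b = np.random.rand(n, n)
t = time.perf_counter()
c = a @ b  # dispatches to the BLAS library numpy links against
print(c[n // 2, n // 2])
print(time.perf_counter() - t)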

Brainfuck v2 implementations are broken

The benchmarked implementations of brainfuck are (mostly) not correct. I tested the C and Python versions, but I suspect they all share an algorithmic bug.

Failing testcase: http://esoteric.sange.fi/brainfuck/bf-source/prog/BOTTLES.BF

Correct reference interpreter (generator): https://github.com/pablojorge/brainfuck/blob/master/haskell/bf2c.hs

Expected behavior: print bottles from 99 to zero, quickly.

Actual behavior: BF interpreters freeze after 91 bottles remain.

Julia Timing

One thing I discovered with Julia is that the current benchmark is not very accurate. It would be better to call @time main() in order to get the time and memory consumption without the JIT overhead, for more accurate results. I have found this makes some difference in the results; for example, with brainfuck, the results show Julia to be only 0.45 seconds slower than Crystal.

Suggestion: add compile/build duration

One concern about newer (Rust, Scala, Swift) or older (Haskell) compiled languages is the build/compile speed; it would be nice to also see the build duration.

Mono is faster with --llvm flag

Hi,

On my setup (OS X 10.10 with Mono JIT compiler version 3.12.0), running matmul.exe with the --llvm flag enabled takes 11.71 s, whereas the original took 21.60 s.

The resulting run command looks like:

../xtime.rb mono -O=all --gc=sgen --llvm matmul.exe 1500

Is there such an option on your Ubuntu setup? If so, could you check how it affects the performance?

Add PyPy3.5 to testing

There was a first PyPy3.5 beta release with Python 3.5 support; maybe include it as well?

Suggestions

Please include C compiled with Clang for the Base64 benchmark. Results on my machine:
GCC:
encode: 1333333600, 1.08
decode: 1000000000, 2.07

Clang:
encode: 1333333600, 1.23
decode: 1000000000, 1.44

Also, please modify the D implementation of the Matmul benchmark. dotProduct is optimized, while every other language uses the naive implementation (that's why D is so fast in this benchmark).
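
For clarity, "the naive implementation" means the plain triple loop, roughly like this Python sketch (not any of the repo's actual sources):

def matmul_naive(a, b):
    # Straightforward triple loop: no library dot product, no BLAS.
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c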

Mandelbrot implementation

I think I missed where the Mandelbrot benchmark is implemented.

Could you link to it in the README file?

Also, it's very cool that you reference your sources for the origin of many of these benchmarks.
Thanks!

Nim & Clang Update

You're using the very latest versions of the Rust and Go compilers, but your Nim is 8.5 months behind...

The current version of Nim is 0.16.0 stable (or 0.16.1 devel) and Clang 3.9.1 (or ideally 4.0 SVN).

Also please make sure you're compiling Nim code with -d:release.

Thank you very much for a great benchmark! 🥇 😃

C++ for bench.b could be implemented 20% faster for x64 and twice as fast for x86

The numbers are with printing disabled.

#include <cstddef>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>

using std::ptrdiff_t;
using std::string;
using std::vector;

// Assumed harness hook (not part of the original snippet): toggles output.
// The numbers above were measured with printing disabled.
static bool do_print() { return false; }

namespace modified
{
	enum op_type {
		INC,
		MOVE,
		LOOP,
		PRINT
	};

	struct Op;
	using Ops = vector<Op>;

	using data_t = ptrdiff_t;

	struct Op
	{
		op_type op;
		data_t val;
		Ops loop;
		Op(Ops v) : op(LOOP), loop(v) {}
		Op(op_type _op, data_t v = 0) : op(_op), val(v) {}
	};

	class Tape
	{
		using vect = vector<data_t>;
		vect	tape;
		vect::iterator	pos;
	public:
		Tape()
		{
			tape.reserve(8);
			tape.push_back(0);
			pos = tape.begin();
		}

		inline data_t get() const
		{
			return *pos;
		}
		inline void inc(data_t x)
		{
			*pos += x;
		}
		inline void move(data_t x)
		{
			auto d = std::distance(tape.begin(), pos);
			d += x;
			if (d >= (data_t)tape.size())
				tape.resize(d + 1);
			pos = tape.begin();
			std::advance(pos, d);
		}
	};

	class Program
	{
		Ops ops;
	public:
		Program(const string& code)
		{
			auto iterator = code.cbegin();
			ops = parse(&iterator, code.cend());
		}

		void run() const
		{
			Tape tape;
			_run(ops, tape);
		}
	private:
		static Ops parse(string::const_iterator *iterator, string::const_iterator end)
		{
			Ops res;
			while (*iterator != end)
			{
				char c = **iterator;
				*iterator += 1;
				switch (c) {
				case '+':
					res.emplace_back(INC, 1);
					break;
				case '-':
					res.emplace_back(INC, -1);
					break;
				case '>':
					res.emplace_back(MOVE, 1);
					break;
				case '<':
					res.emplace_back(MOVE, -1);
					break;
				case '.':
					res.emplace_back(PRINT);
					break;
				case '[':
					res.emplace_back(parse(iterator, end));
					break;
				case ']':
					return res;
				}
			}
			return res;
		}

		static void _run(const Ops &program, Tape &tape)
		{
			for (auto &op : program)
			{
				switch (op.op) 
				{
				case INC:
					tape.inc(op.val);
					break;
				case MOVE:
					tape.move(op.val);
					break;
				case LOOP:
					// Strictly, a Brainfuck loop should run while the current cell is
					// non-zero (tape.get() != 0); "> 0" only coincides with that when
					// cells never go negative (see the "Brainfuck V2 implementations
					// are broken" issue above).
					while (tape.get() > 0)
						_run(op.loop, tape);
					break;
				case PRINT:
					if (do_print())
					{
						printf("%c", (int)tape.get());
						fflush(stdout);
					}
					break;
				}
			}
		}
	};
}
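
// Minimal driver (an assumption, not part of the issue's snippet): read the
// Brainfuck program named on the command line and run the modified interpreter.
int main(int argc, char **argv)
{
	if (argc < 2)
	{
		fprintf(stderr, "usage: %s program.b\n", argv[0]);
		return 1;
	}
	std::ifstream file(argv[1]);
	std::stringstream buffer;
	buffer << file.rdbuf();
	modified::Program(buffer.str()).run();
	return 0;
}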

bf2_bench.zip
x86
x64

Repeat benchmarks to eliminate noise

I can run the same benchmark a few times and get wildly different results. Consider having xtime.rb loop 10-100 times and take the minimum to filter out some of this noise.
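
A rough sketch of the idea (in Python rather than the repo's Ruby harness; the command shown is hypothetical):

import subprocess
import time

def best_of(cmd, runs=10):
    # Run the command several times and keep the minimum wall-clock time,
    # which filters out most scheduling and cache noise.
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage:
# print(best_of(["./brainfuck", "bench.b"], runs=20))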

Runtime & Compiler Updates

Crystal 0.20.0 [b0cc6f7] (2016-11-22)
[latest is 0.21.0]

LDC - the LLVM D compiler (0.15.2-beta1)
[latest is 1.2.0-beta1]

DMD64 D Compiler v2.068.0
[latest is v2.073.2]

gdc (crosstool-NG crosstool-ng-1.20.0-232-gc746732 - 20150830-2.066.1-dadb5a3784) 5.2.0
[latest is 2.068.2]

Swift in Base64

It should work on Linux, and if GCD is included it should speed up significantly, becoming about as fast as Rust.

import Foundation

let strsize = 10_000_000
let tries = 100
let longString = String(repeating: "a", count: strsize)
let data = longString.data(using: .utf8)
var base64en:Data? = nil
var total: Int = 0

//Encode
for _ in 0..<tries {
    autoreleasepool {
        base64en = data!.base64EncodedData()
        total = total &+ base64en!.endIndex
    }
}
print(total)

//Decode
total = 0
for _ in 0..<tries {
    autoreleasepool {
        total = total &+ Data(base64Encoded: base64en!)!.endIndex
    }
}
print(total)

Nim 0.11.2

Nim was recently updated to 0.11.2; any news on updating?

Include Vert.x JavaScript

Please include Vert.x (http://vertx.io/) in your benchmarks, as the polyglot platform seems very promising and tends to benefit from the Java HotSpot runtime's optimizations. Perhaps some benchmark warm-up will be needed. Thanks!

Latest version of JRuby with Java 10 and Graal

Hello,
Can we run this benchmark against the latest JRuby 9.2, with Java 10, and with these options enabled for Java:
export JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -Xcompile.invokedynamic -Xfixnum.cache=false -Xmn512m -Xms2048m -Xmx2048m"

Clojure JSON benchmark

Here's a solution for Clojure using the Cheshire parser:

(require 'clojure.java.io
         '[cheshire.core :refer [parse-stream]])

(let [data (parse-stream (clojure.java.io/reader "./1.json") true)
      len  (count data)]
  (loop [sx 0.0 sy 0.0 sz 0.0 [coord & coords] data]
    (if-let [{:keys [x y z]} coord]
      (recur (+ sx x) (+ sy y) (+ sz z) coords)
      (println (/ sx len) (/ sy len) (/ sz len)))))
