kostya / benchmarks
Some benchmarks of different languages
License: MIT License
Hello
Can you add swift benchmark please?
Hi!
It would be interesting to see a benchmark comparing json parsing. You can try with this big json: https://github.com/zeMirco/sf-city-lots-json
We tried to optimize JSON parsing in Crystal a lot, and we believe it might be one of the fastest out there. And, as usual, it's implemented in Crystal itself.
Here's some code you can try:
require "json"
text = File.read("citylots.json")
json = JSON.parse(text).as_h
puts json.size
Thanks!
Hello,
I don't know whether it's used already, but here goes, per Crystal's own documentation:
"Make sure to always use --release for production-ready executables and when performing benchmarks."
Just my 2 cents.
PHP is slow, we all know that, but it would be interesting to know by how much (it should be done with the PHP 7 CLI, I think).
EDIT: I could submit a PR if you want.
I know we have the benchmarks with Mono but since coreclr was just released it would be great to get the benchmarks updated with that.
Please update Crystal compiler to v0.19.2.
Latest PyPy release is 5.6.0.
Speed will improve, but not by that much (still, it's the newest release).
Many by 387b17d
The condition should test for zero or non-zero, not greater-than.
This "Hello World!" contains a relevant test case.
>++++++++[-<+++++++++>]<.>[][<-]>+>-[+]++>++>+++[>[->+++<<+++>]<<]>-----.
>->+++..+++.>-.<<+[>[+>+]>>]<--------------.>>.+++.------.--------.>+.>+.
NB: Some languages use an unsigned byte value which would make the two condition types equivalent.
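A small sketch of the difference (variable names are ours, not from the repo): with a signed cell, the two loop conditions disagree as soon as a cell goes negative, while an unsigned byte cell wraps and keeps them equivalent.

```ruby
# With a signed cell, a BF '-' on a zero cell goes negative:
cell = 0
cell -= 1
puts cell > 0    # false -- a "greater than" loop would exit here
puts cell != 0   # true  -- a "non-zero" loop correctly continues

# With an unsigned byte cell, 0 - 1 wraps to 255, so both tests agree:
ucell = (0 - 1) & 0xFF
puts ucell > 0   # true
puts ucell != 0  # true
```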
NodeJS is now 5.7.0
Go is now 1.6
Both should have significant performance improvements
It would be nice to see whether there are any improvements with the latest Ruby version.
Hi. If you change C# to use int[] instead of List<int> for the tape and the program, it becomes much faster. Please align the implementations to use the same abstractions. If you want, I can submit a PR for C#, but I think it's better to change the Kotlin version.
Still relevant today, just as fast as C/C++, depending on the tests ... and great memory usage.
http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=fpascal&lang2=gpp
...or at least, I strongly believe it is.
I think you used its own self-reported time rather than the output of xtime.rb by mistake in this case. matmul-native.jl prints the time that it thinks it takes. This isn't fair to the other benchmarks because only the julia-native code gets to ignore the overhead of the testing framework.
On my machine I get similar results for Rust and C but Julia-native's time is way off. Instead of something close to 0.15s I get 0.75s ~5x slower than reported. On the other hand, the other languages are slightly faster, which makes sense since I'm using a new i7 instead of an i5.
LDC latest version is 1.18.0
DMD latest version is 2.088.0
GDC latest version is 9.2.0
Thanks for your work!!!
Hi.
Since the JVM (up to JDK 8) has several issues with its warm-up process, I would like to suggest using JMH for the JVM-related benchmarks (Kotlin, Scala, and Java itself). It will generate results closer to a production environment, where the JVM has already applied most of its JIT optimizations.
BTW, I'm glad about your benchmark initiative. Good job!
Cheers!
JXCore claims to be significantly faster and more memory-efficient than Node.js.
Here's a link where one guy demonstrates it, from just two months ago:
https://www.youtube.com/watch?v=xE_oH1tJI0w
And yes, it will run your Node.js scripts unmodified.
On this line of the Kotlin program, printing is handled. Kotlin's .print(char) function calls directly into Java's print function, which flushes on newline. Kotlin should flush the output stream after each character is printed to implement the behavior specified in the README, that "stdout should be flushed after each symbol."
An alternative solution would be to not flush stdout in the other languages, instead leaving it up to the standard library's natural flow.
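A minimal sketch of per-symbol flushing in Ruby (the helper name print_symbol is ours, not from the repo):

```ruby
# Flush after every symbol, per the README's requirement that
# "stdout should be flushed after each symbol".
def print_symbol(io, byte)
  io.print(byte.chr)
  io.flush
end

print_symbol($stdout, 72)   # "H"
print_symbol($stdout, 105)  # "i"
```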
The benchmark uses a very out-of-date version of Kotlin (1.0.3). Please update Kotlin to the latest stable version (1.3.11).
Please kindly update the Node.js version, or at least add a new entry that goes like 'JavaScript Node Latest'.
Almost all the other languages and implementations use bleeding-edge versions, except JavaScript Node.js and JavaScript V8.
Node.js latest stable = 5.0.0
You use single-precision (32-bit) floats for the Julia version of Matmul. That's kind of cheating compared to the other implementations that use double-precision (64-bit) floats.
Go -> 1.7
NodeJS -> 6.4
Python3 -> 3.5.2
Removal: JXCore - unmaintained, defunct, and abandoned.
In Julia running code outside of predefined functions carries a heavy performance penalty. For instance, simply rewriting the matrix multiplication benchmark as follows yields a performance improvement by a factor of 3 on my machine:
function matgen(n)
    tmp = 1.0 / n / n
    [Float32(tmp * (i - j) * (i + j - 2)) for i = 1:n, j = 1:n]  # float32() is now Float32()
end

function main()
    n = 100
    if length(ARGS) >= 1
        n = parse(Int, ARGS[1])  # int() is now parse(Int, ...)
    end
    t = time()
    n = div(n, 2) * 2
    a = matgen(n)
    b = matgen(n)
    c = a * b
    v = div(n, 2) + 1
    println(c[v, v])
    println(time() - t)
end

main()  # first call triggers JIT compilation
main()  # second call shows steady-state performance
The same goes for the other benchmarks. Technically, comprehensions are fairly slow too (compared to unrolled @simd/@inbounds annotated for loops), but the matrix generation doesn't particularly matter in this benchmark. Also note that the main() function is invoked twice here to show the kind of performance improvement the JIT produces (roughly 300 times on my machine). In general, it is good practice in Julia to first run performance sensitive functions on a tiny dataset to invoke the JIT, then run the actual computation.
P.S. Also note that this particular benchmark implementation essentially measures the performance of whatever OpenBLAS version you compiled Julia to use and virtually any language should be able to obtain similar results.
The benchmarked implementations of brainfuck are (mostly) not correct. I tested the C and Python versions, but I suspect they all share an algorithmic bug.
Failing testcase: http://esoteric.sange.fi/brainfuck/bf-source/prog/BOTTLES.BF
Correct reference interpreter (generator): https://github.com/pablojorge/brainfuck/blob/master/haskell/bf2c.hs
Expected behavior: print bottles from 99 to zero, quickly.
Actual behavior: BF interpreters freeze after 91 bottles remain.
It would be nice to have a summarizing table with all benchmark results on the front page (README.md).
One thing I discovered in Julia is that the current benchmark is not very accurate. It would be better to call @time main() in order to get the time and memory consumption sans the JIT, for more accurate results. I have found this makes some difference in the results. For example, with brainfuck the results show Julia to be only 0.45 seconds slower than Crystal.
Please update... Thanks a bunch!
One concern about new (Rust, Scala, Swift) or old (Haskell) compiled languages is build/compile speed; it would be nice to see the build duration as well.
Hi,
On my setup (OS X 10.10, Mono JIT compiler version 3.12.0), when I run matmul.exe with the --llvm flag enabled it takes 11.71s, whereas the original took 21.60s.
The resulting run command looks like:
../xtime.rb mono -O=all --gc=sgen --llvm matmul.exe 1500
Is there such option on your Ubuntu setup? If yes, would you check how it affects the performance?
There was a first PyPy3.5 beta release with Python 3.5 support, maybe include it as well?
Please include C Clang for the Base64 benchmark. My results on my machine:
GCC:
encode: 1333333600, 1.08
decode: 1000000000, 2.07
Clang:
encode: 1333333600, 1.23
decode: 1000000000, 1.44
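For reference, the shape of a Base64 encode benchmark (a loop accumulating output sizes) can be sketched in Ruby; the payload and iteration counts here are toy values, not the benchmark's:

```ruby
require "base64"

data = "a" * 1_000       # toy payload; the real benchmark uses far more data
total = 0
10.times { total += Base64.strict_encode64(data).bytesize }
puts total               # 1000 bytes encode to 1336 chars, times 10 runs
```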
Also, please modify the D implementation of the Matmul benchmark. dotProduct is optimized, while every other language uses the naive implementation (that's why D is so fast in this benchmark).
I think I missed where the Mandelbrot is implemented.
Could you link it into the readme file?
Also, very cool that you reference your source for the origin of many of these benchmarks.
Thanks
Please update the NodeJS benchmark.
You're using the very latest version of Rust and Go compilers, but your Nim is 8.5 months behind...
The current version of Nim is 0.16.0 stable (or 0.16.1 devel), and Clang 3.9.1 (or ideally 4.0 SVN).
Also, please make sure you're compiling Nim code with -d:release.
Thank you very much for a great benchmark!
It would be great if you could run the Ruby scripts again using TruffleRuby and the latest version of Ruby.
The open Swift project Elements/Silver has reached alpha.
http://elementscompiler.com/elements/silver/
It might be developed with Delphi/Object Pascal.
The numbers are with printing disabled.
#include <cstdio>
#include <cstddef>
#include <string>
#include <vector>

using std::string;
using std::vector;

namespace modified
{
    // NOTE: do_print() was referenced but not defined in the original
    // snippet; this stub reflects the "printing disabled" timing runs.
    static bool do_print() { return false; }

    enum op_type { INC, MOVE, LOOP, PRINT };

    struct Op;
    using Ops = vector<Op>;
    using data_t = ptrdiff_t;

    struct Op
    {
        op_type op;
        data_t val;
        Ops loop;
        Op(Ops v) : op(LOOP), loop(v) {}
        Op(op_type _op, data_t v = 0) : op(_op), val(v) {}
    };

    class Tape
    {
        using vect = vector<data_t>;
        vect tape;
        vect::iterator pos;
    public:
        Tape()
        {
            tape.reserve(8);
            tape.push_back(0);
            pos = tape.begin();
        }
        inline data_t get() const { return *pos; }
        inline void inc(data_t x) { *pos += x; }
        inline void move(data_t x)
        {
            auto d = std::distance(tape.begin(), pos);
            d += x;
            if (d >= (data_t)tape.size())
                tape.resize(d + 1);
            pos = tape.begin();  // resize may invalidate iterators
            std::advance(pos, d);
        }
    };

    class Program
    {
        Ops ops;
    public:
        Program(const string& code)
        {
            auto iterator = code.cbegin();
            ops = parse(&iterator, code.cend());
        }
        void run() const
        {
            Tape tape;
            _run(ops, tape);
        }
    private:
        static Ops parse(string::const_iterator *iterator, string::const_iterator end)
        {
            Ops res;
            while (*iterator != end)
            {
                char c = **iterator;
                *iterator += 1;
                switch (c) {
                    case '+': res.emplace_back(INC, 1); break;
                    case '-': res.emplace_back(INC, -1); break;
                    case '>': res.emplace_back(MOVE, 1); break;
                    case '<': res.emplace_back(MOVE, -1); break;
                    case '.': res.emplace_back(PRINT); break;
                    case '[': res.emplace_back(parse(iterator, end)); break;
                    case ']': return res;
                }
            }
            return res;
        }
        static void _run(const Ops &program, Tape &tape)
        {
            for (auto &op : program)
            {
                switch (op.op)
                {
                    case INC:  tape.inc(op.val); break;
                    case MOVE: tape.move(op.val); break;
                    case LOOP:
                        while (tape.get() > 0)
                            _run(op.loop, tape);
                        break;
                    case PRINT:
                        if (do_print())
                        {
                            printf("%c", (int)tape.get());
                            fflush(stdout);
                        }
                        break;
                }
            }
        }
    };
}
I can run the same benchmark a few times and get wildly different results. Consider having xtime.rb loop 10-100 times and take the minimum to filter out some of this noise.
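A hedged sketch of what a min-of-N harness could look like in Ruby (the helper best_of is ours; xtime.rb itself may differ):

```ruby
# Run the workload several times and keep the minimum wall-clock time;
# the minimum filters out scheduler and cache noise better than the mean.
def best_of(runs)
  times = runs.times.map do
    t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
  end
  times.min
end

puts best_of(10) { 100_000.times { |i| i * i } }
```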
https://github.com/brianmario/yajl-ruby
Can only the stdlib be used?
Crystal 0.20.0 [b0cc6f7] (2016-11-22)
[latest is 0.21.0]
LDC - the LLVM D compiler (0.15.2-beta1)
[latest is 1.2.0-beta1]
DMD64 D Compiler v2.068.0
[latest is v2.073.2]
gdc (crosstool-NG crosstool-ng-1.20.0-232-gc746732 - 20150830-2.066.1-dadb5a3784) 5.2.0
[latest is 2.068.2]
It should work on Linux, and if GCD is included it should speed up significantly, becoming about as fast as Rust.
import Foundation
let strsize = 10_000_000
let tries = 100
let longString = String(repeating: "a", count: strsize)
let data = longString.data(using: .utf8)
var base64en:Data? = nil
var total: Int = 0
// Encode
for _ in 0..<tries {
autoreleasepool {
base64en = data!.base64EncodedData()
total = total &+ base64en!.endIndex
}
}
print(total)
// Decode
total = 0
for _ in 0..<tries {
autoreleasepool {
total = total &+ Data(base64Encoded: base64en!)!.endIndex
}
}
print(total)
Nim has recently updated to 0.11.2, any news on updating?
Please update
NodeJS is now at 7.0.0 (a major release, yesterday).
Please include Vert.x (http://vertx.io/) in your benchmarks, as the polyglot platform seems very promising and tends to benefit from Java HotSpot's runtime optimizations... perhaps some benchmark warm-up will be needed. Thanks
Hello,
Can we run this benchmark against the latest JRuby 9.2, with Java 10, and enable these options for Java:
export JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -Xcompile.invokedynamic -Xfixnum.cache=false -Xmn512m -Xms2048m -Xmx2048m"
Here's a solution for Clojure using the Cheshire parser:
(require '[cheshire.core :refer [parse-stream]])
(let [data (parse-stream (clojure.java.io/reader "./1.json") true)
len (count data)]
(loop [sx 0.0 sy 0.0 sz 0.0 [coord & coords] data]
(if-let [{:keys [x y z]} coord]
(recur (+ sx x) (+ sy y) (+ sz z) coords)
(println (/ sx len) (/ sy len) (/ sz len)))))
This should allow GCC to use more aggressive loop optimizations.
EDIT: Per this, you might need to add -msse/-msse2, and possibly -ffast-math and/or -fassociative-math, to GCC's flags.