avulanov / ann-benchmark
Benchmarks of artificial neural network library for Spark MLlib
License: Apache License 2.0
Hi,
I'm reading the Spark version of your ann-benchmark. When you do the following, shouldn't the sampling be done for every node? It seems you only sample once, and every node shares the same sample data.
val sample = train.sample(true, 1.0 / i, 11L).collect
val parallelData = dataPartitions.flatMap(x => sample)
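If per-node sampling is the intent, a minimal sketch of it (my assumption about the intent, not code from the repo) would vary the seed per partition so that each partition draws an independent sample:
val numParts = dataPartitions.partitions.length
// Draw one independent sample per partition by varying the seed.
val samples = (0 until numParts).map { p =>
  train.sample(true, 1.0 / i, 11L + p).collect()
}
val parallelData = dataPartitions.mapPartitionsWithIndex { (idx, _) =>
  samples(idx).iterator
}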
Hi Alex,
I am trying to reproduce the benchmark results and I have a quick question: how many OpenBLAS threads did you use, and what runtime scalability did you get? I am expecting that with N threads, the compute runtime should improve from M secs to M/N/2 secs.
Here is what I am trying:
I have 20 nodes and 16 cores on each node.
SparkContext: 20 nodes, 16 cores, sc.defaultParallelism 320
def gramSize(n: Int) = n * (n + 1) / 2 // packed (upper-triangular) size of an n x n Gram matrix
// I have not used saxpy from f2jBLAS or NativeBLAS yet, but it will be used here for comparison.
// I am not sure whether f2jBLAS can run on multiple threads, but OpenBLAS should run fine.
// Element-wise in-place sum of two Float vectors (pure Scala, single-threaded)
val combOp = (v1: Array[Float], v2: Array[Float]) => {
var i = 0
while (i < v1.length) {
v1(i) += v2(i)
i += 1
}
v1
}
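For the BLAS comparison mentioned in the comments above, a BLAS-backed variant of the same reduction could look like the sketch below (an assumption on my side, using the com.github.fommil.netlib bindings that Spark MLlib used at the time; saxpy computes y := a*x + y in single precision, and getInstance() picks a native implementation such as OpenBLAS when available, otherwise falls back to f2jBLAS):
import com.github.fommil.netlib.BLAS
val blasCombOp = (v1: Array[Float], v2: Array[Float]) => {
  // In-place update: v1 := 1.0f * v2 + v1
  BLAS.getInstance().saxpy(v1.length, 1.0f, v2, 1, v1, 1)
  v1
}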
val n = gramSize(4096)
val vv = sc.parallelize(0 until sc.defaultParallelism).map(i => Array.fill[Float](n)(0f))
vv.persist()
Option 1: 320 partitions, 1 thread on combOp per partition
val start = System.nanoTime();
vv.treeReduce(combOp, 2);
val reduceTime = (System.nanoTime() - start)*1e-9
reduceTime: Double = 5.6390302430000006
Option 2: 20 partitions, 1 thread on combOp per partition
val coalescedvv = vv.coalesce(20)
coalescedvv.count
val start = System.nanoTime();
coalescedvv.treeReduce(combOp, 2);
val reduceTime = (System.nanoTime() - start)*1e-9
reduceTime: Double = 3.9140685640000004
Option 3: 20 partitions, OpenBLAS numThread=16 per partition
I am setting up OpenBLAS on the cluster and will update soon.
Let me know your thoughts. I think that if the underlying operations are dense BLAS level 1, level 2, or level 3, running with more OpenBLAS threads and fewer partitions should help reduce the cross-partition shuffle.
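For Option 3, one way to pin the OpenBLAS thread count per executor (an assumption on my side, not something from the original benchmark) is to export the environment variable through SparkConf before the context is created:
import org.apache.spark.{SparkConf, SparkContext}
// Hypothetical app name; the relevant part is the executor environment variable.
val conf = new SparkConf()
  .setAppName("ann-benchmark-reduce")
  .setExecutorEnv("OPENBLAS_NUM_THREADS", "16")
val sc = new SparkContext(conf)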