sparse matrix pre-processing library
Home Page: https://github.com/IntelLabs/SpMP/wiki
License: Other
# DISCONTINUATION OF PROJECT #

This project will no longer be maintained by Intel. Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project. Intel no longer accepts patches to this project. If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.

The SpMP (sparse matrix pre-processing) library includes optimized parallel implementations of a few key sparse matrix pre-processing routines: currently, task dependency graph construction for Gauss-Seidel-like loops with data-dependent loop-carried dependencies, and cache-locality-optimizing reorderings such as breadth-first search (BFS) and reverse Cuthill-McKee (RCM). In addition, SpMP includes auxiliary routines such as parallel matrix transposition (useful for moving back and forth between compressed sparse row (CSR) and compressed sparse column (CSC) formats), matrix market file I/O, load-balanced sparse matrix-dense vector multiplication (SpMV), and an optimized dissemination barrier.

The pre-processing routines implemented in SpMP are essential for achieving high performance in key sparse matrix operations such as sparse triangular solve, Gauss-Seidel (GS) smoothing, incomplete LU (ILU) factorization, and SpMV, particularly on modern machines with many cores and deep memory hierarchies. At the same time, implementing these pre-processing routines efficiently in parallel is very challenging. One goal of SpMP's design is to showcase a "best known method" for high-performance implementations of these pre-processing routines. SpMP can also be used as a regular library, for example within a sparse iterative solver package. However, if a package uses its own unique sparse matrix format, directly invoking SpMP can involve non-trivial conversion overhead.
Therefore, we strive to document our optimization approach well enough that it can be adopted by other software packages.

We recommend exploring SpMP starting from the two examples provided in the test directory: test/gs_test.cpp and test/reordering_test.cpp. test/gs_test.cpp shows how to parallelize GS-like loops using the level scheduling approach with point-to-point synchronization and redundant transitive dependency elimination described in [1]. test/reordering_test.cpp shows how to optimize the cache locality of SpMV by using BFS and RCM reorderings.

SpMP has the following file structure:

- CSR.hpp/cpp: a simple compressed sparse row structure with support for routines like parallel matrix transposition.
- LevelSchedule.hpp/cpp: an implementation of dependency graph construction for GS-like loops, described in [1].
- reordering/ConnectedComponents.cpp: parallel detection of connected components, used for parallel BFS on graphs with multiple connected components. Implements the algorithm described in [2].
- reordering/RCM.cpp: parallel BFS and RCM reordering. Our BFS implementation incorporates optimizations described in [3]. Our RCM implementation uses the pseudo-diameter heuristic described in [4] for selecting source nodes, and the method described in [5] for the final construction of the RCM permutation.
- synk/*: fast implementation of a dissemination barrier
- Permute.cpp: parallel permutation of CSR matrices
- SpMV.cpp: load-balanced SpMV
- mm_io.cpp and COO.hpp/cpp: matrix market file I/O
- Laplacian.cpp: generation of 3D 27-point Laplacian matrices (useful for quick testing without any file I/O)
- Utils.hpp/cpp: miscellaneous routines, such as comparing two floating-point vectors, permuting vectors, and so on

[1] Park et al., Sparsifying Synchronization for High-Performance Shared-Memory Sparse Triangular Solver, ISC 2014 (http://pcl.intel-research.net/publications/trsolver_isc14.pdf)
[2] Patwary et al., Multi-core Spanning Forest Algorithms using the Disjoint-set Data Structure, IPDPS 2012
[3] Chhugani et al., Fast and Efficient Graph Traversal Algorithms for CPUs: Maximizing Single-Node Efficiency, IPDPS 2012
[4] Kumfert, An Object-Oriented Algorithmic Laboratory for Ordering Sparse Matrices
[5] Karantasis et al., Parallelization of Reordering Algorithms for Bandwidth and Wavefront Reduction, SC 2014

<!-- reviewed 5/1/23 MRB -->
The following are the results from 7 runs:
========== Run #1 ============
m = 1000005 nnz = 3105536 3.105520 bytes = 53266512.000000
original bandwidth 987649
SpMV BW 14.72 gflops 126.22 gbps
MKL SpMV BW 5.22 gflops 44.76 gbps
MKL inspector-executor SpMV BW 15.73 gflops 134.91 gbps
========== Run #2 ============
m = 1000005 nnz = 3105536 3.105520 bytes = 53266512.000000
original bandwidth 987649
SpMV BW 9.83 gflops 84.28 gbps
MKL SpMV BW 5.77 gflops 49.50 gbps
MKL inspector-executor SpMV BW 15.26 gflops 130.88 gbps
========== Run #3 ============
m = 1000005 nnz = 3105536 3.105520 bytes = 53266512.000000
original bandwidth 987649
SpMV BW 10.12 gflops 86.76 gbps
MKL SpMV BW 5.24 gflops 44.95 gbps
MKL inspector-executor SpMV BW 15.33 gflops 131.50 gbps
========== Run #4 ============
m = 1000005 nnz = 3105536 3.105520 bytes = 53266512.000000
original bandwidth 987649
SpMV BW 19.35 gflops 165.99 gbps
MKL SpMV BW 4.83 gflops 41.45 gbps
MKL inspector-executor SpMV BW 13.83 gflops 118.65 gbps
========== Run #5 ============
m = 1000005 nnz = 3105536 3.105520 bytes = 53266512.000000
original bandwidth 987649
SpMV BW 19.23 gflops 164.88 gbps
MKL SpMV BW 5.47 gflops 46.89 gbps
MKL inspector-executor SpMV BW 14.64 gflops 125.59 gbps
========== Run #6 ============
m = 1000005 nnz = 3105536 3.105520 bytes = 53266512.000000
original bandwidth 987649
SpMV BW 10.02 gflops 85.93 gbps
MKL SpMV BW 3.66 gflops 31.35 gbps
MKL inspector-executor SpMV BW 10.10 gflops 86.60 gbps
========== Run #7 ============
m = 1000005 nnz = 3105536 3.105520 bytes = 53266512.000000
original bandwidth 987649
SpMV BW 19.28 gflops 165.37 gbps
MKL SpMV BW 5.83 gflops 49.97 gbps
MKL inspector-executor SpMV BW 15.30 gflops 131.19 gbps
This is running: test/reordering_test
The matrix is webbase-1M:
https://sparse.tamu.edu/Williams/webbase-1M
So is it normal that the bandwidth changes dramatically from run to run? And sometimes the MKL bandwidth is higher and sometimes lower. Is that expected or not? If not, am I missing something?
Many thanks,
Yu Bai
Hello,
my FEM code has a completely standard implementation of a CSR matrix.
Let's say I have the three CSR arrays:
values, cols, rows
I would like to do something like
solve(values, cols, rows, x, b)
or alternatively
ILU0 myILU0object(values, cols, rows);
myILU0object.solve(x, b); // x = ILU^-1 * b, i.e., applying the preconditioner
Is there any example of this?