This repository contains the Julia code that produces all the experimental results in the paper Accelerated Stochastic Power Iteration.
The main implementation of the different PCA algorithms is in the file `eigensolvers.jl`.
- `minibatch_sgd_m(data, x, beta, iters, u, s, seed=1)`: This function implements our algorithm Mini-batch Power Method with Momentum. `data` is the data matrix in which each row represents a single point, `x` is the initial point, `beta` is the momentum parameter, `iters` is the maximum number of iterations, `u` is the true eigenvector, used to compute the error of each iterate, `s` is the mini-batch size, and `seed` is the random seed.
- `minibatch_svrg_m(data, x, beta, epoch, m, u, s, seed=1)`: This function implements our algorithm Variance Reduced Power Method with Momentum. The argument `m` is the epoch length and `epoch` is the number of epochs; the remaining arguments are the same as for `minibatch_sgd_m()`.
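The core update behind these functions can be sketched as follows. This is a minimal illustration of mini-batch power iteration with heavy-ball momentum, not the repository's exact code: the function name, the normalization scheme, and the omitted error tracking against `u` are assumptions for the sake of a self-contained example.

```julia
using Random, LinearAlgebra

# Illustrative sketch of mini-batch power iteration with momentum.
# Argument names follow the description above; the actual
# implementation in eigensolvers.jl may differ in details.
function minibatch_power_momentum(data, x, beta, iters, s; seed=1)
    rng = MersenneTwister(seed)
    n = size(data, 1)
    x_prev = zero(x)
    for _ in 1:iters
        idx = rand(rng, 1:n, s)          # sample a mini-batch of rows
        B = data[idx, :]
        # Stochastic estimate of A*x for A = (1/n) * data' * data,
        # followed by the heavy-ball momentum term -beta * x_{t-1}.
        x_new = (B' * (B * x)) / s - beta * x_prev
        nrm = norm(x_new)
        x_prev = x / nrm                 # rescale both iterates together
        x = x_new / nrm
    end
    return x
end
```

The two consecutive iterates are rescaled by the same factor so that the momentum recurrence x_{t+1} = A x_t − β x_{t−1} is preserved up to a global scale.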
Additionally, we provide a drop-in replacement for `sklearn.decomposition.PCA` in `Momentum-PCA.ipynb`.
All the experimental results are generated from the Jupyter notebooks:
- `Mini-batches-no-acceleration.ipynb`: This notebook shows that naively adding momentum to Oja's algorithm does not yield acceleration.
- `Mini-batches.ipynb`: This notebook shows the acceleration of momentum stochastic power iteration with mini-batching and variance reduction.
- `Stability.ipynb`: This notebook shows the instability of the Lanczos method when finding multiple eigenvalues.
- `Best_Ball.ipynb`: This notebook shows the performance of the Best Heavy Ball method, which auto-tunes the momentum parameter in the power iteration.
- `Inhomogeneous.ipynb`: This notebook shows that the Inhomogeneous Polynomial Recurrence outperforms the constant-momentum power method.
- Blog post: Accelerated Stochastic Power Iteration
- Paper: Christopher De Sa, Bryan He, Ioannis Mitliagkas, Christopher Ré, Peng Xu, Accelerated Stochastic Power Iteration, arXiv preprint, 2017.