awesome-pruning's Introduction

Awesome Pruning

A curated list of neural network pruning and related resources. Inspired by awesome-deep-vision, awesome-adversarial-machine-learning, awesome-deep-learning-papers and Awesome-NAS.

Please feel free to open a pull request or an issue to add papers.

Table of Contents

  • Type of Pruning
  • A Survey of Structured Pruning
  • 2023
  • 2022
  • 2021
  • 2020
  • 2019
  • 2018
  • 2017
  • 2016
  • 2015
  • Related Repo

Type of Pruning

Type: F = filter pruning, W = weight pruning, S = special networks, Other = other types.
Combined labels (e.g., WF) indicate papers spanning more than one type.
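
To make the W/F distinction concrete, below is a minimal PyTorch sketch using torch.nn.utils.prune; the layer shapes and pruning amounts are illustrative assumptions, not taken from any listed paper.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Type W (weight pruning): zero individual weights by L1 magnitude.
    # The tensor keeps its shape; the resulting sparsity is unstructured.
    conv_w = nn.Conv2d(16, 32, kernel_size=3)
    prune.l1_unstructured(conv_w, name="weight", amount=0.5)

    # Type F (filter pruning): zero whole output filters (dim=0) by L2 norm.
    # The sparsity is structured, so pruned filters can later be removed
    # to leave a smaller dense layer.
    conv_f = nn.Conv2d(16, 32, kernel_size=3)
    prune.ln_structured(conv_f, name="weight", amount=0.25, n=2, dim=0)

    # Bake the masks into the weights once pruning is final.
    prune.remove(conv_w, "weight")
    prune.remove(conv_f, "weight")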

A Survey of Structured Pruning (arXiv version and IEEE T-PAMI version)

Please cite our paper if it's helpful:

@article{he2024structured,
  author={He, Yang and Xiao, Lingao},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={Structured Pruning for Deep Convolutional Neural Networks: A Survey}, 
  year={2024},
  volume={46},
  number={5},
  pages={2900-2919},
  doi={10.1109/TPAMI.2023.3334614}}

The related papers are categorized according to the Structured Pruning Taxonomy (figure in the survey).

2023

Title Venue Type Code
Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph ICLR W PyTorch(Author)(Releasing)
Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? ICLR W -
Bit-Pruning: A Sparse Multiplication-Less Dot-Product ICLR W Code Deleted
NTK-SAP: Improving neural network pruning by aligning training dynamics ICLR W -
A Unified Framework for Soft Threshold Pruning ICLR W PyTorch(Author)
CrAM: A Compression-Aware Minimizer ICLR W -
Trainability Preserving Neural Pruning ICLR F -
DFPC: Data flow driven pruning of coupled channels without data ICLR F PyTorch(Author)
TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning ICLR F PyTorch(Author)
HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers ICLR F -
MECTA: Memory-Economic Continual Test-Time Model Adaptation ICLR F -
DepthFL : Depthwise Federated Learning for Heterogeneous Clients ICLR F -
OTOv2: Automatic, Generic, User-Friendly ICLR F PyTorch(Author)
Over-parameterized Model Optimization with Polyak-Lojasiewicz Condition ICLR F -
Pruning Deep Neural Networks from a Sparsity Perspective ICLR WF PyTorch(Author)
Holistic Adversarially Robust Pruning ICLR WF -
How I Learned to Stop Worrying and Love Retraining ICLR WF PyTorch(Author)
Symmetric Pruning in Quantum Neural Networks ICLR S -
Rethinking Graph Lottery Tickets: Graph Sparsity Matters ICLR S -
Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks ICLR S -
Searching Lottery Tickets in Graph Neural Networks: A Dual Perspective ICLR S -
Diffusion Models for Causal Discovery via Topological Ordering ICLR S -
A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis ICLR Other -
Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! ICLR Other -
Minimum Variance Unbiased N:M Sparsity for the Neural Gradients ICLR Other -

2022

Title Venue Type Code
Parameter-Efficient Masking Networks NeurIPS W PyTorch(Author)
"Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach NeurIPS W PyTorch(Author)
Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing NeurIPS W PyTorch(Author)
Models Out of Line: A Fourier Lens on Distribution Shift Robustness NeurIPS W PyTorch(Author)
Robust Binary Models by Pruning Randomly-initialized Networks NeurIPS W PyTorch(Author)
Rare Gems: Finding Lottery Tickets at Initialization NeurIPS W PyTorch(Author)
Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning NeurIPS W PyTorch(Author)
Pruning’s Effect on Generalization Through the Lens of Training and Regularization NeurIPS W -
Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation NeurIPS W PyTorch(Author)
Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective NeurIPS W -
Sparse Winning Tickets are Data-Efficient Image Recognizers NeurIPS W PyTorch(Author)
Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks NeurIPS W -
Weighted Mutual Learning with Diversity-Driven Model Compression NeurIPS F -
SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance NeurIPS F -
Data-Efficient Structured Pruning via Submodular Optimization NeurIPS F PyTorch(Author)
Structural Pruning via Latency-Saliency Knapsack NeurIPS F PyTorch(Author)
Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm NeurIPS WF -
Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions NeurIPS WF -
Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints NeurIPS WF PyTorch(Author)
Advancing Model Pruning via Bi-level Optimization NeurIPS WF PyTorch(Author)
Emergence of Hierarchical Layers in a Single Sheet of Self-Organizing Spiking Neurons NeurIPS S -
CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference NeurIPS S PyTorch(Author)(Releasing)
Transform Once: Efficient Operator Learning in Frequency Domain NeurIPS Other PyTorch(Author)(Releasing)
Most Activation Functions Can Win the Lottery Without Excessive Depth NeurIPS Other PyTorch(Author)
Pruning has a disparate impact on model accuracy NeurIPS Other -
Model Preserving Compression for Neural Networks NeurIPS Other PyTorch(Author)
Prune Your Model Before Distill It ECCV W PyTorch(Author)
FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks ECCV W -
FairGRAPE: Fairness-Aware GRAdient Pruning mEthod for Face Attribute Classification ECCV F PyTorch(Author)
SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning ECCV F PyTorch(Author)
Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning ECCV F PyTorch(Author)
CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution ECCV F PyTorch(Author)
Soft Masking for Cost-Constrained Channel Pruning ECCV F PyTorch(Author)
Filter Pruning via Feature Discrimination in Deep Neural Networks ECCV F -
Disentangled Differentiable Network Pruning ECCV F -
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps ECCV F PyTorch(Author)
Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning ECCV F PyTorch(Author)
Multi-granularity Pruning for Model Acceleration on Mobile Devices ECCV WF -
Exploring Lottery Ticket Hypothesis in Spiking Neural Networks ECCV S PyTorch(Author)
Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning ECCV S -
Recent Advances on Neural Network Pruning at Initialization IJCAI W PyTorch(Author)
FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server IJCAI F -
On the Channel Pruning using Graph Convolution Network for Convolutional Neural Network Acceleration IJCAI F -
Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization IJCAI F -
Neural Network Pruning by Cooperative Coevolution IJCAI F -
SPDY: Accurate Pruning with Speedup Guarantees ICML W PyTorch(Author)
Sparse Double Descent: Where Network Pruning Aggravates Overfitting ICML W PyTorch(Author)
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks ICML W PyTorch(Author)
Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness ICML F PyTorch(Author)
Winning the Lottery Ahead of Time: Efficient Early Network Pruning ICML F PyTorch(Author)
Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning ICML F PyTorch(Author)
Fast Lossless Neural Compression with Integer-Only Discrete Flows ICML F PyTorch(Author)
DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks ICML Other PyTorch(Author)
PAC-Net: A Model Pruning Approach to Inductive Transfer Learning ICML Other -
Neural Network Pruning Denoises the Features and Makes Local Connectivity Emerge in Visual Tasks ICML Other PyTorch(Author)
Interspace Pruning: Using Adaptive Filter Representations To Improve Training of Sparse CNNs CVPR W -
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network CVPR W -
When To Prune? A Policy Towards Early Structural Pruning CVPR F -
Fire Together Wire Together: A Dynamic Pruning Approach With Self-Supervised Mask Prediction CVPR F -
Revisiting Random Channel Pruning for Neural Network Compression CVPR F PyTorch(Author)(Releasing)
Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning CVPR F -
DECORE: Deep Compression With Reinforcement Learning CVPR F -
CHEX: CHannel EXploration for CNN Model Compression CVPR F -
Compressing Models With Few Samples: Mimicking Then Replacing CVPR F PyTorch(Author)(Releasing)
Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning CVPR WF -
DiSparse: Disentangled Sparsification for Multitask Model Compression CVPR Other PyTorch(Author)
Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining ICLR (Spotlight) W PyTorch(Author)
On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning ICLR (Spotlight) W -
An Operator Theoretic View On Pruning Deep Neural Networks ICLR W PyTorch(Author)
Effective Model Sparsification by Scheduled Grow-and-Prune Methods ICLR W PyTorch(Author)
Signing the Supermask: Keep, Hide, Invert ICLR W -
How many degrees of freedom do we need to train deep networks: a loss landscape perspective ICLR W PyTorch(Author)
Dual Lottery Ticket Hypothesis ICLR W PyTorch(Author)
Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently ICLR W PyTorch(Author)
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training ICLR W PyTorch(Author)
SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning ICLR (Spotlight) F PyTorch(Author)(Releasing)
Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models ICLR (Spotlight) F PyTorch(Author)
Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions ICLR F PyTorch(Author)
Plant 'n' Seek: Can You Find the Winning Ticket? ICLR F PyTorch(Author)
Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks ICLR F PyTorch(Author)
On the Existence of Universal Lottery Tickets ICLR F PyTorch(Author)
Training Structured Neural Networks Through Manifold Identification and Variance Reduction ICLR F PyTorch(Author)
Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning ICLR F PyTorch(Author)
Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients ICLR WF PyTorch(Author)
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training ICLR Other PyTorch(Author)
Prune and Tune Ensembles: Low-Cost Ensemble Learning with Sparse Independent Subnetworks AAAI W -
Prior Gradient Mask Guided Pruning-Aware Fine-Tuning AAAI F -
Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition AAAI Other -

2021

Title Venue Type Code
Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory NeurIPS W -
The Elastic Lottery Ticket Hypothesis NeurIPS W PyTorch(Author)
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? NeurIPS W PyTorch(Author)
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks NeurIPS W -
You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership NeurIPS W PyTorch(Author)
Pruning Randomly Initialized Neural Networks with Iterative Randomization NeurIPS W PyTorch(Author)
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration NeurIPS W PyTorch(Author)
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks NeurIPS W PyTorch(Author)
A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness NeurIPS W PyTorch(Author)
Rethinking the Pruning Criteria for Convolutional Neural Network NeurIPS F -
Only Train Once: A One-Shot Neural Network Training And Pruning Framework NeurIPS F PyTorch(Author)
CHIP: CHannel Independence-based Pruning for Compact Neural Networks NeurIPS F PyTorch(Author)
RED: Looking for Redundancies for Data-Free Structured Compression of Deep Neural Networks NeurIPS F -
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition NeurIPS F PyTorch(Author)
Sparse Flows: Pruning Continuous-depth Models NeurIPS WF PyTorch(Author)
Scaling Up Exact Neural Network Compression by ReLU Stability NeurIPS S PyTorch(Author)
Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme NeurIPS S PyTorch(Author)
Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks NeurIPS Other PyTorch(Author)
ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting ICCV F PyTorch(Author)
Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search ICCV F -
GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization ICCV F -
Auto Graph Encoder-Decoder for Neural Network Pruning ICCV F -
Exploration and Estimation for Model Compression ICCV F -
Sub-Bit Neural Networks: Learning To Compress and Accelerate Binary Neural Networks ICCV Other PyTorch(Author)
On the Predictability of Pruning Across Scales ICML W -
A Probabilistic Approach to Neural Network Pruning ICML F -
Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework ICML F -
Group Fisher Pruning for Practical Network Compression ICML F PyTorch(Author)
Towards Compact CNNs via Collaborative Compression CVPR F PyTorch(Author)
Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks CVPR F PyTorch(Author)
NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration CVPR F -
Network Pruning via Performance Maximization CVPR F -
Convolutional Neural Network Pruning with Structural Redundancy Reduction CVPR F -
Manifold Regularized Dynamic Network Pruning CVPR F -
Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation CVPR FO -
Content-Aware GAN Compression CVPR S PyTorch(Author)
Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network ICLR W PyTorch(Author)
Layer-adaptive Sparsity for the Magnitude-based Pruning ICLR W PyTorch(Author)
Pruning Neural Networks at Initialization: Why Are We Missing the Mark? ICLR W -
Robust Pruning at Initialization ICLR W -
A Gradient Flow Framework For Analyzing Network Pruning ICLR F PyTorch(Author)
Neural Pruning via Growing Regularization ICLR F PyTorch(Author)
ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations ICLR F PyTorch(Author)
Network Pruning That Matters: A Case Study on Retraining Variants ICLR F PyTorch(Author)

2020

Title Venue Type Code
Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient NeurIPS W -
Winning the Lottery with Continuous Sparsification NeurIPS W PyTorch(Author)
HYDRA: Pruning Adversarially Robust Neural Networks NeurIPS W PyTorch(Author)
Logarithmic Pruning is All You Need NeurIPS W -
Directional Pruning of Deep Neural Networks NeurIPS W -
Movement Pruning: Adaptive Sparsity by Fine-Tuning NeurIPS W PyTorch(Author)
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot NeurIPS W PyTorch(Author)
Neuron Merging: Compensating for Pruned Neurons NeurIPS F PyTorch(Author)
Neuron-level Structured Pruning using Polarization Regularizer NeurIPS F PyTorch(Author)
SCOP: Scientific Control for Reliable Neural Network Pruning NeurIPS F PyTorch(Author)
Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning NeurIPS F -
The Generalization-Stability Tradeoff In Neural Network Pruning NeurIPS F PyTorch(Author)
Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough NeurIPS WF -
Pruning Filter in Filter NeurIPS Other PyTorch(Author)
Position-based Scaled Gradient for Model Quantization and Pruning NeurIPS Other PyTorch(Author)
Bayesian Bits: Unifying Quantization and Pruning NeurIPS Other -
Pruning neural networks without any data by iteratively conserving synaptic flow NeurIPS Other PyTorch(Author)
Meta-Learning with Network Pruning ECCV W -
Accelerating CNN Training by Pruning Activation Gradients ECCV W -
EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning ECCV (Oral) F PyTorch(Author)
DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation ECCV F -
DHP: Differentiable Meta Pruning via HyperNetworks ECCV F PyTorch(Author)
DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search ECCV Other -
Differentiable Joint Pruning and Quantization for Hardware Efficiency ECCV Other -
Channel Pruning via Automatic Structure Search IJCAI F PyTorch(Author)
Adversarial Neural Pruning with Latent Vulnerability Suppression ICML W -
Proving the Lottery Ticket Hypothesis: Pruning is All You Need ICML W -
Network Pruning by Greedy Subnetwork Selection ICML F -
Operation-Aware Soft Channel Pruning using Differentiable Masks ICML F -
DropNet: Reducing Neural Network Complexity via Iterative Pruning ICML F -
Soft Threshold Weight Reparameterization for Learnable Sparsity ICML WF PyTorch(Author)
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization CVPR W -
Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-Based Approach CVPR W -
Towards Efficient Model Compression via Learned Global Ranking CVPR (Oral) F PyTorch(Author)
HRank: Filter Pruning using High-Rank Feature Map CVPR (Oral) F PyTorch(Author)
Neural Network Pruning with Residual-Connections and Limited-Data CVPR (Oral) F -
DMCP: Differentiable Markov Channel Pruning for Neural Networks CVPR (Oral) F TensorFlow(Author)
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression CVPR F PyTorch(Author)
Few Sample Knowledge Distillation for Efficient Network Compression CVPR F -
Discrete Model Compression With Resource Constraint for Deep Neural Networks CVPR F -
Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration CVPR F -
APQ: Joint Search for Network Architecture, Pruning and Quantization Policy CVPR F -
Multi-Dimensional Pruning: A Unified Framework for Model Compression CVPR (Oral) WF -
A Signal Propagation Perspective for Pruning Neural Networks at Initialization ICLR (Spotlight) W -
ProxSGD: Training Structured Neural Networks under Regularization and Constraints ICLR W TF+PT(Author)
One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation ICLR W -
Lookahead: A Far-sighted Alternative of Magnitude-based Pruning ICLR W PyTorch(Author)
Data-Independent Neural Pruning via Coresets ICLR W -
Provable Filter Pruning for Efficient Neural Networks ICLR F -
Dynamic Model Pruning with Feedback ICLR WF -
Comparing Rewinding and Fine-tuning in Neural Network Pruning ICLR (Oral) WF TensorFlow(Author)
AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates AAAI F -
Reborn filters: Pruning convolutional neural networks with limited data AAAI F -
DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks AAAI Other -
Pruning from Scratch AAAI Other -

2019

Title Venue Type Code
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask NeurIPS W TensorFlow(Author)
One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers NeurIPS W -
Global Sparse Momentum SGD for Pruning Very Deep Neural Networks NeurIPS W PyTorch(Author)
AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters NeurIPS W -
Network Pruning via Transformable Architecture Search NeurIPS F PyTorch(Author)
Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks NeurIPS F PyTorch(Author)
Model Compression with Adversarial Robustness: A Unified Optimization Framework NeurIPS Other PyTorch(Author)
Adversarial Robustness vs Model Compression, or Both? ICCV W PyTorch(Author)
MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning ICCV F PyTorch(Author)
Accelerate CNN via Recursive Bayesian Pruning ICCV F -
Learning Filter Basis for Convolutional Neural Network Compression ICCV Other -
Co-Evolutionary Compression for Unpaired Image Translation ICCV S -
COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning IJCAI F TensorFlow(Author)
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration CVPR (Oral) F PyTorch(Author)
Towards Optimal Structured CNN Pruning via Generative Adversarial Learning CVPR F PyTorch(Author)
Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure CVPR F PyTorch(Author)
On Implicit Filter Level Sparsity in Convolutional Neural Networks, Extension1, Extension2 CVPR F PyTorch(Author)
Structured Pruning of Neural Networks with Budget-Aware Regularization CVPR F -
Importance Estimation for Neural Network Pruning CVPR F PyTorch(Author)
OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks CVPR F -
Variational Convolutional Neural Network Pruning CVPR F -
Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search CVPR Other TensorFlow(Author)
Collaborative Channel Pruning for Deep Networks ICML F -
Approximated Oracle Filter Pruning for Destructive CNN Width Optimization ICML F -
EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis ICML F PyTorch(Author)
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks ICLR (Best) W TensorFlow(Author)
SNIP: Single-shot Network Pruning based on Connection Sensitivity ICLR W TensorFlow(Author)
Dynamic Channel Pruning: Feature Boosting and Suppression ICLR F TensorFlow(Author)
Rethinking the Value of Network Pruning ICLR F PyTorch(Author)
Dynamic Sparse Graph for Efficient Deep Learning ICLR F CUDA(3rd)

2018

Title Venue Type Code
Frequency-Domain Dynamic Pruning for Convolutional Neural Networks NeurIPS W -
Discrimination-aware Channel Pruning for Deep Neural Networks NeurIPS F TensorFlow(Author)
Learning Sparse Neural Networks via Sensitivity-Driven Regularization NeurIPS WF -
Constraint-Aware Deep Neural Network Compression ECCV W SkimCaffe(Author)
A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers ECCV W Caffe(Author)
AMC: AutoML for Model Compression and Acceleration on Mobile Devices ECCV F TensorFlow(3rd)
Data-Driven Sparse Structure Selection for Deep Neural Networks ECCV F MXNet(Author)
Coreset-Based Neural Network Compression ECCV F PyTorch(Author)
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks IJCAI F PyTorch(Author)
Accelerating Convolutional Networks via Global & Dynamic Filter Pruning IJCAI F -
Weightless: Lossy weight encoding for deep neural network compression ICML W -
Compressing Neural Networks using the Variational Information Bottleneck ICML F PyTorch(Author)
Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions ICML Other PyTorch(Author)
CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization CVPR W -
"Learning-Compression" Algorithms for Neural Net Pruning CVPR W -
PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning CVPR F PyTorch(Author)
NISP: Pruning Networks using Neuron Importance Score Propagation CVPR F -
To prune, or not to prune: exploring the efficacy of pruning for model compression ICLR W -
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers ICLR F TensorFlow(Author), PyTorch(3rd)

2017

Title Venue Type Code
Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee NeurIPS W TensorFlow(Author)
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon NeurIPS W PyTorch(Author)
Runtime Neural Pruning NeurIPS F -
Structured Bayesian Pruning via Log-Normal Multiplicative Noise NeurIPS F -
Bayesian Compression for Deep Learning NeurIPS F -
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression ICCV F Caffe(Author), PyTorch(3rd)
Channel Pruning for Accelerating Very Deep Neural Networks ICCV F Caffe(Author)
Learning Efficient Convolutional Networks Through Network Slimming ICCV F PyTorch(Author)
Variational Dropout Sparsifies Deep Neural Networks ICML W -
Combined Group and Exclusive Sparsity for Deep Neural Networks ICML WF -
Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning CVPR W -
Pruning Filters for Efficient ConvNets ICLR F PyTorch(3rd)
Pruning Convolutional Neural Networks for Resource Efficient Inference ICLR F TensorFlow(3rd)

2016

Title Venue Type Code
Dynamic Network Surgery for Efficient DNNs NeurIPS W Caffe(Author)
Learning the Number of Neurons in Deep Networks NeurIPS F -
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding ICLR (Best) W Caffe(Author)

2015

Title Venue Type Code
Learning both Weights and Connections for Efficient Neural Networks NeurIPS W PyTorch(3rd)

Related Repo

Awesome-model-compression-and-acceleration

EfficientDNNs

Embedded-Neural-Network

awesome-AutoML-and-Lightweight-Models

Model-Compression-Papers

knowledge-distillation-papers

Network-Speed-and-Compression

awesome-pruning's People

Contributors

adityakusupati, alexpasqua, andreabrg, armandxiao, he-y, leejzh, magnusja, wang1104014663


awesome-pruning's Issues

Add code and workshop links for implicit filter level sparsity

Hi Yang
Thanks for maintaining this list.
The CVPR 2019 paper 'On Implicit Filter Level Sparsity in Convolutional Neural Networks' also has two workshop versions with a few additional results.
Implicit Filter Sparsification In Convolutional Neural Networks (https://arxiv.org/abs/1905.04967), ODML-CDNNR Workshop, ICML 2019
Emergence of Implicit Filter Sparsity in Convolutional Neural Networks (https://openreview.net/forum?id=rylVvNS3hE), Deep Phenomena Workshop, ICML 2019

Also, an example of using the implicit sparsity for speedup is available in the repository
https://github.com/mehtadushy/SelecSLS-Pytorch, as part of the model (https://github.com/mehtadushy/SelecSLS-Pytorch/blob/master/models/selecsls.py#L280).

I would appreciate it if you could update the list to include these!

New pruning type for our ICLR22 paper: Grouped Kernel Pruning, a densely structured pruning granularity with better pruning freedom than filter/channel methods.

Greetings,

We are the authors of the paper/code in question, and we thank you for including it; it has certainly generated some traffic for us.

However, it might be worth noting that our pruning granularity is not F (filter level). Our algorithm prunes at a grouped kernel level, which is, to the best of our knowledge, the most fine-grained approach under the constraint of outputting a densely structured pruned network, much like channel or filter pruning.

Since pushing the pruning freedom further while remaining structured is probably our most important contribution, we'd appreciate a simple fix (and maybe a new type category if you're feeling generous, as we'd certainly welcome more adaptations of the grouped kernel pruning framework). Thanks!
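
For readers skimming the list, here is a toy sketch of the granularity described above; it is not the authors' algorithm, and the shapes, group count, and kept kernel indices are made-up assumptions.

    import torch

    w = torch.randn(8, 4, 3, 3)        # conv weight: (C_out, C_in, kH, kW)

    # Filter (F) pruning: drop whole output filters -> dense (6, 4, 3, 3).
    w_filter = w[torch.tensor([0, 1, 2, 3, 4, 5])]

    # Grouped kernel pruning: partition filters into groups; within a group,
    # all filters drop the same input kernels, so each group remains a dense
    # block that maps onto a grouped convolution.
    groups = w.view(2, 4, 4, 3, 3)                  # 2 groups of 4 filters
    keep = [torch.tensor([0, 2, 3]),                # kernels kept by group 0
            torch.tensor([1, 2, 3])]                # kernels kept by group 1
    w_gk = torch.stack([groups[g][:, keep[g]] for g in range(2)])
    print(w_gk.shape)                               # torch.Size([2, 4, 3, 3, 3])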

ICLR 17 type

I think the ICLR17 paper "Pruning Convolutional Neural Networks for Resource Efficient Inference" is of filter type.

See the endnote on page 2.

A wrong type label

Thanks for your collection. However, "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (ICLR 2019) is a weight-level pruning algorithm. As the paper says, it only tries unstructured pruning; structured pruning is left as future work.
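
For context, below is a minimal sketch of one round of the unstructured, weight-level procedure the issue refers to (iterative magnitude pruning with rewinding); train_fn, the pruning fraction, and skipping mask application during training are simplifying assumptions.

    import torch

    def imp_round(model, train_fn, init_state, masks, prune_frac=0.2):
        # Train, zero the smallest surviving weights, then rewind the
        # survivors to their original initialization values.
        train_fn(model)  # assumed user-supplied training loop
        with torch.no_grad():
            for name, p in model.named_parameters():
                alive = p.abs()[masks[name].bool()]
                thresh = torch.quantile(alive, prune_frac)
                masks[name] *= (p.abs() > thresh).float()
                p.copy_(init_state[name] * masks[name])  # rewind to init
        return masks

    # Usage sketch (names are assumptions):
    # init_state = {n: p.detach().clone() for n, p in model.named_parameters()}
    # masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    # for _ in range(num_rounds):
    #     masks = imp_round(model, train_fn, init_state, masks)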

The code of SCOP (NeurIPS 2020) has been released.

Hi Yang,

Thanks for the awesome paper list for network pruning! Here are our papers and code on pruning.

The code of the NeurIPS 2020 paper 'SCOP: Scientific Control for Reliable Neural Network Pruning' has been released at
https://github.com/huawei-noah/Pruning/tree/master/SCOP_NeurIPS2020. The code is written in PyTorch.

In addition, another AAAI 2020 paper, Reborn Filters: Pruning Convolutional Neural Networks with Limited Data, proposes a filter pruning method to reduce the computation cost. The paper link is https://ojs.aaai.org/index.php/AAAI/article/view/6058

Thanks!
