
Pruning

  • Comparing Fine-tuning and Rewinding in Neural Network Pruning
  • A Signal Propagation Perspective for Pruning Neural Networks at Initialization
  • Data-Independent Neural Pruning via Coresets
  • Dynamic Model Pruning with Feedback
  • Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization

Unstructured Pruning

  • Learning both Weights and Connections for Efficient Neural Networks
  • Learning Sparse Neural Networks via Sensitivity-Driven Regularization
  • Learning to prune deep neural networks via layer-wise optimal brain surgeon
  • Dynamic Network Surgery
  • Faster Gaze Prediction With Dense Networks and Fisher Pruning
  • WSNet: Compact and Efficient Networks Through Weight Sampling
  • Lookahead: A Far-sighted Alternative of Magnitude-based Pruning
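The papers above remove individual weights rather than whole structures. The simplest baseline, used in "Learning both Weights and Connections for Efficient Neural Networks", is magnitude pruning: zero out the smallest-magnitude weights. A minimal NumPy sketch (the function name and 90% sparsity target are illustrative, not from any one paper):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)              # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

np.random.seed(0)
w = np.random.randn(64, 64)
pruned = magnitude_prune(w, 0.9)   # ~90% of entries are now zero
```

In practice the surviving weights are then fine-tuned (or rewound, per "Comparing Fine-tuning and Rewinding in Neural Network Pruning") to recover accuracy.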

Structured Pruning

  • Learning Efficient Convolutional Networks through Network Slimming
  • Pruning Filters for Efficient ConvNets
  • Channel Pruning for Accelerating Very Deep Neural Networks
  • Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
  • Structured Bayesian Pruning via Log-Normal Multiplicative Noise
  • ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
  • SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization
  • Structured Pruning of Deep Convolutional Neural Networks
  • Discrimination-aware Channel Pruning for Deep Neural Networks
  • ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions
  • Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
  • Improving Deep Neural Network Sparsity through Decorrelation Regularization
  • Structured Pruning of Neural Networks with Budget-Aware Regularization
  • Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
  • Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression
  • Learning To Share: Simultaneous Parameter Tying and Sparsification in Deep Learning
  • Provable Filter Pruning for Efficient Neural Networks
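Unlike unstructured methods, these works remove whole filters or channels, so the pruned model stays dense and needs no sparse kernels to run fast. A common criterion, as in "Pruning Filters for Efficient ConvNets", is to rank convolutional filters by L1 norm and drop the weakest. A hedged sketch (function name and the 50% keep ratio are illustrative):

```python
import numpy as np

def prune_filters_l1(conv_w, keep_ratio):
    """Rank conv filters by L1 norm and keep the top fraction.

    conv_w: (out_channels, in_channels, kH, kW)
    Returns the pruned weight tensor and the kept filter indices.
    """
    norms = np.abs(conv_w).sum(axis=(1, 2, 3))       # L1 norm per filter
    n_keep = max(1, int(keep_ratio * conv_w.shape[0]))
    keep = np.sort(np.argsort(norms)[-n_keep:])      # strongest filters, in order
    return conv_w[keep], keep

np.random.seed(0)
w = np.random.randn(32, 16, 3, 3)
pruned, kept = prune_filters_l1(w, 0.5)  # 16 of 32 filters survive
```

Removing a filter also shrinks the corresponding input channel of the next layer, which is where the actual FLOP savings come from.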

Sharing-based Pruning

  • Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
    • Is pruning by absolute value meaningful?
  • Cross Domain Model Compression by Structurally Weight Sharing

Spectral (Frequency-Domain) Pruning

  • Frequency-Domain Dynamic Pruning for Convolutional Neural Networks

Dynamic Pruning

Pruning decisions are made at inference time, conditioned on the input features, so different examples effectively run through different subnetworks.

  • Dynamic Channel Pruning: Feature Boosting and Suppression
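The idea can be sketched as an input-dependent channel gate: compute a cheap per-channel saliency from the activations and suppress all but the top-k channels for each example. This is only a simplified illustration of the feature boosting and suppression idea, not the paper's exact formulation (which learns the saliency predictor):

```python
import numpy as np

def dynamic_channel_gate(x, k):
    """Keep only the k most salient channels per input example.

    x: (batch, channels, H, W); saliency here is the mean absolute
    activation per channel (an illustrative, training-free stand-in
    for a learned saliency predictor).
    """
    saliency = np.abs(x).mean(axis=(2, 3))               # (batch, channels)
    thresh = np.sort(saliency, axis=1)[:, -k][:, None]   # k-th largest per example
    gate = (saliency >= thresh).astype(x.dtype)          # binary channel mask
    return x * gate[:, :, None, None]

np.random.seed(0)
x = np.random.randn(4, 32, 8, 8)
y = dynamic_channel_gate(x, 8)  # 8 of 32 channels survive per example
```

Because the mask depends on x, the suppressed channels differ between examples, which is what distinguishes dynamic pruning from the static methods above.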

Activation / Feature Compression

  • Accelerating Convolutional Neural Networks via Activation Map Compression

Low-Rank Factorization

  • Speeding up Convolutional Neural Networks with Low Rank Expansions
  • Trained Rank Pruning for Efficient Deep Neural Networks
  • Accelerating Very Deep Convolutional Networks for Classification and Detection
  • Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
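These methods compress a layer by replacing its weight matrix with a product of smaller factors rather than by zeroing entries. For a dense layer the canonical tool is truncated SVD; a minimal sketch (function name and sizes are illustrative):

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as A @ B with A (m x rank), B (rank x n)
    via truncated SVD, the optimal rank-r approximation in Frobenius norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    B = Vt[:rank]
    return A, B

np.random.seed(0)
W = np.random.randn(256, 128)
A, B = low_rank_factorize(W, 32)
# A @ B costs 256*32 + 32*128 = 12288 params vs 32768 for W
```

Convolutional variants (e.g. "Speeding up Convolutional Neural Networks with Low Rank Expansions") apply the same idea to reshaped filter tensors, and block-term tensor decomposition extends it to RNN weight matrices.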

Pruning Theory

  • Stronger generalization bounds for deep nets via a compression approach
  • Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
  • Dynamic Model Pruning with Feedback
    • Theoretical analysis of Dynamic Network Surgery?

Robust (Adversarial) Pruning

  • Sparse DNNs with improved adversarial robustness

RNN Pruning

  • Learning Intrinsic Sparse Structures within Long Short-Term Memory
  • One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation