- Comparing Rewinding and Fine-tuning in Neural Network Pruning
- A Signal Propagation Perspective for Pruning Neural Networks at Initialization
- Data-Independent Neural Pruning via Coresets
- Dynamic Model Pruning with Feedback
- Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
- Learning both Weights and Connections for Efficient Neural Networks (see the magnitude-pruning sketch after this list)
- Learning Sparse Neural Networks via Sensitivity-Driven Regularization
- Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
- Dynamic Network Surgery for Efficient DNNs
- Faster Gaze Prediction With Dense Networks and Fisher Pruning
- WSNet: Compact and Efficient Networks Through Weight Sampling
- Lookahead: A Far-sighted Alternative of Magnitude-based Pruning
- Learning Efficient Convolutional Networks through Network Slimming (see the BN-scale sketch after this list)
- Pruning Filters for Efficient ConvNets (see the L1-norm filter-ranking sketch after this list)
- Channel Pruning for Accelerating Very Deep Neural Networks
- Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
- Structured Bayesian Pruning via Log-Normal Multiplicative Noise
- ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
- SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization
- Structured Pruning of Deep Convolutional Neural Networks
- Discrimination-aware Channel Pruning for Deep Neural Networks
- ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks (see the soft-pruning sketch after this list)
- Improving Deep Neural Network Sparsity through Decorrelation Regularization
- Structured Pruning of Neural Networks with Budget-Aware Regularization
- Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
- Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression
- Learning To Share: Simultaneous Parameter Tying and Sparsification in Deep Learning
- Provable Filter Pruning for Efficient Neural Networks
- Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (see the geometric-median sketch after this list)
- Is pruning by absolute value meaningful?
- Cross Domain Model Compression by Structurally Weight Sharing
- Frequency-Domain Dynamic Pruning for Convolutional Neural Networks
  - Note: pruning is conducted based on the input features.
- Dynamic Channel Pruning: Feature Boosting and Suppression
- Accelerating Convolutional Neural Networks via Activation Map Compression
- Speeding up Convolutional Neural Networks with Low Rank Expansions
- Trained Rank Pruning for Efficient Deep Neural Networks
- Accelerating Very Deep Convolutional Networks for Classification and Detection
- Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
- Stronger Generalization Bounds for Deep Nets via a Compression Approach
- Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
- Is there a theoretical analysis of Dynamic Network Surgery?
- Sparse DNNs with Improved Adversarial Robustness
- Learning Intrinsic Sparse Structures within Long Short-Term Memory
- One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation
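
A few of the techniques listed above are simple enough to sketch in code. For Learning both Weights and Connections, the core step is magnitude pruning. Below is a minimal sketch, assuming PyTorch; `magnitude_prune` is a hypothetical helper that uses a single global threshold, whereas the paper prunes layer-wise and interleaves pruning with retraining.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Zero the smallest-magnitude weights globally; return the masks."""
    # Gather every weight magnitude to pick one global threshold.
    flat = torch.cat([p.detach().abs().flatten()
                      for n, p in model.named_parameters() if "weight" in n])
    k = max(1, int(sparsity * flat.numel()))
    threshold = flat.kthvalue(k).values

    masks = {}
    with torch.no_grad():
        for n, p in model.named_parameters():
            if "weight" not in n:
                continue
            mask = (p.abs() > threshold).to(p.dtype)
            p.mul_(mask)        # prune: zero the weak connections
            masks[n] = mask     # re-apply after each optimizer step
    return masks

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
masks = magnitude_prune(model, sparsity=0.9)   # keep ~10% of weights
```

The masks are returned because, in the iterative prune-retrain loop, pruned weights must stay zero: one typically multiplies each weight tensor by its mask after every optimizer step.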
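Network Slimming trains with an L1 penalty on BatchNorm scale factors (the gammas) and then prunes channels whose factors are small. A minimal sketch with hypothetical helper names (`slimming_penalty`, `channels_to_keep`); actually removing channels, i.e. rebuilding the conv/BN layers with fewer channels, is omitted.

```python
import torch
import torch.nn as nn

def slimming_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    # L1 sparsity penalty on BatchNorm scale factors; add this to the task loss.
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules() if isinstance(m, nn.BatchNorm2d))

def channels_to_keep(model: nn.Module, prune_ratio: float) -> dict:
    # After sparse training, rank all gammas globally and keep the largest.
    gammas = torch.cat([m.weight.detach().abs()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    k = max(1, int(prune_ratio * gammas.numel()))
    thresh = gammas.kthvalue(k).values
    return {name: (m.weight.detach().abs() > thresh).nonzero().flatten()
            for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
```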
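Pruning Filters for Efficient ConvNets ranks a layer's output filters by the L1 norm of their weights and removes the weakest. A hypothetical ranking helper is sketched below; physically removing a filter also requires slicing the next layer's input channels, which this sketch leaves out.

```python
import torch
import torch.nn as nn

def l1_filter_ranking(conv: nn.Conv2d) -> torch.Tensor:
    # Score each output filter by the L1 norm of its weights and
    # return filter indices ordered weakest-first.
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    return torch.argsort(scores)

conv = nn.Conv2d(64, 128, kernel_size=3)
order = l1_filter_ranking(conv)
prune_idx = order[: int(0.3 * conv.out_channels)]  # 30% weakest filters
```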
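Soft Filter Pruning zeroes low-norm filters during training but keeps them trainable, so a zeroed filter can be revived by later gradient updates. A minimal sketch of one such pass, with hypothetical naming; the paper applies this on a per-epoch schedule with a per-layer pruning rate.

```python
import torch
import torch.nn as nn

def soft_filter_prune(conv: nn.Conv2d, prune_ratio: float) -> None:
    # "Soft" pruning: zero the lowest-norm filters in place but leave them
    # trainable, so subsequent training steps can restore them.
    with torch.no_grad():
        norms = conv.weight.flatten(1).norm(dim=1)   # per-filter L2 norm
        n_prune = int(prune_ratio * norms.numel())
        if n_prune:
            conv.weight[torch.argsort(norms)[:n_prune]] = 0.0
```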
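Filter Pruning via Geometric Median prunes the filters closest to the geometric median of a layer's filters, the idea being that such filters are the most replaceable by their neighbors. In practice this is commonly approximated by ranking filters by their summed distance to all other filters, as in this hypothetical sketch.

```python
import torch
import torch.nn as nn

def fpgm_ranking(conv: nn.Conv2d) -> torch.Tensor:
    # Flatten each filter and score it by its total Euclidean distance to
    # all other filters in the layer. Filters with the SMALLEST total
    # distance lie near the geometric median and are pruned first.
    w = conv.weight.detach().flatten(1)        # (out_channels, c_in*k*k)
    return torch.argsort(torch.cdist(w, w).sum(dim=1))

conv = nn.Conv2d(64, 128, kernel_size=3)
redundant_first = fpgm_ranking(conv)           # prune from the front
```

Unlike the L1-norm criterion, this ranking is norm-independent: a filter with a large norm can still be pruned if it is redundant with respect to the others.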