Neural Network Optimization Comparison Hub
4 papers - avg viability 3.8
Top Papers
- Hierarchical Zero-Order Optimization for Deep Neural Networks (5.0)
Develops a more efficient zero-order (gradient-free) optimization method for training deep neural networks by structuring the optimization hierarchically.
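The hierarchical scheme is the paper's contribution and is not reproduced here; as background, a minimal sketch of plain zeroth-order optimization using an SPSA-style two-point gradient estimate (function name, step sizes, and the toy objective are all illustrative):

```python
import numpy as np

def spsa_gradient(f, theta, eps=1e-3, rng=None):
    """Estimate the gradient of f at theta from only two function
    evaluations, using a random Rademacher perturbation (SPSA-style)."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # each entry is +1 or -1
    diff = f(theta + eps * delta) - f(theta - eps * delta)
    # For Rademacher delta, 1/delta_i == delta_i, so this is the SPSA estimate
    return diff / (2 * eps) * delta

# Zeroth-order gradient descent on a toy quadratic: no analytic gradients used
f = lambda x: float(np.sum(x ** 2))
theta = np.ones(5)
rng = np.random.default_rng(0)
for _ in range(500):
    theta -= 0.05 * spsa_gradient(f, theta, rng=rng)
```

Each step costs two forward evaluations regardless of dimension, which is why reducing the number of such probes (e.g. hierarchically) matters at neural-network scale.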
- Uncovering a Winning Lottery Ticket with Continuously Relaxed Bernoulli Gates (4.0)
A novel approach to discovering sparse subnetworks (winning lottery tickets): hard binary pruning masks are replaced with continuously relaxed Bernoulli gates, making the search differentiable.
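A minimal sketch of a continuous Bernoulli relaxation in its common binary-concrete (Gumbel-sigmoid) form; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def relaxed_bernoulli_gate(logits, temperature=0.5, rng=None):
    """Sample a differentiable relaxation of Bernoulli gates
    (binary concrete / Gumbel-sigmoid).  Samples lie in (0, 1) and
    approach hard 0/1 draws as the temperature goes to 0."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-8, 1 - 1e-8, size=np.shape(logits))
    noise = np.log(u) - np.log(1 - u)  # logistic noise: reparameterizable
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

# Gate a weight vector: gates near 1 keep weights, gates near 0 prune them
logits = np.array([4.0, -4.0, 0.0])   # per-weight scores, learned in practice
gates = relaxed_bernoulli_gate(logits)
weights = np.array([0.5, 0.5, 0.5])
pruned = weights * gates
```

Because the gate is a smooth function of the logits, gradients flow through the mask, so sparsity can be optimized jointly with the weights.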
- PRISM: Distribution-free Adaptive Computation of Matrix Functions for Accelerating Neural Network Training (3.0)
PRISM is a framework that accelerates neural network training by adaptively computing matrix functions without requiring explicit spectral bounds.
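PRISM's distribution-free adaptive scheme is not reproduced here. As context, one classic example of a matrix-function computation used in training is a Newton-Schulz iteration for approximate orthogonalization; note that it relies on a crude spectral normalization of exactly the kind PRISM aims to avoid (all names here are illustrative):

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=50):
    """Approximate the nearest (semi-)orthogonal matrix to G using only
    matrix multiplications -- one example of a matrix-function computation
    that appears inside neural-network optimizers.  Classic Newton-Schulz
    needs the singular values inside a known interval; dividing by the
    Frobenius norm is a crude explicit spectral bound."""
    X = G / np.linalg.norm(G)              # Frobenius norm >= spectral norm
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X    # drives singular values toward 1
    return X

rng = np.random.default_rng(0)
G = rng.normal(size=(5, 5))
Q = newton_schulz_orthogonalize(G)
```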
- SCORE: Replacing Layer Stacking with Contractive Recurrent Depth (3.0)
SCORE proposes a recurrent alternative to deep networks: traditional layer stacking is replaced with contractive updates of a single shared block applied repeatedly.
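A minimal sketch of the contractive-recurrent-depth idea under one simple assumption (spectral-norm rescaling of a shared weight matrix); the details are illustrative, not SCORE's actual construction:

```python
import numpy as np

def recurrent_depth_forward(x, W, b, steps=30, gamma=0.5):
    """Apply one shared block repeatedly instead of stacking layers.
    Rescaling W to spectral norm gamma < 1 makes the update a
    contraction (tanh is 1-Lipschitz), so iterates approach a fixed
    point instead of diverging with depth."""
    W = W * (gamma / np.linalg.norm(W, 2))  # enforce contractivity
    h = np.zeros_like(x)
    for _ in range(steps):
        h = np.tanh(W @ h + x + b)          # same weights at every "depth"
    return h

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W = rng.normal(size=(4, 4))
b = np.zeros(4)
h = recurrent_depth_forward(x, W, b)
```

The contraction guarantees the output is insensitive to the exact iteration count, which is what lets recurrent depth stand in for a fixed stack of layers.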