Papers
Research Paper·Feb 11, 2026
Hierarchical Zero-Order Optimization for Deep Neural Networks
Zeroth-order (ZO) optimization has long been favored for its biological plausibility and its capacity to handle non-differentiable objectives, yet its computational complexity has historically limited...
Viability: 5.0
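The abstract's claim that zeroth-order methods handle non-differentiable objectives rests on estimating gradients from function evaluations alone. A minimal sketch of the classic two-point random-direction estimator (a standard ZO building block, not this paper's hierarchical method; `zo_gradient` and its parameters are illustrative names):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=32, rng=None):
    """Estimate grad f(x) by averaging symmetric finite differences
    along random Gaussian directions -- f is treated as a black box,
    so no derivatives of f are ever taken."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)  # random probe direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

# Usage: plain gradient descent on the estimate for a simple quadratic.
f = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
x = np.ones(4)
for _ in range(200):
    x -= 0.1 * zo_gradient(f, x, rng=rng)
```

The per-step cost is `2 * n_samples` function evaluations, which is the computational burden the abstract says has historically limited ZO methods at deep-network scale.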
Research Paper·Jan 29, 2026
PRISM: Distribution-free Adaptive Computation of Matrix Functions for Accelerating Neural Network Training
Matrix functions such as square root, inverse roots, and orthogonalization play a central role in preconditioned gradient methods for neural network training. This has motivated the development of ite...
Viability: 3.0
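For context on the matrix functions the abstract names, a sketch of the classical coupled Newton-Schulz iteration, one of the standard multiply-only iterative methods this line of work builds on (this is not PRISM itself; the helper name and iteration count are illustrative):

```python
import numpy as np

def newton_schulz_sqrt(A, num_iters=30):
    """Compute A^{1/2} and A^{-1/2} for a symmetric positive definite A
    using only matrix multiplies (GPU-friendly, no eigendecomposition)."""
    n = A.shape[0]
    norm = np.linalg.norm(A)      # Frobenius scaling so the iteration converges
    Y = A / norm                  # -> (A/norm)^{1/2}
    Z = np.eye(n)                 # -> (A/norm)^{-1/2}
    I = np.eye(n)
    for _ in range(num_iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T
        Z = T @ Z
    return Y * np.sqrt(norm), Z / np.sqrt(norm)

# Usage: verify against the defining identities S @ S = A, S @ S_inv = I.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)       # symmetric positive definite test matrix
S, S_inv = newton_schulz_sqrt(A)
```

The inverse root `S_inv` is exactly the kind of quantity a preconditioned gradient method (e.g. a Shampoo-style optimizer) applies to gradients each step, which is why fast, adaptive computation of such functions matters for training throughput.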