Papers
Research Paper·Jan 23, 2026
Uncertainty propagation through trained multi-layer perceptrons: Exact analytical results
We give analytical results for the propagation of uncertainty through trained multi-layer perceptrons (MLPs) with a single hidden layer and ReLU activation functions. More precisely, we give expressions for ... (an illustrative sketch follows this entry).
3.0 viability
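The abstract is cut off here, but the kind of exact result it describes can be illustrated with a well-known closed form: for a Gaussian input z ~ N(μ, σ²), the mean and variance of ReLU(z) are analytic. A minimal Python sketch of that standard fact; the Gaussian-input setting and all names below are our illustration, not the paper's expressions:

```python
import numpy as np
from scipy.stats import norm

def relu_gaussian_moments(mu, sigma):
    """Exact mean and variance of ReLU(z) for z ~ N(mu, sigma^2).

    Standard closed forms for a rectified Gaussian, with a = mu / sigma:
      E[ReLU(z)]   = mu * Phi(a) + sigma * phi(a)
      E[ReLU(z)^2] = (mu^2 + sigma^2) * Phi(a) + mu * sigma * phi(a)
    where Phi is the standard normal CDF and phi its PDF.
    """
    a = mu / sigma
    mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
    second = (mu**2 + sigma**2) * norm.cdf(a) + mu * sigma * norm.pdf(a)
    return mean, second - mean**2

# Sanity check against Monte Carlo.
rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.2
z = rng.normal(mu, sigma, 1_000_000)
r = np.maximum(z, 0.0)
print(relu_gaussian_moments(mu, sigma))  # analytical
print(r.mean(), r.var())                 # Monte Carlo, should agree closely
```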
Research Paper·Feb 1, 2026
Rod Flow: A Continuous-Time Model for Gradient Descent at the Edge of Stability
How can we understand gradient-based training over non-convex landscapes? The edge of stability phenomenon, introduced in Cohen et al. (2021), indicates that the answer is not so simple: namely, gradient ... (see the sketch after this entry).
3.0 viability
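As background for the entry above, and not the paper's Rod Flow model: edge-of-stability results build on the classical fact that gradient descent on a quadratic with sharpness λ is stable iff ηλ < 2. A minimal sketch of that 2/η threshold, with step size and curvatures chosen arbitrarily for illustration:

```python
import numpy as np

def gd_on_quadratic(lam, eta, x0=1.0, steps=50):
    """Gradient descent on f(x) = lam * x^2 / 2, whose sharpness
    (second derivative) is the constant lam. The update is
    x <- (1 - eta * lam) * x, so iterates diverge iff eta * lam > 2."""
    x = x0
    for _ in range(steps):
        x = x - eta * lam * x
    return abs(x)

eta = 0.1
for lam in [15.0, 19.0, 21.0]:  # the threshold 2 / eta is 20
    print(lam, gd_on_quadratic(lam, eta))
# |x| shrinks for lam < 2/eta and blows up for lam > 2/eta.
```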
Research Paper·Feb 3, 2026·B2B
Principles of Lipschitz continuity in neural networks
Deep learning has achieved remarkable success across a wide range of domains, significantly expanding the frontiers of what is achievable in artificial intelligence. Yet, despite these advances, critical ... (see the sketch after this entry).
2.0 viability
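For context on the entry above (our illustration, not the paper's contribution): a standard, easily computed upper bound on the Lipschitz constant of an MLP with 1-Lipschitz activations such as ReLU is the product of the spectral norms of its weight matrices. A minimal sketch, with toy layer sizes of our choosing:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of spectral norms (largest singular values) of the weight
    matrices. For f(x) = W_n a(... a(W_1 x)) with a 1-Lipschitz activation a,
    this bounds the Lipschitz constant of f; the bound is generally loose."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

rng = np.random.default_rng(0)
weights = [rng.normal(size=(64, 32)), rng.normal(size=(10, 64))]  # toy 32 -> 64 -> 10 MLP
print(lipschitz_upper_bound(weights))
```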