Optimization Comparison Hub
10 papers, average viability 5.9
Top Papers
- Learning to Solve Orienteering Problem with Time Windows and Variable Profits (7.0)
A learning-based solver for the orienteering problem with time windows and variable profits (OPTWVP) that outperforms existing methods in solution quality and computational efficiency.
- A General Neural Backbone for Mixed-Integer Linear Optimization via Dual Attention (7.0)
A dual-attention neural backbone for mixed-integer linear optimization, designed to improve solver efficiency.
- Beyond the Markovian Assumption: Robust Optimization via Fractional Weyl Integrals in Imbalanced Data (7.0)
An optimization algorithm based on fractional calculus that improves performance on imbalanced datasets, particularly for financial fraud detection.
- Efficient Policy Learning with Hybrid Evaluation-Based Genetic Programming for Uncertain Agile Earth Observation Satellite Scheduling (7.0)
A hybrid evaluation-based genetic programming approach for optimizing Earth observation satellite scheduling under uncertainty, offering a balance between computational cost and scheduling performance.
- Dynamic Momentum Recalibration in Online Gradient Learning (7.0)
SGDF dynamically recalibrates gradient momentum in SGD, improving optimization performance and offering a potential drop-in replacement for existing optimizers.
- Weak-SIGReg: Covariance Regularization for Stable Deep Learning (7.0)
Stabilizes deep learning training, especially for Vision Transformers, by regularizing the covariance matrix of representations to prevent optimization collapse.
- Stein-Rule Shrinkage for Stochastic Gradient Estimation in High Dimensions (6.0)
A shrinkage-based enhancement to stochastic gradient methods that improves performance in high-dimensional learning tasks.
- Divide and Learn: Multi-Objective Combinatorial Optimization at Scale (5.0)
Solves multi-objective combinatorial problems at scale by applying bandit optimization over decomposed decision spaces.
- Minor First, Major Last: A Depth-Induced Implicit Bias of Sharpness-Aware Minimization (4.0)
An analysis of Sharpness-Aware Minimization (SAM) revealing a depth-induced implicit bias in deep linear networks, suggesting avenues for improved optimization strategies.
- The Effect of Mini-Batch Noise on the Implicit Bias of Adam (2.0)
A theoretical investigation into how mini-batch noise influences implicit bias in the Adam optimizer.
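The fractional-calculus idea behind the Weyl-integral entry can be illustrated with a standard fractional-order gradient method, in which the descent direction becomes a Grünwald-Letnikov-weighted sum of recent gradients rather than the latest gradient alone. This is a generic textbook construction with illustrative names (`gl_coefficients`, `fractional_gd`), not the paper's algorithm:

```python
import numpy as np

def gl_coefficients(alpha, memory):
    """Grünwald-Letnikov weights c_k = (-1)^k * binom(alpha, k), computed
    with the standard recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = [1.0]
    for k in range(1, memory + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return np.array(c)

def fractional_gd(grad_fn, w0, alpha=0.5, lr=0.1, memory=10, steps=50):
    """Truncated fractional-order gradient descent: the descent direction
    is a GL-weighted combination of the last `memory` gradients."""
    c = gl_coefficients(alpha, memory)
    w = float(w0)
    history = []                          # most recent gradient first
    for _ in range(steps):
        history.insert(0, grad_fn(w))
        del history[memory + 1:]          # keep a bounded gradient memory
        w -= lr * sum(ck * gk for ck, gk in zip(c, history))
    return w

# toy quadratic f(w) = w**2 / 2 (gradient is w)
coeffs = gl_coefficients(0.5, 3)          # [1.0, -0.5, -0.125, -0.0625]
w_star = fractional_gd(lambda w: w, w0=1.0)
```

For alpha = 1 the weights reduce to (1, -1, 0, ...), recovering something close to a plain gradient difference; smaller alpha spreads weight over older gradients.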
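The momentum-recalibration entry can be sketched with a toy rule that blends the momentum buffer and the raw gradient according to their agreement. The scheme below is hypothetical, for illustration only; it does not reproduce the paper's SGDF filter:

```python
import numpy as np

def recalibrated_momentum_step(w, g, m, lr=0.1, beta=0.9):
    """One gradient step whose momentum weight adapts to how well the fresh
    gradient g agrees (cosine similarity) with the momentum buffer m: high
    agreement trusts the smoothed momentum, disagreement falls back toward
    the raw gradient. Hypothetical recalibration rule, not SGDF itself."""
    m_new = beta * m + (1.0 - beta) * g
    denom = np.linalg.norm(g) * np.linalg.norm(m_new) + 1e-12
    agree = 0.5 * (1.0 + (g @ m_new) / denom)     # cosine mapped to [0, 1]
    step = agree * m_new + (1.0 - agree) * g
    return w - lr * step, m_new

# usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w
w, m = np.array([2.0, -1.0]), np.zeros(2)
for _ in range(100):
    w, m = recalibrated_momentum_step(w, w, m)
```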
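The covariance-regularization idea in the Weak-SIGReg entry can be illustrated with a generic off-diagonal covariance penalty on a batch of representations (a VICReg-style construction; the paper's exact objective may differ):

```python
import numpy as np

def covariance_penalty(z):
    """Off-diagonal covariance penalty on representations z of shape
    (batch, dim): center each feature, form the sample covariance, and
    penalize squared off-diagonal entries so features stay decorrelated
    instead of collapsing onto a few directions."""
    z = z - z.mean(axis=0)
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return (off_diag ** 2).sum() / d

rng = np.random.default_rng(0)
z_iid = rng.normal(size=(1000, 8))            # nearly decorrelated features
z_dup = np.repeat(z_iid[:, :1], 8, axis=1)    # fully collapsed features
p_iid, p_dup = covariance_penalty(z_iid), covariance_penalty(z_dup)
```

Collapsed representations (every feature identical) incur a much larger penalty than independent ones, which is what makes such a term useful as a training stabilizer.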
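The Stein-rule entry can be illustrated by applying the classical positive-part James-Stein shrinkage to a mini-batch gradient estimate; `js_shrink_gradient` is a sketch of the general idea, not the paper's estimator:

```python
import numpy as np

def js_shrink_gradient(g, noise_var):
    """Positive-part James-Stein shrinkage of a noisy gradient estimate g
    (length d > 2) toward zero, given an estimate of the per-coordinate
    gradient-noise variance. Shrinkage reduces mean-squared error of the
    estimate in high dimensions."""
    d = g.size
    factor = max(0.0, 1.0 - (d - 2) * noise_var / (g @ g))
    return factor * g

g = np.array([1.0, 2.0, 3.0, 4.0])
shrunk = js_shrink_gradient(g, noise_var=1.0)    # factor = 1 - 2/30
zeroed = js_shrink_gradient(g, noise_var=100.0)  # factor clipped to 0
```

The noisier the gradient relative to its magnitude, the harder it is shrunk; with extreme noise the estimate is zeroed out entirely.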
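The Divide-and-Learn summary mentions bandit optimization over decomposed decision spaces; a minimal UCB1 sketch over hypothetical subspaces is below (toy reward model and names, not the paper's method):

```python
import numpy as np

def ucb_subspace_search(subspace_best, n_rounds=200, c=1.0, seed=0):
    """UCB1 bandit over decomposed decision subspaces: each arm is one
    subspace; pulling it samples a candidate solution there and observes a
    noisy objective value. Returns the index of the most-explored subspace."""
    rng = np.random.default_rng(seed)
    k = len(subspace_best)
    counts = np.zeros(k)
    means = np.zeros(k)
    for t in range(n_rounds):
        if t < k:
            arm = t                      # pull each arm once to initialize
        else:
            ucb = means + c * np.sqrt(np.log(t) / counts)
            arm = int(np.argmax(ucb))
        # toy model: noisy reward around the subspace's best achievable value
        reward = subspace_best[arm] + 0.1 * rng.normal()
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return int(np.argmax(counts))

# the bandit should concentrate its budget on the most promising subspace
best = ucb_subspace_search([0.2, 0.9, 0.5])
```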
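Two of the entries study Sharpness-Aware Minimization, whose well-known two-step update can be sketched directly on a toy problem:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step: perturb w to the
    approximate worst case within an L2 ball of radius rho (along the
    gradient direction), then descend using the gradient evaluated at
    that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # inner ascent direction
    return w - lr * grad_fn(w + eps)              # descend on sharpened loss

# usage on a toy quadratic f(w) = 0.5 * ||w||^2 (gradient is w)
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda x: x)
```

In a real training loop `grad_fn` would be two backward passes through the network; the extra pass at the perturbed point is what gives SAM its flatness-seeking bias.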