State of the Field
Recent work on optimization algorithms centers on improving efficiency and robustness across a wide range of applications. Regret-matching algorithms have shown strong performance on large-scale constrained optimization, outperforming traditional methods such as projected gradient descent. Certificate-Guided Pruning complements this shift by providing explicit optimality guarantees in black-box optimization, addressing the challenges posed by noisy evaluations. Large language models are being integrated into heuristic design for vehicle routing problems, a novel route to NP-hard challenges with significant gains in computational efficiency. Hybrid methods, such as genetic algorithms combined with graph neural networks, are also gaining traction, improving solution quality in timetabling tasks. Together, these developments point toward more robust, adaptable algorithms for complex real-world problems, particularly in resource-constrained environments.
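The regret-matching idea mentioned above can be illustrated with a minimal self-play sketch. This is not the algorithm from any paper below; the game (rock-paper-scissors), iteration count, and tolerance are illustrative assumptions. Each player plays actions in proportion to positive cumulative regret, and in a two-player zero-sum game the average strategies converge toward a Nash equilibrium:

```python
import random

def regret_matching_strategy(regrets):
    """Play each action in proportion to its positive cumulative regret;
    fall back to uniform when no action has positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    n = len(regrets)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

def sample(probs, rng):
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Rock-paper-scissors payoff to player 0; player 1 receives the negation.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def payoff(player, own, opp):
    return PAYOFF[own][opp] if player == 0 else -PAYOFF[opp][own]

def self_play(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * 3 for _ in range(2)]
    strategy_sum = [[0.0] * 3 for _ in range(2)]
    for _ in range(iterations):
        strategies = [regret_matching_strategy(r) for r in regrets]
        actions = [sample(s, rng) for s in strategies]
        for p in range(2):
            for a in range(3):
                strategy_sum[p][a] += strategies[p][a]
                # Regret of having played a instead of the sampled action.
                regrets[p][a] += (payoff(p, a, actions[1 - p])
                                  - payoff(p, actions[p], actions[1 - p]))
    # Average strategies approach the uniform Nash equilibrium of RPS.
    return [[s / iterations for s in strategy_sum[p]] for p in range(2)]

avg = self_play()
```

Note that projected gradient descent would need a projection step onto the simplex each iteration; regret matching stays on the simplex by construction, which is part of its appeal at scale.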
Papers
1–10 of 11
Decision Making under Imperfect Recall: Algorithms and Benchmarks
In game theory, imperfect-recall decision problems model situations in which an agent forgets information it held before. They encompass games such as the "absentminded driver" and team games with l...
Enhancing CVRP Solver through LLM-driven Automatic Heuristic Design
The Capacitated Vehicle Routing Problem (CVRP), a fundamental combinatorial optimization challenge, focuses on optimizing fleet operations under vehicle capacity constraints. While extensively studied...
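For context, the kind of hand-crafted baseline that LLM-driven heuristic design competes against can be sketched in a few lines. This is a generic nearest-neighbor construction heuristic, not the paper's method; the instance (coordinates, demands, capacity) is made up for illustration:

```python
import math

def nearest_neighbor_cvrp(depot, customers, demands, capacity):
    """Greedy CVRP baseline: repeatedly visit the nearest unserved customer
    that still fits in the vehicle, returning to the depot when nothing fits.
    Assumes every individual demand is <= capacity."""
    unserved = set(customers)
    routes = []
    while unserved:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unserved if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demands[nxt]
            pos = nxt
            unserved.remove(nxt)
        if not route:
            raise ValueError("a customer's demand exceeds vehicle capacity")
        routes.append(route)
    return routes

# Toy instance, purely illustrative.
depot = (0.0, 0.0)
customers = [(1.0, 0.0), (2.0, 0.0), (0.0, 2.0), (0.0, 3.0)]
demands = {(1.0, 0.0): 3, (2.0, 0.0): 4, (0.0, 2.0): 2, (0.0, 3.0): 5}
routes = nearest_neighbor_cvrp(depot, customers, demands, capacity=7)
```

Automatic heuristic design, as studied in the paper, searches over variations of exactly this kind of construction and improvement logic rather than fixing one by hand.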
Certificate-Guided Pruning for Stochastic Lipschitz Optimization
We study black-box optimization of Lipschitz functions under noisy evaluations. Existing adaptive discretization methods implicitly avoid suboptimal regions but do not provide explicit certificates of...
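The certificate idea can be seen in a deterministic, noise-free simplification (the paper's setting has noisy evaluations, which this sketch omits; the function, Lipschitz constant, and tolerance below are illustrative assumptions). Each interval carries an upper bound `f(mid) + L * width / 2`; when that bound cannot beat the incumbent, the interval provably contains no better point and is pruned with a certificate:

```python
def lipschitz_maximize(f, lo, hi, lipschitz, tol=1e-3):
    """Branch-and-bound maximization of an L-Lipschitz function on [lo, hi].
    The certificate for pruning interval [a, b] evaluated at its midpoint m:
    no point in [a, b] can exceed f(m) + lipschitz * (b - a) / 2."""
    best_x, best_v = None, float("-inf")
    intervals = [(lo, hi)]
    while intervals:
        survivors = []
        for a, b in intervals:
            mid = 0.5 * (a + b)
            v = f(mid)
            if v > best_v:
                best_x, best_v = mid, v
            ub = v + lipschitz * (b - a) / 2
            # Keep the interval only while the certificate leaves room for
            # improvement and the interval is still wide enough to split.
            if ub > best_v + tol and (b - a) > tol:
                survivors.extend([(a, mid), (mid, b)])
        intervals = survivors
    return best_x, best_v

best_x, best_v = lipschitz_maximize(lambda x: -(x - 0.3) ** 2, 0.0, 1.0,
                                    lipschitz=1.4)
```

The stochastic version studied in the paper must widen these bounds by a confidence term to account for evaluation noise before a certificate becomes valid.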
Enhancing Genetic Algorithms with Graph Neural Networks: A Timetabling Case Study
This paper investigates the impact of hybridizing a multi-modal Genetic Algorithm with a Graph Neural Network for timetabling optimization. The Graph Neural Network is designed to encapsulate general ...
Gradient Regularized Natural Gradients
Gradient regularization (GR) has been shown to improve the generalizability of trained models. While Natural Gradient Descent has been shown to accelerate optimization in the initial phase of training...
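Gradient regularization penalizes the squared gradient norm, whose own gradient is `2 * H(w) @ g(w)`; a standard way to get this without forming the Hessian is a finite-difference Hessian-vector product, costing one extra gradient evaluation. The sketch below verifies the approximation on a toy quadratic (the matrix and point are made up; this is the generic GR trick, not the paper's combined method):

```python
def grad_quadratic(A, w):
    """Gradient of L(w) = 0.5 * w^T A w for symmetric A, i.e. A @ w."""
    return [sum(A[i][j] * w[j] for j in range(len(w))) for i in range(len(A))]

def gr_penalty_grad(grad_fn, w, eps=1e-5):
    """Approximate the gradient of the GR penalty ||grad L(w)||^2.
    Exact value: 2 * H(w) @ g(w). The Hessian-vector product H @ g is
    approximated by the finite difference (grad(w + eps*g) - grad(w)) / eps."""
    g = grad_fn(w)
    w_shift = [wi + eps * gi for wi, gi in zip(w, g)]
    g_shift = grad_fn(w_shift)
    return [2.0 * (gs - gi) / eps for gs, gi in zip(g_shift, g)]

A = [[3.0, 1.0], [1.0, 2.0]]
w = [1.0, -1.0]
approx = gr_penalty_grad(lambda w: grad_quadratic(A, w), w)

# Analytic answer for the quadratic, since H = A: 2 * A @ (A @ w).
g = grad_quadratic(A, w)
exact = [2.0 * x for x in grad_quadratic(A, g)]
```

For the quadratic the finite difference is exact up to floating-point rounding, so the two vectors agree closely.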
GEGO: A Hybrid Golden Eagle and Genetic Optimization Algorithm for Efficient Hyperparameter Tuning in Resource-Constrained Environments
Hyperparameter tuning is a critical yet computationally expensive step in training neural networks, particularly when the search space is high dimensional and nonconvex. Metaheuristic optimization alg...
Why Adam Can Beat SGD: Second-Moment Normalization Yields Sharper Tails
Despite Adam demonstrating faster empirical convergence than SGD in many applications, much of the existing theory yields guarantees essentially comparable to those of SGD, leaving the empirical perfo...
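The second-moment normalization the paper analyzes is the division of each coordinate's step by a running root-mean-square of its gradients, which makes the step size roughly scale-free per coordinate. A minimal Adam sketch on an ill-conditioned quadratic shows the mechanism (hyperparameters and the test problem are illustrative, not from the paper):

```python
import math

def adam(grad_fn, w, steps=300, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = [0.0] * len(w)
    v = [0.0] * len(w)
    for t in range(1, steps + 1):
        g = grad_fn(w)
        for i in range(len(w)):
            m[i] = b1 * m[i] + (1 - b1) * g[i]       # first moment (momentum)
            v[i] = b2 * v[i] + (1 - b2) * g[i] ** 2  # second moment
            m_hat = m[i] / (1 - b1 ** t)             # bias correction
            v_hat = v[i] / (1 - b2 ** t)
            # Second-moment normalization: each coordinate moves ~lr per step
            # regardless of its gradient scale.
            w[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

# Ill-conditioned quadratic f(x, y) = 100 x^2 + y^2: raw gradients differ
# by 100x across coordinates, but normalized steps do not.
grad = lambda w: [200.0 * w[0], 2.0 * w[1]]
w = adam(grad, [1.0, 1.0])
```

Plain SGD on the same problem must pick a learning rate small enough for the steep coordinate, which stalls the flat one; the normalization removes that coupling, which is exactly where the tail-behavior analysis in the paper starts.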
Automatic Generation of Polynomial Symmetry Breaking Constraints
Symmetry in integer programming causes redundant search and is often handled with symmetry breaking constraints that remove as many equivalent solutions as possible. We propose an algebraic method whi...
Preconditioning Benefits of Spectral Orthogonalization in Muon
The Muon optimizer, a matrix-structured algorithm that leverages spectral orthogonalization of gradients, is a milestone in the pretraining of large language models. However, the underlying mechanisms...
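Muon's core operation replaces a gradient matrix with an approximately orthogonal matrix via a Newton-Schulz iteration. The sketch below uses the simple cubic variant `X <- 1.5 X - 0.5 X X^T X` (Muon's production implementations use a tuned quintic polynomial); the test matrix and iteration count are illustrative:

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def orthogonalize(G, iters=30):
    """Cubic Newton-Schulz iteration X <- 1.5 X - 0.5 X X^T X.
    Starting from G / ||G||_F (all singular values in (0, 1] for full-rank G),
    X converges to the orthogonal polar factor U V^T of G: each singular
    value is driven to 1 while the singular vectors are preserved."""
    norm = math.sqrt(sum(x * x for row in G for x in row))
    X = [[x / norm for x in row] for row in G]
    for _ in range(iters):
        XXtX = matmul(matmul(X, transpose(X)), X)
        X = [[1.5 * x - 0.5 * y for x, y in zip(rx, ry)]
             for rx, ry in zip(X, XXtX)]
    return X

G = [[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 2.0]]
R = orthogonalize(G)
```

Equalizing the singular values this way acts as a preconditioner on the update direction, which is the mechanism the paper sets out to explain.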
Construct, Merge, Solve & Adapt with Reinforcement Learning for the min-max Multiple Traveling Salesman Problem
The Multiple Traveling Salesman Problem (mTSP) extends the Traveling Salesman Problem to m tours that start and end at a common depot and jointly visit all customers exactly once. In the min-max varia...