1. Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction (arXiv)
Author : Yury Demidovich, Grigory Malinovsky, Peter Richtárik
Abstract : In this study, we investigate stochastic optimization on Riemannian manifolds, focusing on the crucial variance reduction mechanism used in both Euclidean and Riemannian settings. Riemannian variance-reduced methods usually involve a double-loop structure, computing a full gradient at the start of each loop. Determining the optimal inner loop length is challenging in practice, as it depends on strong convexity or smoothness constants, which are often unknown or hard to estimate. Motivated by Euclidean methods, we introduce the Riemannian Loopless SVRG (R-LSVRG) and PAGE (R-PAGE) methods. These methods replace the outer loop with probabilistic gradient computation triggered by a coin flip in each iteration, ensuring simpler proofs, efficient hyperparameter selection, and sharp convergence guarantees. Using R-PAGE as a framework for non-convex Riemannian optimization, we demonstrate its applicability to various important settings. For example, we derive Riemannian MARINA (R-MARINA) for distributed settings with communication compression, providing the best theoretical communication complexity guarantees for non-convex distributed optimization over Riemannian manifolds. Experimental results support our theoretical findings.
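The coin-flip mechanism that replaces the outer loop is easy to picture in a Euclidean toy problem. Below is a minimal, hypothetical sketch of a loopless SVRG-style update on a least-squares objective; the actual R-LSVRG/R-PAGE methods additionally require a retraction (or exponential map) and vector transport on the manifold, which this sketch deliberately omits. All problem sizes, step sizes, and the probability `p` are illustrative assumptions, not values from the paper.

```python
# Euclidean toy sketch of the loopless "coin flip" variance-reduction idea.
# Assumptions: a simple least-squares objective, hand-picked lr and p.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: f(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):            # stochastic gradient at sample i
    return (A[i] @ w - b[i]) * A[i]

def grad_full(w):            # full-batch gradient
    return A.T @ (A @ w - b) / n

def loopless_svrg(p=0.05, lr=0.05, iters=2000):
    w = np.zeros(d)
    w_ref = w.copy()
    g_ref = grad_full(w_ref)         # cached full gradient at the reference point
    for _ in range(iters):
        i = rng.integers(n)
        # variance-reduced estimator: grad_i(w) - grad_i(w_ref) + full grad at w_ref
        g = grad_i(w, i) - grad_i(w_ref, i) + g_ref
        w = w - lr * g
        # coin flip: with probability p refresh the reference point,
        # replacing the fixed-length inner loop of classical SVRG
        if rng.random() < p:
            w_ref = w.copy()
            g_ref = grad_full(w_ref)
    return w

w_hat = loopless_svrg()
print("distance to w_true:", np.linalg.norm(w_hat - w_true))
```

Because the reference point is refreshed by a Bernoulli trial rather than on a schedule, no inner-loop length (and hence no knowledge of strong convexity or smoothness constants) needs to be chosen, which is the point the abstract makes.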
2. Targeted Variance Reduction: Robust Bayesian Optimization of Black-Box Simulators with Noise Parameters (arXiv)
Author : John Joshua Miller, Simon Mak
Abstract : The optimization of a black-box simulator over control parameters x arises in a myriad of scientific applications. In such applications, the simulator often takes the form f(x,θ), where θ are parameters that are uncertain in practice. Robust optimization aims to optimize the objective E[f(x,Θ)], where Θ∼P is a random variable that models uncertainty on θ. For this, existing black-box methods typically employ a two-stage approach for selecting the next point (x,θ), where x and θ are optimized separately via different acquisition functions. As such, these approaches do not employ a joint acquisition over (x,θ), and thus may fail to fully exploit control-to-noise interactions for effective robust optimization. To address this, we propose a new Bayesian optimization method called Targeted Variance Reduction (TVR). The TVR leverages a novel joint acquisition function over (x,θ), which targets variance reduction on the objective within the desired region of improvement. Under a Gaussian process surrogate on f, the TVR acquisition can be evaluated in closed form, and reveals an insightful exploration-exploitation-precision trade-off for robust black-box optimization. The TVR can further accommodate a broad class of non-Gaussian distributions on P via a careful integration of normalizing flows. We demonstrate the improved performance of TVR over the state-of-the-art in a suite of numerical experiments and an application to the robust design of automobile brake discs under operational uncertainty.
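To make the setup concrete, here is a minimal sketch of the robust-optimization problem the paper targets: a Gaussian process surrogate fit over the joint space (x,θ) and a Monte Carlo estimate of the robust objective E[f(x,Θ)]. This is only an illustration of the setting, not the TVR acquisition itself (whose closed form is derived in the paper); the simulator `f`, the Gaussian choice of P, and all kernel settings are assumptions made for the example.

```python
# Sketch of the robust BO setup: GP surrogate on f(x, theta) and a Monte Carlo
# estimate of g(x) = E[f(x, Theta)].  Not the TVR acquisition; a plain baseline.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def f(x, theta):                       # hypothetical black-box simulator
    return np.sin(3 * x) + 0.5 * (x - theta) ** 2

# Initial design over the joint space (x, theta), both scaled to [0, 1]
X = rng.uniform(0, 1, size=(30, 2))    # columns: x, theta
y = f(X[:, 0], X[:, 1])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6).fit(X, y)

# Monte Carlo estimate of the robust objective g(x) = E[f(x, Theta)], Theta ~ P
thetas = rng.normal(0.5, 0.1, size=256)          # P assumed Gaussian for this sketch
x_grid = np.linspace(0, 1, 101)
robust_mean = np.array([
    gp.predict(np.column_stack([np.full_like(thetas, x), thetas])).mean()
    for x in x_grid
])
x_next = x_grid[np.argmin(robust_mean)]          # minimizer of the estimated robust objective
print("candidate control setting x:", x_next)
```

The point of TVR, as the abstract explains, is that the next evaluation is chosen jointly over (x,θ) via a single acquisition that targets variance reduction of this expectation, rather than picking x and θ in two separate stages as the baseline above implicitly would.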