The University of Southampton
CORMSIS Centre for Operational Research, Management Sciences and Information Systems

CORMSIS Seminar Event

16:00 - 17:00
23 July 2015

For more information regarding this event, please telephone Huifu Xu on 23657 or email.

Event details

Extra-gradient method with reduced-variance for pseudo-monotone stochastic variational inequalities

Abstract

We propose an extra-gradient method for stochastic variational inequalities (SVI) requiring only pseudo-monotonicity, with a step size bounded away from zero and of order O(1/L), where L is the Lipschitz constant. We give asymptotic and explicit non-asymptotic convergence guarantees: we prove that, almost surely, the generated sequence is bounded and its distance to the solution set converges to zero. In the convergence and complexity analysis we allow for an unbounded feasible set, an unbounded operator and non-uniform variance of the oracle, and we require neither any specific form of monotonicity nor regularization procedures; these conditions appear to be new.

In the stochastic approximation procedure, we employ an iterative variance-reduction scheme while maintaining the optimal oracle complexity O(\epsilon^{-2}) (up to a first-order logarithmic term) and achieving an accelerated rate O(1/M) in terms of the mean (quadratic) natural residual. In prior literature, rates of O(1/M) in terms of the squared distance to the solution set were obtained in the bounded setting for strongly monotone or weak-sharp SVI. For the general monotone case, assuming uniform variance over the feasible set, rates were O(1/M^{1/2}) in terms of the mean gap function for a bounded set, and in terms of the mean of the relaxed gap function recently introduced by Monteiro and Svaiter in the case of an unbounded set, but still under uniform variance.

The generated sequence also enjoys a new feature: it is bounded in Lp whenever the oracle error has a finite p-th moment. Explicit estimates for the rate, complexity and p-th moments are given, depending on the problem parameters and the distance from a few initial iterates to the solution set; sharper constants are possible in the case of uniform variance. We discuss such bounds in terms of dimension and variance.

Our analysis can be extended to the distributed solution of Cartesian variational inequalities, in the setting where agents wish to solve their problems with only partial coordination among them (e.g., distributed multi-agent Nash equilibria).
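The core scheme the abstract describes — an extra-gradient iteration with a constant step size of order 1/L, combined with mini-batch averaging of the stochastic oracle to reduce variance — can be sketched as below. This is a minimal illustration, not the speaker's exact algorithm: the box feasible set, the quadratically growing batch sizes `N = k*k`, and the step size `1/(2L)` are assumptions chosen for simplicity.

```python
import numpy as np

def project(x, lo, hi):
    # Euclidean projection onto the box [lo, hi] (assumed feasible set)
    return np.clip(x, lo, hi)

def stochastic_extragradient(oracle, x0, lo, hi, L, n_iters=50, rng=None):
    """Extra-gradient method with variance reduction (illustrative sketch).

    oracle(x, rng) returns a noisy evaluation of the operator F at x.
    The step size alpha = 1/(2L) is constant, i.e. bounded away from zero,
    and mini-batch sizes grow as N_k = k^2, so averaging N_k oracle calls
    shrinks the noise variance by a factor 1/N_k at iteration k.
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = 1.0 / (2.0 * L)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        N = k * k
        Fx = np.mean([oracle(x, rng) for _ in range(N)], axis=0)
        z = project(x - alpha * Fx, lo, hi)   # extrapolation step
        Fz = np.mean([oracle(z, rng) for _ in range(N)], axis=0)
        x = project(x - alpha * Fz, lo, hi)   # update step
    return x
```

As a toy usage example, take the strongly (hence pseudo-) monotone operator F(x) = x - x*, whose VI solution is x*, with additive Gaussian oracle noise:

```python
xstar = np.array([0.3, -0.2])
oracle = lambda x, rng: (x - xstar) + 0.1 * rng.standard_normal(x.shape)
sol = stochastic_extragradient(oracle, np.zeros(2), -1.0, 1.0, L=1.0)
# sol should be close to xstar
```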

Speaker information

Philip Thompson, Instituto Nacional de Matemática Pura e Aplicada (IMPA)

Philip Thompson is a PhD student in Mathematics (Optimization) at the Instituto Nacional de Matemática Pura e Aplicada (IMPA), in joint research collaboration with the Centro de Modelamiento Matemático (CMM) at the Universidad de Chile. He works under the supervision of Alejandro Jofré and Alfredo Iusem. His area of research is stochastic optimization, with emphasis on stochastic variational inequalities and related questions in continuous optimization, variational analysis, and probability/mathematical statistics.
