# American Institute of Mathematical Sciences

April 2017, 4(2): 125-148. doi: 10.3934/jdg.2017008

## A continuous-time approach to online optimization

1. Centre de mathématiques appliquées, École polytechnique, Université Paris-Saclay, Palaiseau, France
2. CNRS (French National Center for Scientific Research), Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, F-38000 Grenoble, France

Received: January 2017. Revised: February 2017. Published: March 2017.

We consider a family of mirror descent strategies for online optimization in continuous time, and we show that they lead to no regret. From a more traditional, discrete-time viewpoint, this continuous-time approach allows us to derive the no-regret properties of a large class of discrete-time algorithms, including as special cases the exponential weights algorithm, online mirror descent, smooth fictitious play, and vanishingly smooth fictitious play. In so doing, we obtain a unified view of many classical regret bounds and show that each can be decomposed into a term stemming from continuous-time considerations and a term measuring the disparity between discrete and continuous time. This generalizes the continuous-time-based analysis of the exponential weights algorithm from [29]. As a result, we obtain a general class of infinite-horizon learning strategies that guarantee an $\mathcal{O}(n^{-1/2})$ regret bound without having to resort to a doubling trick.
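To illustrate the kind of discrete-time strategy covered by this analysis, the sketch below implements the exponential weights algorithm with the vanishing step $\eta_n = \eta/\sqrt{n}$, which is the regime that yields an $\mathcal{O}(n^{-1/2})$ regret bound without a doubling trick. The function and variable names are ours, and the details (payoff vectors revealed after each play, the simplex $\Delta_d$ as domain) are assumptions based on the standard online-optimization setup, not a verbatim rendering of the paper's algorithm:

```python
import numpy as np

def exponential_weights(payoffs, eta=1.0):
    """Lazy exponential weights with step eta_n = eta / sqrt(n).

    payoffs: (T, d) array whose row n is the payoff vector v_n
             revealed after stage n.
    Returns the (T, d) array of plays x_1, ..., x_T on the simplex.
    """
    T, d = payoffs.shape
    score = np.zeros(d)           # cumulative payoff sum_{k<n} v_k
    plays = np.empty((T, d))
    for n in range(T):
        eta_n = eta / np.sqrt(n + 1)
        z = eta_n * score
        z -= z.max()              # stabilize before exponentiating
        w = np.exp(z)
        plays[n] = w / w.sum()    # x_n = softmax(eta_n * score), a point of Delta_d
        score += payoffs[n]       # observe v_n only after playing x_n
    return plays
```

With a payoff stream that consistently favors one coordinate, the plays concentrate on that coordinate while remaining valid probability vectors throughout.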

Citation: Joon Kwon, Panayotis Mertikopoulos. A continuous-time approach to online optimization. Journal of Dynamics & Games, 2017, 4 (2) : 125-148. doi: 10.3934/jdg.2017008
Figure: Graphical illustration of the greedy (dashed) and lazy (solid) branches of the projected subgradient (PSG) method.

Figure: Greedy and lazy mirror descent with $\gamma_n = 1$.
Summary of the algorithms discussed in Section 6. The suffix "L" indicates a "lazy" variant; the INPUT column lists the stream of payoff vectors used as input to the algorithm, and the NORM column specifies the norm of the ambient space; finally, $\xi_{n}$ denotes a zero-mean stochastic process with values in $\mathbb{R}^{d}$.

| Algorithm | $\mathcal{C}$ | $h(x)$ | $\eta_{n}$ | Input | Norm |
|---|---|---|---|---|---|
| EW | $\Delta_d$ | $\sum_{i} x_{i}\log x_{i}$ | constant | $v_{n}$ | $\ell^{1}$ |
| EW' | $\Delta_{d}$ | $\sum_{i} x_{i}\log x_{i}$ | $\eta/\sqrt{n}$ | $v_{n}$ | $\ell^{1}$ |
| SFP | $\Delta_d$ | any | $\eta/n$ | $v_{n}$ | $\ell^{1}$ |
| VSFP | $\Delta_d$ | any | $\eta n^{-\alpha}\;\;(0<\alpha<1)$ | $v_{n}$ | $\ell^{1}$ |
| OGD-L | any | $\tfrac{1}{2}\Vert x\Vert_2^2$ | constant | $-\nabla f_{n}(x_{n})$ | $\ell^{2}$ |
| OMD-L | any | any | constant | $-\nabla f_{n}(x_{n})$ | any |
| PSG-L | any | $\tfrac{1}{2}\Vert x\Vert_2^2$ | $1$ | $-\gamma_{n}\nabla f(x_{n})$ | $\ell^{2}$ |
| MD-L | any | any | $1$ | $-\gamma_{n}\nabla f(x_{n})$ | any |
| RSA-L | any | any | $1$ | $-\gamma_{n}(\nabla f(x_{n})+\xi_n)$ | any |
| SPSG-L | any | $\tfrac{1}{2}\Vert x\Vert_2^2$ | $1$ | $-\gamma_{n}(\nabla f(x_{n})+\xi_n)$ | $\ell^{2}$ |
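The "lazy" pattern shared by the L-suffixed algorithms in the table (e.g. OGD-L) is to accumulate gradient steps in an unprojected state and project onto $\mathcal{C}$ only when a point must be played. The sketch below illustrates this pattern for lazy online gradient descent; the helper names and the choice of $\mathcal{C}$ as the unit Euclidean ball are our illustrative assumptions, not the paper's:

```python
import numpy as np

def lazy_ogd(grads, project, x0, gamma=0.1):
    """Lazy online gradient descent: the state y aggregates the raw
    (unprojected) gradient steps; projection onto C happens only when
    a point x_n is played.

    grads:   list of gradient oracles g_n(x) = grad f_n(x)
    project: Euclidean projection onto the feasible set C
    """
    y = np.array(x0, dtype=float)   # unprojected aggregate state
    plays = []
    for g in grads:
        x = project(y)              # play x_n = Pi_C(y_n)
        plays.append(x)
        y = y - gamma * g(x)        # lazy step: update y, not x
    return plays

def proj_ball(y):
    """Euclidean projection onto the unit ball (our example choice of C)."""
    r = np.linalg.norm(y)
    return y if r <= 1.0 else y / r
```

For instance, with the repeated loss $f_n(x) = \tfrac{1}{2}\Vert x - t\Vert_2^2$ for a fixed target $t$ inside the ball, the plays converge to $t$.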
