On fast convergence rates for generalized conditional gradient methods with backtracking stepsize

  • *Corresponding author: Daniel Walter

This paper is handled by Zi Xu as guest editor.

Karl Kunisch was supported by the ERC Advanced Grant 668998 (OCLOC) under the EU's H2020 research program.

Abstract
  • A generalized conditional gradient method for minimizing the sum of two convex functions, one of them differentiable, is presented. This iterative method relies on two main ingredients: first, the minimization of a partially linearized objective functional to compute a descent direction and, second, a stepsize choice based on an Armijo-like condition to ensure sufficient descent in every iteration. We provide several convergence results. Under mild assumptions, the method generates sequences of iterates which converge, along subsequences, towards minimizers, and a sublinear rate of convergence for the objective functional values is derived. Moreover, we show that the method enjoys improved rates of convergence if the partially linearized problem fulfills certain growth estimates. Most notably, these results do not require strong convexity of the objective functional. Numerical tests on a variety of challenging PDE-constrained optimization problems confirm the practical efficiency of the proposed algorithm.

    Mathematics Subject Classification: 90C25, 49J52, 65K10, 49M41.
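    The two ingredients described in the abstract — a direction-finding step that minimizes a partially linearized objective, and an Armijo-like backtracking stepsize ensuring sufficient descent — can be sketched in a simple finite-dimensional setting. The Python snippet below is an illustration only, under assumed problem data (a least-squares term plus an $\ell^1$ cost over a box constraint), not the PDE-constrained problems treated in the paper; all names, parameters, and tolerances are hypothetical.

    ```python
    import numpy as np

    # Hypothetical finite-dimensional instance:
    #   min_u F(u) = f(u) + g(u),  f(u) = 0.5*||A u - b||^2  (smooth),
    #   g(u) = alpha*||u||_1 + indicator of the box [-1, 1]^n  (convex, nonsmooth).
    rng = np.random.default_rng(0)
    n, m, alpha = 20, 30, 0.1
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    def f(u):
        return 0.5 * np.sum((A @ u - b) ** 2)

    def grad_f(u):
        return A.T @ (A @ u - b)

    def g(u):
        return alpha * np.sum(np.abs(u))  # the box constraint is enforced by the oracle

    def linearized_oracle(p):
        """Minimize <p, v> + alpha*||v||_1 over the box [-1, 1]^n, coordinatewise."""
        v = np.zeros_like(p)
        v[p > alpha] = -1.0
        v[p < -alpha] = 1.0
        return v

    def gcg(u, max_iter=200, gamma=0.5, tol=1e-9):
        """Generalized conditional gradient with Armijo-like backtracking."""
        for _ in range(max_iter):
            p = grad_f(u)
            v = linearized_oracle(p)
            # Primal-dual gap: nonnegative for feasible u, zero exactly at minimizers.
            phi = p @ (u - v) + g(u) - g(v)
            if phi <= tol:
                break
            d = v - u
            s = 1.0
            # Backtrack until the sufficient-decrease test F(u + s d) <= F(u) - gamma*s*phi holds.
            while f(u + s * d) + g(u + s * d) > f(u) + g(u) - gamma * s * phi and s > 1e-12:
                s *= 0.5
            u = u + s * d
        return u, phi

    u_opt, gap = gcg(np.zeros(n))
    ```

    Since the oracle returns points of the box and the accepted stepsize satisfies $s \le 1$, every iterate remains feasible as a convex combination; the loop terminates either when the gap falls below the tolerance or after the iteration budget.
    
    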


  • Figure 1.  Optimal solution $ \bar{u} $ and convergence of relevant quantities for Example 5.7

    Figure 2.  Optimal solution $ \bar{u} $ and convergence of relevant quantities for Example 5.8

    Figure 3.  Snapshots of the optimal control $ \bar{u} $ at several time instances

    Figure 4.  Evolution of $ ||\bar{u}(t)||_{L^2({\mathit{\Omega}})} $ and $ ||\bar{p}(t)||_{L^2({\mathit{\Omega}})} $

    Figure 5.  Verification of Assumption 5.11 and convergence of relevant quantities
