Early Access

Early Access articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them; however, they can still be found and cited using their DOI (Digital Object Identifier). Early Access publication benefits the research community by making new scientific discoveries known as quickly as possible.

Readers can access Early Access articles via the “Early Access” tab for the selected journal.

Controlling stochastic gradient descent using stochastic approximation for robust distributed optimization

*Corresponding author: Adit Jain

Dedicated to Professor George Yin on the occasion of his 70th birthday. This research was funded by the Army Research Office under grant W911NF-24-1-0083 and the National Science Foundation under grant CCF-2312198.

Abstract
This paper deals with the problem of controlling stochastic gradient descent performed by multiple learners, each aiming to estimate its respective $ \arg\min f $ using noisy gradients obtained by querying a stochastic oracle. Each query incurs a learning cost, and the noisy gradient responses have varying degrees of noise variance, whose bound is assumed to evolve in a Markovian fashion. For a single learner, the decision problem is to choose when to query the oracle so that the learning cost is minimized. A constrained Markov decision process (CMDP) is formulated to solve this decision problem, and structural results are proven for its optimal policy, which is shown to be threshold decreasing in the queue state. For multiple learners, a constrained switching control game is formulated for scheduling and controlling $ N $ learners querying the same oracle, one at a time, and the structural results are extended to the optimal policy achieving the Nash equilibrium. These structural results are used to propose a stochastic approximation algorithm that searches for the optimal policy, tracking its parameters using a sigmoidal approximation without requiring knowledge of the underlying transition probabilities. The paper also briefly discusses applications in federated learning and numerically demonstrates the convergence properties of the proposed algorithm.

Mathematics Subject Classification: Primary: 62L20, 91A15, 90C40; Secondary: 60J20.
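To make the single-learner model concrete, here is a minimal sketch (not the authors' implementation) of controlled SGD with a threshold querying policy: the learner pays the learning cost and queries the oracle only when its queue exceeds a state-dependent threshold. The objective $ f(x) = \|x\|^2/2 $, the arrival rate, the oracle's transition probabilities, and the threshold values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient(x, noise_std):
    """Oracle response: gradient of f(x) = ||x||^2 / 2 plus state-dependent noise."""
    return x + rng.normal(0.0, noise_std, size=x.shape)

mu = 0.05                      # SGD step size (mu in Table 1)
noise_std = {0: 0.1, 1: 1.0}   # noise level in each oracle state (bound I_k^2 in Figure 1)
theta = {0: 2, 1: 5}           # per-state thresholds: query iff queue length >= theta[o]

x_hat = np.ones(3)             # learner's estimate of the minimizer (x-hat in Table 1)
b = 0                          # queue state b_k
o = 0                          # oracle state o_k, evolving as a two-state Markov chain

for k in range(1000):
    b += rng.poisson(1.0)                          # new jobs arrive at the queue
    o = int(rng.choice(2, p=[0.9, 0.1] if o == 0 else [0.2, 0.8]))
    if b >= theta[o]:                              # threshold policy: learn only when b_k is large
        r = noisy_gradient(x_hat, noise_std[o])    # query q_k, receive noisy gradient r_k
        x_hat = x_hat - mu * r                     # SGD step with the noisy gradient
        b = max(b - 1, 0)                          # querying serves one job from the queue
```

Under the CMDP formulation, the thresholds trade the learning cost $ l $ per query against the queue cost $ c $; this sketch fixes them by hand, whereas the paper's algorithm learns them online.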
Figure 1.  Pictorial representation of the two system models considered in the paper: (a) shows a single learner with queue length $ b_k $ querying a stochastic oracle in state $ o_k $. The learner sends the query $ q_k $ and receives the noisy gradient $ r_k $ of $ f $ at the query point, along with a bound $ I^2_k $ on the noise variance. (b) illustrates a multi-learner setting where multiple learners query the same oracle, but only one learner can be scheduled to query at a time. The querying protocol remains similar to (a), but each learner has its own queue, and the oracle has a different state with respect to each learner.

Figure 2.  Approximate threshold parameters (denoted by $ \theta $) in the constant-step-size SASPS algorithm estimating the true parameters (denoted by $ \phi $) for different oracle states.

Figure 3.  Approximate threshold parameters (denoted by $ \theta $) in the constant-step-size SASPS algorithm tracking changes to the true parameters (denoted by $ \phi $) for different oracle states. A change in the transition probabilities is introduced at iteration $ 3000 $ (dashed line).

Figure 4.  Approximate threshold parameters (denoted by $ \theta $) in the decreasing-step-size SASPS-N algorithm tracking the true parameters (denoted by blue scatter points, $ \phi $) for different learners (rows) and oracle states (columns).
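Figures 2-4 track threshold estimates produced by SASPS. The mechanics sketched below are an assumption (the paper's exact update may differ): the hard threshold policy $ \mathbb{1}\{b \geq \theta\} $ is smoothed with a sigmoid so the simulated cost becomes a smooth function of $ \theta $, and a stochastic-approximation step with a finite-difference (SPSA-style) gradient estimate moves $ \theta $ toward the true parameter $ \phi $ without knowledge of the transition probabilities. The cost model, horizon, and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learn_prob(b, theta, tau=0.5):
    """Sigmoidal approximation of the hard threshold policy 1{b >= theta}."""
    return sigmoid((b - theta) / tau)

def simulated_cost(theta, horizon=200):
    """One episode of discounted learning-plus-queue cost under the smoothed policy."""
    beta, l, c = 0.95, 1.0, 0.3          # discount factor, learning cost, queue cost (Table 1)
    b, total = 0, 0.0
    for k in range(horizon):
        b += rng.poisson(0.8)            # arrivals
        u = rng.random() < learn_prob(b, theta)
        if u:
            b = max(b - 1, 0)            # querying serves one job
        total += (beta ** k) * (l * u + c * b)
    return total

# Constant-step-size stochastic approximation: a fixed gamma lets theta keep
# tracking a drifting optimum, as in Figure 3; a decreasing step size
# (SASPS-N, Figure 4) instead converges but stops adapting.
theta, gamma, delta = 3.0, 0.05, 0.5
for m in range(500):
    pert = delta * rng.choice([-1.0, 1.0])
    grad_est = (simulated_cost(theta + pert) - simulated_cost(theta - pert)) / (2 * pert)
    theta -= gamma * grad_est            # move theta toward the cost-minimizing threshold
```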

Table 1.  Summary of the mathematical notation used in the text

Symbol Description
$ k $ Time index for the stochastic gradient descent (SGD)
$ m $ Time index for the stochastic approximation for structured policy search (SASPS) algorithm
$ \hat{x} $ Learner's estimate of the minimizer
$ s $ State variable for the oracle state, learner state, and arrival state
$ q $ Query posed to the oracle by the learner
$ u $ Action of the learner (learning or no learning)
$ r $ Noisy gradient of the function evaluated by the oracle at the query point
$ \sigma_k $ Variance of the noise added at time $ k $
$ \sigma $ Learner's tolerance parameter for the noise variance
$ \Theta $ True threshold parameter of the optimal policy
$ \tilde{\Theta} $ Threshold parameter of SASPS tracking the true threshold parameter
$ \mu $, $ \gamma $ Step sizes of the SGD and SASPS, respectively
$ \beta $ Discount factor
$ l $ Learning cost
$ c $ Queue cost
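In the multi-learner setting of Figure 1(b), at most one of the $ N $ learners can be scheduled to query the oracle per slot. The sketch below uses the notation of Table 1 and schedules the learner whose queue most exceeds its state-dependent threshold; this greedy rule and all numeric values are illustrative assumptions, not the Nash-equilibrium policy of the paper's switching control game.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 3
theta = np.array([[2, 5], [3, 6], [1, 4]])     # per-learner, per-oracle-state thresholds
b = np.zeros(N, dtype=int)                     # queue state of each learner
o = np.zeros(N, dtype=int)                     # oracle state with respect to each learner

for k in range(1000):
    b += rng.poisson(0.5, size=N)              # arrivals at every learner's queue
    o ^= (rng.random(N) < 0.1).astype(int)     # each state flips w.p. 0.1 (two-state Markov chain)
    excess = b - theta[np.arange(N), o]        # how far each queue is past its threshold
    i = int(np.argmax(excess))                 # schedule at most one learner per slot
    if excess[i] >= 0:                         # that learner queries only if its policy says to
        b[i] = max(b[i] - 1, 0)                # the query serves one job from its queue
```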
