Global convergence and geometric characterization of slow to fast weight evolution in neural network training for classifying linearly non-separable data

* Corresponding author: Ziang Long

Abstract. In this paper, we study the dynamics of gradient descent in training neural networks for classification problems. Unlike existing works, we consider the linearly non-separable case where the training data of different classes lie in orthogonal subspaces. We show that when the network has a sufficient (but not exceedingly large) number of neurons, (1) the corresponding minimization problem has a desirable landscape in which all critical points are global minima with perfect classification; and (2) gradient descent is guaranteed to converge to a global minimum. Moreover, we identify a geometric condition on the network weights such that, once it is satisfied, the weight evolution transitions from a slow phase of weight-direction spreading to a fast phase of weight convergence. The geometric condition is that the convex hull of the weights, projected onto the unit sphere, contains the origin.

    Mathematics Subject Classification: 90C26, 68W40.
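
The geometric condition stated in the abstract, that the convex hull of the unit-sphere projections of the weights contains the origin, can be checked numerically. The sketch below is not the authors' code: it assumes the weights are stored as the rows of a NumPy array W and reduces the check to a small linear-programming feasibility problem solved with SciPy's linprog.

```python
# Minimal sketch (not from the paper): test whether the origin lies in the
# convex hull of the row-normalized weight vectors, i.e. whether there exist
# coefficients lambda_j >= 0 with sum_j lambda_j = 1 and
# sum_j lambda_j * w_j / ||w_j|| = 0.  The weight matrix W (one weight vector
# per row) and the use of SciPy's linprog are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog


def origin_in_projected_hull(W: np.ndarray) -> bool:
    """W: (k, d) array of nonzero weight vectors, one per row."""
    P = W / np.linalg.norm(W, axis=1, keepdims=True)  # project rows onto the unit sphere
    k, d = P.shape
    # Feasibility LP: find lambda >= 0 with P^T @ lambda = 0 and 1^T @ lambda = 1.
    A_eq = np.vstack([P.T, np.ones((1, k))])
    b_eq = np.concatenate([np.zeros(d), [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * k, method="highs")
    return bool(res.success)


# Example: four weight vectors in R^2 whose directions surround the origin,
# so the condition for the fast convergence phase holds.
W = np.array([[1.0, 0.2], [-0.3, 1.0], [-1.0, -0.4], [0.2, -1.0]])
print(origin_in_projected_hull(W))  # True
```

Membership of the origin in a convex hull is a linear feasibility question, so a generic LP solver suffices; no computational-geometry library is needed.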

    Figure 1.  Geometric condition in Lemma 5.2 ($ d = 3 $)

    Figure 2.  2-dim section of $ \mathbb{R}^d $ spanned by $ \tilde{{\mathit{\boldsymbol{w}}}}_1 $ and $ {\mathit{\boldsymbol{n}}} $

    Figure 3.  Number of iterations to convergence vs. $ \theta $, the angle between subspaces $ V_1 $ and $ V_2 $

    Figure 4.  Left: iterations to convergence vs. number of neurons ($ d = 2 $). Right: histogram of the maximum weight norm $ \max\limits_{t}\left| {{\mathit{\boldsymbol{W}}}}^{t}\right| $ ($ d = 2 $ and $ k = 4 $)

    Figure 5.  Dynamics of weights: $ \tilde{{\mathit{\boldsymbol{w}}}}_j $ and $ {\mathit{\boldsymbol{u}}}_j $

    Figure 6.  Left: Slow-to-Fast transition during LeNet [16] training on the MNIST dataset. Right: 2D projections of MNIST features from a trained convolutional neural network [18]. The 10 classes are color-coded; the feature points cluster near linearly independent subspaces

    Figure 7.  Top row: Projections onto $ \mathcal{S}^2 $ (inside randomly selected 3D subspaces) of weight vectors in the first fully connected layer of a trained LeNet. Bottom row: Projections onto $ \mathcal{S}^2 $ (inside randomly selected 3D subspaces) of weight vectors and their convex hull in the second fully connected layer of a trained LeNet

    Table 1.  Iterations to convergence ($ \text{mean}\pm\text{std} $) with random and half-space initializations

    # of Neurons ($ 2k $) | Random Init.          | Half-Space Init.
    6                     | 578.90 $\pm$ 205.43   | 672.41 $\pm$ 226.53
    8                     | 423.96 $\pm$ 190.91   | 582.16 $\pm$ 200.81
    10                    | 313.29 $\pm$ 178.67   | 550.19 $\pm$ 180.59
    12                    | 242.72 $\pm$ 178.94   | 517.26 $\pm$ 172.46
    14                    | 183.53 $\pm$ 108.60   | 500.42 $\pm$ 215.87
    16                    | 141.00 $\pm$ 80.66    | 487.42 $\pm$ 220.48
    18                    | 126.52 $\pm$ 62.07    | 478.25 $\pm$ 202.71
    20                    | 102.09 $\pm$ 32.32    | 412.46 $\pm$ 195.92
    22                    | 90.65 $\pm$ 28.01     | 454.08 $\pm$ 203.00
    24                    | 82.93 $\pm$ 26.76     | 416.82 $\pm$ 216.58
    [1] Z. Allen-Zhu, Y. Li and Z. Song, A convergence theory for deep learning via over-parameterization, preprint, arXiv: 1811.03962.
    [2] A. Brutzkus and A. Globerson, Globally optimal gradient descent for a ConvNet with Gaussian inputs, preprint, arXiv: 1702.07966.
    [3] A. Brutzkus and A. Globerson, Over-parameterization improves generalization in the XOR detection problem, preprint.
    [4] A. Brutzkus, A. Globerson, E. Malach and S. Shalev-Shwartz, SGD learns over-parameterized networks that provably generalize on linearly separable data, 6th International Conference on Learning Representations, Vancouver, BC, Canada, 2018.
    [5] R. T. des Combes, M. Pezeshki, S. Shabanian, A. Courville and Y. Bengio, Convergence properties of deep neural networks on separable data, 2019. Available from: https://openreview.net/forum?id=HJfQrs0qt7.
    [6] S. S. Du, X. Zhai, B. Poczós and A. Singh, Gradient descent provably optimizes over-parameterized neural networks, preprint, arXiv: 1810.02054.
    [7] C. Ho and S. Zimmerman, On the number of regions in an $m$-dimensional space cut by $n$ hyperplanes, Austral. Math. Soc. Gaz., 33 (2006), 260-264. 
    [8] S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Comput., 9 (1997), 1735-1780.  doi: 10.1162/neco.1997.9.8.1735.
    [9] A. Krizhevsky, I. Sutskever and G. E. Hinton, ImageNet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems, 2012, 1097-1105. doi: 10.1145/3065386.
    [10] LeNet-5 - A Classic CNN Architecture. Available from: https://engmrk.com/lenet-5-a-classic-cnn-architecture/.
    [11] Y. Li and Y. Liang, Learning overparameterized neural networks via stochastic gradient descent on structured data, preprint, arXiv: 1808.01204.
    [12] S. Liang, R. Sun, Y. Li and R. Srikant, Understanding the loss surface of neural networks for binary classification, preprint, arXiv: 1803.00909.
    [13] B. Neyshabur, Z. Li, S. Bhojanapalli, Y. LeCun and N. Srebro, Towards understanding the role of over-parametrization in generalization of neural networks, preprint, arXiv: 1805.12076.
    [14] Q. Nguyen, M. C. Mukkamala and M. Hein, On the loss landscape of a class of deep neural networks with no bad local valleys, preprint, arXiv: 1809.10749.
    [15] S. Ren, K. He, R. Girshick and J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Machine Intell., 39 (2017), 1137-1149. doi: 10.1109/TPAMI.2016.2577031.
    [16] A. Rosebrock, LeNet - Convolutional neural network in Python, 2016. Available from: https://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/.
    [17] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, et al., Mastering the game of Go with deep neural networks and tree search, Nature, 529 (2016), 484-489. doi: 10.1038/nature16961.
    [18] H. Wang, Y. Wang, Z. Zhou, X. Ji and D. Gong, et al., CosFace: Large margin cosine loss for deep face recognition, preprint, arXiv: 1801.09414. doi: 10.1109/CVPR.2018.00552.
    [19] P. Yin, J. Xin and Y. Qi, Linear feature transform and enhancement of classification on deep neural network, J. Sci. Comput., 76 (2018), 1396-1406. doi: 10.1007/s10915-018-0666-1.