Research on Global Minima for Machine Learning, Part 5

1. Gradient Descent Finds Global Minima for Generalizable Deep Neural Networks of Practical Sizes (arXiv)

Authors: Kenji Kawaguchi, Jiaoyang Huang

Abstract: In this paper, we theoretically prove that gradient descent can find a global minimum of the non-convex optimization of all layers for nonlinear deep neural networks of sizes commonly encountered in practice. Unlike previous theories, the theory developed in this paper requires only practical degrees of over-parameterization: the number of trainable parameters need only grow linearly with the number of training samples. This allows the size of the deep neural networks to be consistent with practice and to be several orders of magnitude smaller than that required by previous theories. Moreover, we prove that this linear growth of network size is the optimal rate and cannot be improved except by a logarithmic factor. Furthermore, deep neural networks with this trainability guarantee are shown to generalize well to unseen test samples on a natural dataset, but not on a random dataset.
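
The central claim, that plain gradient descent drives the training loss of a modestly over-parameterized network toward zero, is easy to observe empirically. The following is a minimal sketch, not the paper's construction: it fits a single-hidden-layer ReLU network to random regression targets with full-batch gradient descent. The dataset size, width, learning rate, and step count are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

n, d, width = 200, 10, 512       # samples, input dim, hidden width (illustrative choices)
X = torch.randn(n, d)
y = torch.randn(n, 1)            # random regression targets

# a modestly over-parameterized one-hidden-layer ReLU network
model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

# plain full-batch gradient descent (SGD with the whole dataset as one batch)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(5001):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step:5d}  training loss {loss.item():.6f}")
```

How quickly the loss approaches zero depends on the assumed width and learning rate; the point of the sketch is only that no stochasticity or special initialization is needed to see the training loss shrink steadily.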

2. Gradient Descent Finds Global Minima of Deep Neural Networks (arXiv)

Authors: Simon S. Du, Jason D. Lee, Haochuan Li, Liwei Wang, Xiyu Zhai

Abstract: Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. The current paper proves that gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show that the Gram matrix is stable throughout the training process, and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result.
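
The Gram matrix in question is built from the per-example gradients of the network output with respect to the parameters. Below is a minimal sketch under assumed sizes and hyperparameters, using a small fully connected network rather than the paper's ResNet setting: it computes that Gram matrix at initialization and again after a few gradient-descent steps, so the relative change, which the analysis argues stays small for wide networks, can be inspected directly.

```python
import torch

torch.manual_seed(0)

n, d, width = 50, 5, 1024        # samples, input dim, hidden width (illustrative choices)
X, y = torch.randn(n, d), torch.randn(n, 1)

model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

def gram_matrix(model, X):
    """G[i, j] = <grad_theta f(x_i), grad_theta f(x_j)> for the scalar network output f."""
    rows = []
    for i in range(X.shape[0]):
        model.zero_grad()
        model(X[i:i + 1]).sum().backward()
        rows.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
    J = torch.stack(rows)        # n x (number of parameters)
    return J @ J.T

G0 = gram_matrix(model, X)       # Gram matrix at initialization

# a few steps of full-batch gradient descent on the squared loss
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    torch.nn.functional.mse_loss(model(X), y).backward()
    opt.step()

G1 = gram_matrix(model, X)       # Gram matrix after training
print("relative change in Gram matrix:",
      (torch.linalg.norm(G1 - G0) / torch.linalg.norm(G0)).item())
```

In this toy setup, increasing the hidden width should shrink the reported relative change, which mirrors the stability property the convergence argument relies on.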
