Foundations of Data Science
2020, Volume 2, Issue 3
In this paper we are concerned with the learnability of energies from data obtained by observing time evolutions of their critical points starting at random initial equilibria. As a byproduct of our theoretical framework, we introduce the novel concept of the mean-field limit of critical-point evolutions and of their energy balance as a new form of transport. We formulate energy learning as a variational problem, minimizing the discrepancy of energy competitors from fulfilling the equilibrium condition along any trajectory of critical points originating at random initial equilibria.
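One plausible way to write such a variational problem, purely as orientation: the class of energies, the trajectories, and the squared-gradient discrepancy below are schematic notation under our own assumptions, not the paper's exact formulation.

```latex
% Hedged sketch: one plausible reading of the variational formulation.
% \mathcal{E} is a class of candidate energies, x_i(t) the observed
% trajectories of critical points started at random initial equilibria;
% the equilibrium condition \nabla E(x_i(t)) = 0 supplies the discrepancy.
\min_{E \in \mathcal{E}} \; \sum_{i=1}^{N} \int_{0}^{T}
    \bigl\| \nabla E\bigl(x_i(t)\bigr) \bigr\|^{2} \,\mathrm{d}t
% Some normalization on \mathcal{E} (not written here) would be needed
% to exclude the trivial constant-energy minimizer.
```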
Recently, neural networks (NNs) with an infinite number of layers have been introduced. The training procedure is especially expensive for these very large NNs, so there is interest in studying their robustness with respect to input data in order to avoid unnecessary retraining of the network.
Typically, model-based statistical inference methods, e.g. Bayesian neural networks, are used to quantify uncertainties. Here, we consider a special class of residual neural networks and study the case when the number of layers can be arbitrarily large. Kinetic theory then allows us to interpret the network as a dynamical system described by a partial differential equation. We study the robustness of the mean-field neural network with respect to perturbations in the initial data by applying uncertainty quantification (UQ) approaches to the loss functions.
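To make the deep-limit viewpoint concrete, here is a minimal numerical sketch under assumptions of our own: a generic tanh residual block and a forward-Euler reading of the layers; the block, step size, and perturbation probe are illustrative, not the specific network class studied in the paper.

```python
import numpy as np

def residual_forward(x0, weights, h):
    """Forward-Euler reading of a residual network:
    x_{k+1} = x_k + h * f(x_k, w_k), which for many layers
    (h -> 0) approximates the ODE  dx/dt = f(x, w(t))."""
    x = x0
    for w in weights:
        x = x + h * np.tanh(w @ x)   # illustrative residual block
    return x

rng = np.random.default_rng(0)
d, n_layers = 4, 1000
h = 1.0 / n_layers                    # step size shrinks as depth grows
weights = [rng.normal(size=(d, d)) for _ in range(n_layers)]

x0 = rng.normal(size=d)
eps = 1e-3 * rng.normal(size=d)       # small perturbation of the input
out = residual_forward(x0, weights, h)
out_pert = residual_forward(x0 + eps, weights, h)

# Robustness probe: how strongly does the output move with the input?
print(np.linalg.norm(out_pert - out) / np.linalg.norm(eps))
```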
This paper presents novel mathematical results in support of the probabilistic learning on manifolds (PLoM) recently introduced by the authors. An initial dataset, consisting of a small number of points in a Euclidean space, is given. The points are independent realizations of a vector-valued random variable whose non-Gaussian probability measure is unknown but is, a priori, concentrated in an unknown subset of the Euclidean space. A learned dataset, consisting of additional realizations, is constructed. The probability measure estimated from the initial dataset is transported through a linear transformation constructed using a reduced-order diffusion-maps basis. It is proven that this transported measure is a marginal distribution of the invariant measure of a reduced-order Itô stochastic differential equation. The concentration of the probability measure is preserved. This property is shown by analyzing a distance between the random matrix constructed with the PLoM and the matrix representing the initial dataset, as a function of the dimension of the basis. It is further proven that this distance has a minimum for a dimension of the reduced-order diffusion-maps basis that is strictly smaller than the number of points in the initial dataset.
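For orientation, a bare-bones diffusion-maps basis from an initial dataset can be computed as below. This is a generic sketch: the Gaussian kernel, the bandwidth epsilon, and the truncation m are standard diffusion-maps choices, not the PLoM construction itself.

```python
import numpy as np

def diffusion_maps_basis(X, epsilon, m):
    """Reduced-order diffusion-maps basis from N data points (rows of X).
    Returns the leading m eigenpairs of the row-normalized kernel,
    with m strictly smaller than N."""
    # Pairwise squared distances and Gaussian kernel.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / epsilon)
    # Row normalization yields a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # Eigenvectors sorted by decreasing eigenvalue; keep the first m.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vals.real[order][:m], vecs.real[:, order][:, :m]

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))        # small initial dataset, N = 50
lam, g = diffusion_maps_basis(X, epsilon=4.0, m=10)   # m < N
```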
The supervised learning problem to determine a neural network approximation …
We consider a combined state and drift estimation problem for the linear stochastic heat equation. The infinite-dimensional Bayesian inference problem is formulated in terms of the Kalman–Bucy filter over an extended state space, and its long-time asymptotic properties are studied. Asymptotic posterior contraction rates in the unknown drift function are the main contribution of this paper. Such rates have been studied before for stationary non-parametric Bayesian inverse problems, and here we demonstrate the consistency of our time-dependent formulation with these previous results, building upon scale separation and a slow manifold approximation.
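As background for the extended-state formulation, a schematic finite-dimensional version of the construction might read as follows. The paper's setting is infinite-dimensional; the additive placement of the unknown drift and the block operators below are our own shorthand.

```latex
% Hedged sketch: signal with unknown additive drift \theta, observed
% in continuous time,
%   dX_t = (\Delta X_t + \theta)\,dt + Q^{1/2}\,dW_t, \qquad
%   dY_t = C X_t\,dt + R^{1/2}\,dV_t,
% augmented via d\theta_t = 0.  The extended state Z_t = (X_t, \theta)
% remains linear, so the standard Kalman--Bucy filter applies:
\begin{aligned}
  \mathrm{d}m_t &= \bar A\, m_t\,\mathrm{d}t
      + P_t \bar C^{\top} R^{-1}\bigl(\mathrm{d}Y_t - \bar C\, m_t\,\mathrm{d}t\bigr),\\
  \tfrac{\mathrm{d}}{\mathrm{d}t} P_t &= \bar A P_t + P_t \bar A^{\top}
      - P_t \bar C^{\top} R^{-1} \bar C P_t + \bar Q,
  \qquad
  \bar A = \begin{pmatrix} \Delta & I \\ 0 & 0 \end{pmatrix},\;
  \bar C = \begin{pmatrix} C & 0 \end{pmatrix}.
\end{aligned}
```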