Statistical Learning for High-dimensional Dynamical Systems: Diffusions on Manifolds & Agent-based Systems

Seminar:
Thursday, November 15, 2018
3:30PM – 4:30PM
POB 6.304

Mauro Maggioni

We discuss geometry-based statistical learning techniques for learning approximations to certain classes of high-dimensional dynamical systems.

In the first scenario, we consider systems that are well-approximated by a stochastic process of diffusion type on a low-dimensional manifold. Neither the process nor the manifold is known: we assume we only have access to a (typically expensive) simulator that can return short paths of the stochastic system, given an initial condition. We introduce a statistical learning framework for estimating local approximations to the system and for stitching these pieces together into a fast global reduced model, called ATLAS. ATLAS is guaranteed to be accurate in the sense of producing stochastic paths whose distribution is close to that of paths generated by the original system, not only at small time scales but also at very large time scales (under suitable assumptions on the dynamics). We discuss applications to homogenization of rough diffusions in low and high dimensions, to relatively simple systems with separation of time scales, and to deterministic chaotic systems in high dimensions that are well-approximated by stochastic, diffusion-like equations.
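The local-estimation step behind this kind of construction can be illustrated with a toy sketch. This is not the ATLAS algorithm itself, only the idea of querying a black-box simulator with bursts of short paths and reading off local drift and diffusion coefficients from the increments; the `simulator` here is a hypothetical stand-in (a 1D Ornstein-Uhlenbeck process) for the expensive system.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x0, dt=1e-3, n_steps=10):
    """Stand-in for an expensive black-box simulator: Euler-Maruyama
    steps for dX = -X dt + 0.5 dW (a 1D Ornstein-Uhlenbeck process)."""
    x = x0
    for _ in range(n_steps):
        x += -x * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
    return x

def local_estimates(x0, n_paths=5000, dt=1e-3, n_steps=10):
    """Estimate the local drift b(x0) and squared diffusion sigma(x0)^2
    from a burst of short paths: over a short horizon t, increments have
    mean ~ b(x0) * t and variance ~ sigma(x0)^2 * t."""
    t = n_steps * dt
    ends = np.array([simulator(x0, dt, n_steps) for _ in range(n_paths)])
    increments = ends - x0
    return increments.mean() / t, increments.var() / t

b_hat, s2_hat = local_estimates(1.0)  # true values here: b = -1, sigma^2 = 0.25
```

A global reduced model would repeat this at many well-spread base points on the (unknown) manifold and stitch the resulting local diffusions together; the sketch shows only one such local estimate.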

In the second scenario, we consider a system of interacting agents: given only observed trajectories of the system, we are interested in estimating the interaction laws between the agents. We consider both the mean-field limit (i.e. the number of agents going to infinity) and the case of a finite number of agents with an increasing number of observations. We show that, at least in particular cases where the interaction is governed by an (unknown) function of pairwise distances, the high dimensionality of the state space of the system does not affect the learning rates. We prove that in these cases we can in fact achieve an optimal learning rate for the interaction kernel, equal to that of a one-dimensional regression problem. We exhibit efficient algorithms for constructing our estimator for the interaction kernels, with statistical guarantees, and demonstrate them on various simple examples.
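To make the regression structure concrete, here is a hypothetical miniature version of the problem (not the estimator from the talk): 1D agents evolve under a first-order model driven by an unknown kernel of pairwise distances, and the kernel is recovered by least squares on a piecewise-constant basis in the distance variable. The names `phi_true`, `simulate`, and `fit_kernel` are inventions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi_true(r):
    """Ground-truth interaction kernel (treated as unknown by the estimator)."""
    return np.exp(-r)

def simulate(N=20, T=200, dt=0.01):
    """First-order agent model in 1D:
    dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|) * (x_j - x_i)."""
    x = rng.uniform(-2.0, 2.0, size=N)
    traj = [x.copy()]
    for _ in range(T):
        diff = x[None, :] - x[:, None]           # diff[i, j] = x_j - x_i
        v = (phi_true(np.abs(diff)) * diff).mean(axis=1)
        x = x + dt * v
        traj.append(x.copy())
    return np.array(traj), dt

def fit_kernel(traj, dt, n_bins=10, r_max=4.0):
    """Least-squares estimate of phi on a piecewise-constant basis in r:
    each observed velocity is linear in the unknown bin values of phi,
    so the problem reduces to a one-dimensional regression in r."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    rows, targets = [], []
    for t in range(len(traj) - 1):
        x = traj[t]
        v = (traj[t + 1] - x) / dt               # observed velocities
        diff = x[None, :] - x[:, None]
        bins = np.clip(np.digitize(np.abs(diff), edges) - 1, 0, n_bins - 1)
        for i in range(len(x)):
            row = np.zeros(n_bins)
            np.add.at(row, bins[i], diff[i])     # sum of (x_j - x_i) per bin
            rows.append(row / len(x))
            targets.append(v[i])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return edges, coef

traj, dt = simulate()
edges, phi_hat = fit_kernel(traj, dt)
# phi_hat[k] approximates phi on [edges[k], edges[k+1]) where data is dense
```

Note that although the agents' joint state is N-dimensional, the unknown enters only through the scalar distance r, which is why the fit behaves like a one-dimensional regression.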

Bio
Dr. Mauro Maggioni is a Bloomberg Distinguished Professor of Mathematics, and of Applied Mathematics and Statistics, at Johns Hopkins University. He works at the intersection of harmonic analysis, approximation theory, high-dimensional probability, statistical and machine learning, model reduction, stochastic dynamical systems, spectral graph theory, and statistical signal processing. He received his B.Sc. in Mathematics summa cum laude from the Università degli Studi di Milano in 1999 and his Ph.D. in Mathematics from Washington University in St. Louis in 2002. He was a Gibbs Assistant Professor in Mathematics at Yale University until 2006, when he moved to Duke University, becoming Professor of Mathematics, Electrical and Computer Engineering, and Computer Science in 2013. He received the Popov Prize in Approximation Theory in 2007, an NSF CAREER award and a Sloan Fellowship in 2008, and was selected as a Fellow of the American Mathematical Society in 2013.

Hosted by Omar Ghattas and George Biros