KU Probability and Statistics Seminar (Spring 2019)
The seminars were held on Wednesdays, 4pm-5pm, in Snow 306.
February 21 (Smith Colloquium)
Sunder Sethuraman (University of Arizona)
We consider a family of stochastic models of evolving two-dimensional Young diagrams, given in terms of certain energies, with Gibbs invariant measures. `Static' scaling limits of the shape functions, under these Gibbs measures, have been much studied over the years. In this talk, we discuss the corresponding `dynamical' limits, which are less understood. We show that the hydrodynamic scaling limits of the Young diagram shape functions may be described by different types of parabolic PDEs, depending on the energy structure.
February 28 (Smith Colloquium)
Leonid Petrov (University of Virginia)
Asymptotic representation theory of symmetric groups is a rich and beautiful subject with deep connections to probability, mathematical physics, and algebraic combinatorics. A one-parameter deformation of this theory, related to infinite random matrices over a finite field, leads to a randomization of the classical Robinson-Schensted correspondence between words and Young tableaux. Exploring such randomizations, we find unexpected applications to six-vertex (square ice) type models and traffic systems on a one-dimensional lattice.
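As background for the abstract above: the classical (deterministic) Robinson-Schensted correspondence that the talk randomizes is built on row insertion, which turns a word into a Young tableau via repeated "bumping." A minimal sketch, with illustrative function names (the talk's randomized version and its applications are not reproduced here):

```python
from bisect import bisect_right

def rs_insert(tableau, x):
    """Row-insert x into a Young tableau (a list of weakly increasing rows)."""
    for row in tableau:
        # Find the leftmost entry strictly greater than x.
        i = bisect_right(row, x)
        if i == len(row):
            # x is at least as large as everything in this row: append it.
            row.append(x)
            return tableau
        # Otherwise bump that entry and insert it into the next row.
        row[i], x = x, row[i]
    # The bumped entry fell off the bottom: start a new row.
    tableau.append([x])
    return tableau

def robinson_schensted_shape(word):
    """Shape (row lengths) of the insertion tableau of a word."""
    tableau = []
    for x in word:
        rs_insert(tableau, x)
    return [len(row) for row in tableau]
```

For example, by Schensted's theorem the length of the first row of the resulting shape equals the length of the longest increasing subsequence of the word; the randomized correspondences in the talk replace the deterministic bumping rule with random choices.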
May 9 (Smith Colloquium)
Jiequn Han (Princeton University)
Recent work linking deep neural networks and dynamical systems has opened up new avenues for analyzing deep learning. In particular, it has been observed that new insights can be obtained by recasting deep learning as an optimal control problem on difference or differential equations. However, the mathematical aspects of such a formulation have not been systematically explored. This talk introduces a mathematical formulation of the population risk minimization problem in deep learning as a mean-field optimal control problem. Mirroring the development of classical optimal control, we state and prove optimality conditions of both the Hamilton-Jacobi-Bellman type and the Pontryagin type. These mean-field results reflect the probabilistic nature of the learning problem. In addition, by appealing to the mean-field Pontryagin maximum principle, we establish some quantitative relationships between the population and empirical learning problems. This serves to establish a mathematical foundation for investigating the algorithmic and theoretical connections between optimal control and deep learning.