Inverse problems are omnipresent in many scientific fields such as systems biology, engineering, medical imaging, and geophysics. The main challenges to obtaining meaningful real-time solutions of large, data-intensive inverse problems are ill-posedness, large parameter dimensions, and complex model constraints. This talk addresses these computational challenges by exploiting a combination of tools from applied linear algebra, parameter estimation, optimization, and statistics. For instance, for large-scale ill-posed inverse problems, approximate solutions are computed using a regularization method that solves a nearby well-posed problem. Oftentimes, selecting a proper regularization parameter is the most critical and computationally intensive task and may hinder real-time computation of the solution. We present a new framework for solving ill-posed inverse problems by computing optimal regularized inverse matrices. We further discuss randomized Newton and randomized quasi-Newton approaches for efficiently solving large linear least-squares problems, where very large data sets present a significant computational burden (e.g., the size may exceed computer memory, or data are collected in real time). In this framework, randomness is introduced as a means to overcome computational limitations, and probability distributions that exploit structure and/or sparsity are considered. We present numerical examples from deblurring, tomography, and machine learning to illustrate the challenges and our proposed methods.
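To give a flavor of the randomized least-squares setting, the following minimal sketch-and-solve example compresses a tall least-squares problem with a Gaussian random matrix. This is a generic randomized method, not the optimal-regularized-inverse or randomized (quasi-)Newton approaches of the talk, and all sizes and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tall least-squares problem min ||Ax - b||: m data rows, n unknowns.
m, n = 10000, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Sketch-and-solve: compress the m rows with a random matrix S (s << m),
# then solve the much smaller s-by-n problem in its place.
s = 400
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Compare against the full (expensive) solve.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
print(rel_err)
```

The sketched problem costs a fraction of the full solve, and for well-conditioned problems the relative error is governed by the sketch size s rather than the data size m.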
Matthias (Tia) Chung is an Assistant Professor in the Department of Mathematics at Virginia Tech and a member of the Computational Modeling and Data Analytics division in the Academy of Integrated Science. He joined Virginia Tech in 2012; he holds a Dipl. math. (Master of Science equivalent) from the University of Hamburg, Germany, and a Dr. rer. nat. (Ph.D. equivalent) in Computational Mathematics from the University of Lübeck, Germany. Before joining Virginia Tech, he was a Post-Doctoral Fellow at Emory University and an Assistant Professor at Texas State University. Matthias Chung is an active member of the Society for Industrial and Applied Mathematics (SIAM) and its CSE, UQ, IS, and LA activity groups. His research concerns various forms of cross-disciplinary inverse problems. Driven by applications, he and his lab develop and analyze efficient numerical methods for inverse problems. Applications of interest include, but are not limited to, systems biology, medical and geophysical imaging, and dynamical systems. Challenges such as ill-posedness, large scale, and uncertainty estimation are addressed by utilizing tools from, and developing methods for, regularization, randomized methods, stochastic learning, Bayesian inversion, and optimization. His research projects are supported by NSF, NIH, and USDA.
Models are imperfect representations of complex physical processes. Representing the uncertainties caused by using inadequate models is crucial to making reliable predictions. We present a model inadequacy representation in the form of a stochastic operator acting on the state variable and discuss methods of incorporating prior knowledge of model shortcomings and relevant physics. This formulation is developed in the context of an inadequate model for contaminant transport through heterogeneous porous media.
We introduce a novel framework for imaging and removal of multiples from waveform data based on model order reduction. The reduced order model (ROM) is an orthogonal projection of the wave equation propagator (Green's function) on the subspace of discretely sampled time domain wavefield snapshots. The projection can be computed just from the knowledge of the boundary waveform data using the block Cholesky factorization. Once the ROM is found, its use is twofold.
First, given rough knowledge of the kinematics, the projected propagator can be backprojected to obtain an image of reflectors in the medium. The ROM computation implicitly orthogonalizes the wavefield snapshots. This highly nonlinear procedure differentiates our approach from conventional linear migration methods (Kirchhoff, RTM). It allows us to resolve the reflectors independently of knowledge of the kinematics and to untangle the nonlinear interactions between the reflectors. As a consequence, the resulting images are almost completely free of multiple-reflection artifacts.
Second, the ROM computed from the original, multiply scattered waveform data can be used to generate the Born data set, i.e., the data that the measurements would produce if the propagation of waves in the unknown medium obeyed the Born approximation instead of the full wave equation. Such data contain only primary reflections, so the multiples are removed; moreover, the multiply scattered energy is mapped back to the primaries. Consequently, existing linear imaging and inversion techniques can be applied to the Born data to obtain reconstructions in a direct, non-iterative manner.
We discuss geometry-based statistical learning techniques for performing model reduction and modeling of certain classes of stochastic high-dimensional dynamical systems. We consider two complementary settings. In the first, we are given long trajectories of a system, e.g., from molecular dynamics, and we estimate, in a robust fashion, an effective number of degrees of freedom, which may vary across the state space, and a local scale at which the dynamics is well approximated by a reduced dynamics with a small number of degrees of freedom. We then use these ideas to produce an approximation to the generator of the system and obtain, via eigenfunctions of an empirical Fokker-Planck equation (constructed from data), reaction coordinates for the system that capture its large-time behavior. We present various examples from molecular dynamics illustrating these ideas.
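A minimal illustration of the first step, estimating an effective number of degrees of freedom that depends on the scale of observation, can be given via local PCA on point clouds. This is a generic sketch under simple assumptions (a noisy closed curve embedded in R^3), not the speaker's full multiscale methodology; the 95% variance threshold and radii are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data on a 1-D curve embedded in R^3 with small noise: the intrinsic
# number of degrees of freedom is 1, while the ambient dimension is 3.
t = rng.uniform(0, 2 * np.pi, 2000)
X = np.stack([np.cos(t), np.sin(t), 0.1 * np.sin(2 * t)], axis=1)
X += 0.01 * rng.standard_normal(X.shape)

def local_dimension(X, x0, r):
    """Estimate the intrinsic dimension at x0 by PCA on points within radius r."""
    nbrs = X[np.linalg.norm(X - x0, axis=1) < r]
    nbrs = nbrs - nbrs.mean(axis=0)
    s = np.linalg.svd(nbrs, compute_uv=False)
    frac = s**2 / np.sum(s**2)
    # Number of directions needed to explain 95% of the local variance.
    return int(np.searchsorted(np.cumsum(frac), 0.95) + 1)

# At a small scale the curve looks 1-dimensional; at the scale of the whole
# data set it fills a 2-dimensional region, so the estimate is scale-dependent.
print(local_dimension(X, X[0], r=0.2))
print(local_dimension(X, X[0], r=5.0))
```

The scale-dependence of the estimate is the point: a robust method must also select a good local scale r, which is part of what the talk addresses.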
In the second setting we only have access to a (large number of expensive) simulators that can return short paths of the stochastic system. We introduce a statistical learning framework for estimating local approximations to the system, which can be (automatically) pieced together to form a fast global reduced model, called ATLAS. ATLAS is guaranteed to be accurate (in the sense of producing stochastic paths whose distribution is close to that of paths generated by the original system) not only at small time scales but also at large time scales, under suitable assumptions on the dynamics. We discuss applications to homogenization of rough diffusions in low and high dimensions, as well as to relatively simple systems with separation of time scales, and to deterministic chaotic systems in high dimensions that are well approximated by stochastic diffusion-like equations.
Dr. Mauro Maggioni is a Bloomberg Distinguished Professor of Mathematics and of Applied Mathematics and Statistics at Johns Hopkins University. He works at the intersection of harmonic analysis, approximation theory, high-dimensional probability, statistical and machine learning, model reduction, stochastic dynamical systems, spectral graph theory, and statistical signal processing. He received his B.Sc. in Mathematics summa cum laude from the Università degli Studi di Milano in 1999 and his Ph.D. in Mathematics from Washington University in St. Louis in 2002. He was a Gibbs Assistant Professor in Mathematics at Yale University until 2006, when he moved to Duke University, becoming Professor of Mathematics, Electrical and Computer Engineering, and Computer Science in 2013. He received the Popov Prize in Approximation Theory in 2007, an NSF CAREER award and a Sloan Fellowship in 2008, and was named a Fellow of the American Mathematical Society in 2013.
In this presentation we consider a numerical approximation technique for the Boltzmann equation based on a moment-system approximation in the velocity dependence and a discontinuous Galerkin (DG) finite-element approximation in the position dependence. The closure relation for the moment systems derives from the minimization of a suitable divergence. This divergence-based closure yields a hierarchy of tractable symmetric hyperbolic moment systems that retain the fundamental structural properties of the Boltzmann equation. The resulting combined DG moment method corresponds to a Galerkin approximation of the Boltzmann equation in renormalized form. The new moment-closure formulation engenders a new upwind numerical flux function that ensures entropy stability of the DG finite-element approximation. The Galerkin form of moment methods enables the estimation of a posteriori errors, while the hierarchical structure provides an intrinsic mode for local model refinement. We will present numerical results for the DG finite-element moment method and for goal-oriented adaptive refinement.
Turbulence is a complex fluid phenomenon that is ubiquitous in high Reynolds number flows. One of the biggest challenges in understanding turbulence lies in computational modeling. This challenge stems from the fact that turbulence comprises a wide range of scales that interact with each other. The problem is particularly difficult in the case of wall-bounded turbulence because the presence of a wall introduces new length scales. Large Eddy Simulation (LES) directly represents large-scale turbulent motions and models the effects of small-scale motions. However, in the near-wall region the large, dynamically important eddies scale in viscous wall units, which makes resolving them very expensive. This motivates a wall-modeled LES approach in which the near-wall region is modeled. This approach will be very useful in engineering design, for instance in computing the flow over an aircraft or simulating the reacting flow inside a jet engine. It can also be applied to modeling atmospheric flow, providing a basis for better weather-prediction simulations. To aid in the development of new wall models, we pursue an asymptotic analysis of the filtered Navier-Stokes equations in the limit in which the horizontal filter scale is large compared to the thickness of the wall layer. We show that in this limit the filtered velocities in the near-wall layer are determined to zeroth order by the filtered velocities at the boundary of the wall layer. Further, the asymptotics suggest that there is a scaled universal velocity profile f in the near-wall region. The profile f is evaluated through analysis of DNS data from channel flow at Reτ = 5200. We use the resulting profile f to formulate a predictive near-wall model. The model depends only on the filtered velocities at the boundary of the near-wall layer and can supply boundary conditions for a wall-modeled LES.
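For context, the classical equilibrium wall model based on the logarithmic law of the wall illustrates how a universal near-wall profile supplies boundary conditions for wall-modeled LES. This is the textbook log-law model, not the filtered-profile model f developed in the talk, and all numbers below are illustrative:

```python
import numpy as np

# Given the LES velocity U at a height y off the wall, solve the log law
#   U / u_tau = (1/kappa) * ln(y * u_tau / nu) + B
# for the friction velocity u_tau by Newton iteration; u_tau then supplies
# the wall shear stress tau_w = rho * u_tau**2 as a boundary condition.
kappa, B = 0.41, 5.2

def friction_velocity(U, y, nu, iters=50):
    u_tau = 0.05 * U                       # initial guess
    for _ in range(iters):                 # Newton iteration on the log law
        f = U / u_tau - np.log(y * u_tau / nu) / kappa - B
        df = -U / u_tau**2 - 1.0 / (kappa * u_tau)
        u_tau -= f / df
    return u_tau

# Channel-flow-like check: a velocity sampled from the log law at y+ = 1000
# should return the friction velocity used to generate it.
nu = 1e-5
u_tau_true = 0.05
yp = 1000.0
y = yp * nu / u_tau_true
U = u_tau_true * (np.log(yp) / kappa + B)
print(friction_velocity(U, y, nu))
```

The model of the talk plays the same structural role (velocities at the layer edge in, wall boundary condition out) but replaces the log law with the filtered profile f obtained from the asymptotic analysis.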
Spatial organization of DNA into chromosomes, which can be characterized by cell-specific interactions between local or long-range segments, is thought to contribute to controlling and regulating essential nuclear functions in cells, but is only partially understood. In this talk, we describe a multi-scale approach that has been applied to understand this problem. Using molecular dynamics simulations, we showed that DNA structure can be significantly affected by epigenetic modifications such as DNA methylation, the distribution of which shows disease (e.g., cancer)-dependent scale-invariant behavior in various biological samples. We then utilized Hi-C data to construct models for autosomes of different types of human cells and show that the epigenetic markers are strongly related to the 3-D chromosome structure. The higher-order structure of the chromosome is thus thought to be strongly affected by the DNA sequence. The DNA sequence is characterized by multi-scale clustering of dinucleotides, and the human genome can be roughly viewed as a co-polymer. We attempt to connect this property of the genome to higher-order chromosome structure formation, including topologically associating domains (TADs) and compartments. Finally, we will discuss the tissue-specific organization of the chromosome structure in differentiation, development, and disease, in which segregation/intermingling of DNA segments of different properties, characterized by CGI (CpG island) distribution and transcription factor binding, plays important roles.
Speaker Affiliation: Assistant Professor & Steve Hsu Keystone Research Faculty Scholar, Multiscale Computational Physics Laboratory, Department of Mechanical and Nuclear Engineering, Kansas State University
The coupling between the intrinsic angular momentum and the hydrodynamic linear momentum is known to be prominent in fluid flows involving physics across multiple length and time scales, e.g., turbulence, nonequilibrium flows, and flows at the micro-/nano-scale. Since the classical Navier-Stokes equations and Boltzmann's kinetic theory are derived on the basis of monatomic gases or volumeless points, efforts to derive constitutive equations involving intrinsic rotation for fluids of polyatomic molecules date back to the 1960s. One such continuum theory for polyatomic molecules is Morphing Continuum Theory (MCT). The theory was originally formulated within the framework of rational continuum mechanics and the thermodynamics of irreversible processes. This mathematically rigorous continuum mechanics yields a complete and closed set of governing equations, but leaves the physical meanings unexplained. In analogy with the correlation between Boltzmann's kinetic theory and classical continuum mechanics, an advanced kinetic theory involving the Boltzmann-Curtiss (B-C) distribution function and the B-C equation will be introduced for a morphing continuum. The method of the most probable distribution is used to derive the Boltzmann-Curtiss distribution. The corresponding Boltzmann-Curtiss equations will be shown to reduce to the MCT governing equations without any dissipation terms, i.e., when the system (a flow with inner structure) is in equilibrium at the Boltzmann-Curtiss distribution. A first-order approximation to the B-C distribution will then be used to derive the B-C transport equations, and the corresponding governing equations will be compared with the MCT equations. Furthermore, a path to reduce the presented MCT equations to the classical N-S equations will be demonstrated and discussed.
Dr. James M. Chen is an Assistant Professor of Mechanical Engineering and the Steve Hsu Keystone Research Faculty Scholar at Kansas State University. He earned his B.S. at National Chung-Hsing University (2000), his M.S. at National Taiwan University (2005), and his Ph.D. in mechanical and aerospace engineering and applied mathematics (minor) at The George Washington University (2011). He joined Kansas State University as an Assistant Professor in 2015. Prior to joining K-State, he was a faculty member in the Penn State University system (2012-2015). He has published more than 30 peer-reviewed journal articles on multiscale computational mechanics, fracture mechanics, theoretical and computational fluid dynamics, and atomistic simulation for thermo-electro-mechanical coupling. He received the Young Investigator Award from the Air Force Office of Scientific Research in 2017. His research at MCPL has been supported by AFOSR, NSF, and NASA. His current interests are the kinetic description of Morphing Continuum, compressible turbulence, supersonic/hypersonic flows, atomistic electrodynamics, triboelectricity, and high-level programming.
We present the Sequential Ensemble Transform (SET) method for generating approximate samples from a posterior distribution as a solution to Bayesian inverse problems. The method explores the posterior by solving a sequence of discrete, linear optimal transport problems, resulting in a series of transport maps that carry prior samples to posterior samples. This allows us to efficiently characterize statistical properties of quantities of interest, quantify uncertainty, and compute moments. We present theory proving that the sequence of Dirac mixture distributions generated by the SET method converges to the true posterior. Numerically, we show that the method can offer superior computational efficiency when compared to resampling-based Sequential Monte Carlo (SMC) methods in the regime of few mutation steps and small ensemble size, the regime where particle degeneracy is likely to occur.
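As a minimal illustration of the transport idea: in one dimension, the discrete optimal transport map from an equally weighted ensemble to an importance-weighted one reduces to monotone rearrangement (sorting and inverting the weighted empirical CDF). The following sketch is a single-step, 1-D caricature on a conjugate Gaussian toy problem, not the full SET algorithm; the problem sizes and observation values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D Bayesian inverse problem: prior x ~ N(0, 1), one observation
# y = x + noise with noise ~ N(0, sigma^2); the posterior is Gaussian with
# mean y / (1 + sigma^2) and variance sigma^2 / (1 + sigma^2).
N = 5000
x = rng.standard_normal(N)                    # equally weighted prior samples
y, sigma = 1.0, 0.5
logw = -0.5 * ((y - x) / sigma) ** 2          # log-likelihood weights
w = np.exp(logw - logw.max())
w /= w.sum()                                  # normalized importance weights

# 1-D optimal transport to the weighted ensemble: sort the samples, build the
# weighted empirical CDF, and push equally spaced quantile levels through it.
order = np.argsort(x)
xs, ws = x[order], w[order]
cdf = np.cumsum(ws)
levels = (np.arange(N) + 0.5) / N
idx = np.minimum(np.searchsorted(cdf, levels), N - 1)
posterior = xs[idx]                           # equally weighted posterior samples

# For y = 1, sigma = 0.5: exact posterior mean 0.8, variance 0.2.
print(posterior.mean(), posterior.var())
```

Unlike multinomial resampling, the sorted construction is deterministic given the ensemble, which is one reason transport-based updates can behave better at small ensemble sizes.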
Since the first observation of a nonlinear process in light-matter interaction, increasingly powerful lasers have opened new ways to explore nonlinear phenomena, including intense light filament propagation in the atmosphere. Motivated by experiments and applications, in this talk I will present a brief overview of the basic principles leading to nonlinear Schrödinger-like equations, evolving into current challenges in modeling and simulation.
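As a concrete example of simulating such nonlinear Schrödinger-type models, the standard split-step Fourier scheme for the focusing cubic NLS (a textbook method, sketched here for illustration) propagates a bright soliton and can be checked against the exact solution:

```python
import numpy as np

# Split-step Fourier integration of the focusing cubic NLS
#   i u_t + (1/2) u_xx + |u|^2 u = 0,
# whose bright soliton u(x, t) = sech(x) * exp(i t / 2) is an exact solution.
L, n = 40.0, 1024
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

dt, steps = 1e-3, 2000
u = 1 / np.cosh(x)                            # soliton initial condition

for _ in range(steps):
    u = u * np.exp(1j * dt * np.abs(u) ** 2)  # nonlinear substep (exact phase)
    u = np.fft.ifft(np.exp(-1j * dt * k**2 / 2) * np.fft.fft(u))  # linear substep

t = dt * steps
exact = np.exp(1j * t / 2) / np.cosh(x)
print(np.max(np.abs(u - exact)))              # splitting error, O(dt)
```

Both substeps are solved exactly, so the scheme conserves the mass (L2 norm) to machine precision; the only error is the first-order splitting error, which Strang splitting would reduce to second order.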
He earned his MS in Applied Mathematics at the California Institute of Technology in 1983 and his PhD in Applied Mathematics at the University of Arizona in 1988. Between 1989 and 2008, he moved through the ranks from Assistant to Full Professor of Mathematics at the University of New Mexico, where he served as Chair of the Department of Mathematics and Statistics between 2004 and 2008. He is currently Professor and Department Chair of Mathematics at Southern Methodist University. He has held visiting positions at Brown University; the Università di Brescia, Italy; the University of Limoges and the University of Rouen, France; and Deusto Tech, Bilbao. He has been a visiting scientist at the Los Alamos National Laboratory and the US Air Force Laboratory. His main research area is modeling in nonlinear optics and photonics. In 2016, he was elected Fellow of the Optical Society of America.