Earthquakes are highly non-linear multiscale problems, encapsulating the geometry and rheology of propagating shear fractures that rend the Earth’s crust and radiate destructive seismic waves. Physics-based earthquake scenarios, modern numerical methods and hardware-specific optimizations shed light on the dynamics, and severity, of earthquake behaviour. This is enabled by the open-source software SeisSol (www.seissol.org), which couples seismic wave propagation of high-order accuracy in space and time (minimal dispersion errors) with frictional fault failure, off-fault inelasticity and visco-elastic attenuation. SeisSol exploits unstructured tetrahedral meshes to account for complex geometries, e.g. high-resolution topography and bathymetry, 3D subsurface structure, and complex fault networks. The achieved degree of realism and accuracy is enabled by recent computational optimizations targeting strong scalability on many-core CPUs and by a ten-fold speedup owing to an efficient local time-stepping algorithm (Uphoff et al., SC’17).
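Where the payoff of local time-stepping comes from can be illustrated with a back-of-the-envelope cost model. The sketch below is a simplification, not SeisSol's actual scheme: it bins each element into the largest power-of-two multiple of the global minimum time step allowed by that element's own CFL limit, then compares element-update counts against global time-stepping, in which a few tiny elements force every element onto the smallest step.

```python
import numpy as np

def lts_speedup(dt_elem, rate=2):
    """Estimate the update-count speedup of rate-2 clustered local
    time-stepping (LTS) over global time-stepping (GTS).

    dt_elem : array of per-element CFL time steps.
    Each element is binned into the largest cluster step
    dt_min * rate**k that does not exceed its own CFL limit.
    """
    dt_elem = np.asarray(dt_elem, dtype=float)
    dt_min = dt_elem.min()
    # cluster index k for each element
    k = np.floor(np.log(dt_elem / dt_min) / np.log(rate)).astype(int)
    dt_cluster = dt_min * rate**k
    # element updates per unit of simulated time
    cost_gts = dt_elem.size / dt_min
    cost_lts = np.sum(1.0 / dt_cluster)
    return cost_gts / cost_lts

# a mesh where a few tiny elements force a small global step
dt = np.concatenate([np.full(10, 1e-3), np.full(990, 1e-2)])
print(f"estimated LTS speedup: {lts_speedup(dt):.1f}x")
```

In this contrived mesh, 1% of the elements are ten times smaller than the rest; clustered LTS recovers most of the wasted work, consistent with the order-of-magnitude gains reported for realistic meshes.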
The potential of in-scale earthquake rupture simulations for augmenting earthquake source observations is demonstrated in two recent examples: i) The 2016 $M_w$7.8 Kaikoura, New Zealand earthquake, considered the most complex rupture observed to date, which caused surface rupture on at least 21 segments of the Marlborough fault system; high-resolution dynamic rupture modeling unravels the event's riddles in a physics-based manner (Ulrich et al., Nature Comm. 2019, https://rdcu.be/bqZOI). ii) A “reloaded” scenario of the 1992 $M_w$7.3 Landers earthquake (Wollherr et al., preprint https://eartharxiv.org/kh6j9/), which produces high-quality synthetic ground motions whose variability is close to that commonly assumed in Ground Motion Prediction Equations, despite a very complex rupture evolution.
Lastly, I will discuss future directions for exploiting expected exascale computing infrastructure with the ExaHyPE high-performance engine for hyperbolic systems of PDEs (www.exahype.eu). Specifically, we aim to represent complex geometries with novel geometric transformations, and multi-physics by diffuse interfaces, on adaptive Cartesian meshes, thus avoiding manual meshing. I will also touch on a recently developed dynamic source inversion approach using a Bayesian framework, in which the posterior probability density function is sampled with the Parallel Tempering Monte Carlo algorithm.
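To illustrate why parallel tempering helps here, the toy sketch below is a generic textbook implementation on a contrived bimodal 1D "posterior", not the source inversion framework of the talk. A ladder of chains samples flattened (tempered) versions of the target; occasional swaps between adjacent temperatures let the cold chain visit modes that a single Metropolis chain with a small step size would rarely cross between.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # toy bimodal "posterior": equal-weight Gaussians at -3 and +3
    return np.logaddexp(-0.5 * ((x + 3) / 0.5)**2,
                        -0.5 * ((x - 3) / 0.5)**2)

temps = np.array([1.0, 4.0, 16.0, 64.0])   # temperature ladder
x = np.zeros(len(temps))                    # one walker per temperature
samples = []
for it in range(20000):
    # Metropolis step within each tempered chain, target pi(x)^(1/T)
    prop = x + rng.normal(0.0, 0.5, size=len(temps))
    log_acc = (log_post(prop) - log_post(x)) / temps
    accept = np.log(rng.random(len(temps))) < log_acc
    x = np.where(accept, prop, x)
    # propose a swap between a random adjacent pair of temperatures
    i = rng.integers(len(temps) - 1)
    log_swap = (log_post(x[i]) - log_post(x[i + 1])) \
        * (1 / temps[i + 1] - 1 / temps[i])
    if np.log(rng.random()) < log_swap:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                    # keep only the T = 1 chain
samples = np.array(samples[2000:])          # discard burn-in
print("fraction near each mode:",
      np.mean(samples < 0), np.mean(samples > 0))
```

With the hottest chain seeing an almost flat landscape, both modes are visited and exchanged down the ladder, whereas the cold chain alone would typically stay trapped near its starting mode.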
Alice-Agnes Gabriel is an Assistant Professor of Geophysics at Ludwig Maximilians University (LMU) of Munich. Her research focuses on understanding the physics of earthquakes using theoretical analysis, physics-based forward models, innovative observation techniques and high-performance computing to bridge spatio-temporal scales. She interlaces earthquake dynamics with long-term deformation, seismic cycles, tsunami genesis and laboratory experiments in large-scale collaborative research projects fusing expertise from Earth science, physics and computational mathematics to study the fundamentals of earthquake physics and develop methodological innovations for seismology. She is specifically interested in simulating waves and rupture processes within arbitrarily complex geological structures to enhance classic probabilistic seismic hazard assessment and a wide range of industry applications. Her career is distinguished by first-rate earthquake scenarios realized on some of the largest supercomputers worldwide. Her research has been recognized with the Best Paper Award at SC'17, as a Gordon Bell Prize finalist at SC'14, with a PRACE ISC Award in 2014 and with an AGU OSPA Award in 2012.
Dr. Gabriel has a BSc and MSc in theoretical physics from TU Dresden, Germany, a Ph.D. in seismology from ETH Zurich, Switzerland and was a postdoctoral scholar at Ludwig Maximilians University of Munich, Germany.
Topology optimization is able to provide unintuitive and innovative design solutions, and performance improvements (e.g. weight savings) in excess of 50% are not uncommonly demonstrated across a wide range of engineering design problems. With the rise of advanced materials and additive manufacturing, topology optimization has attracted much attention in recent years. This presentation will introduce topology optimization in structural design, fiber composites and architected materials. It will also include more recent advances in topology optimization, such as multiscale design optimization, which breaks down the barrier between material and structural design. Another direction of interest is large-scale topology optimization using the latest sparse data structures tailored to a novel level set method, where we have demonstrated order-of-magnitude improvements in both memory footprint and computation time. These efforts represent a pathway to applying topology optimization to complex multiphysics, multifunctional structures, which may be too complex to rely on designers’ intuition.
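The flavor of these methods can be conveyed with a deliberately tiny example. The sketch below uses a classic density-based (SIMP) formulation with an optimality-criteria update, not the level-set method discussed in the talk, and reduces the structure to a 1D chain of springs in series under a unit end load, where the compliance and its sensitivities are available in closed form.

```python
import numpy as np

def simp_1d(n=50, volfrac=0.4, p=3, eta=0.3, iters=100):
    """Minimal SIMP-style sizing optimization of a 1D chain of springs
    in series under a unit end load: minimize the compliance
    C = sum(x_i**-p) subject to the volume constraint
    sum(x_i) = volfrac * n, via a damped optimality-criteria update."""
    x = np.linspace(0.1, 0.9, n)
    x *= volfrac * n / x.sum()            # feasible, non-uniform start
    for _ in range(iters):
        dC = -p * x**(-(p + 1))           # sensitivity dC/dx_i
        # OC update x <- x * (-dC/lam)**eta; bisect lam to hit the volume
        lo, hi = 1e-9, 1e9
        for _ in range(100):
            lam = 0.5 * (lo + hi)
            x_new = np.clip(x * (-dC / lam)**eta, 1e-3, 1.0)
            if x_new.sum() > volfrac * n:
                lo = lam
            else:
                hi = lam
        x = x_new
    return x

x = simp_1d()
# for springs in series the optimal design is a uniform density
print("density range:", x.min(), x.max())
```

Because the springs act in series, the known optimum is a uniform density equal to the volume fraction, which makes this a convenient sanity check for the update scheme; real 2D/3D problems replace the closed-form compliance with a finite element solve.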
Dr. H Alicia Kim is Jacobs Scholar Chair Professor in the Structural Engineering Department of the University of California, San Diego and leads the Multiscale Multiphysics Design Optimization (M2DO) lab. Her interests are in design optimization for structures, including level set topology optimization, multiscale optimization, coupled multiphysics optimization, and modeling and optimization of composite materials and multifunctional structures. She has published around 200 journal and conference papers in these fields, including award-winning papers at the AIAA conferences and World Congresses on Structural and Multidisciplinary Optimization. Her research in topology optimization began in the 1990s at the University of Sydney, Australia, where she developed one of the first boundary-based topology optimization methods. She continued her research at the University of Warwick and the University of Bath, UK before moving to her current position in 2015.
Pseudospectral methods, based on high-degree polynomials, have spectral accuracy when solving differential equations but typically lead to dense, ill-conditioned matrices. The ultraspherical spectral method is a numerical technique for solving ordinary and partial differential equations that leads to almost banded, well-conditioned linear systems while maintaining spectral accuracy. In this talk, we introduce the ultraspherical spectral method and develop it into a spectral element method using a modification of a hierarchical Poincaré-Steklov domain decomposition method.
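The key trick is that differentiation of Chebyshev coefficients becomes sparse when the result is expressed in an ultraspherical basis. The following sketch applies this to the model problem u' = u on [-1, 1] with u(-1) = e^(-1), a test case chosen here for verifiability rather than taken from the talk: the derivative operator is a single superdiagonal into the C^(1) basis, the conversion operator is banded, and one boundary row closes the system.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def ultraspherical_solve(n=32):
    """Sketch of the ultraspherical spectral method for the model ODE
    u'(x) = u(x) on [-1, 1] with u(-1) = exp(-1) (exact solution e**x).

    Unknowns are Chebyshev-T coefficients of u. The derivative maps
    sparsely into the C^(1) ultraspherical basis, and S0 converts
    T-coefficients into that same basis, so D1 - S0 is banded rather
    than the dense lower-triangular matrix of classic pseudospectral
    differentiation.
    """
    # d/dx T_k = k * C^(1)_{k-1}: one superdiagonal
    D1 = np.zeros((n, n))
    for k in range(1, n):
        D1[k - 1, k] = k
    # conversion T -> C^(1):  T_0 = C_0,  T_k = (C_k - C_{k-2}) / 2
    S0 = np.zeros((n, n))
    S0[0, 0] = 1.0
    for k in range(1, n):
        S0[k, k] = 0.5
        if k >= 2:
            S0[k - 2, k] = -0.5
    # boundary bordering: u(-1) = sum_k (-1)**k a_k replaces one row
    A = np.vstack([(-1.0) ** np.arange(n), (D1 - S0)[:-1]])
    rhs = np.zeros(n)
    rhs[0] = np.exp(-1.0)
    return np.linalg.solve(A, rhs)

a = ultraspherical_solve()
print("u(0.5) =", chebval(0.5, a), " vs  exp(0.5) =", np.exp(0.5))
```

A dense solver is used here for brevity; the point of the method is that the assembled operator is almost banded, so a structured solver achieves optimal complexity in n.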
Prof. Alex Townsend is an assistant professor at Cornell University in the Mathematics Department. His research is in Applied Mathematics and focuses on spectral methods, low-rank techniques, orthogonal polynomials, and fast transforms. Prior to Cornell, he was an Applied Math instructor at MIT (2014-2016) and a DPhil student at the University of Oxford (2010-2014). He was awarded a SIGEST paper award in 2019, the SIAG/LA Early Career Prize in applicable linear algebra in 2018, and the Leslie Fox Prize in numerical analysis in 2015.
Magnetic resonance imaging (MRI) is a powerful and versatile imaging technology, which has provided unprecedented capabilities to probe the structural, functional, and metabolic information of living systems. Since its inception, the MR imaging process has been formulated as a “communication” problem – i.e., it involves both encoding and decoding. The encoding process maps an underlying image function that depends on physical, physiological, and experimental parameters into sensory data utilizing spin physics; and the decoding process reconstructs this desired image function from the measured data. This long-standing encoding/decoding model often results in poor trade-offs between image resolution, signal-to-noise ratio, and data acquisition speed, which limits the practical utility of high-dimensional MRI.
In this talk, I will present a novel imaging framework to tackle these challenges, by using an integrated encoding and decoding paradigm. The proposed framework leverages advanced mathematical models and algorithms to tightly integrate the encoding and decoding processes. It exploits the synergistic interactions between spin physics, statistical inference, and machine learning to help overcome major technical barriers of the existing MRI technologies. I will illustrate the power of this framework using two concrete approaches, i.e., subspace imaging and statistical imaging, and will highlight their impacts on applications in cardiovascular imaging and quantitative neuroimaging. Finally, I will discuss some exciting new opportunities with this framework that leverage the rapid development of advanced computing and machine learning technologies.
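The core idea behind subspace imaging can be sketched in a few lines. The toy example below is an illustration under strong simplifying assumptions, not the presented reconstruction pipeline: a dynamic image series is arranged as a Casorati matrix (voxels by time frames) that is exactly low rank, and projecting noisy data onto a low-dimensional temporal subspace via the truncated SVD suppresses noise that full-rank reconstruction would keep.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "subspace imaging" sketch: a dynamic image series whose Casorati
# matrix (voxels x time frames) is low rank, recovered from noisy data
# by projection onto an r-dimensional temporal subspace.
n_vox, n_t, r = 500, 100, 4
U_true = rng.standard_normal((n_vox, r))         # spatial coefficients
V_true = rng.standard_normal((r, n_t))           # temporal basis
C = U_true @ V_true                              # noiseless Casorati matrix
noisy = C + 0.5 * rng.standard_normal(C.shape)   # noisy "acquisition"

# truncated SVD = best rank-r approximation of the noisy data
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
recon = (U[:, :r] * s[:r]) @ Vt[:r]

err_noisy = np.linalg.norm(noisy - C) / np.linalg.norm(C)
err_recon = np.linalg.norm(recon - C) / np.linalg.norm(C)
print(f"relative error: noisy {err_noisy:.3f} -> subspace {err_recon:.3f}")
```

In practice the temporal subspace is estimated from training data or navigator acquisitions and the spatial coefficients are fit to undersampled k-space measurements, but the noise-averaging benefit of the low-dimensional representation is already visible in this sketch.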
Bo Zhao is a postdoctoral fellow at the Martinos Center for Biomedical Imaging of Harvard Medical School. Dr. Zhao received his Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign. His research lies in the general area of computational imaging and medical imaging, with emphasis on the modality of magnetic resonance imaging (MRI). He has focused on developing advanced mathematical models, computational algorithms, and data acquisition schemes to push the limits of MR imaging systems. His research has been recognized by several prestigious awards, including an NIH Pathway to Independence Award (K99/R00) and an NIH Ruth L. Kirschstein National Research Service Award Postdoctoral Fellowship. Dr. Zhao is an Associate Member of the Bioimaging and Signal Processing Technical Committee in the IEEE Signal Processing Society.
While modern deep learning algorithms have proven to be surprisingly powerful, their unstructured nature has posed challenges that are difficult to overcome. I will talk about two projects aiming to address some of these challenges. First, safety is a major obstacle to using controllers based on reinforcement learning on real robots. I will describe how to use structured models---in particular, decision trees---to enable safe reinforcement learning. Second, deep generative models have difficulty capturing global structure in images such as repetitions and symmetries. I will describe how to incorporate programmatic structure into these models to capture global structure.
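A minimal sketch of the distillation idea, fitting an interpretable decision tree to imitate a black-box controller, is given below. This is a generic pure-numpy illustration, not the specific safe-RL algorithm from the talk: the "teacher" is a stand-in linear rule on a pole-balancing-like 2D state space, and a small greedy CART-style tree is trained to reproduce its actions.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(S):
    # stand-in for a trained neural controller on a balancing task:
    # push right (1) when angle + 0.5 * angular velocity is positive
    return (S[:, 0] + 0.5 * S[:, 1] > 0).astype(int)

def fit_tree(X, y, depth):
    """Greedy axis-aligned decision tree (CART-style, 0/1 labels)."""
    if depth == 0 or len(set(y)) == 1:
        return int(round(y.mean()))
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            m = X[:, j] <= t
            if m.all() or not m.any():
                continue
            # size-weighted Gini impurity of the candidate split
            g = sum(len(w) * (1 - w.mean()**2 - (1 - w.mean())**2)
                    for w in (y[m], y[~m])) / len(y)
            if best is None or g < best[0]:
                best = (g, j, t, m)
    if best is None:
        return int(round(y.mean()))
    _, j, t, m = best
    return (j, t,
            fit_tree(X[m], y[m], depth - 1),
            fit_tree(X[~m], y[~m], depth - 1))

def predict(tree, s):
    while isinstance(tree, tuple):
        j, t, left, right = tree
        tree = left if s[j] <= t else right
    return tree

X = rng.uniform(-1, 1, size=(2000, 2))           # sampled states
tree = fit_tree(X, teacher(X), depth=6)
Xtest = rng.uniform(-1, 1, size=(1000, 2))
agree = np.mean([predict(tree, s) for s in Xtest] == teacher(Xtest))
print(f"tree/teacher agreement: {agree:.2%}")
```

Unlike a neural policy, the resulting tree is a finite set of axis-aligned rules, which is what makes downstream safety verification tractable.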
Osbert Bastani is a research assistant professor at the University of Pennsylvania. He is broadly interested in research at the intersection of machine learning and programming languages, and is currently working on trustworthy machine learning.
We discuss numerical simulations of oceanic internal gravity waves (IGWs) on a global scale, on US Navy, NASA, and European high performance computing platforms. IGWs are waves that exist on the interfaces between oceanic layers of different densities. IGWs of tidal frequency are known as internal tides. Beyond tidal frequencies, there is a spectrum of IGWs known as the IGW continuum. The rollover and breaking of IGWs controls most of the mixing in the open ocean beneath the mixed layer. IGWs also impact the speed of sound, and yield a measurable sea surface height (SSH) signal. Therefore IGWs are important for satellite altimetry missions, including the upcoming Surface Water and Ocean Topography (SWOT) mission, and for operational oceanography in general. We describe our work with the US Navy HYbrid Coordinate Ocean Model (HYCOM), in which we pioneered high-resolution global ocean models simultaneously forced by atmospheric fields and the astronomical tidal potential. We also examine newer simulations performed under similar conditions, on NASA supercomputers, with the Massachusetts Institute of Technology general circulation model (MITgcm). Finally, we briefly describe related work done with the European ocean forecasting model, the Nucleus for European Modeling of the Oceans (NEMO). We summarize several papers comparing the modeled internal tides and IGW continuum spectrum with altimetry and mooring observations. We briefly discuss the generation of the continuum spectrum and the potential implications for a better understanding of ocean mixing.