Past Events

Seminars are held Tuesdays and Thursdays in POB 6.304 from 3:30-5:00 pm, unless otherwise noted. Speakers include scientists, researchers, visiting scholars, potential faculty, and ICES/UT Faculty or staff. Everyone is welcome to attend. Refreshments are served at 3:15 pm.

Tuesday, Jul 17

Characteristics of Mixed Finite Element Approximations

Tuesday, Jul 17, 2018 from 3:30PM to 5PM | POB 6.304

  • Additional Information

    Hosted by Leszek Demkowicz

    Sponsor: ICES Seminar

    Speaker: Phillippe R. B. Devloo

    Speaker Affiliation: Professor, LabMeC research team, School of Civil Engineering UNICAMP, Brazil

  • Abstract

In this presentation we give an overview of research efforts in developing mixed finite element approximations of conservation laws. We develop hp-adaptive H(div)-conforming spaces in one, two and three dimensions by combining vector fields with H^1-conforming spaces. Because the vector fields are generated using Piola transformations, the H(div)-conforming spaces are applicable to two-dimensional manifolds and/or nonlinear geometric maps. It is shown that by increasing the internal order of approximation of elements with order-k boundary fluxes, convergence rates of order h^{k+1} for the flux and order h^{k+2} for the pressure are obtained.

    Arbitrary orders of approximation h^{k+n} for div(σ) can be obtained by further increasing the internal order of approximation.
    H(div) approximations can, similarly to H^1 approximations, benefit from the use of quarter-point element mappings for the resolution of singularities.
    H(div) approximations with internal bubble functions naturally lead to a procedure for computing highly efficient error estimators.

    We combine H(div) approximations with a multiscale hybrid mixed (MHM) approximation method to obtain a multiscale approximation method with local conservation.

    All numerical results were obtained with algorithms implemented in the NeoPZ programming environment, which is freely available on GitHub.
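    The convergence rates claimed in the abstract can be restated compactly as follows (a summary for orientation; the precise norms and regularity assumptions are those of the talk):

    ```latex
    % Mixed approximation with boundary fluxes of order k and
    % enriched internal (bubble) approximation:
    \begin{align*}
      \|\sigma - \sigma_h\|_{L^2} &= O\!\left(h^{k+1}\right), \\
      \|p - p_h\|_{L^2}           &= O\!\left(h^{k+2}\right), \\
      \|\nabla\cdot(\sigma - \sigma_h)\|_{L^2} &= O\!\left(h^{k+n}\right)
      \quad \text{(with further internal enrichment)}.
    \end{align*}
    ```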

Tuesday, Jul 10

  • Additional Information

    Hosted by Irene Gamba

    Sponsor: ICES Seminar

    Speaker: Jeff Haack

    Speaker Affiliation: Computational Physics and Methods Group Los Alamos National Laboratory

  • Abstract

    Staff, postdocs, and students in the Computational Physics and Methods Group at Los Alamos National Laboratory collaborate on multidisciplinary teams composed of engineers, physicists, applied mathematicians, and computer scientists. We cover application areas that include neutron and radiation transport, shock hydrodynamics, multiphase fluid dynamics, turbulent mixing, ocean dynamics for climate modeling, astrophysics, and plasma physics. In this talk I will give an overview of these various application areas from the numerical methods and algorithms perspective. Specifically, I will describe several production codes and libraries that are co-developed by staff in our group. These codes and libraries are being used on some of the largest supercomputers in the world to address questions of national interest.

Wednesday, Jun 20

Modelling and Simulation of Biological Systems

Wednesday, Jun 20, 2018 from 2PM to 4PM | POB 6.304

  • Additional Information

    Hosted by Mary Wheeler

    Sponsor: ICES Seminar

    Speaker: Gabriel Wittum

    Speaker Affiliation: ECRC, KAUST, Thuwal, Saudi Arabia and G-CSC, University of Frankfurt, Kettenhofweg 139, 60325 Frankfurt am Main, Germany

  • Abstract

    Biological systems are distinguished by their enormous complexity and variability. This makes mathematical modelling and computational simulation of such systems very difficult, in particular for detailed models based on first principles. The difficulties begin with geometric modelling, which must extract basic structures from highly complex and variable phenotypes while also taking the statistical variability into account. Moreover, the models of the processes running on these geometries are not yet well established, since they are equally complex and often couple many scales in space and time. Simulating such systems therefore always puts the whole chain to the test, from modelling to the numerical methods and software tools used for simulation. These need to be advanced in connection with validating simulation results against experiments. To treat problems of this complexity, novel mathematical models, methods and software tools are necessary. In recent years, such models, numerical methods and tools have been developed, making it possible to attack these problems. In the talk we consider two examples as paradigms for the process of modelling and simulation in the biosciences. The first example is the diffusion of xenobiotics through human skin; the second is the dynamics of vesicles in synapses of neurons.

    Professor Wittum’s research focuses on a general approach to modelling and simulation of problems from the empirical sciences, in particular using high performance computing (HPC). Particular areas of focus include: the development of advanced numerical methods for modelling and simulation, such as fast solvers like parallel adaptive multi-grid methods, allowing for application to complex realistic models; the development of corresponding simulation frameworks and tools; and the efficient use of top-level supercomputers for that purpose. These methods and tools are applied towards problem-solving in fields including computational fluid dynamics, environmental research, energy research, finance, neuroscience, pharmaceutical technology and beyond. He received his Dr. rer. nat. habil. from the University of Heidelberg (1991), his Dr. rer. nat. from Kiel University (1987), and a Diploma in Mathematics from the University of Karlsruhe (1983).

  • Additional Information

    Hosted by Alex Demkov

    Sponsor: Institute for Computational Engineering and Sciences (ICES)

    Speaker: Marvin Cohen, Leeor Kronik, Mei-Yin Chou, Alexey Zayak, Andrew Rappe, Amir Natan, John Joannopoulos, Chris Palmstrom, Igor Vasiliev, Renata Wentzcovitch, Steve Louie, Noa Marom, Serdar Ogut, Manish Jain, Gyeong Hwang, Alex Demkov, Tinsley Oden

  • Abstract

    By Invitation Only

Saturday, May 19

2018 CSEM Commencement

Saturday, May 19, 2018 | Bass Concert Hall

  • Additional Information

    Hosted by Robert Moser

    Sponsor: UT Graduate School

    Speaker: President Greg Fenves

  • Abstract

    Traditional ceremony to celebrate the conferring of master's and Ph.D. degrees. This year the following CSEM graduates will participate:

    PhD Ceremony (Noon)

    Federico Fuentes – May 2018 grad, Advisor: Professor Demkowicz

    John Hawkins – May 2018 grad, Advisor: Professors Press/Finkelstein

    Ellen Le – May 2018 grad, Advisor: Professors Bui/Nguyen

    Zhen (Jane) Tao – Dec 2017 grad, Advisor: Professor Arbogast

    Dhairya Malhotra – Dec 2017 grad, Advisor: Professor Biros

    Amir Gholaminejad – Aug 2017 grad, Advisor: Professor Biros

    Master’s Ceremony (9 am)

    Muneeza Azmat

    Abhishek Shende

Wednesday, May 9

High-order space-time approximations of dynamic poroelasticity models

Wednesday, May 9, 2018 from 1PM to 2:30PM | POB 6.304

  • Additional Information

    Hosted by Mary Wheeler

    Sponsor: ICES Seminar

    Speaker: Uwe Koecher

    Speaker Affiliation: Professor, Helmut-Schmidt-Universität / Universität der Bundeswehr Hamburg

  • Abstract

    The accurate high-order approximation in space and time is of fundamental importance for the simulation of dynamic poroelastic models which include coupled fluid flow, deformation and wave propagation.

    Dynamic poroelastic models appear, for example, in lithium-ion battery fast-charge simulations and involve sharp concentration and pressure gradients, high mechanical stresses, elastic wave propagation, memory effects on the permeability, multi-phase behaviour and electro-chemical reactions.

    In this contribution our high-order space-time discretisations are presented, including mixed finite elements (MFEM) for the flow variables and interior-penalty discontinuous Galerkin finite elements (IPDG) for the displacement and velocity variables. For the discretisation in time we use a high-order accurate discontinuous Galerkin dG(r) discretisation.

    The arising linear block systems are solved with our sophisticated monolithic solver technology with flexible multi-step fixed-stress preconditioning. Inside the preconditioner highly optimised system solvers for low order approximations can be used. Additionally, our solver technology allows for parallel-in-time application.

    The performance properties and their potential for battery simulations and further applications are illustrated by numerical experiments.
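    For orientation (a standard reference form, not taken verbatim from the talk), the dynamic Biot poroelasticity system underlying [4] couples momentum balance, mass balance and Darcy flow for the displacement u, pressure p and flux q:

    ```latex
    % Dynamic Biot system: alpha = Biot coefficient, c_0 = storage
    % coefficient, K = permeability over viscosity, sigma = effective stress.
    \begin{align*}
      \rho\,\partial_t^2 \boldsymbol{u}
        - \nabla\cdot\bigl(\boldsymbol{\sigma}(\boldsymbol{u})
        - \alpha\, p\, \boldsymbol{I}\bigr) &= \rho\,\boldsymbol{f}, \\
      \partial_t\bigl(c_0\, p + \alpha\,\nabla\cdot\boldsymbol{u}\bigr)
        + \nabla\cdot\boldsymbol{q} &= g, \\
      \boldsymbol{q} &= -\boldsymbol{K}\,\nabla p.
    \end{align*}
    ```

    The MFEM/IPDG splitting described above assigns (p, q) to the mixed flow discretisation and (u, ∂t u) to the interior-penalty discontinuous Galerkin discretisation.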

    [1] U. Koecher, M. Bause: A mixed discontinuous-continuous Galerkin time discretisation for Biot's system, Comput. Appl. Math., submitted, arXiv:1805.00771, 2018.

    [2] U. Koecher: Numerical investigation of the condition number of fully discrete systems from SIPG discretisations for elastic wave propagation. Numer. Math. Adv. Appl. ENUMATH 2017, submitted, p. 1-8, 2017.

    [3] J. Both, U. Koecher: Numerical investigation on the fixed-stress splitting scheme for Biot’s equations: Optimality of the tuning parameter, Numer. Math. Adv. Appl. ENUMATH 2017, submitted, p. 1-8, 2017.

    [4] M. Bause, F. Radu, U. Koecher: Space-time finite element approximation of the Biot poroelasticity system with iterative coupling, Comput. Meth. Appl. Mech. Engrg. 320:745-768, 2017.

Friday, Apr 27

Decision Entropy Theory: Establishing Objective Non-Informative Prior Probabilities or Accounting for Unknown Unknowns and Black Swans or A Crazy Idea -- ATTENTION: Back to POB 6.304

Friday, Apr 27, 2018 from 10AM to 11AM | POB 6.304

Important Update: ATTENTION: The seminar is being moved back to POB 6.304.
  • Additional Information

    Hosted by Federico Fuentes and Sriram Nagaraj

    Sponsor: ICES Seminar-Babuska Forum Series

    Speaker: Robert B. Gilbert

    Speaker Affiliation: Department of Civil, Architectural and Environmental Engineering, UT Austin

  • Abstract

    Probability Theory is based on starting with a comprehensive set of all possible events, known as the sample space. Probabilities for events in this prior sample space are independent of any information used for assessing probabilities; they are called Non-Informative Prior Probabilities. Bayes’ Theorem can be used to update these Non-Informative Prior Probabilities with all available information, including objective (data) and subjective (judgement) information.

    A persistent challenge is how to establish Non-Informative Prior Probabilities. How do we include all possibilities and assess their probabilities a priori without any information? How do we account for an unknown that is outside our range of experience? How do we include the possibility of black swans when we have only seen white swans? For centuries, a variety of extremely distinguished theoreticians, including Bernoulli, Keynes, Jaynes, and Raiffa, have attempted to but been unable to overcome this challenge.
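    To make the updating step concrete, here is a minimal sketch (my illustration with made-up likelihoods, not material from the talk) of Bayes' theorem applied to the swan example: repeated white-swan observations shift the posterior, but a hypothesis given zero prior probability can never be resurrected, which is why the prior matters so much.

    ```python
    # Two hypotheses: "all swans are white" vs "black swans exist".
    prior = {"all_white": 0.5, "some_black": 0.5}  # a 50/50 prior
    # Illustrative likelihoods of observing a white swan under each hypothesis:
    likelihood = {"all_white": 1.0, "some_black": 0.99}

    def bayes_update(prior, likelihood):
        """One application of Bayes' theorem: posterior ∝ likelihood × prior."""
        unnorm = {h: prior[h] * likelihood[h] for h in prior}
        z = sum(unnorm.values())
        return {h: v / z for h, v in unnorm.items()}

    post = prior
    for _ in range(100):          # observe 100 white swans in a row
        post = bayes_update(post, likelihood)
    # The posterior drifts toward "all_white", yet "some_black" never
    # reaches zero -- unless the prior excluded it from the start.
    ```
    
    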

    The goal of our research is to develop a theoretical basis, Decision Entropy Theory, to rationally and defensibly establish Non-Informative Prior Probabilities. The premise of Decision Entropy Theory is that probabilities provide input to decision making; therefore, non-informative probabilities are probabilities that do not inform a decision. The greatest lack of information for a decision is defined by the following three principles:
    1. A decision alternative compared to another alternative is equally probable to be preferred or not to be preferred.
    2. The possible gains or losses for one decision alternative compared to another alternative are equally probable.
    3. The possibilities of learning about the preference of one decision alternative compared to another alternative with new information are equally probable.

    The development of Decision Entropy Theory involves formulating these principles into a mathematical framework that describes the entropy (uncertainty) of a decision. The non-informative prior probabilities are found by maximizing the entropy of the decision.

    This talk will provide practical examples to illustrate the challenge of establishing Non-Informative Prior Probabilities and to illustrate how Decision Entropy Theory attempts to address this challenge.
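    As a toy numerical illustration (my sketch, using the classical maximum-entropy criterion rather than the decision-entropy framework itself), one can verify that the uniform distribution maximizes Shannon entropy over a discretized three-outcome sample space, which is the sense in which "equally probable" encodes the greatest lack of information:

    ```python
    import math

    def entropy(p):
        """Shannon entropy in nats; terms with zero probability contribute 0."""
        return -sum(x * math.log(x) for x in p if x > 0)

    # Grid search over discrete priors (p1, p2, p3) summing to 1.
    step = 0.05
    n = int(round(1 / step))
    best_p, best_h = None, -1.0
    for i in range(n + 1):
        for j in range(n + 1 - i):
            p = (i * step, j * step, 1 - (i + j) * step)
            h = entropy(p)
            if h > best_h:
                best_p, best_h = p, h
    # best_p is (up to grid resolution) the uniform prior (1/3, 1/3, 1/3),
    # whose entropy is log(3).
    ```
    
    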

    Robert B. Gilbert P.E., Ph.D., D.GE, M.ASCE is Chair of the Department of Civil, Architectural and Environmental Engineering at The University of Texas at Austin. He joined the faculty in 1993. He also practiced with Golder Associates Inc. as a geotechnical engineer from 1988 to 1993. His technical focus is the assessment, evaluation and management of risk for civil engineering systems. Recent activities include analyzing the performance of offshore platforms and pipelines in Gulf of Mexico hurricanes; managing flooding risks for levees in Texas, California, Washington and Louisiana; and performing a review of design and construction for the new Bay Bridge in San Francisco. Dr. Gilbert has been awarded the Norman Medal from the American Society of Civil Engineers and an Outstanding Civilian Service Medal from the United States Army Corps of Engineers.

Tuesday, Apr 24

A communication-avoiding sparse direct solver

Tuesday, Apr 24, 2018 from 3:30PM to 5PM | POB 6.304

  • Additional Information

    Hosted by George Biros

    Sponsor: ICES Seminar

    Speaker: Rich Vuduc

    Speaker Affiliation: Associate Professor, Georgia Institute of Technology

  • Abstract

    This talk describes several techniques to improve the strong scalability of a (right-looking, supernodal) sparse direct solver for distributed memory systems by reducing and hiding both internode and intranode communication.

    To reduce inter-node communication, we present a communication-avoiding 3D sparse LU factorization algorithm. The "3D" refers to the use of a logical three-dimensional arrangement of MPI processes, and the method combines data redundancy with elimination tree parallelism. The 3D algorithm can be shown to reduce asymptotic communication costs by a factor of $O(\sqrt{\log n})$ and latency costs by a factor of $O(\log n)$ for planar sparse matrices arising from finite element discretization of two-dimensional PDEs. For the non-planar case, it can reduce communication and latency costs by a constant factor.

    On-node, we propose a novel technique, called the HALO, targeted at heterogeneous architectures consisting of multicore and manycore co-processors such as GPUs or Xeon Phi. The name HALO is a shorthand for highly asynchronous lazy offload, which refers to the way the method combines highly aggressive use of asynchrony with the accelerated offload, lazy updates, and data shadowing (a la halo or ghost zones), all of which serve to hide and reduce communication, whether to local memory, across the network, or over PCIe. The overall hybrid solver achieves a speedup of up to 3x on a variety of realistic test problems in single and multi-node configurations.

    Richard (Rich) Vuduc is an Associate Professor at the Georgia Institute of Technology (“Georgia Tech”), in the School of Computational Science and Engineering. His research lab, The HPC Garage (@hpcgarage on Twitter and Instagram), is interested in high-performance computing, with an emphasis on algorithms, performance analysis, and performance engineering. He is a recipient of a DARPA Computer Science Study Group grant; an NSF CAREER award; a collaborative Gordon Bell Prize in 2010; the Lockheed-Martin Aeronautics Company Dean’s Award for Teaching Excellence (2013); and Best Paper Awards at the SIAM Conference on Data Mining (SDM, 2012) and the IEEE Parallel and Distributed Processing Symposium (IPDPS, 2015), among others. He has also served as his department’s Associate Chair and Director of its graduate programs. External to Georgia Tech, he currently serves as Chair of the SIAM Activity Group on Supercomputing (2018-2020) and co-chaired the Technical Papers Program of the “Supercomputing” (SC) Conference in 2016. He received his Ph.D. in Computer Science from the University of California, Berkeley, and was a postdoctoral scholar in the Center for Advanced Scientific Computing at Lawrence Livermore National Laboratory.

Friday, Apr 20

Practice and experience interacting with scientific software at TACC

Friday, Apr 20, 2018 from 10AM to 11AM | POB 6.304

  • Additional Information

    Hosted by Federico Fuentes and Sriram Nagaraj

    Sponsor: ICES Seminar-Babuska Forum Series

    Speaker: Damon McDougall

    Speaker Affiliation: Institute for Computational Engineering and Sciences (ICES), Texas Advanced Computing Center (TACC)

  • Abstract

    TACC, XSEDE, and ultimately the NSF care greatly about how effectively funded computing resources are used by domain scientists. Part of my role at TACC is to help those domain scientists use TACC's hardware more effectively. This talk will consist mostly of anecdotes and personal experiences interacting with TACC's user community in the context of scientific software optimisation. The goal of the talk will be to educate the audience on a) some (opinionated) scientific software best practices; and b) resources available to help you make your software more efficient.

    Damon McDougall holds a Ph.D. in Mathematics from the University of Warwick. He moved to the US as a postdoctoral research fellow under Professor Moser in 2012, later became a Research Associate, and recently joined TACC.

Friday, Apr 13

An Optimal Control Framework for Efficient Training of Deep Neural Networks

Friday, Apr 13, 2018 from 1PM to 2PM | POB 6.304

  • Additional Information

    Hosted by Kui Ren

    Sponsor: ICES Seminar-Numerical Analysis Series

    Speaker: Lars Ruthotto

    Speaker Affiliation: Department of Mathematics and Computer Science, Emory University

  • Abstract

    One of the most promising areas in artificial intelligence is deep learning, a form of machine learning that uses neural networks containing many hidden layers. Recent success has led to breakthroughs in applications such as speech and image recognition. However, more theoretical insight is needed to create a rigorous scientific basis for designing and training deep neural networks, increasing their scalability, and providing insight into their reasoning.

    In this talk, we present a new mathematical framework that simplifies designing, training, and analyzing deep neural networks. It is based on the interpretation of deep learning as a dynamic optimal control problem similar to path-planning problems. We will exemplify how this understanding helps design, analyze, and train deep neural networks.

    First, we will focus on ways to ensure the stability of the dynamics in both the continuous and discrete setting and on ways to exploit discretization to obtain adaptive neural networks. Second, we will present new multilevel and multiscale approaches, derived from the continuous formulation. Finally, we will discuss adaptive higher-order discretization methods and illustrate their impact on the optimization problem.
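    A minimal sketch of the dynamical-systems interpretation behind the talk (my illustration, not code from the speaker): a residual layer x ← x + h·f(x, θ) is exactly one forward-Euler step of the ODE dx/dt = f(x, θ(t)), so network depth plays the role of time and the layer weights act as the controls in the optimal control problem.

    ```python
    import numpy as np

    def forward_euler_resnet(x, weights, biases, h=0.1):
        """Propagate features through a residual network viewed as a
        forward-Euler discretization of dx/dt = tanh(W(t) x + b(t)).
        Each (W, b) pair is the control at one "time" step."""
        for W, b in zip(weights, biases):
            x = x + h * np.tanh(W @ x + b)   # residual step == Euler step
        return x

    # Random controls for a small network: 10 layers acting on 4 features.
    rng = np.random.default_rng(0)
    d, layers = 4, 10
    Ws = [rng.normal(scale=0.5, size=(d, d)) for _ in range(layers)]
    bs = [rng.normal(scale=0.1, size=d) for _ in range(layers)]
    x0 = rng.normal(size=d)
    xT = forward_euler_resnet(x0, Ws, bs)    # the network's output features
    ```

    In this view, choosing the step size h and the number of layers corresponds to discretizing the time horizon, which is what makes stability analysis and adaptive discretization from the ODE world applicable to network design.
    
    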