Applying the most powerful supercomputers in the world to investigate society's Grand Challenges
Understanding the theory, the model and the algorithm is not enough – solving today’s most challenging problems also requires adapting algorithms and techniques to exploit cutting-edge computing hardware.
An Overview: High Performance Computing
What is High Performance Computing?
High performance computing (HPC) is a high-impact area that combines the broad array of tools and techniques needed to take the numerical models developed throughout the Institute and modify them to run efficiently on today’s supercomputers. Supercomputers are used in support of almost all fields of science and typically aggregate hundreds to thousands of individual computers connected by a high-speed, low-latency communication fabric. With additional programming effort, applications can harness the aggregate memory and floating-point performance afforded by the supercomputer to perform calculations that could not be done otherwise, including (1) running simulations at a scale and resolution that are impossible on a single system due to memory constraints, (2) using domain decomposition to drastically reduce the time-to-solution of a time-sensitive prediction (e.g., weather forecasting), and (3) performing uncertainty quantification (UQ) or design optimization by exploring the responses of thousands of related simulations that would otherwise be intractable on a few workstations.
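As a concrete illustration of the distributed-memory style of computation described above, the following is a minimal sketch of domain decomposition using the mpi4py Python bindings for MPI; the choice of Python and the quadrature example are illustrative assumptions only, and production HPC codes are more commonly written in C, C++, or Fortran. Each MPI rank integrates its own slice of the interval [0, 1], and the partial results are combined with an all-reduce to approximate pi.

from mpi4py import MPI   # assumed dependency: an MPI installation plus mpi4py
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 12_000_000              # total quadrature points (assumed divisible by size)
local_n = n // size         # points owned by this rank
start = rank * local_n

# midpoint rule for the integral of 4/(1+x^2) on this rank's sub-interval
x = (np.arange(start, start + local_n) + 0.5) / n
local_sum = np.sum(4.0 / (1.0 + x * x)) / n

# combine the partial sums from all ranks
pi = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print(f"pi ~= {pi:.12f} computed with {size} ranks")

Launched with, for example, "mpiexec -n 8 python pi_decomp.py", each rank holds only its own slice of the points; this is the same pattern that lets a full-scale simulation exceed the memory available on any single node.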
A key component of HPC is the requirement of parallel programming, which typically occurs at the node level (via threading or an alternative form of shared-memory parallelism) and at the multi-node level (via MPI or an alternative distributed-memory model). Particularly challenging is the need to extract scientific application performance on systems that are becoming increasingly heterogeneous with the growing adoption of GPUs and other accelerators. Furthermore, the HPC hardware landscape changes quickly compared to the typical scientific application lifespan, and computational scientists face the need to maintain performance-portable codes that can be ported quickly to new architectures as they arise. In addition to understanding the basics of parallel programming, gaining HPC expertise draws on skills from a variety of domains including computer science, system architecture, algorithmic design, linear algebra, runtime systems, I/O, performance optimization, and software engineering; these elements are interspersed throughout the CSEM curriculum.
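The two levels of parallelism mentioned above can be sketched in a few lines, again assuming Python with mpi4py purely for illustration: MPI ranks provide the distributed-memory level across nodes, while a thread pool provides the shared-memory level within each rank.

from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI   # assumed dependency
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def kernel(chunk):
    # stand-in for node-local work; NumPy releases the GIL in these calls
    return float(np.sum(np.sqrt(chunk)))

# distributed-memory level: each rank owns one contiguous block of the data
data = np.arange(rank * 1_000_000, (rank + 1) * 1_000_000, dtype=np.float64)

# shared-memory level: threads work on sub-chunks of the rank-local block
with ThreadPoolExecutor(max_workers=4) as pool:
    local = sum(pool.map(kernel, np.array_split(data, 4)))

total = comm.allreduce(local, op=MPI.SUM)
if rank == 0:
    print(f"global sum = {total:.6e} ({size} ranks x 4 threads each)")

The same overall structure survives if the node-local kernel is swapped for a GPU or other accelerator implementation, which is where much of the performance-portability effort described above tends to be concentrated.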
Current research areas
Supercomputing tools: One of the big roadblocks in research requiring the deployment of supercomputing resources is the difficulty of interacting with and working on such machines. In an NSF DesignSafe project, we are developing new software to make it easier to interact with supercomputers. One example of these tools is the automation of large-scale parameter sweeps for storm surge models, which allows users to run hundreds of simulations in which the input parameters are varied to ascertain model sensitivities and the effect of variable storm parameters.
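To make the idea concrete, here is a hypothetical sketch of such a sweep in Python; the parameter names, value ranges, and directory layout are illustrative assumptions and do not reflect the actual DesignSafe tooling or any particular surge model's input format.

import itertools, json, pathlib

# illustrative storm parameters to vary (names and ranges are hypothetical)
sweep = {
    "central_pressure_mb": [930, 950, 970],
    "forward_speed_mps":   [4.0, 6.0, 8.0],
    "radius_max_wind_km":  [30, 45, 60],
}

root = pathlib.Path("sweep_runs")
for i, values in enumerate(itertools.product(*sweep.values())):
    params = dict(zip(sweep.keys(), values))
    run_dir = root / f"run_{i:04d}"
    run_dir.mkdir(parents=True, exist_ok=True)
    # record the perturbed inputs; a real workflow would also render the
    # model's native input files and submit a batch job to the scheduler
    (run_dir / "params.json").write_text(json.dumps(params, indent=2))

print(f"prepared {i + 1} run directories under {root}/")

Each run directory then corresponds to one batch job, so hundreds of parameter variations can be queued and executed concurrently on a large HPC system.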
Visualization tools: The outputs of the simulations from the many models used and developed in the Computational Hydraulics Group are generally text or binary files ranging in size from megabytes to terabytes. These formats are not easy to interpret and require postprocessing to produce useful and meaningful (visual) formats. The figure above shows a simulation of Hurricane Delta impacting the Louisiana coast in October 2020. The two color scales denote the land topography and sea surface elevation, respectively, while the white arrows indicate the magnitude and direction of the winds.
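The sketch below illustrates this kind of postprocessing step in Python with NumPy and Matplotlib; synthetic arrays stand in for the model's actual (and far larger) output files, and the variable names and fields are assumptions for illustration only.

import numpy as np
import matplotlib.pyplot as plt

ny, nx = 200, 300
x, y = np.meshgrid(np.linspace(0, 3, nx), np.linspace(0, 2, ny))
topo = 5.0 * (y - 1.0)                      # synthetic topography (m above sea level)
eta = np.where(topo < 0, 0.5 * np.exp(-((x - 1.5) ** 2 + (y - 0.8) ** 2)), np.nan)
u, v = -2.0 * (y - 0.8), 2.0 * (x - 1.5)    # synthetic wind field (m/s)

fig, ax = plt.subplots(figsize=(8, 5))
# first color scale: land topography; second color scale: sea surface elevation
ax.pcolormesh(x, y, np.where(topo >= 0, topo, np.nan), cmap="terrain")
surge = ax.pcolormesh(x, y, eta, cmap="viridis")
s = 15                                      # thin the wind arrows for readability
ax.quiver(x[::s, ::s], y[::s, ::s], u[::s, ::s], v[::s, ::s], color="white")
fig.colorbar(surge, ax=ax, label="sea surface elevation (m)")
fig.savefig("surge_snapshot.png", dpi=150)

In practice the same script would read the model's gridded output (for example with np.fromfile or a NetCDF reader) rather than generating synthetic fields, and would be run over many output snapshots to produce an animation.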
Working with partners
The University of Texas is fortunate to be home to the Texas Advanced Computing Center (TACC), a leading national supercomputing facility that has hosted some of the nation’s fastest academic supercomputers over the last two decades. The Oden Institute has a long history of partnering with TACC on a number of grants and continues to have active collaborations, including work on Frontera (TACC’s current flagship HPC system) and early access to evaluation HPC hardware in support of the Department of Energy’s Predictive Science Academic Alliance Program. Oden Institute students have access to small, dedicated internal HPC clusters but also leverage the world-class facilities at TACC for classroom instruction and in support of their research activities. Oden Institute members also contribute to community initiatives promoting best practices, such as OpenHPC.
Centers and Groups
To learn more about projects and people in High-Performance Computing, explore the centers and groups with research activities in this cross-cutting research area.
UT Austin-Led Team Wins 2025 Gordon Bell Prize for Breakthrough Research on Real-Time Tsunami Digital Twin
The ACM Gordon Bell Prize rewards innovation in applying high-performance computing to challenges in science, engineering, and large-scale data analytics.
The winning team created an improved predictive early warning framework by developing a digital twin to enable real-time, data-driven tsunami forecasting with dynamic adaptivity to complex source behavior.