Past Event:
Parallel Hierarchical Solver with Applications to Ice Sheet Modeling
Chao Chen, Postdoctoral Fellow, ICES, UT Austin
10–11 AM
Friday Feb 22, 2019
POB 6.304
Abstract
Solving large-scale sparse linear systems is an important building block – but often a computational bottleneck – in many science and engineering applications, such as reservoir simulation, Gaussian process regression, fluid/solid/structural mechanics, and electromagnetics. Most existing solvers fall into two categories: direct methods (e.g., LU and Cholesky factorization) and iterative methods (e.g., CG, MINRES, and multigrid).
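As a point of reference for the iterative methods mentioned above, here is a minimal sketch (not the speaker's code) of the conjugate gradient (CG) method applied to a small symmetric positive-definite system; the matrix and right-hand side are illustrative only.

```python
# Hedged sketch: the conjugate gradient (CG) iteration on a tiny
# symmetric positive-definite system A x = b, in pure Python.

def matvec(A, x):
    """Dense matrix-vector product (illustration; real solvers use sparse A)."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient method for SPD A; returns an approximate solution."""
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual r = b - A x  (x = 0 initially)
    p = r[:]              # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# A 1D Laplacian-like tridiagonal SPD matrix (exact solution is [1, 1, 1]).
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = cg(A, b)
```

For well-conditioned systems CG converges quickly; the ill-conditioned matrices discussed below are exactly the regime where such plain iterative methods struggle without a good preconditioner.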
This presentation focuses on a parallel hierarchical solver and its application to a real-world problem – ice sheet modeling. The solver is based on graph clustering and uses low-rank approximation techniques to sparsify the dense fill-in blocks introduced during Gaussian elimination. As a result, it is faster and more memory-efficient than direct solvers for large 3D problems. Targeted at distributed-memory machines, the parallel algorithm is based on data decomposition and requires only asynchronous local communication to update boundary data on every processor.
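To illustrate the low-rank idea in the abstract, the sketch below (an assumption-laden toy, not the solver itself) stores a rank-1 block in factored form u vᵀ instead of as a dense m × n array, which cuts both storage and matrix-vector cost from O(mn) to O(m + n); hierarchical solvers apply the same idea, with small rank k, to the fill-in blocks produced by elimination.

```python
# Hedged sketch: a dense fill-in block that happens to be exactly rank 1,
# stored and applied in factored (low-rank) form.

m, n = 4, 3
u = [1.0, 2.0, 3.0, 4.0]
v = [1.0, 0.5, 0.25]

# Dense representation: m*n = 12 stored numbers.
dense = [[ui * vj for vj in v] for ui in u]

# Factored representation: keep only u and v, i.e. m + n = 7 numbers.
def factored_matvec(u, v, x):
    """Apply (u v^T) x in O(m + n) operations instead of O(m n)."""
    s = sum(vj * xj for vj, xj in zip(v, x))  # s = v^T x
    return [ui * s for ui in u]

x = [1.0, 1.0, 1.0]
y_dense = [sum(a * xi for a, xi in zip(row, x)) for row in dense]
y_fact = factored_matvec(u, v, x)
```

In practice the fill-in blocks are only approximately low-rank, so a truncated factorization is used and the rank k controls the accuracy/cost trade-off.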
To demonstrate its robustness, the hierarchical solver is compared with two state-of-the-art preconditioners for iterative solvers, namely incomplete LU (ILU) factorization and a multigrid solver, on linear systems arising from ice sheet modeling. Modeling thin structures such as ice sheets leads to extremely ill-conditioned matrices, which are difficult to solve iteratively. The hierarchical solver, however, converges in an almost constant number of iterations when the physical mesh is clustered along the horizontal directions. To further improve efficiency, a stabilized variant of the hierarchical solver was developed.
Bio
Dr. Chen received his PhD in computational and mathematical engineering from Stanford in 2018. His doctoral research focused on developing fast linear solvers for computational mechanics using hierarchical matrix theory and parallel computing. At ICES, he works on fast algorithms for neural-network training with Professor George Biros.