Demystifying the Kriging Model with Massive Data
Tuesday, September 18, 3:30PM – 5PM
Dr. Peter Qian
The Kriging model is widely used in uncertainty quantification. Fitting a Kriging model to massive data is not only a challenge but also a mystery. On one hand, the nominal accuracy of a Kriging model is supposed to increase with the number of data points. On the other hand, fitting such a model to a large number of points is known to suffer from numerical singularity, because the kernel correlation matrix becomes ill-conditioned. To reconcile this contradiction, I will present a sequential method that simultaneously achieves numerical stability and theoretical accuracy in large-scale Kriging models. This method forms nested space-filling subsets of the data, builds a kernel submodel on each subset, and then combines the submodels to obtain an accurate surrogate model. We introduce a mathematical decomposition of the overall error of a Kriging model into nominal and numeric portions. Theoretical bounds on the numeric and nominal errors are developed to show that substantial gains in overall accuracy can be attained with this sequential method. Examples are given to illustrate the effectiveness of the developed method.
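The flavor of the multi-level idea can be conveyed with a toy 1-D sketch. The specifics below — stride-based nested subsets, per-level length scales, and summing residual corrections — are illustrative assumptions for this sketch, not the construction presented in the talk; each level solves only a small, well-conditioned kernel system rather than one large ill-conditioned one.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale):
    """Gaussian kernel matrix between two 1-D point sets."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-(d / length_scale) ** 2)

def fit_kriging(X, y, length_scale, nugget=1e-8):
    """Fit a simple-Kriging interpolator on a small subset; return a predictor."""
    K = rbf_kernel(X, X, length_scale) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, y)  # small subset keeps this solve well-conditioned
    return lambda Xnew: rbf_kernel(Xnew, X, length_scale) @ alpha

def sequential_kriging(X, y, levels):
    """Fit submodels on nested subsets (illustrative scheme): each level
    models the residual of the previous levels, and predictions are summed."""
    residual = y.copy()
    predictors = []
    for stride, ls in levels:        # strides chosen so the subsets are nested
        idx = np.arange(0, len(X), stride)
        pred = fit_kriging(X[idx], residual[idx], ls)
        predictors.append(pred)
        residual = residual - pred(X)
    return lambda Xnew: sum(p(Xnew) for p in predictors)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(6 * np.pi * X)

# Coarse level with a long length scale, finer level with a shorter one.
model = sequential_kriging(X, y, levels=[(16, 0.3), (4, 0.08)])
rmse = np.sqrt(np.mean((model(X) - y) ** 2))
```

In this sketch, adding the finer residual level reduces the training error relative to the coarse submodel alone, without ever factoring the full 200-point kernel matrix.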
Speaker Bio: Dr. Qian received a Ph.D. degree in Industrial and Systems Engineering from Georgia Tech in 2006. He is currently an associate professor in the Department of Statistics at the University of Wisconsin-Madison and holds an affiliated appointment in the Department of Industrial and Systems Engineering. His research lies at the interface of statistics and engineering. His current interests include modeling massive data, uncertainty quantification, design of experiments, and stochastic optimization. He has received a National Science Foundation CAREER Award and an IBM Faculty Award.
Hosted by Ernesto Prudencio