Whitney Award for Research: Po-Hsu Chen
Sequential Pareto Minimization of Physical Systems Using Calibrated Computer Simulators
This work proposes a sequential design methodology for a combined physical-system and computer-simulator experiment with multiple outputs, when the goal is to find the Pareto Front and Pareto Set of the means of the physical system outputs; the methodology is based on a statistically calibrated simulator. In this work, the simulator is a computer implementation of a deterministic mathematical model of the physical system; it contains the same set of control (controllable) inputs as the physical system, together with additional calibration inputs. A minimax fitness function is proposed for guiding the sequential search for new vectors of control input settings when additional observations on the physical system are to be taken. Based on a Bayesian calibrated model, the update step maximizes the posterior expected minimax fitness function over untried control inputs. When additional runs of the simulator are to be taken, the control input settings are chosen as above; then calibration input settings are selected to minimize the sum, over the set of predicted output means, of the posterior mean squared prediction errors. Using the Hypervolume Indicator function to assess Pareto Front accuracy, the performance of the sequential procedure is evaluated on analytic test functions from the multiple-objective optimization literature.
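The Hypervolume Indicator mentioned above measures the region dominated by an estimated Pareto Front up to a reference point; a larger value means a front closer to (and more spread along) the true front. For two minimized objectives it reduces to summing rectangle areas. A minimal sketch (the function name and reference-point convention are illustrative, not from the thesis):

```python
def hypervolume_2d(front, ref):
    """Area dominated by `front` (list of (f1, f2) pairs), bounded by `ref`.

    Assumes both objectives are minimized. `front` need not be sorted or
    strictly non-dominated: dominated points contribute no extra area.
    """
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:          # ascending in f1
        if f2 < prev_f2:        # point is non-dominated so far
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# e.g. a front {(1,3), (2,2), (3,1)} with reference point (4,4)
# covers rectangles of area 3 + 2 + 1 = 6.
```

Comparing this value across iterations of the sequential design gives a scalar summary of how quickly the estimated front approaches the true one.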
Cooley Memorial Prize: Junyan Wang
Empirical Bayes Model Averaging in the Presence of Influential Observations
We study the performance of Bayesian Model Averaging (BMA) with respect to out-of-sample prediction accuracy in the presence of influential observations. Zellner's $g$ prior remains popular in Bayesian regression modeling, and treatment of the parameters in the prior is directly connected to the quality of posterior inference and prediction. We investigate methods of empirical estimation of the tuning parameters in the prior that attempt to attenuate the impact of model misfit on prediction accuracy. Initially, we focus on optimal prediction under the $g$ prior in the presence of influential cases. Next, we develop several methods that aim to increase the robustness of BMA through adjusting the $g$ priors. Finally, we show through simulation that out-of-sample predictive performance can often be improved by averaging predictions across a variety of methodologies.
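The link between the $g$ prior's tuning parameter and prediction is concrete: with a zero prior mean, the posterior mean of the regression coefficients is the least-squares estimate shrunk by the factor $g/(1+g)$, so the choice of $g$ directly controls how far predictions are pulled toward the prior. A minimal sketch of this shrinkage (function names are illustrative, not from the thesis):

```python
import numpy as np

def g_prior_posterior_mean(X, y, g):
    """Posterior mean of regression coefficients under Zellner's g prior
    with a zero prior mean: the OLS estimate scaled by g / (1 + g).

    Small g shrinks heavily toward zero; as g -> infinity the posterior
    mean approaches the ordinary least-squares estimate.
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (g / (1.0 + g)) * beta_ols

def bma_predict(X_new, posterior_means, model_probs):
    """Model-averaged prediction: each model's prediction weighted by its
    posterior model probability (columns absent from a model set to zero)."""
    preds = np.column_stack([X_new @ b for b in posterior_means])
    return preds @ np.asarray(model_probs)
```

With $g = 9$, for example, a single-predictor OLS slope of $2$ is shrunk to $0.9 \times 2 = 1.8$; a data-driven (empirical Bayes) choice of $g$ is one way to trade off this shrinkage against fit when influential observations distort the OLS estimate.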