PhD Defense: Yang Yu
Analyzing sampling in stochastic optimization:
Importance sampling and statistical inference
The objective function of a stochastic optimization problem usually involves an expectation of random variables that cannot be calculated directly. When this is the case, a common approach is to replace the expectation with a sample average approximation. However, a plain sample average approximation is sometimes inadequate for certain goals, such as estimating rare-event probabilities efficiently or supporting statistical inference. This dissertation studies two specific problems of this kind.
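As a minimal sketch of the sample average approximation idea (the loss f(x, ξ) = (x − ξ)² with ξ ~ N(2, 1) and all parameters are illustrative assumptions, not taken from the dissertation): the intractable expectation E[f(x, ξ)] is replaced by an average over draws of ξ, and the resulting deterministic function is minimized.

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(loc=2.0, scale=1.0, size=10_000)  # draws of the random variable

def saa_objective(x):
    # sample average approximation of E[(x - xi)^2]
    return np.mean((x - xi) ** 2)

def saa_gradient(x):
    # gradient of the sample average function
    return np.mean(2.0 * (x - xi))

# minimize the deterministic SAA objective by plain gradient descent;
# the true minimizer of E[(x - xi)^2] is E[xi] = 2.0
x = 0.0
for _ in range(100):
    x -= 0.25 * saa_gradient(x)
```

The SAA minimizer here is simply the sample mean, which converges to the true minimizer as the sample grows.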
In the first problem, we aim to solve a minimization problem whose objective function is the probability of an undesired rare event. Accurately estimating this rare-event probability by plain Monte Carlo simulation requires an extremely large sample, which is computationally expensive. An importance sampling scheme based on the theory of large deviations is developed to reduce the required sample size and thus the computational cost. The convergence of a sequence of approximating problems is also studied, through which a good initial point for the minimization problem can be found. We also study the buffered probability of exceedance as an alternative risk measure to the ordinary probability. Under suitable conditions, the analogous minimization problem can be formulated as a convex problem.
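The core importance sampling idea can be sketched with a standard exponential-tilting example (an assumed toy setting, not the dissertation's large-deviations scheme itself): to estimate P(X > 4) for X ~ N(0, 1), sample from the tilted density N(4, 1), which puts most of its mass in the rare region, and reweight each draw by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a = 100_000, 4.0

# Plain Monte Carlo: almost no draws land in the rare region {X > 4},
# so the estimate is dominated by noise at this sample size.
x = rng.standard_normal(n)
est_mc = np.mean(x > a)

# Importance sampling: draw from the tilted density N(a, 1) and reweight
# by the likelihood ratio phi(y) / phi(y - a) = exp(-a*y + a^2 / 2).
y = rng.normal(loc=a, scale=1.0, size=n)
weights = np.exp(-a * y + a * a / 2.0)
est_is = np.mean((y > a) * weights)
# true value: 1 - Phi(4), roughly 3.17e-5
```

With the same 100,000 draws, the importance sampling estimator concentrates tightly around the true probability, while the plain Monte Carlo estimate sees only a handful of rare-event samples.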
In the second problem, we focus on a two-stage stochastic linear programming problem, where the objective function must be approximated by a sample average function built from a random sample of the underlying random variables. However, such a sample average function is not smooth enough to allow estimation of the Hessian of the objective function, which is needed to compute confidence intervals for the true solution. To overcome this difficulty, the sample average function is smoothed by convolution with a kernel function. Methods to compute confidence intervals for the true solution are then developed, based on inference methods for stochastic variational inequalities.
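The smoothing step can be illustrated with a one-dimensional sketch under assumed choices (the loss f(x, ξ) = max(ξ − x, 0), a Gaussian kernel, and Gauss-Hermite quadrature are all illustrative, not the dissertation's two-stage program): the sample average function has a kink at every sample point, so finite differences of it cannot recover a second derivative, but its convolution with a kernel is smooth and differentiable.

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.normal(0.0, 1.0, size=50_000)

def f_N(x):
    # nonsmooth sample average function: a kink at every xi_i,
    # so its second derivative carries no information
    return np.mean(np.maximum(xi - x, 0.0))

# Gauss-Hermite nodes and weights for the weight function exp(-t^2 / 2)
t, w = np.polynomial.hermite_e.hermegauss(40)
w = w / np.sqrt(2.0 * np.pi)  # normalize to a standard normal kernel

def f_h(x, h=0.2):
    # convolution of f_N with a Gaussian kernel of bandwidth h
    return sum(wj * f_N(x - h * tj) for tj, wj in zip(t, w))

# a central finite difference of the smoothed function now gives a stable
# second-derivative (Hessian) estimate; for xi ~ N(0, 1) the true value
# at x = 0 is the standard normal density phi(0), roughly 0.399
d = 0.05
hess = (f_h(d) - 2.0 * f_h(0.0) + f_h(-d)) / d**2
```

The bandwidth trades bias against variance: a larger h gives a smoother, more stable Hessian estimate at the cost of a small systematic shift.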