A stochastic gradient descent approach for risk optimization using the Chernoff bound
Keywords:
Stochastic Gradient Descent, Risk Optimization, Chernoff Bound

Abstract
We propose a method for solving Risk Optimization (RO) problems based on Stochastic Gradient
Descent (SGD), a method designed to minimize the expectation of functions. We approximate the probability of
failure of each limit state function in the RO problem using the Chernoff bound, thus recasting the original RO
problem as an expectation minimization problem. Evaluating the Chernoff bound approximation requires Monte
Carlo sampling, which can be expensive. However, once the Chernoff bound parameters are set, they can be used
to cheaply approximate the probability of failure of each limit state over several iterations. We propose a heuristic
approach that re-tunes the Chernoff bound parameters whenever the design moves a prescribed distance away
from the point of the last update. Moreover, we decay this update distance at each iteration, thus guaranteeing that
the probability of failure approximations remain accurate as SGD converges to the optimal solution. We present
numerical results supporting the efficiency of our approach on different RO
problems with applications in structural engineering. Comparisons of SGD equipped with our Chernoff bound
approximation against particle swarm optimization using sample average approximation validate the efficiency of
the proposed approach.
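
To make the recasting concrete: the mechanism is the classical Chernoff-type bound on a probability of failure. The notation below (the i-th limit state function g_i, failure when g_i(X) <= 0, and a bound parameter lambda_i > 0) is our illustration, not necessarily the paper's:

    P_{f,i} = \mathbb{P}\left[ g_i(\mathbf{X}) \le 0 \right]
            = \mathbb{E}\left[ \mathbf{1}_{\{ g_i(\mathbf{X}) \le 0 \}} \right]
            \le \mathbb{E}\left[ e^{-\lambda_i \, g_i(\mathbf{X})} \right],
      \qquad \lambda_i > 0.

The bound holds for every lambda_i > 0 because the failure indicator is dominated by the exponential. It replaces a discontinuous indicator by the expectation of a smooth function, which is precisely the form SGD minimizes from samples; tightening the bound over lambda_i is the parameter-tuning step described in the abstract.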
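
Below is a minimal sketch, in Python, of how the resulting loop could look on a one-dimensional toy problem. It is our illustration under stated assumptions, not the authors' implementation: the limit state g(d, X) = d - X, the demand distribution, the cost model, and the helper tune_lambda with all its constants are hypothetical, and only the pattern matters, namely SGD on the Chernoff surrogate with Monte Carlo re-tuning triggered by a decaying update distance.

import numpy as np

rng = np.random.default_rng(0)

def g(d, x):
    # Toy limit state: failure when capacity d falls below demand x.
    return d - x

def tune_lambda(d, n=10_000):
    # The expensive step: pick lambda by minimizing the Monte Carlo
    # estimate of the Chernoff bound E[exp(-lambda * g)] over a grid.
    x = rng.normal(1.0, 0.2, size=n)
    lams = np.linspace(0.1, 50.0, 200)
    bounds = [np.mean(np.exp(-lam * g(d, x))) for lam in lams]
    return lams[int(np.argmin(bounds))]

d = 2.0                  # initial design variable
lam = tune_lambda(d)     # initial Chernoff bound parameter
d_ref, radius = d, 0.5   # design at last tuning; re-tune trigger distance
step, cost_fail = 1e-2, 10.0

for k in range(2000):
    x = rng.normal(1.0, 0.2)    # one demand sample per SGD iteration
    # Surrogate objective: material cost d plus expected failure cost,
    # with the probability of failure replaced by exp(-lam * g(d, x)).
    grad = 1.0 - cost_fail * lam * np.exp(-lam * g(d, x))
    d -= step * grad
    # Heuristic: re-tune lambda once the design drifts far enough from
    # the last tuning point, and shrink the trigger distance so the
    # approximation stays accurate as the iterates converge.
    if abs(d - d_ref) > radius:
        lam, d_ref = tune_lambda(d), d
        radius *= 0.9

print(f"design after SGD: {d:.3f}, final lambda: {lam:.2f}")

Note that in this pattern the sampling-heavy tuning runs only when the design has moved, so most iterations cost a single limit state evaluation, which is the source of the efficiency gain the abstract claims over sample-average-based alternatives.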