Neural network smoothing approximation method for stochastic variational inequality problems
Hui-Qiang Ma Nan-Jing Huang
This paper is concerned with solving a stochastic variational inequality problem (for short, SVIP) from the viewpoint of minimizing mixed conditional value-at-risk (CVaR). The regularized gap function for the SVIP is used to define a loss function, and mixed CVaR is used to measure that loss. In this setting, the SVIP can be reformulated as a deterministic minimization problem. We show that this reformulation is a convex program for a broad class of SVIPs under suitable conditions. Since mixed CVaR involves both the plus function and a mathematical expectation, a neural network smoothing function and the Monte Carlo method are employed to obtain an approximation of the minimization reformulation. Finally, we study the convergence of the optimal solutions and stationary points of the approximation problem.
Keywords: stochastic variational inequality; regularized gap function; mixed conditional value-at-risk; neural network smoothing approximation; Monte Carlo sampling approximation; convergence
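The two approximation ingredients named in the abstract can be illustrated together. A minimal sketch, assuming the standard neural network smoothing function p(t, μ) = μ ln(1 + e^{t/μ}) for the plus function max(0, t) and the usual Rockafellar–Uryasev auxiliary form of CVaR; the grid search over η and the Gaussian loss sample are illustrative choices, not part of the paper:

```python
import math
import random

def nn_smooth_plus(t, mu):
    # Neural network smoothing of the plus function:
    # p(t, mu) = mu * ln(1 + exp(t / mu)), which tends to max(0, t) as mu -> 0+.
    z = t / mu
    if z > 50:
        return t  # for large t/mu, p(t, mu) ~ t; avoids exp overflow
    return mu * math.log(1.0 + math.exp(z))

def smoothed_cvar_objective(eta, losses, alpha, mu):
    # Monte Carlo (sample-average) approximation of the CVaR auxiliary
    # objective eta + E[max(0, L - eta)] / (1 - alpha), with the plus
    # function replaced by its smooth surrogate nn_smooth_plus.
    n = len(losses)
    tail = sum(nn_smooth_plus(l - eta, mu) for l in losses)
    return eta + tail / ((1.0 - alpha) * n)

# Illustration: estimate CVaR_0.95 of a standard normal loss from samples,
# minimizing over eta by a coarse grid search (for clarity only).
random.seed(0)
losses = [random.gauss(0.0, 1.0) for _ in range(10000)]
alpha, mu = 0.95, 1e-3
etas = [i * 0.01 for i in range(-100, 300)]
cvar = min(smoothed_cvar_objective(e, losses, alpha, mu) for e in etas)
```

As μ decreases, the smooth surrogate tightens toward the nonsmooth objective; this is the mechanism behind the convergence results for the approximation problem mentioned above.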
