Numerical Algebra, Control & Optimization
December 2020, Volume 10, Issue 4
The tensor eigenvalue complementarity problem (TEiCP) is a higher-order extension of the classical matrix eigenvalue complementarity problem (EiCP), which has been studied extensively in the literature, from theory to algorithmic design. Due to the high nonlinearity introduced by tensors, TEiCPs are often not easily solved by the algorithms tailored for EiCPs. In this paper, we introduce a nonmonotone spectral projected gradient (NSPG) method equipped with a positive Barzilai-Borwein step size to find a solution of TEiCPs. A series of numerical experiments shows that the proposed NSPG method greatly improves the efficiency of solving TEiCPs, taking much less computing time in higher dimensional cases. Moreover, computational results show that our NSPG method is less sensitive to the choice of starting point than some state-of-the-art algorithms.
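The ingredients named in this abstract (a projected-gradient direction, a nonmonotone line search, and a safeguarded positive Barzilai-Borwein step) can be illustrated on a simple box-constrained quadratic; this is a generic sketch of the NSPG template, not the paper's TEiCP solver, and the toy problem and tolerances are illustrative choices.

```python
import numpy as np

def nspg(f, grad_f, proj, x0, max_iter=500, M=5, tol=1e-8):
    """Nonmonotone spectral projected gradient with a safeguarded
    positive Barzilai-Borwein step (generic sketch, not the paper's code)."""
    x = proj(np.asarray(x0, dtype=float))
    alpha = 1.0
    hist = [f(x)]                             # recent objective values
    for _ in range(max_iter):
        g = grad_f(x)
        d = proj(x - alpha * g) - x           # projected-gradient direction
        if np.linalg.norm(d) < tol:
            break
        fmax = max(hist[-M:])                 # nonmonotone reference value
        lam = 1.0
        while f(x + lam * d) > fmax + 1e-4 * lam * g.dot(d):
            lam *= 0.5                        # backtracking line search
        x_new = x + lam * d
        s, y = x_new - x, grad_f(x_new) - g
        sy = s.dot(y)
        alpha = s.dot(s) / sy if sy > 1e-12 else 1.0  # positive BB step
        alpha = min(max(alpha, 1e-10), 1e10)  # safeguard the step size
        x = x_new
        hist.append(f(x))
    return x

# toy instance: min 0.5 x^T A x - b^T x subject to x >= 0
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x)
grad = lambda x: A.dot(x) - b
proj = lambda x: np.maximum(x, 0.0)          # projection onto the nonnegative orthant
sol = nspg(f, grad, proj, [5.0, 5.0])
```

Here the unconstrained minimizer is already nonnegative, so the iterates should approach the solution of A x = b.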
We study the box-constrained total least squares problem (BTLS), which minimizes the ratio of two quadratic functions subject to lower and upper bound constraints. We first prove that (BTLS) is NP-hard. Then we show that for a fixed dimension it is polynomially solvable. When the constraint box is centered at zero, a relative
The textbook direct method for generalized least squares estimation was developed by Christopher C. Paige about 40 years ago. He proposed two algorithms, assuming that the noise covariance matrix, rather than its factor, is available. Both of Paige's algorithms involve three matrix factorizations. The first does not exploit the matrix structure of the problem, but it can be implemented with blocking techniques to reduce data communication time on modern processors. The second takes advantage of the matrix structure, but its main part cannot be implemented with blocking techniques. In this paper, we propose an improved algorithm. The new algorithm involves only two matrix factorizations instead of three, and can be implemented with blocking techniques. We show that, in terms of flop counts, the improved algorithm always costs less than Paige's first algorithm and costs less than his second algorithm in some cases. Numerical tests show that, in terms of CPU time, our improved algorithm is faster than both existing algorithms when blocking techniques are used.
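For context, the basic generalized least squares estimate that these factorization-based algorithms compute can be sketched via a Cholesky whitening transform; this is the naive textbook approach, not Paige's algorithm or the improved one, and the toy data are illustrative.

```python
import numpy as np

def gls(X, y, W):
    """Generalized least squares via Cholesky whitening:
    minimize (y - X b)^T W^{-1} (y - X b), where W is the noise covariance.
    A simple textbook sketch, not Paige's factorization-based algorithm."""
    L = np.linalg.cholesky(W)                 # W = L L^T
    Xw = np.linalg.solve(L, X)                # whitened design: L^{-1} X
    yw = np.linalg.solve(L, y)                # whitened response: L^{-1} y
    b, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
b_true = np.array([1.0, -2.0, 0.5])
W = np.diag(rng.uniform(0.5, 2.0, 20))        # noise covariance (here diagonal)
y = X @ b_true                                # noiseless data for a sanity check
b_hat = gls(X, y, W)
```

With noiseless data the estimate recovers the true coefficients exactly, whatever the covariance.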
In this article, we propose a randomized Douglas-Rachford (DR) method for linear systems, based on the cyclic DR method. We view a linear system as the feasibility problem of finding a point in the intersection of hyperplanes. In each iteration, the next iterate is determined by a randomly chosen DR operator. We prove that the iterates converge in expectation and that their variance declines to zero. Numerical experiments show that the proposed algorithm outperforms the cyclic DR method.
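The hyperplane-intersection viewpoint can be sketched as follows: each row a_i^T x = b_i of a consistent system defines a hyperplane, and a DR step averages the identity with the composition of two reflections. This is a generic sketch of the idea under the assumption of a consistent system, not the paper's exact scheme.

```python
import numpy as np

def reflect(x, a, b):
    """Reflection of x across the hyperplane a^T x = b."""
    return x - 2.0 * (a.dot(x) - b) / a.dot(a) * a

def randomized_dr(A, b, x0, iters=2000, seed=0):
    """Randomized Douglas-Rachford sketch for a consistent system A x = b,
    viewed as finding a point in the intersection of its row hyperplanes.
    Each step applies T = (I + R_j R_i)/2 for a randomly chosen pair of
    rows (generic sketch, not the paper's exact scheme)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = A.shape[0]
    for _ in range(iters):
        i, j = rng.choice(m, size=2, replace=False)
        x = 0.5 * (x + reflect(reflect(x, A[i], b[i]), A[j], b[j]))
    return x

A = np.array([[1.0, 2.0], [3.0, -1.0], [2.0, 1.0]])
x_true = np.array([1.0, 1.0])
b = A @ x_true                               # consistent right-hand side
x = randomized_dr(A, b, [10.0, -10.0])
```

For hyperplanes through a common point, each pair operator contracts toward that point, so the random sequence converges to the solution.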
Eigenvalues and eigenvectors of higher-order tensors have crucial applications in science and engineering. To compute H-eigenvalues and Z-eigenvalues of even-order tensors, we transform the tensor eigenvalue problem into a nonlinear optimization problem with a spherical constraint. We then propose a trust region algorithm for this spherically constrained optimization. At each iteration, an unconstrained quadratic model function is solved inexactly to produce a trial step, and the Cayley transform maps the trial step onto the unit sphere. If the trial step yields a satisfactory actual decrease of the objective function, we accept it as the new iterate. Otherwise, a second-order line search is performed to exploit the information contained in the trial step. Global convergence of the proposed trust region algorithm is analyzed. Preliminary numerical experiments illustrate that the algorithm is efficient and promising.
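The key geometric device here, mapping a trial step back onto the unit sphere with the Cayley transform, can be sketched in isolation: for a skew-symmetric W, the Cayley matrix (I - W/2)^{-1}(I + W/2) is orthogonal, so applying it to a unit vector stays on the sphere. This is a minimal sketch of the retraction, not the paper's trust region solver, and the trial step below is an arbitrary illustrative vector.

```python
import numpy as np

def cayley_step(x, z):
    """Map a trial step z at the unit vector x back onto the unit sphere.
    W = z x^T - x z^T is skew-symmetric, so Q = (I - W/2)^{-1}(I + W/2)
    is orthogonal and y = Q x keeps unit norm (sketch of the retraction,
    not the paper's algorithm)."""
    n = x.size
    W = np.outer(z, x) - np.outer(x, z)      # skew-symmetric: W^T = -W
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * W, (I + 0.5 * W) @ x)

x = np.array([1.0, 0.0, 0.0])                # point on the unit sphere
z = np.array([0.0, 0.3, -0.2])               # trial step (illustrative)
y = cayley_step(x, z)
```

Because Q is orthogonal for any skew-symmetric W, the feasibility of the spherical constraint is preserved exactly, with no renormalization needed.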
This paper studies a proximal alternating direction method of multipliers (ADMM) with variable metric indefinite proximal terms for linearly constrained convex optimization problems. The proximal ADMM plays an important role in many applications, since its subproblems are easy to solve. Recently, it was reported that the proximal ADMM with a certain fixed indefinite proximal term is faster than that with a positive semidefinite term, while still enjoying the global convergence property. On the other hand, Gu and Yamashita studied a variable metric semidefinite proximal ADMM whose proximal term is generated by the BFGS update; they reported that a slightly indefinite matrix also makes the algorithm work well in their numerical experiments. Motivated by this, we consider a variable metric indefinite proximal ADMM and give sufficient conditions on the proximal terms for global convergence. Moreover, based on the BFGS update, we propose a new indefinite proximal term that satisfies these conditions. Experiments on several datasets demonstrate that the proposed variable metric indefinite proximal ADMM outperforms most of the compared proximal ADMMs.
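A concrete instance of a proximal ADMM is the linearized ADMM for the lasso, where the proximal term P = tau*I - A^T A cancels A^T A in the x-subproblem (P is positive semidefinite when tau reaches the largest eigenvalue of A^T A; the indefinite variants discussed above correspond to smaller tau). This is a generic sketch of the fixed-metric case, not the authors' BFGS-based variable metric method; the problem data and parameters are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_admm_lasso(A, b, lam, rho=1.0, iters=1500):
    """Proximal ADMM for min 0.5||Ax-b||^2 + lam||z||_1 s.t. x = z,
    with proximal term P = tau*I - A^T A, tau = lambda_max(A^T A).
    Generic fixed-metric sketch, not the paper's variable metric method."""
    n = A.shape[1]
    tau = np.linalg.eigvalsh(A.T @ A).max()
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        # x-update: the proximal term cancels A^T A, leaving a closed form
        x = x + (A.T @ (b - A @ x) + rho * (z - u - x)) / (tau + rho)
        z = soft(x + u, lam / rho)           # z-update: l1 prox
        u = u + x - z                        # dual (multiplier) update
    return x

rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.standard_normal((20, 4)))  # orthonormal columns
b = rng.standard_normal(20)
lam = 0.1
x = proximal_admm_lasso(A, b, lam)
```

With orthonormal columns the lasso solution has the closed form soft(A^T b, lam), which makes the sketch easy to check.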
A projection-based iterative algorithm, governed by a single parameter (or multiple parameters), is proposed to solve the generalized positive semidefinite least squares problem introduced in this paper. The algorithm converges to the optimal solution under certain conditions, and the corresponding numerical results are also reported.
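The basic building block of such projection-based iterations is the projection onto the positive semidefinite cone, computed by zeroing the negative eigenvalues of the symmetric part. This is a generic sketch of that projection only, under the Frobenius norm, not the paper's algorithm; the example matrix is illustrative.

```python
import numpy as np

def proj_psd(M):
    """Frobenius-norm projection of a matrix onto the positive
    semidefinite cone: symmetrize, then zero the negative eigenvalues
    (the elementary projection that projection-based iterations use)."""
    S = 0.5 * (M + M.T)                      # nearest symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0.0)) @ V.T    # clip the spectrum at zero

M = np.array([[1.0, 2.0], [2.0, -3.0]])      # indefinite example
P = proj_psd(M)
```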
A convex two-stage non-cooperative game with risk-averse players under uncertainty is formulated as a two-stage stochastic variational inequality (SVI) for point-to-set operators. Due to the nondifferentiability of the function
In this survey paper we present an overview of derivative-free optimization, including basic concepts, theory, derivative-free methods and some applications. To date, there are mainly three classes of derivative-free methods, and we concentrate on two of them: direct search methods and model-based methods. We first focus on unconstrained optimization problems and review some classical direct search methods and model-based methods in turn. Then, we survey a number of derivative-free approaches for constrained problems, including an algorithm we recently proposed for spherical optimization.
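The simplest member of the direct search class surveyed here is compass search: poll the 2n coordinate directions and halve the step when no poll point improves. This is a minimal sketch of the class, not any specific algorithm from the survey; the quadratic test function is illustrative.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    """Direct search (compass search): poll the 2n coordinate directions,
    accept improving points, and shrink the step on unsuccessful polls.
    Minimal sketch of the direct-search class, using no derivatives."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(n):
            for s in (+1.0, -1.0):           # poll +e_i and -e_i
                y = x.copy(); y[i] += s * step
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                      # unsuccessful poll: shrink
        it += 1
    return x

f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
x = compass_search(f, [0.0, 0.0])
```

Only function values are used, which is the defining property of derivative-free methods.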
The convergence of the direct ADMM is not guaranteed when it is used to solve multi-block separable convex optimization problems. In this paper, we propose a hybrid scheme whose subproblems can be solved in parallel. We first divide the variables into groups: within each group, the subproblems are solved by a Gauss-Seidel sweep, while across the groups a Jacobi-like update is used. The effectiveness of the algorithm is demonstrated by numerical experiments.
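The grouping idea, sequential Gauss-Seidel updates inside a group, Jacobi-style updates across groups so the groups can run in parallel, can be sketched on a plain linear system. This is a generic illustration of the splitting on a diagonally dominant system, not the paper's ADMM scheme.

```python
import numpy as np

def group_gs_jacobi(A, b, groups, iters=200):
    """Hybrid sweep for A x = b: Gauss-Seidel within each variable group
    (sequential updates), Jacobi across groups (each group sees only the
    previous iterate of the others, so groups could run in parallel).
    Generic sketch; assumes A is diagonally dominant so the sweep converges."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x_old = x.copy()
        x_new = x_old.copy()
        for g in groups:                     # groups are independent
            for i in g:                      # sequential inside a group
                s = A[i] @ x_old - A[i, i] * x_old[i]
                for j in g:                  # swap in updated in-group values
                    if j != i:
                        s += A[i, j] * (x_new[j] - x_old[j])
                x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
    return x

A = np.array([[4.0, 1.0, 0.0, 1.0],
              [1.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [1.0, 0.0, 1.0, 5.0]])
b = np.array([6.0, 7.0, 6.0, 7.0])
x = group_gs_jacobi(A, b, groups=[[0, 1], [2, 3]])
```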
This paper first presents basic background, definitions, and propositions for the Fourier matrix and bent functions. We then construct an orthonormal basis from the eigenvectors of the corresponding Fourier matrix. Finally, the diagonalization of the Fourier matrix is completed and several theorems about it are proved.
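The spectral structure underlying this diagonalization is easy to check numerically: the unitary Fourier (DFT) matrix satisfies F^4 = I, so its eigenvalues are fourth roots of unity {1, -1, i, -i}. This is a quick numerical sketch of those standard facts, not the paper's construction, and the size n = 8 is an arbitrary choice.

```python
import numpy as np

def fourier_matrix(n):
    """Unitary Fourier (DFT) matrix with entries omega^{jk} / sqrt(n),
    omega = exp(-2*pi*i/n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

n = 8
F = fourier_matrix(n)
F4 = np.linalg.matrix_power(F, 4)            # F^2 is an index-reversal
eigvals = np.linalg.eigvals(F)               # permutation, so F^4 = I
```

Since F^4 = I, the minimal polynomial of F divides x^4 - 1, which is why an orthonormal eigenbasis with eigenvalues among the fourth roots of unity exists.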