MULTIVARIATE SPECTRAL DY-TYPE PROJECTION METHOD FOR CONVEX CONSTRAINED NONLINEAR MONOTONE EQUATIONS

Abstract. In this paper, we propose a multivariate spectral DY-type projection method for solving nonlinear monotone equations with convex constraints. The search direction of the proposed method combines those of the multivariate spectral gradient method and the DY conjugate gradient method. Since no derivative information is required, the proposed method is well suited to large-scale nonsmooth monotone equations. Under appropriate conditions, we prove the global convergence and the R-linear convergence rate of the proposed method. Preliminary numerical results also indicate that the proposed method is robust and effective.

1. Introduction. In this paper, we consider the solution of the following nonlinear monotone equations

$$F(x) = 0, \quad x \in \Omega, \qquad (1)$$

where $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex set, and $F : \Omega \to \mathbb{R}^n$ is continuous and monotone, i.e.,

$$\langle F(x) - F(y), x - y \rangle \ge 0, \quad \forall x, y \in \Omega.$$

Due to the monotonicity of $F$, the solution set of problem (1), denoted by $\Omega^*$, is convex. Throughout this paper, we assume that $\Omega^*$ is nonempty.
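To fix ideas, the following minimal sketch sets up a small instance of problem (1) with $\Omega = \mathbb{R}^n_+$; the particular mapping $F$ and the helper names are illustrative assumptions, not part of the problem class itself.

```python
import numpy as np

def F(x):
    # An illustrative continuous monotone mapping:
    # F_i(x) = exp(x_i) - 1 is monotone, since each component
    # (e^{x_i} - e^{y_i})(x_i - y_i) >= 0.
    return np.exp(x) - 1.0

def project(x):
    # Projection P_Omega onto the closed convex set Omega = R^n_+.
    return np.maximum(x, 0.0)

# Monotonicity check on random pairs: <F(x) - F(y), x - y> >= 0.
rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.random(4), rng.random(4)
    assert np.dot(F(x) - F(y), x - y) >= 0.0
```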
The focus of this paper is on solving nonlinear monotone equations whose Jacobian matrix is either unavailable or requires a prohibitive amount of storage. This situation is common when problem (1) arises from practical applications such as economic equilibrium problems [7], the power flow equations [18] and chemical equilibrium systems [13,14].
Numerous methods can be used to solve problem (1); see Refs. [5], [6], [9], [15], [16] and [19]. Among these methods, Newton's method, quasi-Newton methods, Gauss–Newton methods, the Levenberg–Marquardt method and their variants are very attractive, because their superlinear convergence rate can be achieved under suitable assumptions. Unfortunately, these methods need to solve, at each iteration, a linear system involving the Jacobian matrix or an approximation of it, which makes them unsuitable for large-scale nonsmooth monotone equations.
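For contrast, the sketch below spells out one bare-bones Newton step for (1) with the constraint set ignored; the finite-difference Jacobian is an illustrative stand-in for the derivative information that the derivative-free methods discussed next avoid.

```python
import numpy as np

def newton_step(F, x, eps=1e-8):
    # Dense finite-difference Jacobian: n extra evaluations of F and
    # O(n^2) storage -- the per-iteration cost that makes Newton-type
    # methods unattractive for large-scale problems.
    n = x.size
    Fx = F(x)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(x + e) - Fx) / eps
    # Solve the Newton system J d = -F(x), then update.
    return x + np.linalg.solve(J, -Fx)

# Example: one step on F(x) = exp(x) - 1, whose unique zero is x = 0.
print(newton_step(lambda v: np.exp(v) - 1.0, np.full(3, 0.5)))
```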
The spectral gradient method, originally proposed by Barzilai and Borwein [1] for unconstrained optimization problems, has been successfully extended to nonlinear monotone equations by Cruz and Raydan [10,11]. Recently, Zhang and Zhou [23] combined the spectral gradient method [1] with the projection technique [16] to construct a spectral gradient projection method for solving nonlinear monotone equations. When $F$ is Lipschitz continuous, the global convergence of this method was established. An attractive feature of this method is that it can also be applied to nonsmooth equations. Han et al. [8] generalized the spectral gradient method [1] and proposed a multivariate spectral gradient method for unconstrained optimization problems, which is finitely convergent for positive definite quadratics; its global convergence was established under a nonmonotone line search. Yu et al. [21] further studied a multivariate spectral projected gradient method for bound constrained optimization. Most recently, Yu et al. [22] established a multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints, which can be viewed as an extension of the multivariate spectral projected gradient method [21].
The main purpose of this paper is to study a multivariate spectral DY-type projection method for solving large-scale nonlinear monotone equations with convex constraints. The search direction of the proposed method can be viewed as a correction of the search direction generated in [22] by means of the search direction of the DY conjugate gradient method [4]. Under appropriate conditions, the global convergence and the R-linear convergence rate of the proposed method are proved. Since no derivative information is required, the proposed method is very suitable for large-scale nonlinear monotone equations.
The rest of this paper is organized as follows. In Section 2, we present the multivariate spectral DY-type projection method. In Section 3, we prove the global convergence of the proposed method under suitable conditions. In Section 4, we establish the R-linear convergence rate of the proposed method under an additional assumption. Preliminary numerical results on the test problems are presented in Section 5.
2. Multivariate spectral DY-type projection method. The spectral gradient method was first proposed by Barzilai and Borwein [1] for solving the unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x),$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and its gradient, denoted by $g$, is available. This method generates a sequence $\{x_k\}$ by

$$x_{k+1} = x_k - \alpha_k^{-1} g_k,$$

where $\alpha_k > 0$ is defined by

$$\alpha_k = \frac{s_{k-1}^T y_{k-1}}{s_{k-1}^T s_{k-1}}, \qquad s_{k-1} = x_k - x_{k-1}, \quad y_{k-1} = g_k - g_{k-1}.$$

The choice of $\alpha_k$ imposes a quasi-Newton property: $\alpha_k$ can be obtained by minimizing $\|\alpha I s_{k-1} - y_{k-1}\|$ with respect to $\alpha$, so that $\alpha_k I$ approximates the Hessian matrix of $f$ at $x_k$. Recently, Han et al. [8] replaced $\alpha_k I$ with a diagonal matrix $\mathrm{diag}\{\lambda_k^1, \lambda_k^2, \cdots, \lambda_k^n\}$, and obtained a multivariate spectral gradient method for solving unconstrained optimization problems. The coefficients $\lambda_k^1, \lambda_k^2, \cdots, \lambda_k^n$ are generated by minimizing

$$\|\mathrm{diag}\{\lambda^1, \lambda^2, \cdots, \lambda^n\} s_{k-1} - y_{k-1}\|,$$

which gives $\lambda_k^i = y_{k-1}^i / s_{k-1}^i$. Then the multivariate spectral gradient iterative scheme is

$$x_{k+1} = x_k - \mathrm{diag}\{1/\lambda_k^1, 1/\lambda_k^2, \cdots, 1/\lambda_k^n\} g_k.$$

In what follows, we extend the multivariate spectral gradient method, and propose a multivariate spectral DY-type projection method with the help of the search direction of the DY method and the projection technique. Denote the $i$th components of $y_k$ and $s_k$ by $y_k^i$ and $s_k^i$, respectively. For the sake of notational convenience, we write $F_k$ for $F(x_k)$.
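As a concrete illustration of the coefficients just described, the following minimal sketch computes the Barzilai–Borwein coefficient and the multivariate spectral coefficients from a pair of difference vectors; the safeguarding bounds are an added assumption for numerical robustness, not part of the formulas above.

```python
import numpy as np

def bb_coefficient(s, y):
    # alpha = argmin_a ||a*I*s - y||, with least-squares solution
    # a = s^T y / s^T s.
    return float(np.dot(s, y) / np.dot(s, s))

def multivariate_coefficients(s, y, lo=1e-10, hi=1e10):
    # Componentwise least-squares fit: lambda_i = y_i / s_i minimizes
    # ||diag(lambda) s - y||^2 separably in each coordinate.
    lam = y / s
    # Safeguard (an added assumption): clip to keep each coefficient
    # positive and bounded, as spectral methods commonly do.
    return np.clip(lam, lo, hi)

# Example on a quadratic f(x) = 0.5 x^T A x with diagonal A: the fit is exact.
A = np.diag([1.0, 4.0, 9.0])
x0, x1 = np.array([1.0, 1.0, 1.0]), np.array([0.5, 0.2, 0.1])
s, y = x1 - x0, A @ x1 - A @ x0        # y = g(x1) - g(x0)
print(multivariate_coefficients(s, y))  # recovers diag(A) = [1, 4, 9]
```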
We are now ready to formally present the algorithmic framework as follows.

Remark 1. The parameter

$$\omega_k = \frac{s_{k-1}^T y_{k-1}}{s_{k-1}^T s_{k-1}}$$

is the spectral coefficient, which is an appropriate Rayleigh quotient with respect to a secant approximation of the Jacobian. It was chosen as the step-size of the steepest descent method in [1], and it can be obtained by minimizing $\|\omega I s_{k-1} - y_{k-1}\|$ with respect to $\omega$.
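For completeness, the least-squares characterization of $\omega_k$ in Remark 1 admits a one-line verification:

```latex
\[
  q(\omega) = \|\omega I s_{k-1} - y_{k-1}\|^2, \qquad
  q'(\omega) = 2\,\omega\, s_{k-1}^T s_{k-1} - 2\, s_{k-1}^T y_{k-1} = 0
  \;\Longrightarrow\;
  \omega_k = \frac{s_{k-1}^T y_{k-1}}{s_{k-1}^T s_{k-1}}.
\]
% Since q''(\omega) = 2 s_{k-1}^T s_{k-1} > 0 whenever s_{k-1} \neq 0,
% this stationary point is the unique minimizer.
```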
Remark 2. From the definitions of $t$ and $w_k$, we have that the denominator of $\beta_k^{DY}$ is positive, which implies that the parameter $\beta_k^{DY}$ is well defined with any line search.
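To make Remark 2 concrete, the sketch below shows the general shape of a derivative-free DY-type direction, with the residual $F_k$ playing the role of the gradient in $\beta_k^{DY} = \|g_k\|^2 / (d_{k-1}^T y_{k-1})$ from [4]; the particular choice $w_{k-1} = y_{k-1} + t\, s_{k-1}$ below is an illustrative assumption, not the exact rule of Algorithm 2.1.

```python
import numpy as np

def dy_type_direction(F_k, d_prev, s_prev, y_prev, t=1.0):
    # Illustrative assumption: w_{k-1} = y_{k-1} + t * s_{k-1}. A shift of
    # this kind is a common way to keep the DY denominator positive
    # without a Wolfe-type line search (cf. Remark 2).
    w_prev = y_prev + t * s_prev
    beta = np.dot(F_k, F_k) / np.dot(d_prev, w_prev)
    # DY-type direction with the residual F_k in place of the gradient.
    return -F_k + beta * d_prev
```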
3. Convergence property. In this section, we show that Algorithm 2.1 is a contraction algorithm, and then we prove its convergence within the analytic framework of contraction-type methods.
The following lemma shows that the sequence $\{x_k\}$ generated by Algorithm 2.1 is Fejér monotone with respect to $\Omega^*$, and hence that Algorithm 2.1 is a contraction method for problem (1).
Proof. From the definition of $z_k$, it follows from (2) that

$$\langle F(z_k), x_k - x^* \rangle \ge \langle F(z_k), x_k - z_k \rangle \ge \sigma \alpha_k^2 \|d_k\|^2, \qquad (7)$$

where the first inequality follows from the monotonicity of $F$: since $F(x^*) = 0$, we have $\langle F(z_k), z_k - x^* \rangle \ge \langle F(x^*), z_k - x^* \rangle = 0$, and hence $\langle F(z_k), x_k - x^* \rangle \ge \langle F(z_k), x_k - z_k \rangle$.
Then we have

$$\|x_{k+1} - x^*\|^2 \le \|x_k - \lambda_k F(z_k) - x^*\|^2 = \|x_k - x^*\|^2 - 2\lambda_k \langle F(z_k), x_k - x^* \rangle + \lambda_k^2 \|F(z_k)\|^2 \le \|x_k - x^*\|^2 - \frac{\langle F(z_k), x_k - z_k \rangle^2}{\|F(z_k)\|^2}, \qquad (8)$$

where the first inequality uses the nonexpansive property of the projection operator, and the second inequality applies inequality (7) together with the definition $\lambda_k = \langle F(z_k), x_k - z_k \rangle / \|F(z_k)\|^2$. From (8), it is easy to obtain that $\|x_{k+1} - x^*\| \le \|x_k - x^*\|$, which means that the sequence $\{\|x_k - x^*\|\}$ is monotonically decreasing and bounded below. Thus, the sequence $\{\|x_k - x^*\|\}$ is convergent, which implies that the sequence $\{x_k\}$ is bounded.
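The argument above follows the classical hyperplane-projection framework of Solodov and Svaiter [16]; as a minimal sketch (assuming the trial point $z_k$ and its residual have already been produced by the line search), the update and its step length look as follows.

```python
import numpy as np

def projection_update(x_k, z_k, F_z, project):
    # lam_k = <F(z_k), x_k - z_k> / ||F(z_k)||^2 is the exact step that
    # projects x_k onto the hyperplane {x : <F(z_k), x - z_k> = 0}, which
    # separates x_k from the solution set by monotonicity.
    lam = np.dot(F_z, x_k - z_k) / np.dot(F_z, F_z)
    # Project back onto Omega to keep feasibility; nonexpansiveness of
    # P_Omega yields the contraction (8).
    return project(x_k - lam * F_z)
```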
For any $x^* \in \Omega^*$, it follows from (5) that

$$\|F(x_k)\| = \|F(x_k) - F(x^*)\| \le L \|x_k - x^*\| \le L \|x_0 - x^*\|,$$

which implies that the sequence $\{\|F(x_k)\|\}$ is bounded, i.e., there exists a constant $\vartheta = \max\{\sigma, L\|x_0 - x^*\|\}$ such that $\|F(x_k)\| \le \vartheta$. Then we have

$$\sigma \alpha_k^2 \|d_k\|^2 \le \langle F(z_k), x_k - z_k \rangle \le \langle F(x_k), x_k - z_k \rangle \le \|F(x_k)\|\, \|x_k - z_k\| \le \vartheta\, \alpha_k \|d_k\|, \qquad (9)$$

where the first inequality follows from (2), the second inequality follows from the monotonicity of $F$, and the third inequality applies the Cauchy–Schwarz inequality. From the triangle inequality, (5) and (9), we have

$$\|F(z_k)\| \le \|F(x_k)\| + L\|x_k - z_k\| \le \vartheta + \frac{L\vartheta}{\sigma},$$

then it holds that

$$\|F(z_k)\| \le \Big(1 + \frac{L}{\sigma}\Big)\vartheta. \qquad (10)$$

In addition, it follows from (9) that

$$\langle F(z_k), x_k - z_k \rangle \ge \sigma \|x_k - z_k\|^2,$$

which together with (8) and (10) gives

$$\sum_{k=0}^{\infty} \|x_k - z_k\|^4 \le \frac{(\sigma + L)^2 \vartheta^2}{\sigma^4}\, \|x_0 - x^*\|^2 < \infty, \quad \text{and hence} \quad \lim_{k \to \infty} \alpha_k \|d_k\| = \lim_{k \to \infty} \|x_k - z_k\| = 0.$$

Theorem 3.1. Let the sequence $\{x_k\}$ be generated by Algorithm 2.1. Then $\{x_k\}$ converges to a solution of problem (1).

Proof. From (6), the sequence $\{x_k\}$ generated by Algorithm 2.1 is bounded. In fact, the sequence $\{x_k\}$ is contained in the compact set $\{x \in \Omega : \|x - x^*\| \le \|x_0 - x^*\|\}$. So there exists a cluster point of $\{x_k\}$, denoted by $\bar{x}$. We assume that the subsequence $\{x_{k_j}\}$ converges to $\bar{x}$. From the standard techniques of Fejér monotonicity [2], it is not difficult to verify that $\bar{x}$ is a solution of problem (1). Moreover, $\bar{x}$ is the unique cluster point of the sequence $\{x_k\}$, and hence the whole sequence $\{x_k\}$ converges to $\bar{x}$.
4. R-linear convergence rate. In this section, the R-linear convergence rate of Algorithm 2.1 is proved under the following local error bound assumption: there exist constants $c > 0$ and $\delta > 0$ such that

$$c\, \mathrm{dist}(x, \Omega^*) \le \|F(x)\|, \quad \forall x \in \{x \in \Omega : \mathrm{dist}(x, \Omega^*) \le \delta\}. \qquad (11)$$
From Theorem 3.1, we always assume that $x_k \to x^*$ as $k \to \infty$, where $x^*$ belongs to the solution set $\Omega^*$ of problem (1).

Proof. Let $\nu_k := \arg\min\{\|x_k - \nu\| \mid \nu \in \Omega^*\}$; that is, $\nu_k$ is the solution closest to $x_k$, i.e., $\|x_k - \nu_k\| = \mathrm{dist}(x_k, \Omega^*)$. From Step 4, it is not difficult to obtain that

$$\langle F(z_k), x_k - z_k \rangle \ge \sigma \|x_k - z_k\|^2. \qquad (12)$$

Then, using the Cauchy–Schwarz inequality, we have

$$\|F(z_k)\| \ge \frac{\langle F(z_k), x_k - z_k \rangle}{\|x_k - z_k\|} \ge \sigma \|x_k - z_k\|.$$

Due to $\nu_k \in \Omega^*$, from (6) it holds that

$$\mathrm{dist}^2(x_{k+1}, \Omega^*) \le \|x_k - \nu_k\|^2 - \frac{\langle F(z_k), x_k - z_k \rangle^2}{\|F(z_k)\|^2} \le \mathrm{dist}^2(x_k, \Omega^*) - \frac{\sigma^2 \|x_k - z_k\|^4}{\|F(z_k)\|^2} \le (1 - \tau)\, \mathrm{dist}^2(x_k, \Omega^*)$$

for some constant $\tau \in (0, 1)$, where the second inequality follows from (12), and the third inequality applies (11).
The last inequality shows that the sequence $\{\mathrm{dist}(x_k, \Omega^*)\}$ Q-linearly converges to 0. Thus, the sequence $\{x_k\}$ R-linearly converges to $x^*$: indeed, by the Fejér monotonicity established in Section 3, $\|x_m - \nu_k\| \le \|x_k - \nu_k\|$ for all $m \ge k$, so letting $m \to \infty$ gives $\|x_k - x^*\| \le 2\, \mathrm{dist}(x_k, \Omega^*)$ for all $k$.

5. Numerical results. In this section, we apply Algorithm 2.1 to some common nonlinear monotone equations with convex constraints, and compare it with the MSGP method, the multivariate spectral gradient projection method proposed by Yu et al. [22], which performed better than a spectral gradient method [12] and a projection method [17]. Our code is written in MATLAB 7.0 and run on an HP personal computer with an Intel Core(TM) 2.60 GHz CPU and 2 GB of memory. The test problems are listed as follows (Problems 1 and 2 are also written out in the code sketch after the list).

Problem 1. The problem can be viewed as a modification of the Logarithmic function in [11]: $F : \Omega \to \mathbb{R}^n$ with $F_i(x) = \ln(|x_i| + 1) - x_i/n$, $i = 1, 2, \cdots, n$, and $\Omega = \mathbb{R}^n_+$.

Problem 2. The problem can be viewed as a modification of Exponential function 2 in [11]: $F : \Omega \to \mathbb{R}^n$ with $F_1(x) = e^{x_1} - 1$, $F_i(x) = e^{x_i} + x_{i-1} - 1$, $i = 2, 3, \cdots, n$, and $\Omega = \mathbb{R}^n_+$.

Problem 3. The problem can be viewed as a modification of one problem in [3], with $\Omega = \mathbb{R}^n_+$.

Problem 4. The problem can be viewed as a modification of one problem in [20].
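For reproducibility, Problems 1 and 2 translate directly into code; the following sketch implements their componentwise definitions (the vectorized form is an implementation choice).

```python
import numpy as np

def problem1(x):
    # Modified Logarithmic function: F_i(x) = ln(|x_i| + 1) - x_i / n.
    n = x.size
    return np.log(np.abs(x) + 1.0) - x / n

def problem2(x):
    # Modified Exponential function 2: F_1 = e^{x_1} - 1,
    # F_i = e^{x_i} + x_{i-1} - 1 for i = 2, ..., n.
    F = np.exp(x) - 1.0
    F[1:] += x[:-1]
    return F

def project_nonneg(x):
    # Omega = R^n_+ in both problems.
    return np.maximum(x, 0.0)
```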
For the $i$th group of experiments, let $\|F_{\mathrm{Algorithm\,2.1},i}\|$ and $\|F_{\mathrm{MSGP},i}\|$ denote, respectively, the final values of $\|F\|$ when Algorithm 2.1 and the MSGP method terminate. We say that the performance of Algorithm 2.1 is better than that of the MSGP method if $\|F_{\mathrm{Algorithm\,2.1},i}\| < \|F_{\mathrm{MSGP},i}\|$.