SEMI-LOCAL CONVERGENCE OF THE NEWTON-HSS METHOD UNDER THE CENTER LIPSCHITZ CONDITION

Abstract. Newton-type methods have gained much attention in the past decades, especially for semi-local convergence, which relies on no information around the solution x* of the target nonlinear equation. For large sparse non-Hermitian positive definite systems of nonlinear equations, assuming that the nonlinear operator satisfies the center Lipschitz condition, which is more general than the usual Lipschitz condition and the Hölder continuity condition, we establish a new Newton-Kantorovich convergence theorem for the Newton-HSS method. Once the convergence criterion is satisfied, the iteration sequence {x_k} generated by the Newton-HSS method is well defined and converges to the solution x*. Numerical results illustrate the effectiveness of the theorem.

1. Introduction. In this paper, we consider the nonlinear system

F(x) = 0,  (1)

where F : D ⊂ C^n → C^n is nonlinear and continuously differentiable, and D is an open convex subset of the n-dimensional complex linear space C^n. The Jacobian matrix F′(x) ∈ C^{n×n} is sparse, non-Hermitian and positive definite. Nonlinear systems of this kind arise in many areas of scientific computing and engineering applications [3,4,5]. The most classic and important iterative method for the system of nonlinear equations (1) is the Newton method [17,18], which can be formulated as

x_{k+1} = x_k − F′(x_k)^{−1} F(x_k),  k = 0, 1, 2, …,

where x_0 ∈ D is a given initial vector. Obviously, at the k-th iteration step it is necessary to solve the so-called Newton equation

F′(x_k) s_k = −F(x_k),  (2)

which is the dominant task in implementations of the Newton method; the (k+1)-th iterate is then x_{k+1} = x_k + s_k. Many linear iterative methods can be used to solve the Newton equation inexactly, such as the popular Krylov subspace methods [19,24] or the classical splitting methods. Usually, the linear iteration is called an inner iteration, whereas the nonlinear iteration that generates the sequence {x_k} is called an outer iteration. Newton-Krylov subspace methods have been used successfully in a wide range of applications [1,2,7]. Based on the Hermitian and skew-Hermitian splitting, Bai et al. [5] presented the Hermitian and skew-Hermitian splitting (HSS) method for non-Hermitian positive definite linear systems, and proved that the HSS method converges unconditionally to the unique solution of the linear system, where the convergence rate of HSS has the same upper bound as that of the CG method. In this paper we focus on this splitting method. The HSS iterative method reads as follows.

Algorithm 1.1. HSS
1. Input: an arbitrary initial guess x_0 ∈ C^n, a positive constant α, and a tolerance tol.
2. Split A = H + S, where H = (A + A^*)/2 is the Hermitian part and S = (A − A^*)/2 is the skew-Hermitian part of A.
3. Compute: for k = 0, 1, 2, …, compute x_{k+1} by the iteration scheme
(αI + H) x_{k+1/2} = (αI − S) x_k + b,
(αI + S) x_{k+1} = (αI − H) x_{k+1/2} + b,
until ‖b − A x_k‖ ≤ tol · ‖b − A x_0‖, where I is the identity matrix.
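As a concrete illustration, the following is a minimal Python/NumPy sketch of Algorithm 1.1 (not the authors' code): the function name `hss`, the dense `solve` calls, and the iteration cap are illustrative choices.

```python
import numpy as np

def hss(A, b, x0, alpha, tol=1e-8, max_iter=500):
    """Minimal HSS iteration sketch for Ax = b with A non-Hermitian
    positive definite (illustrative, dense-matrix version)."""
    n = A.shape[0]
    I = np.eye(n)
    H = (A + A.conj().T) / 2          # Hermitian part of A
    S = (A - A.conj().T) / 2          # skew-Hermitian part of A
    x = x0.copy()
    r0 = np.linalg.norm(b - A @ x0)   # initial residual norm
    for _ in range(max_iter):
        if np.linalg.norm(b - A @ x) <= tol * r0:
            break
        # first half-step: (alpha I + H) x_{k+1/2} = (alpha I - S) x_k + b
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # second half-step: (alpha I + S) x_{k+1} = (alpha I - H) x_{k+1/2} + b
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
    return x
```

In practice, the two shifted half-step systems are solved by sparse direct or inner Krylov solvers rather than dense `solve`.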
The convergence analysis of Newton-type methods has gained much attention in the past decades. Roughly speaking, these analyses fall into two classes. The first is the local convergence analysis, which determines the convergence ball based on information around the solution x* of the nonlinear equation (1), together with the convergence rate. For example, in [6], Bai and Guo first used the HSS iteration [5] as the inner solver to solve the Newton equation (2) approximately, with the inexact Newton method [10] as the outer solver, thus presenting the Newton-HSS method for solving the system of nonlinear equations (1), and also gave local convergence theorems under Lipschitz continuity conditions. After that, Chen, Lin and Wu [8] used the HSS method as the inner solver and the modified Newton method as the outer solver, proposed the modified Newton-HSS method for the system of nonlinear equations (1), and discussed its local convergence properties under the Hölder continuity condition, which is more general than the usual Lipschitz condition. For other examples, see [12,22,23]. However, we usually do not know any information about the solution x*, so it is natural to analyze information around the initial point x_0 to obtain a convergence criterion. This scheme is called the semi-local convergence analysis, typified by the Newton-Kantorovich theorem [11,15]. Guo and Duff [14] and Chen, Lin and Wu [9] analyzed semi-local convergence theorems of the Newton-HSS method and the modified Newton-HSS method, respectively. For convenience, we use the term semi-local theorem instead of Newton-Kantorovich theorem in the following paragraphs.
The center Lipschitz condition in the inscribed sphere with the L-average is more general than the usual Lipschitz condition and the Hölder continuity condition. The following definition of the center Lipschitz condition is due to Wang [21]; it has received a lot of attention and been extensively studied [13,16].

Definition 1.1. Let Y be a Banach space and let x_0 ∈ C^n. Let G be a mapping from C^n to Y. Then G is said to satisfy the center Lipschitz condition on B(x_0, r) if

‖G(x) − G(x_0)‖ ≤ ∫_0^{ρ(x)} L(u) du  for all x ∈ B(x_0, r),

where ρ(x) = ‖x − x_0‖ and L is a positive integrable function. In practice we may encounter nonlinear equations that satisfy neither the Lipschitz condition nor the Hölder condition but do satisfy the center Lipschitz condition; in such cases we cannot directly judge whether the Newton-HSS method is suitable for solving the equations. The numerical section gives the details. Consequently, it is necessary to study the convergence properties of the Newton-HSS method under the center Lipschitz condition, i.e., Definition 1.1. The main work of this paper is to establish a semi-local convergence theorem for the Newton-HSS method, under the hypothesis that the derivative satisfies the center Lipschitz condition, by the majorizing function method: we show that the Newton-HSS sequence {x_k} is "majorized" by the numerical sequence {t_k} with initial point t_0 = 0.
The organization of the paper is as follows. In Section 2, we introduce the Newton-HSS iterative method. In Section 3, we first give some lemmas which are useful for our main result, then present the new semi-local convergence theorems under the hypothesis that the derivative satisfies the center Lipschitz condition in the inscribed sphere with the L-average. Applications of the results are illustrated in Section 4. Finally, some conclusions are given in Section 5.
2. Newton-HSS iteration. For the Jacobian matrix F′(x), let H(x) = (F′(x) + F′(x)^*)/2 be its Hermitian part and S(x) = (F′(x) − F′(x)^*)/2 be its skew-Hermitian part. The following is the Newton-HSS method [6,14].

HONGXIU ZHONG, GUOLIANG CHEN AND XUEPING GUO
and obtain d_{k,l_k} such that

‖F(x_k) + F′(x_k) d_{k,l_k}‖ ≤ η_k ‖F(x_k)‖.

Thus we have the formula x_{k+1} = x_k + d_{k,l_k}. From the Newton-HSS method we can get [14]

x_{k+1} = x_k − (I − T_k^{l_k}) F′(x_k)^{−1} F(x_k),

where T_k := T(α; x_k).
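To make the outer-inner structure concrete, here is a minimal Python/NumPy sketch of the Newton-HSS idea (an illustrative reading, not the paper's implementation): the Newton equation F′(x_k)d = −F(x_k) is solved approximately by HSS sweeps until the inexact-Newton condition ‖F + F′d‖ ≤ η‖F‖ holds, with η_k taken constant. The names `newton_hss`, `eta`, and the iteration caps are assumptions.

```python
import numpy as np

def newton_hss(F, J, x0, alpha, eta=0.1, tol=1e-6, max_outer=50, max_inner=200):
    """Sketch of Newton-HSS: outer inexact Newton, inner HSS sweeps."""
    x = x0.copy()
    F0 = np.linalg.norm(F(x0))
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol * F0:     # outer stopping rule
            break
        A = J(x)                               # Jacobian F'(x_k)
        n = A.shape[0]
        I = np.eye(n)
        H = (A + A.conj().T) / 2               # Hermitian part
        S = (A - A.conj().T) / 2               # skew-Hermitian part
        b = -Fx
        d = np.zeros_like(x)
        for _ in range(max_inner):             # inner HSS iteration on A d = b
            if np.linalg.norm(Fx + A @ d) <= eta * np.linalg.norm(Fx):
                break
            d_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ d + b)
            d = np.linalg.solve(alpha * I + S, (alpha * I - H) @ d_half + b)
        x = x + d                              # x_{k+1} = x_k + d_{k, l_k}
    return x
```

The inner loop count plays the role of l_k; its lower bound l* appears in the semi-local criterion below.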
3. Semi-local convergence theorem under the center Lipschitz condition with the L-average. In this section, we establish a new semi-local convergence theorem for the Newton-HSS method under the assumption that the derivative satisfies the center Lipschitz condition, which is more general than the Hölder and Lipschitz conditions. Firstly, we give the assumption.
(A2) (The center Lipschitz condition in the inscribed sphere with the L-average.) There exist positive integrable functions L_h(u) and L_s(u) on [0, +∞) such that, for all x ∈ B(x_0, r),

‖H(x) − H(x_0)‖ ≤ ∫_0^{ρ(x)} L_h(u) du,  ‖S(x) − S(x_0)‖ ≤ ∫_0^{ρ(x)} L_s(u) du,

where ρ(x) = ‖x − x_0‖.

Remark 1. From condition (A2), since F′(x) = H(x) + S(x), for every x ∈ B(x_0, r) we obtain

‖F′(x) − F′(x_0)‖ ≤ ∫_0^{ρ(x)} L(u) du,  where L(u) := L_h(u) + L_s(u).

Denote by {t_k} the sequence generated by the iterative process (11), with initial point t_0 = 0. Before giving the main theorem, we list a series of useful lemmas. By Assumption 3.1, F′(x) = H(x) + S(x), and the perturbation lemma [17], we have Lemma 3.1. Lemma 3.2 is taken from the proof of Lemma 3.2 in [21], and we give a proof of Lemma 3.3 below. Define the function χ; then χ is increasing on (0, +∞).
Then the following assertions hold:
(i) φ is strictly decreasing on [0, r_0] and strictly increasing on [r_0, +∞). Moreover, if c < ω, then φ has two zeros, denoted respectively by r_1 and r_2, such that c < r_1 < r_0 < r_2; and if c = ω, then φ has a unique zero r_1 in (c, +∞) (in fact, r_1 = r_0).
(ii) {t_k} is strictly monotonically increasing and converges to r_1.
(ii) We prove (ii) by induction, i.e., we show that the inequalities

t_k < t_{k+1} < r_1  (13)

hold for all nonnegative integers k. For k = 0, from (11) and (12) we have t_0 < t_1 < r_1, hence (13) is true for k = 0. Suppose that t_{k−1} < t_k < r_1; then, considering t_{k+1} and using the monotonicity of φ, we have t_k < t_{k+1} < r_1. Hence (13) is also true for k. Consequently, the inequalities (13) hold for all nonnegative integers. Furthermore, since {t_k} is monotonically increasing and bounded above, there exists t* such that lim_{k→+∞} t_k = t*. Then we can assert that t* = r_1 [20].
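Lemma 3.3(ii) can be illustrated numerically. The sketch below is an assumption-laden toy: it uses a quadratic majorizing function φ(t) = c − t + (L/2)t² (the paper's φ additionally involves ω and the HSS inner-iteration factors) and runs the Newton-type majorizing iteration t_{k+1} = t_k − φ(t_k)/φ′(t_k) from t_0 = 0, which increases monotonically to the smaller zero r_1.

```python
import math

def majorizing_sequence(c, L, k_max=30):
    """Majorizing sequence t_{k+1} = t_k - phi(t_k)/phi'(t_k) for the
    assumed quadratic phi(t) = c - t + (L/2) t^2, started at t_0 = 0."""
    phi = lambda t: c - t + 0.5 * L * t * t
    dphi = lambda t: -1.0 + L * t
    t = 0.0
    seq = [t]
    for _ in range(k_max):
        t = t - phi(t) / dphi(t)    # Newton step on the scalar function phi
        seq.append(t)
    return seq
```

For c = 0.2 and L = 1, the zeros of φ are 1 ± sqrt(0.6), and the sequence climbs monotonically to the smaller zero r_1 = 1 − sqrt(0.6), mirroring assertion (ii).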
Then we can give the following semi-local convergence theorem of the Newton-HSS method under the center Lipschitz condition.
Proof. First of all, we show the following estimate for the iteration matrix T(α; x) of the linear solver: if x ∈ B(x_0, r), then ‖T(α; x)‖ < (τ + 1)θ < 1.
In fact, from the definition of B(α; x) in (5) and Assumption 3.1, denoting ρ(x) = ‖x − x_0‖, we can get the following bound. Similarly, we have

Then from (6) and (16), using the perturbation lemma, we get that B(α; x)^{−1} exists with the bound (19). Hence, combining (16), (17), (18) and (19), the estimate for the gap between the inner iteration matrices T(α; x) and T(α; x_0) is obtained, and consequently ‖T(α; x)‖ < (τ + 1)θ < 1. From the value of l* in the conditions, we have the corresponding estimate for k > 0. Next, we prove (21) by induction. From (7), we get that (21) is correct for k = 0.

For k ≥ 2, suppose that (21) holds for all nonnegative integers less than k; we need to prove that it holds for k. Because x_k, x_{k−1} ∈ B(x_0, r_1), it follows from the integral mean-value theorem and Lemma 3.1 that the first inequality in (21) holds for k = 2, 3, …. Since t_{k−1} < t_k < r_0 and r_1 < r_0 by the condition in (15), we get γ ∫_0^{r_1} L(u) du < 1, hence the condition of Lemma 3.1 is satisfied. Consequently, from the iterative formula (7), (20) and Lemma 3.1, the second inequality in (21) is also correct for k. The third inequality in (21) then follows easily. Hence, (21) is true for all k. By (14) and Lemma 3.3, the sequence {t_k} converges to r_1, so the sequence {x_k} also converges, to x* say. Because ‖T(α; x*)‖ < 1 [5], from the iteration (7) we have F(x*) = 0. The proof is complete.
Remark 2. If the integrable functions L_h(u) and L_s(u) in (A2) are positive constants L_h and L_s, respectively, then the center Lipschitz condition reduces to the usual Lipschitz condition; therefore, Theorem 3.4 is an extension of Theorem 3.2 in [14]. If both integrable functions are of the form L p u^{p−1}, where 0 < p < 1 and L is a positive constant, then the center Lipschitz condition reduces to the usual Hölder condition. Hence, the Lipschitz and Hölder conditions are special cases of the center Lipschitz condition.
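The two reductions in Remark 2 can be verified by evaluating the L-average integrals directly, with ρ(x) = ‖x − x_0‖:

```latex
\int_0^{\rho(x)} L\,du = L\,\|x-x_0\|
  \quad (L(u)\equiv L:\ \text{Lipschitz condition}),
\qquad
\int_0^{\rho(x)} L\,p\,u^{p-1}\,du = L\,\|x-x_0\|^{p}
  \quad (L(u)=L\,p\,u^{p-1}:\ \text{H\"older condition}).
```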
Remark 3. Because the function φ is strictly decreasing on [0, r_0], we can search for a point r̃ ∈ (0, r_0) such that φ(r̃) > 0; such an r̃ must be less than the zero r_1 in (15). Consequently, it is possible to choose an appropriate r.

4. Application. In this section, we apply the main result to the following two-dimensional nonlinear convection-diffusion equation, where m is a rational number, Ω = (0, 1) × (0, 1), and ∂Ω is its boundary; q_1 and q_2 are positive constants used to measure the magnitudes of the convective terms. By applying the centered finite difference scheme on the equidistant discretization grid with stepsize h = 1/(N + 1), the system of nonlinear equations (1) is obtained in the following form, where N is a prescribed positive integer, Re_j = q_j h / 2, j = 1, 2, Re = max{Re_1, Re_2} is the mesh Reynolds number, ⊗ is the Kronecker product, and n = N × N.
It is easy to get F′(x) = M + m h² diag(x_1^{m−1}, x_2^{m−1}, …, x_n^{m−1}). We have

‖F′(x) − F′(x_0)‖ ≤ ∫_0^{ρ(x)} L(u) du,

where ‖·‖ denotes the 2-norm and L(u) = m(m−1)h² u^{m−2}. Hence, the center Lipschitz condition in the inscribed sphere with the L-average is satisfied, and we can obtain the convergence result for the nonlinear equation (25).
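As an illustration only, the sketch below assembles an assumed form of the discretized system: M is modeled on the standard centered-difference convection-diffusion stencil tridiag(−1 − Re_j, 2, −1 + Re_j) combined via Kronecker products, the nonlinear term is taken as h² x^m componentwise with a zero source term, consistent with the Jacobian F′(x) = M + m h² diag(x_i^{m−1}) stated above; the exact scaling and source of the paper's discretization may differ.

```python
import numpy as np

def build_system(N, q1, q2, m):
    """Assumed centered-difference discretization of (25) on an N x N grid."""
    h = 1.0 / (N + 1)
    Re1, Re2 = 0.5 * q1 * h, 0.5 * q2 * h    # mesh Reynolds numbers
    I = np.eye(N)

    def tri(re):
        # tridiag(-1 - re, 2, -1 + re): diffusion plus centered convection
        return (2.0 * np.eye(N)
                + (-1.0 - re) * np.eye(N, k=-1)
                + (-1.0 + re) * np.eye(N, k=1))

    M = np.kron(tri(Re1), I) + np.kron(I, tri(Re2))   # n = N*N unknowns

    def F(x):
        return M @ x + h**2 * x**m                    # nonlinearity u^m

    def J(x):
        return M + m * h**2 * np.diag(x**(m - 1))     # Jacobian F'(x)

    return F, J, M
```

With m = 3 this reproduces the setting of Example 4.1, where the exact solution is x* = 0.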
a h² r_1^m − b r_1 + c = 0, with r_1 ∈ (0, r_0), and l* = lim inf_{k→∞} l_k satisfies (26), where the symbol ⌈·⌉ denotes the smallest integer no less than the corresponding real number, τ ∈ (0, (1 − θ)/θ), and θ ≡ θ(α; x_0) = ‖T(α; x_0)‖ < 1. If 1 < m < 2, then L(u) has the form L p u^{p−1} with p = m − 1, the cases considered in [8] and [9], respectively, hence F satisfies the Hölder condition. If m = 2, then L(u) = 2h² is a positive constant, and equation (25) satisfies the Lipschitz condition. But if m > 2, we cannot determine whether the function F meets the Lipschitz condition or the Hölder condition, whereas we can confirm that F satisfies the center Lipschitz condition; hence we can use the corollary and conclude that the Newton-HSS method can solve equation (25). Next, we give the case m = 3 to verify Corollary 1.
Example 4.1. Now let us examine the convergence criteria of the corollary. For simplicity, we choose m = 3. Let the initial guess x_0 be fixed; then the parameters β, γ, δ can be estimated from Assumption 3.1. Take the positive constants q_1 = q, q_2 = 1/h, and adopt α = q h / 2; then θ can be estimated from the definition of ‖T(α; x_0)‖, and then τ. If the prescribed tolerance for controlling the accuracy of the HSS iteration is set to η_k = η = 0.1, then the parameters a, b, c in Corollary 1 can be computed, r_0 can be calculated from r_0^{m−1} = b/(a m h²), and the value of l* can be estimated by (26). The convergence corollary for equation (25) is examined for different problem sizes n = N × N, different quantities q, different inner iteration tolerances η, and different initial points x_0, against the hypothesis c ≤ a(m − 1)h² r_0^m. We only list the numerical results for q = 600 and q = 800 with different N and x_0 in Tables 1-6. From Tables 1-6, the inequality c ≤ a(m − 1)h² r_0^m holds in all situations, where r_0 satisfies r_0^{m−1} = b/(a m h²), even when the initial point is far away from the exact solution x* = 0. Hence, the convergence criteria of Corollary 1 are satisfied, and the iteration sequence {x_k} generated by the Newton-HSS method is well defined and converges to x*, which satisfies F(x*) = 0. In actual computation, the stopping criterion for the outer Newton method is set to ‖F(x_k)‖_2 / ‖F(x_0)‖_2 ≤ 10^{−6}, and the numerical results are presented in Tables 7-8, where Error denotes the value of ‖F(x_k)‖_2 / ‖F(x_0)‖_2, CPU the time in seconds, and Outer IT the number of outer iterations. From Tables 7-8 we can see that, for different initial points x_0, different problem sizes n = N × N, different quantities q, and tolerance η = 0.1, the Newton-HSS method can compute an approximate solution of the system of nonlinear equations (25).
Hence, the convergence criteria in Corollary 1 for equation (25) are effective.
However, from the tables we can also see that the CPU time grows as n increases, which means the Newton-HSS method becomes slower for larger problems; faster algorithms could be used to solve equation (25), such as the Newton-MHSS and modified Newton-MHSS methods. The convergence analysis of these methods under the center Lipschitz condition, however, remains to be developed.

5. Conclusion. For solving nonlinear systems, Newton-type methods are among the most classic and effective methods. In the case of large sparse nonlinear systems with non-Hermitian positive definite Jacobian matrices, the Newton-HSS method is an attractive choice. In this paper, we study the semi-local convergence of the Newton-HSS method under the center Lipschitz condition, which is more general than the usual Lipschitz and Hölder conditions. A convergence criterion for the Newton-HSS method under the center Lipschitz condition is presented; it provides a convenient way to determine the solvability of the target equation, and numerical results illustrate its effectiveness. In the future, the convergence analysis of the faster methods under the center Lipschitz condition can be developed.