CONVERGENCE ANALYSIS OF A PARALLEL PROJECTION ALGORITHM FOR SOLVING CONVEX FEASIBILITY PROBLEMS

Abstract. The convex feasibility problem (CFP) is a classical problem in nonlinear analysis. In this paper, we propose an inertial parallel projection algorithm for solving the CFP. Different from previous algorithms, the proposed method introduces a sequence of parameters and uses the information of the last two iterations at each step. To prove its convergence in a simple way, we transform the parallel algorithm into a sequential one in a constructed product space. Preliminary experiments demonstrate that the proposed approach converges faster than the general extrapolated algorithms.

1. Introduction. In this paper, we are interested in designing new algorithms for solving the convex feasibility problem (CFP) of finding a point in the intersection of finitely many closed convex sets of a Euclidean space. The CFP is a classical problem in nonlinear analysis with applications in a variety of areas including economics, physics, and the applied sciences, for example, optimization [6], approximation theory [4,9], image reconstruction from projections and computerized tomography [5,15], and control problems [3,10], to name only a few.
A number of projection-based algorithms have been proposed for solving the CFP [1,12,14,17]. According to the number of projections performed at each step, sequential projection algorithms, parallel projection algorithms, and block-iterative projection algorithms have been intensively investigated. The reader is referred to [2,11,16] for a detailed discussion in this regard.
Convergence of algorithms is an important issue in numerical computation. In the past decade, a great deal of effort has been devoted to studying the convergence of algorithms developed for the CFP. In particular, to achieve faster convergence, an over-relaxed projection method [1], a non-monotone projection method [8], and extrapolated projection methods [7,18] have been presented for solving the CFP. In general, these methods use only the current iterate to obtain the next one. However, this kind of design appears to underutilize the available information. It has been pointed out [24] that algorithms using the information of the two preceding iterations at each step can converge faster, for certain types of problems, than those using only the previous iterate. However, very little research has been conducted along this direction. Hence, it is interesting and meaningful to explore the possibility of solving the CFP by developing algorithms that use more information from the previous steps.
In this paper, we investigate a new approach to solving the convex feasibility problem following the above idea. Specifically, we develop a new accelerated method by applying the inertial technique [19,20,21] to the parallel projection algorithm. In contrast to previous algorithms [1,7,18], the proposed method employs a sequence of parameters and two preceding iteration points to obtain the new iterate. The algorithm thus utilizes more of the information available at each step, which leads to improved performance in comparison with previous algorithms; this feature is illustrated by the preliminary experiments in Section 5. Furthermore, to prove its convergence in a simple way, we transform the parallel algorithm into a sequential one in a constructed product space. We believe that the obtained results complement and enrich the field of nonlinear analysis in both theory and applications.
The remainder of the paper is organized as follows. Section 2 summarizes some of the related notions and results. Section 3 describes the methodology adopted in the research in which an inertial parallel projection algorithm is presented for solving CFP. Section 4 presents the theoretical results in terms of convergence analysis. Some numerical tests are given in Section 5.

2. Preliminaries.
We are concerned with the following convex feasibility problem:

(1) find x ∈ C := C_1 ∩ C_2 ∩ · · · ∩ C_m,

where C_i ⊂ ℝ^n, i = 1, 2, · · · , m, are closed convex sets. In this section, we summarize some notation and relevant results which will be used in the subsequent analysis. Let X be a finite-dimensional vector space equipped with a scalar product denoted by ⟨·, ·⟩ and let ‖·‖ be the norm induced by ⟨·, ·⟩. Let P_C(·) denote the projection mapping from X onto a nonempty closed convex set C of X, that is,

(2) P_C(x) := argmin_{y ∈ C} ‖x − y‖.

We denote by I the identity operator in an arbitrary vector space. Throughout this paper, our focus is on X being the usual Euclidean space, namely, X = ℝ^n. We denote by ⟨·, ·⟩ and |·| the inner product and norm in (ℝ^n)^m, respectively. Let E be a real Banach space and S a nonempty closed convex subset of E. Recall that an operator T : S → S is said to be non-expansive if

(3) ‖T(x) − T(y)‖ ≤ ‖x − y‖ for all x, y ∈ S,

and firmly non-expansive if

(4) ‖T(x) − T(y)‖² ≤ ⟨T(x) − T(y), x − y⟩ for all x, y ∈ S.

It is easy to see that (4) ⇒ (3). Note that P_C(x) can be characterized by the following inequality:

(5) ⟨x − P_C(x), y − P_C(x)⟩ ≤ 0 for all y ∈ C.

The inequality (5) implies that for any x, y ∈ ℝ^n and a closed convex set C ⊂ ℝ^n, the following inequalities hold:

⟨x − P_C(x), P_C(y) − P_C(x)⟩ ≤ 0, ⟨y − P_C(y), P_C(x) − P_C(y)⟩ ≤ 0.

Taking the sum of the above two inequalities, we obtain

⟨(x − y) − (P_C(x) − P_C(y)), P_C(y) − P_C(x)⟩ ≤ 0,

which can be further written as

‖P_C(x) − P_C(y)‖² ≤ ⟨P_C(x) − P_C(y), x − y⟩.

Therefore, the operator P_C is firmly non-expansive. By virtue of the Cauchy–Schwarz inequality, it follows that

‖P_C(x) − P_C(y)‖ ≤ ‖x − y‖,

that is, the operator P_C is non-expansive. From (5), it is not hard to derive that

(6) ‖P_C(x) − y‖² ≤ ‖x − y‖² − ‖x − P_C(x)‖² for all y ∈ C.

The following results will be needed in the convergence analysis of the proposed algorithm.
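To make these properties concrete, the following Python sketch uses the closed-form projection onto a halfspace (the halfspace {y : ⟨a, y⟩ ≤ b}, the vector a, the bound b, and the random sample points are illustrative choices, not taken from the paper) and numerically checks the firm non-expansiveness inequality ‖P_C(x) − P_C(y)‖² ≤ ⟨P_C(x) − P_C(y), x − y⟩.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form projection onto the halfspace C = {y : <a, y> <= b}."""
    violation = np.dot(a, x) - b
    if violation <= 0:
        return x.copy()          # x already lies in C
    return x - (violation / np.dot(a, a)) * a

rng = np.random.default_rng(0)
a, b = np.array([1.0, 2.0, -1.0]), 0.5
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = project_halfspace(x, a, b), project_halfspace(y, a, b)
    # firm non-expansiveness: ||P_C(x) - P_C(y)||^2 <= <P_C(x) - P_C(y), x - y>
    assert np.dot(px - py, px - py) <= np.dot(px - py, x - y) + 1e-12
```

The same check on other closed convex sets with known projections (balls, boxes, affine sets) behaves identically, since firm non-expansiveness holds for every projection operator.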
Lemma 2.2 ([19]). Assume that the parameters δ_k, γ_k, k = 0, 1, 2, · · · , are positive and satisfy

γ_{k+1} ≤ γ_k + δ_k with Σ_{k=0}^∞ δ_k < ∞.

Then, the sequence {γ_k}_{k=0}^∞ is convergent.

Let T : C → C be a non-expansive mapping, where C is a subset of a Hilbert space. We denote by Fix(T) the set of fixed points of the operator T, which is defined as Fix(T) := {x ∈ C : T(x) = x}.

Lemma 2.3 ([22]). Let H be a Hilbert space and {x^k} a sequence in H. Suppose that there exists a nonempty set S ⊂ H satisfying (i) for every x* ∈ S, lim_{k→∞} ‖x^k − x*‖ exists, and (ii) any weak cluster point of {x^k} belongs to S. Then, there exists x̄* ∈ S such that {x^k} converges weakly to x̄*.
3. Algorithm description. In this section, we propose an algorithm, called the inertial parallel projection algorithm, for solving problem (1). To facilitate the convergence analysis, we further convert the proposed algorithm into a sequential one by constructing a product space.
3.1. Inertial parallel projection algorithm. We state the inertial parallel projection algorithm as follows.

Algorithm 3.1 Inertial parallel projection algorithm.
Initialization: Take x^0, x^1 in ℝ^n. Select the relaxation factor λ_k ∈ (0, 2) and the inertial parameter θ_k ∈ [0, 1).
Iterative step:

(7) y^k = x^k + θ_k (x^k − x^{k−1}),
(8) x^{k+1} = y^k + λ_k ( (1/m) Σ_{i=1}^m P_{C_i}(y^k) − y^k ),

where P_{C_i}(·) denotes the projection mapping onto the convex set C_i, i = 1, · · · , m. Note that, if θ_k ≡ 0 for all k, Algorithm 3.1 reduces to the general parallel projection algorithm, which has been widely investigated in the past.
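As a sketch of how the iteration (7)–(8) can be implemented, the following Python code runs the inertial parallel projection steps on a toy CFP. The constant choices of λ_k and θ_k, the stopping rule, and the two halfspaces are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def inertial_parallel_projection(projections, x0, x1, lam=1.0, theta=0.4,
                                 tol=1e-10, max_iter=10000):
    """Sketch of Algorithm 3.1: inertial step y^k = x^k + theta*(x^k - x^{k-1}),
    then a relaxed step toward the average of the projections of y^k."""
    x_prev, x = x0, x1
    for _ in range(max_iter):
        y = x + theta * (x - x_prev)                        # inertial step (7)
        avg = sum(P(y) for P in projections) / len(projections)
        x_next = y + lam * (avg - y)                        # parallel step (8)
        if np.linalg.norm(x_next - x) < tol:                # simple stopping rule
            return x_next
        x_prev, x = x, x_next
    return x

# toy CFP: C1 = {x : x[0] <= 1}, C2 = {x : x[1] <= 1}, projections in closed form
P1 = lambda z: np.array([min(z[0], 1.0), z[1]])
P2 = lambda z: np.array([z[0], min(z[1], 1.0)])
sol = inertial_parallel_projection([P1, P2], np.array([5.0, 7.0]),
                                   np.array([4.0, 6.0]))
```

With θ_k ≡ 0 the same routine reduces to the general parallel projection algorithm, matching the remark above.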

3.2. Construction of a product space. To investigate the convergence of the algorithm, we transform the parallel projection algorithm (7)–(8) into a sequential one by introducing a product space as follows. We denote by H = ((ℝ^n)^m, ⟨·, ·⟩, |·|) the product space equipped with the norm |·| induced by the inner product ⟨·, ·⟩. Next, we define two important subsets of the product space H, which will be used in the analysis. We first define ℵ, the Cartesian product of the convex sets C_i,

ℵ := C_1 × C_2 × · · · × C_m,

and D, the diagonal set,

D := {(x, x, · · · , x) ∈ (ℝ^n)^m : x ∈ ℝ^n}.

Hence, for the underlying CFP (1), finding a point in C ⊂ ℝ^n is equivalent to finding a point in ℵ ∩ D ⊂ H. As a result, our subsequent analysis will focus on the latter problem.

3.3. A sequential parallel projection algorithm. In the constructed product space H, we next develop an alternative to Algorithm 3.1 for solving (1), namely a sequential projection approach. Before stating the algorithm, we investigate some useful properties of the projections onto the two sets ℵ and D. First, the following result is taken from [22].
Lemma 3.1 ([22]). Denote by P_ℵ(·) and P_D(·) the projections onto ℵ and D, respectively. Then, for X = (x_1, x_2, · · · , x_m) ∈ (ℝ^n)^m,

P_ℵ(X) = (P_{C_1}(x_1), P_{C_2}(x_2), · · · , P_{C_m}(x_m)), P_D(X) = (x̄, x̄, · · · , x̄), where x̄ = (1/m) Σ_{i=1}^m x_i.

In order to show the convergence results stated in Section 4, we first derive the following results as Propositions 3.1 and 3.2.
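The product-space projections can be sketched in Python as follows: representing X ∈ (ℝ^n)^m as an (m, n) array, P_ℵ acts blockwise and P_D replaces every block by the average of the blocks. The two componentwise constraint sets below are illustrative choices.

```python
import numpy as np

def P_aleph(X, projections):
    """Blockwise projection onto Aleph = C_1 x ... x C_m."""
    return np.stack([P(X[i]) for i, P in enumerate(projections)])

def P_D(X):
    """Projection onto the diagonal D: repeat the average of the blocks."""
    avg = X.mean(axis=0)
    return np.tile(avg, (X.shape[0], 1))

# For a diagonal point X = (x, ..., x), P_D(P_aleph(X)) repeats
# (1/m) * sum_i P_{C_i}(x), i.e. one parallel-projection step on x.
P1 = lambda z: np.minimum(z, 1.0)    # C1 = {x : x <= 1 componentwise}
P2 = lambda z: np.maximum(z, -1.0)   # C2 = {x : x >= -1 componentwise}
x = np.array([3.0, -5.0])
X = np.tile(x, (2, 1))
step = P_D(P_aleph(X, [P1, P2]))[0]  # equals (P1(x) + P2(x)) / 2
```

This is exactly why the sequential iteration with P_D P_ℵ in H reproduces the parallel averaging step in ℝ^n.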
Proposition 3.1. Let P_D and P_ℵ be as in Lemma 3.1. Then Proof. For any X ∈ D, it follows from Lemma 3.1 that By the definition of the norm |·| together with its convexity, we have In addition, by the definition of ⟨·, ·⟩, it follows that Hence, according to the properties of the projection mapping and (9), it follows that This completes the proof.
Algorithm 3.2 Sequential projection algorithm in the product space H.
Initialization: Take X^0, X^1 in D. Select the relaxation factor λ_k ∈ (0, 2) and the inertial parameter θ_k ∈ [0, 1).
Iterative step:

Y^k = X^k + θ_k (X^k − X^{k−1}),
X^{k+1} = Y^k + λ_k ( P_D P_ℵ(Y^k) − Y^k ).

4. Convergence analysis. In this section, we shall study the convergence of the proposed algorithms.

Theorem 4.1. Suppose that inf_k λ_k > 0, sup_k λ_k < 2, and the parameters θ_k ∈ [0, 1) are chosen so that Σ_{k≥0} θ_k |X^k − X^{k−1}|² < ∞. Then the sequence {X^k} generated by Algorithm 3.2 converges to an element in D ∩ ℵ.

Proof. For a given Z ∈ D ∩ ℵ, define ϕ_k := (1/2)|X^k − Z|². The proof of this theorem is divided into three parts as follows.
Part I. We claim that Observe that, for arbitrary A, B ∈ H, We thus obtain By Proposition 3.1, we deduce ϕ_{k+1} Hence, Part II. We show that the sequence {|X^k − Z|} is convergent. By virtue of (10), we have It follows from (13) that Substituting (16) into (12), it turns out that Then, since θ_k² ≤ θ_k (because θ_k ∈ [0, 1)), (17) implies From the selection of the parameter θ_k, we have Σ_{k≥0} θ_k |X^k − X^{k−1}|² < ∞; applying Lemma 2.1 to (18), we conclude that the sequence {|X^k − Z|} is convergent.
Part III. We prove that the sequence {X^k} generated by Algorithm 3.2 converges to an element in D ∩ ℵ. By (18) and Lemma 2.1, we get Σ_k [ |X^k − Z|² − |X^{k−1} − Z|² ]_+ < ∞, and from (17), we arrive at This yields Since inf_k λ_k > 0 and sup_k λ_k < 2, we obtain Let {X^{k_l}} be a subsequence of {X^k} such that {X^{k_l}} converges to X*. Evidently, lim_{l→∞} |Y^{k_l} − P_D P_ℵ(Y^{k_l})| = 0, and {Y^{k_l}} converges to X* as well. From Lemma 2.3 and Proposition 3.2, we obtain X* ∈ D ∩ ℵ. Since {|X^k − X*|} is convergent, from Lemma 2.4 it is easy to get that lim_{k→∞} |X^k − X*| = lim_{l→∞} |X^{k_l} − X*| = 0. This completes the proof.
and the sequence {X^k} generated by Algorithm 3.2 converges to an element in D ∩ ℵ.
Proof. For any given Z ∈ D ∩ ℵ, define ϕ_k := (1/2)|X^k − Z|². From (10), we get that On the other hand, where p(θ) is given by (21). Since p(θ) > 0, the sequence {µ_k} is non-increasing. Obviously, Furthermore, it follows that
In the next theorem, we consider the convergence in the case Σ_k λ_k(2 − λ_k) = ∞.

Theorem 4.2. Suppose that Σ_k λ_k(2 − λ_k) = ∞ and θ_k ∈ [0, 1). Then, for arbitrary X^0, X^1 in D, the sequence {X^k} generated by Algorithm 3.2 converges to an element in D ∩ ℵ.
Proof. Following the same notation as in Theorem 4.1, by Σ_k λ_k(2 − λ_k) = ∞ and by (19), we have lim inf Since λ_k ∈ (0, 2) and U is non-expansive, we have which implies that By the choice of θ_k, applying Lemma 2.2 to (22), we get that the sequence {|X^k − Z|} is convergent. The rest of the proof is similar to that of Theorem 4.1 and is omitted for brevity. This completes the proof.
By transforming the sequence {X^k} generated by Algorithm 3.2 into the corresponding sequence {x^k} in the original space ℝ^n, we can derive similar convergence results for Algorithm 3.1 under appropriate assumptions. For simplicity of presentation, we only state the results; their proofs can be obtained in the same manner as those of Theorems 4.1–4.2 and Proposition 4.1. Suppose that λ_k ∈ (0, 2) and θ_k ∈ [0, 1). Then, the sequence {x^k} generated by Algorithm 3.1 converges to a point in C.

5. Numerical experiments.
To illustrate the proposed approach for solving the CFP, we have carried out numerical tests on three examples and compared the results with a commonly adopted extrapolated algorithm for solving the CFP. In this section, we report some preliminary numerical results. The tests were carried out in Matlab 7.8.0 on a PC running the Windows XP operating system.
We briefly state the extrapolated algorithm for solving the CFP as follows. Given an arbitrary initial point x^0, the iterative step is as follows, where λ_k ∈ (0, 2).
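Since the displayed formula of the EA is not reproduced here, the following Python sketch implements a standard extrapolated parallel projection step with a Pierra-type extrapolation weight σ_k; this weight, the toy constraint sets, and the stopping rule are assumptions for illustration and may differ from the paper's exact EA.

```python
import numpy as np

def extrapolated_algorithm(projections, x0, lam=1.0, tol=1e-10, max_iter=10000):
    """Extrapolated parallel projection sketch: step along the averaged
    projection direction, scaled by an extrapolation weight sigma."""
    m = len(projections)
    x = x0
    for _ in range(max_iter):
        Ps = [P(x) for P in projections]
        d = sum(Ps) / m - x                  # averaged projection direction
        dn2 = np.dot(d, d)
        if dn2 < tol ** 2:                   # x is (numerically) feasible
            return x
        # Pierra-type extrapolation weight (an assumption here)
        sigma = sum(np.dot(p - x, p - x) for p in Ps) / (m * dn2)
        x = x + lam * sigma * d
    return x

# toy CFP: the box [-1, 1]^2 written as two componentwise constraint sets
Q1 = lambda z: np.minimum(z, 1.0)    # {x : x <= 1 componentwise}
Q2 = lambda z: np.maximum(z, -1.0)   # {x : x >= -1 componentwise}
sol = extrapolated_algorithm([Q1, Q2], np.array([5.0, 7.0]))
```

Note that σ_k ≥ 1 always holds for this weight, so the extrapolated step is at least as long as the plain averaged projection step.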

5.1. Three examples.
We conduct numerical experiments on three constructed examples of the convex feasibility problem, with the dimension of the problem ranging from 2 to 120.
Example 5.1. Find x ∈ C_1 ∩ C_2 ∩ C_3. In this example, we consider three scenarios for the choice of the initial point x^0.

5.2. Numerical results.
The results of our numerical experiments are listed in Tables 1–3, where EA denotes the extrapolated algorithm, IPPA denotes the inertial parallel projection algorithm, "Iter." and "Sec." denote the number of iterations and the CPU time in seconds, respectively, "p(ε)" denotes the sum of the distances from the approximate solution to the given sets, and "x*" denotes the approximate solution. The results for Examples 5.1 and 5.2 are reported in Tables 1 and 2, respectively, with various values of λ_k. Table 3 shows the numbers of iterations needed to obtain the approximate solutions by the EA and the IPPA for Example 5.3 with λ_k = 1. For a fair comparison, we use the same initial points x^0 and x^1, where x^1 is generated by the EA.
In these tests, we take the inertial parameter as θ_k = θ̄_k/2 with θ̄_k := min{0.8, 1/(k²|x^k − x^{k−1}|²)}, which guarantees Σ_k θ_k |x^k − x^{k−1}|² < ∞. From Tables 1 and 2, we can see that the IPPA converges faster than the EA. In Examples 5.1 and 5.2, the IPPA obtains the approximate solution in a finite number of iterations, while the EA cannot reach an approximate solution within finitely many iterations. As shown in Table 3, the number of iterations needed to obtain the approximate solution in Example 5.3 by the EA is much larger than that needed by the IPPA.