An extension of hybrid method without extrapolation step to equilibrium problems

In this paper, we introduce a new hybrid algorithm for solving equilibrium problems. The algorithm combines the extragradient method with the hybrid (outer approximation) method. Only one optimization program is solved at each iteration, without the extra steps required by the extragradient method and the Armijo linesearch method. A specially constructed half-space in the hybrid method accounts for the absence of the extra optimization program in our algorithm. A strong convergence theorem is established, and several numerical experiments are presented to illustrate the convergence of the algorithm and to compare it with other methods.

1. Introduction. The equilibrium problem (EP) [2], also known as the Ky Fan inequality [9], is very general in the sense that it includes, as special cases, many mathematical models such as variational inequalities, fixed point problems, optimization problems, Nash equilibrium problems, and complementarity problems; see [2,8,25] and the references therein. Many methods have been proposed for solving EPs, for instance [2,7,13,14,15,16,23,25,29]. Most solution approximations to EPs are based on the resolvent of the equilibrium bifunction (see, for instance, [7]), in which a strongly monotone regularized equilibrium problem (REP) is solved at each iterative step. This is also called the proximal point method (PPM). The method was first introduced by Martinet [22] for variational inequalities and was then extended by Rockafellar [31] to the problem of finding a zero of a monotone operator. In 2000, Konnov [18] further extended PPM to Ky Fan inequalities for monotone or weakly monotone bifunctions.
A special case of EP is the variational inequality problem (VIP). The simplest method for VIPs is the gradient projection method, in which only one projection onto the feasible set is computed per iteration. However, to obtain convergence, the method requires the restrictive assumption that the operator is strongly (or inverse strongly) monotone. To overcome this, Korpelevich [19] introduced the extragradient method (double projection method), in which two metric projections onto the feasible set are performed at each iteration. The convergence of the extragradient method was proved under the weaker assumption that the operator is only monotone (or even pseudomonotone) and L-Lipschitz continuous. Some extragradient-like algorithms proposed for solving VIPs can be found in [3,4,15,26] and the references therein. However, the projection is easy to compute only if the constraint set has a simple structure, for instance a ball, hyperplane, or half-space. In recent years, the extragradient method has received a lot of attention from several authors and has been modified in various ways; see [5,6,12,21] and the references therein. For instance, the authors of [5] replaced the second projection onto the feasible set in the extragradient method by a projection onto a half-space and proposed the subgradient extragradient method for VIPs in Hilbert spaces.
Korpelevich's extragradient method has been naturally extended to EPs for monotone (or pseudomonotone) and Lipschitz-type continuous bifunctions and has been widely studied both theoretically and algorithmically [15,27,28,29,32,33]. In the extensions of the extragradient method to EPs, two strongly convex optimization programs must be solved over a closed convex constraint set at each iteration (see Algorithms 3 and 4 in Section 2); they generalize the two projections in Korpelevich's extragradient method. The advantage of the extragradient method is that these two optimization programs are often numerically easier than the nonlinear inequality (or REP) solved in PPM.
In this paper, motivated by the hybrid method without the extrapolation step [21] for variational inequalities, the extragradient method [29], and the hybrid method, we propose a new hybrid algorithm for solving EPs. By constructing a special cutting half-space in the hybrid method, we only need to solve one strongly convex optimization program over the feasible set at each iteration. The absence of the second optimization program (compared with the extragradient method) can be considered an improvement in each computational step over the results in [15,20,27,28,32,33].
The remainder of the paper is organized as follows: Section 2 introduces a new algorithm and some related works. In Section 3, we collect some definitions and preliminary results used in the paper. Section 4 deals with proving the convergence of the proposed algorithm. Some applications of the algorithm to Gâteaux differentiable EPs and multivalued variational inequalities are presented in Section 5. Finally, in Section 6 we provide some numerical examples to illustrate the convergence of the proposed algorithm and compare it with others.
2. Algorithm and related works. Let H be a real Hilbert space, C be a nonempty closed convex subset of H, and f : C × C → ℝ be a bifunction with f(x, x) = 0 for all x ∈ C. The equilibrium problem (EP) for the bifunction f on C is to find x* ∈ C such that

f(x*, y) ≥ 0 for all y ∈ C. (1)

The solution set of EP (1) is denoted by EP(f, C). In this paper, we introduce the following hybrid algorithm for solving EP (1).
Algorithm 1 (An extended hybrid algorithm without extrapolation step to EPs).
where x 0 ∈ H, y 0 ∈ C, λ is a suitable parameter and C n , Q n are two specially constructed half-spaces (see Algorithm 1 in Section 4 below).
In the special case f(x, y) = ⟨A(x), y − x⟩, where A : C → H is a nonlinear operator, EP becomes the following variational inequality problem (VIP): find x* ∈ C such that

⟨A(x*), y − x*⟩ ≥ 0 for all y ∈ C. (2)

Then the proposed algorithm (Algorithm 1) becomes the following hybrid algorithm without the extrapolation step, which was introduced in [21] for VIPs.
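For the VIP special case f(x, y) = ⟨A(x), y − x⟩, the strongly convex subprogram argmin_{y∈C} {λf(x, y) + (1/2)||y − x||²} reduces to the metric projection P_C(x − λA(x)), since the objective equals (1/2)||y − (x − λA(x))||² plus a constant. The following Python sketch (our illustration, not code from the paper; the linear operator and the ball constraint are our choices) verifies this reduction numerically against a brute-force minimization over sampled points of C:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, radius=1.0):
    """Metric projection onto the closed unit ball centered at the origin."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

# f(x, y) = <A(x), y - x> with a monotone linear operator A (our example choice)
A = lambda x: np.array([[2.0, 1.0], [1.0, 3.0]]) @ x
x, lam = np.array([0.9, -0.4]), 0.5

# closed form of argmin_{y in C} { lam*<A(x), y-x> + 0.5*||y-x||^2 }
closed_form = proj_ball(x - lam * A(x))

# brute-force check: evaluate the objective on many sampled points of C
samples = rng.normal(size=(200000, 2))
samples = samples[np.linalg.norm(samples, axis=1) <= 1.0]  # keep points in C
vals = lam * (samples - x) @ A(x) + 0.5 * np.sum((samples - x) ** 2, axis=1)
brute = samples[np.argmin(vals)]

print(np.linalg.norm(closed_form - brute))  # small (sampling resolution)
```

The closed-form minimizer here is x − λA(x) = (0.2, −0.25), which lies inside the ball, so the projection is the identity in this instance.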
Algorithm 2 (The hybrid algorithm without extrapolation step for VIPs).
In 2008, Quoc et al. [29] extended Korpelevich's extragradient method [19] to EPs in Euclidean spaces, in which two optimization programs are solved at each iteration. Recently, Nguyen et al. [28] also worked in this direction and proposed the general extragradient method, which consists of solving three optimization programs over the feasible set. In Euclidean spaces, the convergence of the sequences generated by the extragradient methods [28,29] was proved under the assumptions of pseudomonotonicity and Lipschitz-type continuity of the equilibrium bifunctions. The problem arising in infinite dimensional Hilbert spaces is how to design an algorithm that provides strong convergence. In 2012, Vuong et al. [33] combined the extragradient method of [29] with the hybrid (outer approximation) method to obtain the following strongly convergent hybrid algorithm.
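In the VIP special case, the two strongly convex subprograms of the extragradient method reduce to two metric projections onto C, as in Korpelevich's original scheme. The following Python sketch (our illustration under assumed data: a monotone rotation operator on the unit ball, whose unique solution is the origin) runs the two-projection iteration:

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + (radius / n) * d

def extragradient_vip(A, proj_C, x0, lam, iters):
    """Korpelevich's extragradient method for <A(x*), y - x*> >= 0 on C:
    two projections onto C per iteration, with step lam < 1/L."""
    x = x0
    for _ in range(iters):
        y = proj_C(x - lam * A(x))   # first projection (extrapolation step)
        x = proj_C(x - lam * A(y))   # second projection
    return x

# monotone, 1-Lipschitz rotation operator; the VIP solution on the ball is 0
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
proj_C = lambda x: proj_ball(x, np.zeros(2), 1.0)
x = extragradient_vip(A, proj_C, np.array([0.5, 0.5]), lam=0.3, iters=500)
print(np.linalg.norm(x))  # close to 0
```

Note that the gradient projection method with a single projection would fail here, since the rotation operator is monotone but not strongly monotone.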
with c1, c2 being two Lipschitz-type constants of the bifunction f (see Definition 3.1(iii) below). In 2013, another hybrid algorithm [27, Algorithm 1] was proposed in this direction, where x0 ∈ C, C0 = C, and λ satisfies condition (3). The authors of [27,33] proved that the sequences {x_n} generated by Algorithms 3 and 4 converge strongly to P_{EP(f,C)}(x0) under the hypotheses of pseudomonotonicity and Lipschitz-type continuity of f. Also in [27], in order to avoid the Lipschitz-type continuity condition on the bifunction f, the authors replaced the second optimization problem in the extragradient method by an Armijo linesearch technique and obtained the following hybrid algorithm.
{λf(x_n, y) + (1/2)||x_n − y||²}, m_n is the smallest integer such that the linesearch condition holds, η ∈ (0, 1), g_n ∈ ∂f_2(w_n, x_n), and σ_n = f(w_n, x_n)/||g_n||² if y_n ≠ x_n and σ_n = 0 otherwise. In Algorithm 5, we still have to solve an optimization program over C for y_n, find an optimization direction for w_n, and compute a projection onto C for z_n at each step. We emphasize that the projection P_{C_n∩Q_n}(x0) in Algorithm 3 and the projection P_{C_{n+1}}(x0) in Algorithms 4 and 5 still involve the constraint set C, while the sets C_n and Q_n in Algorithm 1 are two half-spaces, so that x_{n+1} can be expressed by an explicit formula (see, for instance, [7]).
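Since C_n and Q_n in Algorithm 1 are half-spaces, the projection x_{n+1} = P_{C_n∩Q_n}(x0) admits a closed form, as noted above with reference to [7]. The following Python sketch (our illustration of the standard KKT case analysis, assuming the two normals are not parallel in the last case; variable names are ours) computes the projection onto the intersection of two half-spaces {z : ⟨a_i, z⟩ ≤ b_i}:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto {z : <a, z> <= b} with a != 0."""
    return x - max(0.0, (a @ x - b) / (a @ a)) * a

def proj_two_halfspaces(x0, a1, b1, a2, b2, tol=1e-12):
    """Explicit projection of x0 onto {<a1,z> <= b1} ∩ {<a2,z> <= b2},
    by checking which constraints are active at the solution."""
    if a1 @ x0 <= b1 + tol and a2 @ x0 <= b2 + tol:
        return x0                      # already feasible
    p = proj_halfspace(x0, a1, b1)
    if a2 @ p <= b2 + tol:
        return p                       # only the first constraint active
    p = proj_halfspace(x0, a2, b2)
    if a1 @ p <= b1 + tol:
        return p                       # only the second constraint active
    # both constraints active: solve the 2x2 Gram system for the multipliers
    G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
    lam = np.linalg.solve(G, np.array([a1 @ x0 - b1, a2 @ x0 - b2]))
    return x0 - lam[0] * a1 - lam[1] * a2

# example: project (2, 2) onto {z1 <= 0} ∩ {z2 <= 0}
x_next = proj_two_halfspaces(np.array([2.0, 2.0]),
                             np.array([1.0, 0.0]), 0.0,
                             np.array([0.0, 1.0]), 0.0)
print(x_next)  # [0. 0.]
```

Each returned candidate satisfies the KKT conditions of the projection problem (nonnegative multipliers on the active constraints), so the case analysis is exact.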

3. Preliminaries.
In this section, we recall some definitions and results for further use. Let C be a nonempty closed convex subset of a real Hilbert space H. We begin with some concepts of the monotonicity of a bifunction (see [2,25] for more details).
(iii) Lipschitz-type continuous on C if there exist two positive constants c1, c2 such that

f(x, y) + f(y, z) ≥ f(x, z) − c1||x − y||² − c2||y − z||² for all x, y, z ∈ C.

From the definitions above, it is clear that a monotone bifunction is pseudomonotone, i.e., (i) ⟹ (ii). For solving EP (1), we assume that the bifunction f satisfies the following conditions:
(A1) f is pseudomonotone on C and f(x, x) = 0 for all x ∈ C;
(A2) f is Lipschitz-type continuous on C with two constants c1, c2;
(A3) lim sup_{n→∞} f(x_n, y) ≤ f(x̄, y) for each sequence {x_n} ⊂ C converging weakly to x̄ and every fixed y ∈ C;
(A4) f(x, ·) is convex and subdifferentiable on C for every fixed x ∈ C.
It is easy to show that under assumptions (A1)-(A4) the solution set EP(f, C) of EP (1) is closed and convex (see, for instance, [29]). In this paper, we assume that the solution set EP(f, C) is nonempty.
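For the VIP bifunction f(x, y) = ⟨A(x), y − x⟩ of a single-valued L-Lipschitz operator A, the Lipschitz-type inequality in (iii) holds with c1 = c2 = L/2, a fact also used in Section 5.2. The following Python sketch (our numerical illustration with an assumed linear operator) checks the inequality at random triples of points:

```python
import numpy as np

rng = np.random.default_rng(1)
M = np.array([[2.0, 1.0], [1.0, 3.0]])       # linear operator A(x) = Mx
L = np.linalg.norm(M, 2)                     # its Lipschitz constant (spectral norm)
f = lambda x, y: (M @ x) @ (y - x)           # bifunction f(x, y) = <A(x), y - x>

c1 = c2 = L / 2.0
worst = 0.0
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 2))
    lhs = f(x, y) + f(y, z) - f(x, z)
    rhs = -c1 * np.sum((x - y) ** 2) - c2 * np.sum((y - z) ** 2)
    worst = max(worst, rhs - lhs)            # a positive value would violate (iii)
print(worst <= 1e-9)  # True
```

The check mirrors the short proof: f(x, y) + f(y, z) − f(x, z) = ⟨A(x) − A(y), y − z⟩ ≥ −L||x − y|| · ||y − z|| ≥ −(L/2)||x − y||² − (L/2)||y − z||².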
The metric projection P_C : H → C is defined by P_C x = argmin{||y − x|| : y ∈ C}. Since C is nonempty, closed, and convex, P_C x exists and is unique. It is also known that P_C has the following characteristic properties; see [10] for more details.
Lemma 3.2. Let P_C : H → C be the metric projection from H onto C. Then
(i) P_C is firmly nonexpansive, i.e., ⟨P_C x − P_C y, x − y⟩ ≥ ||P_C x − P_C y||² for all x, y ∈ H;
(ii) for all x ∈ C and y ∈ H, ||x − P_C y||² + ||P_C y − y||² ≤ ||x − y||²;
(iii) z = P_C x if and only if ⟨x − z, z − y⟩ ≥ 0 for all y ∈ C.
Let g : C → ℝ be a function. The subdifferential of g at x is defined by ∂g(x) = {w ∈ H : g(y) − g(x) ≥ ⟨w, y − x⟩ for all y ∈ C}. We recall that the normal cone of C at x ∈ C is defined by N_C(x) = {w ∈ H : ⟨w, y − x⟩ ≤ 0 for all y ∈ C}.

Definition 3.3 (Weak lower semicontinuity). A function g : H → ℝ is weakly lower semicontinuous at x if x_n ⇀ x implies g(x) ≤ lim inf_{n→∞} g(x_n).
It is well known that the functional ϕ(x) := ||x||² is convex and weakly lower semicontinuous. Any Hilbert space has the Kadec-Klee property (see, for instance, [11]), i.e., if {x_n} is a sequence in H such that x_n ⇀ x and ||x_n|| → ||x||, then x_n → x as n → ∞.
Finally, we have the following technical lemma.
4. Convergence analysis. In this section, we present the detailed algorithm and prove its convergence.
The parameters λ and k satisfy the following conditions: Compute y 1 by Step 1. Solve a strongly convex program

If y n+1 = y n = x n then stop.
Set n := n + 1 and go back to Step 1.
We have the following result, which gives a stopping criterion for Algorithm 1.
Lemma 4.1. If y_{n+1} = y_n = x_n, then x_n ∈ EP(f, C).
Proof. Assume that y_{n+1} = y_n = x_n. From the definition of y_{n+1} and [23, Proposition 2.1], one has x_n ∈ EP(f, C). The proof of Lemma 4.1 is complete.
We need the lemma below, which is an infinite dimensional version of Theorem 27.4 in [31] and is proved similarly, using the Moreau-Rockafellar theorem to compute the subdifferential of the sum of a convex function g and the indicator function δ_C of C in a real Hilbert space.
Lemma 4.2. Let C be a nonempty convex subset of a real Hilbert space H and g : C → ℝ be a convex, subdifferentiable, and lower semicontinuous function on C. Then x* is a solution to the convex optimization problem min{g(x) : x ∈ C} if and only if 0 ∈ ∂g(x*) + N_C(x*).
Based on Lemma 4.2 and Lemma 6 in [21], we obtain the following central lemma, which is used to prove the convergence of Algorithm 1.

Lemma 4.3.
Assume that x* ∈ EP(f, C), and let {x_n}, {y_n} be the sequences generated by Algorithm 1. Then there holds the relation stated above, where ε_n is defined by Step 2 of Algorithm 1.
From the last two inequalities, we obtain (6); similarly, by replacing n + 1 by n, we also have (7). Substituting y = y_{n+1} into (7), a straightforward computation yields (8). Substituting y = x* into (6), we also obtain (9). Since x* ∈ EP(f, C) and y_n ∈ C, f(x*, y_n) ≥ 0. Thus, from the pseudomonotonicity of f, one has f(y_n, x*) ≤ 0. This together with (9) implies (10). By the Lipschitz-type continuity of f, we obtain (11). Relations (10) and (11) lead to (12), which together with relation (8) implies (13). By the triangle, Cauchy-Schwarz, and Cauchy inequalities, we have the following fact:

This together with (13) implies (14). Combining (12) and (14), we obtain the desired estimate; thus, from the definition of ε_n, we obtain the conclusion.
Proof. (i) From the definitions of C_n and Q_n, we see that they are half-spaces; thus C_n and Q_n are closed and convex for all n ≥ 0. Lemma 4.3 and the definition of C_n ensure that EP(f, C) ⊂ C_n for all n ≥ 0. From the definition of Q_n and Lemma 3.2(iii), x_n = P_{Q_n}(x_0). Thus, from Lemma 3.2(ii), we have (15). Substituting z = x† := P_{EP(f,C)}(x_0) ∈ Q_n into (15), one has (16). Thus the sequence {||x_n − x_0||}, and therefore {x_n}, is bounded. Substituting z = x_{n+1} ∈ Q_n into (15), one also has (17). This implies that {||x_n − x_0||} is non-decreasing; hence the limit of {||x_n − x_0||} exists. By (17), passing to the limit in the resulting inequality as K → ∞, we obtain (18); thus lim_{n→∞} ||x_{n+1} − x_n|| = 0. From the definition of C_n and x_{n+1} ∈ C_n, we obtain (19). Setting γ_n, α_n, β_n as above, from the definition of ε_n we have ε_n = k||x_n − x_{n−1}||² + bγ_n − aγ_{n+1}. Thus, from (20), we obtain (21). From the hypotheses on λ and k and from (18), we see that a > b ≥ 0 and ∑_{n=1}^∞ β_n < +∞. Lemma 3.4 and (21) imply that α_n → 0, i.e., (22). This together with relation (19) and the inequality ||y_{n+1} − y_n|| ≤ ||y_{n+1} − x_{n+1}|| + ||x_{n+1} − x_n|| + ||x_n − y_n|| implies that lim_{n→∞} ||y_{n+1} − y_n|| = 0.
Thanks to Lemma 4.1, if Algorithm 1 terminates at some iterate n, then a solution of EP has been found. Otherwise, if Algorithm 1 does not terminate, we have the following main result.
Theorem 4.5. The sequences {x_n} and {y_n} generated by Algorithm 1 converge strongly to P_{EP(f,C)}(x_0).
Proof. From Lemma 4.4, the sequence {x_n} is bounded. Let p be any weak cluster point of {x_n}. Without loss of generality, we can write x_n ⇀ p as n → ∞. Then y_n ⇀ p because ||x_n − y_n|| → 0. Since C is closed and convex in the Hilbert space H, C is weakly closed; thus p ∈ C because {y_n} ⊂ C. Now we show that p ∈ EP(f, C). From (6), we get λf(y_n, y) ≥ λf(y_n, y_{n+1}) + ⟨x_n − y_{n+1}, y − y_{n+1}⟩ for all y ∈ C.
Thus, p ∈ EP(f, C). Finally, from inequality (16), we get ||x_n − x_0|| ≤ ||x† − x_0||. By the weak lower semicontinuity of the norm ||·|| and x_n ⇀ p, we have ||p − x_0|| ≤ lim inf_{n→∞} ||x_n − x_0|| ≤ ||x† − x_0||. By the definition of x†, p = x† and lim_{n→∞} ||x_n − x_0|| = ||x† − x_0||. By x_n − x_0 ⇀ x† − x_0 and the Kadec-Klee property of the Hilbert space H, we obtain x_n → x† = P_{EP(f,C)}(x_0). Since ||x_n − y_n|| → 0, we also see that {y_n} converges strongly to P_{EP(f,C)}(x_0). Theorem 4.5 is proved.

5. Applications. In this section, we introduce several applications of Algorithm 1 to Gâteaux differentiable EPs and multivalued variational inequalities.

5.1. Gâteaux differentiable equilibrium problems. We consider EPs for Gâteaux differentiable bifunctions. We denote by ∇₂f(x, y) the Gâteaux derivative of the function f(x, ·) at y. For solving EP (1), we assume that the bifunction f satisfies the following conditions:
(B1) f is monotone on C and f(x, x) = 0 for all x ∈ C;
(B2) f(x, ·) is convex and Gâteaux differentiable on C;
(B3) there exists a constant L > 0 such that ||∇₂f(x, x) − ∇₂f(y, y)|| ≤ L||x − y|| for all x, y ∈ C.
Remark 1. If EP (1) reduces to VIP (2) for an operator A : C → H, then condition (B3) is equivalent to the Lipschitz continuity of A with constant L > 0.
We need the following result. Thanks to Lemma 5.1, instead of EP (1) we can solve VIP (2) for the operator A(x) = ∇₂f(x, x) on C. We emphasize that (B2) and (B3) are slightly strong conditions; however, in this case we can use existing methods for VIPs to solve EPs. For instance, using the subgradient extragradient method [6, Algorithm 3.6], we obtain the following hybrid algorithm (25) for solving EP (1), where T_n = {z ∈ H : ⟨x_n − λ∇₂f(x_n, x_n) − y_n, z − y_n⟩ ≤ 0}. If conditions (B1)-(B4) hold for all x, y ∈ H, then {x_n} generated by (25) converges strongly to P_{EP(f,C)}(x_0).
In this subsection, using Algorithm 2, which is a special case of Algorithm 1 for the operator A(x) = ∇₂f(x, x), we arrive at the following result.
Theorem 5.2. Let C be a nonempty closed convex subset of a real Hilbert space H. Assume that the bifunction f satisfies conditions (B1)-(B4) and that EP(f, C) is nonempty. Let {x_n} be the sequence generated in the manner below, where ε_n, λ, k are defined as in Algorithm 1 with c1 = c2 = L/2. Then the sequence {x_n} converges strongly to P_{EP(f,C)}(x_0).

5.2. Multivalued variational inequalities.
In this subsection, we consider the following multivalued variational inequality problem (MVIP), where A : C → 2^H is a multivalued compact operator. For each pair x, y ∈ C, we put

f(x, y) = sup{⟨u, y − x⟩ : u ∈ A(x)}. (27)

It is easy to show that x* is a solution of MVIP (26) if and only if x* is a solution of EP for the bifunction f defined by (27). We recall the following definitions.
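The bifunction built from a multivalued operator as in (27) can be sketched as follows in Python (our illustration; here the compact set A(x) is approximated by a finite list of its elements, and the toy operator is our choice, not from the paper):

```python
import numpy as np

def ep_bifunction(x, y, A):
    """f(x, y) = sup{ <u, y - x> : u in A(x) }, with A(x) represented
    by a finite list of vectors approximating the compact set A(x)."""
    return max(float(np.dot(u, y - x)) for u in A(x))

# toy multivalued operator A(x) = {x, 2x} (for illustration only)
A = lambda x: [x, 2.0 * x]
x, y = np.array([1.0, 0.0]), np.array([2.0, 1.0])
print(ep_bifunction(x, y, A))  # max(<x, y-x>, <2x, y-x>) = max(1, 2) = 2.0
```

For a genuinely infinite set A(x), evaluating this supremum requires solving an inner maximization, which is precisely the difficulty with approximating f noted below.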
Definition 5.3. A multivalued operator A : C → 2^H is said to be: Remark 2. If we denote by h(C1, C2) the Hausdorff distance between the two sets C1 and C2, then Definition 5.3(iii) means that A is L-Lipschitz continuous with respect to h. One can easily check that if A is pseudomonotone and L-Lipschitz continuous, then f is also pseudomonotone and Lipschitz-type continuous with constants c1 = c2 = L/2. Note that when A is single-valued, Algorithm 1 becomes the hybrid algorithm without the extrapolation step for variational inequalities [21]. When A is multivalued, Algorithm 1 can be applied to the bifunction f defined by (27). A disadvantage of performing Algorithm 1 in this case is that it is not easy to choose an approximation of the bifunction f(x, y). In fact, if A is a monotone and L-Lipschitz continuous multivalued operator, then by repeating the proof of Theorem 1 in [21] with A(y_n) replaced by u_n, and using Definition 5.3(ii) and (iii), we can obtain the strong convergence of the following algorithm, where ε_n, λ, k are defined as in Algorithm 1 with c1 = c2 = L/2.

6. Numerical examples.
In this section, we consider some numerical examples in the Euclidean space ℝ^m. The purpose of these experiments is to illustrate the convergence of Algorithm 1 and to compare its efficiency with Algorithms 3, 4, and 5. The advantage of the proposed algorithm lies in the computation performed at each iteration. Of course, there are many mathematical models for EPs in infinite dimensional Hilbert spaces (see, for instance, [2]), where norm convergence of algorithms is more desirable than weak convergence. The implementability of these algorithms has been discussed in Sections 1 and 2. First, we briefly describe the construction of the sets C_n, Q_n and the computation of the projections P_{C_{n+1}}(x_0) and P_{C_n∩Q_n}(x_0) for Algorithms 3, 4, and 5 in the first four examples, where C is given as a polyhedron. Since C is a polyhedral convex set, it can be written as C = {x ∈ ℝ^m : Ax ≤ b}, where A ∈ ℝ^{l×m} is a matrix and b ∈ ℝ^l is a vector. For Algorithms 4 and 5, we claim that C_n is also a polyhedron, C_n = {x ∈ ℝ^m : A_n x ≤ b_n} for all n ≥ 0, where A_n ∈ ℝ^{(l+n)×m} and b_n ∈ ℝ^{l+n}. Indeed, assume that C_n = {x ∈ ℝ^m : A_n x ≤ b_n} for some n ≥ 0. By the definition of C_{n+1} in Algorithms 4 and 5, we see that C_{n+1} = C_n ∩ H_n, where H_n is a half-space given by one linear inequality; stacking this inequality under A_n x ≤ b_n yields A_{n+1} and b_{n+1} with C_{n+1} = {x ∈ ℝ^m : A_{n+1} x ≤ b_{n+1}}. By induction, we obtain the desired conclusion. The sets C_n, Q_n in Algorithm 3 are also polyhedral convex sets, similarly constructed by adding one linear inequality constraint to those of C at each step. The projections P_{C_{n+1}}(x_0) and P_{C_n∩Q_n}(x_0) can be equivalently rewritten as convex quadratic optimization programs over polyhedral convex sets. In contrast, the sets C_n, Q_n in Algorithm 1 are two half-spaces, so we use the explicit formula in [7] to compute x_{n+1} = P_{C_n∩Q_n}(x_0). All convex quadratic optimization programs in the algorithms can be easily solved with the MATLAB Optimization Toolbox.
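The inductive construction of C_{n+1} from C_n amounts to stacking one more inequality row. A minimal Python sketch of this bookkeeping (ours; the starting polyhedron and the added cut are arbitrary examples):

```python
import numpy as np

def add_cut(A_n, b_n, c, d):
    """Form C_{n+1} = C_n ∩ {x : c^T x <= d} by stacking one more
    linear inequality under the system A_n x <= b_n."""
    A_next = np.vstack([A_n, c.reshape(1, -1)])
    b_next = np.append(b_n, d)
    return A_next, b_next

# start from C = {x in R^2 : x1 <= 1, x2 <= 1} and add the cut x1 + x2 <= 1
A0 = np.array([[1.0, 0.0], [0.0, 1.0]])
b0 = np.array([1.0, 1.0])
A1, b1 = add_cut(A0, b0, np.array([1.0, 1.0]), 1.0)
print(A1.shape)  # (3, 2): one inequality added per iteration
```

After n iterations the system has l + n rows, which is why the quadratic programs for P_{C_{n+1}}(x_0) grow with n, while Algorithm 1 keeps only two half-spaces.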
In the final example, where C is given as a generalized convex feasible set [34], the optimization programs and the projections P_{C_n∩Q_n}(x_0) and P_{C_{n+1}}(x_0) in Algorithms 3, 4, and 5 are more complex. The solution of these subproblems is described in Example 5.
Next, we present five numerical examples, computed on a desktop PC with an Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz and 2.00 GB RAM. For a given tolerance TOL, we compare the number of iterations (Iter.) and the execution time (Time, in seconds) of the above mentioned algorithms for different starting points and stepsizes.
Example 5. The bifunction f in ℝ^5 is defined by (28), and P, Q, q are chosen as in Example 2. Let K, C_1, …, C_l be nonempty closed convex subsets of ℝ^5 such that K ∩ (∩_{j=1}^l C_j) = ∅ and at least one of K, C_j, j = 1, …, l (l = 100), is bounded. For each point x ∈ ℝ^5, we set Φ(x) = (1/2) ∑_{j=1}^l w_j d²(x, C_j), where {w_j}_{j=1}^l ⊂ (0, 1), ∑_{j=1}^l w_j = 1, and d(x, C_j) = inf{||x − y|| : y ∈ C_j}. We consider the feasible set C, called the generalized convex feasible set [34, Definition 4.1], given by (30). Note that C is a nonempty closed convex subset of K; see [34, Proposition 4.2 and Remark 4.3]. In this experiment, we chose w_j = 1/l and K = {x ∈ ℝ^5 : ||x − a|| ≤ 4.5}, C_1 = {x ∈ ℝ^5 : ||x|| ≤ 1}, C_j = {x ∈ ℝ^5 : c_j^T x ≤ b_j}, j = 2, …, l, where a = (10, 0, 0, 0, 0)^T and the vectors c_j and scalars b_j are generated randomly with entries in [−2, 2]. We need to solve the following program at each step (31), where H = 2λQ + I and h is a vector in ℝ^5. The first problem here is how to solve problem (31) when the feasible set C is given in the implicit form (30). Setting T as below, we have C = Fix(T); see [34, Proposition 4.2b] for β = 1. Thus, problem (31) becomes an optimization problem over the fixed point set of the nonexpansive mapping T. We use the hybrid steepest descent method (HSDM) in [17,34] to obtain the solution of problem (31) with the tolerance TOL = 10^{−4}. The second problem is how to find the next iterate x_{n+1} in Algorithm 3. We rewrite x_{n+1} in this algorithm as x_{n+1} = P_{H_{1n}∩H_{2n}∩C}(x_0), where H_{1n} = {z ∈ ℝ^5 : ||z − z_n||² ≤ ||z − x_n||²} and H_{2n} = {z ∈ ℝ^5 : ⟨x_0 − x_n, z − x_n⟩ ≤ 0}. We combine Haugazeau's method [1, Corollary 29.8] and the HSDM in [17,34] to obtain x_{n+1} with the tolerance TOL. In this example, it does not seem easy to find the next iterate x_{n+1} in Algorithms 4 and 5. Note that the projection x_{n+1} = P_{C_n∩Q_n}(x_0) in Algorithm 1 is explicit.
We performed numerical tests for Algorithms 1 and 3 with the control parameters λ, k and three starting points x_0 and y_0 as in the first experiment of Example 2. The results are reported in Table 7. From this table, we see that the total time for Algorithm 3 is significantly larger than that for Algorithm 1. Of course, this comes from additionally solving an optimization program and finding the next iterate x_{n+1} at each iteration. Table 7. Results for given starting points in Example 5. The study of the examples here is preliminary, and it is clear that the behavior of EP methods depends on the structures of the feasible set C and of the bifunction f. The advantage of the proposed algorithm is the computation per iteration. The convergence and the efficiency of the algorithm have been illustrated by the numerical results in Tables 1-4, 6, and 7 on five test problems.
7. Concluding remarks. The paper proposes a novel algorithm for solving EPs for a class of pseudomonotone and Lipschitz-type continuous bifunctions. By constructing special cutting half-spaces, we have designed an algorithm with a simpler and more elegant structure, without any extra step dealing with the feasible set. The strong convergence of the algorithm is proved. We emphasize that an optimization problem must still be solved exactly at each step; this, in general, is a disadvantage of the algorithm (as well as of the extragradient methods and the Armijo linesearch methods) when equilibrium bifunctions and feasible sets have complex structures. The paper may also help in the design and analysis of more practical algorithms in future work.