
Inertial Tseng's extragradient method for solving variational inequality problems of pseudo-monotone and non-Lipschitz operators

  • * Corresponding author: G. Cai

The first author is supported by the NSF of China (Grant No. 11771063), the Natural Science Foundation of Chongqing (Grant No. cstc2020jcyj-msxmX0455), the Science and Technology Project of Chongqing Education Committee (Grant No. KJZD-K201900504) and the Program of Chongqing Innovation Research Group Project in University (Grant No. CXQT19018).

  • In this paper, we propose a new inertial Tseng's extragradient iterative algorithm for solving variational inequality problems with pseudo-monotone and non-Lipschitz operators in real Hilbert spaces. We prove that the sequence generated by the proposed algorithm converges strongly to a solution of the variational inequality problem under suitable assumptions on the parameters. Finally, we present some numerical experiments supporting our main results. The results obtained in this paper extend and improve some related works in the literature.

    Mathematics Subject Classification: Primary: 47H09, 47H10; Secondary: 47J20, 65K15.
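    The scheme named in the abstract combines an inertial extrapolation term with Tseng's forward-backward-forward update; because the operator need not be Lipschitz, the step size is found by an Armijo-type line search rather than from a Lipschitz constant. The following is a minimal numerical sketch only, not the paper's algorithm: the box feasible set, the toy operator, and the exact roles of $ \theta, \gamma, l, \mu $ (mirroring Table 1) are assumptions, and the $ \beta_n $ term that drives strong convergence is omitted for brevity.

    ```python
    import numpy as np

    def proj_box(x, lo=-1.0, hi=1.0):
        """Euclidean projection onto the box C = [lo, hi]^n (illustrative feasible set)."""
        return np.clip(x, lo, hi)

    def inertial_tseng(F, proj, x0, x1, theta=0.1, gamma=0.99, l=0.001, mu=0.99,
                       max_iter=5000, tol=1e-8):
        """Hypothetical inertial Tseng extragradient iteration with line search.
        Parameter names mirror Table 1, but their roles here are assumptions."""
        x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
        for _ in range(max_iter):
            w = x + theta * (x - x_prev)                  # inertial extrapolation
            # Line search: start from lam = gamma and shrink by the factor l
            # until Tseng's acceptance test lam*||F(w)-F(y)|| <= mu*||w-y|| holds,
            # so no Lipschitz constant of F is needed.
            lam = gamma
            for _ in range(60):
                y = proj(w - lam * F(w))
                if lam * np.linalg.norm(F(w) - F(y)) <= mu * np.linalg.norm(w - y):
                    break
                lam *= l
            z = y - lam * (F(y) - F(w))                   # forward-backward-forward step
            x_prev, x = x, z
            if np.linalg.norm(x - proj(x - F(x))) < tol:  # natural-residual stopping rule
                break
        return x
    ```

    For the strongly monotone toy operator $ F(x) = x $ on the box, the iterates contract toward the unique solution $ 0 $; swapping in a merely pseudo-monotone, non-Lipschitz $ F $ is the setting the paper actually analyzes.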

  • Figure 1.  Example 1: $ k = 20 $, $ N = 10 $

    Figure 2.  Example 1: $ k = 20 $, $ N = 20 $

    Figure 3.  Example 1: $ k = 20 $, $ N = 30 $

    Figure 4.  Example 1: $ k = 20 $, $ N = 40 $

    Figure 5.  Example 1: $ k = 30 $, $ N = 10 $

    Figure 6.  Example 1: $ k = 30 $, $ N = 20 $

    Figure 7.  Example 1: $ k = 30 $, $ N = 30 $

    Figure 8.  Example 1: $ k = 30 $, $ N = 40 $

    Figure 9.  Example 1: Different $ \gamma $ with $ (N, k) = (20, 10) $

    Figure 10.  Example 1: Different $ \gamma $ with $ (N, k) = (20, 20) $

    Figure 11.  Example 1: Different $ \gamma $ with $ (N, k) = (20, 30) $

    Figure 12.  Example 1: Different $ \gamma $ with $ (N, k) = (20, 40) $

    Figure 13.  Example 1: Different $ \gamma $ with $ (N, k) = (30, 10) $

    Figure 14.  Example 1: Different $ \gamma $ with $ (N, k) = (30, 20) $

    Figure 15.  Example 1: Different $ \gamma $ with $ (N, k) = (30, 30) $

    Figure 16.  Example 1: Different $ \gamma $ with $ (N, k) = (30, 40) $

    Figure 17.  Example 1: Different $ \mu $ with $ (N, k) = (20, 10) $

    Figure 18.  Example 1: Different $ \mu $ with $ (N, k) = (20, 20) $

    Figure 19.  Example 1: Different $ \mu $ with $ (N, k) = (20, 30) $

    Figure 20.  Example 1: Different $ \mu $ with $ (N, k) = (20, 40) $

    Figure 21.  Example 1: Different $ \mu $ with $ (N, k) = (30, 10) $

    Figure 22.  Example 1: Different $ \mu $ with $ (N, k) = (30, 20) $

    Figure 23.  Example 1: Different $ \mu $ with $ (N, k) = (30, 30) $

    Figure 24.  Example 1: Different $ \mu $ with $ (N, k) = (30, 40) $

    Figure 25.  Example 2: Case I

    Figure 26.  Example 2: Case II

    Figure 27.  Example 2: Case III

    Figure 28.  Example 2: Case IV

    Figure 29.  Example 2: Case V

    Figure 30.  Example 2: Case VI

    Figure 31.  Example 2: Case I with different $ \gamma $

    Figure 32.  Example 2: Case II with different $ \gamma $

    Figure 33.  Example 2: Case III with different $ \gamma $

    Figure 34.  Example 2: Case IV with different $ \gamma $

    Figure 35.  Example 2: Case V with different $ \gamma $

    Figure 36.  Example 2: Case VI with different $ \gamma $

    Figure 37.  Example 2: Case I with different $ \mu $

    Figure 38.  Example 2: Case II with different $ \mu $

    Figure 39.  Example 2: Case III with different $ \mu $

    Figure 40.  Example 2: Case IV with different $ \mu $

    Figure 41.  Example 2: Case V with different $ \mu $

    Figure 42.  Example 2: Case VI with different $ \mu $

    Figure 43.  The value of error versus the iteration numbers for Example 3

    Table 1.  Methods Parameters Choice for Comparison

    Proposed Alg.: $ \epsilon_n = \frac{1}{n^2} $, $ \theta = 0.1 $, $ l = 0.001 $, $ \beta_n = \frac{1}{n} $, $ \gamma = 0.99 $, $ \mu = 0.99 $
    Thong Alg. (1): $ \epsilon_n = \frac{1}{(n + 1)^2} $, $ \theta = 0.1 $, $ \beta_n = \frac{1}{n + 1} $, $ \lambda = \frac{1}{1.01L} $
    Thong Alg. (2): $ l = 0.001 $, $ \gamma = 0.99 $, $ \mu = 0.99 $
    Thong Alg. (3): $ \alpha_n = \frac{1}{n + 1} $, $ l = 0.001 $, $ \gamma = 0.99 $, $ \mu = 0.99 $
    Gibali Alg.: $ \alpha_n = \frac{1}{n + 1} $, $ l = 0.001 $, $ \gamma = 0.99 $, $ \mu = 0.99 $

    Table 2.  Example 1: Comparison among methods with different values of $ N $ and $ k $

    $ N=10 $ $ N=20 $ $ N=30 $ $ N=40 $
    $ k=20 $ Iter. Time Iter. Time Iter. Time Iter. Time
    Proposed Alg. 3 1.3843 3 1.7672 3 1.7564 4 2.2017
    Thong Alg. (1) 76 1.2902 139 2.7111 111 2.1715 232 37.7743
    Thong Alg. (2) 2136 36.6812 1561 30.7776 1370 31.8672 1160 4.0453
    Thong Alg. (3) 86 1.1655 152 2.3615 148 2.4878 178 29.0789
    Gibali Alg. 150 12.0085 235 20.3243 319 41.0421 315 4.2520
    $ N=10 $ $ N=20 $ $ N=30 $ $ N=40 $
    $ k=30 $ Iter. Time Iter. Time Iter. Time Iter. Time
    Proposed Alg. 3 1.3819 3 1.7834 3 1.6555 3 1.7517
    Thong Alg. (1) 72 1.1548 142 2.6436 136 2.888 207 4.4416
    Thong Alg. (2) 1771 30.2921 1325 28.4023 1132 28.6053 920 26.5714
    Thong Alg. (3) 101 1.5058 90 1.4923 156 2.9515 162 3.6149
    Gibali Alg. 203 17.1568 255 30.849 282 31.6244 303 35.2953

    Table 3.  Example 1 comparison: Proposed Alg. with different values of $ \gamma $

    $ (N, k) $ $ \gamma = 0.1 $ $ \gamma = 0.5 $ $ \gamma = 0.7 $ $ \gamma = 0.99 $
    $ (20, 10) $ No. of Iterations 3 3 3 3
    CPU (Time) 1.6337 1.4830 1.4773 1.3843
    $ (20, 20) $ No. of Iterations 3 3 4 3
    CPU (Time) 1.4606 1.4876 2.3980 1.7672
    $ (20, 30) $ No. of Iterations 4 5 4 3
    CPU (Time) 1.8664 0.85257 1.7597 1.7564
    $ (20, 40) $ No. of Iterations 4 3 4 4
    CPU (Time) 1.6573 1.5935 1.8008 2.2017
    $ (30, 10) $ No. of Iterations 3 3 3 3
    CPU (Time) 1.3060 1.3376 1.4359 1.3819
    $ (30, 20) $ No. of Iterations 3 4 3 3
    CPU (Time) 1.4630 1.7306 1.5115 1.7834
    $ (30, 30) $ No. of Iterations 4 3 4 3
    CPU (Time) 1.7102 1.6399 1.7931 1.6555
    $ (30, 40) $ No. of Iterations 5 3 4 3
    CPU (Time) 2.7099 1.6589 2.3287 1.7517

    Table 4.  Example 1 comparison: Proposed Alg. with different values of $ \mu $

    $ (N, k) $ $ \mu = 0.1 $ $ \mu = 0.5 $ $ \mu = 0.7 $ $ \mu = 0.99 $
    $ (20, 10) $ No. of Iterations 3 3 4 3
    CPU (Time) 1.4409 1.4632 2.6888 1.3843
    $ (20, 20) $ No. of Iterations 3 3 3 3
    CPU (Time) 1.5248 1.4840 1.5217 1.7672
    $ (20, 30) $ No. of Iterations 4 3 5 3
    CPU (Time) 1.9571 1.5852 1.9322 1.7564
    $ (20, 40) $ No. of Iterations 6 4 5 4
    CPU (Time) 3.0365 1.8605 2.0718 2.2017
    $ (30, 10) $ No. of Iterations 3 3 3 3
    CPU (Time) 1.3524 1.3416 1.3648 1.3819
    $ (30, 20) $ No. of Iterations 3 3 4 3
    CPU (Time) 1.5265 1.5336 1.6929 1.7834
    $ (30, 30) $ No. of Iterations 5 5 3 3
    CPU (Time) 2.9525 2.0816 1.6424 1.6555
    $ (30, 40) $ No. of Iterations 3 4 8 3
    CPU (Time) 1.6958 2.0833 4.5199 1.7517

    Table 5.  Methods Parameters Choice for Comparison

    Proposed Alg.: $ \epsilon_n = \frac{1}{(n + 1)^2} $, $ \theta = 0.5 $, $ l = 0.01 $, $ \beta_n = \frac{1}{n + 1} $, $ \gamma = 0.99 $, $ \mu = 0.99 $
    Gibali Alg.: $ \alpha_n = \frac{1}{1 + n} $, $ l = 0.01 $, $ \gamma = 0.99 $, $ \mu = 0.99 $

    Table 6.  Example 2: Proposed Alg. vs. Gibali Alg. (unaccelerated algorithm)

    No. of Iterations CPU Time
    Prop. Alg. Gibali Alg. Prop. Alg. Gibali Alg.
    Case I 17 1712 0.001243 0.1244
    Case II 17 1708 0.001518 0.1248
    Case III 17 1713 0.001261 0.1276
    Case IV 17 1729 0.001202 0.1297
    Case V 17 1715 0.001272 0.1258
    Case VI 18 1835 0.001339 0.1564

    Table 7.  Example 2 comparison: Proposed Alg. with different values of $ \mu $

    $ \mu = 0.1 $ $ \mu = 0.5 $ $ \mu = 0.7 $ $ \mu = 0.99 $
    Case I No. of Iterations 17 17 17 17
    CPU (Time) 0.0011992 0.0012179 0.0013264 0.0012430
    Case II No. of Iterations 17 17 17 17
    CPU (Time) 0.0011457 0.0011586 0.0015604 0.0015181
    Case III No. of Iterations 17 17 17 17
    CPU (Time) 0.0011386 0.0014248 0.0012852 0.0012606
    Case IV No. of Iterations 17 17 17 17
    CPU (Time) 0.0010843 0.0010928 0.0011176 0.0012022
    Case V No. of Iterations 17 17 17 17
    CPU (Time) 0.0012491 0.0011169 0.0012293 0.0012719
    Case VI No. of Iterations 18 18 18 18
    CPU (Time) 0.0012431 0.0013496 0.0011613 0.0013392

    Table 8.  Example 2 comparison: Proposed Alg. with different values of $ \gamma $

    $ \gamma = 0.1 $ $ \gamma = 0.5 $ $ \gamma = 0.7 $ $ \gamma = 0.99 $
    Case I No. of Iterations 17 17 17 17
    CPU (Time) 0.0013518 0.0012097 0.0011754 0.0012430
    Case II No. of Iterations 17 17 17 17
    CPU (Time) 0.0012701 0.0011233 0.0012382 0.0015181
    Case III No. of Iterations 17 17 17 17
    CPU (Time) 0.0011386 0.0014248 0.0012852 0.0012606
    Case IV No. of Iterations 17 17 17 17
    CPU (Time) 0.0011530 0.0013917 0.0015395 0.0012022
    Case V No. of Iterations 17 17 17 17
    CPU (Time) 0.0011413 0.0011319 0.0011286 0.0012719
    Case VI No. of Iterations 17 17 18 18
    CPU (Time) 0.0011094 0.0011839 0.0013550 0.0013392
  • [1] T. O. Alakoya, L. O. Jolaoso and O. T. Mewomo, Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems, Optimization, 70 (2021), 545-574. doi: 10.1080/02331934.2020.1723586.
    [2] F. Alvarez and H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set. Valued Anal., 9 (2001), 3-11.  doi: 10.1023/A:1011253113155.
    [3] F. Alvarez, Weak convergence of a relaxed and inertial hybrid projection proximal point algorithm for maximal monotone operators in Hilbert space, SIAM J. Optim., 14 (2004), 773-782.  doi: 10.1137/S1052623403427859.
    [4] C. Baiocchi and A. Capelo, Variational and quasivariational inequalities: Applications to free boundary problems, Wiley, New York, 1984.
    [5] R. I. Boţ, E. R. Csetnek and A. Heinrich, A primal-dual splitting algorithm for finding zeros of sums of maximally monotone operators, SIAM J. Optim., 23 (2013), 2011-2036. doi: 10.1137/12088255X.
    [6] R. I. Boţ and C. Hendrich, A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators, SIAM J. Optim., 23 (2013), 2541-2565. doi: 10.1137/120901106.
    [7] R. I. Boţ, E. R. Csetnek, A. Heinrich and C. Hendrich, On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems, Math. Program., 150 (2015), 251-279. doi: 10.1007/s10107-014-0766-0.
    [8] R. I. Boţ, E. R. Csetnek and C. Hendrich, Inertial Douglas-Rachford splitting for monotone inclusion problems, Appl. Math. Comput., 256 (2015), 472-487. doi: 10.1016/j.amc.2015.01.017.
    [9] R. I. Boţ and E. R. Csetnek, An inertial Tseng's type proximal algorithm for nonsmooth and nonconvex optimization problems, J. Optim. Theory Appl., 171 (2016), 600-616. doi: 10.1007/s10957-015-0730-z.
    [10] R. I. Boţ, E. R. Csetnek and P. T. Vuong, The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces, European J. Oper. Res., 287 (2020), 49-60. doi: 10.1016/j.ejor.2020.04.035.
    [11] Y. Censor, A. Gibali and S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318-335. doi: 10.1007/s10957-010-9757-3.
    [12] Y. Censor, A. Gibali and S. Reich, Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization, 61 (2011), 1119-1132. doi: 10.1080/02331934.2010.539689.
    [13] Y. Censor, A. Gibali and S. Reich, Algorithms for the split variational inequality problem, Numer. Algorithms, 56 (2012), 301-323. doi: 10.1007/s11075-011-9490-5.
    [14] R. W. Cottle and J. C. Yao, Pseudo-monotone complementarity problems in Hilbert space, J. Optim. Theory Appl., 75 (1992), 281-295.  doi: 10.1007/BF00941468.
    [15] S. V. Denisov, V. V. Semenov and L. M. Chabak, Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators, Cybern. Syst. Anal., 51 (2015), 757-765. doi: 10.1007/s10559-015-9768-z.
    [16] Q. L. Dong, Y. J. Cho, L. L. Zhong and Th. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Glob. Optim., 70 (2018), 687-704. doi: 10.1007/s10898-017-0506-0.
    [17] Q. L. Dong, H. B. Yuan, Y. J. Cho and Th. M. Rassias, Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings, Optim. Lett., 12 (2018), 87-102. doi: 10.1007/s11590-016-1102-9.
    [18] Q.-L. Dong, K. R. Kazmi, R. Ali and X.-H. Li, Inertial Krasnosel'skii-Mann type hybrid algorithms for solving hierarchical fixed point problems, J. Fixed Point Theory Appl., 21 (2019), 57. doi: 10.1007/s11784-019-0699-6.
    [19] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Series in Operations Research, vol. II. Springer, New York, 2003.
    [20] A. Gibali, D. V. Thong and P. A. Tuan, Two simple projection-type methods for solving variational inequalities, Anal. Math. Phys., 9 (2019), 2203-2225. doi: 10.1007/s13324-019-00330-w.
    [21] R. Glowinski, J.-L. Lions and R. Trémolières, Numerical Analysis of Variational Inequalities, Elsevier, Amsterdam, 1981.
    [22] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, New York, 1984.
    [23] P. T. Harker and J.-S. Pang, A damped-Newton method for the linear complementarity problem, Lect. Appl. Math., 26 (1990), 265-284. doi: 10.1007/bf01582255.
    [24] X. Hu and J. Wang, Solving pseudo-monotone variational inequalities and pseudo-convex optimization problems using the projection neural network, IEEE Trans. Neural Netw., 17 (2006), 1487-1499. 
    [25] A. N. Iusem, An iterative algorithm for the variational inequality problem, Mat. Apl. Comput., 13 (1994), 103-114. 
    [26] A. Iusem and R. Gárciga Otero, Inexact versions of proximal point and augmented Lagrangian algorithms in Banach spaces, Numer. Funct. Anal. Optim., 22 (2001), 609-640.  doi: 10.1081/NFA-100105310.
    [27] C. Izuchukwu, A. A. Mebawondu and O. T. Mewomo, A new method for solving split variational inequality problem without co-coerciveness, J. Fixed Point Theory Appl., 22 (2020), 98. doi: 10.1007/s11784-020-00834-0.
    [28] L. O. Jolaoso, A. Taiwo, T. O. Alakoya and O. T. Mewomo, A strong convergence theorem for solving pseudo-monotone variational inequalities using projection methods, J. Optim. Theory Appl., 185 (2020), 744-766. doi: 10.1007/s10957-020-01672-3.
    [29] L. O. Jolaoso, A. Taiwo, T. O. Alakoya and O. T. Mewomo, A unified algorithm for solving variational inequality and fixed point problems with application to the split equality problem, Comput. Appl. Math., 39 (2020), 38. doi: 10.1007/s40314-019-1014-2.
    [30] L. O. Jolaoso, T. O. Alakoya, A. Taiwo and O. T. Mewomo, Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space, Optimization, 70 (2021), 387-412. doi: 10.1080/02331934.2020.1716752.
    [31] G. Kassay, S. Reich and S. Sabach, Iterative methods for solving systems of variational inequalities in reflexive Banach spaces, SIAM J. Optim., 21 (2011), 1319-1344. doi: 10.1137/110820002.
    [32] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and their Applications, Academic Press, New York, 1980.
    [33] I. Konnov, Combined Relaxation Methods for Variational Inequalities, Springer, Berlin, 2001. doi: 10.1007/978-3-642-56886-2.
    [34] G. M. Korpelevič, The extragradient method for finding saddle points and other problems, Ekon. Mat. Metody, 12 (1976), 747-756. 
    [35] P.-E. Maingé, A hybrid extragradient-viscosity method for monotone operators and fixed point problems, SIAM J. Control Optim., 47 (2008), 1499-1515.  doi: 10.1137/060675319.
    [36] P.-E. Maingé, Convergence theorem for inertial KM-type algorithms, J. Comput. Appl. Math., 219 (2008), 223-236.  doi: 10.1016/j.cam.2007.07.021.
    [37] Y. Malitsky, Projected reflected gradient methods for monotone variational inequalities, SIAM J. Optim., 25 (2015), 502-520.  doi: 10.1137/14097238X.
    [38] Y. V. Malitsky and V. V. Semenov, A hybrid method without extrapolation step for solving variational inequality problems, J. Glob. Optim., 61 (2015), 193-202.  doi: 10.1007/s10898-014-0150-x.
    [39] Y. Shehu and O. S. Iyiola, Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method, J. Fixed Point Theory Appl., 19 (2017), 2483-2510.  doi: 10.1007/s11784-017-0435-z.
    [40] Y. Shehu, O. S. Iyiola and F. U. Ogbuisi, Iterative method with inertial terms for nonexpansive mappings: Applications to compressed sensing, Numer. Algorithms, 83 (2020), 1321-1347. doi: 10.1007/s11075-019-00727-5.
    [41] Y. Shehu and P. Cholamjiak, Iterative method with inertial for variational inequalities in Hilbert spaces, Calcolo, 56 (2019), 4. doi: 10.1007/s10092-018-0300-5.
    [42] Y. Shehu, Q.-L. Dong and D. Jiang, Single projection method for pseudo-monotone variational inequality in Hilbert spaces, Optimization, 68 (2019), 385-409. doi: 10.1080/02331934.2018.1522636.
    [43] M. V. Solodov and P. Tseng, Modified projection-type methods for monotone variational inequalities, SIAM J. Control Optim., 34 (1996), 1814-1830.  doi: 10.1137/S0363012994268655.
    [44] M. V. Solodov and B. F. Svaiter, A new projection method for variational inequality problems, SIAM J. Control Optim., 37 (1999), 765-776.  doi: 10.1137/S0363012997317475.
    [45] D. V. Thong and D. V. Hieu, Inertial extragradient algorithms for strongly pseudomonotone variational inequalities, J. Comput. Appl. Math., 341 (2018), 80-98.  doi: 10.1016/j.cam.2018.03.019.
    [46] D. V. Thong and D. V. Hieu, Modified subgradient extragradient method for variational inequality problems, Numer. Algorithms, 79 (2018), 597-610. doi: 10.1007/s11075-017-0452-4.
    [47] D. V. Thong and D. V. Hieu, Weak and strong convergence theorems for variational inequality problems, Numer. Algorithms, 78 (2018), 1045-1060.  doi: 10.1007/s11075-017-0412-z.
    [48] D. V. Thong and D. V. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algorithms, 80 (2019), 1283-1307. doi: 10.1007/s11075-018-0527-x.
    [49] D. V. Thong, N. T. Vinh and Y. J. Cho, Accelerated subgradient extragradient methods for variational inequality problems, J. Sci. Comput., 80 (2019), 1438-1462. doi: 10.1007/s10915-019-00984-5.
    [50] D. V. Thong, D. Van Hieu and T. M. Rassias, Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems, Optim. Lett., 14 (2020), 115-144. doi: 10.1007/s11590-019-01511-z.
    [51] D. V. Thong and A. Gibali, Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities, J. Fixed Point Theory Appl., 21 (2019), 20. doi: 10.1007/s11784-018-0656-9.
    [52] D. V. Thong, N. T. Vinh and Y. J. Cho, New strong convergence theorem of the inertial projection and contraction method for variational inequality problems, Numer. Algorithms, 84 (2020), 285-305. doi: 10.1007/s11075-019-00755-1.
    [53] D. V. Thong and D. Van Hieu, Strong convergence of extragradient methods with a new step size for solving variational inequality problems, Comp. Appl. Math., 38 (2019), 136. doi: 10.1007/s40314-019-0899-0.
    [54] D. V. Thong, Y. Shehu and O. S. Iyiola, Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings, Numer. Algorithms, 84 (2020), 795-823. doi: 10.1007/s11075-019-00780-0.
    [55] D. V. Thong and D. V. Hieu, New extragradient methods for solving variational inequality problems and fixed point problems, J. Fixed Point Theory Appl., 20 (2018), 129. doi: 10.1007/s11784-018-0610-x.
    [56] D. V. Thong and D. V. Hieu, Modified Tseng's extragradient algorithms for variational inequality problems, J. Fixed Point Theory Appl., 20 (2018), 152. doi: 10.1007/s11784-018-0634-2.
    [57] D. V. Thong, N. A. Triet, X.-H. Li and Q.-L. Dong, Strong convergence of extragradient methods for solving bilevel pseudo-monotone variational inequality problems, Numer. Algorithms, 83 (2020), 1123-1143. doi: 10.1007/s11075-019-00718-6.
    [58] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431-446.  doi: 10.1137/S0363012998338806.
    [59] F. Wang and H.-K. Xu, Weak and strong convergence theorems for variational inequality and fixed point problems with Tseng's extragradient method, Taiwan. J. Math., 16 (2012), 1125-1136.  doi: 10.11650/twjm/1500406682.
    [60] H.-K. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240-256.  doi: 10.1112/S0024610702003332.