January 2022, 15(1): 41–56. doi: 10.3934/dcdss.2021026

Extended Krylov subspace methods for solving Sylvester and Stein tensor equations

1. Laboratory LAMAI, University Cadi Ayyad, Marrakesh, Morocco
2. Laboratory LabSIPE, ENSA d'El Jadida, University Chouaib Doukkali, El Jadida, Morocco

* Corresponding author: Smahane El-Halouy

Received: August 2020. Revised: January 2021. Early access: March 2021. Published: January 2022.

This paper deals with Sylvester and Stein tensor equations with low-rank right-hand sides and proposes extended Krylov-like methods for solving them. Expressions for the residual norms are derived, and numerical experiments illustrate the performance of the proposed approaches.
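
For concreteness, the sketch below illustrates the tensor equations in question with NumPy, assuming the forms commonly used in this literature: the Sylvester tensor equation $ \mathcal{X}\times_1 A^{(1)}+\cdots+\mathcal{X}\times_N A^{(N)}=\mathcal{B} $ and the Stein tensor equation $ \mathcal{X}-\mathcal{X}\times_1 A^{(1)}\times_2\cdots\times_N A^{(N)}=\mathcal{B} $. Since equations (1) and (2) of the paper are not reproduced in this excerpt, these forms and the helper names are assumptions, not the paper's code.

```python
import numpy as np

def mode_n_product(X, A, n):
    """Mode-n product X x_n A: contract mode n of the tensor X with the columns of A."""
    Xn = np.moveaxis(X, n, 0)                       # bring mode n to the front
    Yn = np.tensordot(A, Xn, axes=([1], [0]))       # multiply along that mode
    return np.moveaxis(Yn, 0, n)                    # move the new mode back into place

def sylvester_residual(X, As, B):
    """Residual norm of the (assumed) Sylvester tensor equation sum_i X x_i A^(i) = B."""
    R = B.copy()
    for i, A in enumerate(As):
        R = R - mode_n_product(X, A, i)
    return np.linalg.norm(R)

def stein_residual(X, As, B):
    """Residual norm of the (assumed) Stein tensor equation X - X x_1 A^(1) ... x_N A^(N) = B."""
    Y = X.copy()
    for i, A in enumerate(As):
        Y = mode_n_product(Y, A, i)
    return np.linalg.norm(B - (X - Y))
```
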

Citation: Abdeslem Hafid Bentbib, Smahane El-Halouy, El Mostafa Sadek. Extended Krylov subspace methods for solving Sylvester and Stein tensor equations. Discrete and Continuous Dynamical Systems - S, 2022, 15 (1) : 41-56. doi: 10.3934/dcdss.2021026
References:
[1] J. Ballani and L. Grasedyck, A projection method to solve linear systems in tensor format, Numerical Linear Algebra with Applications, 20 (2013), 27–43. doi: 10.1002/nla.1818.
[2] F. P. A. Beik, F. S. Movahed and S. Ahmadi-Asl, On the Krylov subspace methods based on tensor format for positive definite Sylvester tensor equations, Numerical Linear Algebra with Applications, 23 (2016), 444–466. doi: 10.1002/nla.2033.
[3] A. H. Bentbib, K. Jbilou and El M. Sadek, On some extended block Krylov based methods for large scale nonsymmetric Stein matrix equations, Mathematics, 5 (2017), 21. doi: 10.3390/math5020021.
[4] A. H. Bentbib, S. El-Halouy and El M. Sadek, Krylov subspace projection method for Sylvester tensor equation with low rank right-hand side, Numerical Algorithms, (2020), 1–20. doi: 10.1007/s11075-020-00874-0.
[5] M. Chen and D. Kressner, Recursive blocked algorithms for linear systems with Kronecker product structure, Numerical Algorithms, (2019), 1–18. doi: 10.1007/s11075-019-00797-5.
[6] D. Calvetti and L. Reichel, Application of ADI iterative methods to the restoration of noisy images, SIAM Journal on Matrix Analysis and Applications, 17 (1996), 165–186. doi: 10.1137/S0895479894273687.
[7] Z. Chen and L. Lu, A projection method and Kronecker product pre-conditioner for solving Sylvester tensor equations, Science China Mathematics, 55 (2012), 1281–1292. doi: 10.1007/s11425-012-4363-5.
[8] Z. Chen and L. Lu, A gradient based iterative solutions for Sylvester tensor equations, Mathematical Problems in Engineering, (2013). doi: 10.1155/2013/819479.
[9] F. Ding and T. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Transactions on Automatic Control, 50 (2005), 1216–1221. doi: 10.1109/TAC.2005.852558.
[10] M. Heyouni and K. Jbilou, An extended block Arnoldi algorithm for large-scale solutions of the continuous-time algebraic Riccati equation, Electronic Transactions on Numerical Analysis, 33 (2009), 53–62.
[11] M. Heyouni, Extended Arnoldi methods for large low-rank Sylvester matrix equations, Applied Numerical Mathematics, 60 (2010), 1171–1182. doi: 10.1016/j.apnum.2010.07.005.
[12] T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Review, 51 (2009), 455–500. doi: 10.1137/07070111X.
[13] D. Kressner and C. Tobler, Krylov subspace methods for linear systems with tensor product structure, SIAM Journal on Matrix Analysis and Applications, 31 (2010), 1688–1714. doi: 10.1137/090756843.
[14] A. N. Langville and W. J. Stewart, A Kronecker product approximate preconditioner for SANs, Numerical Linear Algebra with Applications, 11 (2004), 723–752. doi: 10.1002/nla.344.
[15] N. Lee and A. Cichocki, Fundamental tensor operations for large-scale data analysis using tensor network formats, Multidimensional Systems and Signal Processing, 29 (2018), 921–960. doi: 10.1007/s11045-017-0481-0.
[16] T. Li, Q. W. Wang and X. F. Duan, Numerical algorithms for solving discrete Lyapunov tensor equation, Journal of Computational and Applied Mathematics, 370 (2020), 112676. doi: 10.1016/j.cam.2019.112676.
[17] T. Penzl et al., A Matlab toolbox for large Lyapunov and Riccati equations, model reduction problems, and linear quadratic optimal control problems, https://www.tu-chemnitz.de/sfb393/lyapack/, 2000.
[18] T. Stykel and V. Simoncini, Krylov subspace methods for projected Lyapunov equations, Applied Numerical Mathematics, 62 (2012), 35–50. doi: 10.1016/j.apnum.2011.09.007.
[19] X. Xu and W. Q. Wang, Extending BiCG and BiCR methods to solve the Stein tensor equation, Computers & Mathematics with Applications, 77 (2019), 3117–3127. doi: 10.1016/j.camwa.2019.01.024.


Figure 1.  Residual norms of the global and extended global Arnoldi algorithms for Example 3 with the first set of matrices and $ n = 225 $
Figure 2.  Residual norms of the global and extended global Arnoldi algorithms for Example 3 with the second set of matrices and $ n = 225 $
Figure 3.  Residual norms of the extended block and global Arnoldi algorithms for Example 4 with $ n = 300 $
Algorithm 1 Extended Block Arnoldi (EBA)
1: Input: $ n\times n $ matrix $ A $, $ n\times r $ matrix $ B $ and an integer $ m $.
2: Compute the QR decomposition $ \left[ B,A^{-1}B\right]=U_1\Lambda $; set $ \mathbb{U}_0=\left[ \;\right] $.
3: For $ j=1,\ldots ,m $ do
4:   Set $ U_j^{(1)} $: first $ r $ columns of $ U_j $; $ U_j^{(2)} $: last $ r $ columns of $ U_j $.
5:   $ \mathbb{U}_j=\left[ \mathbb{U}_{j-1},U_j\right] $; $ W=\left[ AU_j^{(1)},A^{-1}U_j^{(2)}\right] $.
6:   Orthogonalize $ W $ against $ \mathbb{U}_j $ to get $ U_{j+1} $, i.e., for $ i=1,\ldots ,j $: $ H_{i,j}=U_i^TW $; $ W=W-U_iH_{i,j} $.
7:   Compute the QR decomposition $ W=U_{j+1}H_{j+1,j} $.
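
The following is a minimal NumPy sketch of the recursion above. It is illustrative only: dense solves stand in for a precomputed sparse factorization of $ A $, and the function and variable names are not taken from the paper.

```python
import numpy as np

def extended_block_arnoldi(A, B, m):
    """Extended Block Arnoldi (Algorithm 1): orthonormal basis blocks U_1,...,U_{m+1}
    of the extended block Krylov subspace built from {B, A^{-1}B, AB, A^{-2}B, ...},
    together with the block Hessenberg entries H[(i, j)]."""
    n, r = B.shape
    U1, _ = np.linalg.qr(np.hstack([B, np.linalg.solve(A, B)]))   # QR of [B, A^{-1}B]
    blocks = [U1]                                                 # each block is n x 2r
    H = {}
    for j in range(m):
        Uj = blocks[j]
        U1j, U2j = Uj[:, :r], Uj[:, r:]                # first / last r columns of U_j
        W = np.hstack([A @ U1j, np.linalg.solve(A, U2j)])
        for i, Ui in enumerate(blocks):                # block modified Gram-Schmidt
            H[(i, j)] = Ui.T @ W
            W = W - Ui @ H[(i, j)]
        Q, Hjj = np.linalg.qr(W)                       # W = U_{j+1} H_{j+1,j}
        H[(j + 1, j)] = Hjj
        blocks.append(Q)
    return blocks, H
```
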
Algorithm 2 Extended Global Arnoldi (EGA)
1: Input: $ n\times n $ matrix $ A $, $ n\times r $ matrix $ B $ and an integer $ m $.
2: Compute the global QR decomposition $ \left[ B,A^{-1}B\right]=V_1(\Omega\otimes I_r) $; set $ \mathbb{V}_0=\left[ \;\right] $.
3: For $ j=1,\ldots ,m $ do
4:   Set $ V_j^{(1)} $: first $ r $ columns of $ V_j $; $ V_j^{(2)} $: last $ r $ columns of $ V_j $.
5:   $ \mathbb{V}_j=\left[ \mathbb{V}_{j-1},V_j\right] $; $ U=\left[ AV_j^{(1)},A^{-1}V_j^{(2)}\right] $.
6:   Orthogonalize $ U $ against $ \mathbb{V}_j $ to get $ V_{j+1} $, i.e., for $ i=1,\ldots ,j $: $ H_{i,j}=V_i^T\lozenge U $; $ U=U-V_i(H_{i,j}\otimes I_r) $.
7:   Compute the global QR decomposition $ U=V_{j+1}(H_{j+1,j}\otimes I_r) $.
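
Below is a hedged NumPy sketch of the global variant. The $ \lozenge $ product and the global QR factorization are implemented with the Frobenius inner product on $ n\times r $ blocks, which is the usual convention for global Krylov methods; this is an assumption about the notation, not the paper's code.

```python
import numpy as np

def diamond(V, U, r):
    """V^T ◊ U: Frobenius inner products between the r-column blocks of V and U."""
    p, q = V.shape[1] // r, U.shape[1] // r
    return np.array([[np.sum(V[:, k*r:(k+1)*r] * U[:, l*r:(l+1)*r]) for l in range(q)]
                     for k in range(p)])

def global_qr(U, r):
    """Global QR: U = Q (Omega ⊗ I_r) with the r-column blocks of Q F-orthonormal."""
    q = U.shape[1] // r
    Q, Omega = U.astype(float).copy(), np.zeros((q, q))
    for l in range(q):
        for k in range(l):                                    # Gram-Schmidt on blocks
            Omega[k, l] = np.sum(Q[:, k*r:(k+1)*r] * Q[:, l*r:(l+1)*r])
            Q[:, l*r:(l+1)*r] -= Omega[k, l] * Q[:, k*r:(k+1)*r]
        Omega[l, l] = np.linalg.norm(Q[:, l*r:(l+1)*r], 'fro')
        Q[:, l*r:(l+1)*r] /= Omega[l, l]
    return Q, Omega

def extended_global_arnoldi(A, B, m):
    """Extended Global Arnoldi (Algorithm 2): F-orthonormal basis blocks and Hessenberg entries."""
    n, r = B.shape
    V1, _ = global_qr(np.hstack([B, np.linalg.solve(A, B)]), r)
    blocks, H = [V1], {}
    for j in range(m):
        Vj = blocks[j]
        U = np.hstack([A @ Vj[:, :r], np.linalg.solve(A, Vj[:, r:])])
        for i, Vi in enumerate(blocks):
            H[(i, j)] = diamond(Vi, U, r)                     # H_{i,j} = V_i^T ◊ U
            U = U - Vi @ np.kron(H[(i, j)], np.eye(r))        # U = U - V_i (H_{i,j} ⊗ I_r)
        Q, Hjj = global_qr(U, r)                              # U = V_{j+1}(H_{j+1,j} ⊗ I_r)
        H[(j + 1, j)] = Hjj
        blocks.append(Q)
    return blocks, H
```
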
Algorithm 3 Sylvester / Stein EBA Process
1: Input: Coefficient matrices $ A^{(i)} $, $ i=1,\ldots ,N $, and the right-hand side in low rank representation $ \mathcal{B}=\left[B^{(1)},\ldots ,B^{(N)} \right] $.
2: Output: Approximate solution $ \mathcal{X}_m $ of equation (1) / (2).
3: Choose a tolerance $ \epsilon>0 $, a maximum number of iterations $ itermax_i $ and a step-size parameter $ k $; set $ m_i=k $ for $ i=1,\ldots ,N $.
4: For $ i=1,\ldots ,N $, construct the orthonormal basis $ \mathbb{V}_{m_i} $ and the restriction matrix $ \mathbb{T}_{m_i} $ by $ m_i $ steps of the EBA algorithm (Algorithm 1).
5: Compute $ \mathcal{Y}_m $ / $ \mathcal{Z}_m $, the solution of the low dimensional equation (12) / (13).
6: Compute the residual norm $ r_m=\|\mathcal{R}_m\| $ as in Theorem 4.1 / $ q_m=\|\mathcal{Q}_m\| $ as in Theorem 4.2.
7: If $ r_m\geq\epsilon $ / $ q_m\geq\epsilon $, set $ m_i=m_i+k $ for $ i=1,\ldots ,N $ and return to step 4.
8: The approximate solution is given by $ \mathcal{X}_m=\mathcal{Y}_m\times_1\mathbb{V}_{m_1}\ldots\times_N\mathbb{V}_{m_N} $ / $ \mathcal{X}_m=\mathcal{Z}_m\times_1\mathbb{V}_{m_1}\ldots\times_N\mathbb{V}_{m_N} $.
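
Schematically, the restart structure of Algorithm 3 can be organized as below. Since the projected equations (12)/(13) and the residual expressions of Theorems 4.1/4.2 are not reproduced in this excerpt, the corresponding routines are left as user-supplied placeholders; `extended_block_arnoldi` refers to the sketch given after Algorithm 1, and all names here are illustrative.

```python
def sylvester_stein_eba(As, Bs, solve_projected, residual_norm, eps=1e-6, k=1, itermax=20):
    """Schematic driver for Algorithm 3 (EBA-based).  `solve_projected` and `residual_norm`
    stand in for the low-dimensional solve (12)/(13) and the residual formulas of
    Theorems 4.1/4.2, which are not given in this excerpt."""
    N = len(As)
    m = [k] * N                                       # m_i = k for i = 1, ..., N
    Y, bases = None, None
    for _ in range(itermax):
        bases, hessenbergs = [], []
        for i in range(N):                            # basis per coefficient matrix
            blocks, H = extended_block_arnoldi(As[i], Bs[i], m[i])
            bases.append(blocks)
            hessenbergs.append(H)
        Y = solve_projected(hessenbergs, bases, Bs)   # projected equation (12) / (13)
        if residual_norm(Y, hessenbergs, bases) < eps:
            break                                     # X_m = Y x_1 V_{m_1} ... x_N V_{m_N}
        m = [mi + k for mi in m]                      # enlarge all subspaces and restart
    return Y, bases
```
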
Algorithm 4 Sylvester / Stein EGA Process
1: Input: Coefficient matrices $ A^{(i)} $, $ i=1,\ldots ,N $, and the right-hand side in low rank representation $ \mathcal{B}=\left[B^{(1)},\ldots ,B^{(N)} \right] $.
2: Output: Approximate solution $ \mathcal{X}_m $ of equation (1) / (2).
3: Choose a tolerance $ \epsilon>0 $, a maximum number of iterations $ itermax_i $ and a step-size parameter $ k $; set $ m_i=k $ for $ i=1,\ldots ,N $.
4: For $ i=1,\ldots ,N $, construct the orthonormal basis $ \mathbb{W}_{m_i} $ and the restriction matrix $ T_{m_i} $ by $ m_i $ steps of the EGA algorithm (Algorithm 2).
5: Compute $ \mathcal{Y}_m $ / $ \mathcal{Z}_m $, the solutions of the low dimensional equations (16) / (19).
6: Compute the upper bound for the residual norm $ r_m=\left( {\sum_{i=1}^N\|\mathcal{Y}_m\times_iT_{m_i+1,m_i}E_{m_i}^T\|^2}\right)^{1/2} $ as in Theorem 4.4 / $ q_m=\left( \beta_1^2+\sum_{i=1}^N{\beta_2^{(i)}}^2+\sum_{i=1}^N{\beta_3^{(i)}}^2\right)^{1/2} $ as in Theorem 4.5.
7: If $ r_m\geq\epsilon $ / $ q_m\geq\epsilon $, set $ m_i=m_i+k $ for $ i=1,\ldots ,N $ and return to step 4.
8: The approximate solutions are given by $ \mathcal{X}_m=(\mathcal{Y}_m\otimes\mathcal{I}_R)\times_1\mathbb{W}_{m_1}\ldots\times_N\mathbb{W}_{m_N} $ / $ \mathcal{X}_m=(\mathcal{Z}_m\otimes\mathcal{I}_R)\times_1\mathbb{W}_{m_1}\ldots\times_N\mathbb{W}_{m_N} $.
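
As a small illustration of step 6 (Sylvester case), the upper bound $ r_m $ can be evaluated from the projected solution $ \mathcal{Y}_m $ and the trailing Hessenberg blocks $ T_{m_i+1,m_i} $ as below. Interpreting $ E_{m_i} $ as the matrix of the last block columns of the identity follows the usual convention and is an assumption here; the function name is illustrative.

```python
import numpy as np

def ega_sylvester_residual_bound(Y, T_next):
    """Upper bound r_m = ( sum_i || Y x_i T_{m_i+1,m_i} E_{m_i}^T ||^2 )^{1/2} from step 6
    of Algorithm 4 (Sylvester case).  T_next[i] is the trailing block of the i-th
    Hessenberg matrix; Y is the projected solution tensor."""
    total = 0.0
    for i, Tb in enumerate(T_next):
        p, q = Tb.shape
        mdim = Y.shape[i]                      # size of mode i of the projected solution
        E_T = np.zeros((q, mdim))
        E_T[:, mdim - q:] = np.eye(q)          # E_{m_i}^T selects the last q entries (assumed convention)
        M = Tb @ E_T                           # T_{m_i+1, m_i} E_{m_i}^T
        Yi = np.tensordot(M, np.moveaxis(Y, i, 0), axes=([1], [0]))  # Y x_i M, mode i in front
        total += np.linalg.norm(Yi) ** 2
    return np.sqrt(total)
```
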
Table 1.  Example 1

Algorithm   | $ R $ | $ n $   | Cycles | $ \|\mathcal{R}_m\| $ | CPU time (s)
Algorithm 3 | $ 3 $ | $ 500 $ | $ 5 $  | $ 9.34\cdot 10^{-7} $ | $ 16.1 $
Algorithm 3 | $ 5 $ | $ 300 $ | $ 4 $  | $ 4.03\cdot 10^{-7} $ | $ 31.38 $
Algorithm 4 | $ 3 $ | $ 500 $ | $ 9 $  | $ 6.69\cdot 10^{-7} $ | $ 4.57 $
Algorithm 4 | $ 5 $ | $ 300 $ | $ 8 $  | $ 4.34\cdot 10^{-7} $ | $ 3.76 $
Table 2.  Example 2

Algorithm            | $ R $ | $ n $   | Cycles | $ \|\mathcal{R}_m\| $ | $ \|\mathcal{X}^{\ast}-\mathcal{X}_m\| $ | CPU time (s)
Algorithm 3 (case 1) | $ 3 $ | $ 300 $ | $ 2 $  | $ 4.08\cdot 10^{-9} $ | $ 3.85\cdot 10^{-6} $ | $ 1.58 $
Algorithm 3 (case 2) | $ 3 $ | $ 300 $ | $ 2 $  | $ 1.26\cdot 10^{-8} $ | $ 2.82\cdot 10^{-7} $ | $ 1.53 $
Algorithm 4 (case 1) | $ 3 $ | $ 300 $ | $ 6 $  | $ 5.82\cdot 10^{-8} $ | $ 4.21\cdot 10^{-6} $ | $ 1.46 $
Algorithm 4 (case 2) | $ 3 $ | $ 300 $ | $ 7 $  | $ 5.39\cdot 10^{-8} $ | $ 4.17\cdot 10^{-8} $ | $ 2.22 $
Algorithm 4          | $ 5 $ | $ 500 $ | $ 10 $ | $ 4.54\cdot 10^{-7} $ | $ - $                 | $ 26.01 $
Table 3.  Example 3, $ N = 3 $ and $ R = 5 $

Matrices | Algorithm          | $ \|\mathcal{R}_m\| $ | CPU time (s)
fdm      | Algorithm 2 of [4] | $ 1.45\cdot 10^{-6} $ | $ 56.0 $
fdm      | Algorithm 4        | $ 3.6\cdot 10^{-9} $  | $ 0.74 $
poisson  | Algorithm 2 of [4] | $ 2.82\cdot 10^{-6} $ | $ 14.5 $
poisson  | Algorithm 4        | $ 2.24\cdot 10^{-9} $ | $ 0.67 $
Table 4.  Example 4

Algorithm   | $ R $ | $ n $   | Cycles | $ \|\mathcal{R}_m\| $ | CPU time (s)
Algorithm 3 | $ 3 $ | $ 300 $ | $ 6 $  | $ 1.58\cdot 10^{-6} $ | $ 27.49 $
Algorithm 4 | $ 3 $ | $ 300 $ | $ 4 $  | $ 1.56\cdot 10^{-7} $ | $ 0.83 $
Algorithm 4 | $ 5 $ | $ 300 $ | $ 4 $  | $ 8.77\cdot 10^{-7} $ | $ 0.85 $