Fully-connected tensor network decomposition for robust tensor completion problem

  • *Corresponding author: Xi-Le Zhao
  • Motivated by the success of fully-connected tensor network (FCTN) decomposition, we suggest two FCTN-based models for the robust tensor completion (RTC) problem. First, we propose an FCTN-based robust nonconvex optimization model (RNC-FCTN), built directly on FCTN decomposition, for the RTC problem. A proximal alternating minimization (PAM)-based algorithm is then developed to solve the proposed RNC-FCTN, and we theoretically establish its convergence. Although the nonconvex model shows empirically excellent results, an exact recovery guarantee is still missing, and its $ N(N-1)/2+1 $ tuning parameters are difficult to choose for an $ N $th-order tensor. Therefore, we propose the FCTN nuclear norm as a convex surrogate of the FCTN rank and suggest an FCTN nuclear norm-based robust convex optimization model (RC-FCTN) for the RTC problem. To solve the constrained optimization model RC-FCTN, we develop an alternating direction method of multipliers (ADMM)-based algorithm, which enjoys a global convergence guarantee. To establish the exact recovery guarantee, we design a constructive singular value decomposition (SVD)-based FCTN decomposition, another crucial algorithm for obtaining the factor tensors of the FCTN decomposition. Accordingly, we rigorously establish the exact recovery guarantee for RC-FCTN and suggest the theoretically optimal value for the single parameter of the convex model. Comprehensive numerical experiments in several applications, such as video completion and video background subtraction, demonstrate that the suggested convex and nonconvex models achieve state-of-the-art performance.
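    For orientation, the RTC problem posits an incomplete observation $ \mathcal{O} $ whose known entries (on an index set $ \Omega $) mix a low-rank tensor with sparse outliers. A minimal sketch of a convex model of the RC-FCTN kind, assuming equal weights $ w_d $ over the generalized unfoldings of Table 1 (an assumption based on the abstract; the paper's precise FCTN nuclear norm and weights are defined in the full text):
    $$ \min_{\mathcal{X},\,\mathcal{E}}\ \sum_{d} w_d \big\| \textbf{X}_{[\textbf{n}_{1:d};\,\textbf{n}_{d+1:N}]} \big\|_{*} + \lambda \|\mathcal{E}\|_{1} \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}+\mathcal{E}) = \mathcal{P}_{\Omega}(\mathcal{O}), $$
    where $ \|\cdot\|_* $ is the matrix nuclear norm, $ \|\cdot\|_1 $ the entrywise $ \ell_1 $ norm, and $ \lambda $ the single parameter whose theoretically optimal value the paper derives. An ADMM solver for this kind of model is built from two standard proximal maps; a generic NumPy sketch (not the paper's implementation):

    ```python
    import numpy as np

    def soft_threshold(X, tau):
        """Prox of tau*||.||_1: shrinks entries toward zero (the sparse term)."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svt(M, tau):
        """Singular value thresholding, the prox of tau*||.||_* (the low-rank term)."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ (soft_threshold(s, tau)[:, None] * Vt)
    ```

    Each ADMM iteration applies `svt` to the generalized unfoldings and `soft_threshold` to the sparse component, which is where the per-step SVD costs in Table 2 originate.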

    Mathematics Subject Classification: Primary: 94A08, 68U10; Secondary: 65F50.

  • Figure 1.  The illustration of tensor network decomposition

    Figure 2.  The illustration of tensor network decompositions

    Figure 3.  The graphical representation of the SVD-based FCTN decomposition. (a), (b), and (c) are the graphical representations of the SVD-based FCTN decomposition of a fourth-order tensor, a fifth-order tensor, and an $ \mathit{N} $th-order tensor, respectively
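    The construction in Figure 3 is the paper's own. As a point of reference only, the classical TT-SVD (sequential reshapes and truncated SVDs) follows the same constructive spirit, which the paper extends to the fully-connected topology. A minimal NumPy sketch of TT-SVD (an illustration, not the paper's SVD-based FCTN algorithm):

    ```python
    import numpy as np

    def tt_svd(X, eps=1e-10):
        """Classical TT-SVD: peel off one mode per truncated SVD."""
        dims = X.shape
        cores, r_prev = [], 1
        C = X.reshape(r_prev * dims[0], -1)
        for k in range(len(dims) - 1):
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            r = max(1, int((s > eps * s[0]).sum()))      # numerical rank
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
            r_prev = r
        cores.append(C.reshape(r_prev, dims[-1], 1))
        return cores

    # Sanity check: contract the cores and compare with the input.
    X = np.random.rand(4, 5, 6)
    G1, G2, G3 = tt_svd(X)
    print(np.allclose(np.einsum('aib,bjc,ckd->ijk', G1, G2, G3), X))
    ```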

    Figure 4.  The MPSNR and MSSIM values recovered by RNC-FCTN with respect to $ \lambda_0 $ for two color videos

    Figure 5.  The MPSNR and MSSIM values recovered by RNC-FCTN with respect to the FCTN-rank for color video $ \mathit{bunny} $

    Figure 6.  The MPSNR and MSSIM values recovered by RC-FCTN with respect to $ \lambda $ for two color videos

    Figure 7.  Recovered results on two color videos with SR = 0.2 and SaP = 0.1. The first row and third row are visual results at the 1st frame of $ \mathit{bunny} $ and the 50th frame of $ \mathit{elephants} $, respectively. The second row and fourth row are the corresponding residual images

    Figure 8.  Recovered results on the HSV (bands 8, 9, and 10 of the first frame are picked as the red, green, and blue channels). The first row and third row are visual results with SR = 0.1 and SaP = 0.1 and 0.2, respectively. The second row and fourth row are the corresponding residual images

    Figure 9.  The visual results of the 17th frame, the 27th frame, the 37th frame, and the 47th frame of six robust completion methods

    Table 1.  The notations of the unfolding matrices of tensor $ \mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N} $

    | Name | Form | Size | Corresponding tensor decomposition |
    |---|---|---|---|
    | The mode-$ k $ matricization | $ \textbf{X}_{(k)} $ | $ I_k \times \prod_{i=1, i \neq k}^N I_i $ | Tucker decomposition |
    | The $ k $th frontal slice | $ \textbf{X}^{(k)} $ | $ I_1 \times I_2 $ | t-SVD |
    | The $ k $-mode matricization | $ \textbf{X}_{[k]} $ | $ \prod_{i=1}^k I_i \times \prod_{i=k+1}^N I_i $ | TT decomposition |
    | The $ k $th circular unfolding matrix | $ \textbf{X}_{\langle k, L\rangle} $ | $ \prod_{i=k}^{k+L-1} I_i \times \prod_{i=k+L}^{k-1} I_i $ | TR decomposition |
    | The generalized unfolding matrix | $ \textbf{X}_{[\textbf{n}_{1:d};\,\textbf{n}_{d+1:N}]} $ | $ \prod_{i=1}^d I_{\textbf{n}_i} \times \prod_{i=d+1}^N I_{\textbf{n}_i} $ | FCTN decomposition |
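    As a concrete illustration of the last row of Table 1, the generalized unfolding is a permute-then-reshape. A small NumPy sketch (using NumPy's row-major ordering; conventions in the literature may differ by a column permutation):

    ```python
    import numpy as np

    def generalized_unfolding(X, order, d):
        """order: a permutation (n_1, ..., n_N) of the modes;
        d: how many leading modes of `order` become the rows."""
        Xp = np.transpose(X, order)                         # reorder the modes
        rows = int(np.prod([X.shape[i] for i in order[:d]]))
        return Xp.reshape(rows, -1)                         # group row/column modes

    # Example: a 4th-order tensor with modes (0, 2) as rows and (1, 3) as columns.
    X = np.random.rand(3, 4, 5, 6)
    print(generalized_unfolding(X, (0, 2, 1, 3), 2).shape)  # (15, 24)
    ```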

    Table 2.  The comparison of the computational complexity of the proposed methods and the compared methods

    | Method | Computational complexity of each step |
    |---|---|
    | SNN | $ \mathcal{O}\big(\sum_{k=1}^{4} p_k q_k \min(p_k, q_k)\big) $, where $ p_k = I_k $ and $ q_k = \prod_{i=1, i \neq k}^4 I_i $ |
    | TNN | $ \mathcal{O}\big(I_1 I_2 \log(I_3 I_4) + \min(I_1, I_2) \prod_{i=1}^4 I_i\big) $ |
    | TTNN | $ \mathcal{O}\big(\sum_{k=1}^{4} p_k q_k \min(p_k, q_k)\big) $, where $ p_k = \prod_{i=1}^k I_i $ and $ q_k = \prod_{i=k+1}^4 I_i $ |
    | RTRC | $ \mathcal{O}\big(\sum_{k=1}^{2} p_k q_k \min(p_k, q_k)\big) $, where $ p_k = \prod_{i=k}^{k+1} I_i $ and $ q_k = \prod_{i=k+2}^{k+3} I_i $ with $ I_5 = I_1 $ |
    | RC-FCTN | $ \mathcal{O}\big(\sum_{k=1}^{3} p_k q_k \min(p_k, q_k)\big) $, where $ p_k = \prod_{i=1}^{l} I_{\textbf{n}_i} $ and $ q_k = \prod_{i=l+1}^4 I_{\textbf{n}_i} $ |
    | RNC-FCTN | $ \mathcal{O}\big(4\sum_{k=2}^{4} I^k R^{k(4-k)+k-1} + 4 I^3 R^6\big) $ |
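    For intuition, assume all modes are equal, $ I_1 = \cdots = I_4 = I $ (an illustrative assumption, not a case from the paper). Then the SNN step costs $ \mathcal{O}\big(\sum_{k=1}^4 I \cdot I^3 \cdot I\big) = \mathcal{O}(I^5) $ and the TNN step $ \mathcal{O}(I^2 \log I^2 + I^5) $, while the balanced unfoldings used by TTNN, RTRC, and RC-FCTN have $ p_k \approx q_k \approx I^2 $ and hence cost $ \mathcal{O}(I^6) $ per step.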

    Table 3.  Exact recovery on random synthetic data in different cases

    | size $ I $ | rank $ r $ | $ \rho $ | $ s $ | $ \|\mathcal{X}-\mathcal{X}^0\|_F/\|\mathcal{X}^0\|_F $ | size $ I $ | rank $ r $ | $ \rho $ | $ s $ | $ \|\mathcal{X}-\mathcal{X}^0\|_F/\|\mathcal{X}^0\|_F $ |
    |---|---|---|---|---|---|---|---|---|---|
    | 20 | 0.1$ I $ | 1 | 0.05 | $ 1.28 \times 10^{-4} $ | 40 | 0.1$ I $ | 1 | 0.05 | $ 1.06 \times 10^{-4} $ |
    | | | | 0.1 | $ 1.56 \times 10^{-4} $ | | | | 0.1 | $ 1.12 \times 10^{-4} $ |
    | | | 0.9 | 0.05 | $ 1.93 \times 10^{-4} $ | | | 0.9 | 0.05 | $ 1.72 \times 10^{-4} $ |
    | | | | 0.1 | $ 1.96 \times 10^{-4} $ | | | | 0.1 | $ 2.25 \times 10^{-4} $ |
    | | 0.2$ I $ | 1 | 0.05 | $ 4.74 \times 10^{-4} $ | | 0.2$ I $ | 1 | 0.05 | $ 1.77 \times 10^{-4} $ |
    | | | | 0.1 | $ 7.83 \times 10^{-4} $ | | | | 0.1 | $ 2.88 \times 10^{-4} $ |
    | | | 0.9 | 0.05 | $ 6.55 \times 10^{-4} $ | | | 0.9 | 0.05 | $ 2.66 \times 10^{-4} $ |
    | | | | 0.1 | $ 8.35 \times 10^{-4} $ | | | | 0.1 | $ 3.01 \times 10^{-4} $ |
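    A plausible generator for Table 3's synthetic data (our reading: $ \rho $ as the observation rate and $ s $ as the sparse-corruption ratio; the paper's exact protocol, e.g. the corruption magnitudes, is in the full text): draw random FCTN factors, contract them into a low-FCTN-rank ground truth $ \mathcal{X}^0 $, then observe entries at rate $ \rho $ and corrupt a fraction $ s $ of them.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    I, r = 20, 2                              # mode size I and FCTN rank r, cf. Table 3

    # One FCTN factor per mode; every pair of factors shares one rank index
    # (the fully-connected topology of Figures 1-2).
    G1 = rng.standard_normal((I, r, r, r))    # (I1, R12, R13, R14)
    G2 = rng.standard_normal((r, I, r, r))    # (R12, I2, R23, R24)
    G3 = rng.standard_normal((r, r, I, r))    # (R13, R23, I3, R34)
    G4 = rng.standard_normal((r, r, r, I))    # (R14, R24, R34, I4)
    X0 = np.einsum('iabc,ajde,bdkf,cefl->ijkl', G1, G2, G3, G4)

    rho, s = 0.9, 0.05                        # observation rate, corruption ratio
    E = np.where(rng.random(X0.shape) < s, rng.standard_normal(X0.shape), 0.0)
    mask = rng.random(X0.shape) < rho
    O = np.where(mask, X0 + E, 0.0)           # observed, grossly corrupted tensor
    ```

    The reported quantity is then the relative recovery error $ \|\mathcal{X}-\mathcal{X}^0\|_F / \|\mathcal{X}^0\|_F $ of the estimate $ \mathcal{X} $.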

    Table 4.  The MPSNR/MSSIM, iteration number, and CPU time (seconds) of the recovered color videos by different methods for different cases

    | Data | Methods | SaP=0.1, SR=0.6 | | | | SaP=0.1, SR=0.4 | | | | SaP=0.1, SR=0.2 | | | |
    |---|---|---|---|---|---|---|---|---|---|---|---|---|---|
    | | | MPSNR | MSSIM | Iter | Time | MPSNR | MSSIM | Iter | Time | MPSNR | MSSIM | Iter | Time |
    | $ \mathit{bunny} $ | Observed | 9.357 | 0.095 | - | - | 8.035 | 0.062 | - | - | 7.024 | 0.033 | - | - |
    | | SNN | 26.80 | 0.832 | 128 | 83.5 | 22.61 | 0.653 | 142 | 87.3 | 14.92 | 0.415 | 143 | 85.3 |
    | | TNN | 34.61 | 0.955 | 34 | 22.9 | 30.59 | 0.925 | 34 | 22.6 | 25.67 | 0.774 | 34 | 22.1 |
    | | TTNN | 32.86 | 0.966 | 60 | 27.8 | 29.80 | 0.937 | 66 | 29.3 | 23.90 | 0.764 | 74 | 33.7 |
    | | RTRC | 34.05 | 0.962 | 66 | 26.7 | 30.87 | 0.924 | 67 | 28.5 | 26.60 | 0.835 | 72 | 30.1 |
    | | RC-FCTN | 34.92 | 0.975 | 85 | 60.1 | 31.86 | 0.946 | 87 | 62.1 | 26.56 | 0.837 | 92 | 64.4 |
    | | RNC-FCTN | 39.72 | 0.983 | 643 | 849.7 | 37.11 | 0.977 | 656 | 856.1 | 32.85 | 0.941 | 676 | 880.8 |
    | $ \mathit{elephants} $ | Observed | 7.146 | 0.049 | - | - | 5.631 | 0.031 | - | - | 4.516 | 0.017 | - | - |
    | | SNN | 28.80 | 0.863 | 141 | 88.7 | 24.21 | 0.759 | 143 | 89.9 | 14.45 | 0.516 | 144 | 85.7 |
    | | TNN | 34.35 | 0.958 | 32 | 22.2 | 31.05 | 0.928 | 33 | 22.5 | 26.28 | 0.811 | 32 | 21.8 |
    | | TTNN | 32.61 | 0.943 | 62 | 28.1 | 29.75 | 0.933 | 68 | 30.5 | 25.42 | 0.845 | 67 | 29.4 |
    | | RTRC | 33.69 | 0.961 | 66 | 27.2 | 30.52 | 0.929 | 68 | 28.5 | 26.31 | 0.847 | 75 | 32.3 |
    | | RC-FCTN | 36.99 | 0.974 | 83 | 62.3 | 33.25 | 0.956 | 87 | 62.1 | 27.94 | 0.877 | 94 | 66.4 |
    | | RNC-FCTN | 39.18 | 0.975 | 651 | 841.7 | 36.20 | 0.960 | 663 | 850.0 | 29.91 | 0.888 | 669 | 889.9 |
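    In Tables 4 and 5, MPSNR and MSSIM are the mean PSNR and mean SSIM over all frames (standard definitions, e.g. $ \mathrm{MPSNR} = \frac{1}{F}\sum_{f=1}^{F} 10\log_{10}\big(\mathrm{MAX}^2/\mathrm{MSE}_f\big) $ over $ F $ frames), and SR and SaP denote the sampling rate and the salt-and-pepper noise ratio, respectively (the usual reading of these abbreviations in this literature); higher MPSNR/MSSIM and lower iteration counts and CPU times are better.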

    Table 5.  The MPSNR/MSSIM, iteration number, and CPU time (seconds) of the recovered HSV by different methods for different cases

    | Methods | SaP=0.1, SR=0.3 | | | | SaP=0.1, SR=0.2 | | | | SaP=0.1, SR=0.1 | | | |
    |---|---|---|---|---|---|---|---|---|---|---|---|---|
    | | MPSNR | MSSIM | Iter | Time | MPSNR | MSSIM | Iter | Time | MPSNR | MSSIM | Iter | Time |
    | Observed | 9.367 | 0.071 | - | - | 8.936 | 0.048 | - | - | 8.545 | 0.026 | - | - |
    | SNN | 26.89 | 0.872 | 94 | 24.9 | 24.12 | 0.790 | 97 | 25.6 | 19.23 | 0.628 | 98 | 25.8 |
    | TNN | 43.23 | 0.993 | 36 | 12.4 | 37.77 | 0.983 | 36 | 12.6 | 27.72 | 0.893 | 34 | 11.2 |
    | TTNN | 45.71 | 0.995 | 89 | 28.7 | 43.18 | 0.993 | 94 | 30.7 | 36.68 | 0.985 | 94 | 29.2 |
    | RTRC | 45.73 | 0.996 | 93 | 33.0 | 42.70 | 0.994 | 95 | 32.1 | 36.56 | 0.983 | 99 | 36.4 |
    | RC-FCTN | 46.64 | 0.997 | 109 | 67.2 | 43.98 | 0.995 | 110 | 67.7 | 37.88 | 0.988 | 112 | 66.4 |
    | RNC-FCTN | 46.91 | 0.997 | 436 | 162.2 | 44.38 | 0.995 | 443 | 159.0 | 39.20 | 0.986 | 446 | 159.9 |
    | Methods | SaP=0.2, SR=0.3 | | | | SaP=0.2, SR=0.2 | | | | SaP=0.2, SR=0.1 | | | |
    | | MPSNR | MSSIM | Iter | Time | MPSNR | MSSIM | Iter | Time | MPSNR | MSSIM | Iter | Time |
    | Observed | 9.015 | 0.055 | - | - | 8.716 | 0.038 | - | - | 8.448 | 0.022 | - | - |
    | SNN | 24.24 | 0.759 | 89 | 23.1 | 21.32 | 0.722 | 94 | 24.8 | 17.00 | 0.527 | 98 | 25.1 |
    | TNN | 39.75 | 0.987 | 36 | 12.4 | 34.37 | 0.969 | 36 | 12.2 | 25.14 | 0.821 | 34 | 11.0 |
    | TTNN | 43.43 | 0.994 | 72 | 24.7 | 40.20 | 0.993 | 82 | 26.7 | 34.38 | 0.971 | 92 | 29.6 |
    | RTRC | 42.62 | 0.994 | 83 | 30.6 | 39.09 | 0.991 | 87 | 31.7 | 33.65 | 0.972 | 96 | 35.8 |
    | RC-FCTN | 43.79 | 0.996 | 110 | 68.7 | 39.62 | 0.991 | 112 | 69.9 | 34.49 | 0.974 | 112 | 67.1 |
    | RNC-FCTN | 44.73 | 0.995 | 421 | 149.7 | 41.93 | 0.992 | 423 | 155.2 | 36.02 | 0.975 | 426 | 157.8 |
