Iterative singular tube hard thresholding algorithms for tensor recovery

*Corresponding author: Jing Qin
Abstract

Due to the explosive growth of large-scale data sets, tensors have become a vital tool for analyzing and processing high-dimensional data. Unlike in the matrix case, tensor decomposition has been defined in various formats, each of which can be used to define a best low-rank approximation of a tensor that significantly reduces dimensionality for signal compression and recovery. In this paper, we consider the low-rank tensor recovery problem when the tubal rank of the underlying tensor is given or estimated a priori. We propose a novel class of iterative singular tube hard thresholding algorithms for tensor recovery based on the low-tubal-rank tensor approximation, including basic, accelerated deterministic, and stochastic versions. Convergence guarantees are provided, including the special case of linear measurements. Numerical experiments on tensor compressive sensing and color image inpainting demonstrate the convergence and computational efficiency of the proposed methods in practice.

    Mathematics Subject Classification: Primary: 15A69; Secondary: 68W20, 15A83, 94A12, 65F55.
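The approach described in the abstract can be sketched in code. The following is a minimal, illustrative NumPy sketch, not the paper's implementation: it assumes the t-SVD framework (FFT along the third mode, per-slice matrix SVD) for the low-tubal-rank projection, and a basic projected-gradient loop for recovery. The names `tubal_truncate` and `istht`, the step size `mu`, and the iteration count are all hypothetical choices for illustration.

```python
import numpy as np

def tubal_truncate(X, r):
    """Truncated t-SVD: keep the r largest singular tubes of a 3-way tensor.

    Sketch under the t-product framework: FFT along the third mode,
    rank-r truncation of each frontal slice, then inverse FFT.
    """
    Xf = np.fft.fft(X, axis=2)           # frontal slices in the Fourier domain
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s[r:] = 0                        # hard-threshold the singular values
        Xf[:, :, k] = (U * s) @ Vh       # rank-r slice reconstruction
    return np.real(np.fft.ifft(Xf, axis=2))

def istht(A, At, y, shape, r, mu=1.0, iters=100):
    """Basic iterative singular tube hard thresholding (a sketch).

    A / At are a forward linear measurement map and its adjoint
    (hypothetical callables); each iteration takes a gradient step on the
    data-fidelity term and projects onto the tubal-rank-r set.
    """
    X = np.zeros(shape)
    for _ in range(iters):
        X = tubal_truncate(X + mu * At(y - A(X)), r)
    return X
```

With the identity as the measurement operator, the loop reduces to projecting the data onto the tubal-rank-r set in one step, which is a quick sanity check of the projection; the accelerated and stochastic variants in the paper replace the plain gradient step but keep the same singular tube thresholding.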

Figure 1.  Reconstruction error comparison with various tensor tubal ranks. The ground truth is noise-free and the sampling rate is fixed at $ M/N = 0.60 $

Figure 2.  Reconstruction error comparison with various sampling rates. The ground truth is noise-free with tubal rank fixed at 2

    Figure 3.  Reconstruction error comparison with various noisy measurements

Figure 4.  Reconstruction error comparison for Algorithm 5 with various batch sizes. The running times for batch sizes $ b = 200, 400, 600, 800, 1000 $ are 8.38, 12.27, 16.51, 20.52, and 24.66 seconds, respectively

    Figure 5.  Recovery of a checkerboard image with tubal rank 2 via Algorithm 2

    Figure 6.  Recovery of a facade image with tubal rank 3 via Algorithm 2

    Figure 7.  Convergence of our method in color image inpainting
