# American Institute of Mathematical Sciences

doi: 10.3934/fods.2021028
Online First



## An adaptation for iterative structured matrix completion

Henry Adams 1, Lara Kassab 1 and Deanna Needell 2

1 Department of Mathematics, Colorado State University, Fort Collins, CO 80523, USA
2 Department of Mathematics, University of California, Los Angeles, CA 90095, USA

* Corresponding author: Lara Kassab

Received: May 2021. Revised: August 2021. Early access: November 2021.

Fund Project: Needell was partially supported by NSF CAREER #1348721 and NSF BIGDATA #1740325

The task of predicting the missing entries of a matrix from a subset of known entries is known as matrix completion. In today's data-driven world, data completion is essential, whether it is the main goal or a pre-processing step. Structured matrix completion includes any setting in which the entries are not missing uniformly at random. In recent work, a modification of the standard nuclear norm minimization (NNM) for matrix completion was developed to take into account sparsity-based structure in the missing entries. This notion of structure is motivated by many settings, including recommender systems, where the probability that an entry is observed depends on the value of the entry. We propose adjusting an Iteratively Reweighted Least Squares (IRLS) algorithm for low-rank matrix completion to take into account sparsity-based structure in the missing entries. We also present an iterative gradient-projection-based implementation of the algorithm that can handle large-scale matrices. Finally, we present an extensive set of numerical experiments on matrices of varying sizes, ranks, and levels of structure. We show that our proposed method is comparable to the adjusted NNM on small matrices, and often outperforms the IRLS algorithm in structured settings on matrices up to size $1000 \times 1000$.
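The IRLS-style iteration referenced in the abstract can be illustrated with a short sketch. The loop below alternates a gradient step on a smoothed Schatten-norm surrogate, with weight matrix $W = (XX^T + \gamma I)^{-1/2}$, and a projection that re-enforces the observed entries. This is a minimal illustrative sketch under those assumptions, not the authors' implementation; in particular, the step size, the fixed smoothing parameter $\gamma$, and the function name are choices made here for illustration, and the sparsity-based reweighting of the unobserved entries that distinguishes Structured sIRLS is omitted.

```python
import numpy as np

def sirls_sketch(M, mask, gamma=1e-2, n_iter=100):
    """Illustrative sIRLS-style loop for low-rank matrix completion.

    M    : matrix with known values on the observed entries
    mask : boolean array, True where the entry of M is observed
    """
    n = M.shape[0]
    X = np.where(mask, M, 0.0)  # initialize unobserved entries with zeros
    for _ in range(n_iter):
        # Weight matrix W = (X X^T + gamma I)^{-1/2} via eigendecomposition
        vals, vecs = np.linalg.eigh(X @ X.T + gamma * np.eye(n))
        W = (vecs * vals ** -0.5) @ vecs.T
        # Gradient step on the smoothed surrogate; a step ~ sqrt(gamma)
        # keeps small singular directions from oscillating
        X = X - np.sqrt(gamma) * (W @ X)
        X[mask] = M[mask]  # project back onto the observed data
    return X
```

By construction, the output agrees exactly with $M$ on the observed entries; only the unobserved entries are estimated by the reweighted iteration.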

Citation: Henry Adams, Lara Kassab, Deanna Needell. An adaptation for iterative structured matrix completion. Foundations of Data Science, doi: 10.3934/fods.2021028
##### Figures:
Figure 1: We consider twenty $1000 \times 1000$ sparse random matrices of rank 10 with average density equal to 0.66. The upper plots display (left) the average relative error for sIRLS $\|M - \tilde X\|_F/\| M\|_F$, and (right) the average relative error for Structured sIRLS $\|M - \hat X\|_F/\| M\|_F$. The lower plots display (left) the average ratio $\|M - \hat X\|_F / \|M - \tilde X\|_F$, and (right) the binned average ratio where white means the average ratio is strictly less than 1, and black otherwise.

Figure 2: We consider twenty $500 \times 500$ sparse random matrices of rank 10 with average density equal to 0.66. The upper plots display (left) the average relative error for sIRLS $\|M - \tilde X\|_F/\| M\|_F$, and (right) the average relative error for Structured sIRLS $\|M - \hat X\|_F/\| M\|_F$. The lower plots display (left) the average ratio $\|M - \hat X\|_F / \|M - \tilde X\|_F$, and (right) the binned average ratio where white means the average ratio is strictly less than 1, and black otherwise.

Figure 3: We consider twenty $100 \times 100$ sparse random matrices of rank 10 with average density equal to 0.66. The upper plots display (left) the average relative error for sIRLS $\|M - \tilde X\|_F/\| M\|_F$, and (right) the average relative error for Structured sIRLS $\|M - \hat X\|_F/\| M\|_F$. The lower plots display (left) the average ratio $\|M - \hat X\|_F / \|M - \tilde X\|_F$, and (right) the binned average ratio where white means the average ratio is strictly less than 1, and black otherwise.

Figure 4: We consider twenty $100 \times 100$ sparse random matrices of rank 8 with density equal to 0.58, but we do not input any rank guess. The upper plots display (left) the average relative error for sIRLS $\|M - \tilde X\|_F/\| M\|_F$, and (right) the average relative error for Structured sIRLS $\|M - \hat X\|_F/\| M\|_F$. The lower plots display (left) the average ratio $\|M - \hat X\|_F / \|M - \tilde X\|_F$, and (right) the average ratio when the sampling rate of non-zero entries is at least 0.70 (a zoomed-in version of part of the lower left plot).

Figure 5: We consider twenty $100 \times 100$ sparse random matrices of rank 20 with average density equal to 0.88. The upper plots display (left) the average relative error for sIRLS $\|M - \tilde X\|_F/\| M\|_F$, and (right) the average relative error for Structured sIRLS $\|M - \hat X\|_F/\| M\|_F$. The lower plots display (left) the average ratio $\|M - \hat X\|_F / \|M - \tilde X\|_F$, and (right) the binned average ratio where white means the average ratio is strictly less than 1, and black otherwise. The red line separates the hard cases from those that are not: all the cases below the red line are hard.

Figure 6: We consider twenty $30 \times 30$ sparse random matrices of rank 7, with average density equal to 0.53. We provide Structured sIRLS with the rank of the matrices. For each matrix and combination of sampling rates, we optimize the regularization parameter $\alpha \in \{ 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$ for Structured NNM and report the "optimal" results. The upper plots display (left) the average relative error for Structured NNM $\|M - \bar X\|_F/\| M\|_F$, and (right) the average relative error for Structured sIRLS $\|M - \hat X\|_F/\| M\|_F$. The lower plots display (left) the average ratio $\|M - \hat X\|_F / \|M - \bar X\|_F$, and (right) the average ratio when the sampling rate of non-zero entries is at most 0.90 (a zoomed-in version of part of the lower left plot).

Figure 7: We consider twenty $100 \times 100$ random matrices of rank 3 with noise parameter $\epsilon = 10^{-4}$. The upper plots display (left) the average relative error for sIRLS $\|B - \tilde X\|_F/\| B\|_F$, and (right) the average relative error for Structured sIRLS $\|B - \hat X\|_F/\| B\|_F$. The lower plots display (left) the average ratio $\|B - \hat X\|_F / \|B - \tilde X\|_F$, and (right) the average ratio when the sampling rate of non-zero entries is at least 0.35 (a zoomed-in version of part of the lower left plot).
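All of the figure captions use the same comparison metric: the Frobenius-norm relative error $\|M - X\|_F / \|M\|_F$ of a completion $X$ against the ground truth $M$, and the ratio of the two methods' errors, where a ratio below 1 favors Structured sIRLS. A minimal helper (the function name is chosen here for illustration) makes the metric concrete:

```python
import numpy as np

def rel_error(M, X):
    # Frobenius-norm relative error ||M - X||_F / ||M||_F, as in the figures
    # (np.linalg.norm of a 2-D array defaults to the Frobenius norm)
    return np.linalg.norm(M - X) / np.linalg.norm(M)
```

For example, the ratio reported in the lower plots would be computed as `rel_error(M, X_structured) / rel_error(M, X_sirls)`, since the common factor $\|M\|_F$ cancels.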

