Inverse Problems & Imaging, August 2019, 13(4): 703-720. doi: 10.3934/ipi.2019032

Recovering low-rank matrices from binary measurements

Simon Foucart and Richard G. Lynch

Department of Mathematics, Mailstop 3368, Texas A&M University, College Station, TX 77843-3368, USA

* Corresponding author: Simon Foucart

Received: August 2017. Revised: January 2019. Published: May 2019.

Fund Project: The first author is partially supported by NSF grants DMS-1622134 and DMS-1664803.

This article studies the approximate recovery of low-rank matrices acquired through binary measurements. Two types of recovery algorithms are considered, one based on hard singular value thresholding and the other on semidefinite programming. When no thresholds are introduced before binary quantization, it is first shown that the direction of the low-rank matrices can be well approximated. When nonadaptive thresholds are incorporated, it is shown that both the direction and the magnitude can be well approximated. Finally, by allowing the thresholds to be chosen adaptively, we exhibit a recovery procedure for which low-rank matrices are fully approximated with error decaying exponentially with the number of binary measurements. In all cases, the procedures are robust to prequantization error. The underlying arguments are essentially deterministic: they rely only on an unusual restricted isometry property of the measurement process, which is established once and for all for Gaussian measurement processes.
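The hard-singular-value-thresholding approach described above can be sketched as follows. This is an illustrative reconstruction only, not the authors' exact procedure (which is specified by equation numbers in the full text); the function name `hard_threshold_recover` and all dimensions are hypothetical, and Gaussian measurement matrices with one-bit observations y_i = sign(⟨A_i, X⟩) are assumed:

```python
import numpy as np

def hard_threshold_recover(A, y, r):
    """Estimate the direction X/||X||_F of a rank-r matrix X from
    one-bit measurements y_i = sign(<A_i, X>).

    A : (m, n, n) array of measurement matrices (Gaussian here).
    y : (m,) array of signs.
    r : target rank for the hard singular value thresholding step.
    """
    m = y.shape[0]
    # Back-projection B = (1/m) * sum_i y_i A_i, whose expectation is
    # proportional to X/||X||_F for Gaussian measurement matrices.
    B = np.tensordot(y, A, axes=1) / m
    # Hard singular value thresholding: keep the top r singular triplets.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    X_ht = (U[:, :r] * s[:r]) @ Vt[:r]
    return X_ht / np.linalg.norm(X_ht, "fro")

# Illustrative run on a random rank-1 matrix (dimensions are arbitrary).
rng = np.random.default_rng(0)
n, m, r = 20, 5000, 1
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
X /= np.linalg.norm(X, "fro")
A = rng.standard_normal((m, n, n))
y = np.sign(np.einsum("kij,ij->k", A, X))
X_hat = hard_threshold_recover(A, y, r)
direction_error = np.linalg.norm(X - X_hat, "fro")
```

Since the measurements are signs, only the direction of X is identifiable in this thresholdless setting, which is why the estimate is normalized to unit Frobenius norm.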

Citation: Simon Foucart, Richard G. Lynch. Recovering low-rank matrices from binary measurements. Inverse Problems & Imaging, 2019, 13 (4) : 703-720. doi: 10.3934/ipi.2019032
References:
[1] R. Baraniuk and P. Boufounos, 1-bit compressive sensing, in 2008 42nd Annual Conference on Information Sciences and Systems, IEEE, (2008), 16–21.
[2] R. Baraniuk, S. Foucart, D. Needell, Y. Plan and M. Wootters, Exponential decay of reconstruction error from binary measurements of sparse signals, IEEE Transactions on Information Theory, 63 (2017), 3368–3385. doi: 10.1109/TIT.2017.2688381.
[3] R. Baraniuk, S. Foucart, D. Needell, Y. Plan and M. Wootters, One-bit compressive sensing of dictionary-sparse signals, Information and Inference: A Journal of the IMA, 7 (2018), 83–104. doi: 10.1093/imaiai/iax009.
[4] E. J. Candès and Y. Plan, Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements, IEEE Transactions on Information Theory, 57 (2011), 2342–2359. doi: 10.1109/TIT.2011.2111771.
[5] CVX Research, Inc., CVX: MATLAB software for disciplined convex programming, version 2.1, http://cvxr.com/cvx, 2014.
[6] M. A. Davenport and J. Romberg, An overview of low-rank matrix recovery from incomplete observations, IEEE Journal of Selected Topics in Signal Processing, 10 (2016), 608–622. doi: 10.1109/JSTSP.2016.2539100.
[7] S. Foucart, Flavors of compressive sensing, in Approximation Theory XV: San Antonio 2016 (eds. G. E. Fasshauer and L. L. Schumaker), Springer Proceedings in Mathematics & Statistics, 201 (2017), 61–104.
[8] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Birkhäuser, New York, 2013. doi: 10.1007/978-0-8176-4948-7.
[9] Y. Plan and R. Vershynin, One-bit compressed sensing by linear programming, Communications on Pure and Applied Mathematics, 66 (2013), 1275–1297. doi: 10.1002/cpa.21442.
[10] Y. Plan and R. Vershynin, Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach, IEEE Transactions on Information Theory, 59 (2013), 482–494. doi: 10.1109/TIT.2012.2207945.
[11] Y. Plan and R. Vershynin, Dimension reduction by random hyperplane tessellations, Discrete & Computational Geometry, 51 (2014), 438–461. doi: 10.1007/s00454-013-9561-6.
[12] Y. Plan and R. Vershynin, The generalized Lasso with non-linear observations, IEEE Transactions on Information Theory, 62 (2016), 1528–1537. doi: 10.1109/TIT.2016.2517008.
[13] G. Schechtman, Two observations regarding embedding subsets of Euclidean spaces in normed spaces, Advances in Mathematics, 200 (2006), 125–135. doi: 10.1016/j.aim.2004.11.003.


Figure 1. Comparison of the procedures (25) and (39) when the dimension $n$ varies.
Figure 2. Recovery error $\varepsilon := \left\| {\mathbf X} - \dfrac{{\mathbf X}^{\rm ht}}{\|{\mathbf X}^{\rm ht}\|_F} \right\|_F$ vs magnitude $\mu := \|{\mathbf e}\|_1$ of prequantization error.
Figure 3. Recovery error $\varepsilon := \|{\mathbf X} - \widehat{{\mathbf X}}\|_F$ vs oversampling factor $\lambda$ for the procedure (62).
Figure 4. Recovery error $\varepsilon := \|{\mathbf X} - \widehat{{\mathbf X}}\|_F$ vs oversampling factor $\lambda$ for the procedure (73)-(74)-(75).
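In the spirit of the oversampling experiments reported in the figures (whose exact procedures are defined by equations (25), (39), (62) in the full text), a minimal hedged sketch with made-up dimensions illustrates how the direction-recovery error of hard singular value thresholding decreases as the number m of binary measurements grows; `direction_error` and all parameter values are hypothetical:

```python
import numpy as np

def direction_error(n, m, r, rng):
    """One trial: error of rank-r direction recovery from m one-bit measurements."""
    # Random rank-r target with unit Frobenius norm.
    X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    X /= np.linalg.norm(X, "fro")
    # Gaussian measurement matrices and binary (sign) observations.
    A = rng.standard_normal((m, n, n))
    y = np.sign(np.einsum("kij,ij->k", A, X))
    # Back-projection followed by hard singular value thresholding at rank r.
    B = np.tensordot(y, A, axes=1) / m
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    X_ht = (U[:, :r] * s[:r]) @ Vt[:r]
    X_ht /= np.linalg.norm(X_ht, "fro")
    return np.linalg.norm(X - X_ht, "fro")

rng = np.random.default_rng(1)
# Average a few trials at a small and a large number of measurements.
errs = [np.mean([direction_error(15, m, 1, rng) for _ in range(5)])
        for m in (200, 3200)]
```

With sixteen times as many measurements, the averaged error `errs[1]` should be noticeably smaller than `errs[0]`, consistent with the decay the figures report against the oversampling factor.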
