
Recovering low-rank matrices from binary measurements

Corresponding author: Simon Foucart
The first author is partially supported by NSF grants DMS-1622134 and DMS-1664803.

Abstract

This article studies the approximate recovery of low-rank matrices acquired through binary measurements. Two types of recovery algorithms are considered, one based on hard singular value thresholding and the other on semidefinite programming. When no thresholds are introduced before binary quantization, it is first shown that the direction of the low-rank matrices can be well approximated. When nonadaptive thresholds are incorporated, it is shown that both the direction and the magnitude can be well approximated. Finally, by allowing the thresholds to be chosen adaptively, we exhibit a recovery procedure for which low-rank matrices are fully approximated, with error decaying exponentially in the number of binary measurements. In all cases, the procedures are robust to prequantization error. The underlying arguments are essentially deterministic: they rely only on an unusual restricted isometry property of the measurement process, which is established once and for all for Gaussian measurement processes.

Mathematics Subject Classification: 94A12, 65F10, 90C22.
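
To make the hard-thresholding route concrete, here is a minimal numerical sketch of direction recovery from unthresholded sign measurements $y_i = \mathrm{sign}(\langle \mathbf{A}_i, \mathbf{X}\rangle)$: back-project the signs onto the Gaussian measurement matrices, truncate to rank $r$ by a singular value decomposition, and normalize. The estimator form $H_r\big(\frac{1}{m}\sum_{i} y_i \mathbf{A}_i\big)$ and all names below are illustrative assumptions in the spirit of the abstract, not necessarily the paper's exact procedure.

```python
import numpy as np

def hard_threshold(M, r):
    # H_r(M): best rank-r approximation of M via truncated SVD
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
n, r, m = 20, 2, 4000                    # dimension, rank, number of measurements

# Random rank-r matrix; only its direction X/||X||_F is identifiable from signs
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
X /= np.linalg.norm(X, 'fro')

A = rng.standard_normal((m, n, n))       # Gaussian measurement matrices A_1, ..., A_m
y = np.sign(np.tensordot(A, X, axes=([1, 2], [0, 1])))   # y_i = sign(<A_i, X>)

# Back-projection (1/m) sum_i y_i A_i, then rank-r hard singular value thresholding
X_ht = hard_threshold(np.tensordot(y, A, axes=1) / m, r)
X_ht /= np.linalg.norm(X_ht, 'fro')

print("direction error:", np.linalg.norm(X - X_ht, 'fro'))
```

A prequantization error $\mathbf{e}$ would enter as $y_i = \mathrm{sign}(\langle \mathbf{A}_i, \mathbf{X}\rangle + e_i)$, which is the robustness regime examined in Figure 2 below.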

Figure 1. Comparison of the procedures (25) and (39) when the dimension $n$ varies.

Figure 2. Recovery error $\varepsilon := \left\| \mathbf{X} - \dfrac{\mathbf{X}^{\rm ht}}{\|\mathbf{X}^{\rm ht}\|_F} \right\|_F$ vs magnitude $\mu := \|\mathbf{e}\|_1$ of prequantization error.

Figure 3. Recovery error $\varepsilon := \|\mathbf{X} - \widehat{\mathbf{X}}\|_F$ vs oversampling factor $\lambda$ for the procedure (62).

Figure 4. Recovery error $\varepsilon := \|\mathbf{X} - \widehat{\mathbf{X}}\|_F$ vs oversampling factor $\lambda$ for the procedure (73)-(74)-(75).
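
For the semidefinite programming route, a hedged sketch in the same spirit: maximize the correlation with the signs over a nuclear-norm ball, a Plan and Vershynin style convex relaxation. This stands in for, and is not identical to, the paper's procedures (62) and (73)-(74)-(75); the constraint radii $\sqrt{r}$ and $1$ and the use of a generic solver are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r, m = 10, 2, 1500

# Unit-Frobenius-norm rank-r target, so its direction is the recovery target
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
X /= np.linalg.norm(X, 'fro')

A = rng.standard_normal((m, n, n))
y = np.sign(np.tensordot(A, X, axes=([1, 2], [0, 1])))

# Convex surrogate: maximize <(1/m) sum_i y_i A_i, Z> over a nuclear-norm ball
backproj = np.tensordot(y, A, axes=1) / m
Z = cp.Variable((n, n))
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(backproj, Z))),
                  [cp.normNuc(Z) <= np.sqrt(r), cp.norm(Z, 'fro') <= 1])
prob.solve()

X_hat = Z.value / np.linalg.norm(Z.value, 'fro')
print("direction error:", np.linalg.norm(X - X_hat, 'fro'))
```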
