doi: 10.3934/fods.2020006

Stability of sampling for CUR decompositions

Keaton Hamm 1 and Longxiu Huang 2,*

1. Department of Mathematics, University of Arizona, Tucson, AZ 85719, USA

2. Department of Mathematics, University of California, Los Angeles, CA 90095, USA

* Corresponding author: Longxiu Huang

Received: January 2020

Fund Project: The first author is partially supported by the National Science Foundation TRIPODS program, grant NSF CCF-1740858. The second author is partially supported by NSF CAREER DMS-1348721 and NSF BIGDATA 1740325.

This article studies how to form CUR decompositions of low-rank matrices, primarily via random sampling, though deterministic methods from prior works are illustrated as well. The central problem is to determine when a column submatrix of a rank $k$ matrix also has rank $k$. For random column sampling schemes, there is typically a tradeoff between the number of columns that must be chosen and the complexity of determining the sampling probabilities. We discuss several sampling methods and their complexities, as well as the stability of the methods under perturbations of both the probabilities and the underlying matrix. As an application, we give a high-probability guarantee of exact solution of the Subspace Clustering Problem via CUR decompositions when columns are sampled according to their Euclidean lengths.
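To make the length-based sampling in the abstract concrete, here is a minimal numpy sketch (ours, not code from the paper; the function name is hypothetical). It draws columns and rows with probabilities proportional to their squared Euclidean lengths and forms the CUR factors $C$, $U$, $R$:

```python
import numpy as np

def cur_column_length_sampling(A, c, r, seed=None):
    """Draw c columns and r rows of A with probabilities proportional to
    their squared Euclidean lengths; return the CUR factors C, U, R."""
    rng = np.random.default_rng(seed)
    p_cols = np.sum(A**2, axis=0); p_cols /= p_cols.sum()  # squared column lengths
    p_rows = np.sum(A**2, axis=1); p_rows /= p_rows.sum()  # squared row lengths
    J = rng.choice(A.shape[1], size=c, replace=False, p=p_cols)
    I = rng.choice(A.shape[0], size=r, replace=False, p=p_rows)
    return A[:, J], A[np.ix_(I, J)], A[I, :]  # C, U (intersection), R

# Rank-5 test matrix: if rank(U) = rank(A), then A = C pinv(U) R exactly.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
C, U, R = cur_column_length_sampling(A, c=30, r=30, seed=1)
err = np.linalg.norm(A - C @ np.linalg.pinv(U) @ R) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")  # ~ machine precision
```

The exactness check in the example relies on the fact, discussed in the paper and in [20], that $A = CU^\dagger R$ whenever $\mathrm{rank}(U) = \mathrm{rank}(A)$; with 30 length-sampled columns and rows of a generic rank-5 matrix, this holds with overwhelming probability.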

Citation: Keaton Hamm, Longxiu Huang. Stability of sampling for CUR decompositions. Foundations of Data Science, doi: 10.3934/fods.2020006
References:
[1] A. Aldroubi, K. Hamm, A. B. Koku and A. Sekmen, CUR decompositions, similarity matrices, and subspace clustering, Frontiers in Applied Mathematics and Statistics, 4 (2019), 65. doi: 10.3389/fams.2018.00065.
[2] A. Aldroubi, A. Sekmen, A. B. Koku and A. F. Cakmak, Similarity matrix framework for data from union of subspaces, Applied and Computational Harmonic Analysis, 45 (2018), 425-435. doi: 10.1016/j.acha.2017.08.006.
[3] R. Basri and D. Jacobs, Lambertian reflectance and linear subspaces, IEEE Transactions on Pattern Analysis & Machine Intelligence, 25 (2003), 218-233. doi: 10.1109/ICCV.2001.937651.
[4] C. Boutsidis and D. P. Woodruff, Optimal CUR matrix decompositions, SIAM Journal on Computing, 46 (2017), 543-589. doi: 10.1137/140977898.
[5] S. Chaturantabut and D. C. Sorensen, Nonlinear model reduction via discrete empirical interpolation, SIAM Journal on Scientific Computing, 32 (2010), 2737-2764. doi: 10.1137/090766498.
[6] J. Chiu and L. Demanet, Sublinear randomized algorithms for skeleton decompositions, SIAM Journal on Matrix Analysis and Applications, 34 (2013), 1361-1383. doi: 10.1137/110852310.
[7] J. P. Costeira and T. Kanade, A multibody factorization method for independently moving objects, International Journal of Computer Vision, 29 (1998), 159-179. doi: 10.1023/A:1008000628999.
[8] S. Demko, Condition numbers of rectangular systems and bounds for generalized inverses, Linear Algebra and its Applications, 78 (1986), 199-206. doi: 10.1016/0024-3795(86)90024-8.
[9] J. Dongarra and F. Sullivan, Guest editors' introduction to the top 10 algorithms, Computing in Science & Engineering, 2 (2000), 22-23. doi: 10.1109/MCISE.2000.814652.
[10] P. Drineas, R. Kannan and M. W. Mahoney, Fast Monte Carlo algorithms for matrices. III: Computing a compressed approximate matrix decomposition, SIAM Journal on Computing, 36 (2006), 184-206. doi: 10.1137/S0097539704442702.
[11] P. Drineas and M. W. Mahoney, On the Nyström method for approximating a Gram matrix for improved kernel-based learning, Journal of Machine Learning Research, 6 (2005), 2153-2175, http://www.jmlr.org/papers/volume6/drineas05a/drineas05a.pdf.
[12] P. Drineas, M. W. Mahoney and S. Muthukrishnan, Relative-error CUR matrix decompositions, SIAM Journal on Matrix Analysis and Applications, 30 (2008), 844-881. doi: 10.1137/07070471X.
[13] E. Elhamifar and R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2013), 2765-2781. doi: 10.1109/TPAMI.2013.57.
[14] A. Gittens, The spectral norm error of the naive Nystrom extension, preprint, arXiv:1110.5305.
[15] A. Gittens and M. W. Mahoney, Revisiting the Nyström method for improved large-scale machine learning, Journal of Machine Learning Research, 17 (2016), Paper No. 117, 65 pp.
[16] G. H. Golub and C. F. van Loan, Matrix Computations, 4th edition, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 2013.
[17] L. Guttman, Enlargement methods for computing the inverse matrix, The Annals of Mathematical Statistics, 17 (1946), 336-343. doi: 10.1214/aoms/1177730946.
[18] R. Hadani and A. Singer, Representation theoretic patterns in three dimensional cryo-electron microscopy I: The intrinsic reconstitution algorithm, Annals of Mathematics (2), 174 (2011), 1219-1241. doi: 10.4007/annals.2011.174.2.11.
[19] K. Hamm and L. X. Huang, Perturbations of CUR decompositions, preprint, arXiv:1908.08101.
[20] K. Hamm and L. X. Huang, Perspectives on CUR decompositions, Applied and Computational Harmonic Analysis, 48 (2020), 1088-1099. doi: 10.1016/j.acha.2019.08.006.
[21] R. Kannan and S. Vempala, Randomized algorithms in numerical linear algebra, Acta Numerica, 26 (2017), 95-135. doi: 10.1017/S0962492917000058.
[22] G. C. Liu, Z. C. Lin, S. C. Yan, J. Sun, Y. Yu and Y. Ma, Robust recovery of subspace structures by low-rank representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2012), 171-184. doi: 10.1109/TPAMI.2012.88.
[23] M. W. Mahoney and P. Drineas, CUR matrix decompositions for improved data analysis, Proc. Natl. Acad. Sci. USA, 106 (2009), 697-702. doi: 10.1073/pnas.0803205106.
[24] R. Penrose, On best approximate solutions of linear matrix equations, Proc. Cambridge Philos. Soc., 52 (1956), 17-19. doi: 10.1017/S0305004100030929.
[25] F. Pourkamali-Anaraki and S. Becker, Improved fixed-rank Nyström approximation via QR decomposition: Practical and theoretical aspects, Neurocomputing, 363 (2019), 261-272. doi: 10.1016/j.neucom.2019.06.070.
[26] M. Rudelson, personal communication, 2019.
[27] M. Rudelson and R. Vershynin, Sampling from large matrices: An approach through geometric functional analysis, Journal of the ACM, 54 (2007), Art. 21, 19 pp. doi: 10.1145/1255443.1255449.
[28] D. C. Sorensen and M. Embree, A DEIM induced CUR factorization, SIAM Journal on Scientific Computing, 38 (2016), A1454-A1482. doi: 10.1137/140978430.
[29] J. A. Tropp, Column subset selection, matrix factorization, and eigenvalue optimization, in Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SIAM, Philadelphia, PA, 2009, 978-986. doi: 10.1137/1.9781611973068.106.
[30] M. Udell and A. Townsend, Why are big data matrices approximately low rank?, SIAM Journal on Mathematics of Data Science, 1 (2019), 144-160. doi: 10.1137/18M1183480.
[31] R. Vidal, Subspace clustering, IEEE Signal Processing Magazine, 28 (2011), 52-68. doi: 10.1109/MSP.2010.939739.
[32] S. Voronin and P.-G. Martinsson, Efficient algorithms for CUR and interpolative matrix decompositions, Advances in Computational Mathematics, 43 (2017), 495-516. doi: 10.1007/s10444-016-9494-8.
[33] T. Yang, L. Zhang, R. Jin and S. Zhu, An explicit sampling dependent spectral error bound for column subset selection, in International Conference on Machine Learning, 2015, 135-143, http://proceedings.mlr.press/v37/yanga15.pdf.



Table 1. Summary of sampling complexities for different algorithms.

| Sampling | # Rows | # Cols | Success Prob. | Complexity | Ref. |
| --- | --- | --- | --- | --- | --- |
| $p_i^\text{unif}, q_i^\text{unif}$ | $\frac{r}{\varepsilon^4\delta}\log(\frac{r}{\varepsilon^4\delta})$ | $\frac{r}{\varepsilon^4\delta}\log(\frac{r}{\varepsilon^4\delta})$ | $(1-2e^{-c\gamma^2/\delta})^2$ | $O(1)$ | Cor. 5.2 |
| $p_i^\text{col}, q_i^\text{col}$ | $\frac{r}{\varepsilon^4\delta}\log(\frac{r}{\varepsilon^4\delta})$ | $\frac{r}{\varepsilon^4\delta}\log(\frac{r}{\varepsilon^4\delta})$ | $(1-2e^{-c/\delta})^2$ | $O(mn)$ | Thm. 4.1 |
| $p_i^\text{col}, q_i^\text{col}$ | $r\kappa(A)^2(\log(k)+\frac{1}{\delta})$ | $r\kappa(A)^2(\log(k)+\frac{1}{\delta})$ | $(1-2e^{-c/\delta})^2$ | $O(mn)$ | Cor. 6.4 |
| $p_i^{\text{lev},k}, q_i^{\text{lev},k}$ | $\frac{k^2}{\varepsilon^2\delta}$ | $\frac{k^2}{\varepsilon^2\delta}$ | $1-e^{-1/\delta}$ | $O(\text{SVD}(A,k))$ | [12] |
| $p_i^{\text{lev},k}, q_i^{\text{lev},k}$ | $k\log(k)+\frac{k}{\delta}$ | $k\log(k)+\frac{k}{\delta}$ | $(1-2e^{-1/\delta})^2$ | $O(\text{SVD}(A,k))$ | [33] |
| DEIM-CUR | $k$ | $k$ | $1$ | $O(\text{SVD}(A,k)+k^4)$ | [28] |
| RRQR-CUR | $k$ | $k$ | $1$ | $O(\text{RRQR}(A))$ | [32] |
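The distributions in the first column of Table 1 are uniform sampling, squared-column-length sampling, and rank-$k$ leverage score sampling. As a sketch under standard definitions (the function below is ours, for illustration only), all three can be computed for the columns of $A$ as follows; the row distributions $p_i$ are analogous:

```python
import numpy as np

def column_sampling_distributions(A, k):
    """Uniform, squared-column-length, and rank-k leverage score
    distributions over the columns of A (rows are analogous)."""
    m, n = A.shape
    q_unif = np.full(n, 1.0 / n)                     # q_i^unif = 1/n
    lengths = np.sum(A**2, axis=0)
    q_col = lengths / lengths.sum()                  # q_i^col, one pass over A
    # Leverage scores: squared column norms of the top-k right singular
    # vector matrix V_k^T; the scores sum to k, so dividing by k gives
    # a probability distribution.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    q_lev = np.sum(Vt[:k, :]**2, axis=0) / k         # q_i^{lev,k}
    return q_unif, q_col, q_lev
```

The table's complexity column reflects exactly these computations: the uniform distribution is free, the column lengths require one pass over the $m \times n$ entries (hence $O(mn)$), and the leverage scores require a rank-$k$ truncated SVD.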
