Inverse Problems & Imaging, August 2017, 11(4): 643-661. doi: 10.3934/ipi.2017030

Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery

Yun Cai 1 and Song Li 2,*

1. Department of Mathematics, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210023, China

2. School of Mathematical Science, Zhejiang University, Hangzhou, Zhejiang 310027, China

* Corresponding author: songli@zju.edu.cn

Received November 2015; Revised April 2017; Published June 2017

Fund Project: This work was supported by the National Natural Science Foundation of China under Grant Nos. 11626133 and 11531013.

In this paper, we study the theoretical properties of the iteratively reweighted least squares algorithm for matrix recovery (IRLS-M for short) from noisy linear measurements. The IRLS-M was proposed by Fornasier et al. (2011) [17] for solving nuclear norm minimization and by Mohan et al. (2012) [31] for solving Schatten-$p$ (quasi-)norm minimization ($0 < p \leq 1$) in the noiseless case, building on the iteratively reweighted least squares algorithm for sparse signal recovery (IRLS for short) (Daubechies et al., 2010) [15]; numerical experiments demonstrating its efficiency were given in [17] and [31]. Here we focus on the convergence and stability of the iteratively reweighted least squares algorithm for low-rank matrix recovery in the presence of noise. The convergence of IRLS-M is proved rigorously for all $0 < p \leq 1$. Furthermore, when the measurement map $\mathcal{A}$ satisfies the matrix restricted isometry property (M-RIP for short), we show that IRLS-M is stable for $0 < p \leq 1$. In particular, when $p = 1$, we prove that the M-RIP constant $\delta_{2r} < \sqrt{2}-1$ is sufficient for IRLS-M to recover an unknown (approximately) low-rank matrix with an error proportional to the noise level. The simplicity of IRLS-M, together with the theoretical guarantees provided in this paper, makes a compelling case for its adoption as a standard tool for low-rank matrix recovery.
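As a concrete illustration of the algorithm analyzed here, the following sketch implements a basic noiseless IRLS-M iteration with the Schatten-$p$ weight $W_n = (X_n X_n^T + \epsilon_n^2 I)^{p/2-1}$, in the spirit of [17] and [31]. This is a minimal sketch, not the authors' exact method: the function name `irls_m`, the fixed geometric $\epsilon$ schedule, and the dense vectorized solve are illustrative simplifications (the cited papers tie $\epsilon_n$ to the $(r+1)$-st singular value of the iterate and exploit structure for efficiency).

```python
import numpy as np

def irls_m(A, b, d1, d2, p=1.0, n_iter=50, eps0=1.0):
    """Minimal IRLS-M sketch: iterate the weighted least squares problem
        min_X tr(X^T W_n X)  s.t.  A vec(X) = b,
    with weight W_n = (X_n X_n^T + eps_n^2 I)^{p/2 - 1}.

    A is an (m, d1*d2) matrix acting on the column-major vectorization of X.
    """
    X = np.zeros((d1, d2))
    eps = eps0
    for _ in range(n_iter):
        # Build W_n^{-1} = U diag((sigma_i^2 + eps^2)^{1 - p/2}) U^T from the SVD of X.
        U, s, _ = np.linalg.svd(X, full_matrices=True)
        sigma = np.zeros(d1)
        sigma[: s.size] = s
        W_inv = (U * (sigma**2 + eps**2) ** (1.0 - p / 2.0)) @ U.T
        # tr(X^T W X) = vec(X)^T (I kron W) vec(X), so Q^{-1} = I kron W^{-1}.
        Q_inv = np.kron(np.eye(d2), W_inv)
        # Closed-form minimizer of x^T Q x subject to A x = b.
        x = Q_inv @ A.T @ np.linalg.solve(A @ Q_inv @ A.T, b)
        X = x.reshape(d1, d2, order="F")
        # Simplified smoothing schedule (illustrative; [17] uses an adaptive rule).
        eps = max(0.9 * eps, 1e-6)
    return X
```

With $p = 1$ each step minimizes a quadratic upper bound on a smoothed nuclear-norm surrogate, and every iterate is feasible by construction, so the constraint residual stays at the level of the linear solve.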

Citation: Yun Cai, Song Li. Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery. Inverse Problems & Imaging, 2017, 11 (4) : 643-661. doi: 10.3934/ipi.2017030
References:
[1]

D. Ba, B. Babadi, P. L. Purdon and E. N. Brown, Convergence and stability of iteratively re-weighted least squares algorithms, IEEE Trans. Signal. Process., 62 (2014), 183-195.  doi: 10.1109/TSP.2013.2287685.  Google Scholar

[2]

R. Basri and D. Jacobs, Lambertian reflectance and linear subspaces, IEEE Trans. Pattern Anal. Mach. Intell., 25 (2003), 218-233.  doi: 10.1109/ICCV.2001.937651.  Google Scholar

[3]

D. P. Bertsekas, A. Nedic and A. E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, 2003.  Google Scholar

[4]

S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. doi: 10.1017/CBO9780511804441.  Google Scholar

[5]

T. Cai and A. Zhang, Sparse representation of a polytope and recovery of sparse signals and low-rank matrices, IEEE Trans. Inform. Theory., 60 (2014), 122-132.  doi: 10.1109/TIT.2013.2288639.  Google Scholar

[6]

Y. Cai and S. Li, Convergence analysis of projected gradient descent for Schatten-$p$ nonconvex matrix recovery, Sci. China Math., 58 (2015), 845-858.  doi: 10.1007/s11425-014-4949-1.  Google Scholar

[7]

E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory., 51 (2005), 4203-4215.  doi: 10.1109/TIT.2005.858979.  Google Scholar

[8]

E. J. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Math. Acad. Sci. Paris. Ser. I, 346 (2008), 589-592.  doi: 10.1016/j.crma.2008.03.014.  Google Scholar

[9]

E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), 717-772.  doi: 10.1007/s10208-009-9045-5.  Google Scholar

[10]

E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory., 56 (2010), 2053-2080.  doi: 10.1109/TIT.2010.2044061.  Google Scholar

[11]

E. J. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98 (2010), 925-936.   Google Scholar

[12]

E. J. Candès and Y. Plan, Tight oracle bounds for low-rank recovery from a minimal number of random measurements, IEEE Trans. Inform. Theory., 57 (2011), 2342-2359.  doi: 10.1109/TIT.2011.2111771.  Google Scholar

[13]

E. J. Candès, Y. Eldar, T. Strohmer and V. Voroninski, Phase retrieval via matrix completion, SIAM J. Imaging Sci., 6 (2013), 199-225.  doi: 10.1137/110848074.  Google Scholar

[14]

R. Chartrand and W. Yin, Iteratively reweighted algorithms for compressive sensing, International Conference on Acoustics, Speech and Signal Processing, (2008), 3869-3872.   Google Scholar

[15]

I. Daubechies, R. DeVore, M. Fornasier and C. S. Güntürk, Iteratively reweighted least squares minimization for sparse recovery, Comm. Pure Appl. Math., 63 (2010), 1-38.  doi: 10.1002/cpa.20303.  Google Scholar

[16]

M. Fazel, Matrix Rank Minimization with Applications, Ph.D. thesis, Stanford University, 2002. Google Scholar

[17]

M. Fornasier, H. Rauhut and R. Ward, Low-rank matrix recovery via iteratively reweighted least squares minimization, SIAM J. Optim., 21 (2011), 1614-1640.  doi: 10.1137/100811404.  Google Scholar

[18]

S. Foucart and M. J. Lai, Sparsest solutions of underdetermined linear systems via $l_{q}$-minimization for $0 < q \leq 1$, Appl. Comput. Harmon. Anal., 26 (2009), 395-407.  doi: 10.1016/j.acha.2008.09.001.  Google Scholar

[19]

M. Grant and S. Boyd, Graph implementations for nonsmooth convex programs, in Recent Advances in Learning and Control (tribute to M. Vidyasagar) (eds. V. Blondel, S. Boyd and H. Kimura), Springer, 2008, 95–110. Google Scholar

[20]

M. Grant and S. Boyd, CVX: Matlab software for disciplined convex programming, Available from: http://web.stanford.edu/boyd/software.html. Google Scholar

[21]

D. Gross, Y. K. Liu, S. T. Flammia, S. Becker and J. Eisert, Quantum state tomography via compressed sensing, Phys. Rev. Lett., 105 (2010), 150401. doi: 10.1103/PhysRevLett.105.150401.  Google Scholar

[22]

D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory., 57 (2011), 1548-1566.  doi: 10.1109/TIT.2011.2104999.  Google Scholar

[23]

P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, (2013), 665-674.  doi: 10.1145/2488608.2488693.  Google Scholar

[24]

H. Ji, C. Liu, Z. W. Shen and Y. H. Xu, Robust video denoising using low rank matrix completion, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, (2010), 1548-1566.   Google Scholar

[25]

F. Krahmer and R. Ward, New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property, SIAM J. Math. Anal., 43 (2011), 1269-1281.  doi: 10.1137/100810447.  Google Scholar

[26]

M. J. Lai, Y. Xu and W. Yin, Improved iteratively reweighted least squares for unconstrained smoothed $l_{q}$ minimization, SIAM J. Numer. Anal., 51 (2013), 927-957.  doi: 10.1137/110840364.  Google Scholar

[27]

C. L. Lawson, Contributions to the Theory of Linear Least Maximum Approximation, Ph.D. thesis, University of California, Los Angeles, 1961.  Google Scholar

[28]

J. Lin and S. Li, Convergence of projected Landweber iteration for matrix rank minimization, Appl. Comput. Harmon. Anal., 36 (2014), 316-325.  doi: 10.1016/j.acha.2013.06.005.  Google Scholar

[29]

Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. Appl., 31 (2009), 1235-1256.  doi: 10.1137/090755436.  Google Scholar

[30]

K. Mohan and M. Fazel, Iterative reweighted least squares for matrix rank minimization, in Proc. 48th Allerton Conference on Communication, Control, and Computing, Allerton, IL, 2010. doi: 10.1109/ALLERTON.2010.5706969.  Google Scholar

[31]

K. Mohan and M. Fazel, Iterative reweighted algorithms for matrix rank minimization, J. Mach. Learn. Res., 13 (2012), 3441-3473.   Google Scholar

[32]

T. Morita and T. Kanade, A sequential factorization method for recovering shape and motion from image streams, IEEE Trans. Pattern Anal. Mach. Intell., 19 (1997), 858-867.  doi: 10.1109/34.608289.  Google Scholar

[33]

Netflix Prize, Available from: http://www.netflixprize.com/. Google Scholar

[34]

S. Oymak, K. Mohan, M. Fazel and B. Hassibi, A simplified approach to recovery conditions for low rank matrices, in IEEE International Symposium on Information Theory Proceedings (ISIT), (2011), 2318-2322. doi: 10.1109/ISIT.2011.6033976.  Google Scholar

[35]

B. Recht, M. Fazel and P. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), 471-501.  doi: 10.1137/070697835.  Google Scholar

[36]

B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12 (2011), 3413-3430.   Google Scholar

[37]

R. Saab and O. Yilmaz, Sparse recovery by non-convex optimization-instance optimality, Appl. Comput. Harmon. Anal., 29 (2010), 30-48.  doi: 10.1016/j.acha.2009.08.002.  Google Scholar

[38]

E. van den Berg and M. P. Friedlander, Sparse optimization with least-squares constraints, SIAM J. Optim., 21 (2011), 1201-1229.  doi: 10.1137/100785028.  Google Scholar

[39]

W. Wang, W. Xu and A. Tang, On the performance of sparse recovery via $l_{p}$-minimization, IEEE Trans. Inform. Theory., 57 (2011), 7255-7278.  doi: 10.1109/TIT.2011.2159959.  Google Scholar

[40]

K. Wei, J.-F. Cai, T. F. Chan and S. Leung, Guarantees of Riemannian optimization for low rank matrix recovery, SIAM J. Matrix Anal. Appl., 37 (2016), 1198-1222.  doi: 10.1137/15M1050525.  Google Scholar

[41]

M.-C. Yue and A. Man-Cho So, A perturbation inequality for concave functions of singular values and its applications in low-rank matrix recovery, Appl. Comput. Harmon. Anal., 40 (2016), 396-416.  doi: 10.1016/j.acha.2015.06.006.  Google Scholar

[42]

M. Zhang, Z. Huang and Y. Zhang, Restricted $p$-isometry properties of nonconvex matrix recovery, IEEE Trans. Inform. Theory., 59 (2013), 4316-4323.  doi: 10.1109/TIT.2013.2250577.  Google Scholar

[43]

Z. Zhou, J. Wright, X. Li, E. J. Candès and Y. Ma, Stable principal component pursuit, Proceedings of International Symposium on Information Theory, (2010). doi: 10.1109/ISIT.2010.5513535.  Google Scholar
