Inverse Problems and Imaging, April 2022, 16(2): 341-366. doi: 10.3934/ipi.2021053

PCA reduced Gaussian mixture models with applications in superresolution

1. TU Berlin, Straße des 17. Juni 136, D-10623 Berlin, Germany
2. Univ. Bordeaux, Bordeaux INP, CNRS, IMB, UMR 5251, F-33400 Talence, France
3. Univ. Bordeaux, Bordeaux INP, CNRS, IMS, UMR 5218, F-33400 Talence, France
4. CNRS, Univ. Bordeaux, Bordeaux INP, ICMCB, UMR 5026, F-33600 Pessac, France
5. TU Berlin, Straße des 17. Juni 136, D-10623 Berlin, Germany

* Corresponding author: Johannes Hertrich

Received: September 2020. Revised: May 2021. Early access: September 2021. Published: April 2022.

Despite the rapid development of computational hardware, the treatment of large, high-dimensional data sets remains challenging. The contribution of this paper to the topic is twofold. First, we propose a Gaussian mixture model in conjunction with a reduction of the dimensionality of the data in each component of the model by principal component analysis (PCA), which we call PCA-GMM. To learn the (low-dimensional) parameters of the mixture model, we propose an EM algorithm whose M-step requires the solution of constrained optimization problems. Fortunately, these constrained problems do not depend on the usually large number of samples and can be solved efficiently by an (inertial) proximal alternating linearized minimization (PALM) algorithm. Second, we apply PCA-GMM to the superresolution of 2D and 3D material images, building on the approach of Sandeep and Jacob. Numerical results confirm that the dimensionality reduction has only a moderate influence on the overall superresolution quality.
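To make the model class concrete, the following is a minimal, self-contained sketch of an EM loop for a Gaussian mixture whose component covariances are restricted to a rank-$d$ principal subspace plus an isotropic remainder, in the spirit of PCA-GMM and of mixtures of probabilistic principal component analyzers [28]. It is an illustration under a simplifying assumption, not the authors' algorithm: the constrained M-step is handled here by plain eigenvalue truncation rather than by the paper's (inertial) PALM scheme, and all names and the initialization are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_pca_reduced_gmm(X, K, d, n_iter=50, seed=0):
    """EM for a GMM whose component covariances are restricted to a
    rank-d principal subspace plus an isotropic remainder.
    Illustrative surrogate only: the paper's PCA-GMM solves the
    constrained M-step with an (inertial) PALM algorithm instead of
    the eigenvalue truncation used here."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    N, n = X.shape
    alphas = np.full(K, 1.0 / K)                       # mixture weights
    mus = X[rng.choice(N, size=K, replace=False)]      # component means
    Sigmas = np.stack([np.cov(X.T) + 1e-6 * np.eye(n) for _ in range(K)])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k], computed in log space
        log_r = np.stack(
            [np.log(alphas[k]) + multivariate_normal.logpdf(X, mus[k], Sigmas[k])
             for k in range(K)], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: standard GMM updates for weights and means ...
        Nk = r.sum(axis=0)
        alphas = Nk / N
        mus = (r.T @ X) / Nk[:, None]
        for k in range(K):
            Xc = X - mus[k]
            S = (r[:, k, None] * Xc).T @ Xc / Nk[k]
            # ... then a projection onto the constraint set: keep the
            # top-d eigenpairs and average the remaining spectrum into
            # a single isotropic variance sigma^2.
            w, V = np.linalg.eigh(S)                   # ascending order
            U, lam = V[:, -d:], w[-d:]                 # principal subspace
            sigma2 = max(w[:-d].mean(), 1e-8)          # residual variance
            Sigmas[k] = (U * lam) @ U.T + sigma2 * (np.eye(n) - U @ U.T)
    return alphas, mus, Sigmas
```

For image superresolution, the rows of `X` would be vectorized patches, and `d` plays the same role as the subspace dimension reported in the tables below.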

Citation: Johannes Hertrich, Dang-Phuong-Lan Nguyen, Jean-François Aujol, Dominique Bernard, Yannick Berthoumieu, Abdellatif Saadaldin, Gabriele Steidl. PCA reduced Gaussian mixture models with applications in superresolution. Inverse Problems and Imaging, 2022, 16(2): 341-366. doi: 10.3934/ipi.2021053
References:
[1]

H. Attouch and J. Bolte, On the convergence of the proximal algorithm for nonsmooth functions involving analytic features, Math. Program., 116 (2009), 5-16.  doi: 10.1007/s10107-007-0133-5.

[2]

J. Bolte, S. Sabach and M. Teboulle, Proximal alternating linearized minimization for nonconvex and nonsmooth problems, Math. Program., 146 (2014), 459-494. doi: 10.1007/s10107-013-0701-9.

[3]

C. Bouveyron, S. Girard and C. Schmid, High-dimensional data clustering, Comput. Statist. Data Anal., 52 (2007), 502-519. doi: 10.1016/j.csda.2007.02.009.

[4]

C. L. Byrne, The EM Algorithm: Theory, Applications and Related Methods, Lecture Notes, University of Massachusetts, 2017.

[5]

S. Chrétien and A. O. Hero, Kullback proximal algorithms for maximum-likelihood estimation, IEEE Trans. Inform. Theory, 46 (2000), 1800-1810.  doi: 10.1109/18.857792.

[6]

S. Chrétien and A. O. Hero, On EM algorithms and their proximal generalizations, ESAIM Probab. Stat., 12 (2008), 308-326.  doi: 10.1051/ps:2007041.

[7]

C. A. Deledalle, J. Salmon and A. Dalalyan, Image denoising with patch based PCA: Local versus global, Proceedings of the British Machine Vision Conference, 25 (2011), 1-25. doi: 10.5244/C.25.25.

[8]

A. P. Dempster, N. M. Laird and D. B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. Roy. Statist. Soc. Ser. B (Methodological), 39 (1977), 1-38. doi: 10.1111/j.2517-6161.1977.tb01600.x.

[9]

J. H. Fitschen, K. Losch and G. Steidl, Unsupervised multi class segmentation of 3D images with intensity inhomogeneities, Journal of Visual Communication and Image Representation, 46 (2017), 312-323. doi: 10.1016/j.jvcir.2017.04.011.

[10]

M. Hasannasab, J. Hertrich, F. Laus and G. Steidl, Alternatives to the EM algorithm for ML estimation of location, scatter matrix, and degree of freedom of the Student t distribution, Numer. Algorithms, 87 (2021), 77-118. doi: 10.1007/s11075-020-00959-w.

[11]

J. Hertrich and G. Steidl, Inertial stochastic PALM (iSPALM) and applications in machine learning, preprint, 2020, arXiv: 2005.02204.

[12]

N. J. Higham, Matrix nearness problems and applications, In Applications of Matrix Theory (Bradford, 1988), Inst. Math. Appl. Conf. Ser. New Ser., 22, Oxford Univ. Press, New York, 1989, 1-27. doi: 10.1093/imamat/22.1.1.

[13]

N. J. Higham and R. S. Schreiber, Fast polar decomposition of an arbitrary matrix, SIAM J. Sci. Statist. Comput., 11 (1990), 648-655.  doi: 10.1137/0911038.

[14]

A. Houdard, C. Bouveyron and J. Delon, High-dimensional mixture models for unsupervised image denoising (HDMI), SIAM J. Imaging Sci., 11 (2018), 2815-2846. doi: 10.1137/17M1135694.

[15]

T. Klatzer, D. Soukup, E. Kobler, K. Hammernik and T. Pock, Trainable regularization for multi-frame superresolution, in Pattern Recognition, Lecture Notes in Comput. Sci., 10496, Springer, Cham, 2017, 90-100. doi: 10.1007/978-3-319-66709-6.

[16]

F. Laus, Statistical Analysis and Optimal Transport for Euclidean and Manifold-Valued Data, Ph.D. Thesis, TU Kaiserslautern, 2019.

[17]

E. Lehmann and H. Scheffé, Completeness, similar regions, and unbiased estimation: Part I, Sankhyā, 10 (1950), 305-340. doi: 10.1007/978-1-4614-1412-4_23.

[18]

S. Łojasiewicz, Une propriété topologique des sous-ensembles analytiques réels, In Les Équations aux Dérivées Partielles (Paris, 1962), Éditions du Centre National de la Recherche Scientifique, Paris, 1963, 87-89.

[19]

S. Łojasiewicz, Sur la géométrie semi- et sous-analytique, Ann. Inst. Fourier (Grenoble), 43 (1993), 1575-1595.  doi: 10.5802/aif.1384.

[20]

G. J. McLachlan, D. Peel and R. Bean, Modelling high-dimensional data by mixtures of factor analyzers, Comput. Statist. Data Anal., 41 (2003), 379-388. doi: 10.1016/S0167-9473(02)00183-4.

[21]

X.-L. Meng and D. Van Dyk, The EM algorithm - an old folk-song sung to a fast new tune, J. Roy. Statist. Soc. Ser. B, 59 (1997), 511-567.  doi: 10.1111/1467-9868.00082.

[22]

S. Neumayer, M. Nimmer, S. Setzer and G. Steidl, On the rotational invariant $l_1$-norm PCA, Linear Algebra Appl., 587 (2020), 243-270. doi: 10.1016/j.laa.2019.10.030.

[23]

S. Parameswaran, C. Deledalle, L. Denis and T. Q. Nguyen, Accelerating GMM-based patch priors for image restoration: Three ingredients for a 100x speed-up, IEEE Trans. Image Process., 28 (2019), 687-698. doi: 10.1109/TIP.2018.2866691.

[24]

K. Pearson, On lines and planes of closest fit to systems of points in space, Philosophical Magazine, 2 (1901), 559-572.  doi: 10.1080/14786440109462720.

[25]

T. Pock and S. Sabach, Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems, SIAM J. Imaging Sci., 9 (2016), 1756-1787.  doi: 10.1137/16M1064064.

[26]

P. Sandeep and T. Jacob, Single image super-resolution using a joint GMM method, IEEE Trans. Image Process., 25 (2016), 4233-4244. 

[27]

C. Sutour, C.-A. Deledalle and J.-F. Aujol, Estimation of the noise level function based on a nonparametric detection of homogeneous image regions, SIAM J. Imaging Sci., 8 (2015), 2622-2661. doi: 10.1137/15M1012682.

[28]

M. E. Tipping and C. M. Bishop, Mixtures of probabilistic principal component analyzers, Neural Computation, 11 (1999), 443-482.  doi: 10.1162/089976699300016728.

[29]

S. Vaucher, P. Unifantowicz, C. Ricard, L. Dubois, M. Kuball, J.-M. Catala-Civera, D. Bernard, M. Stampanoni and R. Nicula, On-line tools for microscopic and macroscopic monitoring of microwave processing, Physica B: Condensed Matter, 398 (2007), 191-195.

[30]

M. Xiao, S. Zheng, C. Liu, Y. Wang, D. He, G. Ke, J. Bian, Z. Lin and T.-Y. Liu, Invertible image rescaling, preprint, 2020, arXiv: 2005.05650.

[31]

J.-H. Zhao and P. L. H. Yu, Fast ML estimation for the mixture of factor analyzers via an ECM algorithm, IEEE Trans. Neural Networks, 19 (2008), 1956-1961.

[32]

D. Zoran and Y. Weiss, From learning models of natural image patches to whole image restoration, in 2011 International Conference on Computer Vision (ICCV), IEEE, 2011, 479-486. doi: 10.1109/ICCV.2011.6126278.


Figure 1.  Top: Images for estimating the mixture models. Bottom: Ground truth for reconstruction. First column: Material "FS", second column: Material "SiC Diamonds", third column: $\texttt{goldhill}$ image
Figure 2.  Reconstructions of 2D low-resolution images. First row: ground truth, second row: low-resolution input, third row: reconstruction with GMM, fourth row: reconstruction with PCA-GMM and $ d = 20 $, fifth row: reconstruction with PCA-GMM and $ d = 12 $. The larger $ d $ is, the closer the PCA-GMM result is to the GMM result
Figure 3.  Histograms of the eigenvalues of $ \Sigma_k $, $ k = 1, \dots, K $, for the PCA-GMM with fixed $ \sigma = 0.02 $ and $ d = 20 $
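Eigenvalue histograms like those in Figure 3 can be produced directly from the fitted component covariances; below is a minimal sketch, in which `Sigmas` is a stand-in for the fitted $\Sigma_k$ (random positive semi-definite placeholders are used here so the snippet runs on its own).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
K, n = 4, 80
# Placeholder PSD matrices; in practice, use the covariances Sigma_k
# of the fitted (PCA-)GMM instead.
Sigmas = []
for _ in range(K):
    A = rng.standard_normal((n, n))
    Sigmas.append(A @ A.T / n)

for k, S in enumerate(Sigmas, start=1):
    plt.hist(np.linalg.eigvalsh(S), bins=30, alpha=0.5, label=f"k={k}")
plt.xlabel("eigenvalue of Sigma_k")
plt.ylabel("count")
plt.legend()
plt.show()
```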
Table 1.  PSNRs of the reconstructions of artificially downsampled 2D images using bicubic interpolation, a GMM, PCA-GMM for different choices of $ d $, or HDDC [3]. The magnification factor is $ q\in\{2, 4\} $. PCA-GMM produces results almost as good as GMM, with a much lower dimensionality
Method                      $d$ |  Magnification factor $q=2$   |  Magnification factor $q=4$
                                |  FS      Diamonds  Goldhill   |  FS      Diamonds  Goldhill
bicubic                     -   |  30.57   30.67     28.99      |  25.27   25.19     24.66
GMM                         -   |  35.49   37.21     31.63      |  30.69   30.74     27.80
PCA-GMM, $\sigma=0.02$      20  |  35.44   37.24     31.25      |  30.75   30.74     27.64
                            16  |  35.42   37.22     31.25      |  30.74   30.62     27.59
                            12  |  35.47   37.13     31.18      |  30.67   30.48     27.55
                            8   |  35.32   36.69     31.00      |  30.46   30.16     27.38
                            4   |  34.69   35.23     30.42      |  29.78   29.24     26.89
PCA-GMM, learned $\sigma$   20  |  35.22   37.06     31.27      |  30.43   30.51     27.66
                            16  |  35.14   37.01     31.14      |  30.34   30.31     27.51
                            12  |  34.95   36.54     30.94      |  30.13   29.84     27.33
                            8   |  34.43   35.47     30.54      |  29.62   29.08     26.88
                            4   |  32.74   33.41     29.69      |  28.51   27.75     26.16
HDDC [3]                    20  |  35.35   37.12     31.35      |  30.54   30.63     27.73
                            16  |  35.31   37.10     31.25      |  30.47   30.48     27.62
                            12  |  35.24   36.64     31.08      |  30.27   30.08     27.40
                            8   |  34.76   35.66     30.76      |  29.80   29.34     27.00
                            4   |  33.46   33.86     29.93      |  28.61   27.99     26.37
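For reference, the PSNR used throughout the tables follows the standard definition; here is a minimal sketch, assuming image intensities scaled to $[0,1]$ (an assumption on our part, since the normalization is not stated on this page).

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstruction, float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# Example: a slightly perturbed copy of a random "image"
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.01 * rng.standard_normal(img.shape), 0, 1)
print(f"{psnr(img, noisy):.2f} dB")
```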
Table 2.  Average execution time (in seconds) for the E-step and M-step in the EM algorithm for estimating the parameters of the mixture models
Magnification factor $q=2$, i.e. dimension $n=80$:

Method                      $d$ |  FS, $N=405769$    |  Diamonds, $N=405769$ |  Goldhill, $N=15625$
                                |  E-step   M-step   |  E-step   M-step      |  E-step   M-step
GMM                         -   |  10.91    0.06     |  10.91    0.06        |  0.44     0.06
PCA-GMM, $\sigma=0.02$      20  |  7.25     0.74     |  7.42     0.57        |  0.28     0.54
                            12  |  6.58     0.59     |  6.53     0.51        |  0.25     0.46
                            4   |  6.18     0.56     |  6.17     0.52        |  0.24     0.48
PCA-GMM, learned $\sigma$   20  |  7.28     0.54     |  7.41     0.54        |  0.28     0.54
                            12  |  6.59     0.47     |  6.53     0.45        |  0.25     0.47
                            4   |  6.20     0.47     |  6.17     0.44        |  0.24     0.51
HDDC [3]                    20  |  7.27     0.27     |  7.44     0.27        |  0.28     0.26
                            12  |  6.64     0.26     |  6.64     0.26        |  0.25     0.26
                            4   |  6.27     0.27     |  6.23     0.26        |  0.24     0.26

Magnification factor $q=4$, i.e. dimension $n=272$:

Method                      $d$ |  FS, $N=100489$    |  Diamonds, $N=100489$ |  Goldhill, $N=3721$
                                |  E-step   M-step   |  E-step   M-step      |  E-step   M-step
GMM                         -   |  17.15    0.06     |  17.11    0.06        |  0.90     0.06
PCA-GMM, $\sigma=0.02$      20  |  8.65     3.54     |  8.68     2.03        |  0.44     1.83
                            12  |  8.17     2.73     |  8.15     1.99        |  0.42     1.75
                            4   |  7.95     2.10     |  7.94     2.49        |  0.41     1.91
PCA-GMM, learned $\sigma$   20  |  8.65     1.92     |  8.70     1.83        |  0.44     1.87
                            12  |  8.17     1.99     |  8.17     1.74        |  0.42     1.74
                            4   |  7.94     2.17     |  7.93     1.76        |  0.41     1.72
HDDC [3]                    20  |  8.65     1.53     |  8.71     1.54        |  0.44     1.52
                            12  |  8.16     1.54     |  8.14     1.53        |  0.42     1.52
                            4   |  7.95     1.54     |  7.96     1.54        |  0.41     1.52
Table 3.  PSNRs of the reconstructions of artificially downsampled 3D images using nearest neighbor interpolation, a GMM, PCA-GMM for different choices of $ d $, or HDDC [3]. The magnification factor is $ q = 2 $. As in the 2D case, PCA-GMM with small $ d $ produces results almost as good as GMM, but with a much lower dimensionality
Method                      $d$ |  FS      Diamonds
Nearest neighbor            -   |  30.10   26.25
GMM                         -   |  33.32   30.71
PCA-GMM, $\sigma=0.02$      60  |  33.38   30.83
                            40  |  33.36   30.75
                            20  |  33.25   30.17
HDDC [3]                    60  |  33.23   30.49
                            40  |  33.24   30.29
                            20  |  33.02   29.47
Table 4.  Average execution time (in seconds) of the E-step and M-step in the EM algorithm for estimating the parameters of the mixture models
Method                      $d$ |  FS                |  Diamonds
                                |  E-step   M-step   |  E-step   M-step
GMM                         -   |  717.91   0.07     |  718.13   0.07
PCA-GMM, $\sigma=0.02$      60  |  338.22   12.29    |  337.44   17.49
                            40  |  327.34   9.73     |  324.93   13.87
                            20  |  320.00   7.85     |  319.46   9.80
HDDC [3]                    60  |  337.29   4.15     |  337.42   4.16
                            40  |  327.11   4.19     |  324.95   4.15
                            20  |  320.03   4.20     |  319.07   4.15