PCA reduced Gaussian mixture models with applications in superresolution

    * Corresponding author: Johannes Hertrich
    Abstract. Despite the rapid development of computational hardware, the treatment of large and high-dimensional data sets is still a challenging problem. The contribution of this paper is twofold. First, we propose a Gaussian mixture model combined with a reduction of the dimensionality of the data in each component of the model by principal component analysis, which we call PCA-GMM. To learn the (low-dimensional) parameters of the mixture model, we propose an EM algorithm whose M-step requires the solution of constrained optimization problems. Fortunately, these constrained problems do not depend on the usually large number of samples and can be solved efficiently by an (inertial) proximal alternating linearized minimization algorithm. Second, we apply PCA-GMM to the superresolution of 2D and 3D material images, based on the approach of Sandeep and Jacob [26]. Numerical results confirm the moderate influence of the dimensionality reduction on the overall superresolution result.

    Mathematics Subject Classification: Primary: 62H12, 62H25, 62H35; Secondary: 65K05.
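
    The building block of PCA-GMM described above is a per-component PCA: each mixture component approximates high-dimensional patches by a mean plus a projection onto a $d$-dimensional subspace, with the residual treated as isotropic noise of level $\sigma$. A minimal numpy sketch of this single-component building block (toy data and all variable names are illustrative, not the paper's code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "patches": N points in n dimensions that really live near a d-dim subspace.
    N, n, d = 500, 16, 4
    latent = rng.normal(size=(N, d))
    mixing = rng.normal(size=(d, n))
    X = latent @ mixing + 0.01 * rng.normal(size=(N, n))

    # PCA of one component: center, then take the top-d right singular vectors.
    mu = X.mean(axis=0)
    Xc = X - mu
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    U = Vt[:d].T                      # n x d orthonormal basis, U^T U = I_d

    T = Xc @ U                        # low-dimensional representation (N x d)
    X_hat = mu + T @ U.T              # back-projection into R^n

    # The mean squared residual plays the role of the isotropic noise level sigma^2.
    sigma2 = np.mean((X - X_hat) ** 2)
    print(T.shape, float(sigma2))
    ```

    In PCA-GMM this reduction happens inside every component, so the EM updates only have to handle the $d$-dimensional representations rather than the full $n$-dimensional patches.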

    Figure 1.  Top: images used for estimating the mixture models. Bottom: ground truth for reconstruction. First column: material "FS"; second column: material "SiC Diamonds"; third column: the $\texttt{goldhill}$ image

    Figure 2.  Reconstructions of 2D low-resolution images. First row: ground truth; second row: low resolution; third row: reconstruction with GMM; fourth row: reconstruction with PCA-GMM and $d = 20$; fifth row: reconstruction with PCA-GMM and $d = 12$. The larger $d$ is, the closer the PCA-GMM result is to the GMM result

    Figure 3.  Histograms of the eigenvalues of $\Sigma_k$, $k = 1, \ldots, K$, for the PCA-GMM with fixed $\sigma = 0.02$ and $d = 20$

    Table 1.  PSNRs of the reconstructions of artificially downsampled 2D images using bicubic interpolation, a GMM, PCA-GMM for different choices of $d$, or HDDC. The magnification factor is $q \in \{2, 4\}$. PCA-GMM produces results almost as good as GMM, with a much lower dimensionality

                                       Magnification factor $q=2$      Magnification factor $q=4$
    Method                     $d$     FS      Diamonds  Goldhill      FS      Diamonds  Goldhill
    bicubic                    -       30.57   30.67     28.99         25.27   25.19     24.66
    GMM                        -       35.49   37.21     31.63         30.69   30.74     27.80
    PCA-GMM, $\sigma=0.02$     20      35.44   37.24     31.25         30.75   30.74     27.64
                               16      35.42   37.22     31.25         30.74   30.62     27.59
                               12      35.47   37.13     31.18         30.67   30.48     27.55
                               8       35.32   36.69     31.00         30.46   30.16     27.38
                               4       34.69   35.23     30.42         29.78   29.24     26.89
    PCA-GMM, learned $\sigma$  20      35.22   37.06     31.27         30.43   30.51     27.66
                               16      35.14   37.01     31.14         30.34   30.31     27.51
                               12      34.95   36.54     30.94         30.13   29.84     27.33
                               8       34.43   35.47     30.54         29.62   29.08     26.88
                               4       32.74   33.41     29.69         28.51   27.75     26.16
    HDDC [3]                   20      35.35   37.12     31.35         30.54   30.63     27.73
                               16      35.31   37.10     31.25         30.47   30.48     27.62
                               12      35.24   36.64     31.08         30.27   30.08     27.40
                               8       34.76   35.66     30.76         29.80   29.34     27.00
                               4       33.46   33.86     29.93         28.61   27.99     26.37
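
    The quality measure reported above is the peak signal-to-noise ratio between reconstruction and ground truth, $\mathrm{PSNR} = 10 \log_{10}(\mathrm{peak}^2 / \mathrm{MSE})$. A minimal numpy helper (an illustrative sketch for images scaled to $[0, 1]$; the paper does not specify its evaluation code):

    ```python
    import numpy as np

    def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
        """Peak signal-to-noise ratio in dB between two images of equal shape."""
        mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
        return float(10.0 * np.log10(peak ** 2 / mse))

    # A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB.
    ref = np.linspace(0.1, 0.8, 64).reshape(8, 8)
    est = ref + 0.1
    print(round(psnr(ref, est), 2))  # → 20.0
    ```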

    Table 2.  Average execution time (in seconds) of the E-step and M-step in the EM algorithm for estimating the parameters of the mixture models in the 2D experiments

    Magnification factor $q=2$, i.e. dimension $n=80$
                                       FS, $N=405769$    Diamonds, $N=405769$   Goldhill, $N=15625$
    Method                     $d$     E-step  M-step    E-step  M-step         E-step  M-step
    GMM                        -       10.91   0.06      10.91   0.06           0.44    0.06
    PCA-GMM, $\sigma=0.02$     20      7.25    0.74      7.42    0.57           0.28    0.54
                               12      6.58    0.59      6.53    0.51           0.25    0.46
                               4       6.18    0.56      6.17    0.52           0.24    0.48
    PCA-GMM, learned $\sigma$  20      7.28    0.54      7.41    0.54           0.28    0.54
                               12      6.59    0.47      6.53    0.45           0.25    0.47
                               4       6.20    0.47      6.17    0.44           0.24    0.51
    HDDC [3]                   20      7.27    0.27      7.44    0.27           0.28    0.26
                               12      6.64    0.26      6.64    0.26           0.25    0.26
                               4       6.27    0.27      6.23    0.26           0.24    0.26

    Magnification factor $q=4$, i.e. dimension $n=272$
                                       FS, $N=100489$    Diamonds, $N=100489$   Goldhill, $N=3721$
    Method                     $d$     E-step  M-step    E-step  M-step         E-step  M-step
    GMM                        -       17.15   0.06      17.11   0.06           0.90    0.06
    PCA-GMM, $\sigma=0.02$     20      8.65    3.54      8.68    2.03           0.44    1.83
                               12      8.17    2.73      8.15    1.99           0.42    1.75
                               4       7.95    2.10      7.94    2.49           0.41    1.91
    PCA-GMM, learned $\sigma$  20      8.65    1.92      8.70    1.83           0.44    1.87
                               12      8.17    1.99      8.17    1.74           0.42    1.74
                               4       7.94    2.17      7.93    1.76           0.41    1.72
    HDDC [3]                   20      8.65    1.53      8.71    1.54           0.44    1.52
                               12      8.16    1.54      8.14    1.53           0.42    1.52
                               4       7.95    1.54      7.96    1.54           0.41    1.52
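
    For orientation on what the timed steps do: the E-step computes the responsibilities $\gamma_{ik}$ of each component for each sample, and the M-step updates the weights, means, and covariances from them. A bare-bones numpy EM for a full-covariance GMM on toy data (a textbook sketch, not the paper's implementation, whose M-step additionally solves constrained problems with PALM):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: two well-separated 2D clusters.
    X = np.vstack([rng.normal(-3.0, 0.5, size=(200, 2)),
                   rng.normal(+3.0, 0.5, size=(200, 2))])
    N, n = X.shape
    K = 2

    # Initialization: equal weights, two data points as means, identity covariances.
    alphas = np.full(K, 1.0 / K)
    mus = X[[0, -1]].copy()
    Sigmas = np.stack([np.eye(n)] * K)

    def log_gauss(X, mu, Sigma):
        """Log-density of N(mu, Sigma), evaluated row-wise via a Cholesky factor."""
        diff = X - mu
        L = np.linalg.cholesky(Sigma)
        sol = np.linalg.solve(L, diff.T)
        return (-0.5 * np.sum(sol ** 2, axis=0)
                - np.sum(np.log(np.diag(L))) - 0.5 * n * np.log(2 * np.pi))

    for _ in range(30):
        # E-step: responsibilities gamma[i, k] ∝ alpha_k N(x_i; mu_k, Sigma_k).
        log_p = np.stack([np.log(alphas[k]) + log_gauss(X, mus[k], Sigmas[k])
                          for k in range(K)], axis=1)
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stabilization
        gamma = np.exp(log_p)
        gamma /= gamma.sum(axis=1, keepdims=True)

        # M-step: closed-form updates of weights, means, and covariances.
        Nk = gamma.sum(axis=0)
        alphas = Nk / N
        mus = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mus[k]
            Sigmas[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(n)

    print(np.sort(mus[:, 0]))  # cluster means near -3 and +3
    ```

    The timings above reflect that the E-step scales with the number of samples $N$ while the closed-form M-step does not; in PCA-GMM the M-step is more expensive because of the inner constrained optimization, but the E-step shrinks with the reduced dimension $d$.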

    Table 3.  PSNRs of the reconstructions of artificially downsampled 3D images using nearest neighbor interpolation, a GMM, PCA-GMM for different choices of $d$, or HDDC. The magnification factor is $q = 2$. As in the 2D case, PCA-GMM with small $d$ produces results almost as good as GMM, but with a much lower dimensionality

    Method                     $d$     FS      Diamonds
    Nearest neighbor           -       30.10   26.25
    GMM                        -       33.32   30.71
    PCA-GMM, $\sigma=0.02$     60      33.38   30.83
                               40      33.36   30.75
                               20      33.25   30.17
    HDDC [3]                   60      33.23   30.49
                               40      33.24   30.29
                               20      33.02   29.47

    Table 4.  Average execution time (in seconds) of the E-step and M-step in the EM algorithm for estimating the parameters of the mixture models in the 3D experiments

                                       FS                  Diamonds
    Method                     $d$     E-step   M-step     E-step   M-step
    GMM                        -       717.91   0.07       718.13   0.07
    PCA-GMM, $\sigma=0.02$     60      338.22   12.29      337.44   17.49
                               40      327.34   9.73       324.93   13.87
                               20      320.00   7.85       319.46   9.80
    HDDC [3]                   60      337.29   4.15       337.42   4.16
                               40      327.11   4.19       324.95   4.15
                               20      320.03   4.20       319.07   4.15
    References

    [1] H. Attouch and J. Bolte, On the convergence of the proximal algorithm for nonsmooth functions involving analytic features, Math. Program., 116 (2009), 5-16. doi: 10.1007/s10107-007-0133-5.
    [2] J. Bolte, S. Sabach and M. Teboulle, Proximal alternating linearized minimization for nonconvex and nonsmooth problems, Math. Program., 146 (2014), 459-494. doi: 10.1007/s10107-013-0701-9.
    [3] C. Bouveyron, S. Girard and C. Schmid, High-dimensional data clustering, Comput. Statist. Data Anal., 52 (2007), 502-519. doi: 10.1016/j.csda.2007.02.009.
    [4] C. L. Byrne, The EM Algorithm: Theory, Applications and Related Methods, Lecture Notes, University of Massachusetts, 2017.
    [5] S. Chrétien and A. O. Hero, Kullback proximal algorithms for maximum-likelihood estimation, IEEE Trans. Inform. Theory, 46 (2000), 1800-1810. doi: 10.1109/18.857792.
    [6] S. Chrétien and A. O. Hero, On EM algorithms and their proximal generalizations, ESAIM Probab. Stat., 12 (2008), 308-326. doi: 10.1051/ps:2007041.
    [7] C. A. Deledalle, J. Salmon and A. Dalalyan, Image denoising with patch based PCA: Local versus global, Proceedings of the British Machine Vision Conference, 25 (2011), 1-25. doi: 10.5244/C.25.25.
    [8] A. P. Dempster, N. M. Laird and D. B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. Roy. Statist. Soc. Ser. B (Methodological), 39 (1977), 1-38. doi: 10.1111/j.2517-6161.1977.tb01600.x.
    [9] J. H. Fitschen, K. Losch and G. Steidl, Unsupervised multi class segmentation of 3D images with intensity inhomogeneities, Journal of Visual Communication and Image Representation, 46 (2017), 312-323. doi: 10.1016/j.jvcir.2017.04.011.
    [10] M. Hasannasab, J. Hertrich, F. Laus and G. Steidl, Alternatives to the EM algorithm for ML estimation of location, scatter matrix, and degree of freedom of the Student $t$ distribution, Numer. Algorithms, 87 (2021), 77-118. doi: 10.1007/s11075-020-00959-w.
    [11] J. Hertrich and G. Steidl, Inertial stochastic PALM (iSPALM) and applications in machine learning, preprint, 2020, arXiv: 2005.02204.
    [12] N. J. Higham, Matrix nearness problems and applications, in Applications of Matrix Theory (Bradford, 1988), Inst. Math. Appl. Conf. Ser. New Ser., 22, Oxford Univ. Press, New York, 1989, 1-27. doi: 10.1093/imamat/22.1.1.
    [13] N. J. Higham and R. S. Schreiber, Fast polar decomposition of an arbitrary matrix, SIAM J. Sci. Statist. Comput., 11 (1990), 648-655. doi: 10.1137/0911038.
    [14] A. Houdard, C. Bouveyron and J. Delon, High-dimensional mixture models for unsupervised image denoising (HDMI), SIAM J. Imaging Sci., 11 (2018), 2815-2846. doi: 10.1137/17M1135694.
    [15] T. Klatzer, D. Soukup, E. Kobler, K. Hammernik and T. Pock, Trainable regularization for multi-frame superresolution, in Pattern Recognition, Lecture Notes in Comput. Sci., 10496, Springer, Cham, 2017, 90-100. doi: 10.1007/978-3-319-66709-6.
    [16] F. Laus, Statistical Analysis and Optimal Transport for Euclidean and Manifold-Valued Data, Ph.D. Thesis, TU Kaiserslautern, 2019.
    [17] E. Lehmann and H. Scheffé, Completeness, similar regions, and unbiased estimation: Part I, Sankhyā, 10 (1950), 305-340. doi: 10.1007/978-1-4614-1412-4_23.
    [18] S. Łojasiewicz, Une propriété topologique des sous-ensembles analytiques réels, in Les Équations aux Dérivées Partielles (Paris, 1962), Éditions du Centre National de la Recherche Scientifique, Paris, 1963, 87-89.
    [19] S. Łojasiewicz, Sur la géométrie semi- et sous-analytique, Ann. Inst. Fourier (Grenoble), 43 (1993), 1575-1595. doi: 10.5802/aif.1384.
    [20] G. J. McLachlan, D. Peel and R. Bean, Modelling high-dimensional data by mixtures of factor analyzers, Comput. Statist. Data Anal., 41 (2003), 379-388. doi: 10.1016/S0167-9473(02)00183-4.
    [21] X.-L. Meng and D. Van Dyk, The EM algorithm - an old folk-song sung to a fast new tune, J. Roy. Statist. Soc. Ser. B, 59 (1997), 511-567. doi: 10.1111/1467-9868.00082.
    [22] S. Neumayer, M. Nimmer, S. Setzer and G. Steidl, On the rotational invariant $l_1$-norm PCA, Linear Algebra Appl., 587 (2020), 243-270. doi: 10.1016/j.laa.2019.10.030.
    [23] S. Parameswaran, C. Deledalle, L. Denis and T. Q. Nguyen, Accelerating GMM-based patch priors for image restoration: Three ingredients for a 100x speed-up, IEEE Trans. Image Process., 28 (2019), 687-698. doi: 10.1109/TIP.2018.2866691.
    [24] K. Pearson, On lines and planes of closest fit to systems of points in space, Philosophical Magazine, 2 (1901), 559-572. doi: 10.1080/14786440109462720.
    [25] T. Pock and S. Sabach, Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems, SIAM J. Imaging Sci., 9 (2016), 1756-1787. doi: 10.1137/16M1064064.
    [26] P. Sandeep and T. Jacob, Single image super-resolution using a joint GMM method, IEEE Trans. Image Process., 25 (2016), 4233-4244.
    [27] C. Sutour, C.-A. Deledalle and J.-F. Aujol, Estimation of the noise level function based on a nonparametric detection of homogeneous image regions, SIAM J. Imaging Sci., 8 (2015), 2622-2661. doi: 10.1137/15M1012682.
    [28] M. E. Tipping and C. M. Bishop, Mixtures of probabilistic principal component analyzers, Neural Computation, 11 (1999), 443-482. doi: 10.1162/089976699300016728.
    [29] S. Vaucher, P. Unifantowicz, C. Ricard, L. Dubois, M. Kuball, J.-M. Catala-Civera, D. Bernard, M. Stampanoni and R. Nicula, On-line tools for microscopic and macroscopic monitoring of microwave processing, Physica B: Condensed Matter, 398 (2007), 191-195.
    [30] M. Xiao, S. Zheng, C. Liu, Y. Wang, D. He, G. Ke, J. Bian, Z. Lin and T.-Y. Liu, Invertible image rescaling, preprint, 2020, arXiv: 2005.05650.
    [31] J.-H. Zhao and P. L. H. Yu, Fast ML estimation for the mixture of factor analyzers via an ECM algorithm, IEEE Trans. Neural Networks, 19 (2008), 1956-1961.
    [32] D. Zoran and Y. Weiss, From learning models of natural image patches to whole image restoration, in 2011 International Conference on Computer Vision (ICCV), IEEE, 2011, 479-486. doi: 10.1109/ICCV.2011.6126278.