Communications on Pure & Applied Analysis, August 2020, 19(8): 3933-3945. doi: 10.3934/cpaa.2020173

Sparse generalized canonical correlation analysis via linearized Bregman method

Jia Cai, Junyi Huo

1. School of Statistics and Mathematics, Collaborative Innovation Development Center of Pearl River Delta Science & Technology Finance Industry, Guangdong University of Finance & Economics, Guangzhou, Guangdong 510320, China

2. School of Electronics and Computer Science, University of Southampton, University Road, Southampton SO17 1BJ, United Kingdom

* Corresponding author

Received: February 2019. Revised: August 2019. Published: May 2020.

Fund Project: The work described in this paper was supported in part by the National Natural Science Foundation of China (11871167, 11671171), the Science and Technology Program of Guangzhou (201707010228), the Project of the Collaborative Innovation Development Center of Pearl River Delta Science & Technology Finance Industry (19XT01), the National Social Science Foundation (19AJY027), the Guangdong Province Innovative Team Program (2017WCXTD004), the Special Support Plan for High-Level Talents of Guangdong Province (2019TQ05X571), and the Foundation of Guangdong Educational Committee (2019KZDZX1023).

Canonical correlation analysis (CCA) is a powerful statistical tool for detecting mutual information between two sets of multi-dimensional random variables. Generalized CCA (GCCA), a natural extension of CCA, can detect relations among more than two datasets. To interpret the canonical variates more efficiently, this paper proposes a novel sparse GCCA algorithm based on the linearized Bregman method, which generalizes traditional sparse CCA methods. Experimental results on both a synthetic dataset and real datasets demonstrate the effectiveness and efficiency of the proposed algorithm compared with several state-of-the-art sparse CCA and deep CCA algorithms.
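
As background for the two-view case the abstract refers to, classical CCA can be computed from the SVD of the whitened cross-covariance matrix. The following is a minimal NumPy sketch, not taken from the paper; the ridge term `reg` and all names are illustrative, and samples are stored in columns to match the paper's convention $X_j \in \mathbb{R}^{n_j \times m}$.

```python
# Minimal sketch of classical two-view CCA via the SVD of the whitened
# cross-covariance matrix. Assumptions: data matrices hold samples in
# columns; a small ridge term keeps the covariance factors invertible.
import numpy as np

def cca(X, Y, ell=1, reg=1e-6):
    """Return the top `ell` canonical weight vectors and correlations
    for data matrices X (n1 x m) and Y (n2 x m)."""
    m = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)   # center each variable
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Cxx = Xc @ Xc.T / (m - 1) + reg * np.eye(X.shape[0])
    Cyy = Yc @ Yc.T / (m - 1) + reg * np.eye(Y.shape[0])
    Cxy = Xc @ Yc.T / (m - 1)
    # Whiten: T = Cxx^{-1/2} Cxy Cyy^{-1/2}; its singular values are
    # the canonical correlations.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    T = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(T)
    Wx = np.linalg.solve(Lx.T, U[:, :ell])   # un-whiten the weights
    Wy = np.linalg.solve(Ly.T, Vt[:ell].T)
    return Wx, Wy, s[:ell]
```

The dense weight vectors returned here are exactly what sparse CCA and the sparse GCCA algorithm of this paper aim to replace with interpretable, mostly-zero counterparts.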

Citation: Jia Cai, Junyi Huo. Sparse generalized canonical correlation analysis via linearized Bregman method. Communications on Pure & Applied Analysis, 2020, 19 (8) : 3933-3945. doi: 10.3934/cpaa.2020173
References:
[1] G. Andrew, R. Arora, J. Bilmes and K. Livescu, Deep canonical correlation analysis, in International Conference on Machine Learning, (2013), 1247–1255.
[2] A. Benton, R. Arora and M. Dredze, Learning multiview embeddings of Twitter users, in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2 (2016), 14–19.
[3] A. Benton, H. Khayrallah, B. Gujral et al., Deep generalized canonical correlation analysis, in Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), (2019), 1–6.
[4] L. M. Brègman, A relaxation method of finding a common point of convex sets and its application to the solution of problems in convex programming, Zh. Vychisl. Mat. Mat. Fiz., 7 (1967), 620–631.
[5] J. F. Cai, S. Osher and Z. Shen, Convergence of the linearized Bregman iteration for $\ell_1$-norm minimization, Math. Comp., 78 (2009), 2127–2136. doi: 10.1090/S0025-5718-09-02242-X.
[6] J. F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for compressed sensing, Math. Comp., 78 (2009), 1515–1536. doi: 10.1090/S0025-5718-08-02189-3.
[7] J. Carroll, Equations and tables for a generalization of canonical correlation analysis to three or more sets of variables, in Proceedings of the Annual Convention of the American Psychological Association, 3 (1968), 227–228.
[8] M. Chen, C. Gao, Z. Ren et al., Sparse CCA via precision adjusted iterative thresholding, in Proceedings of the International Congress of Chinese Mathematicians, (2016).
[9] D. Chu, L. Z. Liao, M. K. Ng and X. W. Zhang, Sparse canonical correlation analysis: new formulation and algorithm, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), 3050–3065. doi: 10.1109/TPAMI.2013.104.
[10] M. Dettling, BagBoosting for tumor classification with gene expression data, Bioinformatics, 20 (2004), 3583–3593. doi: 10.1093/bioinformatics/bth447.
[11] O. Friman, J. Cedefamn, P. Lundberg, M. Borga and H. Knutsson, Detection of neural activity in functional MRI using canonical correlation analysis, Magn. Reson. Med., 45 (2001), 323–330. doi: 10.1002/1522-2594(200102)45:2<323::aid-mrm1041>3.0.co;2-#.
[12] C. Gao, Z. Ma and H. H. Zhou, Sparse CCA: adaptive estimation and computational barriers, Ann. Statist., 45 (2017), 2074–2101. doi: 10.1214/16-AOS1519.
[13] D. R. Hardoon and J. Shawe-Taylor, Sparse canonical correlation analysis, Mach. Learn., 83 (2011), 331–353. doi: 10.1007/s10994-010-5222-7.
[14] D. R. Hardoon, S. Szedmak and J. Shawe-Taylor, Canonical correlation analysis: an overview with application to learning methods, Neural Comput., 16 (2004), 2639–2664. doi: 10.1162/0899766042321814.
[15] P. Horst, Generalized canonical correlations and their applications to experimental data, J. Clin. Psychol., 17 (1961), 331–347.
[16] H. Hotelling, Relations between two sets of variates, Biometrika, 28 (1936), 321–377.
[17] M. Kang, B. Zhang, X. Wu, C. Y. Liu and J. Gao, Sparse generalized canonical correlation analysis for biological model integration: a genetic study of psychiatric disorders, in 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), (2013), 1490–1493.
[18] J. R. Kettenring, Canonical analysis of several sets of variables, Biometrika, 58 (1971), 433–451. doi: 10.1093/biomet/58.3.433.
[19] Y. Luo, D. Tao, K. Ramamohanarao, C. Xu and Y. G. Wen, Tensor canonical correlation analysis for multi-view dimension reduction, IEEE Trans. Knowl. Data Eng., 27 (2015), 3111–3124. doi: 10.1109/TKDE.2015.2445757.
[20] S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), 460–489. doi: 10.1137/040605412.
[21] R. Steinberger, B. Pouliquen, A. Widiger et al., The JRC-Acquis: a multilingual aligned parallel corpus with 20+ languages, in Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC'2006), (2006), 2142–2147.
[22] J. Vía, I. Santamaría and J. Pérez, A learning algorithm for adaptive canonical correlation analysis of several data sets, Neural Netw., 20 (2007), 139–152. doi: 10.1016/j.neunet.2006.09.011.
[23] A. Vinokourov, N. Cristianini and J. Shawe-Taylor, Inferring a semantic representation of text via cross-language correlation analysis, in Advances in Neural Information Processing Systems, (2003), 1497–1504.
[24] S. Waaijenborg, P. C. V. de Witt Hamer and A. H. Zwinderman, Quantifying the association between gene expressions and DNA-markers by penalized canonical correlation analysis, Stat. Appl. Genet. Mol. Biol., 7 (2008), Art. 3. doi: 10.2202/1544-6115.1329.
[25] D. M. Witten, R. Tibshirani and T. Hastie, A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis, Biostatistics, 10 (2009), 515–534. doi: 10.1093/biostatistics/kxp008.
[26] Y. Yamanishi, J. P. Vert, A. Nakaya and M. Kanehisa, Extraction of correlated gene clusters from multiple genomic data by generalized kernel canonical correlation analysis, Bioinformatics, 19 (2003), i323–i330. doi: 10.1093/bioinformatics/btg1045.
[27] W. Yin, S. Osher, D. Goldfarb and J. Darbon, Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing, SIAM J. Imaging Sci., 1 (2008), 143–168. doi: 10.1137/070703983.

Figure 1.  Original signal.
Figure 2.  Samples of the JRC-Acquis database used in our experiments.
Algorithm 1: Sparse Generalized CCA algorithm.
Input:
  Training data $ X_j\in \mathbb{R}^{n_j\times m} $ ($ j=1,\cdots,J $), and tolerance parameter $ \varepsilon $.
Output:
  Sparse canonical variates $ W $.
1: Compute reduced SVD for each $ X_j $ via Equation (3.1).
2: Compute top $ \ell $ eigenvectors of matrix $ M $.
3: Construct $ G $ (setting the top $ \ell $ eigenvectors of $ M $ as the rows of $ G $).
4: Let $ W^0=U^0=0 $.
5: while error $> \varepsilon$ do
6:   Compute $(U^{k+1}, W^{k+1})$ via (3.6).
7:   error $= \|AW^{k+1}-B\|_F$.
8: end while
9: return $W=(W_1^T,\cdots,W_J^T)^T$.
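
Equations (3.1) and (3.6) referenced in Algorithm 1 are defined in the paper itself and are not reproduced on this page. The sketch below therefore assumes the standard MAXVAR-style construction of $G$ (with $M=\sum_j Q_jQ_j^T$ built from the reduced SVDs $X_j=P_jS_jQ_j^T$) and the classical linearized Bregman update of [5, 6, 27] for step 6, applied per view to $X_j^T W_j = G^T$; all parameter values and names are illustrative only.

```python
# Runnable sketch of Algorithm 1 under stated assumptions: MAXVAR-style G,
# and the standard linearized Bregman iteration for min ||W_j||_1 subject to
# X_j^T W_j = G^T. The block-diagonal system A W = B decouples across views.
import numpy as np

def shrink(U, mu):
    """Componentwise soft thresholding, the proximal map of mu*||.||_1."""
    return np.sign(U) * np.maximum(np.abs(U) - mu, 0.0)

def sparse_gcca(Xs, ell, mu=0.1, delta=1.0, eps=1e-6, max_iter=5000):
    """Xs: list of views X_j (n_j x m, samples in columns); returns stacked W."""
    # Steps 1-2: reduced SVD of each view, then the top-ell eigenvectors of M.
    Qs = [np.linalg.svd(X, full_matrices=False)[2].T for X in Xs]  # m x r_j
    M = sum(Q @ Q.T for Q in Qs)
    evals, evecs = np.linalg.eigh(M)            # eigenvalues in ascending order
    G = evecs[:, -ell:].T                       # step 3: top eigenvectors as rows
    Ws = []
    for X in Xs:
        A, B = X.T, G.T                         # per-view system A W_j = B
        tau = 1.0 / (delta * np.linalg.norm(A, 2) ** 2)  # safe step size
        U = np.zeros((X.shape[0], ell))         # step 4: W^0 = U^0 = 0
        W = np.zeros_like(U)
        for _ in range(max_iter):               # steps 5-8: Bregman loop
            U = U - tau * A.T @ (A @ W - B)     # gradient step on the residual
            W = delta * shrink(U, mu)           # sparsifying shrinkage
            if np.linalg.norm(A @ W - B) <= eps:
                break
        Ws.append(W)
    return np.vstack(Ws)                        # step 9: W = (W_1^T,...,W_J^T)^T
```

For three views, `Ws = sparse_gcca([X1, X2, X3], ell=2)` would return the stacked sparse weight matrix; the shrinkage step is what drives most entries of each $W_j$ to exactly zero.
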
Table 1.  Comparison results on synthetic dataset over $ 30 $ training-testing replications: reconstruction error for training data (Trerror), reconstruction error for testing data (Tserror), and sparsity (Spar1, Spar2, Spar3 stand for the sparsity of the first view, second view, and third view, respectively) obtained by GCCA, Deep GCCA, WGCCA and SGCCA algorithms
            GCCA         Deep GCCA   WGCCA        SGCCA
Trerror     3.7277e-30   0.5043      1.1118e-07   1.6354e-10
Tserror     0.4756       0.2955      0.6860       0.4554
Spar1 (%)   2.73         0.01        70.01        99.47
Spar2 (%)   5.15         0.03        78.19        99.67
Spar3 (%)   7.54         0.09        83.02        99.72
Table 2.  Data Structures: data dimension ($ n $), number of data ($ m $), number of classes ($ K $), number of columns in $ W_1 $ and $ W_2 $ ($ \ell $)
Type        Data       $n$    $m$   $K$   $\ell$
Gene Data   Leukemia   3571   72    2     1
            Lymphoma   4026   62    3     2
            Prostate   6033   102   2     1
            Brain      5597   42    5     4
Table 3.  Comparison results on gene data over $ 30 $ training-testing replications: the average of sparsity (Spar1 and Spar2 stand for the sparsity of the first view and second view, respectively), classification accuracy, and the summation of correlation coefficients (SCORR)
Dataset    Methods    Spar1 (%)   Spar2 (%)   Accuracy (%)   SCORR
Leukemia   CCA        0.03        0           97.20          0.8945
           PMD CCA    98.48       0           72.22          0.8944
           GCCA       5.80        50          97.20          0.9227
           Deep CCA   0           0           94.40          0.8662
           SGCCA      98.99       50          100            0.8916
Lymphoma   CCA        0.04        0           83.87          1.8206
           PMD CCA    85.38       0           77.97          1.7911
           GCCA       5.77        50          90.32          1.7423
           Deep CCA   0           0           80.65          0.9754
           SGCCA      99.23       50          96.77          1.6933
Prostate   CCA        0.1         0           88.24          0.7646
           PMD CCA    99.19       0           60.78          0.2978
           GCCA       4.59        50          88.24          0.7645
           Deep CCA   0           0           79.41          0.7610
           SGCCA      99.14       50          85.27          0.7758
Brain      CCA        0.07        0           71.43          2.9749
           PMD CCA    76.40       74.65       47.62          3.0669
           GCCA       4.52        34.9        61.90          2.8307
           Deep CCA   0           0           61.90          2.9674
           SGCCA      99.62       34.9        66.67          2.5729
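
The sparsity and SCORR columns reported in Tables 1–4 admit simple implementations. The helpers below assume that sparsity is the percentage of (numerically) zero entries in a weight matrix and that SCORR sums the sample correlations of paired canonical variates; these formulas are consistent with the table captions but are not spelled out on this page, so they should be read as assumptions.

```python
# Hypothetical evaluation helpers for the reported metrics; the exact
# definitions used in the paper may differ in detail.
import numpy as np

def sparsity(W, tol=1e-8):
    """Percentage of entries of W that are numerically zero."""
    return 100.0 * np.mean(np.abs(W) < tol)

def scorr(X1, X2, W1, W2):
    """Sum of correlation coefficients between paired canonical variates.
    X_j are n_j x m data matrices; W_j are n_j x ell weight matrices."""
    Z1, Z2 = W1.T @ X1, W2.T @ X2          # ell x m canonical variates
    return sum(np.corrcoef(z1, z2)[0, 1] for z1, z2 in zip(Z1, Z2))
```
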
Table 4.  Comparison results on the JRC-Acquis database over $ 30 $ training-testing replications for $ \ell = 100 $: reconstruction error for training data (Trerror), reconstruction error for testing data (Tserror), and sparsity (Spar1, Spar2, Spar3 stand for the sparsity of the first view, second view, and third view, respectively) obtained by GCCA, Deep GCCA, WGCCA and SGCCA algorithms
            GCCA         Deep GCCA    WGCCA        SGCCA
Trerror     2.2078e-30   3.0859e-02   8.4776e-12   5.5140e-11
Tserror     4.1496e-02   1.7648e-02   0.7138       3.8799e-02
Spar1 (%)   7.00         0.1140       10.27        96.56
Spar2 (%)   7.24         0.6850       9.96         96.85
Spar3 (%)   6.09         0.1100       8.49         95.64