
Modal additive models with data-driven structure identification
1. Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON K1N 6N5, Canada
2. College of Science, Huazhong Agricultural University, Wuhan 430070, China
Additive models, owing to their high flexibility, have received a great deal of attention in high-dimensional regression analysis. Many efforts have been devoted to capturing interactions between predictive variables within additive models. However, typical approaches are built on conditional mean assumptions, which may fail to reveal the underlying structure when the data are contaminated by heavy-tailed noise. In this paper, we propose a penalized modal regression method, Modal Additive Models (MAM), based on a conditional mode assumption, for simultaneous function estimation and structure identification. MAM approximates the nonparametric function by feed-forward neural networks and maximizes the modal risk subject to constraints on the function space and the group structure. The proposed approach can be implemented with the half-quadratic (HQ) optimization technique, and its asymptotic estimation and selection consistency are established: MAM achieves a satisfactory learning rate and identifies the target group structure with high probability. The effectiveness of MAM is also supported by simulated examples.
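To make the conditional-mode idea concrete: modal regression replaces the squared-error criterion with a kernel-smoothed measure of how often the residuals are near zero. A minimal sketch of the empirical objective, assuming a Gaussian kernel $\phi$ with bandwidth $h$; the notation $\mathcal{F}$ and $G_j$ is illustrative, and the paper's penalty and constraints are omitted:
$$
\hat f \;=\; \arg\max_{f\in\mathcal{F}}\; \frac{1}{nh}\sum_{i=1}^{n}\phi\!\Big(\frac{y_i - f(x_i)}{h}\Big),
\qquad
f(x)\;=\;\sum_{j} f_j\big(x_{G_j}\big),
$$
where the groups $G_j$ encode the additive structure to be identified. As $h \to 0$, the maximizer tracks the conditional mode of $y$ given $x$, which is why heavy-tailed or skewed noise affects it far less than the conditional mean.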

Algorithm 1 (HQ optimization; the quantities in this listing were typeset as formulas and did not survive extraction, so bracketed items mark the gaps):
1: Require: input data
2: Ensure: [the estimated function]
3: Define [the HQ objective] function
4: Initialize [parameters and auxiliary variables]
5: while not converged do
6:     Update [the auxiliary HQ variables]
7:     Update [the model parameters]
8:     Update [the iteration counter]
9: end while
10: Output: [the final estimate]
Algorithm 2 (structure identification; bracketed items mark formulas lost in extraction):
1: Start with the full variable pool
2: Solve (13) to obtain the maximum value [on the full pool]
3: for each variable [in the pool] do
4:     [Remove the variable from the pool]
5:     Solve (13) to obtain the maximum value [without it]
6:     if [the selection criterion is met] then
7:         Preserve [the variable]
8:     end if
9: end for
10: Return [the identified structure]
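Algorithm 2 tests each variable's contribution by re-solving problem (13) with that variable removed. A hedged sketch of that loop, where `solve_mam` is a placeholder for solving (13) restricted to a variable pool and returning the maximized objective, and the tolerance `tau` is a hypothetical stand-in for the paper's criterion:

```python
# Backward screening sketch over a variable pool.
def screen_variables(variables, solve_mam, tau=1e-2):
    full_value = solve_mam(list(variables))     # objective on the full pool
    active = []
    for v in variables:
        reduced = [u for u in variables if u != v]
        value = solve_mam(reduced)              # objective without variable v
        if full_value - value > tau:            # dropping v hurts the fit,
            active.append(v)                    # so v is preserved
    return active
```

In the paper the decision is made by the criterion in step 6; a fixed tolerance on the objective drop is only one plausible instantiation.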
Simulated models and their intrinsic group structures (the model formulas were typeset as math and are not recoverable):
ID | Model | Intrinsic group structure
M1 | - | -
M2 | - | -
M3 | - | -
Structure-identification results for models M1-M3 (the parameter settings in the first column were typeset as formulas and are not recoverable):
Parameters | M1 | M2 | M3
- | MF | Size | TP | U | O | MF | Size | TP | U | O | MF | Size | TP | U | O
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.66 | 1 | 0 | 0 | 2 | 1 | 0 | 1
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.84 | 1 | 0 | 0 | 2 | 1 | 0 | 1
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.68 | 1 | 0 | 0 | 2 | 0.1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.46 | 0.46 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.62 | 0.62 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.78 | 0.78 | 0 | 0 | 2 | 1 | 0 | 0
- | 0 | 3 | 2 | 1 | 0 | 0 | 2 | 0.42 | 0.42 | 0 | 0 | 2 | 0.66 | 0.66 | 0
- | 0 | 2.84 | 1.78 | 0.94 | 0 | 0 | 2 | 0.54 | 0.54 | 0 | 0 | 2 | 0 | 1 | 0
- | 0 | 3.36 | 2.32 | 1 | 0 | 0 | 2 | 0.58 | 0.58 | 0 | 0 | 2.2 | 1.6 | 1 | 0
- | 0 | 4.9 | 3.9 | 1 | 0 | 0 | 2 | 0.78 | 0.78 | 0 | 50 | 4 | 4 | 0 | 0
- | 50 | 6 | 6 | 0 | 0 | 29 | 3.62 | 1.9 | 0 | 0.22 | 50 | 4 | 4 | 0 | 0
- | 50 | 6 | 6 | 0 | 0 | 0 | 5.38 | 1.62 | 0 | 1 | 0 | 6 | 2 | 0 | 1
- | 0 | 2.72 | 1.64 | 0.92 | 0 | 0 | 2 | 0.5 | 0.5 | 0 | 0 | 2.3 | 0.6 | 1 | 0
- | 0 | 3.4 | 1.6 | 0.8 | 0 | 0 | 2 | 0.58 | 0.58 | 0 | 0 | 3 | 2 | 1 | 0
- | 0 | 4.82 | 3.82 | 1 | 0 | 0 | 2.01 | 0.38 | 0.38 | 0 | 50 | 4 | 4 | 0 | 0
- | 27 | 5.54 | 5.08 | 0.46 | 0 | 28 | 3.44 | 1.76 | 0 | 0 | 50 | 4 | 4 | 0 | 0
- | 50 | 6 | 6 | 0 | 0 | 0 | 5 | 2 | 0 | 1 | 0 | 6 | 2 | 0 | 1
- | 50 | 6 | 6 | 0 | 0 | 0 | 6 | 1 | 0 | 1 | 0 | 6 | 2 | 0 | 1
Structure-identification results for models M1-M3 in a second experimental setting (parameter settings in the first column are again not recoverable):
Parameters | M1 | M2 | M3
- | MF | Size | TP | U | O | MF | Size | TP | U | O | MF | Size | TP | U | O
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.6 | 0.6 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.7 | 0.7 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.92 | 0.92 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.58 | 0.58 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.76 | 0.76 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 0.52 | 0.52 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 2 | 1 | 1 | 0 | 0 | 2 | 1 | 1 | 0 | 0 | 2.42 | 0.66 | 1 | 0
- | 0 | 3.8 | 2.6 | 1 | 0 | 0 | 2 | 0.8 | 0.8 | 0 | 0 | 2 | 1 | 1 | 0
- | 0 | 4 | 3 | 1 | 0 | 5 | 2.26 | 0.92 | 0.62 | 0 | 50 | 4 | 4 | 0 | 0
- | 42 | 5.84 | 5.88 | 0.16 | 0 | 27 | 3.66 | 1.82 | 0 | 0.2 | 50 | 4 | 4 | 0 | 0
- | 50 | 6 | 6 | 0 | 0 | 0 | 6 | 1 | 0 | 1 | 0 | 6 | 2 | 0 | 1
- | 0 | 2.56 | 1.48 | 1 | 0 | 0 | 2 | 0.62 | 0.62 | 0 | 0 | 2 | 0.92 | 0.92 | 0
- | 0 | 3.5 | 2.5 | 1 | 0 | 0 | 2 | 0.66 | 0.66 | 0 | 0 | 3 | 2 | 1 | 0
- | 7 | 4.88 | 3.76 | 0.86 | 0 | 24 | 3.08 | 1.8 | 0 | 0.08 | 0 | 2.2 | 0.52 | 1 | 0
- | 8 | 4.94 | 3.84 | 0.84 | 0 | 27 | 3.4 | 1.6 | 0 | 0 | 50 | 4 | 4 | 0 | 0
- | 50 | 6 | 6 | 0 | 0 | 0 | 5 | 2 | 0 | 1 | 0 | 5.14 | 2.86 | 0 | 1
- | 50 | 6 | 6 | 0 | 0 | 0 | 6 | 1 | 0 | 1 | 0 | 6 | 2 | 0 | 1
Comparison of GASI and MAM on models M1-M3 under Gaussian and Gamma noise (the cell values were typeset as formulas and are not recoverable):
Model | GASI: Gaussian | Gamma | MAM: Gaussian | Gamma
M1 | - | - | - | -
M2 | - | - | - | -
M3 | - | - | - | -