[1] R. G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David and C. E. Elger, Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state, Physical Review E, 64 (2001), 061907. doi: 10.1103/PhysRevE.64.061907.
[2] N. Aronszajn, Theory of reproducing kernels, Trans. Amer. Math. Soc., 68 (1950), 337-404. doi: 10.1090/S0002-9947-1950-0051437-7.
[3] R. K. Bock, A. Chilingarian, M. Gaug, F. Hakl, T. Hengstebeck, M. Jiřina, J. Klaschka, E. Kotrč, P. Savický, S. Towers, A. Vaiciulis and W. Wittek, Methods for multidimensional event classification: A case study using images from a Cherenkov gamma-ray telescope, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 516 (2004), 511-528. doi: 10.1016/j.nima.2003.08.157.
[4] C. Cortes and V. Vapnik, Support-vector networks, Machine Learning, 20 (1995), 273-297. doi: 10.1007/BF00994018.
[5] F. Cucker and D. X. Zhou, Learning Theory: An Approximation Theory Viewpoint, Cambridge University Press, Cambridge, 2007. doi: 10.1017/CBO9780511618796.
[6] J. Friedman, T. Hastie and R. Tibshirani, The Elements of Statistical Learning. Data Mining, Inference, and Prediction, Springer Series in Statistics, Springer-Verlag, New York, 2001. doi: 10.1007/978-0-387-21606-5.
[7] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, Cambridge, MA, 2016.
[8] X. Guo, T. Hu and Q. Wu, Distributed minimum error entropy algorithms, preprint, 2020.
[9] Z.-C. Guo, S.-B. Lin and D.-X. Zhou, Learning theory of distributed spectral algorithms, Inverse Problems, 33 (2017), 074009. doi: 10.1088/1361-6420/aa72b2.
[10] Z.-C. Guo, L. Shi and Q. Wu, Learning theory of distributed regression with bias corrected regularization kernel network, Journal of Machine Learning Research, 18 (2017), 1-25.
[11] Z.-C. Guo, D.-H. Xiang, X. Guo and D.-X. Zhou, Thresholded spectral algorithms for sparse approximations, Analysis and Applications, 15 (2017), 433-455. doi: 10.1142/S0219530517500026.
[12] T. Hu, Q. Wu and D.-X. Zhou, Distributed kernel gradient descent algorithm for minimum error entropy principle, Applied and Computational Harmonic Analysis, 49 (2020), 229-256. doi: 10.1016/j.acha.2019.01.002.
[13] B. A. Johnson, R. Tateishi and N. T. Hoan, A hybrid pansharpening approach and multiscale object-based image analysis for mapping diseased pine and oak trees, International Journal of Remote Sensing, 34 (2013), 6969-6982. doi: 10.1080/01431161.2013.810825.
[14] S.-B. Lin, X. Guo and D.-X. Zhou, Distributed learning with regularized least squares, Journal of Machine Learning Research, 18 (2017), 1-31.
[15] E. C. Ozan, E. Riabchenko, S. Kiranyaz and M. Gabbouj, An optimized k-NN approach for classification on imbalanced datasets with missing data, in International Symposium on Intelligent Data Analysis, Springer, (2016), 387–392. doi: 10.1007/978-3-319-46349-0_34.
[16] J. Platt, Fast training of support vector machines using sequential minimal optimization, in Advances in Kernel Methods - Support Vector Learning, MIT Press, Cambridge, MA, (1999), 185–208.
[17] J. G. Rohra, B. Perumal, S. J. Narayanan, P. Thakur and R. B. Bhatt, User localization in an indoor environment using fuzzy hybrid of particle swarm optimization & gravitational search algorithm with neural networks, in Proceedings of Sixth International Conference on Soft Computing for Problem Solving, Springer, (2017), 286–295. doi: 10.1007/978-981-10-3322-3_27.
[18] J. D. Rosenblatt and B. Nadler, On the optimality of averaging in distributed statistical learning, Information and Inference: A Journal of the IMA, 5 (2016), 379-404. doi: 10.1093/imaiai/iaw013.
[19] E. R. Sparks, A. Talwalkar, V. Smith, J. Kottalam, X. Pan, J. Gonzalez, M. J. Franklin, M. I. Jordan and T. Kraska, MLI: An API for distributed machine learning, in 2013 IEEE 13th International Conference on Data Mining, IEEE, (2013), 1187–1192. doi: 10.1109/ICDM.2013.158.
[20] I. Steinwart, Support vector machines are universally consistent, Journal of Complexity, 18 (2002), 768-791. doi: 10.1006/jcom.2002.0642.
[21] I. Steinwart and A. Christmann, Support Vector Machines, Springer Science & Business Media, 2008.
[22] V. Vapnik, Statistical Learning Theory, John Wiley & Sons, Inc., New York, 1998.
[23] Q. Wu, Y. Ying and D.-X. Zhou, Multi-kernel regularized classifiers, Journal of Complexity, 23 (2007), 108-134.
[24] Q. Wu and D.-X. Zhou, Analysis of support vector machine classification, Journal of Computational Analysis & Applications, 8 (2006), 99-119.
[25] I.-C. Yeh and C.-H. Lien, The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients, Expert Systems with Applications, 36 (2009), 2473-2480. doi: 10.1016/j.eswa.2007.12.020.
[26] T. Zhang, Statistical behavior and consistency of classification methods based on convex risk minimization, Annals of Statistics, 32 (2004), 56-85. doi: 10.1214/aos/1079120130.
[27] Y. Zhang, J. C. Duchi and M. J. Wainwright, Divide and conquer kernel ridge regression: A distributed algorithm with minimax optimal rates, Journal of Machine Learning Research, 16 (2015), 3299-3340.