
Foundations of Data Science, December 2019, 1(4): 491-506. doi: 10.3934/fods.2019020

## Cluster, classify, regress: A general method for learning discontinuous functions

1. Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
2. Fusion Energy Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
3. Department of Mathematics, University of Manchester, Manchester, M13 4PL, UK

* Corresponding author: Clement Etienam

This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

Published December 2019

This paper presents a method for solving the supervised learning problem in which the output is highly nonlinear and discontinuous. It is proposed to solve this problem in three stages: (i) cluster the pairs of input-output data points, resulting in a label for each point; (ii) classify the data, where the corresponding label is the output; and finally (iii) perform one separate regression for each class, where the training data corresponds to the subset of the original input-output pairs which have that label according to the classifier. It has not yet been proposed to combine these three fundamental building blocks of machine learning in this simple and powerful fashion. This can be viewed as a form of deep learning, where any of the intermediate layers can itself be deep. The utility and robustness of the methodology are illustrated on some toy problems, including one example problem arising from simulation of plasma fusion in a tokamak.
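The three stages above translate directly into a short pipeline. Below is a minimal sketch of the idea, assuming a Python/scikit-learn environment; the particular model choices (k-means for the clustering, random forests for the classification and regression stages) and the helper names `ccr_fit` and `ccr_predict` are illustrative assumptions, not the authors' implementation.

```python
# Minimal CCR sketch (illustrative models; not the paper's exact implementation).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def ccr_fit(X, y, n_clusters=3):
    """Fit the three CCR stages on training pairs (X, y)."""
    # (i) Cluster the joint input-output pairs to obtain a label per point.
    labels = KMeans(n_clusters=n_clusters).fit_predict(
        np.hstack([X, y.reshape(-1, 1)]))
    # (ii) Classify: learn the map f_c from inputs x to cluster labels.
    f_c = RandomForestClassifier().fit(X, labels)
    # (iii) Regress: one regressor per class, trained on the points the
    # classifier assigns to that class.
    assigned = f_c.predict(X)
    regressors = {}
    for k in range(n_clusters):
        mask = assigned == k
        if mask.any():
            regressors[k] = RandomForestRegressor().fit(X[mask], y[mask])
    return f_c, regressors

def ccr_predict(X, f_c, regressors):
    """Evaluate the composed CCR machine f_r(x, f_c(x))."""
    labels = f_c.predict(X)
    y_hat = np.zeros(len(X))
    for k, reg in regressors.items():
        mask = labels == k
        if mask.any():
            y_hat[mask] = reg.predict(X[mask])
    return y_hat
```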

Citation: David E. Bernholdt, Mark R. Cianciosa, David L. Green, Jin M. Park, Kody J. H. Law, Clement Etienam. Cluster, classify, regress: A general method for learning discontinuous functions. Foundations of Data Science, 2019, 1(4): 491-506. doi: 10.3934/fods.2019020
##### Figures:

- **Figure 1.** Numerical examples 1-4 (row a), 2 (row b), and 3 (row c). The functions are plotted in column (a), along with the final CCR machine output $f_r(x, f_c(x))$ and the intermediate $f_c(x)$. Column (b) shows a scatter plot of the true $f(x)$ against the CCR machine $f_r(x, f_c(x))$, illustrating the correlation. Column (c) shows a histogram of $f_r(x, f_c(x)) - f(x)$, illustrating the dissimilarity between the CCR reconstruction and the truth.
- **Figure 2.** The results of CCR (a), DNN (b), and MLP (c) as applied to numerical example 2, $f_2$.
- **Figure 3.** Numerical example 5. $f_5(x)$ is plotted in panel (a), and $f_r(x, f_c(x))$ and $f_c(x)$ are plotted in panels (b) and (c), respectively. Panel (e) shows a scatter plot of the true $y(x)$ against the CCR machine $f_r(x, f_c(x))$, illustrating the correlation. Panel (d) shows a histogram of $f_r(x, f_c(x)) - y(x)$, illustrating the dissimilarity between the CCR reconstruction and the truth.
- **Figure 4.** Numerical example 6. $f_6(x)$ is plotted in panel (a), and $f_r(x, f_c(x))$ and $f_c(x)$ are plotted in panels (b) and (c), respectively. Panel (e) shows a scatter plot of the true $y(x)$ against the CCR machine $f_r(x, f_c(x))$, illustrating the correlation. Panel (d) shows a histogram of $f_r(x, f_c(x)) - y(x)$, illustrating the dissimilarity between the CCR reconstruction and the truth.
- **Figure 5.** Numerical example 7. Subfigure (a) shows some two-variable slices over test data of the true function $\chi$ (a-c), the CCR machine output $f_r(x, f_c(x))$ (d-f), the absolute difference $|\chi(x) - f_r(x, f_c(x))|$ (g-i), and the intermediate $f_c(x)$ (j-l), with the remaining inputs set to the mean $\mathbb{E}(x_{\backslash ij})$, where $x_{\backslash ij} = (m_1, \dots, m_{i-1}, m_{i+1}, \dots, m_{j-1}, m_{j+1}, \dots, m_{10})$ (assuming $i<j$). Subfigure (b) shows the marginals of the input data distribution.
- **Figure 6.** Numerical example 7. Subfigure (a) shows all the remaining two-variable slices of the true function $\chi$ (constructed as described in Figure 5), and subfigure (b) shows the corresponding CCR machine output $f_r(x, f_c(x))$.
- **Figure 7.** Numerical example 7. The first 500 (random) training data output values are plotted in panel (a), along with the clustering values of the training data, showing $\chi$ and the cluster labels. Panel (b) shows prediction results on test data: the final CCR machine output $f_r(x, f_c(x))$, the true $\chi(x)$, and the intermediate $f_c(x)$. Panel (c) shows a scatter plot of the true $y(x)$ against the CCR machine $f_r(x, f_c(x))$. Panel (d) shows a histogram of $f_r(x, f_c(x)) - \chi(x)$.
##### Tables:

**Table 1.** $L^2$ and $R^2$ comparison for the 7 numerical examples.

| Accuracy | Ex. 1 | Ex. 2 | Ex. 3 | Ex. 4 | Ex. 5 | Ex. 6 | Ex. 7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $L^2$ | 0.9934 | 0.9961 | 0.9964 | 0.9978 | 0.9825 | 0.9934 | 0.9835 |
| $R^2$ | 0.9978 | 0.9967 | 0.9987 | 0.9983 | 0.9845 | 0.9945 | 0.9832 |
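The metric labels in Table 1 are terse. The sketch below shows how scores of this kind could be computed on test data, under the assumption that "L2" means one minus the relative $L^2$ error and "R2" means the usual coefficient of determination; both are guesses at the intended definitions, not taken from the paper.

```python
# Assumed definitions of the accuracy metrics (the paper may define them differently).
import numpy as np
from sklearn.metrics import r2_score

def l2_accuracy(y_true, y_pred):
    # 1 minus the relative L2 error between prediction and truth.
    return 1.0 - np.linalg.norm(y_pred - y_true) / np.linalg.norm(y_true)

def r2_accuracy(y_true, y_pred):
    # Standard coefficient of determination.
    return r2_score(y_true, y_pred)
```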
**Table 2.** Error attained with the selected sample points for active learning on Example 2 with strategy 1a: $N_{\rm res} = 1000$ and all points are used for passive learning, while only $n = 150$ points are used for active learning. Active learning recovers the same accuracy as when all of the points are used.

|  | Active | Passive |
| --- | --- | --- |
| $L^2$ error | 0.0039 | 0.0039 |
| $N$ | 150 | 1000 |
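Strategy 1a itself is not described in this summary, so the loop below is only a generic pool-based active-learning sketch: it reuses the hypothetical `ccr_fit` helper from the earlier sketch, starts from a small random subset of a reserve pool of $N_{\rm res}$ candidates, and acquires the point on which the current classifier is least confident. The acquisition rule and all parameter names are illustrative stand-ins, not the paper's strategy.

```python
# Generic pool-based active learning around the CCR model (illustrative only;
# the acquisition rule below is NOT the paper's "strategy 1a").
import numpy as np

def active_learn(X_pool, y_pool, n_init=20, n_total=150, n_clusters=3):
    rng = np.random.default_rng(0)
    idx = list(rng.choice(len(X_pool), size=n_init, replace=False))
    for _ in range(n_total - n_init):
        # Refit the CCR model on the points labelled so far.
        f_c, regs = ccr_fit(X_pool[idx], y_pool[idx], n_clusters)
        # Score candidates by classification uncertainty (least confident first).
        proba = f_c.predict_proba(X_pool)
        score = 1.0 - proba.max(axis=1)
        score[idx] = -np.inf               # never re-select an already labelled point
        idx.append(int(np.argmax(score)))  # query the most uncertain candidate
    return ccr_fit(X_pool[idx], y_pool[idx], n_clusters)
```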
