Foundations of Data Science, December 2019, 1(4): 491-506. doi: 10.3934/fods.2019020

Cluster, classify, regress: A general method for learning discontinuous functions

1. Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA

2. Fusion Energy Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA

3. Department of Mathematics, University of Manchester, Manchester, M13 4PL, UK

* Corresponding author: Clement Etienam

This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

Published December 2019

This paper presents a method for solving supervised learning problems in which the output is highly nonlinear and discontinuous. It is proposed to solve the problem in three stages: (i) cluster the input-output data pairs, producing a label for each point; (ii) classify the data, with the cluster label as the output; and finally (iii) perform one separate regression for each class, where the training data is the subset of the original input-output pairs assigned that label by the classifier. To the authors' knowledge, these three fundamental building blocks of machine learning have not previously been combined in this simple and powerful fashion. The approach can be viewed as a form of deep learning, in which any of the intermediate layers can itself be deep. The utility and robustness of the methodology are illustrated on several toy problems, including one example arising from the simulation of plasma fusion in a tokamak.
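The three stages map directly onto standard library components. Below is a minimal sketch using scikit-learn [32]; the particular component models (k-means clustering, a random forest classifier, per-class MLP regressors) are illustrative assumptions, not necessarily the configuration used in the paper.

```python
# Hedged sketch of the three-stage CCR pipeline described in the abstract.
# Component model choices are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

def fit_ccr(X, y, n_clusters=2):
    # (i) Cluster the joint input-output pairs (x, y), giving each point a label.
    pairs = np.hstack([X, y.reshape(-1, 1)])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pairs)

    # (ii) Classify: learn the map x -> label (y is unavailable at prediction time).
    f_c = RandomForestClassifier().fit(X, labels)

    # (iii) Regress: one model per class, trained on the subset of the original
    # input-output pairs assigned that label by the classifier.
    assigned = f_c.predict(X)
    f_r = {}
    for k in range(n_clusters):
        mask = assigned == k
        if mask.any():
            f_r[k] = MLPRegressor(max_iter=2000).fit(X[mask], y[mask])
    return f_c, f_r

def predict_ccr(f_c, f_r, X):
    # Final CCR output f_r(x, f_c(x)): route each input to its class's regressor.
    labels = f_c.predict(X)
    y_hat = np.empty(len(X))
    for k, model in f_r.items():
        mask = labels == k
        if mask.any():
            y_hat[mask] = model.predict(X[mask])
    return y_hat
```

For a discontinuous 1D target such as y = sin(4x) + 2·1{x > 0}, the clustering step separates the two branches, the classifier learns the location of the jump, and each regressor only ever fits a smooth piece.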

Citation: David E. Bernholdt, Mark R. Cianciosa, David L. Green, Jin M. Park, Kody J. H. Law, Clement Etienam. Cluster, classify, regress: A general method for learning discontinuous functions. Foundations of Data Science, 2019, 1 (4) : 491-506. doi: 10.3934/fods.2019020
References:

[1] D. Adalsteinsson and J. A. Sethian, A fast level set method for propagating interfaces, Journal of Computational Physics, 118 (1995), 269-277. doi: 10.1006/jcph.1995.1098.
[2] R. Archibald, A. Gelb, R. Saxena and D. Xiu, Discontinuity detection in multivariate space for stochastic simulations, Journal of Computational Physics, 228 (2009), 2676-2689. doi: 10.1016/j.jcp.2009.01.001.
[3] R. Archibald, A. Gelb and J. Yoon, Polynomial fitting for edge detection in irregularly sampled signals and images, SIAM Journal on Numerical Analysis, 43 (2005), 259-279. doi: 10.1137/S0036142903435259.
[4] G. Bateman, A. H. Kritz, J. E. Kinsey, A. J. Redd and J. Weiland, Predicting temperature and density profiles in tokamaks, Physics of Plasmas, 5 (1998), 1793-1799. doi: 10.1063/1.872848.
[5] D. Batenkov, Complete algebraic reconstruction of piecewise-smooth functions from Fourier data, Mathematics of Computation, 84 (2015), 2329-2350. doi: 10.1090/S0025-5718-2015-02948-2.
[6] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006. doi: 10.1007/978-0-387-45528-0.
[7] L. Breiman, Bagging predictors, Machine Learning, 24 (1996), 123-140. doi: 10.1007/BF00058655.
[8] L. Breiman, Random forests, Machine Learning, 45 (2001), 5-32.
[9] H.-J. Bungartz and M. Griebel, Sparse grids, Acta Numerica, 13 (2004), 147-269. doi: 10.1017/S0962492904000182.
[10] S. Conti and A. O'Hagan, Bayesian emulation of complex multi-output and dynamic computer models, Journal of Statistical Planning and Inference, 140 (2010), 640-651. doi: 10.1016/j.jspi.2009.08.006.
[11] M. M. Dunlop, M. A. Iglesias and A. M. Stuart, Hierarchical Bayesian level set inversion, Statistics and Computing, 27 (2017), 1555-1584. doi: 10.1007/s11222-016-9704-8.
[12] K. S. Eckhoff, Accurate reconstructions of functions of finite regularity from truncated Fourier series expansions, Mathematics of Computation, 64 (1995), 671-690. doi: 10.1090/S0025-5718-1995-1265014-7.
[13] J. Friedman, T. Hastie and R. Tibshirani, The Elements of Statistical Learning, Vol. 1, Springer Series in Statistics, New York, 2001. doi: 10.1007/978-0-387-21606-5.
[14] C. W. L. Gadd, S. Wade and A. Boukouvalas, Enriched mixtures of Gaussian process experts, arXiv preprint, arXiv:1905.12969, 2019.
[15] T. S. Gardner, C. R. Cantor and J. J. Collins, Construction of a genetic toggle switch in Escherichia coli, Nature, 403 (2000), 339-342. doi: 10.1038/35002131.
[16] A. Gelb and E. Tadmor, Spectral reconstruction of piecewise smooth functions from their discrete data, ESAIM: Mathematical Modelling and Numerical Analysis, 36 (2002), 155-175. doi: 10.1051/m2an:2002008.
[17] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016.
[18] A. Gorodetsky and Y. Marzouk, Efficient localization of discontinuities in complex computational simulations, SIAM Journal on Scientific Computing, 36 (2014), A2584-A2610. doi: 10.1137/140953137.
[19] J. Greenwald, Major next steps for fusion energy based on the spherical tokamak design, 2016.
[20] R. A. Jacobs, M. I. Jordan, S. J. Nowlan and G. E. Hinton, Adaptive mixtures of local experts, Neural Computation, 3 (1991), 79-87. doi: 10.1162/neco.1991.3.1.79.
[21] J. D. Jakeman, R. Archibald and D. Xiu, Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids, Journal of Computational Physics, 230 (2011), 3977-3997. doi: 10.1016/j.jcp.2011.02.022.
[22] G. Janeschitz, G. W. Pacher, O. Zolotukhin, G. Pereverzev, H. D. Pacher, Y. Igitkhanov, G. Strohmeyer and M. Sugihara, A 1-D predictive model for energy and particle transport in H-mode, Plasma Physics and Controlled Fusion, 44 (2002), A459. doi: 10.1088/0741-3335/44/5A/351.
[23] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint, arXiv:1412.6980, 2014.
[24] T. M. Kodinariya and P. R. Makwana, Review on determining number of cluster in K-means clustering, International Journal, 1 (2013), 90-95.
[25] M. Kotschenreuther, W. Dorland, M. A. Beer and G. W. Hammett, Quantitative predictions of tokamak energy confinement from first-principles simulations with kinetic effects, Physics of Plasmas, 2 (1995), 2381-2389. doi: 10.1063/1.871261.
[26] O. Meneghini, S. P. Smith, P. B. Snyder, G. M. Staebler, J. Candy, E. Belli, L. Lao, M. Kostuk, T. Luce, T. Luda et al., Self-consistent core-pedestal transport simulations with neural network accelerated models, Nuclear Fusion, 57 (2017), 086034. doi: 10.1088/1741-4326/aa7776.
[27] K. Monterrubio-Gómez, L. Roininen, S. Wade, T. Damoulas and M. Girolami, Posterior inference for sparse hierarchical non-stationary models, arXiv preprint, arXiv:1804.01431, 2018.
[28] K. P. Murphy, Machine Learning: A Probabilistic Perspective, The MIT Press, 2012.
[29] H. N. Najm, B. J. Debusschere, Y. M. Marzouk, S. Widmer and O. P. Le Maître, Uncertainty quantification in chemical systems, International Journal for Numerical Methods in Engineering, 80 (2009), 789-814. doi: 10.1002/nme.2551.
[30] T. Nguyen and E. Bonilla, Fast allocation of Gaussian process experts, in International Conference on Machine Learning, (2014), 145-153.
[31] J. M. Park, M. Murakami, H. E. St John, L. L. Lao, M. S. Chu and R. Prater, An efficient transport solver for tokamak plasmas, Computer Physics Communications, 214 (2017), 1-5. doi: 10.1016/j.cpc.2016.12.018.
[32] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg et al., Scikit-learn: Machine learning in Python, Journal of Machine Learning Research, 12 (2011), 2825-2830.
[33] D. Pflüger, B. Peherstorfer and H.-J. Bungartz, Spatially adaptive sparse grids for high-dimensional data-driven problems, Journal of Complexity, 26 (2010), 508-522. doi: 10.1016/j.jco.2010.04.001.
[34] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, 2006.
[35] C. E. Rasmussen and Z. Ghahramani, Infinite mixtures of Gaussian process experts, in Advances in Neural Information Processing Systems, (2002), 881-888.
[36] H. Robbins, An Empirical Bayes Approach to Statistics, Office of Scientific Research, US Air Force, 1955.
[37] J. Sacks, W. J. Welch, T. J. Mitchell and H. P. Wynn, Design and analysis of computer experiments, Statistical Science, 4 (1989), 409-435. doi: 10.1214/ss/1177012413.
[38] B. Settles, Active Learning Literature Survey, Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
[39] G. M. Staebler, J. E. Kinsey and R. E. Waltz, A theory-based transport model with comprehensive physics, Physics of Plasmas, 14 (2007), 055909. doi: 10.1063/1.2436852.
[40] V. Tresp, Mixtures of Gaussian processes, in Advances in Neural Information Processing Systems, (2001), 654-660.
[41] R. E. Waltz, G. M. Staebler, W. Dorland, G. W. Hammett, M. Kotschenreuther and J. A. Konings, A gyro-Landau-fluid transport model, Physics of Plasmas, 4 (1997), 2482-2496. doi: 10.1063/1.872228.
[42] M. Y. Wang, X. Wang and D. Guo, A level set method for structural topology optimization, Computer Methods in Applied Mechanics and Engineering, 192 (2003), 227-246. doi: 10.1016/S0045-7825(02)00559-5.
[43] D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, 2010.
[44] G. Zhang, C. G. Webster, M. Gunzburger and J. Burkardt, Hyperspherical sparse approximation techniques for high-dimensional discontinuity detection, SIAM Review, 58 (2016), 517-551. doi: 10.1137/16M1071699.
[45] O. C. Zienkiewicz, R. L. Taylor, P. Nithiarasu and J. Z. Zhu, The Finite Element Method, Vol. 3, McGraw-Hill, London, 1977.

Figure 1.  Numerical examples 1 (row a), 2 (row b), and 3 (row c). The functions are plotted in column (a), along with the final CCR machine output $ f_r(x, f_c(x)) $ and the intermediate $ f_c(x) $. Column (b) shows a scatter plot of the true $ f(x) $ against the CCR machine output $ f_r(x, f_c(x)) $, illustrating the correlation. Column (c) shows a histogram of $ f_r(x, f_c(x))-f(x) $, illustrating the discrepancy between the CCR reconstruction and the truth
Figure 2.  The results of CCR (a), DNN (b), and MLP (c) as applied to numerical example 2, $ f_2 $
Figure 3.  Numerical example 5. $ f_5(x) $ is plotted in panel (a), and $ f_r(x, f_c(x)) $ and $ f_c(x) $ are plotted in panels (b) and (c), respectively. Panel (e) shows a scatter plot of the true $ y(x) $ and the CCR machine $ f_r(x , f_c(x)) $, illustrating the correlation. Panel (d) shows a histogram of $ f_r(x , f_c(x))-y(x) $, illustrating the dissimilarity between the CCR reconstruction and the truth
Figure 4.  Numerical example 6. $ f_6(x) $ is plotted in panel (a), and $ f_r(x, f_c(x)) $ and $ f_c(x) $ are plotted in panels (b) and (c), respectively. Panel (e) shows a scatter plot of the true $ y(x) $ and the CCR machine $ f_r(x , f_c(x)) $, illustrating the correlation. Panel (d) shows a histogram of $ f_r(x , f_c(x))-y(x) $, illustrating the dissimilarity between the CCR reconstruction and the truth
Figure 5.  Numerical example 7. Subfigure (a) shows some two-variable slices over test data of the true function $ \chi $ (a-c), the CCR machine output $ f_r(x, f_c(x)) $ (d-f), the absolute difference $ |\chi(x) - f_r(x, f_c(x))| $ (g-i), and the intermediate $ f_c(x) $ (j-l), with the remaining inputs set to the mean $ \mathbb E(x_{\backslash ij}) $, where $ x_{\backslash ij} = (m_1, \dots, m_{i-1}, m_{i+1}, \dots, m_{j-1}, m_{j+1}, \dots, m_{10}) $ (assuming $ i<j $). Subfigure (b) shows the marginals of the input data distribution
Figure 6.  Numerical example 7. Subfigure (a) shows all the remaining two-variable slices of the true function $ \chi $ (constructed as described in Fig. 5), and subfigure (b) shows the corresponding CCR machine output $ f_r(x, f_c(x)) $
Figure 7.  Numerical example 7. The first 500 (random) training data output values are plotted in Panel (a), along with the clustering values of the training data, showing $ \chi $ and the cluster labels. Panel (b) shows prediction results on test data: the final CCR machine output $ f_r(x , f_c(x)) $, the true $ \chi(x) $, and the intermediate $ f_c(x) $. Panel (c) shows a scatter plot of the true $ y(x) $ and the CCR machine $ f_r(x , f_c(x)) $. Panel (d) shows a histogram of $ f_r(x , f_c(x))-\chi(x) $
Table 1.  L2 and R2 comparison for the 7 numerical examples

Example    1        2        3        4        5        6        7
L2         0.9934   0.9961   0.9964   0.9978   0.9825   0.9934   0.9835
R2         0.9978   0.9967   0.9987   0.9983   0.9845   0.9945   0.9832
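A short sketch of how scores of this form could be computed; reading "L2" as one minus the relative L2 error and "R2" as the usual coefficient of determination is our assumption, made here for illustration only.

```python
# Assumed reading of Table 1's metrics: "L2" = 1 - relative L2 error,
# "R2" = coefficient of determination; both equal 1.0 for a perfect fit.
import numpy as np
from sklearn.metrics import r2_score

def l2_accuracy(y_true, y_pred):
    # 1 - ||y_pred - y_true||_2 / ||y_true||_2
    return 1.0 - np.linalg.norm(y_pred - y_true) / np.linalg.norm(y_true)

# Usage with the CCR sketch above:
# y_hat = predict_ccr(f_c, f_r, X_test)
# print(l2_accuracy(y_test, y_hat), r2_score(y_test, y_hat))
```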
Table 2.  Error attained for active learning on Example 2 with strategy 1a: $ N_{\rm res} = 1000 $ and all points are used for passive learning, while only $ n = 150 $ points are used for active learning. Active learning recovers the same accuracy as when all the points are used

           Active   Passive
L2 Error   0.0039   0.0039
$ N $      150      1000
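For context, below is a hedged sketch of a generic pool-based active-learning loop in the spirit of Table 2. The paper's specific "strategy 1a" is not described in this excerpt, so the least-confidence query criterion used here is an illustrative stand-in: starting from a small random set, the loop repeatedly labels the pool point about which the current CCR classifier is least certain.

```python
# Generic pool-based active learning around the fit_ccr sketch above.
# The least-confidence criterion is an assumption, not the paper's strategy 1a.
import numpy as np

def active_ccr(X_pool, y_pool, fit_ccr, n_init=20, n_total=150, n_clusters=2, seed=0):
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(X_pool), size=n_init, replace=False))
    while len(idx) < n_total:
        f_c, _ = fit_ccr(X_pool[idx], y_pool[idx], n_clusters)
        proba = f_c.predict_proba(X_pool)   # class probabilities over the pool
        score = 1.0 - proba.max(axis=1)     # least-confidence uncertainty
        score[idx] = -np.inf                # never re-query a labeled point
        idx.append(int(np.argmax(score)))   # query the most uncertain point
    return fit_ccr(X_pool[idx], y_pool[idx], n_clusters)
```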