Group sparse Bayesian learning for data-driven discovery of explicit model forms with multiple parametric datasets

  • *Corresponding author: Jian-Xun Wang

The paper is handled by Andreas Mang as the guest editor.

Abstract
  • Extracting the explicit governing equations of a dynamical system from limited data has attracted increasing attention in the data-driven modeling community. Compared to black-box learning approaches, sparse-regression-based learning can discover an analytical model form from data, which is more appealing due to its white-box nature. However, distilling explicit equations from real-world measurements with data uncertainty is challenging, and many existing methods are not robust in this setting. Moreover, it is unclear how to efficiently learn a parametric system from multiple datasets with different parameters. This paper presents a group sparse Bayesian learning approach to uncover the explicit model forms of a parametric dynamical system with estimated uncertainties. A deep neural network is constructed to improve the calculation of derivatives from noisy measurements. Group sparsity is leveraged to enable synchronous learning from a group of parametric datasets governed by equations with the same functional form but different parameter settings. The proposed approach is studied on several linear and nonlinear ODE systems in explicit and implicit settings. In particular, a simplified parametric model of intracranial dynamics is identified from multiple synthetic datasets with different patient-specific parameters. The numerical results demonstrate the effectiveness of the proposed approach and the merit of synchronous learning from multiple datasets in a group-sparsifying Bayesian setting.

    Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
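As a concrete illustration of the sparse-regression-based discovery described in the abstract, below is a minimal SINDy-style sketch (sequentially thresholded least squares on a polynomial library) that recovers one of the linear systems used in the experiments. It is illustrative only and does not implement the paper's group sparse Bayesian method; all function names are ours.

```python
import numpy as np

def library(X):
    """Candidate terms: [1, x, y, x^2, x*y, y^2]."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def stlsq(theta, dX, threshold=0.02, iters=10):
    """Sequentially thresholded least squares (plain SINDy, not Bayesian)."""
    Xi = np.linalg.lstsq(theta, dX, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

# Simulate dx/dt = -0.05x + 1.5y, dy/dt = -1.5x - 0.05y with forward Euler,
# using exact derivatives at each sample for clarity.
A = np.array([[-0.05, 1.5], [-1.5, -0.05]])
dt, n = 0.01, 5000
X = np.empty((n, 2)); X[0] = [2.0, 0.0]
for i in range(n - 1):
    X[i + 1] = X[i] + dt * (A @ X[i])
dX = X @ A.T
Xi = stlsq(library(X), dX)
print(np.round(Xi.T, 3))  # rows ≈ [0, -0.05, 1.5, 0, 0, 0] and [0, -1.5, -0.05, 0, 0, 0]
```

With noise-free derivatives the sparse support and coefficients are recovered exactly; the paper's contribution is precisely about retaining this behavior under noisy data and multiple parametric datasets.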

  • Figure 1.  Schematics of the Bayesian group-sparsifying equation discovery framework: (a) neural network pre-processing (black: noisy signal; blue: denoised signal; red: estimated noise); (b) SINDy-type library construction; (c) group sparse Bayesian learning; and (d) an example of an identified sparse structure and posterior

    Figure 2.  Identified systems of two different sets of parameters with 5% noise: (1st column) $ \boldsymbol{\lambda} = [-0.05, 1.5, -1.5, -0.05] $ and (2nd column) $ \boldsymbol{\lambda} = [-0.15, 2, -2, -0.15] $. Only the trajectory of the state variable $ x $ is shown and the propagated uncertainty is given by the 3-$ \sigma $ region. The 2nd row is the zoomed-in view

    Figure 3.  Identified systems of two different sets of parameters with 5% noise: (1st column) $ \boldsymbol{\lambda} = [-0.05, 1.5, -1.5, -0.05] $ and (2nd column) $ \boldsymbol{\lambda} = [-0.15, 2, -2, -0.15] $. Only the trajectory of the state variable $ x $ is shown and the propagated uncertainty is given by the 3-$ \sigma $ region. The 2nd row is the zoomed-in view

    Figure 4.  Identified systems of two different sets of parameters with 2% noise: (1st column) $ [j_x, V_{max}, K_m] = [0.3, 1.2, 0.2] $ and (2nd column) $ [j_x, V_{max}, K_m] = [0.3, 1.8, 0.4] $. The trajectory of the state variable $ x $ is shown and the propagated uncertainty is given by the 3-$ \sigma $ region. The 2nd row is the zoomed-in view

    Figure 5.  Identified systems of two different sets of parameters with 1% noise: (1st column) $ [C_{an}, R_{f}, k_{E}] = [0.125, 2.38\times10^{3}, 0.231] $ and (2nd column) $ [C_{an}, R_{f}, k_{E}] = [0.15, 2.38\times10^{3}, 0.175] $. The trajectory of the ICP is shown and the propagated uncertainty is given by the 3-$ \sigma $ region
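A brief numerical aside on why the pre-processing stage of Figure 1(a) matters: finite-differencing raw noisy measurements amplifies the noise, which is what the denoising network mitigates. The sketch below uses a simple moving average as a stand-in denoiser (an assumption for illustration; the paper uses a deep neural network):

```python
import numpy as np

# Differentiating a noisy sine: raw finite differences vs. smoothed-then-differenced.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2001)
noisy = np.sin(t) + 0.05 * rng.standard_normal(t.size)

def smooth(y, w=51):
    """Moving-average stand-in for the neural denoiser."""
    return np.convolve(y, np.ones(w) / w, mode="same")

true_d = np.cos(t)
# Trim the ends to avoid convolution edge effects.
err_raw = np.sqrt(np.mean((np.gradient(noisy, t) - true_d)[100:-100] ** 2))
err_smooth = np.sqrt(np.mean((np.gradient(smooth(noisy), t) - true_d)[100:-100] ** 2))
print(err_raw, err_smooth)  # smoothing reduces the derivative error by a large factor
```

The raw-derivative error scales like the noise level divided by the time step, so even 5% noise overwhelms the true derivative; any reasonable smoother restores a usable signal for the regression library.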

    Table 1.  System identification results for the parametric linear systems, $ \frac{dx}{dt} = \lambda_1x + \lambda_2y $, $ \frac{dy}{dt} = \lambda_3x + \lambda_4y $ ($ 5\% $ data noise), where $ \lambda_{i} $ denotes the relevant coefficient, $ C(\text{const1}) $ denotes the redundant constant coefficient and $ \sigma(C(\text{const1})) $ denotes the standard deviation for equation $ \frac{dx}{dt} = \lambda_1x + \lambda_2y $, while $ C(\text{const2}) $ and $ \sigma(C(\text{const2})) $ are the corresponding values for equation $ \frac{dy}{dt} = \lambda_3x + \lambda_4y $

    Identified systems with group sparsity
    Case 1 2 3 4 5 6 7 8 9
    $ \lambda_1 $ -0.0493 -0.0494 -0.0495 -0.1004 -0.1007 -0.0999 -0.1499 -0.1490 -0.1504
    $ \sigma(\lambda_1) $ 0.0019 0.0019 0.0019 0.0017 0.0016 0.0016 0.0015 0.0015 0.0015
    $ \lambda_2 $ 1.5010 2.0001 2.5008 1.5026 2.0018 2.5009 1.5015 2.0022 2.4996
    $ \sigma(\lambda_2) $ 0.0019 0.0019 0.0019 0.0016 0.0016 0.0016 0.0015 0.0015 0.0015
    $ \lambda_3 $ -1.4997 -2.0004 -2.4985 -1.4992 -1.9986 -2.4994 -1.5012 -2.0000 -2.5017
    $ \sigma(\lambda_3) $ 0.0019 0.0019 0.0019 0.0017 0.0016 0.0016 0.0015 0.0015 0.0015
    $ \lambda_4 $ -0.0515 -0.0510 -0.0503 -0.1017 -0.1009 -0.1011 -0.1530 -0.1532 -0.1513
    $ \sigma(\lambda_4) $ 0.0019 0.0019 0.0019 0.0016 0.0016 0.0016 0.0015 0.0015 0.0015
    Identified systems without group sparsity
    $ \lambda_1 $ -0.0493 -0.0494 -0.0495 -0.1004 -0.1007 -0.0999 -0.1499 -0.1491 -0.1505
    $ \sigma(\lambda_1) $ 0.0019 0.0019 0.0019 0.0017 0.0016 0.0016 0.0015 0.0015 0.0015
    $ \lambda_2 $ 1.5002 1.9997 2.5007 1.5008 2.0004 2.4998 1.4991 2.0000 2.4978
    $ \sigma(\lambda_2) $ 0.0019 0.0019 0.0019 0.0016 0.0016 0.0016 0.0015 0.0015 0.0015
    $ C(\text{const1}) $ 0 -0.0424 -0.0524 0 0 -0.0370 0 0 0
    $ \sigma(C(\text{const1})) $ 0 0.0017 0.0017 0 0 0.0013 0 0 0
    $ \lambda_3 $ -1.4997 -2.0004 -2.4986 -1.4992 -1.9987 -2.4994 -1.5012 -1.9999 -2.5016
    $ \sigma(\lambda_3) $ 0.0019 0.0019 0.0019 0.0017 0.0016 0.0016 0.0015 0.0015 0.0015
    $ \lambda_4 $ -0.0507 -0.0506 -0.0502 -0.0996 -0.0993 -0.0999 -0.1499 -0.1507 -0.1493
    $ \sigma(\lambda_4) $ 0.0019 0.0019 0.0019 0.0016 0.0016 0.0016 0.0015 0.0015 0.0015
    $ C(\text{const2}) $ 0 0.0440 0.0551 0 0 0.0411 0 0 0
    $ \sigma(C(\text{const2})) $ 0 0.0017 0.0017 0 0 0.0013 0 0 0
    True systems
    $ \lambda_1 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15
    $ \lambda_2 $ 1.5 2.0 2.5 1.5 2.0 2.5 1.5 2.0 2.5
    $ \lambda_3 $ -1.5 -2.0 -2.5 -1.5 -2.0 -2.5 -1.5 -2.0 -2.5
    $ \lambda_4 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15
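The contrast in Table 1 can be sketched in code. In group-sparse identification, the coefficients of one library term across all parametric datasets form a single group that is kept or dropped as a whole, so a term that is small in any one dataset still survives if it is consistently present. The sketch below uses a group-norm threshold on per-dataset least-squares fits as a crude proxy (the paper uses a Bayesian group-sparsifying prior instead; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_dataset(lam, n=200):
    """Samples of dx/dt = lam[0]*x + lam[1]*y with a tiny noise floor.
    Library columns: [1, x, y]."""
    X = rng.standard_normal((n, 2))
    theta = np.column_stack([np.ones(n), X])
    dx = lam[0] * X[:, 0] + lam[1] * X[:, 1] + 0.01 * rng.standard_normal(n)
    return theta, dx

thetas, dxs = zip(*[make_dataset(lam) for lam in [(-0.05, 1.5), (-0.15, 2.5)]])

def group_stlsq(thetas, dxs, threshold=0.1, iters=5):
    """Keep/drop each library term jointly across all datasets (group norm)."""
    Xi = np.stack([np.linalg.lstsq(T, d, rcond=None)[0] for T, d in zip(thetas, dxs)])
    for _ in range(iters):
        keep = np.linalg.norm(Xi, axis=0) > threshold  # one decision per term
        Xi[:, ~keep] = 0.0
        for j, (T, d) in enumerate(zip(thetas, dxs)):
            Xi[j, keep] = np.linalg.lstsq(T[:, keep], d, rcond=None)[0]
    return Xi

Xi = group_stlsq(list(thetas), list(dxs))
```

Note that the individual coefficients -0.05 and -0.15 are both below the 0.1 threshold, yet the term survives because its group norm across datasets exceeds it, while the spurious constant is dropped for every dataset at once; this mirrors why the group-sparse rows of Table 1 contain no redundant constants.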

    Table 2.  Different metrics for the parametric linear systems, $ \frac{dx}{dt} = \lambda_1x + \lambda_2y $, $ \frac{dy}{dt} = \lambda_3x + \lambda_4y $. ($ 5\% $ data noise)

    Identified systems with group sparsity
    Case 1 2 3 4 5 6 7 8 9
    $ \textbf{rmse} $ 0.90 0.44 0.51 1.50 0.91 0.44 1.66 1.41 0.62
    $ \mathbf{M_{R}} $ 1 1 1 1 1 1 1 1 1
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1
    Identified systems without group sparsity
    $ \textbf{rmse} $ 0.50 21.59 21.51 0.58 0.58 15.62 0.70 0.41 0.81
    $ \mathbf{M_{R}} $ 1 0.67 0.67 1 1 0.67 1 1 1
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1
    Identified systems with non-parsimonious model
    $ \textbf{rmse} $ 1414.37 1417.92 1425.69 1415.33 1412.46 1417.82 1422.44 1419.52 1411.64
    $ \mathbf{M_{R}} $ 0.095 0.095 0.095 0.095 0.095 0.095 0.095 0.095 0.095
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1
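The metrics in Table 2 can be read as support-recovery scores. Assuming $ \mathbf{M_{R}} $ and $ \mathbf{M_{P}} $ are recall and precision over the identified library terms (our interpretation; the paper's exact definitions govern), a minimal computation looks like:

```python
import numpy as np

def support_metrics(xi_hat, xi_true, tol=1e-8):
    """Recall and precision of the nonzero pattern of identified coefficients."""
    found = np.abs(xi_hat) > tol
    truth = np.abs(xi_true) > tol
    tp = np.sum(found & truth)
    recall = tp / max(truth.sum(), 1)     # fraction of true terms recovered
    precision = tp / max(found.sum(), 1)  # fraction of recovered terms that are true
    return recall, precision

# Hypothetical example: both true terms found, plus one spurious term.
xi_true = np.array([0.0, -0.05, 1.5, 0.0, 0.0, 0.0])
xi_hat = np.array([0.0, -0.049, 1.501, 0.0, 0.0, -0.042])
m_r, m_p = support_metrics(xi_hat, xi_true)  # recall 1.0, precision ≈ 0.67
```

A score of 1 for both metrics, as in the group-sparsity rows above, means the identified model has exactly the true sparse structure.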

    Table 3.  Different metrics for the parametric cubic systems $ \frac{dx}{dt} = \lambda_1x^3 + \lambda_2y^3 $, $ \frac{dy}{dt} = \lambda_3x^3 + \lambda_4y^3 $ (5% data noise)

    Identified systems with group sparsity
    Case 1 2 3 4 5 6 7 8 9
    $ \textbf{rmse} $ 5.32 2.05 2.86 8.03 6.38 3.97 11.62 8.64 6.14
    $ \mathbf{M_{R}} $ 1 1 1 1 1 1 1 1 1
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1
    Identified systems without group sparsity
    $ \textbf{rmse} $ 61.16 61.25 60.07 48.28 50.58 46.30 40.66 46.12 45.36
    $ \mathbf{M_{R}} $ 0.67 0.67 0.67 0.67 0.67 0.57 0.67 0.5 0.57
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1
    Identified systems with non-parsimonious model
    $ \textbf{rmse} $ 64.10 72.08 70.49 56.96 51.59 55.12 50.09 52.25 46.78
    $ \mathbf{M_{R}} $ 0.095 0.095 0.095 0.095 0.095 0.095 0.095 0.095 0.095
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1

    Table 4.  System identification results for the parametric cubic systems $ \frac{dx}{dt} = \lambda_1x^3 + \lambda_2y^3 $, $ \frac{dy}{dt} = \lambda_3x^3 + \lambda_4y^3 $ (5% data noise), where $ \lambda_i $ denotes the relevant coefficients and $ C(\square) $ denotes the coefficient of the redundant term $ \square $

    Identified systems with group sparsity
    Case 1 2 3 4 5 6 7 8 9
    $ \lambda_1 $ -0.0458 -0.0471 -0.0468 -0.0956 -0.0951 -0.0953 -0.1456 -0.1428 -0.1448
    $ \sigma(\lambda_1) $ 0.0111 0.0144 0.0177 0.0073 0.0101 0.0114 0.0054 0.0085 0.0097
    $ \lambda_2 $ 1.5028 2.0037 2.4908 1.5093 2.0034 2.4931 1.5167 2.0100 2.5086
    $ \sigma(\lambda_2) $ 0.0106 0.0141 0.0177 0.0066 0.0096 0.0112 0.0047 0.0078 0.0094
    $ \lambda_3 $ -1.5086 -2.0034 -2.5001 -1.5132 -2.0170 -2.5113 -1.5167 -2.0209 -2.5190
    $ \sigma(\lambda_3) $ 0.0106 0.0139 0.0173 0.0070 0.0104 0.0117 0.0054 0.0074 0.0092
    $ \lambda_4 $ -0.0553 -0.0496 -0.0470 -0.1032 -0.1014 -0.0989 -0.1560 -0.1536 -0.1467
    $ \sigma(\lambda_4) $ 0.0101 0.0136 0.0172 0.0064 0.0099 0.0115 0.0046 0.0068 0.0089
    Identified systems without group sparsity
    $ \lambda_1 $ -0.0474 -0.0493 -0.0486 -0.0982 -0.1007 -0.1007 -0.1488 -0.1496 -0.1450
    $ \sigma(\lambda_1) $ 0.0021 0.0033 0.0050 0.0016 0.0027 0.0034 0.0012 0.0029 0.0031
    $ \lambda_2 $ 1.4991 2.0037 2.5034 1.5019 1.9898 2.4886 1.5034 2.0037 2.5127
    $ \sigma(\lambda_2) $ 0.0024 0.0043 0.0068 0.0020 0.0034 0.0045 0.0015 0.0034 0.0038
    $ C(y^2) $ -0.0921 -0.1260 -0.1530 -0.0737 -0.0966 -0.1098 -0.0603 -0.0849 -0.1174
    $ \sigma(C(y^2)) $ 0.0029 0.0050 0.0078 0.0022 0.0038 0.0045 0.0017 0.0037 0.0042
    $ C(x^2y) $ 0 0 0 0 0 0.0405 0 0.415 0
    $ \sigma(C(x^2y)) $ 0 0 0 0 0 0.0086 0 0.0068 0
    $ C(x^2y^2) $ 0 0 0 0 0 0 0 0 0.0416
    $ \sigma(C(x^2y^2)) $ 0 0 0 0 0 0 0 0 0.0051
    $ C(x^2y^3) $ 0 0 0 0 0 0 0 -0.0381 0
    $ \sigma(C(x^2y^3)) $ 0 0 0 0 0 0 0 0.0068 0
    $ \lambda_3 $ -1.5033 -1.9984 -2.4967 -1.5000 -2.0028 -2.4993 -1.4975 -2.0010 -2.5003
    $ \sigma(\lambda_3) $ 0.0026 0.0041 0.0054 0.0018 0.0028 0.0044 0.0015 0.0027 0.0030
    $ \lambda_4 $ -0.0547 -0.0509 -0.0490 -0.1008 -0.1017 -0.1001 -0.1517 -0.1531 -0.1475
    $ \sigma(\lambda_4) $ 0.0022 0.0035 0.0046 0.0014 0.0023 0.0038 0.0011 0.0022 0.0026
    $ C(x^2) $ 0.0912 0.1189 0.1471 0.0714 0.1052 0.1137 0.0621 0.0819 0.1005
    $ \sigma(C(x^2)) $ 0.0026 0.0041 0.0054 0.0018 0.0028 0.0044 0.0015 0.0027 0.0030
    True systems
    $ \lambda_1 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15
    $ \lambda_2 $ 1.5 2.0 2.5 1.5 2.0 2.5 1.5 2.0 2.5
    $ \lambda_3 $ -1.5 -2.0 -2.5 -1.5 -2.0 -2.5 -1.5 -2.0 -2.5
    $ \lambda_4 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15

    Table 5.  Different metrics for the parametric Michaelis–Menten model (2% data noise)

    Identified systems with group sparsity
    Case 1 2 3 4 5 6 7 8 9 10
    $ \textbf{rmse} $ 42.74 9.53 32.39 30.08 12.99 13.40 17.58 12.23 33.10 35.94
    $ \mathbf{M_{R}} $ 1 1 1 1 1 1 1 1 1 1
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1 1
    Identified systems with non-parsimonious model
    $ \textbf{rmse} $ 776.67 704.91 728.43 716.23 625.28 606.92 603.25 748.48 708.18 754.67
    $ \mathbf{M_{R}} $ 0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43
    $ \mathbf{M_{P}} $ 1 1 1 1 1 1 1 1 1 1

    Table 6.  Identification results for the parametric Michaelis–Menten model (2% data noise), where $ \lambda_i $ denotes the coefficient and $ \sigma(\lambda_i) $ denotes the standard deviation for the parsimonious library terms

    Identified systems with group sparsity
    Term Coeff 1 2 3 4 5 6 7 8 9 10
    $ dx/dt $ $ \lambda_1 $ -0.1608 -0.1926 -0.2607 -0.3621 -0.2077 -0.2870 -0.3776 -0.1872 -0.2591 -0.3541
    $ \sigma(\lambda_1) $ 0.0288 0.0203 0.226 0.0253 0.0166 0.163 0.0198 0.0199 0.0256 0.0263
    $ x $ $ \lambda_2 $ -0.8985 -1.2028 -1.1931 -1.1984 -1.5070 -1.5002 -1.4989 -1.2025 -1.1962 -1.1962
    $ \sigma(\lambda_2) $ 0.0097 0.0088 0.0096 0.0105 0.0087 0.0085 0.0098 0.0088 0.0107 0.0110
    $ {\text{constant}} $ $ {\lambda_3} $ 0.0645 0.0516 0.0851 0.1154 0.0433 0.0741 0.1043 0.1127 0.1754 0.2354
    $ \sigma({\lambda_3}) $ 0.0154 0.0116 0.0121 0.0127 0.0098 0.0090 0.0105 0.0109 0.0131 0.0127
    True systems
    $ dx/dt $ $ \lambda_1 $ -0.2 -0.2 -0.3 -0.4 -0.2 -0.3 -0.4 -0.2 -0.3 -0.4
    $ x $ $ \lambda_2 $ -0.9 -1.2 -1.2 -1.2 -1.5 -1.5 -1.5 -1.2 -1.2 -1.2
    $ \text{constant} $ $ {\lambda_3} $ 0.06 0.06 0.09 0.12 0.06 0.09 0.12 0.12 0.18 0.24
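The three-term parsimonious model of Table 6 is the rational-free (implicit) rewriting of the Michaelis–Menten dynamics $ \dot{x} = j_x - V_{max}\,x/(K_m + x) $: multiplying through by $ (K_m + x) $ gives $ x\dot{x} = -K_m\dot{x} - (V_{max} - j_x)\,x + j_xK_m $, whose coefficients for case 1 ($ j_x = 0.3 $, $ V_{max} = 1.2 $, $ K_m = 0.2 $) are $ [-0.2, -0.9, 0.06] $, matching the true values listed above. This identity can be checked numerically (a sanity check we added, not the paper's code):

```python
import numpy as np

# Parameters of case 1 in the Michaelis-Menten experiments.
jx, vmax, km = 0.3, 1.2, 0.2

# Evaluate both sides of the implicit form along a range of states.
x = np.linspace(0.05, 1.0, 50)
xdot = jx - vmax * x / (km + x)              # explicit rational dynamics
lhs = x * xdot                               # term with unit coefficient
rhs = -km * xdot - (vmax - jx) * x + jx * km # the three library terms of Table 6
assert np.allclose(lhs, rhs)
```

Normalizing the $ x\dot{x} $ coefficient to one is what makes the remaining three coefficients identifiable as $ \lambda_1 = -K_m $, $ \lambda_2 = -(V_{max} - j_x) $ and $ \lambda_3 = j_xK_m $.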

    Table 7.  Control variables for parametric system identification

    Case $ C_{an} $ $ R_{f} $ $ k_{E} $
    $ 1 $ $ 0.15 $ $ 2.38\times10^{3} $ $ 0.231 $
    $ 2 $ $ 0.125 $ $ 2.38\times10^{3} $ $ 0.231 $
    $ 3 $ $ 0.15 $ $ 2.5\times10^{3} $ $ 0.231 $
    $ 4 $ $ 0.15 $ $ 2.38\times10^{3} $ $ 0.2 $
    $ 5 $ $ 0.15 $ $ 2.38\times10^{3} $ $ 0.175 $

    Table 8.  Right-hand-side library and true parametric coefficients

    True systems
    Term Coeff 1 2 3 4 5
    $ P_{ic}P_aR_a\frac{d{C}_a}{dt} $ $ \lambda_1 $ 0.231 0.231 0.231 0.2 0.175
    $ P_{ic}^2R_a\frac{d{C}_a}{dt} $ $ \lambda_2 $ -0.231 -0.231 -0.231 -0.2 -0.175
    $ P_aP_{ic} $ $ \lambda_3 $ $ 1.2035\times10^{-4} $ $ 1.2035\times10^{-4} $ $ 1.1458\times10^{-4} $ $ 1.0420\times10^{-4} $ $ 9.118\times10^{-5} $
    $ P_{ic}^2 $ $ \lambda_4 $ $ -1.6571\times10^{-4} $ $ -1.6571\times10^{-4} $ $ -1.5993\times10^{-4} $ $ -1.4347\times10^{-4} $ $ -1.2554\times10^{-4} $
    $ P_{ic}^2Ra $ $ \lambda_5 $ $ -3.6576\times10^{-5} $ $ -3.6576\times10^{-5} $ $ -3.6576\times10^{-5} $ $ -3.1668\times10^{-5} $ $ -2.7709\times10^{-5} $
    $ P_{ic}Ra $ $ \lambda_6 $ $ 2.1946\times10^{-4} $ $ 2.1946\times10^{-4} $ $ 2.1946\times10^{-4} $ $ 1.9001\times10^{-4} $ $ 1.6625\times10^{-4} $
    $ P_{ic} $ $ \lambda_7 $ $ 2.7213\times10^{-4} $ $ 2.7213\times10^{-4} $ $ 2.7213\times10^{-4} $ $ 2.3561\times10^{-4} $ $ 2.0616\times10^{-4} $
    $ \frac{dP_{ic}}{dt} $ $ \lambda_8 $ $ -1.24 $ $ -1.24 $ $ -1.24 $ $ -1.24 $ $ -1.24 $
    $ C_aP_{ic}R_a\frac{d{P}_{ic}}{dt} $ $ \lambda_9 $ $ -0.231 $ $ -0.231 $ $ -0.231 $ $ -0.2 $ $ -0.175 $
    $ C_aP_{ic}\frac{d{P}_{ic}}{dt} $ $ \lambda_{10} $ $ -0.2864 $ $ -0.2864 $ $ -0.2864 $ $ -0.248 $ $ -0.217 $
    $ P_aP_{ic}\frac{d{C_{a}}}{dt} $ $ \lambda_{11} $ $ -0.2864 $ $ -0.2864 $ $ -0.2864 $ $ -0.248 $ $ -0.217 $
    $ P_{ic}^2\frac{d{C_{a}}}{dt} $ $ \lambda_{12} $ $ -0.2864 $ $ -0.2864 $ $ -0.2864 $ $ -0.248 $ $ -0.217 $

    Table 9.  Different metrics for parametric ICP model

    Identified systems with group sparsity
    Case 1 2 3 4 5
    $ \textbf{rmse} $ 406.25 407.79 407.79 359.38 351.82
    $ \mathbf{M_{R}} $ 1 1 1 1 1
    $ \mathbf{M_{P}} $ 1 1 1 1 0.92
    Identified systems with non-parsimonious model
    $ \textbf{rmse} $ 957.18 956.50 957.26 966.70 973.60
    $ \mathbf{M_{R}} $ 0.79 0.77 0.79 0.77 0.77
    $ \mathbf{M_{P}} $ 0.92 0.83 0.92 0.83 0.83

    Table 10.  Right-hand-side library and inferred parametric coefficients, where $ \lambda_i $ denotes the coefficient and $ \sigma(\lambda_i) $ denotes the standard deviation for the parsimonious library terms

    Identified systems with group sparsity
    Term Coeff 1 2 3 4 5
    $ P_{ic}P_aR_a\frac{d{C}_a}{dt} $ $ \lambda_1 $ 0.231 0.231 0.231 0.2 0.175
    $ \sigma(\lambda_1) $ $ 2.3745\times10^{-4} $ $ 5.9450\times10^{-4} $ $ 2.7723\times10^{-4} $ $ 3.4780\times10^{-4} $ $ 3.9709\times10^{-4} $
    $ P_{ic}^2R_a\frac{d{C}_a}{dt} $ $ \lambda_2 $ -0.231 -0.2307 -0.231 -0.2 -0.1748
    $ \sigma(\lambda_2) $ $ 5.9808\times10^{-4} $ $ 2.02\times10^{-2} $ $ 8.626\times10^{-4} $ $ 3\times10^{-3} $ $ 7.6\times10^{-3} $
    $ P_aP_{ic} $ $ \lambda_3 $ $ 1.1972\times10^{-4} $ $ 1.1905\times10^{-4} $ $ 1.1421\times10^{-4} $ $ 1.0319\times10^{-4} $ $ 8.886\times10^{-5} $
    $ \sigma(\lambda_3) $ $ 2.62\times10^{-5} $ $ 5.881\times10^{-5} $ $ 3.841\times10^{-5} $ $ 6.221\times10^{-5} $ $ 1.6\times10^{-5} $
    $ P_{ic}^2 $ $ \lambda_4 $ $ -1.6358\times10^{-4} $ $ -1.6118\times10^{-4} $ $ -1.5867\times10^{-4} $ $ -1.4143\times10^{-4} $ $ -1.1745\times10^{-4} $
    $ \sigma(\lambda_4) $ $ 6.862\times10^{-5} $ $ 1.4322\times10^{-4} $ $ 1.0414\times10^{-4} $ $ 1.3692\times10^{-4} $ $ 2.907\times10^{-5} $
    $ P_{ic}^2Ra $ $ \lambda_5 $ $ -3.6628\times10^{-5} $ $ -3.6622\times10^{-5} $ $ -3.6604\times10^{-4} $ $ -3.1036\times10^{-5} $ $ -2.7871\times10^{-5} $
    $ \sigma(\lambda_5) $ $ 2.364\times10^{-6} $ $ 1.1038\times10^{-5} $ $ 2.988\times10^{-6} $ $ 4.857\times10^{-5} $ $ 6.661\times10^{-6} $
    $ P_{ic}Ra $ $ \lambda_6 $ $ 2.2857\times10^{-4} $ $ 2.3723\times10^{-4} $ $ 2.2446\times10^{-4} $ $ 2.0615\times10^{-4} $ $ 1.9794\times10^{-4} $
    $ \sigma(\lambda_6) $ $ 4.2186\times10^{-4} $ $ 8.3757\times10^{-4} $ $ 5.9635\times10^{-4} $ $ 1.1\times10^{-3} $ $ 1.8554\times10^{-4} $
    $ P_{ic} $ $ \lambda_7 $ $ 2.1870\times10^{-4} $ $ 1.5004\times10^{-4} $ $ 2.4190\times10^{-4} $ $ 1.3637\times10^{-4} $ $ 0 $
    $ \sigma(\lambda_7) $ $ 2\times10^{-3} $ $ 4.7\times10^{-3} $ $ 3.1\times10^{-3} $ $ 5.6\times10^{-3} $ $ 0 $
    $ \frac{dP_{ic}}{dt} $ $ \lambda_8 $ $ -1.2071 $ $ -1.1633 $ $ -1.2195 $ $ -1.1759 $ $ -1.0668 $
    $ \sigma(\lambda_8) $ $ 0.9734 $ $ 1.8386 $ $ 1.5444 $ $ 2.4216 $ $ 0.4493 $
    $ C_aP_{ic}R_a\frac{d{P}_{ic}}{dt} $ $ \lambda_9 $ $ -0.231 $ $ -0.2312 $ $ -0.231 $ $ -0.2 $ $ -0.1751 $
    $ \sigma(\lambda_9) $ $ 5.1\times10^{-4} $ $ 1.19\times10^{-2} $ $ 7.4645\times10^{-4} $ $ 2.4\times10^{-3} $ $ 5.4\times10^{-3} $
    $ C_aP_{ic}\frac{d{P_{ic}}}{dt} $ $ \lambda_{10} $ $ -0.2789 $ $ -0.2678 $ $ -0.2817 $ $ -0.2351 $ $ -0.1859 $
    $ \sigma(\lambda_{10}) $ $ 0.2246 $ $ 0.4533 $ $ 0.3567 $ $ 0.4868 $ $ 8.89\times10^{-2} $
    $ P_aP_{ic}\frac{d{C_{a}}}{dt} $ $ \lambda_{11} $ $ -0.2789 $ $ -0.2687 $ $ -0.2817 $ $ -0.2352 $ $ -0.1867 $
    $ \sigma(\lambda_{11}) $ $ 0.2246 $ $ 0.4246 $ $ 0.3562 $ $ 0.4843 $ $ 7.77\times10^{-2} $
    $ P_{ic}^2\frac{d{C_{a}}}{dt} $ $ \lambda_{12} $ $ -0.2788 $ $ -0.2703 $ $ -0.2817 $ $ -0.2352 $ $ -0.1877 $
    $ \sigma(\lambda_{12}) $ $ 0.2249 $ $ 0.3936 $ $ 0.3562 $ $ 0.4819 $ $ 7.36\times10^{-2} $

    Table 11.  Training cost comparison

    Linear Cubic Michaelis–Menten ICP
    $ \textbf{Parsimonious} $ 5s 13s 4882s 1148s
    $ \textbf{Non-parsimonious} $ 0.67s 0.2s 1251s 1134s

    Table 12.  System identification results for the linear parametric ODE, $ \frac{dx}{dt} = \lambda_1x + \lambda_2y $, $ \frac{dy}{dt} = \lambda_3x + \lambda_4y $ with $ 10\% $ data noise, where $ \lambda_{i} $ denotes the relevant coefficient, $ C(\text{const1}) $ denotes the redundant constant coefficient and $ \sigma(C(\text{const1})) $ denotes the standard deviation for equation $ \frac{dx}{dt} = \lambda_1x + \lambda_2y $, while $ C(\text{const2}) $ and $ \sigma(C(\text{const2})) $ are the corresponding values for equation $ \frac{dy}{dt} = \lambda_3x + \lambda_4y $

    Identified systems with group sparsity
    Case 1 2 3 4 5 6 7 8 9
    $ \lambda_1 $ -0.0506 -0.0513 -0.0522 -0.1026 -0.1042 -0.1033 -0.1517 -0.1502 -0.1539
    $ \sigma(\lambda_1) $ 0.0038 0.0038 0.0038 0.0033 0.0033 0.0033 0.0030 0.0030 0.0030
    $ \lambda_2 $ 1.5001 1.9977 2.4989 1.5034 2.0012 2.4992 1.5012 2.0017 2.4964
    $ \sigma(\lambda_2) $ 0.0038 0.0037 0.0037 0.0033 0.0032 0.0032 0.0029 0.0029 0.0029
    $ \lambda_3 $ -1.4976 -1.9982 -2.4940 -1.4963 -1.9945 -2.4957 -1.4996 -1.9973 -2.4997
    $ \sigma(\lambda_3) $ 0.0038 0.0038 0.0038 0.0033 0.0033 0.0033 0.0030 0.0030 0.0030
    $ \lambda_4 $ -0.0510 -0.0494 -0.0474 -0.1013 -0.0992 -0.0989 -0.1536 -0.1535 -0.1492
    $ \sigma(\lambda_4) $ 0.0038 0.0037 0.0037 0.0033 0.0032 0.0032 0.0029 0.0029 0.0029
    Identified systems without group sparsity
    $ \lambda_1 $ -0.0487 -0.0487 -0.0489 -0.1006 -0.1014 -0.1000 -0.1498 -0.1476 -0.1509
    $ \sigma(\lambda_1) $ 0.0038 0.0038 0.0038 0.0033 0.0033 0.0033 0.0030 0.0030 0.0030
    $ \lambda_2 $ 1.5005 1.9995 2.5017 1.5017 2.0008 2.5000 1.4982 1.9999 2.4958
    $ \sigma(\lambda_2) $ 0.0038 0.0037 0.0037 0.0033 0.0032 0.0032 0.0029 0.0029 0.0029
    $ C(\text{const1}) $ -0.0626 -0.0847 -0.1048 -0.0436 -0.0608 -0.0740 0 -0.0475 -0.0597
    $ \sigma(C(\text{const1})) $ 0.0035 0.0035 0.0035 0.0026 0.0026 0.0026 0 0.0021 0.0021
    $ \lambda_3 $ -1.4997 -2.0004 -2.4986 -1.4992 -1.9987 -2.4994 -1.5012 -1.9999 -2.5016
    $ \sigma(\lambda_3) $ 0.0019 0.0019 0.0019 0.0017 0.0016 0.0016 0.0015 0.0015 0.0015
    $ \lambda_4 $ -0.0507 -0.0506 -0.0502 -0.0996 -0.0993 -0.0999 -0.1499 -0.1507 -0.1493
    $ \sigma(\lambda_4) $ 0.0019 0.0019 0.0019 0.0016 0.0016 0.0016 0.0015 0.0015 0.0015
    $ C(\text{const2}) $ 0 0.0440 0.0551 0 0 0.0411 0 0 0
    $ \sigma(C(\text{const2})) $ 0 0.0017 0.0017 0 0 0.0013 0 0 0
    True systems
    $ \lambda_1 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15
    $ \lambda_2 $ 1.5 2.0 2.5 1.5 2.0 2.5 1.5 2.0 2.5
    $ \lambda_3 $ -1.5 -2.0 -2.5 -1.5 -2.0 -2.5 -1.5 -2.0 -2.5
    $ \lambda_4 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15

    Table 13.  System identification results for the parametric cubic systems $ \frac{dx}{dt} = \lambda_1x^3 + \lambda_2y^3 $, $ \frac{dy}{dt} = \lambda_3x^3 + \lambda_4y^3 $ with 10% data noise

    Identified systems with group sparsity
    Case 1 2 3 4 5 6 7 8 9
    $ C(y^2) $ -0.1749 -0.2424 -0.2813 -0.1295 -0.1940 -0.2175 -0.1045 -0.1744 -0.2075
    $ \sigma(C(y^2)) $ 0.0044 0.0061 0.0088 0.0027 0.0041 0.0050 0.0026 0.0035 0.0051
    $ \lambda_1 $ -0.0485 -0.0458 -0.0503 -0.1007 -0.0978 -0.1025 -0.1498 -0.1536 -0.1525
    $ \sigma(\lambda_1) $ 0.0039 0.0053 0.0075 0.0025 0.0038 0.0045 0.0025 0.0033 0.0047
    $ \lambda_2 $ 1.5013 1.9995 2.4978 1.4994 2.0097 2.4998 1.4909 2.0109 2.5204
    $ \sigma(\lambda_2) $ 0.0037 0.0052 0.0075 0.0023 0.0036 0.0044 0.0022 0.0031 0.0045
    $ C(x^2) $ 0.1797 0.2177 0.3016 0.1434 0.1922 0.2480 0.1273 0.1710 0.2083
    $ \sigma(C(x^2)) $ 0.0047 0.0060 0.0084 0.0032 0.0049 0.0061 0.0030 0.0037 0.0060
    $ \lambda_3 $ -1.5033 -2.0000 -2.5049 -1.5059 -1.9995 -2.5155 -1.5117 -2.0026 -2.5049
    $ \sigma(\lambda_3) $ 0.0041 0.0051 0.0072 0.0029 0.0043 0.0054 0.0028 0.0034 0.0055
    $ \lambda_4 $ -0.0508 -0.0541 -0.0514 -0.0998 -0.1056 -0.0950 -0.1473 -0.1528 -0.1485
    $ \sigma(\lambda_4) $ 0.0039 0.0050 0.0072 0.0026 0.0041 0.0053 0.0023 0.0030 0.0053
    Identified systems without group sparsity
    $ C(y^2) $ $ -0.1772 $ $ -0.2423 $ $ -0.2791 $ $ -0.1377 $ $ -0.2122 $ $ -0.2376 $ $ -0.1153 $ $ -0.1898 $ $ -0.1975 $
    $ \sigma(C(y^2)) $ $ 0.0056 $ $ 0.0055 $ $ 0.0082 $ $ 0.0034 $ $ 0.0042 $ $ 0.0056 $ $ 0.0030 $ $ 0.0039 $ $ 0.0062 $
    $ \lambda_1 $ -0.0489 -0.0468 -0.0514 -0.0990 -0.0931 -0.1005 -0.1476 -0.1493 -0.1548
    $ \sigma(\lambda_1) $ 0.0037 0.0047 0.0055 0.0024 0.0030 0.0038 0.0022 0.0030 0.0045
    $ C(x^2y) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ -0.0558 $
    $ \sigma(C(x^2y)) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0189 $
    $ \lambda_2 $ 1.5068 1.9985 2.5174 1.4950 2.0084 2.5035 1.4918 2.0150 2.5325
    $ \sigma(\lambda_2) $ 0.0051 0.0069 0.0083 0.0034 0.0041 0.0050 0.0022 0.0039 0.0062
    $ C(x^2y^2) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0361 $ $ 0.0378 $ $ 0 $ $ 0.0365 $ $ 0 $
    $ \sigma(C(x^2y^2)) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0049 $ $ 0.0062 $ $ 0 $ $ 0.0051 $ $ 0 $
    $ C(x^4y) $ $ 0 $ $ 0 $ $ 0.0513 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $
    $ \sigma(C(x^4y)) $ $ 0 $ $ 0 $ $ 0.0137 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $
    $ C(x^2y^3) $ 0 0 -0.0589 0 0 0 0 0 0
    $ \sigma(C(x^2y^3)) $ 0 0 0.0125 0 0 0 0 0 0
    $ C(x^2) $ $ 0.1798 $ $ 0.2060 $ $ 0.2977 $ $ 0.1418 $ $ 0.1946 $ $ 0.2289 $ $ 0.1264 $ $ 0.1610 $ $ 0.2011 $
    $ \sigma(C(x^2)) $ $ 0.0050 $ $ 0.0072 $ $ 0.0092 $ $ 0.0039 $ $ 0.0042 $ $ 0.0058 $ $ 0.0023 $ $ 0.0032 $ $ 0.0059 $
    $ \lambda_3 $ -1.4942 -1.9919 -2.4866 -1.5093 -2.0194 -2.5208 -1.5049 -2.0173 -2.5176
    $ \sigma(\lambda_3) $ 0.0050 0.0063 0.0084 0.0037 0.0039 0.0052 0.0022 0.0031 0.0056
    $ C(xy^2) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ -0.0386 $ $ 0 $
    $ \sigma(C(xy^2)) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0057 $ $ 0 $
    $ \lambda_4 $ -0.0511 -0.0548 -0.0502 -0.0994 -0.1089 -0.0977 -0.1472 -0.1549 -0.1487
    $ \sigma(\lambda_4) $ 0.0035 0.0045 0.0062 0.0025 0.0027 0.0039 0.0016 0.0020 0.0042
    $ C(x^3y) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0482 $ $ 0 $ $ 0 $ $ -0.0603 $
    $ \sigma(C(x^3y)) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0064 $ $ 0 $ $ 0 $ $ 0.0068 $
    $ C(x^2y^2) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0368 $ $ 0 $
    $ \sigma(C(x^2y^2)) $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0 $ $ 0.0044 $ $ 0 $
    $ C(xy^3) $ 0 0 0 0 0 0 0 0 0.0420
    $ \sigma(C(xy^3)) $ 0 0 0 0 0 0 0 0 0.0071
    $ C(x^3y^2) $ 0 0 0 0 0.0501 0 0 0.0637 0
    $ \sigma(C(x^3y^2)) $ 0 0 0 0 0.0073 0 0 0.0065 0
    True systems
    $ \lambda_1 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15
    $ \lambda_2 $ 1.5 2.0 2.5 1.5 2.0 2.5 1.5 2.0 2.5
    $ \lambda_3 $ -1.5 -2.0 -2.5 -1.5 -2.0 -2.5 -1.5 -2.0 -2.5
    $ \lambda_4 $ -0.05 -0.05 -0.05 -0.1 -0.1 -0.1 -0.15 -0.15 -0.15