Nonlinear dynamical system modeling via recurrent neural networks and a weighted state space search algorithm

Abstract
  • Given the task of tracking a trajectory, a recurrent neural network can be regarded as a black-box nonlinear regression model for tracking unknown dynamical systems. An error function measures the difference between the network outputs and the desired trajectory, which formulates a nonlinear least squares problem with dynamical constraints. Because of these constraints, classical gradient-type methods are cumbersome and time consuming, as they require computing partial derivatives along the trajectory. We develop an alternative learning algorithm, the weighted state space search algorithm, which searches a neighborhood of the target trajectory in the state space rather than in the parameter space. Since no partial derivatives are computed, the algorithm is simple and fast. We demonstrate the approach by modeling short-term foreign exchange rates. The empirical results show that the weighted state space search method is promising and effective for solving least squares problems with dynamical constraints. A comparison of the numerical costs of the gradient method and the proposed method is also provided.
    Mathematics Subject Classification: 68T05, 65K99.

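As a sketch of the problem described in the abstract (the notation below is chosen for illustration and is not taken from the paper): let $x(t)$ be the network state, $u(t)$ the external input, $y(t)$ the network output, $d(t)$ the desired trajectory, and $w$ the network weights. The trajectory-tracking task is then the nonlinear least squares problem with dynamical constraints
\begin{equation}
\min_{w}\; E(w)=\sum_{t=1}^{T}\bigl\|y(t)-d(t)\bigr\|^{2}
\quad\text{subject to}\quad
x(t+1)=f\bigl(x(t),u(t);w\bigr), \qquad y(t)=g\bigl(x(t);w\bigr),
\end{equation}
where $f$ and $g$ denote the state-transition and output maps of the recurrent network. Gradient-type methods must propagate $\partial y(t)/\partial w$ through these constraints along the whole trajectory, which is precisely the computation the state space search approach avoids.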
  • [1]

    A. F. Atiya and A. G. Parlos, New results on recurrent network training: Unifying the algorithms and accelerating convergence, IEEE Transcations on Neural Networks, 11 (2000), 697-709.doi: 10.1109/72.846741.

    [2]

    Y. Fang and T. W. S. Chow, Non-linear dynamical systems control using a new RNN temporal learning strategy, IEEE Trans on Circuit and Systems, Part II, 52 (2005), 719-723.

    [3]

    R. A. Conn, K. Scheinberg and N. L. Vicente, "Introduction to Derivative-free Optimization," SIAM, 2009.doi: 10.1137/1.9780898718768.

    [4]

    J. F. G. Freitas, M. Niranjan, A. H. Gee and A. Doucet, Sequential Monte Carlo methods to train neural network models, Neural Computation, 12 (2000), 955-993.doi: 10.1162/089976600300015664.

    [5]

    L. K. Li, Learning sunspot series dynamics by recurrent neural networks, in "Advances in Data Mining and Modeling" (eds. W. K. Ching and K. P. Ng), World Science, (2003), 107-115.doi: 10.1142/9789812704955_0009.

    [6]

    L. K. Li, W. K. Pang, W. T. Yu and M. D. Trout, Forecasting short-term exchange Rates: a recurrent neural network approach, in "Neural Networks in Business Forecasting" (eds. G. P. Zhang), Idea Group Publishing, (2004), 195-212.doi: 10.4018/9781591401766.ch010.

    [7]

    L. K. Li and S. Shao, Dynamic properties of recurrent neural networks and its approximations, International Journal of Pure and Applied Mathematics, 39 (2007), 545-562.

    [8]

    L. K. Li and S. Shao, A neural network approach for global optimization with applications, Neural Network World, 18 (2008), 365-379.

    [9]

    L. K. Li, S. Shao and T. Zheleva, A state space search algorithm and its application to learn the short-term foreign exchange rates, Applied Mathematical Sciences, 2 (2008), 1705-1728.

    [10]

    X. D. Li, J. K. L. Ho and T. W. S. Chow, Approximation of dynamical time-variant systems by continuous-time recurrent neural networks, IEEE Trans on Circuit and Systems, Part II, 52 (2005), 656-660.

    [11]

    X. B. Liang and J. Wang, A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints, IEEE Transactions on Neural Networks, 11 (2000), 1251-1262.doi: 10.1109/72.883412.

    [12]

    Z. Liu and I. Elhanany, A Fast and Scalable Recurrent Neural Network Based on Stochastic Meta Descent, IEEE Transactions on Neural Networks, 19 (2008), 1652-1658.doi: 10.1109/TNN.2008.2000838.

    [13]

    S. Wang, Q. Shao and X. Zhou, Knot-optimizing spline networks (KOSNETS) for nonparametric regression, Journal of Industrial and Management Optimization, 4 (2008), 33?52.

    [14]

    X. Wang and E. K. Blum, Discrete-time versus continuous-time models of neural networks, Journal of Computer and System Sciences, 45 (1992), 1-19.doi: 10.1016/0022-0000(92)90038-K.

    [15]

    R. J. Williams and D. Zipser, A learning algorithm for continually running fully recurrent neural networks, Neural Computation, 1 (1989), 270-280.doi: 10.1162/neco.1989.1.2.270.

    [16]

    L. Xu and W. Liu, A new recurrent neural network adaptive approach for host-gate way rate control protocol within intranets using ATM ABR service, Journal of Industrial and Management Optimization, 1 (2005), 389-404.

    [17]

    J. Yao and C. L. Tan, A case study on using neural networks to perform technical forecasting of forex, Neural Computation, 34 (2000), 79-98.

    [18]

    K. F. C. Yiu, S. Wang, K. L. Teo and A. H. Tsoi, Nonlinear system modeling via knot-optimizing B-spline networks, IEEE Transactions on Neural Networks, 12 (2001), 1013-1022.doi: 10.1109/72.950131.

    [19]

    K. F. C. Yiu, Y. Liu and K. L. Teo, A hybrid descent method for global optimization, Journal of Global Optimization, 28 (2004), 229-238.doi: 10.1023/B:JOGO.0000015313.93974.b0.

  • 加载中
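The short Python sketch below illustrates, in the simplest possible way, training a trajectory-tracking recurrent network without any partial derivatives: a toy discrete-time RNN is simulated forward, the tracking error is evaluated, and the weights are updated by a plain greedy random search. The network form, sizes, and synthetic target are assumptions made for the example; this is not the paper's weighted state space search algorithm, whose weighting scheme and state-space neighborhood search are not reproduced here.

    # A toy, self-contained illustration (an editorial sketch, not the paper's
    # weighted state space search): a small discrete-time RNN is fitted to a
    # target trajectory by a greedy random search over its weights, i.e. with
    # no partial derivatives at all.  Network form, sizes and the synthetic
    # target below are assumptions made for this example only.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(weights, inputs, x0):
        """Run the toy RNN  x(t+1) = tanh(W x(t) + U u(t)),  y(t) = C x(t)."""
        W, U, C = weights
        x = x0.copy()
        outputs = []
        for u in inputs:
            x = np.tanh(W @ x + U @ u)
            outputs.append(C @ x)
        return np.array(outputs)

    def tracking_error(weights, inputs, target, x0):
        """Sum-of-squares error between the network outputs and the target."""
        y = simulate(weights, inputs, x0)
        return float(np.sum((y - target) ** 2))

    # Synthetic data: track a sine wave while driven by a random scalar input.
    T = 100
    inputs = rng.normal(size=(T, 1))
    target = np.sin(np.linspace(0.0, 4.0 * np.pi, T)).reshape(T, 1)

    n_state, n_in, n_out = 8, 1, 1
    weights = [0.1 * rng.normal(size=(n_state, n_state)),   # W
               0.1 * rng.normal(size=(n_state, n_in)),      # U
               0.1 * rng.normal(size=(n_out, n_state))]     # C
    x0 = np.zeros(n_state)

    best = tracking_error(weights, inputs, target, x0)
    step = 0.05
    for _ in range(2000):
        # Propose a random perturbation of every weight matrix (derivative-free).
        trial = [w + step * rng.normal(size=w.shape) for w in weights]
        e = tracking_error(trial, inputs, target, x0)
        if e < best:       # greedy acceptance: keep only improving proposals
            weights, best = trial, e
    print("final tracking error:", best)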