
Explicit bivariate rate functions for large deviations in AR(1) and MA(1) processes with Gaussian innovations

  • Corresponding author: Sílvia R. C. Lopes

The authors would like to thank the Editor, Associate Editor, and the anonymous Referee for the numerous comments and suggestions that improved this work. They would also like to express their sincere thanks to Dr. Bernard Bercu for indicating valuable references from Large Deviations theory. M.J. Karling was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)-Brazil (Grant No. 1736629) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)-Brazil (Grant No. 170168/2018-2). A.O. Lopes' research was partially supported by CNPq-Brazil (Grant No. 304048/2016-0). S.R.C. Lopes' research was partially supported by CNPq-Brazil (Grant No. 303453/2018-4).

Abstract
  • We investigate the large deviations properties of centered stationary AR(1) and MA(1) processes with independent Gaussian innovations, giving the explicit bivariate rate functions for the sequence of two-dimensional random vectors $ ({\boldsymbol{S}}_n)_{n \in \mathbb{N}} = \left(n^{-1}(\sum_{k=1}^n X_k, \sum_{k=1}^n X_k^2)\right)_{n \in \mathbb{N}} $. Via the Contraction Principle, we provide the explicit rate functions for the sample mean and the sample second moment. In the AR(1) case, we also give the explicit rate function for the sequence of two-dimensional random vectors $ ( {\boldsymbol{{\cal{W}}}}_n)_{n \geqslant 2} = \left(n^{-1}(\sum_{k=1}^n X_k^2, \sum_{k=2}^n X_k X_{k-1})\right)_{n \geqslant 2} $; here, however, we obtain an analytic rate function that gives different values for the upper and lower bounds, depending on the evaluated set and on its intersection with the respective set of exposed points. A careful analysis of the properties of a certain family of Toeplitz matrices is necessary. The large deviations properties of three particular sequences of one-dimensional random variables follow once we show how to apply a weaker version of the Contraction Principle in our setting, providing new proofs for two already known results on the explicit deviation functions for the sample second moment and the Yule-Walker estimators. We also exhibit the large deviations properties of the first-order empirical autocovariance, together with its explicit deviation function; this last result is new.
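The vector $ {\boldsymbol{S}}_n $ studied in the abstract is easy to reproduce by simulation. The sketch below (function names `simulate_ar1` and `s_n` are ours, not from the paper) draws a stationary AR(1) path $ X_k = \phi X_{k-1} + \varepsilon_k $ with Gaussian innovations, starting from the stationary law $ N(0, \sigma^2/(1-\phi^2)) $, and computes $ {\boldsymbol{S}}_n = n^{-1}(\sum_k X_k, \sum_k X_k^2) $. By ergodicity this vector concentrates around $ (0, \sigma^2/(1-\phi^2)) $; the paper's rate functions quantify how unlikely deviations from this limit are.

```python
import math
import random

def simulate_ar1(phi: float, sigma: float, n: int, rng: random.Random) -> list:
    """Stationary AR(1): X_k = phi*X_{k-1} + eps_k with eps_k ~ N(0, sigma^2),
    and X_1 drawn from the stationary law N(0, sigma^2 / (1 - phi^2))."""
    x = rng.gauss(0.0, sigma / math.sqrt(1.0 - phi**2))
    path = [x]
    for _ in range(n - 1):
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

def s_n(path: list) -> tuple:
    """S_n = n^{-1} * (sum of X_k, sum of X_k^2), as in the abstract."""
    n = len(path)
    return (sum(path) / n, sum(x * x for x in path) / n)

rng = random.Random(0)
path = simulate_ar1(phi=0.5, sigma=1.0, n=100_000, rng=rng)
mean_part, second_moment_part = s_n(path)
# For phi = 0.5, sigma = 1: S_n concentrates near (0, 1/(1 - 0.25)) = (0, 4/3).
```

Large deviations theory then describes the exponential decay, in $ n $, of the probability that $ {\boldsymbol{S}}_n $ lands far from this limit point.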

    Mathematics Subject Classification: 11E25, 60F10, 60G10, 60G15, 62F12, 62M10.

  • Figure 2.1.  Regions $ {\cal{D}}_1 $ (left-hand panel), $ {\cal{D}}_2 $ (middle panel) and $ {\cal{D}} = {\cal{D}}_1 \cup {\cal{D}}_2 $ (right-hand panel), for $ {\cal{D}}_1 $ and $ {\cal{D}}_2 $ defined in (2.14), in the particular case when $ \phi = 0.5 $. In the third panel, the lines $ \lambda_2 = - \phi \pm(1+\phi^2-2\lambda_1)/2 $, for $ \lambda_1 < (1-\phi^2)/2 $, are plotted in thick green to illustrate that this border belongs to the set $ {\cal{D}} $, whereas the curve $ (\phi + \lambda_2)^2 = \phi^2(1 - 2 \lambda_1) $, for $ \dfrac{1 - \phi^2}{2} \leqslant \lambda_1 \leqslant \dfrac{1}{2} $, is plotted in red to indicate that it is not part of the set $ {\cal{D}} $.
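As a quick numeric sanity check of this caption (a sketch using only the two boundary formulas quoted above, nothing from the paper's definition (2.14) itself), one can verify that the green lines and the red curve meet where the caption hands over from one to the other, at $ \lambda_1 = (1-\phi^2)/2 $, where both give $ \lambda_2 = -\phi \pm \phi^2 $:

```python
import math

phi = 0.5
lam1 = (1.0 - phi**2) / 2.0  # right endpoint of the green lines

# Green lines from the caption: lam2 = -phi +/- (1 + phi^2 - 2*lam1)/2
line_plus = -phi + (1.0 + phi**2 - 2.0 * lam1) / 2.0
line_minus = -phi - (1.0 + phi**2 - 2.0 * lam1) / 2.0

# Red curve from the caption: (phi + lam2)^2 = phi^2 * (1 - 2*lam1),
# i.e. lam2 = -phi +/- phi * sqrt(1 - 2*lam1)
curve_plus = -phi + phi * math.sqrt(1.0 - 2.0 * lam1)
curve_minus = -phi - phi * math.sqrt(1.0 - 2.0 * lam1)

# Both parameterizations agree at lam1 = (1 - phi^2)/2: lam2 = -phi +/- phi^2
assert math.isclose(line_plus, curve_plus)    # -0.25 for phi = 0.5
assert math.isclose(line_minus, curve_minus)  # -0.75 for phi = 0.5
```

So the green border and the red curve join continuously at $ \lambda_1 = (1-\phi^2)/2 $; only their membership in $ {\cal{D}} $ differs.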

    Figure 2.2.  The two left-hand panels show the graph of the function $ J(x,y) $, given in (2.19), from two different points of view, with $ \phi = 0.8 $, $ x \in (0,6] $ and $ y \in [-6, 6] $. The white line represents the curve $ {\cal{C}}_{0.8} $, defined in (2.16), where the domains of the two functions $ J_1(\cdot,\cdot) $ and $ J_2(\cdot,\cdot) $ intersect, changing roles in the definition of $ J(\cdot,\cdot) $. Notice that $ J_1(\cdot,\cdot) $ diverges to $ +\infty $ as $ (x,y) $ approaches the set $ \{(x,y) \in \mathbb{R}^2: x \geqslant 0, \; |y| = x\} $; this behavior is inherited by $ J(\cdot,\cdot) $. The right-hand panel displays the domains $ {\cal{A}}_\phi $ (in blue) and $ {\cal{B}}_\phi $ (in orange).

    Figure 2.3.  In the left-hand panel, a plot of the region $ F = [2,3]\times[-1,1] $ is presented, showing that it is contained in the set of exposed points $ {\cal{F}}_{0.5} $. In the right-hand panel, a graph of the function $ J(x,y) $, for $ (x,y) \in F $.

    Figure 2.4.  In the left-hand panel, a plot of the region $ F = [5,6]\times[-4,-1] $ is presented, showing that it is not entirely contained in the set of exposed points $ {\cal{F}}_{0.5} $. In the right-hand panel, a graph of the function $ J(x,y) $, for $ (x,y) \in F $.

    Figure 3.1.  Graphs of $ J_1(c,y) $ and $ J_2(c,y) $ with $ \phi = 0.8 $ and $ c = 4 $. The green and red dots represent the points $ -\alpha_c $ and $ \alpha_c $, respectively, where $ J_1(c, \cdot) $ and $ J_2(c, \cdot) $ change roles in the law of $ J(\cdot,\cdot) $. The blue dot is the point $ (y_c, J(c,y_c)) $. As the graph shows, given that $ \phi > 0 $, $ J(c,y) $ is equal to $ J_1(c,y) $ if $ y \in (-c,-\alpha_c)\cup(\alpha_c,c) $, and equal to $ J_2(c,y) $ if $ y \in [-\alpha_c,\alpha_c] $. When $ J(c,y) $ changes from $ J_2(c,y) $ to $ J_1(c,y) $ at the red dot $ \alpha_c $, the derivative of $ J_1(c,y) $ being negative for $ y \in (\alpha_c,y_c) $ shows that this function keeps decreasing from left to right until it reaches the global minimum at $ y_c $ (recall from Proposition 2.1 that $ J_1 $ and $ J_2 $ are convex), so that every point to its right and its left is greater than $ J_1(c,y_c) $, including the points in $ [-\alpha_c,\alpha_c] $ where $ J_2(c,y) $ rules. A similar analysis follows by symmetry for the case $ \phi < 0 $.

    Figure 3.2.  Graphs of $ I_1(c) $ for $ \phi \in \{0, \, 0.3, \, 0.6, \, 0.9\} $ and $ c \in (0, 10] $.

    Figure 3.3.  Graphs of $ J_1(x,c) $ and $ J_2(x,c) $, with $ \phi = 0.8 $. Here $ c = 2 $ is fixed. The red dot represents the point $ \beta_c $ where $ J_1(\cdot,c) $ and $ J_2(\cdot,c) $ change roles in the law of $ J(\cdot,\cdot) $. The blue dot is the point $ (x_c, J(x_c,c)) $. As the graph shows, $ J(x,c) $ is equal to $ J_1(x,c) $ when $ x \in (|c|,\beta_c) $, and equal to $ J_2(x,c) $ when $ x \in [\beta_c,+\infty) $. When $ J(x,c) $ changes from $ J_2(x,c) $ to $ J_1(x,c) $ at the red dot $ \beta_c $, the derivative of $ J_1(x,c) $ being positive for $ x \in (x_c,+\infty) $ shows that this function keeps decreasing from right to left until it reaches the global minimum at $ x_c $ (recall from Proposition 2.1 that $ J_1 $ and $ J_2 $ are convex), so that every point to its right and its left is greater than $ J_1(x_c,c) $, including the points in $ [\beta_c,+\infty) $ where $ J_2(x,c) $ rules.

    Figure 3.4.  Graphs of $ I_2(c) $ for $ \phi \in \{-0.99, \, -0.6, \, 0, \, 0.6, \, 0.99\} $ and $ c \in [-4,4] $.

    Figure 3.5.  Graph of $ I_{\phi}(c) $ for $ \phi \in \{-0.5, 0, 0.5\} $ and $ c \in (-1, 1) $.

    Figure 3.6.  Graphs of $ J_1(x,c x) $ and $ J_2(x,c x) $, with $ \phi = 0.8 $. Here $ c = 0.5 $ is fixed. The red dot represents the point $ \delta_c $ where $ J_1(x,c x) $ and $ J_2(x,c x) $ change roles in the law of $ J(\cdot,\cdot) $. The blue dot is the point $ (x_c, J(x_c,c x_c)) $. Notice that $ J(x, c x) $ is equal to $ J_1(x, c x) $ when $ x \in (0,\delta_c) $, and equal to $ J_2(x,c x) $ when $ x \in [\delta_c,+\infty) $. When $ J(x,c x) $ changes from $ J_2(x,c x) $ to $ J_1(x,c x) $ on the red dot $ \delta_c $, the derivative of $ J_1(x,c x) $ being positive for $ x \in (x_c,+\infty) $ shows that this function keeps decreasing from right to left until it reaches the global minimum at $ x_c $ (remember from Proposition 2.1 that $ J_1 $ and $ J_2 $ are convex) so that every point on its right and its left is greater than $ J_1(x_c, c x_c) $, including the points in $ [\delta_c,+\infty) $ where $ J_2(x,c x) $ rules.

    Figure 4.1.  Graph of $ I_{\overline{X}}(c) $ for $ \phi \in \{-0.5, 0, 0.5\} $ and $ c \in [-10, 10] $.

    Figure 4.2.  Graphs of $ K_\theta(x) $ for $ \theta \in \{0.2, \, 0.4, \, 0.6, \, 0.8\} $ and $ x \in (0, 5] $.
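The vector $ {\boldsymbol{{\cal{W}}}}_n = n^{-1}(\sum_{k=1}^n X_k^2, \sum_{k=2}^n X_k X_{k-1}) $ from the abstract can likewise be illustrated by simulation. The sketch below (our own illustrative code, with assumed names such as `simulate_ar1`; it is not the paper's method) computes the two coordinates of $ {\boldsymbol{{\cal{W}}}}_n $ on a simulated stationary AR(1) path and forms their ratio, which is the Yule-Walker estimator of $ \phi $ whose deviation function the paper recovers:

```python
import math
import random

def simulate_ar1(phi, sigma, n, rng):
    # Stationary start: X_1 ~ N(0, sigma^2 / (1 - phi^2))
    x = rng.gauss(0.0, sigma / math.sqrt(1.0 - phi**2))
    path = [x]
    for _ in range(n - 1):
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

rng = random.Random(1)
phi_true = 0.5
path = simulate_ar1(phi=phi_true, sigma=1.0, n=200_000, rng=rng)
n = len(path)

# The two coordinates of W_n: sample second moment and
# first-order empirical autocovariance.
second_moment = sum(x * x for x in path) / n
autocov = sum(path[k] * path[k - 1] for k in range(1, n)) / n

# Yule-Walker estimator of phi: ratio of the coordinates of W_n.
phi_hat = autocov / second_moment
# phi_hat should be close to the true phi = 0.5 for large n.
```

The large deviations results of the paper describe the exponential rate at which $ \phi_{\mathrm{YW}} $ (and the autocovariance itself) deviates from its limit.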

  • [1]

    Avram, F., On bilinear forms in Gaussian random variables and Toeplitz matrices, Probability Theory and Related Fields, 1988, 79(1): 37−45.

    doi: 10.1007/BF00319101.

    [2] Bartle, R. G., The elements of integration and Lebesgue measure, John Wiley & Sons, New York, 1995.
    [3]

    Bercu, B., On large deviations in the Gaussian autoregressive process: Stable, unstable and explosive case, Bernoulli, 2001, 7(2): 299−316.

    doi: 10.2307/3318740.

    [4]

    Bercu, B., Gamboa, F. and Lavielle, M., Sharp large deviations for Gaussian quadratic forms with applications, ESAIM: Probability and Statistics, 2000, 4(1): 1−24.

    [5]

    Bercu, B., Gamboa, F. and Rouault, A., Large deviations for quadratic forms of stationary Gaussian processes, Stochastic Processes and their Applications, 1997, 71(1): 75−90.

    doi: 10.1016/S0304-4149(97)00071-9.

    [6]

    Bercu, B. and Richou, A., Large deviations for the Ornstein-Uhlenbeck with shift, Advances in Applied Probability, 2015, 47(3): 880−901.

    doi: 10.1239/aap/1444308886.

    [7]

    Bercu, B. and Richou, A., Large deviations for the Ornstein-Uhlenbeck process without tears, Statistics & Probability Letters, 2017, 123: 45−55.

    [8] Bickel, P. J. and Doksum, K. A., Mathematical Statistics: Basic Ideas and Selected Topics, vol. 1, 2nd ed., Prentice-Hall, Upper Saddle River, 2001.
    [9] Brockwell, P. J. and Davis, R. A., Time Series: Theory and Methods, 2nd ed., Springer, New York, 1991.
    [10]

    Bryc, W. and Dembo, A., Large deviations for quadratic functionals of Gaussian processes, Journal of Theoretical Probability, 1997, 10(2): 307−332.

    doi: 10.1023/A:1022656331883.

    [11]

    Bryc, W. and Smolenski, W., On the large deviation principle for a quadratic functional of the autoregressive process, Statistics & Probability Letters, 1993, 17(4): 281−285.

    [12] Bucklew, J. A., Large Deviation Techniques in Decision, Simulation, and Estimation, John Wiley & Sons, New York, 1990.
    [13]

    Burton, R. M. and Dehling, H., Large deviations for some weakly dependent random processes, Statistics & Probability Letters, 1990, 9(5): 397−401.

    [14]

    Carmona, S. C., Landim, C., Lopes, A. O. and Lopes, S. R. C., A level 1 large-deviation principle for the autocovariances of uniquely ergodic transformations with additive noise, Journal of Statistical Physics, 1998, 91: 395−421.

    doi: 10.1023/A:1023008608738.

    [15]

    Carmona, S. C. and Lopes, A. O., Large deviations for expanding transformations with additive white noise, Journal of Statistical Physics, 2000, 98: 1311−1333.

    doi: 10.1023/A:1018675914395.

    [16] Dembo, A. and Zeitouni, O., Large Deviations Techniques and Applications, 2nd ed., Springer-Verlag, New York, 2010.
    [17]

    Djellout, H. and Guillin, A., Large and moderate deviations for moving average processes, Annales de la Faculté des Sciences de Toulouse, 2001, X(1): 23−31.

    [18]

    Donsker, M. D. and Varadhan, S. R. S., Large deviations for stationary Gaussian processes, Communications in Mathematical Physics, 1985, 97: 187−210.

    doi: 10.1007/BF01206186.

    [19] Ellis, R. S., Entropy, Large Deviations, and Statistical Mechanics, 2nd ed., Springer-Verlag, New York, 1985.
    [20]

    Fayolle, G. and De La Fortelle, A., Entropy and large deviations for discrete-time Markov chains, Problems of Information Transmission, 2002, 38(4): 354−367.

    doi: 10.1023/A:1022006130735.

    [21]

    Ferreira, H. H., Lopes, A. O. and Lopes, S. R. C., Decision theory and large deviations for dynamical hypotheses tests: The Neyman-Pearson Lemma, Min-max and Bayesian tests, Journal of Dynamics and Games, 2022, 9(2): 123−150.

    doi: 10.3934/jdg.2021031.

    [22] Gradshteyn, I. S. and Ryzhik, I. M., Table of Integrals, Series, and Products, 7th ed., Academic Press, San Diego, 2007.
    [23]

    Gray, R. M., Toeplitz and circulant matrices: A review, Foundations and Trends in Communications and Information Theory, 2006, 2(3): 155−239.

    [24] Grenander, U. and Szegö, G., Toeplitz Forms and their Applications, 2nd ed., Cambridge University Press, Cambridge, 1958.
    [25]

    Heyde, C. C., A contribution to the theory of large deviations for sums of independent random variables, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 1967, 7(5): 303−308.

    [26] Horn, R. A. and Johnson, C. R., Matrix Analysis, 2nd ed., Cambridge University Press, New York, 2013.
    [27] Jensen, J. L., Saddlepoint Approximations, Oxford University Press, New York, 1995.
    [28]

    Karling, M. J., Lopes, A. O. and Lopes, S. R. C., Pentadiagonal matrices and an application to the centered MA(1) stationary Gaussian process, International Journal of Applied Mathematics and Statistics, 2021, 61(1): 1−22.

    [29]

    Lewis, J. T. and Pfister, C.-E., Thermodynamic probability theory: Some aspects of large deviations, Russian Mathematical Surveys, 1995, 50(2): 279−317.

    doi: 10.1070/RM1995v050n02ABEH002056.

    [30]

    Macci, M. and Trapani, S., Large deviations for posterior distributions on the parameter of a multivariate AR( p ) process, Annals of the Institute of Statistical Mathematics, 2013, 65: 703−719.

    doi: 10.1007/s10463-012-0389-2.

    [31]

    Mann, H. B. and Wald, A., On the statistical treatment of linear stochastic difference equations, Econometrica, 1943, 11(3): 173−200.

    [32]

    Mas, A. and Menneteau, L., Large and moderate deviations for infinite-dimensional autoregressive processes, Journal of Multivariate Analysis, 2003, 87(2): 241−260.

    doi: 10.1016/S0047-259X(03)00053-8.

    [33]

    McLeod, A. I. and Jiménez, C., Nonnegative definiteness of the sample autocovariance function, The American Statistician, 1984, 38(4): 297−298.

    [34]

    Miao, Y., Large deviation principles for moving average processes of real stationary sequences, Acta Applicandae Mathematicae, 2009, 106: 177−184.

    doi: 10.1007/s10440-008-9288-1.

    [35] Nikolski, N., Toeplitz Matrices and Operators, Cambridge University Press, Cambridge, 2020.
    [36] Robertson, S. and Almost, C., Large Deviation Principles, Available at https://www.andrew.cmu.edu/user/calmost/pdfs/21-882-ldp_lec.pdf, 2010.
    [37] Rockafellar, R. T., Convex Analysis, Princeton University Press, New Jersey, 2016.
    [38]

    Rozovskii, L. V., Probabilities of large deviations of sums of independent random variables with common distribution function in the domain of attraction of the normal law, Theory of Probability & Its Applications, 1989, 34(4): 625−644.

    [39]

    Rozovskii, L. V., Large deviations of sums of independent random variables from the domain of attraction of a stable law, Journal of Mathematical Sciences, 1999, 93(3): 421−435.

    doi: 10.1007/BF02364828.

    [40] Shumway, R. H. and Stoffer, D. S., Time Series Analysis and its Applications: With R Examples, 4th ed., Springer, New York, 2016.
    [41]

    Tyrtyshnikov, E. E., Influence of matrix operations on the distribution of eigenvalues and singular values of Toeplitz matrices, Linear Algebra and its Applications, 1994, 207: 225−249.

    doi: 10.1016/0024-3795(94)90012-4.

    [42] Wu, L., On large deviations for moving average processes, In: Lai, T. L., Yang, H. and Yung, S.-P.(eds.), Probability, Finance and Insurance: Proceedings of a Workshop, the University of Hong Kong, World Scientific, 2004, 15–49.
    [43]

    Zaigraev, A., Multivariate large deviations with stable limit laws, Probability and Mathematical Statistics, 1999, 19(2): 323−335.

    [44]

    Zani, M., Sample path large deviations for squares of stationary Gaussian processes, Theory of Probability & its Applications, 2013, 57(2): 347−357.
