SINGULAR ARMA SYSTEMS: A STRUCTURE THEORY

Abstract. Singular vector ARMA systems are vector ARMA (VARMA) systems with singular innovation variance or, equivalently, with singular spectral density of the corresponding VARMA process. Such systems occur in linear dynamic factor models, e.g. if the dimension of the static factors is strictly larger than the dimension of the dynamic factors, or in linear dynamic stochastic equilibrium models, if the number of outputs is strictly larger than the number of shocks. We describe the relation between factor models and singular ARMA systems and a realization procedure for singular ARMA systems. Finally we discuss kernel systems.


1. Introduction. Singular ARMA systems are multivariate ARMA systems with singular innovation variance: Let the ARMA system be of the form

  a(z) y_t = b(z) ε_t                                              (1)

where z is used both for a complex variable and for the backward shift on the integers Z, and where (ε_t) is white noise, i.e. E ε_t = 0 and E ε_s ε_t' = δ_st Σ. We will assume throughout stability, i.e.

  det a(z) ≠ 0, |z| ≤ 1.                                           (2)
For singular ARMA systems Σ is singular, of rank q < n, say. Write Σ as

  Σ = b̄ b̄',  b̄ ∈ R^(n×q),                                         (3)

where b̄ is unique up to orthogonal postmultiplication. Then (1) can be written as

  a(z) y_t = b(z) b̄ ν_t = b̃(z) ν_t                                (4)

where (ν_t) is white noise with E ν_t ν_t' = I_q.
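The factorization of the singular innovation variance Σ underlying (4) can be sketched numerically; the matrices below are hypothetical examples, and the eigendecomposition is one of several possible ways to obtain such a factor.

```python
import numpy as np

# Hypothetical singular innovation covariance: n = 3, rank q = 2.
# Sigma = M M' with a 3x2 matrix M is singular by construction.
M = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [1.5, 1.0]])
Sigma = M @ M.T

# Factor Sigma = b_bar b_bar' via the eigendecomposition,
# keeping only the eigenvectors with nonzero eigenvalues.
eigvals, eigvecs = np.linalg.eigh(Sigma)
keep = eigvals > 1e-10
q = keep.sum()                                     # numerical rank of Sigma
b_bar = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # n x q factor

assert q == 2
assert np.allclose(b_bar @ b_bar.T, Sigma)
```

Note that `b_bar` differs from `M` in general, reflecting the uniqueness of the factor only up to orthogonal postmultiplication.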

MANFRED DEISTLER
Under the stability condition (2), there exists a steady state solution of the form

  y_t = ∑_{j=0}^∞ k_j ν_{t−j},  k_j ∈ R^(n×q).                      (5)

Then the singular ARMA process (y_t) is the only linearly regular stationary solution of (1). The solutions of (1) may have a linearly singular component, which however we do not consider here. The miniphase condition for b̃(z) is

  b̃(z) has full rank q, |z| ≤ 1.                                   (6)

Let

  k(z) = a(z)^{−1} b̃(z) = ∑_{j=0}^∞ k_j z^j                         (7)

denote the transfer function from (ν_t) to (y_t).
The spectral density (written as a function of z ∈ C) of the singular ARMA process (y_t) then is of the form

  f(z) = (2π)^{−1} k(z) k*(z)

where k*(z) = k(z^{−1})'. Clearly f(z) has normal rank equal to q (i.e. it has rank q, except for a finite number of points). Let f(λ) = f(e^{−iλ}), λ ∈ [−π, π], and γ(j) = E y_{t+j} y_t'. Then, as is well known,

  γ(j) = ∫_{−π}^{π} e^{iλj} f(λ) dλ

and

  f(λ) = (2π)^{−1} ∑_{j=−∞}^{∞} γ(j) e^{−iλj}.

As is also well known, every rational n × q (q ≤ n) matrix k(z) of rank q has a Smith-McMillan form (see e.g. [7] Hannan and Deistler (2012))

  k(z) = u(z) d(z) v(z),

where d(z) is the n × q matrix whose top q × q block is diag(ε_1/ψ_1, ..., ε_q/ψ_q) and whose remaining rows are zero, u(z) and v(z) are n×n and q×q, respectively, unimodular polynomial matrices (i.e. det u(z) = const ≠ 0, det v(z) = const ≠ 0), and ε_i, ψ_i for i = 1, ..., q are relatively prime monic polynomials, where ε_i divides ε_{i+1} and ψ_{i+1} divides ψ_i, i = 1, ..., q − 1; d(z) is unique for given k(z), whereas u(z) and v(z) are not unique in general. The zeros of the ε_i are the zeros of k and the zeros of the ψ_i are the poles of k. Now assume that k(z) satisfies the stability assumption, i.e. that k(z) has no poles for |z| ≤ 1, and the miniphase assumption that k(z) has full rank q for |z| ≤ 1. In addition we assume throughout that (a(z), b̃(z)) is left coprime, which is equivalent to rk(a(z), b̃(z)) = n ∀z ∈ C, where rk denotes the rank, see e.g. [7] Hannan and Deistler (2012). As is known (see e.g. [ ] (2018)), for a given f(z) a stable and miniphase transfer function is unique up to postmultiplication by constant orthogonal matrices. The transfer function is unique for given f(z) if we e.g. postulate that the submatrix of k_0 consisting of its first q rows is nonsingular, lower triangular and has ones on its main diagonal.
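As a short numerical check (with arbitrarily chosen MA(1) coefficients, n = 3 outputs and q = 2 shocks, all hypothetical), the n × n spectral density f(λ) = (2π)^{−1} k(e^{−iλ}) k(e^{−iλ})* indeed has rank q at (almost) every frequency:

```python
import numpy as np

# Singular MA(1) process y_t = k_0 nu_t + k_1 nu_{t-1}:
# n = 3 outputs driven by q = 2 shocks (coefficients chosen arbitrarily).
rng = np.random.default_rng(0)
k0 = rng.standard_normal((3, 2))
k1 = rng.standard_normal((3, 2))

def f(lam):
    """Spectral density f(lambda) = (1/(2 pi)) k(e^{-i lam}) k(e^{-i lam})^*."""
    z = np.exp(-1j * lam)
    k = k0 + k1 * z
    return (k @ k.conj().T) / (2 * np.pi)

# The 3 x 3 spectral density has rank q = 2 at every frequency checked.
for lam in [0.0, 0.7, 1.3, 2.9]:
    assert np.linalg.matrix_rank(f(lam), tol=1e-10) == 2
```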
Due to the stability and the miniphase assumption, (5) is already the Wold decomposition, i.e. (ε_t) and (ν_t) are innovations for (y_t) and, in particular, (ν_t) is unique for given (y_t) up to orthogonal premultiplication O ν_t. As n > q holds, the left inverse of k(z) is not unique. However, a special causal left inverse k^−(z) of k(z) can be given (compare (11)). As is easily seen, k^−(z) has no poles and no zeros for |z| ≤ 1 and we have

  k^−(z) k(z) = I_q.

The remaining parts of the paper are organized as follows: In Section 2 we deal with the relation between singular ARMA systems and factor models, in order to take into account, for instance, observational noise. Section 3 presents a state space realization algorithm for singular ARMA systems; Section 4 deals with the corresponding ARMA realizations. The singularity of the spectral density of singular ARMA processes implies an exact (i.e. noiseless) linear dynamic relation between the components of the process. This is discussed in Section 5.
2. Factor Models and Singular ARMA Systems. As is easy to see, due to e.g. observational noise, singular ARMA processes, i.e. ARMA processes with a singular spectral density, are unlikely to occur "in practice". For this reason a noise model has to be added. This leads to linear dynamic factor models or linear dynamic errors-in-variables models (see e.g. [11] Scherrer and Deistler (1998) or [5] Forni, Hallin, Lippi and Reichlin (2000)), where the observations have to be denoised in order to obtain a latent process with a singular spectral density. Singular ARMA processes also occur in DSGE (dynamic stochastic general equilibrium) models, see e.g. [9] Komunjer and Ng (2011), when the number of shocks is smaller than n.
A convenient representation of the latent process then is of the form

  y_t = L f_t,  L ∈ R^(n×r),  rk L = r,                            (16)

where the f_t are the so-called minimal static factors, which are obtained as follows: Write

  γ(0) = O_1 Λ_1 O_1',                                             (17)

where, if we assume that in addition γ(0) is of rank r < n, O_1 ∈ R^(n×r) consists of the eigenvectors corresponding to the nonzero eigenvalues of γ(0) and Λ_1 is the diagonal matrix of these eigenvalues. Then a minimal static factor is given by

  f_t = O_1' y_t.                                                  (18)

As is easy to see, a minimal static factor f_t is unique up to premultiplication by a constant nonsingular matrix T and the factor loading matrix L is unique up to postmultiplication by T^{−1}.
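The extraction of a minimal static factor from γ(0) can be sketched numerically via an eigendecomposition; the loading matrix below is a hypothetical example and the normalization of the factors is a choice, in line with the uniqueness only up to a nonsingular transformation T.

```python
import numpy as np

# Extracting a minimal static factor from gamma(0) = E y_t y_t' when
# rk gamma(0) = r < n.  Hypothetical example: n = 4, r = 2.
rng = np.random.default_rng(1)
L_true = rng.standard_normal((4, 2))          # hypothetical loading matrix
gamma0 = L_true @ L_true.T                    # gamma(0), assuming E f_t f_t' = I

eigvals, eigvecs = np.linalg.eigh(gamma0)
keep = eigvals > 1e-10
r = keep.sum()                                # rank of gamma(0)
assert r == 2

# One valid choice: L = eigenvectors of the nonzero eigenvalues (n x r);
# then f_t = L' y_t is a minimal static factor and y_t = L f_t holds a.s.
L = eigvecs[:, keep]
assert np.allclose(L @ L.T @ gamma0, gamma0)  # L L' projects onto range(gamma0)
```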
The advantage of extracting a static factor process (f_t) from (y_t) is that, in modeling the dynamics of (y_t), the dimension of the parameter space can be reduced. As can be shown, even for q < n we may have r = n, see [6] Forni et al. (2015). According to this reference, the latent process in a linear dynamic factor model is only required to have a Wold decomposition (5) with q < n, and we allow for either r = n or q = r. The ν_t's are the (minimal) dynamic factors.
Note that, because (16) and (18) are static linear transformations, the innovation spaces for (y_t) and (f_t) are the same. Clearly (f_t) is a process with rational spectral density and therefore has an ARMA representation

  c(z) f_t = d(z) ε_t = d̃(z) ν_t,

where c(z), d(z) are r × r polynomial matrices satisfying the stability and miniphase conditions, respectively, d̃(z) = d(z) b̄, and (c(z), d̃(z)) is assumed to be relatively left prime. The corresponding transfer function

  w(z) = c(z)^{−1} d̃(z)

then is stable and miniphase. A representation of k(z), n > q, of the form

  k(z) = L w(z),  L ∈ R^(n×r),  q ≤ r ≤ n,

will be called an FM-type representation.
3. State Space Realizations for FM-type Transfer-Function Representations. Here we are concerned with the construction of L and w(z) from k(z) and with the corresponding realizations of w(z). For the realization problem we refer to [8] Ho and Kalman (1966) and [2] Akaike (1974). We commence from the (block) Hankel matrix of the transfer function k(z),

  H_k = (k_{i+j−1})_{i,j ∈ N}.

Then, from the Wold representation (5), we have

  (ŷ_{t+1|t}', ŷ_{t+2|t}', ...)' = H_k (ν_t', ν_{t−1}', ...)',

where ŷ_{t+j|t} is the (best linear) least squares predictor of y_{t+j} from the infinite past y_{t−s}, s ≥ 0.
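The realization step can be sketched as follows. This is a Ho-Kalman-type construction that uses an SVD of a finite Hankel matrix in place of an explicit basis-row selection, and the system matrices below are a hypothetical example; it recovers a minimal (A, B, C) with k_j = C A^{j−1} B up to a change of basis of the state.

```python
import numpy as np

# Ho-Kalman-type realization sketch: recover a minimal (A, B, C) with
# k_j = C A^{j-1} B from a finite block Hankel matrix of the power-series
# coefficients of k(z).  The "true" system below is a hypothetical example.
A0 = np.array([[0.5, 0.2], [0.0, -0.4]])
B0 = np.array([[1.0], [0.5]])                          # q = 1 shock
C0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])    # n = 3 outputs

k = [C0 @ np.linalg.matrix_power(A0, j - 1) @ B0 for j in range(1, 9)]

# Finite block Hankel with block entry H[i, j] = k_{i+j+1}, 4 x 4 blocks,
# and the column-shifted Hankel used to recover A.
H = np.block([[k[i + j] for j in range(4)] for i in range(4)])
H_up = np.block([[k[i + j + 1] for j in range(4)] for i in range(4)])

U, s, Vt = np.linalg.svd(H)
m = (s > 1e-10).sum()                  # state dimension (here 2)
sq = np.sqrt(s[:m])
O = U[:, :m] * sq                      # observability factor,  H = O R
R = (Vt[:m, :].T * sq).T               # reachability factor

A = np.linalg.pinv(O) @ H_up @ np.linalg.pinv(R)
B = R[:, :1]                           # first block column (q = 1)
C = O[:3, :]                           # first block row (n = 3)

# The recovered system reproduces the impulse responses exactly.
for j in range(1, 9):
    assert np.allclose(C @ np.linalg.matrix_power(A, j - 1) @ B, k[j - 1])
```

The recovered (A, B, C) agrees with (A0, B0, C0) only up to a similarity transformation of the state, consistent with minimal realizations being unique up to a change of basis.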
We now, e.g., select the first basis rows of H_k, i.e. the first rows which span the row space of H_k. Let S ∈ R^(s×∞) be the corresponding selector matrix, where s denotes the rank of H_k. Then a minimal state space system

  x_{t+1} = A x_t + B ν_t,
  y_t = C x_t + k_0 ν_t,

is defined by (see e.g. [2] Akaike (1974))

  A S H_k = S H_k^(1),  B = S (k_1', k_2', ...)',  (k_1, k_2, ...) = C S H_k,

where H_k^(1) = (k_{i+j})_{i,j ∈ N} denotes the shifted Hankel matrix. A minimal state is given by

  x_t = S (ŷ_{t|t−1}', ŷ_{t+1|t−1}', ...)'.

These equations define a canonical form (see e.g. [7] Hannan and Deistler (2012)); alternative choices of S lead to alternative canonical forms. We have

  k(z) = k_0 + z C (I − Az)^{−1} B.

We now decompose k(z) as follows. In a first step we have:

Theorem 3.1. Let (y_t) be a singular ARMA process with Wold representation (5). Then the following statements are equivalent:
(i) r < n, i.e. rk γ(0) = r < n;
(ii) (k_0, k_1, ...) has rank r;
(iii) the left kernel of f(λ) contains exactly (n − r) > 0 linearly independent constant (i.e. not dependent on λ) vectors.
Proof. From

  γ(0) = ∫_{−π}^{π} f(λ) dλ                                        (31)

we argue as follows: If the left kernel of f(λ), for all λ ∈ [−π, π], contains (n − r) linearly independent constant vectors, then by (31) these vectors are also contained in the left kernel of γ(0). Thus rk γ(0) ≤ r. Now assume rk γ(0) < r; then there exists a vector g ∈ R^n, say, with g' γ(0) g = 0, where g is not an element of the left kernel of f(λ) for all λ ∈ [−π, π], and thus

  g' ( ∫_{−π}^{π} f(λ) dλ ) g > 0,

which gives a contradiction.

Now let S_1 ∈ R^(r×n) denote the matrix selecting the first basis rows of (k_0, k_1, ...), i.e. the first rows spanning the row space of (k_0, k_1, ...). Then we may define a matrix L by

  (k_0, k_1, ...) = L S_1 (k_0, k_1, ...)                          (32)

and

  f_t = S_1 y_t                                                    (33)

is a minimal static factor. For simplicity of presentation we will assume that f_t consists of the first r elements of y_t, so that S_1 = (I_r | 0). This can always be achieved by reordering the entries in y_t. We then have

  L = (I_r', L_2')'

for a suitably chosen L_2.
Then

  w(z) = S_1 k(z)                                                  (35)

is the transfer function from (ν_t) to (f_t).
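The construction of L from the basis rows of (k_0, k_1, ...), with S_1 selecting those rows, can be sketched numerically; the coefficients below are a hypothetical example with n = 3, r = 2, q = 1, in which the third row of every k_j is the sum of the first two.

```python
import numpy as np

# Hypothetical coefficient sequence (k_0, k_1, k_2) with n = 3, q = 1:
# row 3 of every k_j equals row 1 + row 2, so the row rank is r = 2.
k = [np.array([[1.0], [0.5], [1.5]]),
     np.array([[0.3], [-0.2], [0.1]]),
     np.array([[0.1], [0.4], [0.5]])]
K = np.hstack(k)                   # (k_0, k_1, k_2), an n x 3q matrix
assert np.linalg.matrix_rank(K) == 2

S1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])   # selects the first r = 2 (basis) rows

# L solves (k_0, k_1, ...) = L S_1 (k_0, k_1, ...); least squares via the
# pseudoinverse gives the exact solution here.
L = K @ np.linalg.pinv(S1 @ K)
assert np.allclose(L @ S1 @ K, K)
assert np.allclose(L[:2], np.eye(2))   # L = (I_r', L_2')' with this ordering
```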
Note that, from (35), the Hankel matrix H_w of w(z) consists of the rows of H_k selected by S_1. A minimal state space system (A_w, B_w, C_w, w_0) for (f_t) then is given by applying the above construction of (A, B, C) to H_w, and the state is given by the corresponding selection of basis rows of the predictor space.

Let us summarize: Consider an n-dimensional singular ARMA process (y_t) with spectral density f of normal rank q < n. Then:
1. Under (2) and (6) the n × q transfer function k(z) in (7) is unique up to orthogonal postmultiplication.
2. For given k(z) an FM-type representation k(z) = L w(z), L ∈ R^(n×r), is obtained from (32) and (35). A minimal static factor is given by (33) and the dynamics of the static factor are described by (34).

4. VARMA and VAR Representations. As described e.g. in [7] Hannan and Deistler (2012), Chapter 2, an echelon form VARMA realization (a_w, b̃_w) for w(z) satisfying (2), (6) and (13) can be obtained commencing from the Hankel matrix H_w. Let

  w(z) = u_w(z) d_w(z) v_w(z)

denote the Smith-McMillan form of w(z) and let L_⊥ ∈ R^(n×(n−r)) be the completion of L to a nonsingular matrix (L, L_⊥) ∈ R^(n×n). Then

  k(z) = (L, L_⊥) diag(u_w(z), I_{n−r}) (d_w(z)', 0)' v_w(z).      (43)

As (L, L_⊥) diag(u_w(z), I_{n−r}) is unimodular, (43) is a Smith-McMillan form of k(z). This implies that k(z) and w(z) have the same pole-zero structure in terms of ε_1/ψ_1, ..., ε_q/ψ_q. The tall matrix b̃_w(z), for prescribed order p, is generically of full rank q for all z ∈ C, i.e. generically zeroless. The intuition behind this is that, generically, there are no zeros common to all q × q minors of b̃_w(z). For an exact proof see [1] Anderson and Deistler (2008). Now, as is easily seen from the Smith form (i.e. the Smith-McMillan form for the special case of a polynomial matrix) of b̃_w(z), if b̃_w(z) is zeroless, there exists a polynomial matrix b_c(z) such that (b̃_w(z), b_c(z)) is unimodular. Therefore, if b̃_w^−(z) denotes the first block row of (b̃_w(z), b_c(z))^{−1}, which is polynomial and satisfies b̃_w^−(z) b̃_w(z) = I_q, premultiplication of a_w(z) f_t = b̃_w(z) ν_t by b̃_w^−(z) leads to the singular VAR model

  b̃_w^−(z) a_w(z) f_t = ν_t.

5. The Kernel System. Let g(z) denote a rational (n − q) × n matrix of rank (n − q), whose rows form a basis for the left kernel of k(z). Thus, in particular,

  g(z) k(z) = 0                                                    (44)

and g(z) is unique up to premultiplication by nonsingular rational (n − q) × (n − q) matrices.
Therefore, by extracting the least common denominator polynomial of the elements of g(z), the basis can always be chosen as polynomial, and, by extracting all non-unimodular common left factors of g(z), as left coprime. Clearly (44) implies

  g(z) y_t = 0,                                                    (45)

which gives an exact linear dynamic relation ("comovement") between the component processes in (y_t). Adding noise to (45) leads to an errors-in-variables representation or to a factor model. As is easy to see, the left kernels of k(z) and of the spectral density f(z) are the same; thus, by Theorem 3.1, the first (n − r) rows of g(z) may be chosen to be constant and we may write

  g(z) = (G', g̃(z)')',

where G ∈ R^((n−r)×n) is a basis for the left kernel of the factor loading matrix L and g̃(z) is polynomial and relatively left coprime. Now write

  g(z) = (g_1(z) | g_2(z)) = ( G_1      G_2    )
                             ( g̃_1(z)  g̃_2(z) )

where G_1 is (n − r) × (n − q) and g̃_1(z) is (r − q) × (n − q), so that g_1(z) is (n − q) × (n − q). We assume that g_1(z) is nonsingular, which can always be achieved by reordering the components of (y_t) as

  y_t = ( y_{1,t} )  } (n − q)
        ( y_{2,t} )  } q

Then (45) gives

  y_{1,t} = −g_1(z)^{−1} g_2(z) y_{2,t}.                           (48)
Using the Smith form of g_1(z) we may extract those zeros of g_1(z) with |z| ≤ 1, such that (48) is a causal linear dynamic transformation. Thus we have:

Theorem 5.1. Let (y_t) be an n-dimensional ARMA process with spectral density of rank q < n. Then:
1. There exist (n − q) exact linear, in general dynamic, relations (48) between the component processes of (y_t).
2. There exist exactly (n − r) linear static relations between the component processes of (y_t).
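A minimal numerical illustration of the exact dynamic relation (48), for a hypothetical bivariate MA(1) process with one shock (n = 2, q = 1), where k(z) = (1, 1 + 0.5z)' and one may take g(z) = (1 + 0.5z, −1):

```python
import numpy as np

# Hypothetical n = 2, q = 1 MA(1) process: y_{1,t} = nu_t,
# y_{2,t} = nu_t + 0.5 nu_{t-1}, i.e. y_t = (1, 1 + 0.5 z)' nu_t.
rng = np.random.default_rng(2)
nu = rng.standard_normal(500)

y1 = nu.copy()
y2 = nu.copy()
y2[1:] += 0.5 * nu[:-1]

# g(z) = (1 + 0.5 z, -1) spans the left kernel of k(z), so
# g(z) y_t = y_{1,t} + 0.5 y_{1,t-1} - y_{2,t} = 0 for all t.
resid = y1[1:] + 0.5 * y1[:-1] - y2[1:]
assert np.allclose(resid, 0.0)
```

Here g_1(z) = 1 + 0.5z has its zero at z = −2 outside the unit circle, so the relation y_{2,t} = (1 + 0.5z) y_{1,t} can indeed be inverted causally, as in the statement of Theorem 5.1.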
6. Conclusion. Singular VARMA processes occur e.g. in linear dynamic factor models or in DSGE models (the latter are used in macroeconomics) after "denoising" of the observations (i.e. after estimating the latent variables). We have described the structure of singular ARMA systems and their realization by state space and ARMA systems. Finally, we have described the exact linear dynamic relations between the components of singular ARMA processes.