Projection-Based Model Reduction for Time-Varying Descriptor Systems: New Results

We presented a projection-based approach for model reduction of linear time-varying descriptor systems in [9], building on earlier ideas in the work of J. Phillips [12] and others. This contribution continues that work, presenting linear time-varying descriptor systems in more detail together with new results from real fields of application. The proposed procedure is based on a multipoint rational approximation of the monodromy matrix of the corresponding differential-algebraic equation, realized by orthogonal projection onto a rational Krylov subspace. The algorithmic realization of the method employs recycling techniques for shifted Krylov subspaces and their invariance properties. The proposed method works efficiently for macro-models, such as time-varying circuit systems and models arising in network interconnection. Bode plots and step responses are used to illustrate both the stability and the accuracy of the reduced-order models.


Introduction
In recent years, scientists and engineers have devoted considerable attention to the analysis and control of linear periodic time-varying (LPTV) systems, as they describe several man-made and natural phenomena [5,12,22]. A continuous-time LPTV descriptor system in general has the form

E(t) ẋ(t) = A(t)x(t) + B(t)u(t),    y(t) = C(t)x(t) + D(t)u(t),    (1)

where x(t) ∈ R^n is called the descriptor vector, u(t) is the system input, y(t) is the system output, and n is the system order. All the system matrices are time-varying and periodic with period K ≥ 1, and the matrices E(t) and A(t) can be singular at any given time t. The matrix D(t) does not affect the dynamics of the corresponding system; hence it is taken to be zero in most references, and we omit it in the description of LPTV descriptor systems in the subsequent sections.
Formally speaking, a reduced-order system of order r for system (1) (omitting D(t)) is a system of the form

Ê(t) x̂̇(t) = Â(t)x̂(t) + B̂(t)u(t),    ŷ(t) = Ĉ(t)x̂(t).    (2)

The reduced system has potentially much smaller dimension, i.e., x̂ ∈ R^r with r ≪ n, and thus lower computational cost than the original system (1), which makes it suitable for use in higher-level simulation. Apart from having a much smaller state-space dimension, the reduced-order system should preserve essential characteristics of the original system.
Model reduction via a projection formulation has become a popular and well-accepted technique in the fields of signal analysis and electrical interconnect. Today, the best choices for the projection subspaces in model reduction of linear time-invariant (LTI) systems are Krylov subspaces. In this approach, the lower-order model is obtained such that some of the first moments (and/or Markov parameters) [3,1,8,16] of the original and reduced systems are matched, where the moments are the coefficients of the Taylor series expansion of the transfer function at a suitable point. Methods based on multipoint rational approximations [2,1] are also efficient in particular cases. Model reduction for time-varying systems, however, is much more complex in the projection approach. Balanced truncation methods [19] have been applied, but there are still issues with the efficient implementation of these techniques. Therefore, considerable effort has been devoted to developing model reduction techniques based on rational approximations and projection formulations.
In this paper, we discuss efficient implementations of Krylov subspace based projection methods for model order reduction. The paper is organized as follows. In Section 2, we review the essentials of LTI systems with regard to the projection-based model reduction approach, together with some related definitions and theorems. In Section 3, we present linear time-varying (LTV) signal analysis and discuss the frequency-domain matrix formulation of LTV systems, which underlies the model reduction procedure. We choose a projection framework using a finite discretization method, approximate the appropriate subspaces, and discuss how to compute them more efficiently. Section 4 outlines the model reduction procedure in more detail. The efficiency and accuracy of the developed algorithm are illustrated by the numerical examples in Section 5. Conclusions are given in Section 6. This paper is an extension and continuation of the previous work [9], with more details and new results from real fields of application.

Background
Before proceeding with projection methods, we briefly describe the generalized state-space form. This helps to create a more general framework for the projection techniques.

LTI Systems
Let us consider the linear time-invariant multi-input multi-output (MIMO) system in differential-algebraic form

E ẋ(t) = Ax(t) + Bu(t),    y(t) = Cx(t),    (3)

where n is the system order and p and q are the numbers of system inputs and outputs, respectively. The matrices E, A ∈ R^{n×n}, B ∈ R^{n×p}, C ∈ R^{q×n} are time-invariant. For single-input single-output (SISO) systems, p = q = 1, the matrices B and C reduce to vectors b and c^T, respectively. Applying the Laplace transform to the system (assuming zero initial conditions), we obtain

sE x(s) = A x(s) + B u(s),    y(s) = C x(s),    (4)

where x(s), etc., denote the Laplace transforms of x(t), etc. It can easily be shown that the output of the above system is

y(s) = H(s) u(s),    (5)

where

H(s) = C (sE − A)^{-1} B    (6)

is called the transfer function of the system.
A projection method reduces (3) by choosing two r-dimensional projection spaces, S_1, S_2 ⊆ R^n, so that the solution is approximated in S_2, i.e.,

x ≈ U z,  z ∈ R^r,    (7)

and the residual of (3) is forced to be orthogonal to S_1. Applying this projection to system (3) and then pre-multiplying by V^T, a realization of the reduced-order system of order r satisfies the projection equations

Ê = V^T E U,  Â = V^T A U,  B̂ = V^T B,  Ĉ = C U,    (8)

where the columns of V and U form bases for S_1 and S_2, respectively. If S_1 = S_2, the projection is orthogonal; otherwise it is oblique. The matrices V and U are referred to as the left truncation matrix and the right truncation matrix, respectively. The corresponding transfer function of the reduced-order system is given by

Ĥ(s) = Ĉ (sÊ − Â)^{-1} B̂.    (9)
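As an illustration of the projection equations, the following sketch applies a random orthogonal projection (V = U) to a small placeholder descriptor system; none of the data comes from the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3

# Full-order LTI descriptor system E x' = A x + b u, y = c^T x
# (random placeholder data, not from the paper's examples).
E = np.eye(n)
A = -np.diag(np.arange(1.0, n + 1))   # stable diagonal A
b = rng.standard_normal((n, 1))
c = rng.standard_normal((n, 1))

# Orthogonal projection: V = U with orthonormal columns.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V = U

# Reduced-order matrices via the projection equations.
E_r, A_r = V.T @ E @ U, V.T @ A @ U
b_r, c_r = V.T @ b, U.T @ c

def H(s, E, A, b, c):
    """Transfer function H(s) = c^T (sE - A)^{-1} b."""
    return (c.T @ np.linalg.solve(s * E - A, b)).item()

print(H(1.0, E, A, b, c), H(1.0, E_r, A_r, b_r, c_r))
```

For a random subspace the two values differ; the Krylov subspaces discussed next make them agree to high order.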

Transfer Function Moments
Let us assume that system (3) is a MIMO system with the transfer function (6). Assuming that A is nonsingular, the Taylor series expansion of the transfer matrix (6) about zero is

H(s) = −C (I − s A^{-1}E)^{-1} A^{-1} B = −∑_{i=0}^{∞} C (A^{-1}E)^i A^{-1}B s^i.

The coefficients of s^i in this series, without the negative sign, are called moments.
Definition 2.1 Suppose in system (3) that A is nonsingular. Then the i-th moment (about zero) of this system is given by

m_i = C (A^{-1}E)^i A^{-1} B,  i = 0, 1, 2, . . . ,

where m_i is a q × p matrix in the MIMO case and a scalar in the SISO case.
We observe that the transfer function H(s) is a rational function of s, so it is natural to approximate it by a rational approach such as Padé approximation [3,1,6]. In the case of Padé approximation, the reduced-order model is obtained such that 2r + 1 moments (SISO) of the original and reduced systems are matched, where r is the order of the reduced system. In the MIMO case the number of matched moments may vary. It is shown in [8] that, in the time domain, a reduced r-th order p-input q-output model can match r/p + r/q moment matrices of the original system. For p = q, the number of matched moment matrices is 2r/p.
The k-th moment of the transfer function is given by m_k = C (A^{-1}E)^k A^{-1}B. Clearly our desired approximations are connected with powers of the matrix A^{-1}E acting on A^{-1}B, or powers of A^{-T}E^T acting on A^{-T}C^T. Hence the main task of our reduction technique is to connect the moments with the projection matrices V and U. These ideas are developed further in the next section.
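The moment recursion above can be implemented without ever forming A^{-1} explicitly. The sketch below (random placeholder matrices) computes the first moments and checks the zeroth moment against the value H(0) = −m_0 implied by the Taylor expansion:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Random placeholder E and (nonsingular) A.
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))
c = rng.standard_normal((n, 1))

def moments(E, A, b, c, k):
    """First k moments about zero: m_i = c^T (A^{-1} E)^i A^{-1} b."""
    M = np.linalg.solve(A, E)          # A^{-1} E via a solve, not an inverse
    v = np.linalg.solve(A, b)          # A^{-1} b
    out = []
    for _ in range(k):
        out.append((c.T @ v).item())
        v = M @ v                      # next power of A^{-1}E applied to A^{-1}b
    return out

m = moments(E, A, b, c, 4)
# Consistency check with the expansion: H(0) = c^T (0*E - A)^{-1} b = -m_0.
H0 = (c.T @ np.linalg.solve(-A, b)).item()
print(m[0], -H0)
```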

The Krylov-Subspace Projection
The reduced-order model is computed by applying suitable projections to system (3). We calculate these projections via Krylov subspaces, defined in the following.

Definition 2.2 The Krylov subspace of order m is the space

K_m(A, b) = span{b, Ab, A^2 b, . . . , A^{m−1} b},

where A ∈ R^{n×n} and b ∈ R^n is called the starting vector. The vectors b, Ab, A^2 b, . . . , A^{m−1} b that span the subspace are called basic vectors.
If the i-th basic vector in the Krylov subspace is a linear combination of the previous ones, then all subsequent basic vectors can also be written as linear combinations of the first i − 1 vectors. Therefore the leading linearly independent basic vectors can be taken as a basis of the Krylov subspace.
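A minimal sketch of building an orthonormal basis of K_m(A, b) that stops at the first dependent basic vector, illustrating the remark above (placeholder data):

```python
import numpy as np

def krylov_basis(A, b, m, tol=1e-10):
    """Orthonormal basis of K_m(A, b) = span{b, Ab, ..., A^{m-1} b} by
    modified Gram-Schmidt; stops early when a basic vector is
    (numerically) a combination of its predecessors."""
    n = b.shape[0]
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        for i in range(j):                 # orthogonalize against earlier columns
            w -= (V[:, i] @ w) * V[:, i]
        nw = np.linalg.norm(w)
        if nw < tol:                       # subspace became invariant: stop
            return V[:, :j]
        V[:, j] = w / nw
    return V

A = np.diag([1.0, 2.0, 3.0, 4.0])
b = np.ones(4)
Vk = krylov_basis(A, b, 6)   # only 4 independent basic vectors exist in R^4
print(Vk.shape)
```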

Definition 2.3
The block Krylov subspace of order m is the space

K_m(A, B) = span{B, AB, A^2 B, . . . , A^{m−1} B},

where A ∈ R^{n×n} and B ∈ R^{n×p} contains the starting vectors. Note that rank(K_m(A, B)) ≤ mp. The block Krylov subspace with p starting vectors can be considered as a union of the p Krylov subspaces defined for each starting vector. The following theorems show how to choose the projection matrices that define the reduced-order system and explain how the moments of the original and reduced-order systems are matched.

Theorem 1 If the columns of the matrix U used in (8) form a basis for the order-r_a Krylov subspace K_{r_a}(A^{-1}E, A^{-1}b) and the matrix V is chosen such that Â is nonsingular, then the first r_a moments (about zero) of the original and reduced-order systems match.
Theorem 2 If the columns of the matrix V form a basis for the order-r_b Krylov subspace K_{r_b}(A^{-T}E^T, A^{-T}c) and the matrix U is chosen such that Â is nonsingular, then the first r_b moments (about zero) of the original and reduced-order systems match.
Theorem 3 Assume that A and Â are invertible. If the columns of the matrices U and V form bases for the Krylov subspaces K_{r_a}(A^{-1}E, A^{-1}b) and K_{r_b}(A^{-T}E^T, A^{-T}c), respectively, then the first r_a + r_b moments of the original and reduced-order systems match.
The projection matrices play an important role in the formulation of a reduced model. In the case of Padé approximation [8,6] and the Lanczos approximation technique [1,18], the choice is V = V_l and U = U_l, where V_l and U_l contain the biorthogonal Lanczos vectors; in the case of the Arnoldi method [1,18], U_a = V_a, where U_a is the orthonormal matrix generated by the Arnoldi process. In our projection technique we take V = U, because in this case the reduced model inherits structural properties, such as stability and passivity, from the original model. We extend this approach to multipoint approximations, where the moments are matched at several points in the complex plane [8,17]. In that case V and U contain a basis for the union of the Krylov subspaces constructed at the different frequency points.
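As a numerical check of Theorem 1 with the one-sided choice V = U described above, the following sketch (random placeholder system) builds U from K_r(A^{-1}E, A^{-1}b) and compares the first r moments of the full and reduced systems:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 10, 4
# Random placeholder SISO system; E = I for simplicity.
E = np.eye(n)
A = -(np.diag(np.arange(1.0, n + 1)) + 0.05 * rng.standard_normal((n, n)))
b = rng.standard_normal((n, 1))
c = rng.standard_normal((n, 1))

# Orthonormal basis of K_r(A^{-1}E, A^{-1}b) via QR of the basic vectors.
v = np.linalg.solve(A, b)
K = [v]
for _ in range(r - 1):
    K.append(np.linalg.solve(A, E @ K[-1]))
U, _ = np.linalg.qr(np.hstack(K))
V = U                                    # one-sided (orthogonal) projection

E_r, A_r = V.T @ E @ U, V.T @ A @ U
b_r, c_r = V.T @ b, U.T @ c

def moments(E, A, b, c, k):
    """First k moments m_i = c^T (A^{-1}E)^i A^{-1} b."""
    M, v = np.linalg.solve(A, E), np.linalg.solve(A, b)
    out = []
    for _ in range(k):
        out.append((c.T @ v).item())
        v = M @ v
    return np.array(out)

err = np.max(np.abs(moments(E, A, b, c, r) - moments(E_r, A_r, b_r, c_r, r)))
print(err)   # first r moments should agree up to rounding
```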

LTV Systems
Let us consider the time-varying system (1) (omitting D(t)), where E(t), A(t), B(t), C(t) are matrices of orders compatible with x(t), u(t), and y(t) and are assumed to be continuous functions of time. In integrated circuit applications, LTV systems most commonly originate from linearization of a nonlinear system of equations around a time-varying operating point. We discuss the details of this linearization scheme in the next subsection.

Analysis of LTV Signals
We consider a nonlinear system driven by a large signal b_l(t) and a small signal u(t), producing an output z(t). For simplicity we assume that both u(t) and z(t) are scalars. We model the nonlinear system by differential-algebraic equations, describing the circuit equations [15,21,4,10,11] and many other applications:

d/dt q(v(t)) + f(v(t)) = b_l(t) + B(t)u(t),    z(t) = C(t)^T v(t),    (14)

where v(t) represents the nodal voltages and branch currents, u(t) represents the input source, and q(·) and f(·) are nonlinear functions describing the charge/flux and resistive terms in the circuit, respectively. B(t) and C(t) contain vectors that link the input and output to the rest of the system. In (14), the voltage v(t) and the input variable u(t) represent total quantities. To obtain an LTV model, we split them into a large-signal part and a small-signal part [15,14,20],

v(t) = v^(L)(t) + v^(s)(t).    (15)

Linearizing around the large-signal part v^(L) [15,20,21], we obtain a linear time-varying system of the form

C̄(t) d/dt v^(s)(t) + Ḡ(t) v^(s)(t) = B(t) u(t),    (16)

where Ḡ(t) = ∂f(v^(L)(t))/∂v and C̄(t) = ∂q(v^(L)(t))/∂v are the time-varying conductance and capacitance matrices for the small-signal response v^(s). We can write (16) in a more general form for a small signal v. To relate it to the standard notation (1), we may make the identification E(t) = C̄(t), A(t) = −Ḡ(t).

Frequency-Domain Matrix of LTV System
Most of the work in model reduction for LTI systems has been done on the basis of rational approximations of the frequency-domain transfer function. Thus motivated, we adopt the formalism of L. Zadeh's variable transfer functions [24], which were developed to describe time-varying systems. In this formalism the response v(t) can be written as an inverse Fourier transform of the product of a time-varying transfer function and the Fourier transform u(ώ) of u(t):

v(t) = (1/2π) ∫ h(iώ, t) u(ώ) e^{iώt} dώ,    (18)

where h(iώ, t) is the time-varying transfer function. To obtain the frequency-by-frequency response, we take u to be a single-frequency input, expressed in terms of the delta function as u(ώ) = u(ω)δ(ω − ώ). Equation (18) then reduces to

v(t) = (1/2π) h(iω, t) u(ω) e^{iωt}.    (19)

Writing s = iω and substituting into (16), we obtain an equation for h(s, t):

C̄(t) d/dt h(s, t) + [Ḡ(t) + s C̄(t)] h(s, t) = B(t).    (20)

Hence the transfer path of the system from the input u(t) to the output y(t) can be represented by the time-varying transfer function

Φ(s, t) = C(t)^T h(s, t).    (21)

We consider here time-varying transfer functions that are rational in s. Hence it is reasonable to obtain the reduced-order model from the same sorts of rational approximations that have proved suitable for the reduction of LTI systems. Therefore, we first seek a representation of the transfer functions in terms of finite-dimensional matrices.

Discretization of Transfer Function
The rational matrix function can be obtained by discretizing (20). Since we focus on LPTV systems, we need to specify the system matrices over a fundamental period T. We construct a time-domain version of (20) by collocating h(s, t) over time samples t ∈ [0, T] at M sample points t_1, . . . , t_M, with periodicity t_M = T.
Using a linear multistep formula (e.g., backward Euler [21,20]) and the periodicity of h(s, t), i.e., h(s, t_0) = h(s, t_M), we obtain the representation of (20) in terms of finite-dimensional matrices,

(A_TD + s E_TD) h(s) = B_TD,    (23)

where h(s) = [h_1(s)^T, . . . , h_M(s)^T]^T, E_TD = diag(C̄_1, . . . , C̄_M), B_TD = [B_1^T, . . . , B_M^T]^T, and A_TD is the block matrix with diagonal blocks Ḡ_j + C̄_j/δ_j, subdiagonal blocks −C̄_j/δ_j, and corner block −C̄_1/δ_1 in the upper right (from the periodicity). Here Ḡ_j = Ḡ(t_j), C̄_j = C̄(t_j), B_j = B(t_j), h_j(s) = h(s, t_j), and δ_j = t_j − t_{j−1} is the j-th time step.
Setting additionally C_TD = diag(C_1, . . . , C_M), where C_j = C(t_j), the matrix of the baseband-referred transfer functions H_TD(s) is given by [14]

H_TD(s) = C_TD^T (A_TD + s E_TD)^{-1} B_TD.    (29)

Equation (29) is called the time-domain matrix form of the LTV transfer functions. The discretization procedure has converted the n-dimensional time-varying system (20) into an equivalent LTI system of dimension N = nM, which is larger by a factor equal to the number of time steps M in the discretization. Equation (29) can be used directly for reduced-order modeling. At this point, algorithmic approaches used for the model reduction of LTI systems can be applied to the matrices defined in (23)–(29).
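One plausible realization of this assembly, sketched under the backward-Euler collocation and the resolvent convention (A_TD + s E_TD) used here; the matrix samples G_j, C_j, B_j are random placeholders, not circuit data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, M = 3, 4                 # toy sizes: 3 unknowns, 4 collocation points
T = 1.0
dt = T / M                  # uniform time steps delta_j = dt

# Hypothetical periodic samples of the conductance/capacitance matrices
# and the input vector (placeholders).
G = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(M)]
C = [np.eye(n) for _ in range(M)]
B = [rng.standard_normal((n, 1)) for _ in range(M)]

N = n * M
A_TD = np.zeros((N, N))
E_TD = np.zeros((N, N))
B_TD = np.zeros((N, 1))
for j in range(M):
    blk = slice(j * n, (j + 1) * n)
    p = (j - 1) % M                       # previous sample; wraps for j = 0
    prv = slice(p * n, (p + 1) * n)
    A_TD[blk, blk] = G[j] + C[j] / dt     # diagonal block
    A_TD[blk, prv] = -C[j] / dt           # subdiagonal / top-right corner block
    E_TD[blk, blk] = C[j]                 # block diagonal of capacitance samples
    B_TD[blk] = B[j]

# One sample of the discretized resolvent (A_TD + s E_TD)^{-1} B_TD.
s = 1j * 2.0 * np.pi
h = np.linalg.solve(A_TD + s * E_TD, B_TD)
print(h.shape)
```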

Approximation by Krylov Subspace Methods
Following the work in [21], the transfer function for the small-signal steady-state response of the periodic time-varying system is obtained by solving the finite-difference equations

C̄_k (ṽ_k − ṽ_{k−1})/δ_k + Ḡ_k ṽ_k = B(s, t_k),  k = 1, . . . , M,  with ṽ_0 = α(s) ṽ_M,    (30)

where α(s) ≡ e^{−sT}, T is the fundamental period, and B(s, t_k) = e^{s t_k} B. The transfer function h(s, t) is then given by h(s, t) = e^{−st} ṽ(t).
Although (30) can be solved using sparse matrix techniques, we look for a more efficient approach that exploits the fact that the matrix is mostly block lower triangular and is typically solved for many frequency shifts. To describe this approach, we first find a suitable representation of (30) in the time-domain matrix form.
For this purpose, we decompose the coefficient matrix of (23) into two triangular parts, A_TD = L + U, where L is the nonsingular (block) lower triangular portion and U is the (block) upper triangular portion of A_TD in (23). Using the expressions for L and U, we can represent (30) in the time-domain matrix form

(L + α(s)U) ṽ = B(s).    (33)

If we define the small-signal modulation operator ψ(s) = diag(e^{s t_1} I, . . . , e^{s t_M} I), we obtain an expression for the transfer function,

h(s) = ψ(s)^{−1} ṽ,

and also B(s) = ψ(s) B_TD.
Now we can obtain an approximation from the finite-difference discretization as

(A_TD + s E_TD)^{−1} B_TD ≈ ψ(s)^{−1} (L + α(s)U)^{−1} ψ(s) B_TD.    (36)

The difference between the two sides of (36) depends on how the small signal is treated in the discretization: the left-hand side represents a spectral discretization, while the right-hand side represents a finite-difference discretization.
It is briefly discussed in [13,12] that the spectral form (22), while amenable to model reduction, is less convenient to work with. If we use a Krylov subspace scheme with a lower-triangular preconditioner, the preconditioner must be reconstructed at each frequency point. That means we must re-factor the diagonal blocks, and the computational cost as well as the problem size increases [21].
To resolve this dilemma, assume the projection matrix V is not a basis for the Krylov subspace generated by (A_TD + s E_TD)^{-1}, but instead for that of a nearby matrix. In that case, the reduced-order model is still a projection of the original, containing some small error. As long as the model is not evaluated in the neighborhood of a pole, the additional errors introduced into the model can be expected to remain small. Hence, instead of the spectral form, the basis for the projector in the model reduction procedure can be obtained from the finite-difference equations.

Preconditioning and Recycling of Krylov Subspaces
Our interest is to see how the finite-difference method approximates an appropriate basis V for the reduced-order system. Suppose we need to solve (33) for some different right-hand sides B. Again following [21], consider preconditioning with the matrix L. The preconditioned system can then be written as

(I + α(s) L^{−1} U) ṽ = L^{−1} B(s).    (37)

As L is lower triangular, its inverse is easily applied by factoring the diagonal blocks and then back-solving. The structure of (37) suggests exploiting the shift-invariance property of Krylov subspaces [8]: the Krylov subspace of a matrix A is invariant with respect to shifts of the form A → A + αI, for α any nonzero scalar. This recycled Krylov subspace method also enables us to use the same Krylov subspace to solve (37) at multiple frequency points. In this context we state the following corollary:

Corollary 1 For any scalar α and starting vector b, K_m(A + αI, b) = K_m(A, b).

For the first frequency point s_1, the algorithm solves the preconditioned system (37) and normalizes the solution to obtain the first column of V. For the second column, it takes E_TD · v as its right-hand-side vector, where v is the previous orthonormal column generated for V. The process continues until m is reached for s_1. For the next frequency s_2, Step 9 computes the first column and then orthonormalizes it against all previously computed orthonormalized columns of V generated for s_1 (such an orthonormalization is efficient because of the recycled Krylov scheme used for multiple expansion points). The total number of orthonormalized columns is counted by k, which is initialized at the beginning of the algorithm.
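The shift-invariance property just invoked can be checked numerically on a small random example (placeholder data):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, alpha = 8, 4, 2.7      # placeholder sizes and shift

A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def krylov_mat(A, b, m):
    """Matrix of basic vectors [b, Ab, ..., A^{m-1} b]."""
    K = [b]
    for _ in range(m - 1):
        K.append(A @ K[-1])
    return np.column_stack(K)

Q1, _ = np.linalg.qr(krylov_mat(A, b, m))
Q2, _ = np.linalg.qr(krylov_mat(A + alpha * np.eye(n), b, m))

# The two subspaces are equal iff their orthogonal projectors coincide.
diff = np.linalg.norm(Q1 @ Q1.T - Q2 @ Q2.T)
print(diff)
```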
As soon as the projection columns of V for a particular s_i are computed, the algorithm runs for the next frequency point. The projection matrix V is the union of all the projections obtained for all s_i, where i runs from 1 to n_s. Therefore, the number of columns of the projection matrix V is m · n_s. This can be expressed as follows.

Algorithm 1 (outline)
1: Set k = 1
2: for i = 1 to n_s do
3:   for j = 1 to m do
       . . .
     end for
   end for

Remark 1 In Algorithm 1, Step 9 uses the recycling technique to produce the projection columns of V. It is clear from the context that if a preconditioner L is not used to solve Step 9, each new vector in the model reduction is obtained by an inner Krylov iteration with the matrix A_TD, and a new right-hand side u_i is generated for each sweep of frequency s_i. Due to the shift-invariance property, since each new right-hand side u_i in the model reduction procedure is drawn from a Krylov subspace K_m(A_TD, B_TD) for some m, it is reasonable that the next term lies in K_i(A_TD, B_TD), which is closely related to K_m(A_TD, B_TD), where i slightly exceeds m [12].
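A minimal sketch of the L-preconditioned shifted solve that the recycling relies on, using a dense element-wise triangular split as a stand-in for the block split of A_TD (all data are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 12
T = 1.0

# Hypothetical split A_TD = L + U: L the nonsingular lower triangular
# portion, U the strictly upper triangular remainder.
A_TD = rng.standard_normal((N, N)) + 5.0 * np.eye(N)
L = np.tril(A_TD)
U = np.triu(A_TD, 1)

def alpha(s):
    """Periodicity factor alpha(s) = exp(-sT)."""
    return np.exp(-s * T)

def solve_shifted(s, rhs):
    """Solve (L + alpha(s) U) v = rhs in the L-preconditioned form
    (I + alpha(s) L^{-1} U) v = L^{-1} rhs; only L is ever inverted,
    so its factorization can be reused at every frequency point s."""
    M = np.eye(N) + alpha(s) * np.linalg.solve(L, U)
    return np.linalg.solve(M, np.linalg.solve(L, rhs))

rhs = rng.standard_normal(N)
v = solve_shifted(0.5, rhs)
residual = np.linalg.norm((L + alpha(0.5) * U) @ v - rhs)
print(residual)
```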
The net result of the algorithm is an N × m n_s projection matrix V. Because the matrix V obtained from the direct use of the proposed algorithm may have linearly dependent columns, we use a rank-revealing QR factorization (RRQR) [7] with a prescribed tolerance τ in forming the projection matrix. The rank-revealing QR factorization truncates these redundant columns and produces an orthonormal basis of the projection matrix for the reduced-order system. Finally, the reduced-order system is generated through the projection with V.
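The truncation step can be sketched as follows; for illustration a greedy pivoted Gram-Schmidt stands in for a library RRQR, and the input matrix is synthetic with deliberately redundant columns (as when Krylov bases from nearby expansion points overlap):

```python
import numpy as np

def orth_truncate(Vraw, tau):
    """RRQR-like truncation via greedy pivoted Gram-Schmidt: keep the
    column with the largest remaining norm, deflate it from the rest,
    and stop once the remainder falls below tau times the largest
    initial column norm."""
    W = Vraw.astype(float).copy()
    norm0 = np.max(np.linalg.norm(W, axis=0))
    Q = []
    for _ in range(W.shape[1]):
        norms = np.linalg.norm(W, axis=0)
        p = int(np.argmax(norms))
        if norms[p] <= tau * norm0:
            break                        # remaining columns are redundant
        q = W[:, p] / norms[p]
        Q.append(q)
        W -= np.outer(q, q @ W)          # deflate the accepted direction
    return np.column_stack(Q)

rng = np.random.default_rng(6)
Nn = 50
base = rng.standard_normal((Nn, 6))
# 12 columns spanning only a 6-dimensional space.
V_raw = np.hstack([base, base @ rng.standard_normal((6, 6))])

Vq = orth_truncate(V_raw, tau=1e-5)
print(Vq.shape)      # only the independent directions survive
```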

Numerical Results
To test the time-varying model reduction procedure, the proposed algorithm has been implemented in a time-domain RF circuit simulator. The large-signal periodic steady state is calculated using a shooting method [20]. The LTV system is discretized using second-order backward-difference formulas. The data files for both of the following model problems were provided by Michael Striebel, former postdoctoral researcher, NXP Semiconductors, High Tech Campus 37, NL-5656 AE Eindhoven, The Netherlands.

Simple RF Circuit
We consider here a simple example where the data is obtained from a small RF circuit simulator. The circuit consists of 5 nodes and is excited by a local oscillator (LO) at 2 kHz driving the mixer, while the RF input is fed into the I-channel buffer. The time-varying system is obtained around a steady state of the circuit at the oscillator frequency; a total of M = 129 time steps is used to describe the steady-state waveform. For the model reduction procedure, the input function B(t) is a constant column vector, corresponding to the continuous small-signal input. To analyze the circuit, we consider a period of T = 1 ms for the steady-state analysis. The final discretized model is a real LTI system of order N = 645.
The proposed algorithm produces a very good approximation of the original model for multiple frequency points. Three different expansion points on the positive real axis, at s = 2 kHz, 4 kHz, and 6 kHz, are considered. The reduced-order model is generated by matching four moments of the Krylov subspace generated for every expansion point. We use the rank-revealing QR factorization for the formulation of the projection matrix with tolerance tol = 10^{−5}.

Figure 1: Frequency response of the transfer function: exact system versus reduced-order system of order r = 3 (RF circuit).
We obtain a reduced-order model of order r = 3. The computing time for the reduced model is very small compared to that of the original model. We plot the frequency response of the transfer functions for both the original and reduced-order systems and compare the relative error. Fig. 1 shows a very close match of the baseband transfer functions H_TD(s) and Ĥ_TD(s), and the relative error in Fig. 2 is very small. The Bode diagram and the step response in Fig. 3 and Fig. 4 further confirm the accuracy and efficiency of the reduced-order model.

Mixer Circuit
In this example, we apply the proposed algorithm to a multi-tone mixer circuit consisting of several functional component blocks. The circuit generates 43 equations in the circuit simulator, and 201 time steps are needed for the time-domain analysis, so that the matrix A_TD has dimension N = 8643.
The mixing elements shift the input from the RF frequency to the mixer LO frequency. For the model reduction procedure, the input function B(t) is chosen to be a constant column vector, corresponding to the continuous small-signal input. To analyze the circuit, a periodic steady-state analysis is run with a period of T = 1 ns. The proposed algorithm produces a very good approximation of the original model for multiple frequency points. Five different expansion points on the positive real axis, in the range from s = 2 MHz to 6 MHz, are used. The reduced-order model is generated by matching six moments of the Krylov subspace generated for every expansion point. We use the RRQR factorization for the formulation of the projection matrix with tolerance tol = 10^{−6}.
We obtain a reduced-order model of order r = 4. The computing time for the reduced-order model is only 0.0037 CPU seconds, while the original model took almost 8 × 10^3 CPU seconds. We plot the transfer functions for both the original and reduced-order systems in Fig. 5 and depict their relative error in Fig. 6. The transfer functions match well and the relative error is very small. In addition, the Bode diagram and the step response in Fig. 7 and Fig. 8 further confirm the accuracy and efficiency of the reduced-order model.

Discussion
The system model design applied here is efficient for small-signal analysis over the time parameters of interest; the model is therefore capable of representing very complicated physical dynamics in circuit problems. We observe that the proposed algorithm produces a very good approximation of the original model, and the reduced-order model is very small and efficient compared to the original model.
There are several directions for future extensions of the ideas in this work. The formalism and algorithms can be extended in a straightforward way to the case of quasi-periodic small-signal analysis [23].

Figure 2 :
Figure 2: Error in the frequency response of transfer function of reduced-order system (RF circuit).

Figure 3 :
Figure 3: Bode plots for the exact system and the reduced-order system of order r = 3 (RF circuit).

Figure 4 :
Figure 4: Step response for the exact system and the reduced-order system of order r = 3 (RF circuit).

Figure 5 :
Figure 5: Transfer function: exact system versus reduced-order system of order r = 4 (Mixer circuit).

Figure 6 :
Figure 6: Error in the frequency response of transfer function of reduced-order system (Mixer circuit).

Figure 7 :
Figure 7: Bode plots for the exact system and the reduced-order system of order r = 4 (Mixer circuit).

Figure 8 :
Figure 8: Step responses for the exact system and the reduced-order system of order r = 4 (Mixer circuit).