
# An improved algorithm for generalized least squares estimation


This work was supported in part by NSERC of Canada Grant RGPIN-2017-0513

The textbook direct method for generalized least squares estimation was developed by Christopher C. Paige about 40 years ago. He proposed two algorithms for the case in which the noise covariance matrix itself, rather than a factor of it, is available. Both of Paige's algorithms involve three matrix factorizations. The first does not exploit the matrix structure of the problem, but it can be implemented with blocking techniques to reduce data-communication time on modern processors. The second takes advantage of the matrix structure, but its main part cannot be implemented with blocking techniques. In this paper, we propose an improved algorithm. The new algorithm involves only two matrix factorizations instead of three and can be implemented with blocking techniques. We show that, in terms of flop counts, the improved algorithm costs less than Paige's first algorithm in all cases and less than his second algorithm in some cases. Numerical tests show that, in terms of CPU running time, the improved algorithm is faster than both existing algorithms when blocking techniques are used.

Mathematics Subject Classification: Primary: 62J05, 65F20; Secondary: 15A23, 93E24.


Figure 6.1.  Average CPU running time per solve over $10$ solves for $\mathbf{A}\in {\mathbb{R}}^{m \times n}$ with $n = 80$ and $m$ ranging from $100$ to $3500$ in steps of $200$

Algorithm 1:

1. Compute the QL factorization of $\mathbf{A}$: $\mathbf{H}_n\cdots \mathbf{H}_1 \mathbf{A}= \begin{bmatrix} {\boldsymbol{0}} \\ \mathbf{L}_{ \mathbf{A}} \end{bmatrix}$
2. Compute $\bar {\boldsymbol{\Sigma}}= \mathbf{H}_n\cdots \mathbf{H}_1 {\boldsymbol{\Sigma}}\, \mathbf{H}_1\cdots \mathbf{H}_n$
3. Compute $\tilde {\mathbf{y}} = \mathbf{H}_n\cdots \mathbf{H}_1 {\mathbf{y}}$
4. Compute the Cholesky factorization of $\bar {\boldsymbol{\Sigma}}$: $\bar {\boldsymbol{\Sigma}}= \mathbf{L} \mathbf{L}^ {\mkern-1.5mu\mathsf{T}}$
5. Solve $\mathbf{L}(1\!:\!m-n,1\!:\!m-n)\, {\mathbf{z}}_1 = \tilde {\mathbf{y}}(1\!:\!m-n)$ for ${\mathbf{z}}_1$
6. Solve $\mathbf{L}_{ \mathbf{A}} {\hat{\mathbf{x}}}=\tilde {\mathbf{y}}(m-n+1\!:\!m)- \mathbf{L}(m-n+1\!:\!m,1\!:\!m-n)\, {\mathbf{z}}_1$ for ${\hat{\mathbf{x}}}$
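A direct transcription of these steps may help fix the dimensions: with $\mathbf{A}\in\mathbb{R}^{m\times n}$, $m>n$, the zero block of the QL factorization has $m-n$ rows, so the triangular system in step 5 has order $m-n$. The following NumPy/SciPy sketch (function name our own) assumes full column rank and obtains the QL factorization from a QR factorization of the row- and column-reversed matrix, since SciPy provides no `ql` routine.

```python
import numpy as np
from scipy.linalg import qr, cholesky, solve_triangular

def gls_ql(A, Sigma, y):
    """Sketch of the six steps above: QL factor A, transform Sigma and y
    by Q^T, Cholesky-factor the transformed covariance, then two
    triangular solves recover the GLS estimate x_hat."""
    m, n = A.shape
    # Step 1: QL factorization A = Q [0; L_A] via QR of the flipped matrix
    # (J_m A J_n = Qf Rf  implies  A = (J_m Qf J_m)(J_m Rf J_n)).
    Qf, Rf = qr(A[::-1, ::-1])        # Qf is m x m, Rf is m x n upper trapezoidal
    Q = Qf[::-1, ::-1]                # orthogonal factor of the QL factorization
    L_A = Rf[:n, :][::-1, ::-1]       # n x n lower triangular bottom block
    # Steps 2-3: apply Q^T to the covariance and the observation vector.
    Sigma_bar = Q.T @ Sigma @ Q
    y_t = Q.T @ y
    # Step 4: Cholesky factorization Sigma_bar = L L^T.
    L = cholesky(Sigma_bar, lower=True)
    # Step 5: leading (m-n) x (m-n) triangular solve.
    p = m - n
    z1 = solve_triangular(L[:p, :p], y_t[:p], lower=True)
    # Step 6: solve L_A x_hat = y_t(p+1:m) - L(p+1:m, 1:p) z1.
    x_hat = solve_triangular(L_A, y_t[p:] - L[p:, :p] @ z1, lower=True)
    return x_hat
```

The last two solves follow from writing the whitened residual $\mathbf{v} = \mathbf{L}^{-1}(\tilde{\mathbf{y}} - [\,{\boldsymbol{0}};\, \mathbf{L}_{\mathbf{A}}]\,\mathbf{x})$: its leading part is fixed by step 5, and choosing $\hat{\mathbf{x}}$ to zero the trailing part minimizes the weighted residual norm.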
[1] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney and D. Sorensen, *LAPACK Users' Guide*, 3rd edition, SIAM, Philadelphia, PA, 1999. doi: 10.1137/1.9780898718201.
[2] J. Bezanson, A. Edelman, S. Karpinski and V. B. Shah, Julia: A fresh approach to numerical computing, *SIAM Review*, 59 (2017), 65-98. doi: 10.1137/141000671.
[3] Å. Björck, *Numerical Methods for Least Squares Problems*, SIAM, Philadelphia, PA, 1996.
[4] Å. Björck, *Numerical Methods in Matrix Computations*, Springer, 2015.
[5] G. H. Golub and C. F. Van Loan, *Matrix Computations*, 4th edition, The Johns Hopkins University Press, Baltimore, MD, 2013.
[6] S. Kourouklis and C. C. Paige, A constrained least squares approach to the general Gauss-Markov linear model, *Journal of the American Statistical Association*, 76 (1981), 620-625.
[7] C. C. Paige, Computer solution and perturbation analysis of generalized linear least squares problems, *Math. Comput.*, 33 (1979), 171-183. doi: 10.2307/2006034.
[8] C. C. Paige, Fast numerically stable computations for generalized linear least squares problems, *SIAM J. Numer. Anal.*, 16 (1979), 165-171. doi: 10.1137/0716012.
[9] R. Schreiber and C. Van Loan, A storage-efficient WY representation for products of Householder transformations, *SIAM J. Sci. Stat. Comput.*, 10 (1989), 53-57. doi: 10.1137/0910005.

