COMPUTING MINIMUM NORM SOLUTION OF LINEAR SYSTEMS OF EQUATIONS BY THE GENERALIZED NEWTON METHOD

The aim of this paper is to find the minimum norm solution of a linear system of equations. The proposed method is based on solving the dual exterior penalty problem of the primal quadratic program. The resulting unconstrained minimization problem is solved by the generalized Newton method, and the Armijo step size rule is adopted to guarantee its finite global convergence. The method was tested on systems selected from the NETLIB collection. Numerical results were compared with the MOSEK Optimization Software on linear systems from NETLIB (Table 1) and on linear systems produced by a random system generator (Table 2).


1. Introduction. When a linear system has multiple solutions, it may be important to select a particular one, and in such cases a natural choice is the solution with the minimum norm [2,10]. Consider the following problem:

min { (1/2)‖x‖² : Ax = b, x ≥ 0 },   (1)

where A ∈ R^{m×n} and b ∈ R^m.
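For reference, the minimum norm notion can be illustrated with NumPy's pseudoinverse, which returns the least 2-norm solution of a consistent system Ax = b; the data below are illustrative, chosen so that the least-norm solution is also nonnegative:

```python
import numpy as np

# Minimum norm solution of an underdetermined consistent system Ax = b.
# np.linalg.pinv gives x* = A^+ b, the solution of least Euclidean norm.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])

x_star = np.linalg.pinv(A) @ b          # equals (2/3, 4/3, 2/3) here

# Any other solution, e.g. x_star shifted along a null-space direction,
# has strictly larger norm.
null_dir = np.array([1.0, -1.0, 1.0])   # A @ null_dir = 0
x_other = x_star + 0.5 * null_dir

print(np.allclose(A @ x_star, b))                        # feasible
print(np.linalg.norm(x_star) < np.linalg.norm(x_other))  # smaller norm
```

In this small example the least-norm solution happens to be componentwise nonnegative, so it is also the minimum norm solution in the sense of problem (1).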
The solution can be obtained by minimizing the dual exterior penalty problem of the primal quadratic program [10,11]; an alternative method offers another way to solve this problem [7,9]. The dual exterior penalty problem of (1) leads to the minimization of a function that is piecewise quadratic, convex, and only once differentiable, so the ordinary Hessian is not applicable. For the minimization of this function, the generalized Newton method [10,11,12] together with the Armijo step size rule [1,3,4,13,14] is used. Both approaches and the generalized Newton method are discussed in detail in Sections 2 and 3. The last section presents a comparison of the test results with MOSEK.
As for notation, R^n denotes the n-dimensional real space, and Aᵀ, ‖·‖, and ‖·‖∞ denote the transpose of the matrix A, the Euclidean norm, and the ∞-norm, respectively. For a vector a, a₊ denotes the vector obtained by replacing the negative components of a by zeros, i.e. (a₊)ᵢ = max{aᵢ, 0}.
2. Main Results. In this section, we explain two approaches discussed in the introduction.
Consider the following function:

h(u) = f(Aᵀu) − g(bᵀu),

where f is an arbitrary differentiable function from R^n to R and g is an arbitrary real differentiable function. The next result provides sufficient conditions for determining the minimum norm solution of problem (1).
Theorem 2.1. Let u* be a stationary point of h, and assume that there exists a vector v* ∈ R^m such that

(Aᵀv*)₊ = (1 / g′(bᵀu*)) ∇f(Aᵀu*).   (2)

Then (Aᵀv*)₊ is the solution to problem (1).
Proof. Let u* be a stationary point of h and let v* ∈ R^m satisfy (2). Then ∇h(u*) = A∇f(Aᵀu*) − b g′(bᵀu*) = 0, and from here A(Aᵀv*)₊ − b = 0. This means that (Aᵀv*)₊ is a feasible solution to (1).
In the following subsections, two approaches based on Theorem 2.1 for solving problem (1) are described.
2.1. First approach. For the first approach we take f(y) = (1/2)‖y₊‖² and g(t) = εt, i.e.

h(u) = (1/2)‖(Aᵀu)₊‖² − εbᵀu.

As shown in [10,11], there exists some positive ε̄ such that, if we choose ε ∈ (0, ε̄], then x* = (1/ε)(Aᵀu*)₊ is the minimum norm solution of (1). Therefore, from (2) and Theorem 2.1, we have v* = u*/ε. This approach is based on the dual exterior penalty problem.
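As a sanity check on the first approach, the objective h(u) = (1/2)‖(Aᵀu)₊‖² − εbᵀu and its gradient A(Aᵀu)₊ − εb (the forms used in Section 3) can be verified against finite differences; this NumPy sketch uses illustrative data and names of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, eps = 3, 6, 1e-2
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)      # make Ax = b consistent

def h(u):
    # First-approach objective: 0.5*||(A^T u)_+||^2 - eps*b^T u
    return 0.5 * np.sum(np.maximum(A.T @ u, 0.0) ** 2) - eps * (b @ u)

def grad_h(u):
    # Gradient: A (A^T u)_+ - eps*b
    return A @ np.maximum(A.T @ u, 0.0) - eps * b

# Central finite differences along each coordinate direction.
u = rng.standard_normal(m)
fd = np.array([(h(u + 1e-6 * e) - h(u - 1e-6 * e)) / 2e-6
               for e in np.eye(m)])
print(np.allclose(fd, grad_h(u), atol=1e-4))
```

Because h is piecewise quadratic, the central difference is essentially exact away from the kinks of the plus function, so the two gradients agree to high accuracy at a generic point.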

2.2. Second approach. In the second approach, h is chosen so that x* = (Aᵀu*)₊ itself is the minimum norm solution of (1) (see [7]); the vector v* required in (2) is then obtained from Theorem 2.1 in the same way. This approach is based on the alternative method.

3. Generalized Newton Method. In this section, the generalized Newton method for solving the unconstrained optimization problem of the first approach is explained. The function h(u) = (1/2)‖(Aᵀu)₊‖² − εbᵀu is piecewise quadratic, convex, and differentiable, but it is not twice differentiable.
For u, s ∈ R^m, the gradient of h satisfies

‖∇h(u) − ∇h(s)‖ = ‖A(Aᵀu)₊ − A(Aᵀs)₊‖ ≤ ‖A‖ ‖Aᵀ‖ ‖u − s‖,

which means ∇h is globally Lipschitz continuous with constant K = ‖A‖ ‖Aᵀ‖. Thus the existence of a generalized Hessian is guaranteed; it is defined as the m × m symmetric positive semidefinite matrix [8]

∂²h(u) = A D(z) Aᵀ,

where D(z) denotes the n × n diagonal matrix whose i-th diagonal element zᵢ equals 1 if (Aᵀu)ᵢ > 0 and 0 otherwise. Therefore, the generalized Newton method can be used to solve this problem, and it requires a line-search algorithm to obtain global termination (see [13]). In the following algorithm, the generalized Newton method is applied with a line search based on the Armijo rule: choose any u₀ ∈ R^m and tol > 0, set i = 0, and iterate while ‖∇h(uᵢ)‖ > tol. Since the generalized Hessian may be singular, a modified Newton direction based on the Cholesky factorization is used:

∂²h(uᵢ) + γI_m = MᵀM,

where M is an upper triangular matrix, γ is a small positive number, and I_m is the identity matrix of order m. The following iterative process can now be introduced:

u_{i+1} = uᵢ + λᵢdᵢ,   dᵢ = −(∂²h(uᵢ) + γI_m)⁻¹ ∇h(uᵢ),   (9)

where λᵢ is the Armijo step size. If u* = arg min_{u ∈ R^m} {−εbᵀu + (1/2)‖(Aᵀu)₊‖²}, then x* = (1/ε)(Aᵀu*)₊ is the minimum norm solution to (1). The proof of the finite global convergence of this algorithm is given in [1,11].

4. Numerical Results. This section presents numerical results on linear systems from NETLIB (see Table 1) and on various randomly generated systems (see Table 2) used to test the iterative process (9). The test system generator creates a random matrix A for a given m, n, and density d, with the elements of A uniformly distributed between −50 and +50; the systems were generated by a short MATLAB script. In Table 1, a comparison is drawn between the MOSEK Optimization Software for convex quadratic problems (cqpMosek) and our method, implemented in MATLAB 7.6 (ssGNewton), on the NETLIB systems. The results of the comparison on the randomly generated systems are shown in Table 2.
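A minimal NumPy transcription of the scheme above — generalized Hessian A D(z)Aᵀ, a Cholesky-factored regularized direction, and Armijo backtracking — is sketched below. The paper's implementation (ssGNewton) is in MATLAB; all parameter values and names here are illustrative, the test data imitate the generator's ±50 range, and ties (Aᵀu)ᵢ = 0 are counted as active (a valid choice within the generalized Hessian):

```python
import numpy as np

def min_norm_solution(A, b, eps=1e-5, gamma=1e-8, tol=1e-9, max_iter=500):
    """Sketch of the generalized Newton method with the Armijo rule for
    min_u h(u) = 0.5*||(A^T u)_+||^2 - eps*b^T u; then x* = (1/eps)(A^T u*)_+.
    Parameters are illustrative, not the authors' settings."""
    m, n = A.shape

    def h(u):
        return 0.5 * np.sum(np.maximum(A.T @ u, 0.0) ** 2) - eps * (b @ u)

    u = np.zeros(m)
    for _ in range(max_iter):
        g = A @ np.maximum(A.T @ u, 0.0) - eps * b        # gradient of h
        if np.linalg.norm(g) <= tol:
            break
        z = (A.T @ u >= 0.0).astype(float)                # active-set indicator
        H = (A * z) @ A.T + gamma * np.eye(m)             # A D(z) A^T + gamma*I
        L = np.linalg.cholesky(H)                         # H = L L^T
        d = -np.linalg.solve(L.T, np.linalg.solve(L, g))  # modified Newton direction
        # Armijo rule: halve the step until sufficient decrease.
        lam = 1.0
        while h(u + lam * d) > h(u) + 0.25 * lam * (g @ d) and lam > 1e-12:
            lam *= 0.5
        u = u + lam * d
    return np.maximum(A.T @ u, 0.0) / eps                 # x* = (1/eps)(A^T u*)_+

# Random consistent system with a known nonnegative feasible point x0.
rng = np.random.default_rng(1)
A = rng.uniform(-50.0, 50.0, size=(5, 12))
x0 = rng.uniform(0.0, 1.0, size=12)
b = A @ x0

x = min_norm_solution(A, b)
print("feasibility gap:", np.linalg.norm(A @ x - b))
print("smallest component:", x.min())
print("norms:", np.linalg.norm(x), "vs feasible point", np.linalg.norm(x0))
```

The returned x is nonnegative by construction, approximately feasible (its residual is the final gradient norm divided by ε), and of no larger norm than the feasible point x0 it is compared against.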
In the solved examples, the starting vector is u₀ = 0_m, ε = (n/m)·10⁻⁵, and tol = 10⁻¹⁰. The computations were carried out on an AMD Athlon 64 X2 Dual Core 5000+ with 2 GB of memory. The total computation time for each example is given in the second column of each table.

5. Conclusion. In this paper, we proposed the use of the fast generalized Newton method with the Armijo rule to obtain the minimum norm solution of a linear system, and its finite global convergence under the Armijo step size rule was established. The solution and convergence of the method were analyzed, and in support of the theory several test examples were solved and the numerical results tabulated. The computational results on large-scale test linear systems demonstrate the effectiveness of ssGNewton compared with the MOSEK Optimization Software for convex quadratic problems (cqpMosek). If x*_ssGNewton is the optimal solution found by ssGNewton and x*_cqpMosek the optimal solution found by cqpMosek, then for all solved systems we have ‖x*_ssGNewton‖ < ‖x*_cqpMosek‖. For example, in the problem "80bau3b", ‖x*_ssGNewton‖ = 4.129965300963039e+003 and ‖x*_cqpMosek‖ = 4.129965305352692e+003; in the problem "osa14", ‖x*_ssGNewton‖ = 1.195823216292518e+005 and ‖x*_cqpMosek‖ = 1.195823217807026e+005.