MATRIX GROUP MONOTONICITY USING A DOMINANCE NOTION

Abstract. A dominance rule for group invertible matrices using proper splittings is proposed, and this notion is used to show that a matrix is group monotone. Some possible applications are then discussed.

1. Introduction. Matrix monotonicity plays an important role in many areas, such as finite difference methods for partial differential equations, input-output production and growth models in economics, Markov processes in probability and statistics, and linear complementarity problems in operations research, to name a few. A real n × n matrix A is called monotone if Ax ≥ 0 ⇒ x ≥ 0. Here, x ≥ 0 means x_i ≥ 0 for i = 1, 2, . . . , n. This notion was introduced by Collatz [7], who showed that A is monotone (also called inverse positive) if and only if A^{-1} exists and A^{-1} ≥ 0, where the latter means that all the entries of A^{-1} are nonnegative. Several characterizations and generalizations of monotone matrices are available in the literature. Motivated by Collatz's result, Mangasarian [9] extended the concept of monotone matrices to the rectangular case, and proved that a rectangular matrix is monotone if and only if it has a nonnegative left inverse. Berman and Plemmons extended the notion of monotonicity in several directions using generalized inverses (see, for instance, [3], [4], and the book [5]); in [4] they studied extensions of monotonicity to singular square matrices. This notion was further generalized by Pye [11]. The Drazin inverse has various applications in finite Markov chains, singular differential and difference equations, cryptography, and iterative methods in numerical analysis.
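Collatz's characterization lends itself to a direct numerical check. The sketch below verifies monotonicity of a small matrix via nonnegativity of its inverse; the matrix is an illustrative choice, not one taken from the text.

```python
import numpy as np

# Collatz: a nonsingular real matrix A is monotone
# (Ax >= 0 implies x >= 0) iff A^{-1} exists and A^{-1} >= 0.
# Illustrative 2x2 matrix, chosen only for this sketch.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

A_inv = np.linalg.inv(A)            # equals [[2/3, 1/3], [1/3, 2/3]]
is_monotone = bool(np.all(A_inv >= 0))
print(is_monotone)                  # True
```

Note that this A is also strictly diagonally dominant with nonpositive off-diagonal entries, which foreshadows the diagonal dominance theme below.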
Szidarovszky and Okuguchi [12] extended the notion of diagonal dominance for square matrices and obtained a sufficient condition for a matrix to have a nonnegative inverse. They then applied it to characterize M-matrices and block M-matrices.
The objective of this work is to extend the diagonal dominance notion to a certain extent and then apply it to a certain class of singular square matrices to show that these matrices are group monotone (see the next section for the definition). Our approach has been inspired and guided mainly by the recent work of Mishra and Sivakumar [10], who generalized the above fact to rectangular matrices and obtained a similar result using the Moore-Penrose inverse of A.

DEBASISHA MISHRA
The organization of this paper is as follows. In Section 2, we fix our notation and discuss preliminary notions and results that will be used in the sequel. Section 3 presents a few results for the Drazin inverse using index-proper splittings, as well as the main result using proper splittings. Some possible applications are then discussed.

2. Preliminaries. Let R^n denote the n-dimensional real Euclidean space. Throughout, all our matrices are real square matrices of order n. We denote the range space, the null space, and the transpose of A by R(A), N(A), and A^T, respectively. Let X, Y be complementary subspaces of R^n, and let P_{X,Y} denote the projection of R^n onto X along Y. Then P_{X,Y} A = A if and only if R(A) ⊆ X, and A P_{X,Y} = A if and only if Y ⊆ N(A). The index of A, denoted ind A, is the smallest nonnegative integer k such that rank(A^k) = rank(A^{k+1}). The group inverse of a matrix A ∈ R^{n×n} (if it exists), denoted by A^#, is the unique matrix X satisfying A = AXA, X = XAX, and AX = XA; it exists if and only if ind A = 1. Equivalently, A^# is the unique matrix X satisfying XAx = x for all x ∈ R(A) and Xy = 0 for all y ∈ N(A). The Drazin inverse of a matrix A of index k, denoted by A^D, is the unique matrix X satisfying A^{k+1} X = A^k, X = XAX, and AX = XA. Equivalently, A^D is the unique matrix X satisfying XAx = x for all x ∈ R(A^k) and Xy = 0 for all y ∈ N(A^k). The Drazin inverse and its connection to Krylov subspace methods for solving singular linear systems of equations can be found in [16].
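For a concrete index-1 matrix, the group inverse can be computed numerically via the known representation A^D = A^k (A^{2k+1})^+ A^k (with k = 1 and ^+ the Moore-Penrose inverse), and the three defining equations can then be verified. The matrix below is an illustrative choice for this sketch, not one from the text.

```python
import numpy as np

# Illustrative singular matrix of index 1 (rank A = rank A^2 = 1).
A = np.array([[2.0, 2.0],
              [0.0, 0.0]])

# A^# = A (A^3)^+ A: the k = 1 case of A^D = A^k (A^{2k+1})^+ A^k.
G = A @ np.linalg.pinv(A @ A @ A) @ A

# Verify the three defining equations of the group inverse.
assert np.allclose(A @ G @ A, A)      # A X A = A
assert np.allclose(G @ A @ G, G)      # X A X = X
assert np.allclose(A @ G, G @ A)      # A X = X A
print(G)                              # [[0.5, 0.5], [0., 0.]]
```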
We list some well-known properties of A^D [1] which will be used frequently in this paper: if A is of index k, then R(A^D) = R(A^k), N(A^D) = N(A^k), and AA^D = A^D A = P_{R(A^k), N(A^k)}. Let A be of index k. Then a splitting of the form A = U − V with R(U) = R(A^k) and N(U) = N(A^k) is called an index-proper splitting [6] of A. The authors of [6] and [8] have also shown that an index-proper splitting is more general than the index splitting introduced in [14]; more on index splittings can be found in [15]. When ind U = 1, then A = U − V is called an index splitting. When k = 1, both of the above splittings reduce to a proper splitting [2], i.e., a splitting A = U − V with R(U) = R(A) and N(U) = N(A).
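As a concrete instance of a proper splitting (the case k = 1), the Jacobi-type splitting below satisfies R(U) = R(A) and N(U) = N(A); the subspace equalities can be checked via the associated orthogonal projectors. The matrices are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative singular A (rank 2) and a Jacobi-type splitting A = U - V.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, 0.0],
              [0.0,  0.0, 0.0]])
U = np.diag([4.0, 4.0, 0.0])
V = U - A

# R(U) = R(A) iff the orthogonal projectors onto the ranges agree;
# N(U) = N(A) iff the projectors onto the row spaces (N(.)-orthogonal
# complements) agree.
P_range_A, P_range_U = A @ np.linalg.pinv(A), U @ np.linalg.pinv(U)
P_row_A, P_row_U = np.linalg.pinv(A) @ A, np.linalg.pinv(U) @ U

assert np.allclose(P_range_A, P_range_U)  # R(U) = R(A)
assert np.allclose(P_row_A, P_row_U)      # N(U) = N(A)
print("proper splitting verified")
```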
Let us revisit some earlier results from [8] and [14] to obtain an expression for A^D using an index-proper splitting and an index splitting, respectively. In particular, we will also obtain an expression for A^# using a proper splitting.
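The expression alluded to is presumably the group-inverse analogue of the Berman-Plemmons formula A^† = (I − U^†V)^{-1}U^† for proper splittings, namely A^# = (I − U^#V)^{-1}U^#; since the theorem statement is not reproduced above, this form is an assumption here, checked only numerically on one illustrative proper splitting.

```python
import numpy as np

# Illustrative proper splitting A = U - V with R(U) = R(A), N(U) = N(A).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, 0.0],
              [0.0,  0.0, 0.0]])
U = np.diag([4.0, 4.0, 0.0])
V = U - A

# U is symmetric, so its group inverse equals its Moore-Penrose inverse.
U_sharp = np.linalg.pinv(U)

# Conjectured expression A^# = (I - U^# V)^{-1} U^# (see lead-in).
G = np.linalg.inv(np.eye(3) - U_sharp @ V) @ U_sharp

# G indeed satisfies the defining equations of the group inverse of A.
assert np.allclose(A @ G @ A, A)
assert np.allclose(G @ A @ G, G)
assert np.allclose(A @ G, G @ A)
print(G)   # [[4/15, 1/15, 0], [1/15, 4/15, 0], [0, 0, 0]]
```

Note that here A^# ≥ 0 entrywise, so this particular A is group monotone.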
When the index of U is 1, an index-proper splitting coincides with an index splitting. Therefore R(U) and N(U) are complementary subspaces, so U^# exists, and the above result coincides with Theorem 3.1 of [14].
When the indices of A and U are both 1, A = U − V is a proper splitting of A. We then have the following corollary.
However, the expression does not carry over to Corollary 2.2; the reason is explained in [8]. The following result will also be used in the next section.
3. Main Results. The following result will be used to prove our first main result. The proof is the same as that of (e) ⇒ (a) in Theorem 4.14 of [8]; we include it here for completeness.
When the index of U is 1, we have the following corollary for index splittings.
When both A and U have index 1, Lemma 3.1 and the above corollary yield the following corollary.
The converse follows by setting S = A^D. We now proceed to propose a dominance rule for the nonnegativity of the group inverse of a matrix. Let us first recall the notion of regularity for a subset of matrices, introduced in [12]. Let ϕ : R^{n×n} → R^{n×n} be a matrix-to-matrix mapping and E : R^{n×n} → R^m a matrix-to-vector mapping (where m ≥ 1 is a given integer). The following assumptions are made on ϕ and E: (a) ϕ is linear and idempotent. (b) Whenever α ∈ R with |α| ≥ 1, the condition

E(αA) ⪰ E(A)

holds, where ⪰ is a transitive relation on R^m. Using the above concept, we propose an extension of regularity to singular square matrices.
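Although Example 3.9 is not reproduced above, one natural instantiation in the spirit of [12] takes ϕ(A) = diag(A) (the diagonal part, which is linear and idempotent), E(A) the vector of row-wise diagonal dominance gaps |a_ii| − Σ_{j≠i} |a_ij|, and ⪰ the componentwise order. All of these choices are assumptions made for illustration; condition (b) then holds on matrices with nonnegative gaps, since E(αA) = |α| E(A).

```python
import numpy as np

def phi(A):
    # Diagonal part of A: linear and idempotent (phi(phi(A)) = phi(A)).
    return np.diag(np.diag(A))

def E(A):
    # Row-wise diagonal dominance gaps: |a_ii| - sum_{j != i} |a_ij|.
    return 2.0 * np.abs(np.diag(A)) - np.abs(A).sum(axis=1)

# Illustrative diagonally dominant matrix: E(A) >= 0 componentwise,
# hence E(alpha * A) = |alpha| E(A) >= E(A) whenever |alpha| >= 1.
A = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
assert np.allclose(phi(phi(A)), phi(A))   # assumption (a): idempotent
assert np.all(E(2.0 * A) >= E(A))         # assumption (b) at alpha = 2
print(E(A))                               # [2. 1.]
```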
Definition 3.11. Let J ⊆ R^{n×n} and let A be a matrix of index k. If (ϕ, E, ⪰) is regular with respect to J ⊆ R^{n×n}, then (ϕ, E, ⪰) is D-regular with respect to J; for k = 1, i.e., ind A = 1, it is called G-regular with respect to J. Next, we provide an example of a class of matrices J satisfying the above definition, which extends Example 3.9.
Similarly, an extension of Example 3.10 can be given. For matrices A ∈ J satisfying certain additional conditions, we have the following result. Henceforth, we shall assume that A + µϕ(A) ∈ J for any µ ≥ 1, whenever A ∈ J. We have E(ϕ(C)) = E(λϕ(A)) ⪰ E(A − ϕ(A)) = E(C − ϕ(C)). Hence C is (ϕ, E, ⪰)-dominant. Since C ∈ J, we have N(CA) = N(A). The scalar λ satisfies the equation. II. Define ϕ and ⪰ as in case I, and suppose that A is defined as in case I. If any one of the following conditions holds, then A^# ≥ 0.
We can prove these by choosing an appropriate E so that A becomes (ϕ, E, ⪰)-dominant; the other conditions of Theorem 3.15 then follow as in case I. Hence A^# ≥ 0.
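The conclusion A^# ≥ 0, i.e., group monotonicity, can be illustrated numerically. The matrix below is an illustrative index-1 choice, not one of the classes constructed in cases I and II above.

```python
import numpy as np

# Illustrative singular symmetric matrix of index 1.
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, 0.0],
              [0.0,  0.0, 0.0]])

# A is symmetric, so its group inverse equals its Moore-Penrose inverse.
G = np.linalg.pinv(A)

assert np.allclose(A @ G @ A, A) and np.allclose(G @ A @ G, G)
assert np.allclose(A @ G, G @ A)
print(bool(np.all(G >= -1e-12)))   # True: A^# >= 0, so A is group monotone
```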