AN INTERIOR POINT CONTINUOUS PATH-FOLLOWING TRAJECTORY FOR LINEAR PROGRAMMING

Abstract. In this paper, an interior point continuous path-following trajectory is proposed for linear programming. The descent direction in our continuous trajectory can be viewed as a combination of the affine scaling direction and the centering direction for linear programming. A key component in our interior point continuous path-following trajectory is an ordinary differential equation (ODE) system. Various properties, including the convergence in the limit of the solution of this ODE system, are analyzed and discussed in detail. Several illustrative examples are also provided to demonstrate the numerical behavior of this continuous trajectory.

1. Introduction. In this paper, we consider the following linear programming problem

min c^T x  s.t.  Ax = b,  x ≥ 0,     (1)

where x ∈ R^n, A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n. Our goal is to establish a continuous path starting from any interior feasible point and converging to an optimal solution of (1). Unlike iterative methods, the main idea in the continuous trajectory approach is to convert the optimization problem (1) into finding an equilibrium point of the following ordinary differential equation (ODE) system:

dx(t)/dt = f(x(t), t),     (2)

where the variable t is a scalar, I ⊂ R denotes the maximal interval of existence of t for the ODE system (2), and the vector function f : D = J × I ⊆ R^n × R → R^n is a mapping defined on the product of a convex set J of R^n and I. The vector function x(t) is a solution of the ODE system (2) on the interval I ⊂ R. In the literature, there has been some non-interior point research work on ODE systems for optimization problems; see [3, 11, 7, 8, 9, 10, 14, 13]. In addition, the neural network approach for optimization problems has been very active since the 1980s; see the review paper [22] for more references.
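To make the equilibrium-point idea concrete, here is a minimal sketch, not the paper's trajectory, in which a simple minimization problem is recast as integrating a gradient-flow ODE dx/dt = -∇E(x) forward in time; the equilibrium point of the flow is the minimizer. The objective E, the step size, and all numerical values below are our own illustrative assumptions.

```python
import numpy as np

# Recast minimization as finding an equilibrium of an ODE dx/dt = f(x).
# Here f(x) = -grad E(x) for E(x) = 0.5*||x - p||^2, whose unique
# equilibrium is the minimizer x* = p.  (Illustrative only; the paper's
# ODE (2) for linear programming is more involved.)
p = np.array([1.0, -2.0, 3.0])

def f(x):
    # right-hand side of the ODE: negative gradient of E
    return -(x - p)

x = np.zeros(3)          # starting point
dt = 0.1                 # forward-Euler step size
for _ in range(200):     # integrate the ODE forward in time
    x = x + dt * f(x)

# the trajectory approaches the equilibrium point x* = p
print(np.allclose(x, p, atol=1e-6))
```

The error contracts by a factor (1 - dt) per Euler step, so after 200 steps the iterate is numerically indistinguishable from the equilibrium.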
Since the introduction of the interior point method by Karmarkar [21] in 1984, a great deal of research has been carried out, both theoretically and numerically, on solving the problem (1). These methods can be grouped into the following three main categories: a) Affine scaling methods, originally due to Dikin [15], were further studied by Adler et al. [2], Barnes [4], Megiddo and Shub [24], Monma and Morton [25], Monteiro et al. [27], Vanderbei et al. [40], and Sun [35, 36]. b) Path-following methods, consisting of short-step and long-step methods, were studied by Gill et al. [17], Monteiro and Adler [26], Roos [31], Roos and Vial [32], Gonzaga [18, 20], and so on. c) Potential reduction methods were studied by Gonzaga [18, 19], Freund [16], Monteiro [29], and so on. Interior point continuous trajectories for linear programming have been studied by Bayer and Lagarias [5, 6], Megiddo and Shub [24], Adler and Monteiro [26], Witzgall et al. [41], Monteiro [29], Liao [23], and Qian and Liao [30]. In particular, [24] analyzed continuous trajectories for linear programming in the framework of the primal-dual complementarity relationship. [29] analyzed the continuous trajectories corresponding to a polynomial potential reduction (PR) algorithm and showed that all PR trajectories converge to the unique center of the optimal face of the given optimization problem. Liao [23] studied a dual affine scaling continuous trajectory for (1), while Qian and Liao [30] discussed the primal affine scaling continuous trajectory for convex programming. In this paper, we propose an interior point continuous trajectory which belongs to the continuous path-following approach. The descent direction in our continuous trajectory can be viewed as a combination of the affine scaling direction and the centering direction. Our continuous trajectory can be represented by an ODE system in the form of (2).
An in-depth and detailed investigation of the behavior of this continuous trajectory will be conducted; in particular, we will prove that this continuous trajectory converges to an optimal solution of the problem (1) in the limit from any starting interior feasible point.
The rest of this paper is organized as follows. In Section 2, some definitions and basic properties for linear programming are presented. In Section 3, our interior point continuous path-following trajectory, which can be represented by an ODE system, is introduced. Various properties for the solution of this ODE system are explored. In particular, we prove that for any starting interior feasible point, the solution of this ODE system will converge to an optimal solution of the problem (1) in the limit. Several illustrative examples are presented in Section 4. Finally, some concluding remarks are drawn in Section 5.
2. Preliminaries and definitions. In this section, some definitions and basic properties for the problem (1) will be presented.
Let X = diag(x) denote the diagonal matrix with the components of x on the diagonal, and let e ∈ R^n be the column vector of all ones. The primal linear programming problem and its dual are

(P)  min c^T x  s.t.  Ax = b,  x ≥ 0;
(D)  max b^T y  s.t.  A^T y + s = c,  s ≥ 0.

The feasible sets of problems (P) and (D) are denoted by P and D, respectively; the interiors of P and D are denoted by P+ and D+; and the affine hulls of P and D are denoted by P^a and D^a, respectively. 2.1. Duality results and the central path. Finding the optimal solutions of (P) and (D) is equivalent to solving the following system:

Xs = 0,  Ax = b,  A^T y + s = c,  x ≥ 0,  s ≥ 0.     (3)

In general, it is very hard to solve (3) because the first equation of (3) is nonlinear. By relaxing the right-hand side of (3a) to µe with µ > 0, we arrive at the following new system:

Xs = µe,  Ax = b,  A^T y + s = c,  x > 0,  s > 0.     (4)

From the implicit function theorem, it can be verified that for any µ > 0, system (4) has a unique solution (x, y, s). Let (x(µ), y(µ), s(µ)) denote the unique solution of (4). These solutions are called the µ-centers of (P) and (D), and the path formed by the set of all µ-centers is called the central path. In Section 3, we will propose a continuous path-following trajectory by applying Newton's method to this system.
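As an illustration of system (4), the following sketch computes a µ-center of a tiny LP by applying a damped Newton method to the perturbed equations Xs = µe, Ax = b, A^T y + s = c. The problem data, the starting point, and the fraction-to-boundary damping factor 0.9 are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Tiny LP data (illustrative assumptions): min c^T x s.t. Ax = b, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
m, n = A.shape
mu = 0.1                                  # barrier parameter in (4)
e = np.ones(n)

# strictly feasible starting point: Ax = b, x > 0 and A^T y + s = c, s > 0
x = np.array([0.5, 0.5]); y = np.zeros(m); s = c.copy()

for _ in range(50):
    # residual of the perturbed KKT system (4)
    r = np.concatenate([A @ x - b, A.T @ y + s - c, x * s - mu * e])
    # Jacobian of (4) with respect to (x, y, s)
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)      ],
    ])
    d = np.linalg.solve(J, -r)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # damped (fraction-to-boundary) step so that x and s stay positive
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * float(np.min(-v[neg] / dv[neg])))
    x += alpha * dx; y += alpha * dy; s += alpha * ds

# at the mu-center, Xs = mu*e and both feasibility conditions hold
print(np.allclose(x * s, mu * e), np.allclose(A @ x, b), np.allclose(A.T @ y + s, c))
```

The damping is what keeps the iterates in the interior; a full Newton step from this starting point would drive a component of s negative.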
3. The interior point continuous path-following trajectory. In this section, we propose an interior point continuous path-following trajectory for the problem (1).

LIMING SUN AND LI-ZHI LIAO
Stewart [34] and Todd [37] independently proved that this quantity is always finite.
Next, we derive our interior point continuous trajectory in more detail. Consider the following logarithmic barrier function φ(x): Let x ∈ P+ and µ > 0. The gradient of φ(x) and the Hessian matrix of φ(x) at x are denoted by g(x) and H(x), respectively; in particular We denote the Newton step at x by ∆x. The next iterate x + ∆x should remain positive and satisfy A(x + ∆x) = b; hence ∆x must satisfy A∆x = 0. We define the Newton step ∆x at x to be the solution of Solving this system yields ∆x: The projector P_AX is the projection onto the null space of AX and is given by Adopting the direction ∆x in (6), we can establish our interior point continuous path-following trajectory x(t) as the solution of where From the ODE system (8), it is easy to check that d(c^T x)/dt ≤ 0 for all t ≥ t0. Thus, from Assumption 3.1 and Proposition 1, if x(t) is a solution of (8), then x(t) is bounded. As a result, µ(x(t)) is also bounded.
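Since the projector P_AX appears throughout what follows, the sketch below builds it explicitly via the standard formula P_AX = I − XA^T(AX^2 A^T)^{-1}AX (the displayed formula is not reproduced above, so this identity is our assumption) and checks the defining properties of a projection onto the null space of AX. All data are randomly generated for illustration.

```python
import numpy as np

# Projector onto the null space of AX, via the standard formula
# P_AX = I - X A^T (A X^2 A^T)^{-1} A X (assumed here; data illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))         # a generic full-row-rank matrix
x = rng.random(5) + 0.5                 # an interior point x > 0
X = np.diag(x)

M = A @ X @ X @ A.T                     # A X^2 A^T, nonsingular for x > 0
P_AX = np.eye(5) - X @ A.T @ np.linalg.solve(M, A @ X)

c = rng.standard_normal(5)
v = P_AX @ c
# v lies in the null space of AX; P_AX is idempotent and symmetric
print(np.allclose(A @ X @ v, 0), np.allclose(P_AX @ P_AX, P_AX), np.allclose(P_AX, P_AX.T))
```

Idempotence and symmetry together certify that P_AX is an orthogonal projection, and AX(P_AX c) = 0 certifies that its range is contained in the null space of AX.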
Proof. Since the right-hand side of (8) is continuous in R^n_+ = {x ∈ R^n : x > 0}, the Cauchy-Peano theorem ensures that there exists a solution x(t) of the dynamical system (8) on its maximal existence interval [t0, α), which is continuous in t.
Similar to the proof of Lemma 3.2 in [23], it can be shown that x(t) > 0 ∀t ≥ t 0 .
Proof. From it follows that Lemmas 3.2 and 3.3 ensure that the solution x(t) of the system (8) always stays in P + . The following lemma shows that the right-hand side of (8) is Lipschitz continuous on any bounded positive set: Then P AX Xc, XP AX Xc, and µ(x)XP AX e are all Lipschitz continuous in D.
Proof. For any x ∈ D and i = 1, . . . , n, we have where e_i is the i-th column of the identity matrix I_n. Since Equations (10) and (11) imply Notice that Since ‖QX‖_2 ≤ χ̄ and ‖XQX‖ ≤ 1, we have From Lemma 4.1.9 in [12], we have for any x̄, x̂ ∈ D, that is, P_AX Xc is Lipschitz continuous in D.
For XP AX Xc, from (12), we have Following the same argument as that for (14), we have that XP AX Xc is also Lipschitz continuous in D.
For µ(x)XP_AX e, from the above discussions, we have that From Lemma 4.1.9 in [12], we have for any x̄, x̂ ∈ D, that is, XP_AX e is Lipschitz continuous in D. From the above arguments and the definition of f_µ(x), the lemma is proved. The result of Lemma 3.4 is important in ensuring the existence of the solution of the ODE (8) for all t ≥ t0. The result in the following Theorem 3.6 ensures that P_AX Xc → 0 as t → ∞. But first, the following lemma is needed. Lemma 3.5 ([33]). If the differentiable function f(t) has a finite limit as t → +∞ and ḟ is uniformly continuous, then ḟ → 0 as t → +∞. Now we show that the solution of (8) exists for all t ≥ t0 and P_AX Xc → 0 as t → ∞. Theorem 3.6. Let x(t) be the solution of (8). Then x(t) is well defined and unique on [t0, ∞), and lim_{t→∞} P_AX Xc = 0.
Proof. First, from Lemma 3.2, we get that the solution x(t) of (8) stays in P+ on its maximal existence interval [t0, α). Furthermore, we know For all cases of (15), we get Thus c^T x(t) is decreasing along the trajectory, so x(t) is bounded (the bound may depend on x0) for any t ≥ t0 by Assumption 3.1 and Proposition 1. Hence the existence of a unique solution x(t) of (8) on [t0, +∞) follows from Lemma 3.4, the Cauchy-Peano theorem, and the Picard-Lindelöf theorem.
In addition, from Assumption 3.1 and Proposition 1, we have that if x(t) is a solution of (8), then x(t) is bounded. From Lemma 3.4, P_AX Xc and XP_AX Xc are Lipschitz continuous. Thus, it is straightforward to verify that ‖P_AX Xc‖² and α(x)(e^T P_AX Xc)² are also Lipschitz continuous. Then, from (15), it is easy to see that d(c^T x)/dt is uniformly continuous in t. Thus Lemma 3.5 ensures lim_{t→∞} P_AX Xc = 0.
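The monotone decrease of c^T x established above can be observed numerically. Since the display defining f_µ in (8) is not reproduced here, the sketch below integrates only the affine scaling component dx/dt = −XP_AX Xc with forward Euler (a simplifying assumption on our part; the paper's right-hand side also contains a centering term). The quantity c^T x is nonincreasing along this flow as well, because d(c^T x)/dt = −‖P_AX Xc‖² ≤ 0.

```python
import numpy as np

# Illustrative LP: min c^T x s.t. x1 + x2 + x3 = 3, x >= 0 (optimum (3,0,0)).
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([3.0]); c = np.array([1.0, 2.0, 3.0])

def rhs(x):
    # affine scaling direction -X P_AX X c (assumed form; see lead-in)
    X = np.diag(x)
    M = A @ X @ X @ A.T
    P_AX = np.eye(3) - X @ A.T @ np.linalg.solve(M, A @ X)
    return -X @ P_AX @ X @ c

x = np.array([1.0, 1.0, 1.0])           # interior feasible: Ax = b, x > 0
values = [c @ x]
for _ in range(500):
    x = x + 0.01 * rhs(x)               # forward Euler step
    values.append(c @ x)

# c^T x is nonincreasing, feasibility Ax = b is preserved, and x stays positive
print(all(v2 <= v1 + 1e-12 for v1, v2 in zip(values, values[1:])),
      np.allclose(A @ x, b), x.min() > 0)
```

Each Euler step changes c^T x by exactly −0.01·‖P_AX Xc‖² at the current point, so the monotonicity holds step by step, not only for the continuous flow; the step also preserves Ax = b exactly because A·X·(P_AX v) = 0 for every v.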
From Theorem 3.6, we can see that the right-hand side of (8) converges to zero as t → +∞. The next lemma shows that the points at which the right-hand side of (8) equals zero, i.e., the points satisfying f_µ(x) = 0, lie on the primal central path. Proof. The following equivalences are straightforward.
The next lemma shows that the right-hand side of (8) does not vanish in finite time. Proof. Assume, by contradiction, that there exists a finite time, say t̄ > 0, such that f_µ(x(t̄)) = 0. By Lemma 3.7, we get that Xs = µ(x)e for some (y, s) ∈ D+. From Lemma 3.2, we have x(t) > 0 for all t ≥ t0.
Case 1: µ(x) = 0. Then, we have Let us define y' = (AX²A^T)⁻¹AX²c; then c = A^T y', which contradicts Assumption 3.1. Case 2: µ(x) > 0. From the definition of µ(x), we get e^T P_AX Xc < 0. Since f_µ(x) = 0, we obtain that Multiplying both sides of (17) from the left by e^T, it follows that Hence, the right-hand side of (18) is negative while the left-hand side is nonnegative, a contradiction.
From the above two cases, the lemma is proved.

3.2. Convergence analysis of (8). In this section, we will study and verify the global convergence of the solution trajectory x(t) of the ODE system (8). First, let us state some basic properties for an ODE system. Consider the following ODE system: where g : (α, β) → R is continuous. A solution of (19) is a differentiable path for all t in the open interval I ⊆ (α, β). The ODE system (19) is called autonomous if g(t) ≡ 1. In this case, (19) becomes: Proposition 2.
In Proposition 2, both α + and ω + can be extended to +∞. Next, we show that lim t→∞ x(t) exists, where x(t) is the solution of (8). First, let us introduce two important results.
Theorem 3.10 ([1]). Let E(·) be a real analytic function and let x(t) be a C¹ curve in R^n, with ẋ = dx(t)/dt denoting its time derivative. Assume that there exist a δ > 0 and a real τ such that for t > τ, x(t) satisfies the angle condition and a weak decrease condition Then, either lim_{t→∞} ‖x(t)‖ = ∞ or there exists x* ∈ R^n such that lim_{t→∞} x(t) = x*.
Our strong convergence result can be obtained by using the above two theorems.
Theorem 3.11. For any x 0 ∈ P + , let x(t) be the solution of (8). Then x(t) is convergent as t → +∞ and its limit x * (x 0 ) ∈ P.
Proof. We know that the solution of (8) exists and is unique from Lemmas 3.2 and 3.3 and Theorem 3.6. If f_µ(x)|_{t=t0} = 0, then by Lemma 3.8 we have P_AX Xc = 0. Similar to the proof of Case 1 in Lemma 3.8, this contradicts Assumption 3.1, so f_µ(x)|_{t=t0} ≠ 0. Again from Lemma 3.8, f_µ(x) ≠ 0 for any t ≥ t0. In Theorem 3.10, let us define where µ(x) is as in (8). So, we can write Now, we define From the numerator of (25), by the definition of α(x) and β(x) in (8), we get From the denominator of (25), we get Substituting the definition of α(x) in (8) into (27), we get Using (25), (26), (28), and Theorem 3.9, we obtain So, all conditions of Theorem 3.10 are satisfied. In addition, we know that the trajectory x(t) of (8) is bounded for all t ≥ t0; hence there exists a point x*(x0) ∈ P such that lim_{t→+∞} x(t) = x*(x0). This theorem shows that the solution x(t) of the ODE system (8) converges to a point x*(x0). Next, we prove that this x*(x0) is an optimal solution of (1).

3.3. Optimality. In this section, we will study in more detail the limit point property of the solution of (8). In addition, we will also introduce the dual variable and dual estimates. Without loss of generality, we will study an equivalent form of the ODE system (8). We consider a new ODE system: Here, the vector field associated with (29a) and (29b) is the new function whose domain of definition is the set P+ × R+, where R+ = {t ∈ R : t > 0}. We know that h(t) > 0 for all t if (x(t), h(t)) is the solution of (29).

Remark 1. (a) The function Ψ µ (x, h) does not vanish in the set
(b) If (x(t), h(t)) is the solution of (29), then the merit function defined as Ē(x, h) = E(x) = c^T x is a decreasing function of t. Proof. The proof is similar to that of Proposition 3.1 in [29]. Now, let us define the dual estimates associated with the solution of (29).
Definition 3.12. The dual estimates (y_µ(x), s_µ(x)) ∈ D^a at the point x ∈ P^a are defined as: Next, we study the dual solution curves associated with the solution of (29). Let (x, h) : (α−, α+) → P+ × R+ denote the solution of (29). For a given point (y0, s0) ∈ D^a, we define the dual solution curve through (y0, s0) to be the solution (y, s) : (α−, α+) → D^a of the following ODE system: whose domain of definition is the set D^a × (α−, α+).
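The display defining the dual estimates in Definition 3.12 is not reproduced above. A common choice, consistent with the expression y' = (AX²A^T)⁻¹AX²c appearing in the proof of Lemma 3.8, is the least-squares dual estimate sketched below; treat the formula as our assumption. The data are illustrative.

```python
import numpy as np

# Least-squares dual estimates at an interior point x (assumed form):
#   y(x) = (A X^2 A^T)^{-1} A X^2 c,   s(x) = c - A^T y(x).
A = np.array([[1.0, 1.0, 1.0]]); c = np.array([1.0, 2.0, 3.0])
x = np.array([1.5, 1.0, 0.5])           # any x > 0
X = np.diag(x)

y = np.linalg.solve(A @ X @ X @ A.T, A @ X @ X @ c)
s = c - A.T @ y                          # dual slack estimate

# (y, s) satisfies the dual equality constraints A^T y + s = c, so it lies
# in the dual affine hull D^a; moreover A X^2 s = 0 by construction.
print(np.allclose(A.T @ y + s, c), np.allclose(A @ X @ X @ s, 0))
```

The identity AX²s = 0 is what makes this the natural estimate: X²s is the residual of projecting X²c onto the range of X²A^T, so the scaled slack is orthogonal to the scaled constraint rows.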
By using the dual solution curves, we can study the limiting behavior of the solution of (29).
Therefore, the unique solution of this problem is equal to pe^{−t}. So, we get From (30a) and (31), we know that (x(t), y(t), s(t)) can be regarded as the optimal solutions of some convex optimization problem. The following corollary reveals the relationship between the solution of (29) and this convex optimization problem. Corollary 1. Let (x, h) : (α−, α+) → P+ × R+ be the solution of (29) and let its associated dual solution curve through (y0, s0) ∈ D^a be denoted by (y, s) : (α−, α+) → D^a. Then for all t ∈ (α−, α+), x(t) is the (unique) optimal solution of the problem Proof. Let us define ψ(x) = −∑_{j=1}^n ln x_j; then ψ(x) is strictly convex and differentiable. Thus, the Lagrangian function of (32) is defined as From the optimality condition of (32), we can write Let (x(t), y(t), s(t)) be the unique solution of (34). By simplifying (34a) and using Proposition 4, we have where x(t) ∈ P+ and (y(t), s(t)) ∈ D^a. Thus the result is proved. [t0, α+)} is bounded. By (31), it follows that This implies for all t ∈ (α−, α+). By Proposition 3, the sets {x(t) : t ∈ [t0, α+)} and {h(t) : t ∈ [t0, α+)} are bounded, so every term in the last formula is also bounded. Hence, there exists an M > 0 such that . Since x0 > 0 and s(t) − e^{−t}p > 0 for all t ∈ [t0, α+), we can see that s(t) − e^{−t}p is bounded and s(t) > 0 is bounded for all t ∈ [t0, α+).
(b) From (9) and (3.6), we can have that lim Hence, by Proposition 3, we have So, the results follow.
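Corollary 1 identifies x(t) with the minimizer of a barrier-type convex program. As a numerical sanity check under our own assumptions (the display (32) is not reproduced above, so we take the standard barrier problem min c^T x − h ∑_j ln x_j s.t. Ax = b), the sketch below minimizes this objective over a one-dimensional feasible set by grid search and verifies the first-order condition c − hX⁻¹e ∈ range(A^T) at the minimizer.

```python
import numpy as np

# Illustrative data (our assumptions): A = [1 1], b = 1, c = (1, 2), h = 0.1.
# Feasible points are x = (u, 1-u) with 0 < u < 1.
c1, c2, h = 1.0, 2.0, 0.1

u = np.linspace(1e-3, 1 - 1e-3, 100001)
obj = c1 * u + c2 * (1 - u) - h * (np.log(u) + np.log(1 - u))
ustar = u[np.argmin(obj)]

# first-order condition: c - h * X^{-1} e must lie in range(A^T) = span{(1,1)},
# i.e. both components of the gradient of the barrier objective agree
g1 = c1 - h / ustar
g2 = c2 - h / (1 - ustar)
print(abs(g1 - g2) < 1e-2)
```

For these data the condition reduces to the quadratic u² − 0.8u − 0.1 = 0, whose root in (0, 1) is (0.8 + √1.04)/2 ≈ 0.9099, and the grid minimizer lands on it to grid accuracy.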
The next theorem will reveal the relationship between the solution of (29) and the optimal solution of the problem (32).
Proof. Let (x(t), y(t), s(t)) be the solution of the following system: It is easy to check that the Jacobian matrix of the above system is nonsingular. From the implicit function theorem, there exists a unique solution (x(t), y(t), s(t)) of the above system; in addition, (x(t), y(t), s(t)) has continuous derivatives. By differentiating (37), we get After some straightforward manipulations and using the equations in (38) and (37), we can get that x(t) is a solution of (29).
Proof. From Theorem 3.11, letting ξ(t) be the solution of (8), we know that there exists a point x* such that lim_{t→∞} ξ(t) = x*.
By (3), we get that x * is an optimal solution of the problem (1).
From Theorem 3.14 and Proposition 3, we can obtain the following result.

Corollary 2. For any x0 ∈ P+, let x(t) be the solution of (8). Then lim_{t→+∞} x(t) = x*, where x* is an optimal solution of the problem (1).
This corollary shows that the continuous path formed from any initial point x0 ∈ P+ converges to an optimal solution of the problem (1).

4. Numerical experiments. In this section, we illustrate some numerical results obtained by using our proposed continuous path-following trajectory. We simulate several small examples to verify the effectiveness of our trajectory and show that all these trajectories approach the optimal solutions in the limit. All our experiments are carried out on a Dell computer with a Pentium(R) 3.40GHz CPU and 2GB RAM on the MATLAB platform.
The optimal solution of this problem is x* = (20, 20, 0, 0). Two feasible starting points, x0 = (20, 10, 10, 10) and x̄0 = (15, 15, 10, 15), are used in the test. We use our continuous path-following trajectory to solve this problem and provide the following figures to illustrate the convergence of our trajectory. From Fig. 1 and Fig. 2, we can see that the trajectories x(t) converge to the optimal solution x* in the limit.
Figs. 3 and 4 illustrate the transient behaviors of the solution x(t) of (8) with the two different starting points x0 and x̄0, respectively. The two figures clearly show that the trajectories x(t) converge to some optimal solutions of Example 4.2.

5. Conclusion. In this paper, an interior point continuous path-following trajectory is proposed for linear programming. Strong convergence of our continuous trajectory from any starting interior feasible point is proved. In addition, the limit of this continuous trajectory is shown to be an optimal solution of the original problem. Our preliminary numerical results clearly show the convergence property of our continuous path-following trajectory.