Analysis of complexity of primal-dual interior-point algorithms based on a new kernel function for linear optimization

Kernel functions play an important role in defining new search
directions for primal-dual interior-point algorithms. In this
paper, a new kernel function whose barrier term is of integral
type is proposed. We study the properties of the new kernel
function and give a primal-dual interior-point algorithm for
linear optimization based on it. The polynomial complexity of
the algorithm is analyzed, and iteration bounds for both
large-update and small-update methods are obtained. The
iteration bound for the small-update method matches the best
known complexity bound.


1.
Introduction. The interior-point algorithm was introduced by Karmarkar [6] in 1984 to solve linear optimization problems. Since then, linear optimization has been an active area of research. In 1986, Renegar [9] proposed a path-following interior-point algorithm. Roos [10] then proved that Karmarkar's interior-point algorithm is in essence a classical logarithmic barrier function method. Nesterov [7] presented a polynomial-time primal-dual interior-point algorithm in 1994. In 2002, Peng [8] proposed a class of polynomial primal-dual interior-point algorithms, and subsequently several primal-dual interior-point algorithms for linear optimization were proposed by Bai et al.; readers may refer to [1, 2, 3, 4, 5]. In these algorithms, kernel functions play an important role in determining the search directions. In order to analyze the complexity bound of the algorithm, Bai et al. introduced the notion of an eligible kernel function, which is required to satisfy several desirable properties, and gave a scheme by which iteration bounds for both small- and large-update methods can be obtained for any eligible kernel function [1].
In [1], Bai et al. introduced a new kernel function and showed that the iteration bounds for large-update and small-update methods are O(√n (log n)^2 log(n/ε)) and O(√n log(n/ε)), respectively. In [2], Bai et al. introduced a parameter into this kernel function, and the iteration bound was improved by an appropriate choice of the parameter. Inspired by that kernel function, we put forward a new kernel function. Based on it, a primal-dual interior-point algorithm for linear optimization is proposed. We then analyze the iteration bounds for both large-update and small-update methods using the scheme proposed in [1]. The iteration bound for the small-update method is the best known bound. Although the iteration bound for the large-update method is worse than the best known bound, if we introduce a parameter into the new kernel function, as in [2], the best known bound can be obtained by a suitable choice of the parameter. For convenience, we study only the parameter-free form of the new kernel function. The remainder of the paper is organized as follows. In Section 2, we introduce the central path, the search directions, and the generic primal-dual interior-point algorithm for linear optimization. In Section 3, we introduce kernel functions and their properties, and verify that the new kernel function is eligible. In Section 4, we analyze the iteration bound of the corresponding algorithm. Finally, a short conclusion is given in Section 5.
2.1. The central path. We consider the following pair of primal and dual linear optimization problems:

(LP) min{c^T x : Ax = b, x ≥ 0},

(LD) max{b^T y : A^T y + s = c, s ≥ 0}, (4)

where A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n. Finding optimal solutions of (LP) and (LD) is equivalent to solving the following system of optimality conditions:

Ax = b, x ≥ 0,
A^T y + s = c, s ≥ 0, (5)
xs = 0,

where xs denotes the componentwise product of the vectors x and s. The third equation in (5) is the so-called complementarity condition for (LP) and (LD). In order to ensure that the system (5) has a unique solution, we assume that the constraint matrix A has full row rank, i.e. rank(A) = m, and that both (LP) and (LD) satisfy the interior-point condition (IPC), i.e. there exists a solution (x^0, y^0, s^0) such that

Ax^0 = b, x^0 > 0, A^T y^0 + s^0 = c, s^0 > 0.

It is well known that the IPC can be assumed without loss of generality; in fact, we may assume that x^0 = s^0 = e. The basic idea of primal-dual interior-point methods (IPMs) is to replace the complementarity condition in (5) by the parameterized equation xs = µe, where µ is a positive number and e denotes the all-one vector. Therefore, we consider the following system:

Ax = b, x > 0,
A^T y + s = c, s > 0, (7)
xs = µe.

Under the assumptions of the IPC and rank(A) = m, the system (7) has a unique solution for any µ > 0, denoted by (x(µ), y(µ), s(µ)). We call x(µ) the µ-center of (LP) and (y(µ), s(µ)) the µ-center of (LD). As µ runs through all positive real numbers, (x(µ), y(µ), s(µ)) traces out a homotopy curve, called the central path of (LP) and (LD). As µ tends to zero, the limit point of the central path exists, and since the limit point satisfies the complementarity condition, it yields optimal solutions of (LP) and (LD).
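As a concrete numerical companion to this definition, the µ-center defined by system (7) can be approximated by applying Newton's method directly to the equations Ax = b, A^T y + s = c, xs = µe. The sketch below is not the kernel-function-based algorithm analyzed in this paper; it is only an illustrative routine, assuming a strictly positive starting point (the names and the damping rule are ours):

```python
import numpy as np

def mu_center(A, b, c, mu, x, y, s, tol=1e-10, max_iter=50):
    """Approximate the mu-center (x(mu), y(mu), s(mu)) of system (7) by
    Newton's method. Illustrative sketch; x, s must be strictly positive."""
    n = A.shape[1]
    e = np.ones(n)
    for _ in range(max_iter):
        r1 = b - A @ x                # primal residual  b - Ax
        r2 = c - A.T @ y - s          # dual residual    c - A^T y - s
        r3 = mu * e - x * s           # centering residual  mu*e - xs
        if max(np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)) < tol:
            break
        # Block elimination of the Newton system leads to the normal equations
        # (A S^{-1} X A^T) dy = r1 + A (S^{-1} X r2 - S^{-1} r3).
        d = x / s
        M = A @ (d[:, None] * A.T)
        rhs = r1 + A @ (d * r2 - r3 / s)
        dy = np.linalg.solve(M, rhs)
        ds = r2 - A.T @ dy
        dx = (r3 - x * ds) / s
        # Damped step keeping (x, s) strictly positive.
        alpha = 1.0
        while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
            alpha *= 0.5
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s
```

For instance, for the toy problem min{x_1 + x_2 : x_1 + x_2 = 2, x ≥ 0}, one can check by hand that the µ-center is x(µ) = (1, 1), s(µ) = (µ, µ), y(µ) = 1 − µ, which the routine above recovers.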
It is obvious that v = e if and only if x and (y, s) coincide with the µ-centers of (LP) and (LD), respectively, where v denotes the scaled vector with components v_i = sqrt(x_i s_i / µ). For any kernel function ψ(t), the corresponding primal-dual barrier function Φ(x, s, µ) is defined by

Φ(x, s, µ) := Ψ(v) = Σ_{i=1}^{n} ψ(v_i).

Note that ψ(t) is a strictly convex, differentiable, nonnegative function that attains its minimum at t = 1, with ψ(1) = 0. Thus Φ(x, s, µ) is nonnegative, and zero if and only if v = e. Therefore, the value of Φ(x, s, µ) can be considered as a measure of the closeness of x and (y, s) to the µ-centers of (LP) and (LD). Let

Ā := (1/µ) A diag(x/v).

By solving the following system, we can obtain d_x, ∆y, and d_s:

Ā d_x = 0,
Ā^T ∆y + d_s = 0,
d_x + d_s = −∇Ψ(v).

We define ∆x = x d_x / v and ∆s = s d_s / v, where the products and quotients are taken componentwise.
Then ∆x, ∆y, and ∆s are the search directions in the x-space, y-space, and s-space, respectively. Taking a step along these search directions, with a step size α ∈ (0, 1) defined by some line-search rule, we construct a new triple (x_+, y_+, s_+) according to

x_+ = x + α∆x, y_+ = y + α∆y, s_+ = s + α∆s. (11)
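The computation of these directions can be sketched in code. Since the paper's new kernel function is only introduced in Section 3, the sketch below uses the classical logarithmic kernel ψ(t) = (t^2 − 1)/2 − ln t, whose gradient term is ψ'(v) = v − 1/v, purely as a stand-in; any kernel gradient can be substituted. The scaled matrix is taken as Ā = (1/µ) A diag(x/v):

```python
import numpy as np

def search_directions(A, x, s, mu, grad_psi):
    """Solve the scaled Newton system
        Abar d_x = 0,  Abar^T dy + d_s = 0,  d_x + d_s = -grad Psi(v),
    with Abar = (1/mu) A diag(x/v), and return (Delta x, Delta y, Delta s).
    Illustrative sketch; grad_psi(v) applies psi' componentwise."""
    v = np.sqrt(x * s / mu)
    Abar = A * (x / v) / mu           # column-wise scaling: (1/mu) A diag(x/v)
    g = grad_psi(v)                   # grad Psi(v), componentwise psi'(v_i)
    # Eliminating d_s and d_x yields (Abar Abar^T) dy = Abar g.
    dy = np.linalg.solve(Abar @ Abar.T, Abar @ g)
    ds = -Abar.T @ dy
    dx = -g - ds
    return x * dx / v, dy, s * ds / v  # unscaled (Delta x, Delta y, Delta s)

# Stand-in gradient: classical logarithmic kernel psi(t) = (t^2-1)/2 - ln t.
grad_log_kernel = lambda v: v - 1.0 / v
```

At the µ-center v = e the gradient vanishes and all three directions are zero; away from it, ∆x stays in the null space of A, so primal feasibility is preserved along the step.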
2.3. Generic primal-dual interior-point algorithm for LP . We assume that a proximity parameter τ and a barrier update parameter θ are given with τ > 0 and 0 < θ < 1. If Ψ(v) ≤ τ then we start a new outer iteration by performing a µ-update, otherwise we enter an inner iteration by computing the search directions at the current iterates with respect to the current value of µ and apply (11) to get new iterates. The generic form of the algorithm is shown in Figure 1.
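A minimal runnable sketch of this generic algorithm, with the classical logarithmic kernel standing in for the new kernel and a simple backtracking rule in place of the default step size of the analysis, might look as follows (a strictly feasible starting point is assumed; all names are ours):

```python
import numpy as np

def psi(t):
    # Classical logarithmic kernel (stand-in; the paper uses its new kernel).
    return (t**2 - 1) / 2 - np.log(t)

def generic_ipm(A, b, c, x, y, s, mu, tau=2.0, theta=0.5, eps=1e-8):
    """Generic primal-dual interior-point method (sketch of Figure 1).
    Assumes a strictly feasible start: Ax = b, A^T y + s = c, x, s > 0."""
    n = A.shape[1]
    while n * mu >= eps:
        mu *= 1 - theta                          # outer iteration: mu-update
        v = np.sqrt(x * s / mu)
        while psi(v).sum() > tau:                # inner iterations
            g = v - 1 / v                        # psi'(v_i) componentwise
            Abar = A * (x / v) / mu              # scaled matrix (1/mu) A diag(x/v)
            dy = np.linalg.solve(Abar @ Abar.T, Abar @ g)
            ds = -Abar.T @ dy
            dx = -g - ds
            Dx, Ds = x * dx / v, s * ds / v      # unscaled directions
            # Backtracking: keep (x, s) > 0 and require Psi(v) to decrease.
            def trial(a):
                xn, sn = x + a * Dx, s + a * Ds
                if np.any(xn <= 0) or np.any(sn <= 0):
                    return np.inf
                return psi(np.sqrt(xn * sn / mu)).sum()
            alpha, current = 1.0, psi(v).sum()
            while trial(alpha) >= current:
                alpha *= 0.5
            x, y, s = x + alpha * Dx, y + alpha * dy, s + alpha * Ds
            v = np.sqrt(x * s / mu)
    return x, y, s
```

On the toy problem min{x_1 : x_1 + x_2 = 2, x ≥ 0} this drives the duality gap x^T s below the accuracy threshold and x toward the optimal vertex (0, 2).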
Definition 3.1. ψ : R_++ → R_+ is called a kernel function if it is twice differentiable and satisfies the following conditions:
(i) ψ(1) = ψ'(1) = 0, (12)
ψ''(t) > 0, for all t > 0; (13)
(ii) lim_{t→0+} ψ(t) = lim_{t→∞} ψ(t) = +∞, (14)
where R_++ and R_+ are the set of positive real numbers and the set of nonnegative real numbers, respectively.
Definition 3.2. Ψ : R^n_++ → R_+ is called a barrier function if it is defined by Ψ(v) = Σ_{i=1}^{n} ψ(v_i), where ψ : R_++ → R_+ is three times differentiable. The kernel function ψ(t) is called eligible if it satisfies the following conditions:

tψ''(t) + ψ'(t) > 0, for t < 1;
ψ'''(t) < 0, for t > 0;
2ψ''(t)^2 − ψ'(t)ψ'''(t) > 0, for t < 1;
ψ''(t)ψ'(βt) − βψ'(t)ψ''(βt) > 0, for t > 1, β > 1.

For any eligible kernel function, iteration bounds for both small- and large-update methods can be obtained by using the scheme presented in [1]. The scheme is given in Figure 2.
Step 2. Calculate the decrease of Ψ(v) in terms of δ = δ(v) for the default step size ᾱ.
Step 3. Solve the equation ψ(t) = s to obtain ϱ(s), the inverse function of ψ(t) for t ≥ 1. If the equation is hard to solve, derive lower and upper bounds for ϱ(s).
Step 4. Derive a lower bound for δ = δ(v) in terms of Ψ(v).
Step 5. Using the results of Steps 3 and 4, find positive constants κ and γ, with γ ∈ (0, 1], such that f(ᾱ) ≤ −κ Ψ(v)^{1−γ}.
Step 6. Calculate the uniform upper bound Ψ_0 for Ψ(v).
Step 7. Derive an upper bound for the total number of iterations from (Ψ_0^γ / (θκγ)) log(n/ε).
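Step 7 of the scheme is simple arithmetic once Ψ_0, θ, κ, and γ are known. The snippet below evaluates the bound Ψ_0^γ/(θκγ) · log(n/ε) for illustrative parameter values; the actual constants for the new kernel function are derived in Section 4:

```python
import math

def iteration_bound(Psi0, theta, kappa, gamma, n, eps):
    """Step 7 of the scheme: the upper bound
        Psi0^gamma / (theta * kappa * gamma) * log(n / eps)
    on the total number of inner iterations (illustrative values only)."""
    return Psi0**gamma / (theta * kappa * gamma) * math.log(n / eps)
```

For a small-update method one takes θ = Θ(1/√n) and Ψ_0 = O(1), so the bound scales like √n log(n/ε); for a large-update method θ = Θ(1) and Ψ_0 = O(n), and the exponent γ of the kernel drives the resulting n-dependence.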

Properties of the new kernel function.
A new kernel function ψ(t) is given as follows. The first-, second-, and third-order derivatives of ψ(t) are obtained by direct computation. It is easy to verify that ψ(t) satisfies the definition of a kernel function, that is, (12), (13), and (14) hold for the new kernel function.
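Verifications of this kind can also be spot-checked numerically. The sketch below uses the classical logarithmic kernel ψ(t) = (t^2 − 1)/2 − ln t as a stand-in for the new kernel and checks, on a grid, the eligibility-type conditions of the Bai et al. framework [1] (e.g. ψ''' < 0, and tψ'' + ψ' > 0 for t < 1); the derivative lambdas are the only kernel-specific part and can be swapped out:

```python
import numpy as np

# Derivatives of the classical logarithmic kernel psi(t) = (t^2 - 1)/2 - ln t
# (a stand-in example; the paper's new kernel would be substituted here).
d1 = lambda t: t - 1.0 / t        # psi'(t)
d2 = lambda t: 1.0 + 1.0 / t**2   # psi''(t)
d3 = lambda t: -2.0 / t**3        # psi'''(t)

def is_eligible(d1, d2, d3, grid=np.linspace(0.05, 5.0, 500)):
    """Numerically spot-check the four eligibility conditions on a grid.
    Not a proof, only a quick sanity check of a candidate kernel."""
    lo, hi = grid[grid < 1], grid[grid > 1]
    betas = np.linspace(1.01, 3.0, 50)
    cond_a = np.all(lo * d2(lo) + d1(lo) > 0)                    # t < 1
    cond_b = np.all(d3(grid) < 0)                                # all t > 0
    cond_c = np.all(2 * d2(lo)**2 - d1(lo) * d3(lo) > 0)         # t < 1
    cond_d = all(np.all(d2(t) * d1(b * t) - b * d1(t) * d2(b * t) > 0)
                 for t in hi for b in betas)                     # t > 1, beta > 1
    return bool(cond_a and cond_b and cond_c and cond_d)
```

A kernel that violates one of the conditions, such as ψ(t) = (t − 1)^2 with ψ''' ≡ 0, is correctly rejected by the same check.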

4.
Analysis of the algorithm. In this section, we derive the iteration bounds for both large- and small-update methods of the primal-dual interior-point algorithm based on the new kernel function, following the scheme in Figure 2.
At the start of an inner iteration we have Ψ(v) ≥ τ. We assume that τ ≥ 1 and that τ is large enough to ensure that δ(v) ≥ 1; in fact, we assume throughout that the threshold parameter τ in the algorithm satisfies τ ≥ 2.
This lower bound for ϱ(s) is given in (24).
Step 2. Letting f(α) denote the decrease of Ψ(v) during an inner iteration, we compute an upper bound for f(ᾱ) in terms of δ = δ(v), where ᾱ is the default step size. This follows by using (21) and (24) together with the fact that ψ''(t) is monotonically decreasing. Step 3. Solve the equation ψ(t) = s to obtain ϱ(s) for t ≥ 1. If the equation is hard to solve, derive lower and upper bounds for ϱ(s).
These bounds are given in (23).
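In the scheme of [1], the default step size is ᾱ = 1/ψ''(ρ(2δ)), where ρ : [0, ∞) → (0, 1] is the inverse of t ↦ −ψ'(t)/2 (distinct from the inverse ϱ of ψ used in Step 3) and δ(v) = ||∇Ψ(v)||/2. The quantities involved can be computed numerically, again with the classical logarithmic kernel as a stand-in for the new kernel (ρ is found here by bisection):

```python
import numpy as np

# Stand-in kernel: psi(t) = (t^2 - 1)/2 - ln t
# (the paper's new kernel would be substituted here).
d1 = lambda t: t - 1.0 / t        # psi'(t)
d2 = lambda t: 1.0 + 1.0 / t**2   # psi''(t)

def delta(v, d1):
    """Proximity measure delta(v) = ||grad Psi(v)|| / 2."""
    return np.linalg.norm(d1(v)) / 2.0

def rho(s, d1, lo=1e-12, hi=1.0):
    """Inverse of t -> -psi'(t)/2 on (0, 1], computed by bisection (sketch).
    The map is decreasing from +infinity to 0 on this interval."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if -d1(mid) / 2.0 > s:
            lo = mid              # value still too large: move right
        else:
            hi = mid
    return (lo + hi) / 2.0

def default_step(v, d1, d2):
    """Default step size alpha_bar = 1 / psi''(rho(2 * delta(v)))."""
    return 1.0 / d2(rho(2.0 * delta(v, d1), d1))
```

At the µ-center v = e the proximity δ vanishes, ρ(0) = 1, and the default step reduces to 1/ψ''(1), which equals 1/2 for this stand-in kernel.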
Next, we apply τ = O(1) and θ = Θ(1/√n) to calculate an iteration bound for the small-update method.
By using Taylor's theorem and (12), we can bound ψ(t) near t = 1. From (35) and (30), we obtain an upper bound on Ψ_0. Since ψ''(1) = 3 and 1 − √(1 − θ) ≤ θ, one obtains an explicit upper bound for Ψ_0. Using this upper bound for Ψ_0 in Step 7, we get the corresponding iteration bound. Since τ = O(1) and θ = Θ(1/√n), we have Ψ_0 = O(1), and the iteration bound for small-update methods is O(√n log(n/ε)). This is the best known bound for small-update methods.

5.
Conclusion. This paper presents a new kernel function that satisfies the definition of an eligible kernel function; this is the main contribution of the paper. Based on the new kernel function, we present a primal-dual interior-point method for linear optimization problems. Following the scheme introduced in [1], we derive the iteration bounds for both large- and small-update methods of the proposed algorithm. The iteration bound for the small-update method is the best known bound, while the iteration bound for the large-update method does not meet the best known bound. If we introduce a parameter into the new kernel function, the best known iteration bound for the large-update method can be obtained by choosing an appropriate parameter value. This will be the subject of future research.