Prox-dual regularization algorithm for generalized fractional programs



(Communicated by Vladimir Shikhman)
Abstract. Prox-regularization algorithms for solving generalized fractional programs (GFP) have already been considered by several authors. Since the standard dual of a generalized fractional program does not, in general, have the form of a GFP, these approaches cannot be applied directly to the dual problem. In this paper, we propose a primal-dual algorithm for solving convex generalized fractional programs. Specifically, we apply a prox-regularization method to the dual problem, which generates a sequence of auxiliary dual problems with unique solutions. In this way we avoid the numerical difficulties that can occur if the fractional program does not have a unique solution. Our algorithm is based on Dinkelbach-type algorithms for generalized fractional programming, but uses a regularized parametric auxiliary problem. We then establish the convergence and the rate of convergence of this new algorithm.
1. Introduction. In this paper, we are interested in generalized fractional programs of the form

(P)    λ* = inf_{x∈X} max_{i∈I} f_i(x) / g_i(x),

where I = {1, . . . , m}, m ≥ 1, and X is a nonempty subset of R^n. The functions f_i and g_i are defined and continuous on an open subset K containing X, and satisfy g_i(x) > 0 for all x ∈ X and i ∈ I. Problems of this type arise in management applications of goal programming, in mathematical economics and numerical analysis, and in telecommunications, information theory and computer science. Applications of single-ratio and multi-ratio fractional programming can be found in [20], [16], [12].
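As a simple illustration (a toy instance of ours, not taken from the paper), take n = 1, X = [1, 2], m = 2, f_1(x) = x², f_2(x) = 1 and g_1(x) = g_2(x) = x. Then

λ* = inf_{x∈[1,2]} max { x²/x, 1/x } = inf_{x∈[1,2]} max { x, 1/x } = 1,

attained at x = 1.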
The Dinkelbach-type algorithms DT1 and DT2 ([7], [8]) generalize Dinkelbach's algorithm [10] to the case m > 1. In these algorithms, a subproblem must be solved at each iteration, so they are worthwhile when this subproblem is easier to solve than the original problem (P). The algorithms DT1 and DT2 are based on the same principle, but DT2 is generally faster than DT1.
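For orientation, the Dinkelbach-type iteration can be sketched as follows. This is a minimal illustration, not the implementation of [7] or [8]; the oracle solve_parametric for the subproblem min_{x∈X} max_{i∈I} (f_i(x) − λ g_i(x)) and all other names are assumptions of the sketch.

```python
# Minimal sketch of a Dinkelbach-type iteration (DT1-like).
# f, g: lists of callables with g_i(x) > 0 on X; x0: a feasible point;
# solve_parametric(lam): returns a minimizer of
#     min_{x in X} max_i (f_i(x) - lam * g_i(x)).

def dinkelbach_type(f, g, solve_parametric, x0, tol=1e-8, max_iter=100):
    x = x0
    for _ in range(max_iter):
        lam = max(fi(x) / gi(x) for fi, gi in zip(f, g))  # current ratio value
        x = solve_parametric(lam)                         # parametric subproblem
        val = max(fi(x) - lam * gi(x) for fi, gi in zip(f, g))
        if val >= -tol:          # subproblem value 0 characterizes optimality
            break
    return lam, x
```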
The algorithms DTR1 and DTR2 introduced in [13] use the proximal point algorithm (see, e.g., [14]) to regularize the parametric auxiliary problems of DT1 and DT2, respectively. These algorithms help to overcome the numerical difficulties that can occur when a fractional program does not have a unique solution or when the feasible set is unbounded: by using a prox-regularization method that generates a sequence of auxiliary problems with unique solutions, these difficulties can be avoided.
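In outline (see [13] for the precise formulation), the regularized subproblem solved by DTR1 at iteration k has the form

min_{x∈X} { max_{i∈I} ( f_i(x) − λ_k g_i(x) ) + α ‖x − x_k‖² },

where x_k is the current iterate, λ_k the current parameter and α > 0 a regularization parameter; in the convex case the quadratic term makes the objective strongly convex, so the minimizer x_{k+1} is unique.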
The dual algorithms introduced in [2] and [3] solve a dual problem for convex generalized fractional programs. The main feature of these algorithms is that at each iteration a single-ratio fractional programming problem is solved, and the optimal objective value of this single-ratio problem provides a lower bound on the optimal objective value of the original generalized fractional program.
The purpose of this paper is to introduce a prox-regularized dual problem for convex generalized fractional programs, together with an algorithm to solve it. The proposed algorithm combines the dual approach with the prox-regularization method. It is based on two ideas: the first is the proximal point algorithm, and the second is the dual approach used in [3] to solve generalized fractional programs. In our contribution, unlike the algorithm of [3], which computes exact solutions of the intermediate problems it generates, we content ourselves with approximate ones. That is, we prox-regularize the auxiliary problems by applying the method used in Gugat's paper [13] for explicit generalized fractional programs, but in our algorithm we apply it to a particular generalized fractional program involving an infinite number of ratios.
Our algorithm generates a sequence of dual values which converges monotonically to the optimal value of (P ), and a sequence of dual solutions which converges to a solution of the introduced dual problem of (P ). For a class of problems, including linear fractional programs, we establish that this algorithm converges at least linearly.

2. Preliminaries. Before introducing and analyzing the prox-dual regularization method, we first briefly recall the dual procedure proposed by Barros et al. in [3]. To introduce this algorithm, Barros et al. assume that (P) is a convex generalized fractional programming problem, where the nonempty feasible set X is given by

X := {x ∈ S | h(x) ≤ 0},

with S ⊂ K a compact convex set and h : R^n −→ R^r a vector-valued function such that for every 1 ≤ j ≤ r, the j-th component h_j is convex. Moreover, the continuous functions f_i, g_i : K −→ R, i ∈ I, satisfy one of the following convexity/concavity assumptions:

(C1) For every i ∈ I, the function f_i : K −→ R is convex on S and nonnegative on X, and the function g_i : K −→ R is positive and concave on S;

(C2) For every i ∈ I, the function f_i : K −→ R is convex on S and the function g_i : K −→ R is positive and affine on S.
The following Slater's condition is also imposed.
(C3) Let J denote the set of indices 1 ≤ j ≤ r such that the j-th component h_j of the vector-valued function h : R^n −→ R^r is affine. There exists some x belonging to the relative interior ri(S) of S satisfying h_j(x) < 0 for j ∉ J, and h_j(x) ≤ 0 for j ∈ J.
Remark 1. Notice that if assumption (C1) is satisfied and λ ≥ 0, or if assumption (C2) is fulfilled and λ ∈ R, then the function

x ↦ y^T (f(x) − λ g(x))

is convex on S for every y ∈ R^m_+. In addition, if the vector-valued function h is convex, then the function

x ↦ y^T (f(x) − λ g(x)) + z^T h(x)

is convex on S for all (y, z) ∈ R^m_+ × R^r_+. Next we describe this method. For this, let f(x) = (f_1(x), . . . , f_m(x))^T and g(x) = (g_1(x), . . . , g_m(x))^T.
The dual problem considered in [3] is

(Q)    sup { d(y, z) | (y, z) ∈ Σ × R^r_+ },

where the function d : Σ × R^r_+ −→ R is defined by

d(y, z) = min_{x∈S} [ y^T f(x) + z^T h(x) ] / [ y^T g(x) ],

and Σ := { y ∈ R^m_+ | Σ_{i=1}^m y_i = 1 } denotes the unit simplex of R^m. For all (λ, y, z) ∈ R × R^m × R^r, define the function

G(λ, y, z) = min_{x∈S} { y^T (f(x) − λ g(x)) + z^T h(x) }.

By means of Lagrange duality, the authors gave in [3] a dual of the parametric problem

(P_λ)    min_{x∈X} max_{i∈I} ( f_i(x) − λ g_i(x) )

appearing in Dinkelbach-type methods. This dual problem is precisely

(Q_λ)    max { G(λ, y, z) | (y, z) ∈ Σ × R^r_+ }.

Under the Slater condition (C3), the problem (Q) achieves its maximum on Σ × R^r_+ and one has ϑ(Q_λ) = ϑ(P_λ) and ϑ(Q) = ϑ(P) ([3], Proposition 1), where, for a problem (R), ϑ(R) designates the optimal value of (R). This result allows us to solve the problem (P) by solving its dual (Q).
Below we summarize the dual algorithm.

Algorithm 2.1.
1. Choose (y_0, z_0) ∈ Σ × R^r_+ and set k = 0.
2. Compute d_k = d(y_k, z_k) and determine (y_{k+1}, z_{k+1}) as an optimal solution of (Q_{d_k}).
3. If G(d_k, y_{k+1}, z_{k+1}) = 0, stop; otherwise, set k = k + 1 and go to 2.
Notice that a scaled version of Algorithm 2.1 is obtained in [3] by following the same strategy used to derive DT2.
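In outline, the iteration of Algorithm 2.1 can be coded as follows. The oracles d_value for d(·, ·) and solve_Q for the parametric dual (Q_λ) are assumptions of this illustration, not routines from [3].

```python
# Sketch of the dual iteration: d_value(y, z) evaluates d(y, z), and
# solve_Q(lam) returns a maximizer of G(lam, ., .) over Sigma x R^r_+
# together with the optimal value.

def dual_algorithm(d_value, solve_Q, y0, z0, tol=1e-8, max_iter=500):
    y, z = y0, z0
    lam = d_value(y, z)
    for _ in range(max_iter):
        (y, z), val = solve_Q(lam)   # solve the parametric dual (Q_lam)
        if val <= tol:               # G(d_k, y_{k+1}, z_{k+1}) = 0 at optimality
            break
        lam = d_value(y, z)          # lower bound d_k increases toward the optimum
    return lam, (y, z)
```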
3. The regularization of the dual problem. In this section we propose a prox-regularization of the dual problem presented by Barros et al. We consider the problem (P) with the notations of Section 2, and we assume that assumption (C1) or (C2) is fulfilled and that (C3) holds.
For (λ, ȳ, z̄) ∈ R × Σ × R^r_+ and a fixed regularization parameter α > 0, we consider the regularized dual problem

(Q(λ, ȳ, z̄))    max_{(y,z)∈Σ×R^r_+} { G(λ, y, z) − α ‖(y, z) − (ȳ, z̄)‖² }.

That is to say, the function (y, z) ↦ G(λ, y, z) − α ‖(y, z) − (ȳ, z̄)‖² is strongly concave on Σ × R^r_+, so that (Q(λ, ȳ, z̄)) admits a unique optimal solution.

Lemma 3.1.
1. For all λ ∈ R, the function (y, z) ↦ G(λ, y, z) is concave, and the function (λ, y, z) ↦ G(λ, y, z) is upper semi-continuous.
2. G(d(y, z), y, z) = 0 for all (y, z) ∈ Σ × R^r_+.
3. If (y*, z*) is an optimal solution of (Q) and λ* is the optimal value of (P), then

max_{(y,z)∈Σ×R^r_+} G(λ*, y, z) = G(λ*, y*, z*) = 0.

Proof.
1. It is obvious that for all λ ∈ R the function (y, z) ↦ G(λ, y, z) is concave, as a lower hull of concave functions, and that the function (λ, y, z) ↦ G(λ, y, z) is upper semi-continuous, as a lower hull of continuous functions.

2. Let (y, z) ∈ Σ × R^r_+. From the definition of d(y, z), we have, for all x ∈ S,

y^T f(x) + z^T h(x) ≥ d(y, z) y^T g(x),

that is, y^T (f(x) − d(y, z) g(x)) + z^T h(x) ≥ 0 for all x ∈ S. Thus, G(d(y, z), y, z) ≥ 0.

On the other hand, since S is compact, there exists some x̄ ∈ S such that

d(y, z) = [ y^T f(x̄) + z^T h(x̄) ] / [ y^T g(x̄) ],

which implies that y^T (f(x̄) − d(y, z) g(x̄)) + z^T h(x̄) = 0. Thus, G(d(y, z), y, z) ≤ 0.
Therefore, G(d(y, z), y, z) = 0 for all (y, z) ∈ Σ × R^r_+.

3. Now let (y*, z*) be an optimal solution of (Q) and λ* be the optimal value of (P). By definition, d(y*, z*) is the optimal value of (Q), and from ([3], Proposition 1), ϑ(Q) = ϑ(P), so that d(y*, z*) = λ*. Hence, by assertion 2,

G(λ*, y*, z*) = G(d(y*, z*), y*, z*) = 0.    (1)

Moreover, since y^T g(x) > 0 on S for every y ∈ Σ, the function λ ↦ G(λ, y, z) is concave, finite for all λ ∈ R, and nonincreasing. For all (y, z) ∈ Σ × R^r_+ we have d(y, z) ≤ d(y*, z*) = λ*, so that, using assertion 2 again,

G(λ*, y, z) ≤ G(d(y, z), y, z) = 0 for all (y, z) ∈ Σ × R^r_+.    (2)

From (1) and (2) we deduce that

max_{(y,z)∈Σ×R^r_+} G(λ*, y, z) = G(λ*, y*, z*) = 0,

and the assertion holds.

We have, by Lemma 3.1,

ϑ(Q(d(ȳ, z̄), ȳ, z̄)) ≥ G(d(ȳ, z̄), ȳ, z̄) = 0 for all (ȳ, z̄) ∈ Σ × R^r_+.
Now we introduce our algorithm.

Algorithm 3.1.
1. Choose (y_0, z_0) ∈ Σ × R^r_+, d_0 ∈ R and α > 0; set k = 0.
2. Determine (y_{k+1}, z_{k+1}) as the unique optimal solution of (Q(d_k, y_k, z_k)).
3. Calculate d_{k+1} = d(y_{k+1}, z_{k+1}), set k = k + 1 and go to 2.
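The iteration can be sketched as follows. Here solve_Q_reg stands for a routine, assumed available for this illustration, that returns the unique optimal solution of the regularized problem (Q(d_k, y_k, z_k)); in practice it is obtained from an optimal solution of the equivalent problem (11) below.

```python
# Sketch of Algorithm 3.1. d_value(y, z) evaluates d(y, z);
# solve_Q_reg(lam, y, z, alpha) returns the unique maximizer of
#   G(lam, u, v) - alpha * ||(u, v) - (y, z)||^2  over  Sigma x R^r_+.

import numpy as np

def prox_dual_algorithm(d_value, solve_Q_reg, y0, z0, d0, alpha,
                        tol=1e-10, max_iter=500):
    y, z, d = np.asarray(y0, float), np.asarray(z0, float), d0
    for _ in range(max_iter):
        y_new, z_new = solve_Q_reg(d, y, z, alpha)  # unique prox-dual step
        move = np.linalg.norm(np.concatenate([y_new - y, z_new - z]))
        y, z = y_new, z_new
        d = d_value(y, z)                           # d_{k+1} = d(y_{k+1}, z_{k+1})
        if move <= tol:                             # iterates have stabilized
            break
    return d, (y, z)
```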
The important step in our algorithm is to solve the minimax problem (Q(d_k, y_k, z_k)). Before analyzing the convergence and the rate of convergence of the algorithm, we give an equivalent problem that is simpler than (Q(d_k, y_k, z_k)).
To this end, set γ = (y, z) ∈ R^{m+r}, γ_k = (y_k, z_k), and let c ∈ R^{m+r} be the vector whose first m components equal 1 and whose last r components equal 0, so that

Σ × R^r_+ = { γ ∈ R^{m+r} | c^T γ = 1, γ ≥ 0 }.

For x ∈ S, let also F_k(x) = ( f(x) − d_k g(x), h(x) ) ∈ R^{m+r}, so that y^T (f(x) − d_k g(x)) + z^T h(x) = γ^T F_k(x). With these notations, the problem (Q(d_k, y_k, z_k)) reads

max_{c^T γ=1, γ≥0} min_{x∈S} { γ^T F_k(x) − α ‖γ − γ_k‖² }.

Remark 2. In order to ensure the convexity of the functions f_i − d_k g_i in the case where hypothesis (C1) is fulfilled, we need d_k ≥ 0. We will show later that the sequence {d_k} is monotonically increasing. Therefore, to have d_k ≥ 0 for all k ∈ N when hypothesis (C1) is fulfilled, it suffices to take d_0 = 0.
Proposition 2. Suppose that either assumption (C1) is satisfied and d_k ≥ 0, or (C2) is fulfilled. Then we have

max_{c^T γ=1, γ≥0} min_{x∈S} { γ^T F_k(x) − α ‖γ − γ_k‖² } = min_{x∈S} max_{c^T γ=1, γ≥0} { γ^T F_k(x) − α ‖γ − γ_k‖² },

and this common value is the optimal value of the problem (11) introduced in Proposition 3 below.

Proof. For fixed x ∈ S, the inner maximization over γ is a concave quadratic program; let l_k(γ, µ, ν) denote its Lagrangian, where µ ∈ R is the multiplier associated with the constraint c^T γ = 1 and ν ≥ 0 the multiplier associated with the constraint γ ≥ 0, and consider its dual problem

sup_{µ∈R, ν≥0} inf_{γ∈R^{m+r}} l_k(γ, µ, ν).

Let γ be a critical point of l_k(·, µ, ν); computing it and substituting it back into l_k yields (6) and (7). Combining (6) and (7), we obtain

sup_{µ∈R, ν≥0} inf_{γ∈R^{m+r}} l_k(γ, µ, ν) = sup_{t∈R^{m+r}, µ∈R, ν≥0} (…).

Then, using (8) and (9), we obtain an expression for min_{c^T γ=1, γ≥0} (…), and hence, for all x ∈ S, for max_{c^T γ=1, γ≥0} (…).

Remark that if hypothesis (C1) is satisfied, then d_k ≥ 0 by our assumption, and if hypothesis (C2) is fulfilled, then the functions g_i are affine for all i ∈ I. In all cases, the function γ ↦ γ^T F_k(x) − α ‖γ − γ_k‖² is concave for each x ∈ S, the sets S and {γ ∈ R^{m+r} | c^T γ = 1, γ ≥ 0} are convex, and S is compact. Then Sion's minimax theorem ([21], Theorem 3.4 and Corollary 3.3) implies that the maximum and the minimum above can be interchanged. Referring to (4), (5) and the last equality, we obtain the stated identity; together with (10), this gives the result.

Proposition 3. With the hypotheses of Proposition 2, let (x̄_k, µ̄_k, t̄_k) ∈ S × R × R^{m+r} be an optimal solution of the problem

(11)    inf_{x∈S, µ∈R, t∈R^{m+r}} (…).

Then t̄_k = (ȳ_k, z̄_k), where (ȳ_k, z̄_k) is the unique optimal solution of (Q(d_k, y_k, z_k)).

Proof. Let L_k(x, µ, t, γ) denote the Lagrangian associated to (11). Then (…), where the last equality follows from Proposition 2.
It follows that if γ̄_k is an optimal solution of the problem (12), and if (ȳ_k, z̄_k) ∈ R^m × R^r is such that γ̄_k = (ȳ_k, z̄_k), then (ȳ_k, z̄_k) is the optimal solution of (Q(d_k, y_k, z_k)).
It suffices now to show that if γ̄_k is an optimal solution of (12), then γ̄_k = t̄_k. For this, let (x̄_k, µ̄_k, t̄_k) be an optimal solution of (11) and let γ̄_k be an optimal solution of (12). Since, for all (x, µ) ∈ S × R, the function t ↦ (…) achieves its minimum at the unique point t = γ̄_k, it follows that the infimum in (11) is attained with t = γ̄_k, hence t̄_k = γ̄_k, which gives the desired result.

4. Convergence and rate of convergence of Algorithm 3.1. To prove the convergence and give the rate of convergence of our algorithm, we begin by showing some intermediate results.

Lemma 4.1. For all k ∈ N,

G(d(y_k, z_k), y_{k+1}, z_{k+1}) ≥ α ‖(y_{k+1}, z_{k+1}) − (y_k, z_k)‖²;

moreover, the sequence {d(y_k, z_k)} is nondecreasing and bounded from above by λ*.

Proof. From the definition of (y_{k+1}, z_{k+1}) in Algorithm 3.1, we have

G(d(y_k, z_k), y_{k+1}, z_{k+1}) − α ‖(y_{k+1}, z_{k+1}) − (y_k, z_k)‖² ≥ G(d(y_k, z_k), y_k, z_k) = 0,

the last equality coming from Lemma 3.1. So, G(d(y_k, z_k), y_{k+1}, z_{k+1}) ≥ α ‖(y_{k+1}, z_{k+1}) − (y_k, z_k)‖².

On the other hand, for all x ∈ S,

y_{k+1}^T (f(x) − d(y_k, z_k) g(x)) + z_{k+1}^T h(x) ≥ G(d(y_k, z_k), y_{k+1}, z_{k+1}) ≥ 0,

and then

[ y_{k+1}^T f(x) + z_{k+1}^T h(x) ] / [ y_{k+1}^T g(x) ] ≥ d(y_k, z_k) for all x ∈ S.

This implies that

d(y_{k+1}, z_{k+1}) = min_{x∈S} [ y_{k+1}^T f(x) + z_{k+1}^T h(x) ] / [ y_{k+1}^T g(x) ] ≥ d(y_k, z_k).

Finally, d(y_{k+1}, z_{k+1}) ≥ d(y_k, z_k), and the sequence {d(y_k, z_k)} is nondecreasing.

Now, let x ∈ X. Then there exists r ∈ I such that

max_{i∈I} f_i(x) / g_i(x) = f_r(x) / g_r(x),

which implies that

f_i(x) ≤ [ f_r(x) / g_r(x) ] g_i(x) for all i ∈ I.

Since g_i > 0 on X, we have, for all y ∈ Σ,

y^T f(x) ≤ [ f_r(x) / g_r(x) ] y^T g(x).

This implies that

y^T f(x) / y^T g(x) ≤ max_{i∈I} f_i(x) / g_i(x) for all x ∈ X and all y ∈ Σ.

On the other hand, we have X ⊂ S; then, for all z ∈ R^r_+,

d(y, z) ≤ min_{x∈X} [ y^T f(x) + z^T h(x) ] / [ y^T g(x) ] ≤ min_{x∈X} y^T f(x) / y^T g(x).

The last inequality follows from the fact that h(x) ≤ 0 on X and z ≥ 0. Thus,

d(y, z) ≤ min_{x∈X} max_{i∈I} f_i(x) / g_i(x) = λ*.

Hence, d(y, z) ≤ λ* for all (y, z) ∈ Σ × R^r_+, and in particular d(y_k, z_k) ≤ λ* for all k ∈ N, which means that {d(y_k, z_k)} is bounded from above by λ*.
Let (…) and let {u_k} be a sequence of reals such that (…). Then, the sequence {u_k} converges to some u ∈ R ∪ {−∞}.
Proof. Recall that we use the notation d_k = d(y_k, z_k) for all k ∈ N, and δ = min_{x∈S} min_{i∈I} g_i(x) and ∆ = max_{x∈S} max_{i∈I} g_i(x).
Theorem 4.6. The sequence {d(y k , z k )} converges to λ * and the sequence {(y k , z k )} converges to a solution of (Q).
Proof. Since the set S is compact and the function (x, y, z) ↦ [ y^T f(x) + z^T h(x) ] / [ y^T g(x) ] is continuous on S × Σ × R^r_+, the function d(·, ·) defined by

d(y, z) = min_{x∈S} [ y^T f(x) + z^T h(x) ] / [ y^T g(x) ]

is continuous on Σ × R^r_+. On the other hand, by Lemma 4.5, the sequence {(y_k, z_k)} is bounded, and so we can choose a subsequence {(y_k, z_k)}_{k∈K} converging to some (ȳ, z̄) ∈ Σ × R^r_+. Since the sequence {d(y_k, z_k)} converges by (…), the limit (ȳ, z̄) is a solution of (Q).

In the following, we will analyze the rate of convergence of Algorithm 3.1. For this, let Γ = Σ × R^r_+ and Γ* = argmax_{(y,z)∈Γ} G(λ*, y, z).
The last equality and the assumption (H) imply that (…). The definition of (ỹ_k, z̃_k) implies that G(λ*, ỹ_k, z̃_k) = 0. Then, using Lemma 3.1 and the last inequality, we get (…). Then, with (y, z) = (ỹ_k, z̃_k) in (22) and the three last inequalities, we obtain (…).

5. Numerical results. We consider linear generalized fractional programs with data f_i(x) = A_i x + a_i and g_i(x) = B_i x + b_i, where A_i, B_i ∈ R^n and a_i, b_i ∈ R; C is a p × n matrix and ξ ∈ R^p. We denote by A and B (resp. a and b) the matrices (resp. vectors) whose rows are the A_i's and B_i's respectively (resp. whose components are the a_i's and b_i's respectively).
The data A_i, B_i, a_i, b_i, C and ξ are generated as follows: (…). In what follows, we write the sets S and X as

X = S = {x ∈ R^n | Cx ≤ ξ}.
We will compare Algorithm 3.1 to the algorithm of [3]. The stopping criterion for Algorithm 3.1 is to reach the accuracy (…), where (x_{k+1}, y_{k+1}, z_{k+1}) is obtained from a solution of (11).

Table 2. The number of iterations and times with n = 10, m = 10, p = 5.
Observe that if we set f_i(x) = A_i x + a_i, g_i(x) = B_i x + b_i for i = 1, . . . , m, and h(x) = Cx − ξ, then

y^T [ (A − dB) x + a − d b ] + z^T (Cx − ξ) = y^T (f(x) − d g(x)) + z^T h(x).
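As a quick sanity check of this identity, one can run the following snippet on randomly generated data (illustrative code; all names are local to the snippet):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, d = 5, 3, 2, 0.7
A, B = rng.standard_normal((m, n)), rng.standard_normal((m, n))
a, b = rng.standard_normal(m), rng.standard_normal(m)
C, xi = rng.standard_normal((p, n)), rng.standard_normal(p)
x, y, z = rng.standard_normal(n), rng.random(m), rng.random(p)

f, g, h = A @ x + a, B @ x + b, C @ x - xi       # f(x), g(x), h(x)
lhs = y @ ((A - d * B) @ x + (a - d * b)) + z @ (C @ x - xi)
rhs = y @ (f - d * g) + z @ h                    # y^T(f - d g) + z^T h
assert np.isclose(lhs, rhs)                      # the identity holds
```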
But we know from Proposition 4 and the proof of Theorem 4.6 that H(d̄, ȳ, z̄) = 0 implies that d̄ = λ* and (ȳ, z̄) is a solution of the dual (Q). This justifies the use of the previous stopping criterion. For the algorithm of [3], we use the stopping criterion G(d_k, y_{k+1}, z_{k+1}) ≤ 10^{−8}. By the same arguments as before, we see that if G(d̄, ȳ, z̄) = 0, then d̄ = λ* and (ȳ, z̄) is a solution of the dual (Q).
During these numerical tests, the two algorithms will be tested for different sizes (n = 5, m = 5, p = 5), (n = 10, m = 10, p = 5), (n = 20, m = 10, p = 5) and (n = 50, m = 10, p = 5), where n is the number of variables, m is the number of ratios and p is the number of constraints.
In these tests, we analyze the behavior of Algorithm 3.1 with respect to the regularization parameter α on sets of ten problems and, at the same time, we test the efficiency of the two algorithms. The results are reported in Tables 1-4.