A NEW SMOOTHING APPROACH TO EXACT PENALTY FUNCTIONS FOR INEQUALITY CONSTRAINED OPTIMIZATION PROBLEMS

Abstract. In this study, we introduce a new smoothing approximation to the non-differentiable exact penalty functions for inequality constrained optimization problems. Error estimates between the non-smooth penalty function and the smoothed penalty function are investigated. In order to demonstrate the effectiveness of the proposed smoothing approach, numerical examples are given.

1. Introduction. Consider the following inequality constrained optimization problem:

$$(P)\qquad \min_{x\in\mathbb{R}^n} f(x)\quad \text{subject to}\quad g_i(x)\le 0,\ i\in I:=\{1,2,\ldots,m\}.$$

Let us define the set of feasible solutions by G_0 := {x ∈ R^n : g_i(x) ≤ 0, i = 1, 2, ..., m} and assume that G_0 is not empty.
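For concreteness, the feasibility test x ∈ G_0 translates directly into code. The following is a minimal Python sketch (the vector-valued function g collecting g_1, ..., g_m is our own convention, not notation from the paper):

```python
import numpy as np

def is_feasible(x, g, tol=0.0):
    """Check membership in G_0 = {x : g_i(x) <= 0, i = 1, ..., m}.

    g maps x to the vector (g_1(x), ..., g_m(x)); setting tol = tau > 0
    gives the tau-feasibility test used later in the algorithms."""
    return bool(np.all(np.asarray(g(x)) <= tol))
```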
Penalty function methods have been proposed in order to transform a constrained optimization problem into an unconstrained one. The following function is one of the well-known penalty functions:

$$F_2(x,\rho)=f(x)+\rho\sum_{i=1}^{m}\left(g_i^+(x)\right)^2,$$

where ρ > 0 is a penalty parameter and g_i^+(x) = max{0, g_i(x)}, i ∈ I. Clearly, F_2(x, ρ) is a continuously differentiable penalty function, but it is not exact. According to Zangwill [19], an exact penalty function is given by

$$F_1(x,\rho)=f(x)+\rho\sum_{i=1}^{m}g_i^+(x).$$

The obvious difficulty in minimizing F_1 is its non-differentiability, which originates from the presence of the "max" operator (when the power of the max term equals 1).
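As a small illustration of these two penalties, the following Python sketch evaluates F_1 and F_2 at an infeasible point of a toy problem (the objective, constraint, penalty parameter and point are our own, chosen purely for illustration):

```python
import numpy as np

def penalty_terms(g_vals):
    """Positive parts g_i^+(x) = max{0, g_i(x)} of the constraint values."""
    return np.maximum(g_vals, 0.0)

def F1(f, g, x, rho):
    """l1 (Zangwill) exact penalty: F_1(x, rho) = f(x) + rho * sum_i g_i^+(x)."""
    return f(x) + rho * penalty_terms(g(x)).sum()

def F2(f, g, x, rho):
    """Quadratic penalty: F_2(x, rho) = f(x) + rho * sum_i (g_i^+(x))^2."""
    return f(x) + rho * (penalty_terms(g(x)) ** 2).sum()

# Toy illustration (hypothetical problem, not one of the paper's examples):
# minimize f(x) = x1^2 + x2^2 subject to g1(x) = 1 - x1 - x2 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: np.array([1.0 - x[0] - x[1]])
x = np.array([0.2, 0.3])          # infeasible point: g1(x) = 0.5 > 0
print(F1(f, g, x, rho=10.0))      # 0.13 + 10 * 0.5  = 5.13
print(F2(f, g, x, rho=10.0))      # 0.13 + 10 * 0.25 = 2.63
```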

In the last decade, lower order penalty functions have been investigated [11]; a typical representative is

$$F_p(x,\rho)=f(x)+\rho\sum_{i=1}^{m}\left(g_i^+(x)\right)^p,\qquad p\in(0,1).$$

One of the important tools for solving these types of non-smooth (non-Lipschitz) problems is the smoothing approach, which is based on modifying the objective function or approximating it by smooth functions. The first study on the smoothing approach is Bertsekas's famous paper [4]. In order to improve smoothing approaches, different types of valuable techniques and algorithms have been developed [18,16,3,5]. In recent years, the smoothing approach has retained its popularity among scientists who face non-smooth optimization problems. The smoothing of exact penalty functions starts with [10]; for other important studies we refer to [7,8,15,9,12,2]. In this paper, we aim to develop more efficient methods for smoothing each of the exact penalty functions F_1(x, ρ) and F_p(x, ρ) and to solve the corresponding unconstrained optimization problems

$$(P_1)\quad \min_{x\in\mathbb{R}^n} F_1(x,\rho) \qquad\text{and}\qquad (P_p)\quad \min_{x\in\mathbb{R}^n} F_p(x,\rho),\ p\in(0,1),$$

by using derivative-based methods.
The next section is devoted to some preliminary knowledge about smoothing approaches. In Section 3, we propose a new smoothing approach and prove error estimates relating the optimal objective values of the smoothed penalty problem, the non-smooth penalty problem and the original optimization problem. In Section 4, we extend the smoothing approach to the lower order penalty function. In Section 5, we present minimization algorithms for problems (P_1) and (P_p) in order to find an approximate solution of problem (P). In Section 6, we apply the algorithms to important test problems, compare the results obtained from (P_1) and (P_p), and show the convergence of the algorithms. In Section 7, we present some concluding remarks.

2. Preliminaries. Differentiability is one of the most useful properties of an optimization problem in terms of optimality conditions. As mentioned above, the smoothing approach has arisen in order to exploit the good properties of differentiability. Smoothing studies start with the smoothing of the following non-smooth function:

$$q(x)=\max\{x,0\},\qquad x\in\mathbb{R}.$$

One of the effective smoothing approaches is the hyperbolic smoothing method proposed in [13,17,14], defined by

$$\phi(x,\tau)=\frac{x+\sqrt{x^2+\tau^2}}{2},$$

where τ > 0 is a parameter. This smoothing approach has been used for solving min-max problems in [1] and for solving non-smooth and non-Lipschitz regularization problems in [6].
In this study, we construct a new smoothing approach for the non-smooth function max{x, 0} and the non-Lipschitz function (max{x, 0})^p, 0 < p < 1, and we use this smoothing approach together with the exact penalty function method for solving inequality constrained optimization problems.
3. The New Smoothing Approach. Depending on the above smoothing approach, we consider the differentiable objective function F̃_1 and the corresponding minimization problem

$$(\tilde P_1)\qquad \min_{x\in\mathbb{R}^n}\tilde F_1(x,\rho,r,\tau).$$

Theorem 3.3. Let x ∈ R^n, τ > 0 and r > 1. Then |F_1(x, ρ) − F̃_1(x, ρ, r, τ)| ≤ ρmτ.

Proof. The result follows from Lemma 3.1.

Theorem 3.4. For a fixed r, let {τ_j} → 0 and let x_j be a solution of (P̃_1) for ρ > 0.
Assume that x̄ is an accumulation point of {x_j}. Then x̄ is an optimal solution of (P_1).
Proof. The proof follows from Corollary 1 and Theorem 3.3.
Theorem 3.5. For a fixed τ, let {r_j} → ∞ and let x_j be a solution of (P̃_1) for ρ > 0.
Assume that x̄ is an accumulation point of {x_j}. Then x̄ is an optimal solution of (P_1).
Proof. The proof follows from Corollary 1 and Theorem 3.3.
Theorem 3.6. Let x* be an optimal solution of problem (P_1) and let x̄ be an optimal solution of problem (P̃_1). Then |F_1(x*, ρ) − F̃_1(x̄, ρ, r, τ)| ≤ ρmτ.

Proof. The claim follows directly from Theorem 3.3.

Figure 1. The green solid curve is the graph of q(x), the red dotted curve is the graph of q̃_4(x, 0.5), and the blue dashed curve is the graph of q̃_{1.5}(x, 0.5).
Theorem 3.8. Let x* be an optimal solution of (P_1) and let x̄ be an optimal solution of (P̃_1); moreover, let x* be a feasible solution of (P) and x̄ a τ-feasible solution of (P). Then |f(x*) − f(x̄)| ≤ ρmτ.

4. Smoothing of the Lower Order Penalty Function. Let us consider the non-Lipschitz function q^p(t) = (max{t, 0})^p for 0 < p ≤ 1, and define the corresponding smoothing function q̃_r^p(t, τ), where r > 1 is a parameter such that pr > 1 and τ > 0.
Proof. For r > 1 and τ > 0, the difference q̃_r^p(t, τ) − q^p(t) is constant for t ≤ 0 and decreasing for t > 0. Thus, the difference between q̃_r^p(t, τ) and q^p(t) is at most the parameter τ > 0.
Theorem 4.4. For a fixed r, let {τ_j} → 0 and let x_j be a solution of (P̃_p) for ρ > 0. Assume that x̄ is an accumulation point of {x_j}. Then x̄ is an optimal solution of (P_p).
Proof. The proof follows from Corollary 2 and Theorem 4.3.
Theorem 4.5. Let x* be an optimal solution of (P_p) and let x̄ be an optimal solution of (P̃_p). Then |F_p(x*, ρ) − F̃_p(x̄, ρ, r, τ)| ≤ ρmτ.

Proof. The claim follows directly from Theorem 4.3.

Theorem 4.6. Let x* be an optimal solution of (P_p) and let x̄ be an optimal solution of (P̃_p); moreover, let x* be a feasible solution of (P) and x̄ a τ-feasible solution of (P). Then |f(x*) − f(x̄)| ≤ ρmτ.

5. Algorithms for Minimization Procedure. In this section, we propose algorithms to find the global optimal point by using the above smoothing approach. The first algorithm is proposed for problem (P̃_1) and the second for problem (P̃_p).

Algorithm I.
Step 1. Choose a starting point x_0 and initial parameters ρ_0 > 0, τ_0 > 0, r_0 > 1, together with constants N > 1 and 0 < η < 1 such that ηN < 1; set j = 0.
Step 2. Use x_j as the starting point to solve (P̃_1). Let x_{j+1} be the solution.
Step 3. If x_{j+1} is τ-feasible for (P), then stop; x_{j+1} is the approximate optimal solution. Otherwise, set ρ_{j+1} = Nρ_j, τ_{j+1} = ητ_j, r_{j+1} = r_j + 2 and j = j + 1, and go to Step 2.

In order to guarantee that the algorithm works correctly, we prove the following theorem.

Theorem 5.1. Let {x_j} be generated by Algorithm I with ηN < 1. If {x_j} has a limit point, then this limit point is the solution of (P).
Proof. Assume x̄ is a limit point of {x_j}. Then there exists a subset J ⊆ N such that x_j → x̄ for j ∈ J. We have to show that x̄ is the optimal solution of (P); thus, it is sufficient to show that (i) x̄ ∈ G_0 and (ii) f(x̄) ≤ inf_{x∈G_0} f(x).
(i) Suppose, on the contrary, that x̄ ∉ G_0, i.e. there exist δ_0 > 0 and i_0 ∈ {1, 2, ..., m} such that g_{i_0}(x_j) ≥ δ_0 for sufficiently large j ∈ J. Since x_j is the global minimum corresponding to the j-th values of the parameters ρ_j, τ_j, r_j, we have F̃_1(x_j, ρ_j, r_j, τ_j) ≤ F̃_1(x, ρ_j, r_j, τ_j) for any x ∈ G_0. If j → ∞, then ρ_j → ∞, ρ_jτ_j → 0 and ρ_jδ_0 → ∞. Thus, f(x) would have to take arbitrarily large values on G_0, which contradicts the boundedness of f on G_0.

(ii) By Step 2 of Algorithm I, for any x ∈ G_0 we have F̃_1(x_{j+1}, ρ_j, r_j, τ_j) ≤ F̃_1(x, ρ_j, r_j, τ_j); letting j → ∞ (so that ρ_jτ_j → 0) yields f(x̄) ≤ f(x), and hence f(x̄) ≤ inf_{x∈G_0} f(x).
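The interplay of the parameter updates with the condition ηN < 1 used in this proof is easy to verify numerically: ρ_j grows like N^j while ρ_jτ_j = (ηN)^j ρ_0τ_0 shrinks geometrically. A minimal Python sketch, using the parameter values reported later for Problem 1:

```python
# Parameter sequences of Algorithm I: rho_{j+1} = N*rho_j, tau_{j+1} = eta*tau_j.
# With eta*N < 1, rho_j -> infinity while rho_j * tau_j = (eta*N)^j * rho_0*tau_0 -> 0,
# which is exactly what the proof of Theorem 5.1 needs.
rho, tau, N, eta = 10.0, 0.01, 3.0, 0.1   # values used for Problem 1
for j in range(6):
    print(j, rho, tau, rho * tau)          # rho grows; rho*tau shrinks by eta*N = 0.3
    rho, tau = N * rho, eta * tau
```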
Algorithm II.
Step 1. Choose a starting point x_0 and initial parameters ρ_0 > 0, τ_0 > 0, r_0 > 1 with pr_0 > 1, together with constants N > 1 and 0 < η < 1 such that ηN < 1; set j = 0.
Step 2. Use x_j as the starting point to solve (P̃_p). Let x_{j+1} be the solution.
Step 3. If x_{j+1} is τ-feasible for (P), then stop; x_{j+1} is the approximate optimal solution. Otherwise, set ρ_{j+1} = Nρ_j, τ_{j+1} = ητ_j, r_{j+1} = r_j + 2 and j = j + 1, and go to Step 2.

Theorem 5.2. Let {x_j} be generated by Algorithm II with ηN < 1. If {x_j} has a limit point, then this limit point is the solution of (P).
Proof. The proof is very similar to that of Theorem 5.1.
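To make the overall loop concrete, here is a minimal Python sketch of Algorithm I. It is not the authors' implementation: since the paper's smoothing function q̃_r is not reproduced above, the hyperbolic smoothing φ from Section 2 stands in for it (so the r-update is omitted), the inner solver is SciPy's BFGS, and the toy problem at the end is our own.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_F1(x, f, g, rho, tau):
    """Smoothed l1 penalty, with hyperbolic smoothing standing in for q~_r."""
    gv = g(x)
    return f(x) + rho * np.sum(0.5 * (gv + np.sqrt(gv ** 2 + tau ** 2)))

def algorithm_I(f, g, x0, rho=10.0, tau=0.01, N=3.0, eta=0.1, max_iter=50):
    """Sketch of Algorithm I: repeatedly minimize the smoothed penalty,
    stopping once the iterate is tau-feasible for (P)."""
    assert eta * N < 1.0               # condition of Theorem 5.1
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res = minimize(smoothed_F1, x, args=(f, g, rho, tau), method="BFGS")
        x = res.x                      # Step 2: solve the smoothed problem
        if np.all(g(x) <= tau):        # Step 3: tau-feasibility check
            return x
        rho, tau = N * rho, eta * tau  # Step 3: parameter update
    return x

# Illustrative run on the toy problem used earlier (not a test problem of the paper):
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: np.array([1.0 - x[0] - x[1]])
print(algorithm_I(f, g, x0=[1.0, 1.0]))   # expected near (0.5, 0.5)
```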

6. Numerical Examples.
In this section, we apply our algorithms to test problems. The proposed algorithms are programmed in MATLAB R2011a. The numerical results show the efficiency of the method. Detailed results are presented in tables for all of the problems; in these tables we use the following abbreviations: j: the number of iterations.
x_j: the local minimum point at the j-th iteration.
For Problem 1, we choose x_0 = (1, 1) as the starting point, with ρ_0 = 10, τ_0 = 0.01, r = 2, η = 0.1 and N = 3. The results are shown in Tables 1 and 2. For both (P_1) and (P_p), the global minimum is obtained at the point x* = (0.7254, 0.3993) with the corresponding objective value 1.8376. In the papers [7,15], the global minimum point obtained is x* = (0.72540669, 0.3992805) with the corresponding value 1.837623. Both of our algorithms find the correct point, as in [7,15].
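For reference, these settings map directly onto the sketch of Algorithm I given in Section 5 (f and g below are placeholders, since Problem 1 itself is not reproduced here):

```python
import numpy as np
# Placeholders standing in for Problem 1, which is not reproduced in this sketch.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: np.array([1.0 - x[0] - x[1]])
# Parameter settings reported for Problem 1: x0 = (1, 1), rho_0 = 10,
# tau_0 = 0.01, eta = 0.1, N = 3 (algorithm_I is the sketch from Section 5).
x_approx = algorithm_I(f, g, x0=[1.0, 1.0], rho=10.0, tau=0.01, N=3.0, eta=0.1)
print(x_approx)
```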
Both of our algorithms find the correct solutions in fewer iterations than [15].

Table 3. Minimization process of Problem 2 using Algorithm I.

Table 4. Minimization process of Problem 2 using Algorithm II with p = 2/3.

It can be seen that our algorithms give numerically better results than [7], and they find the approximate solutions in fewer iterations than [15].
We choose x_0 = (0, 0, ..., 0) as the starting point, with ρ_0 = 300, τ_0 = 0.01, η = 0.1, r_0 = 8 and N = 3 for both Algorithm I and Algorithm II. The results are shown in Table 7. In [15], where three algorithms based on another smoothing technique are offered, the approximate solution is found after 4, 3 and 13 iterations by Algorithms I, II and III of [15], respectively; we note, however, that Algorithm II of [15] does not actually reach the solution. In contrast, the approximate solution is found after 3 iterations by both of our Algorithms I and II. Of course, lower iteration counts by no means imply lower computational times.

7. Conclusion. In this study, we propose two new smoothing approaches for the l_1 and l_p exact penalty functions. Both smoothing approaches yield small errors between the non-smooth penalty problem, the smoothed penalty problem and the original optimization problem. Based on these smoothing approaches, we construct minimization algorithms, apply them to test problems, and obtain satisfactory results. Our smoothing techniques provide good approximations to the non-smooth functions; in fact, both techniques can be applied to non-smooth and non-Lipschitz functions by controlling the parameter r. Moreover, they have simple formulations and are easy to apply.
The algorithms are effective for both medium and large scale optimization problems. Algorithm I reaches the optimal value rapidly, while Algorithm II attains high accuracy in locating the optimal point.
For future work, we plan to introduce new smoothing techniques with smaller errors for min-max problems, regularization problems and the penalty function approach, and to use these smoothing techniques within new global optimization algorithms.