A new type of quasi-Newton updating formula based on a new quasi-Newton equation

The quasi-Newton equation is the foundation of a wide assortment of quasi-Newton methods. In this paper, a new y-technique is introduced to modify the secant equation of the quasi-Newton methods, and from the resulting alternative equation we derive modified BFGS quasi-Newton updating formulas. The global convergence of the algorithm is proved under a line search rule. The numerical results show that the proposed method is effective on the known test problems.


1.
Introduction. This paper deals with methods that efficiently minimize unconstrained problems. Such problems have the form

min_{x ∈ R^n} f(x), (1)

where f : R^n → R and R^n is the n-dimensional Euclidean space; more details can be found in [4]. As is well known, quasi-Newton methods are iterative algorithms of the form

x_{k+1} = x_k + α_k d_k, (2)

where d_k is a search direction and α_k is a step length. In an exact line search, α_k is determined by

f(x_k + α_k d_k) = min_{α ≥ 0} f(x_k + α d_k). (3)

Usually, the step length is instead chosen to satisfy the Wolfe line search conditions

f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k, (4)

g(x_k + α_k d_k)^T d_k ≥ σ g_k^T d_k, (5)

where 0 < δ < σ < 1; for more details, see [11]. The search direction is generated by

B_k d_k + g_k = 0, (6)

where B_k is a positive definite matrix that approximates the Hessian Q_k = ∇²f(x_k) and g_k = ∇f(x_k) is the gradient of f at x_k. By convention, {B_k} satisfies the quasi-Newton equation

B_{k+1} s_k = y_k, (7)

where s_k = x_{k+1} − x_k = α_k d_k and y_k = g_{k+1} − g_k; see [3]. One famous and widely used update is the standard BFGS formula

B_{k+1} = B_k − (B_k s_k s_k^T B_k)/(s_k^T B_k s_k) + (y_k y_k^T)/(y_k^T s_k). (8)

Let H_k be the inverse of B_k. The BFGS update (8) is then equivalently written as

H_{k+1} = (I − (s_k y_k^T)/(y_k^T s_k)) H_k (I − (y_k s_k^T)/(y_k^T s_k)) + (s_k s_k^T)/(y_k^T s_k). (9)

It has been shown that the BFGS update is very effective for solving unconstrained problems (1); more details can be found in [7,18].
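As a concrete illustration, the standard BFGS update in inverse form can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's code; the function name and the quadratic check are ours.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """One BFGS update of the inverse Hessian approximation:
    H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T, rho = 1/(y^T s).
    Requires the curvature condition y^T s > 0."""
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```

By construction the updated matrix satisfies the inverse form of the quasi-Newton equation, H_{k+1} y_k = s_k, and stays symmetric.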
Great efforts have been made to find a quasi-Newton method that not only possesses global convergence but also outperforms the BFGS update numerically. Some of the modifications of the QN equation are summarized as follows:

Table 1. Some modifications of the QN equation
Wei, Li, and Qi: B_{k+1} s_k = ỹ_k = y_k + ((2(f_k − f_{k+1}) + (g_{k+1} + g_k)^T s_k)/‖s_k‖²) s_k
Zhang, Deng, and Chen: B_{k+1} s_k = ỹ_k = y_k + ((6(f_k − f_{k+1}) + 3(g_{k+1} + g_k)^T s_k)/‖s_k‖²) s_k
Yuan and Wei: B_{k+1} s_k = ỹ_k = y_k + (max{0, 2(f_k − f_{k+1}) + (g_{k+1} + g_k)^T s_k}/‖s_k‖²) s_k
Yuan, Wei, and Wu [17]: B_{k+1} s_k = ỹ_k = y_k + (max{0, 6(f_k − f_{k+1}) + 3(g_{k+1} + g_k)^T s_k}/‖s_k‖²) s_k

Many further modified methods have been presented; see [10,2,15,16] and [12] for details. In addition, Yuan et al. [15,16] proved the global convergence of the BFGS method under a modified weak Wolfe-Powell line search technique.
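As an example, the Yuan-Wei-Wu vector from Table 1 can be computed as follows. This is a minimal sketch; the function name is ours.

```python
import numpy as np

def yuan_wei_wu_y(y, s, f_k, f_k1, g_k, g_k1):
    """Modified difference vector of Yuan, Wei, and Wu [17]:
    y~ = y + max(0, 6(f_k - f_{k+1}) + 3(g_{k+1} + g_k)^T s) / ||s||^2 * s."""
    theta = 6.0 * (f_k - f_k1) + 3.0 * (g_k1 + g_k) @ s
    return y + max(0.0, theta) / (s @ s) * s
```

On an exact quadratic the correction term vanishes, so ỹ_k reduces to y_k; on non-quadratic functions it injects function-value information into the secant equation.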
In quasi-Newton techniques, the definiteness associated with the quasi-Newton equation is very important. Below we derive a new quasi-Newton equation whose idea is based on the quadratic model. The global convergence of the method is analysed, and the numerical results show that the new methods are more efficient.

2.
Derivation of the new QN equation.
The key idea behind deriving the new quasi-Newton equation is the quadratic model

f(x_k + s) ≈ f_k + g_k^T s + (1/2) s^T Q(x_k) s, (10)

where g_k is the gradient at x_k and Q(x_k) is the Hessian matrix of second-order derivatives. The minimizer of the model (10) occurs at

s = −Q(x_k)^{-1} g_k. (11)

This leads to the quasi-Newton equation

B_{k+1} s_k = y_k, (12)

where B_{k+1} plays the role of Q(x_{k+1}). The definition of x_{k+1} implies that s_k satisfies s_k = α_k d_k. From the definition of s_k, a Taylor expansion of f along s_k supplies extra curvature information (13); from the definition of y_k we get (14), and substituting (14) into (13) gives (15). To obtain higher accuracy in approximating the Hessian matrix by B_{k+1}, it is reasonable to let B_{k+1} satisfy the relation (16), which implies that the QN equation can be simplified to the modified form

B_{k+1} s_k = ỹ_k. (17)

Thus one possible choice in approximating B_{k+1} s_k is given by

ỹ_k = y_k + (θ_k/(s_k^T u_k)) u_k, θ_k = 2(f_k − f_{k+1}) + (g_{k+1} + g_k)^T s_k, (18)

where u_k is any vector such that s_k^T u_k ≠ 0. Two choices of the vector u_k in (18) for the secant equations will be considered here. New quasi-Newton updating formulas are obtained when y_k is replaced by ỹ_k as defined in (18). We now state the new BBFGS algorithm with Wolfe line search as follows:
1. Given a starting point x_0, an initial matrix H_0 = I, and a tolerance ε > 0, set k = 0.
2. If ‖g_k‖ ≤ ε, stop; otherwise compute the search direction d_k = −H_k g_k.
3. Find the step size α_k satisfying the Wolfe conditions (4) and (5).
4. Set x_{k+1} = x_k + α_k d_k. If s_k^T ỹ_k > 0, update H_{k+1} by (9) with y_k replaced by ỹ_k of (18); otherwise let H_{k+1} = H_k.
5. Set k = k + 1 and go to step 2.
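The steps above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the correction term θ_k = 2(f_k − f_{k+1}) + (g_{k+1} + g_k)^T s_k (one common modified-secant choice), uses SciPy's Wolfe line search, and all names are ours.

```python
import numpy as np
from scipy.optimize import line_search

def bbfgs(f, grad, x0, u_choice="g", tol=1e-6, max_iter=500):
    """BBFGS sketch: BFGS with y_k replaced by the modified vector
    y~_k = y_k + theta_k / (s_k^T u_k) * u_k, with u_k = g_{k+1} ("g")
    or u_k = y_k ("y"). H is updated only when s_k^T y~_k > 0, so it
    stays positive definite."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                         # step 1: H_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:           # step 2: stopping test
            break
        d = -H @ g                             # step 2: search direction
        alpha = line_search(f, grad, x, d)[0]  # step 3: Wolfe conditions
        if alpha is None:
            alpha = 1e-4                       # crude fallback if search fails
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        # assumed correction term theta_k
        theta = 2.0 * (f(x) - f(x_new)) + (g_new + g) @ s
        u = g_new if u_choice == "g" else y
        ytil = y + theta / (s @ u) * u if abs(s @ u) > 1e-12 else y
        if s @ ytil > 1e-12:                   # step 4: safeguarded update
            rho = 1.0 / (ytil @ s)
            V = np.eye(x.size) - rho * np.outer(s, ytil)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new                    # step 5: next iteration
    return x
```

On a strictly convex quadratic the correction term θ_k vanishes and the sketch reduces to standard BFGS with a Wolfe line search.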
The following theorem shows a good property of the new equation.
Theorem 2.1. Let (α_k, x_{k+1}, g_{k+1}, d_{k+1}) be generated by the new algorithm. Then B_{k+1} is positive definite for all k provided that s_k^T ỹ_k > 0.
Proof. Evaluating the quantity s_k^T ỹ_k, we have by the Wolfe conditions (4) and (5) that s_k^T y_k = (g_{k+1} − g_k)^T s_k ≥ (σ − 1) g_k^T s_k. Noting that s_k^T g_k = α_k d_k^T g_k < 0 and σ < 1, we know that there exists a constant m > 0 such that s_k^T ỹ_k ≥ m > 0. The proof is complete.

3.
Convergence analysis.
In what follows, the global convergence property of the proposed methods is established. To do so, the following assumptions are required.
Assumptions:
1. The function f is twice continuously differentiable on the level set S = {x ∈ R^n : f(x) ≤ f(x_0)}, and there exists a constant L > 0 such that ‖g(x) − g(y)‖ ≤ L‖x − y‖ for all x, y ∈ S.
2. Since {f_k} is a decreasing sequence, the sequence {x_k} generated by the new algorithm is contained in S, and there exists a constant f* such that lim_{k→∞} f_k = f*.
3. The function f is uniformly convex, i.e., there exist positive constants M and m such that m‖d‖² ≤ d^T Q(x) d ≤ M‖d‖² for all x ∈ S and d ∈ R^n, where Q(x) = ∇²f(x).
These assumptions are the same as those in [10].
Proof. Following the definition of ỹ_k, and comparing equation (10) with (13), we get

s_k^T ỹ_k = s_k^T y_k + 2(f_k − f_{k+1}) + (g_{k+1} + g_k)^T s_k = 2(f_k − f_{k+1}) + 2 s_k^T g_{k+1}. (26)

By using Taylor's series and the mean value theorem, we have

f_k = f_{k+1} − s_k^T g_{k+1} + (1/2) s_k^T Q(η_k) s_k, (27)

where η_k = x_k + ξ(x_{k+1} − x_k) and ξ ∈ (0, 1). Therefore, it follows from (26) and (27) that

s_k^T ỹ_k = s_k^T Q(η_k) s_k.

Together with Assumption 3, it is easy to obtain

m‖s_k‖² ≤ s_k^T ỹ_k ≤ M‖s_k‖².

Now we turn to the proof of the bound corresponding to (5). By the definition of ỹ_k and Taylor's series once more, we obtain ‖ỹ_k‖ ≤ M‖s_k‖. Now we turn to the proof of (25). Through rule (5) and Assumption 2 we have g_{k+1}^T s_k ≥ σ g_k^T s_k, which implies that s_k^T y_k ≥ (σ − 1) g_k^T s_k > 0. On the other hand, from Assumption 1 we have s_k^T y_k ≤ L‖s_k‖², which together with rule (4) yields the required bound. For a quadratic function, this already proves the convergence property.

Theorem 3.2. Suppose Assumptions 1 and 2 hold, and suppose that there exist constants a_1 and a_2 such that the relations

‖B_k s_k‖ ≤ a_1‖s_k‖, s_k^T B_k s_k ≥ a_2‖s_k‖² (36)

hold. Then we obtain

lim inf_{k→∞} ‖g_k‖ = 0. (37)

Proof. We proceed by contradiction: suppose that ‖g_k‖ ≥ ε for all k with some positive constant ε. Taking this into account, where the inequality follows from (32) and the bounds ‖B_k s_k‖ ≤ a_1‖s_k‖ and s_k^T B_k s_k ≥ a_2‖s_k‖², we arrive at a contradiction; thus we get (37).

For a general function, it is possible that s_k^T ỹ_k is not positive. To preserve the positive definiteness of B_{k+1}, a safeguarded version of ỹ_k is considered, where β > 0 is a constant and δ > 0 is bounded. In view of Theorem 3.2, in order to prove the convergence property, it suffices to show that (36) holds. We now state the following theorem.

Theorem 3.3. Suppose the assumptions of Theorem 3.2 hold. Then there exist constants a_1, a_2 such that, for any positive integer t, (36) holds for at least ⌊t/2⌋ values of k ∈ {1, 2, · · · , t}. This was proved by Byrd and Nocedal [1].

Theorem 3.4. Suppose f satisfies the Assumptions and let {x_k} be generated by the BBFGS algorithm. Then lim inf_{k→∞} ‖g_k‖ = 0 holds.
Proof. By Theorem 3.2, it suffices to show that (36) holds for infinitely many k. If K is a finite set, then B_k is eventually a constant matrix and (36) obviously holds. Now let K be an infinite set. By contradiction, assume that ‖g_k‖ ≥ ε for some ε > 0 and all k ∈ K.
We then find that Eq. (30), together with (42), leads to the required bound. Applying Theorem 3.3 to the subsequence {B_k}_{k∈K}, there obviously exist a_1 and a_2 such that (36) holds for infinitely many k. Therefore, by Theorem 3.2 the proof is finished. The above theorem establishes the global convergence property of the BBFGS algorithm without a convexity assumption on f.
4.
Numerical results. Following Himmelblau's rule [13], we terminated the algorithms as follows: for every problem, if ‖g_k‖ < ε or stop1 < 10^{-5} is satisfied, the program stops, where stop1 = |f_k − f_{k+1}|/|f_k| if |f_k| > 10^{-5} and stop1 = |f_k − f_{k+1}| otherwise.
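The stopping test can be written out as follows. This is a sketch following the common statement of Himmelblau's criterion; the helper name and default thresholds are ours.

```python
import numpy as np

def should_stop(g_k, f_k, f_k1, eps=1e-6, tol=1e-5):
    """Himmelblau-style stopping rule: stop when the gradient is small
    or when stop1, the (relative) decrease of f, falls below tol."""
    stop1 = abs(f_k - f_k1) / abs(f_k) if abs(f_k) > tol else abs(f_k - f_k1)
    return np.linalg.norm(g_k) < eps or stop1 < tol
```

Using the relative decrease when |f_k| is not tiny makes the test scale-invariant in f, while the absolute decrease avoids division by a near-zero function value.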
In the following table, the numerical results are reported in the form NI/NF, where NI and NF denote the number of iterations and the number of function evaluations, respectively. Dim denotes the dimension of the test problem.
From Table 2 we can see the performance of the three methods. We observe that the typical performance of the BBFGS update with u_k = g_{k+1} is the best among the three methods, and the performance of the BBFGS update with u_k = y_k is slightly better than that of the BFGS method. Hence, the BBFGS update with u_k = g_{k+1} is the most efficient of these quasi-Newton techniques for unconstrained problems.

5.
Conclusions. In this paper, a new quasi-Newton equation has been derived to produce an update with better capability. The global convergence of the method is analysed, and the numerical results show that these methods perform very well. In general, the percentage performance of the proposed BBFGS algorithms relative to the standard BFGS algorithm can be computed for the overall measures NI and NF.