A SIMPLEX GREY WOLF OPTIMIZER FOR SOLVING INTEGER PROGRAMMING AND MINIMAX PROBLEMS

Abstract. In this paper, we propose a new hybrid grey wolf optimizer (GWO) algorithm with the simplex Nelder-Mead method in order to solve integer programming and minimax problems. We call the proposed algorithm the Simplex Grey Wolf Optimizer (SGWO) algorithm. In the proposed SGWO algorithm, we combine the GWO algorithm with the Nelder-Mead method in order to refine the best solution obtained by the standard GWO algorithm. We test SGWO on 7 integer programming problems and 10 minimax problems in order to investigate its general performance. Also, we compare SGWO with 10 algorithms for solving integer programming problems and 9 algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time.

1. Introduction. Our goal in this article is to solve two classes of problems, namely integer programming and minimax problems, via a metaheuristic algorithm.
An integer programming problem is a mathematical optimization problem in which all of the variables are restricted to be integers. The unconstrained integer programming problem can be defined as follows:

min f(x), x ∈ S ⊆ Z^n,

where Z is the set of integers and S is a not necessarily bounded set.
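To make the problem form concrete, a tiny unconstrained integer program can be minimized by brute-force enumeration over a bounded S. The objective and bounds below are illustrative, not taken from the paper, and this approach is only viable for very small instances:

```python
import itertools

def solve_integer_program(f, bounds):
    """Minimize f over the integer box given by bounds = [(lo, hi), ...]
    by exhaustive enumeration (only viable for tiny instances)."""
    grid = [range(lo, hi + 1) for lo, hi in bounds]
    return min(itertools.product(*grid), key=f)

# Example: minimize f(x) = (x1 - 2)^2 + (x2 + 1)^2 over Z^2, -5 <= xi <= 5.
best = solve_integer_program(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2,
                             [(-5, 5), (-5, 5)])
print(best)  # (2, -1)
```

The exponential cost of such enumeration is precisely why exact methods such as branch and bound, and the metaheuristics discussed below, are used instead.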
Branch and Bound (BB) is one of the most famous exact integer programming algorithms; however, it suffers from high complexity, since it may explore hundreds of nodes in a big tree structure when solving large scale problems. Recently, there have been some efforts to apply swarm intelligence algorithms to solve integer programming problems, such as the ant colony algorithm [14], [15], the artificial bee colony algorithm [2], [38], the particle swarm optimization algorithm [29], the cuckoo search algorithm [39] and the firefly algorithm [3].
We consider another problem in this paper, the minimax problem. The general form of the minimax problem [41] is

min_x F(x), where F(x) = max_{1≤i≤m} f_i(x),

with f_i(x) : S ⊂ R^n → R, i = 1, . . . , m. Nonlinear programming problems of the form

min f(x), subject to g_i(x) ≥ 0, i = 2, . . . , m,

can be transformed to minimax problems by setting

f_1(x) = f(x),
f_i(x) = f(x) − α_i g_i(x), i = 2, . . . , m,

where α_i > 0, i = 2, . . . , m. It has been shown [4] that for sufficiently large α_i, the optimum point of the minimax problem coincides with the optimum point of the nonlinear programming problem.
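The transformation above can be illustrated in code. The objective, constraint and α value here are hypothetical toy choices, not from the paper:

```python
def to_minimax(f, constraints, alpha):
    """Given min f(x) subject to g_i(x) >= 0, build the minimax objective
    F(x) = max{ f(x), f(x) - alpha_i * g_i(x) }."""
    def F(x):
        values = [f(x)]
        for g, a in zip(constraints, alpha):
            values.append(f(x) - a * g(x))
        return max(values)
    return F

# Example: min x^2 subject to x - 1 >= 0 (constrained optimum at x = 1).
F = to_minimax(lambda x: x * x, [lambda x: x - 1.0], alpha=[100.0])
```

At feasible points the penalty terms are dominated by f(x), so F(x) = f(x); at infeasible points the violated constraint inflates F, which is why a sufficiently large α recovers the constrained optimum.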
Some algorithms based on smoothing techniques have been applied to solve minimax problems. These techniques solve a sequence of smooth problems, which approximate the minimax problem in the limit [21], [30], [41]. The algorithms based on these techniques aim to generate a sequence of approximations that converges to a Kuhn-Tucker point of the minimax problem for a decreasing sequence of positive smoothing parameters. However, the drawback of these algorithms is that the parameters become small too quickly and the smooth problems become significantly ill-conditioned. Some swarm intelligence algorithms have also been applied to solve minimax problems, such as PSO [29]. The main drawback of applying swarm intelligence algorithms to minimax and integer programming problems is that they are population based methods, which are computationally expensive.
Many real life applications, such as the warehouse location problem, very large scale integration (VLSI) circuit design, robot path planning, scheduling, game theory and engineering design problems [8], [26], [45], can be formulated as minimax or integer programming problems. The grey wolf optimizer (GWO) is a population based metaheuristic algorithm that simulates the leadership hierarchy and hunting mechanism of grey wolves in nature; it was proposed by Mirjalili et al. in 2014 [24]. GWO and other metaheuristic algorithms such as Ant Colony Optimization (ACO) [7], Artificial Bee Colony [16], Particle Swarm Optimization (PSO) [17], bacterial foraging [28], the Bat algorithm [42], Bee Colony Optimization (BCO) [37], wolf search [36], cat swarm [6], the Firefly algorithm [43], fish swarm/school [20], etc., have been applied to solve global optimization problems. These algorithms have been widely used to solve unconstrained and constrained problems and their applications; however, they have been applied in only a few works to solve minimax and integer programming problems.
The aim of this work is to propose a new hybrid grey wolf algorithm with the Nelder-Mead (NM) method in order to overcome the slow convergence of the standard grey wolf optimizer algorithm. The proposed algorithm, called the Simplex Grey Wolf Optimizer (SGWO), combines the grey wolf optimizer with the Nelder-Mead method in order to accelerate the search and avoid running the algorithm for additional iterations without any improvement. The SGWO algorithm is tested on 7 integer programming and 10 minimax benchmark problems. The experimental results show that the proposed SGWO is a promising algorithm and can obtain the optimal or a near optimal solution for most of the tested functions in reasonable time.
The remainder of this paper is organized as follows. In Section 2, we overview the Nelder-Mead method and summarize the main concepts of the grey wolf optimizer (GWO) algorithm. We present the main structure of the proposed SGWO algorithm in Section 3. In Section 4, we give the experimental results, and finally, we present the conclusion and future work in Section 5.

2. Overview of the standard algorithms.
2.1. Nelder-Mead method. In this section, we present the Nelder-Mead algorithm. The Nelder-Mead (NM) algorithm was proposed by Nelder and Mead in 1965 [25] and is one of the most popular derivative-free nonlinear optimization algorithms. We describe its main steps in Algorithm 1.
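The main loop of Algorithm 1 can be sketched in Python as follows. This is our minimal illustrative implementation, not the authors' code; the coefficients ρ = 1, χ = 2, γ = 0.5 and φ = 0.5 are the standard choices, and the initial-simplex construction is an assumption:

```python
import numpy as np

def nelder_mead(f, x0, rho=1.0, chi=2.0, gamma=0.5, phi=0.5,
                tol=1e-8, max_iter=500):
    """Minimize f from x0 with the Nelder-Mead simplex method.
    rho, chi, gamma, phi: reflection, expansion, contraction, shrink."""
    n = len(x0)
    # Build an initial simplex around x0 by perturbing one coordinate at a time.
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):
        v = np.array(x0, dtype=float)
        v[i] += 0.05 if v[i] != 0.0 else 0.00025
        simplex.append(v)
    fvals = [f(v) for v in simplex]

    for _ in range(max_iter):
        # Step 2: order the vertices from best to worst.
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        # Step 7: stopping condition on the function-value spread.
        if fvals[-1] - fvals[0] < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)  # centroid of the n best points
        # Step 3: reflection.
        xr = centroid + rho * (centroid - simplex[-1])
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
            continue
        if fr < fvals[0]:
            # Step 4: expansion.
            xe = centroid + chi * (xr - centroid)
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
            continue
        # Step 5: contraction (outside if xr beats the worst, inside otherwise).
        if fr < fvals[-1]:
            xc = centroid + gamma * (xr - centroid)
        else:
            xc = centroid - gamma * (centroid - simplex[-1])
        fc = f(xc)
        if fc < min(fr, fvals[-1]):
            simplex[-1], fvals[-1] = xc, fc
            continue
        # Step 6: shrink all vertices toward the best one.
        simplex = [simplex[0]] + [simplex[0] + phi * (v - simplex[0])
                                  for v in simplex[1:]]
        fvals = [fvals[0]] + [f(v) for v in simplex[1:]]

    best = int(np.argmin(fvals))
    return simplex[best], fvals[best]

# Example: minimize (x - 1)^2 + (y - 2)^2 starting from (3, 3).
x_best, f_best = nelder_mead(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2,
                             [3.0, 3.0])
```

Because the method uses only function values, it is well suited to the final refinement role it plays in SGWO, where no gradient information is assumed.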

2.2. Standard grey wolf optimizer. We now overview the main concepts and structure of the grey wolf optimizer algorithm.
Grey wolves are considered apex predators; they are at the top of the food chain. Grey wolves prefer to live in a group (pack) of 5-12 members on average. All members of the group follow a very strict social dominance hierarchy, as shown in Figure 1, which consists of the following four levels.
• The first level is called Alpha (α). The alpha wolves are the leaders of the pack; they are a male and a female. They are responsible for making decisions about hunting, time to walk, sleeping place and so on. The pack members have to follow the alpha's decisions, and they acknowledge the alpha by holding their tails down. The alpha wolf is considered the dominant wolf in the pack, and all of his/her orders should be followed by the pack members.
• The second level is called Beta (β). The betas are subordinate wolves that help the alpha in decision making. A beta wolf can be either male or female and is considered the best candidate to become the alpha when the alpha passes away or becomes very old. The beta reinforces the alpha's commands throughout the pack and gives feedback to the alpha.
• The third level is called Delta (δ). The delta wolves are neither alpha nor beta wolves; they are called subordinates. Delta wolves have to submit to the alpha and beta, but they dominate the omega (the lowest level in the wolves' social hierarchy). There are different categories of delta wolves:
Scouts. The scout wolves are responsible for watching the boundaries of the territory and warning the pack in case of any danger.
Sentinels. The sentinel wolves are responsible for protecting the pack.
Elders. The elder wolves are the experienced wolves who used to be alpha or beta.
Hunters. The hunter wolves are responsible for helping the alpha and beta wolves in hunting and providing food for the pack.
Caretakers. The caretakers are responsible for caring for the ill, weak and wounded wolves in the pack.
• The fourth (lowest) level is called Omega (ω). The omega wolves are considered the scapegoats in the pack; they have to submit to all the other dominant wolves. They may seem to be unimportant individuals in the pack, and they are the last wolves allowed to eat. The whole pack fights in case of losing the omega.

Algorithm 1 The Nelder-Mead Algorithm
1. Initialization. Let x_i denote the list of vertices in the current simplex, i = 1, . . . , n + 1.
2. Order. Order and re-label the n + 1 vertices from lowest function value f(x_1) to highest function value f(x_{n+1}).
3. Reflection. Compute the reflected point x_r = x̄ + ρ(x̄ − x_{n+1}), where x̄ is the centroid of the n best points. If f(x_1) ≤ f(x_r) < f(x_n), replace x_{n+1} with the reflected point x_r and go to Step 7.
4. Expansion. If f(x_r) < f(x_1), compute the expanded point x_e = x̄ + χ(x_r − x̄). If f(x_e) < f(x_r), replace x_{n+1} with x_e and go to Step 7; else replace x_{n+1} with x_r and go to Step 7.
5. Contraction. If f(x_r) ≥ f(x_n), perform a contraction between x̄ and the better of x_{n+1} and x_r.
(a) Outside contraction. If f(x_n) ≤ f(x_r) < f(x_{n+1}), compute x_oc = x̄ + γ(x_r − x̄). If f(x_oc) ≤ f(x_r), replace x_{n+1} with x_oc and go to Step 7; else go to Step 6.
(b) Inside contraction. If f(x_r) ≥ f(x_{n+1}), compute x_ic = x̄ − γ(x̄ − x_{n+1}). If f(x_ic) < f(x_{n+1}), replace x_{n+1} with x_ic and go to Step 7; else go to Step 6.
6. Shrink. Evaluate the n new vertices x'_i = x_1 + φ(x_i − x_1), i = 2, . . . , n + 1, and replace the vertices x_2, . . . , x_{n+1} with the new vertices x'_2, . . . , x'_{n+1}.
7. Stopping condition. Order and re-label the vertices of the new simplex as x_1, x_2, . . . , x_{n+1}. If f(x_{n+1}) − f(x_1) < ε, stop, where ε > 0 is a small predetermined tolerance; else go to Step 3.

Now we present the mathematical models of the social hierarchy, tracking, encircling and attacking prey.
Social hierarchy. In the grey wolf optimizer (GWO), we consider the fittest solution to be the alpha α, while the second and third fittest solutions are named beta β and delta δ, respectively. The rest of the solutions are considered omega ω. In the GWO algorithm, the hunting is guided by α, β and δ; the ω solutions follow these three wolves.
Encircling prey. During the hunt, the grey wolves encircle the prey. The encircling behavior is modelled by the following equations:

D = |C · X_p(t) − X(t)|,
X(t + 1) = X_p(t) − A · D,

where t is the current iteration, A and C are coefficient vectors, X_p is the position vector of the prey, and X indicates the position vector of a grey wolf.

MOHAMED A. TAWHID AND AHMED F. ALI
The vectors A and C are calculated as follows:

A = 2a · r_1 − a, (8)
C = 2 · r_2, (9)

where the components of a are linearly decreased from 2 to 0 over the course of the iterations and r_1, r_2 are random vectors in [0, 1].
Hunting. The hunting operation is usually guided by the alpha α; the beta β and delta δ might participate occasionally. In the mathematical model of the hunting behavior of grey wolves, we assume that the alpha α, beta β and delta δ have better knowledge about the potential location of the prey. The first three best solutions are saved, and the other agents are obliged to update their positions according to the positions of the best search agents, as shown in the following equations:

D_α = |C_1 · X_α − X|, D_β = |C_2 · X_β − X|, D_δ = |C_3 · X_δ − X|,
X_1 = X_α − A_1 · D_α, X_2 = X_β − A_2 · D_β, X_3 = X_δ − A_3 · D_δ,
X(t + 1) = (X_1 + X_2 + X_3)/3. (12)
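For concreteness, a single α/β/δ-guided position update (Equations 8, 9 and 12) can be written as follows. The function name `gwo_move` and the vector shapes are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_move(X, leaders, a):
    """Move one search agent X toward the alpha, beta and delta leaders.

    For each leader: A = 2*a*r1 - a and C = 2*r2 (Equations 8-9),
    D = |C * X_leader - X| and X_k = X_leader - A * D; the new position
    is the average of the three X_k (Equation 12)."""
    candidates = []
    for X_l in leaders:
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * X_l - X)
        candidates.append(X_l - A * D)
    return np.mean(candidates, axis=0)

# Example: an agent at (5, 5) pulled toward three leaders near (1, 1),
# with a small a (late in the run, i.e. exploitation).
X = np.array([5.0, 5.0])
leaders = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
X_new = gwo_move(X, leaders, a=0.1)
```

With a small, |A| is small and the agent lands close to the leaders; with a near 2, |A| can exceed 1 and push the agent away, which is the exploration behavior described next.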
Attacking prey (exploitation). The grey wolves finish the hunt by attacking the prey when it stops moving. The vector A takes random values in the interval [−2a, 2a], where a is decreased from 2 to 0 over the course of the iterations. When |A| < 1, the wolves attack the prey, which represents an exploitation process.
Search for prey (exploration). The exploration process in GWO is applied according to the positions of α, β and δ, which diverge from each other to search for prey and converge to attack prey. The exploration process is modelled mathematically by utilizing A with random values greater than 1 or less than −1 to oblige the search agents to diverge from the prey. When |A| > 1, the wolves are forced to diverge from the prey in order to find a fitter prey. The GWO algorithm is presented in Algorithm 2.

Algorithm 2 The Grey Wolf Optimizer (GWO) Algorithm
1: Set the initial values of the population size n and the maximum number of iterations Max_itr.
2: Set the iteration counter t := 0.
3: Generate an initial population X_i(t) randomly.
4: for (i = 1 : i ≤ n) do
5: Evaluate the fitness function of each search agent (solution) f(X_i).
6: end for
7: Assign the values of the first, second and third best solutions X_α, X_β and X_δ, respectively.
8: repeat
9: for (i = 1 : i ≤ n) do
10: Update each search agent in the population as in Equation 12.
11: Decrease the parameter a from 2 to 0.
12: Update the coefficients A and C as shown in Equations 8 and 9, respectively.
13: Evaluate the fitness function of each search agent (vector) f(X_i).
14: end for
15: Update the vectors X_α, X_β and X_δ.
16: Set t = t + 1.
17: until (termination criteria are satisfied)
18: Produce the best solution X_α.
The main steps of the GWO algorithm (Algorithm 2) can be summarized as follows.
- Steps 1-3. Initialize the algorithm parameters, generate the initial population randomly and evaluate the fitness of each search agent. (Lines 1-6)
- Step 4. Assign the values of the first, second and third best solutions X_α, X_β and X_δ, respectively. (Line 7)
- Step 5. The following steps are repeated until the termination criterion is satisfied. (Lines 9-14)
Step 5.1. Each search agent (solution) in the population is updated as shown in Equation 12. (Line 10)
Step 5.2. Decrease the parameter a from 2 to 0. (Line 11)
Step 5.3. The coefficients A and C are updated as shown in Equations 8 and 9, respectively. (Line 12)
Step 5.4. Each search agent in the population is evaluated by calculating its fitness function f(X_i). (Line 13)
- Step 6. The first, second and third best solutions X_α, X_β and X_δ are updated. (Line 15)
- Step 7. The iteration counter is increased: t = t + 1. (Line 16)
- Step 8. The overall process is repeated until the termination criteria are satisfied. (Line 17)
- Step 9. Produce the best search agent (solution) found so far, X_α. (Line 18)

3. The proposed SGWO algorithm. The main steps of the proposed algorithm are shown in Algorithm 3. In the proposed algorithm, we apply the main steps of the standard GWO algorithm for a number of iterations; then the best obtained solution X_α is passed to the Nelder-Mead method as a final intensification process.

Algorithm 3 The Simplex Grey Wolf Optimizer (SGWO) Algorithm
1: Set the initial values of the population size n and the maximum number of iterations Max_itr.
2: Set the iteration counter t := 0.
3: Generate an initial population X_i(t) randomly.
4: for (i = 1 : i ≤ n) do
5: Evaluate the fitness function of each search agent (solution) f(X_i).
6: end for
7: Assign the values of the first, second and third best solutions X_α, X_β and X_δ, respectively.
8: repeat
9: for (i = 1 : i ≤ n) do
10: Update each search agent in the population as in Equation 12.
11: Decrease the parameter a from 2 to 0.
12: Update the coefficients A and C as shown in Equations 8 and 9, respectively.
13: Evaluate the fitness function of each search agent (vector) f(X_i).
14: end for
15: Update the vectors X_α, X_β and X_δ.
16: Set t = t + 1.
17: until (termination criteria are satisfied)
18: Apply the Nelder-Mead method to the best obtained solution X_α (final intensification).
19: Produce the best solution.

4. Numerical experiments. We investigate the efficiency of the proposed SGWO algorithm by testing its general performance on different benchmark functions and comparing its results with those of several other algorithms. We programmed SGWO in MATLAB and take the results of the comparative algorithms from their original papers. In the following subsections, we report the parameter settings of the proposed algorithm in detail and the properties of the applied test functions. We also present a performance analysis of the proposed algorithm, together with comparative results between it and the other algorithms.

4.1. Parameter setting. We summarize the parameters of the SGWO algorithm and their values in Table 1. We select these values based on common settings in the literature or on our preliminary numerical experiments.
• Number of search agents n. The experimental tests show that the best number of search agents (population size) is 20; increasing this number increases the number of function evaluations without any improvement in the obtained results.
• a. The components of a are linearly decreased from 2 to 0 over the course of the iterations.
• A and C. The coefficient vectors are updated as shown in Equations (8) and (9).
• r_1 and r_2. Random vectors in [0, 1].
• Intensification parameter N_elite. In the final intensification stage, we apply a local search using the Nelder-Mead method, starting from the elite solutions (N_elite) obtained in the previous search stage; we set N_elite = 1. Increasing the number of selected elite solutions increases the number of function evaluations.
• Maximum number of iterations Max_itr. The main termination criterion in the standard GWO algorithm is the number of iterations. In the proposed algorithm, we run the standard GWO algorithm for 2d iterations for the integer programming problems and 3d iterations for the minimax problems, where d is the problem dimension; then we pass the best found solution to the Nelder-Mead method.
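Putting the components together, the SGWO procedure (a standard GWO loop followed by a Nelder-Mead refinement of X_α) can be sketched as follows. This is our illustrative Python sketch, not the authors' MATLAB code; it uses SciPy's Nelder-Mead implementation in place of Algorithm 1, and `sgwo` and its defaults are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def sgwo(f, dim, bounds, n_agents=20, max_itr=None, seed=0):
    """Sketch of SGWO: run the standard GWO loop, then refine the best
    solution found (N_elite = 1) with the Nelder-Mead method."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    max_itr = max_itr or 2 * dim              # e.g. 2d iterations, as in the paper
    X = rng.uniform(lo, hi, size=(n_agents, dim))
    fit = np.array([f(x) for x in X])
    for t in range(max_itr):
        leaders = X[np.argsort(fit)[:3]].copy()   # alpha, beta, delta
        a = 2.0 - 2.0 * t / max_itr               # a decreases from 2 to 0
        for i in range(n_agents):
            cands = []
            for X_l in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a              # Eq. (8)
                C = 2.0 * r2                      # Eq. (9)
                D = np.abs(C * X_l - X[i])
                cands.append(X_l - A * D)
            X[i] = np.clip(np.mean(cands, axis=0), lo, hi)   # Eq. (12)
            fit[i] = f(X[i])
    best = X[np.argmin(fit)]
    # Final intensification: Nelder-Mead on the best GWO solution.
    res = minimize(f, best, method="Nelder-Mead")
    return res.x, res.fun

# Example: 4-dimensional sphere function over [-5, 5]^4.
x, fx = sgwo(lambda v: float(np.sum(v ** 2)), dim=4, bounds=(-5.0, 5.0))
```

The short GWO phase supplies a good starting point, and the deterministic simplex refinement then converges quickly, which mirrors the division of labor the paper describes.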

4.2. Integer programming optimization test problems. In this subsection, we test the efficiency of the SGWO algorithm by applying it to the 7 benchmark integer programming problems (FI1-FI7) shown in Table 2.
In Table 3, we list the properties of the benchmark functions (function number, dimension of the problem, problem bounds and the global optimum of each problem).
4.3. The efficiency of the proposed SGWO algorithm on integer programming problems. In this subsection, we verify the importance of invoking the Nelder-Mead (NM) method in the final stage as a final intensification process. In Table 4, we show the mean numbers of function evaluations of the standard grey wolf optimizer (GWO) without the Nelder-Mead method, of the Nelder-Mead method alone and of the proposed SGWO algorithm, respectively. We apply the same termination criterion for all algorithms: the search terminates when the algorithm reaches the optimal solution within an error of 10^-4 before 20,000 function evaluations. We report the average number of function evaluations over 50 runs and give the best results in boldface. The initial solution of the Nelder-Mead method is randomly generated. The results in Table 4 show that invoking the Nelder-Mead method in the final stage enhances the general performance of the proposed algorithm, which reaches the optimal or a near optimal solution faster than both the standard grey wolf algorithm and the Nelder-Mead method in 5 out of 7 cases.
Figure 2 shows the results for four functions, FI1, FI2, FI6 and FI7 (randomly picked). In Figure 2, the solid line represents the results of the standard GWO algorithm, while the dotted line represents the results of the proposed algorithm after passing the best solution obtained by the standard GWO algorithm to the Nelder-Mead method. The results in Figure 2 show that the function values of the proposed SGWO decrease rapidly as the number of iterations increases. Hence, invoking the Nelder-Mead method as an intensification process can accelerate the search and obtain the optimal or a near optimal solution in reasonable time.

4.5. SGWO and other algorithms. We compare the proposed SGWO with four benchmark algorithms (particle swarm optimization with various variants) in order to verify the efficiency of the proposed algorithm. Before discussing the comparison results, let us give a brief description of the four comparative algorithms [29].
• RWMPSOg. Random Walk Memetic Particle Swarm Optimization (global variant), which combines particle swarm optimization with a random walk with direction exploitation.
• RWMPSOl. Random Walk Memetic Particle Swarm Optimization (local variant), which combines particle swarm optimization with a random walk with direction exploitation.
• PSOg. The standard particle swarm optimization (global variant) without a local search method.
• PSOl. The standard particle swarm optimization (local variant) without a local search method.

4.5.1. Comparison between RWMPSOg, RWMPSOl, PSOg, PSOl and SGWO for integer programming problems. We present the comparison results between our SGWO algorithm and the other algorithms in order to verify the efficiency of the proposed algorithm. We test the five algorithms on the 7 benchmark functions and take the results of the comparative algorithms from their original paper [29]. In Table 5, we report the minimum (Min), maximum (Max), average (Mean), standard deviation (St.D) and success rate (%Suc) of the number of function evaluations over 50 runs. A run is successful if the algorithm reaches the global minimum within an error of 10^-4 before 20,000 function evaluations. The best results among the comparative algorithms are shown in boldface. The results in Table 5 show that the proposed SGWO algorithm succeeds on all functions; for functions FI4 and FI6 the other algorithms are slightly better, but the success rate of the proposed algorithm is 100% for all functions.

4.5.2. SGWO and other meta-heuristics and swarm intelligence algorithms for integer programming problems.
We compare the SGWO algorithm with different metaheuristic and swarm intelligence (SI) algorithms: the genetic algorithm (GA) [12], the differential evolution algorithm [35], particle swarm optimization (PSO) [17], the firefly (FF) algorithm [43] and the cuckoo search (CS) algorithm [44]. In order to make a fair comparison, we set the population size to 20 for all algorithms, and the termination criterion is the same for all of them: the algorithm stops when it reaches the global minimum within an error of 10^-4 before 20,000 function evaluations. We apply the standard parameter settings for all compared metaheuristic and swarm intelligence algorithms. In Table 6, we report the average (Avg) and standard deviation (SD) for all algorithms over 50 runs. We can conclude from Table 6 that the SGWO algorithm obtains the desired optimum values faster than the other SI algorithms.

4.6. SGWO and the branch and bound method. We apply another test to verify the power of the proposed algorithm on integer programming problems by comparing SGWO with the branch and bound (BB) method [5], [19], [23]. Before we discuss the comparative results between the proposed algorithm and the BB method, we present the BB method and the main steps of its algorithm.

4.6.1. Branch and bound method. The branch and bound (BB) method is one of the most widely used methods for solving optimization problems. The main idea of the BB method is that the feasible region of the problem is successively partitioned into several subregions; this operation is called branching. Lower and upper bounds on the function can then be determined over these partitions; this operation is called bounding. For the sake of completeness, we report the main steps of the BB method in Algorithm 4.
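The branching and bounding idea can be sketched for a separable convex objective, where clamping the continuous minimizer into a box gives a valid lower bound. The objective family and bounds below are hypothetical toy choices, not the problems or the BB variant used in the paper:

```python
import math

def branch_and_bound(center, lo, hi):
    """Minimize f(x) = sum((x_i - c_i)^2) over the integer box [lo, hi]^n
    by branching on sub-boxes and bounding with the clamped continuous minimum."""
    n = len(center)

    def lower_bound(box):
        # Continuous minimum over the box: clamp each c_i into [l_i, h_i].
        return sum((min(max(c, l), h) - c) ** 2
                   for c, (l, h) in zip(center, box))

    best_x, best_f = None, math.inf
    stack = [[(lo, hi)] * n]
    while stack:
        box = stack.pop()
        if lower_bound(box) >= best_f:
            continue                        # prune: cannot beat the incumbent
        # Pick a dimension that still spans more than one integer.
        j = next((i for i, (l, h) in enumerate(box) if l < h), None)
        if j is None:
            x = tuple(l for l, _ in box)    # box shrank to a single integer point
            fx = sum((xi - c) ** 2 for xi, c in zip(x, center))
            if fx < best_f:
                best_x, best_f = x, fx
            continue
        l, h = box[j]
        mid = (l + h) // 2                  # branching: split dimension j
        stack.append(box[:j] + [(l, mid)] + box[j + 1:])
        stack.append(box[:j] + [(mid + 1, h)] + box[j + 1:])
    return best_x, best_f

x_bb, f_bb = branch_and_bound(center=[2.3, -0.6], lo=-10, hi=10)
print(x_bb, f_bb)  # (2, -1) 0.25
```

The pruning step is what distinguishes BB from plain enumeration: sub-boxes whose lower bound already exceeds the incumbent are discarded without being explored.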
In Table 7, we give the comparison results between the BB method and the proposed SGWO, taking the results of the BB method from its original paper [18]. In [18], the BB algorithm transforms the initial integer programming problem into a continuous one. For the bounding, the BB method uses sequential quadratic programming to solve the generated subproblems, while for the branching, depth first traversal with backtracking is used. In Table 7, we report the average (Mean), standard deviation (St.D) and success rate (Suc) over 30 runs, with the best mean evaluation values between the two algorithms in boldface. The results in Table 7 show that the proposed algorithm is better than the BB method on all functions except FI2 and FI6. Overall, Table 7 shows that the proposed algorithm is faster and more efficient than the BB method in most cases.

4.7. Minimax optimization test problems. We consider another type of optimization test problems in order to investigate the efficiency of the proposed algorithm. We define 10 benchmark minimax functions in Table 8 and report their properties in Table 9.
4.8. The efficiency of the proposed SGWO algorithm on minimax problems. We apply another test to investigate the idea of invoking the Nelder-Mead method in the final stage as a final intensification process for the standard grey wolf optimizer. In Table 10, we show the mean numbers of function evaluations of the standard grey wolf optimizer without the Nelder-Mead method, of the Nelder-Mead method alone and of the proposed SGWO algorithm, respectively. We apply the same termination criterion for all algorithms: the search terminates when the algorithm reaches the optimal solution within an error of 10^-4 before 20,000 function evaluations. We report the average number of function evaluations over 100 runs, with the best results in boldface. Table 10 shows that invoking the Nelder-Mead method in the final stage enhances the general performance of the proposed algorithm, which reaches the optimal or a near optimal solution faster than the standard grey wolf optimizer and the Nelder-Mead method. Hence, invoking the Nelder-Mead method as an intensification process can accelerate the search and obtain the optimal or a near optimal solution in reasonable time.

4.10. SGWO and other algorithms. We compare SGWO with three benchmark algorithms in order to verify the efficiency of the proposed algorithm on minimax problems. Before discussing the comparison results, we give a brief description of the three comparative algorithms.
• HPS2 [13]. HPS2 is a heuristic pattern search algorithm, applied to bound constrained minimax problems, which combines the Hooke and Jeeves (HJ) pattern and exploratory moves with a randomly generated approximate descent direction.
• UPSOm [27]. UPSOm is a unified particle swarm optimization algorithm, which combines the global and local variants of the standard PSO and incorporates a stochastic parameter to imitate mutation in evolutionary algorithms.
• RWMPSOg [29]. RWMPSOg is a Random Walk Memetic Particle Swarm Optimization (global variant), which combines particle swarm optimization with a random walk with direction exploitation.

4.10.1. Comparison between HPS2, UPSOm, RWMPSOg and SGWO for minimax problems. We present the comparison results between the proposed SGWO algorithm and the other algorithms in order to verify the efficiency of the proposed algorithm. We test the four algorithms on the 10 benchmark functions and take the results of the comparative algorithms from their original papers [13]. In Table 11, we report the average (Avg), standard deviation (SD) and success rate (%Suc) over 100 runs. When the results of an algorithm are not reported in its original paper, we mark them with (-), as for FM8 in the HPS2 algorithm and FM2, FM8 and FM9 in the RWMPSOg algorithm. A run is successful if the algorithm reaches the global minimum within an error of 10^-4 before 20,000 function evaluations. The results in Table 11 show that the proposed SGWO algorithm succeeds in most runs and obtains the objective value of each function faster than the other algorithms, except for functions FM3, FM9 and FM10, for which the HPS2 results are better.
4.10.2. SGWO and other meta-heuristics and swarm intelligence algorithms for minimax problems. We also compare the proposed SGWO algorithm with the same meta-heuristic and swarm intelligence (SI) algorithms as in Subsection 4.5.2, now on minimax problems. In Table 12, we report the average (Avg) and standard deviation (SD) for all algorithms over 100 runs. The results in Table 12 show that the proposed SGWO algorithm outperforms the other SI algorithms.

4.11. SGWO and the SQP method. Another test for our proposed algorithm is to compare SGWO with another famous method, sequential quadratic programming (SQP).

4.11.1. Sequential quadratic programming (SQP). The first reference to SQP algorithms appeared in Wilson's PhD thesis in 1963 [40], where he proposed the Newton-SQP algorithm. The development of secant, or variable-metric, algorithms led to the extension of these methods to constrained problems. The main steps of the SQP method can be summarized as follows.
• Step 1. The SQP algorithm starts with an initial solution x_0 and an initialization of the Hessian matrix of the objective function.
• Step 2. At each iteration, the BFGS method is used to calculate a positive definite quasi-Newton approximation of the Hessian matrix, where the Hessian update is calculated as

H_{n+1} = H_n + (q_n q_n^T)/(q_n^T s_n) − (H_n s_n s_n^T H_n)/(s_n^T H_n s_n),

where s_n = x_{n+1} − x_n and q_n = ∇f(x_{n+1}) − ∇f(x_n).
• Step 3. A QP subproblem in z is solved: min q(z) = (1/2) z^T H z + c^T z.
• Step 4. The new potential solution is calculated using the solution z_n:

x_{n+1} = x_n + α_n z_n, (15)

where α_n is a step length determined through a line search. For extended theoretical aspects of the SQP algorithm, refer to [9], [11].
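The BFGS update in Step 2 can be sketched as follows. This is a toy unconstrained quasi-Newton loop of our own (unit step length instead of a line search, no constraints), not a full SQP implementation:

```python
import numpy as np

def bfgs_update(H, s, q):
    """BFGS update of the Hessian approximation H, with s = x_{n+1} - x_n
    and q = grad f(x_{n+1}) - grad f(x_n)."""
    Hs = H @ s
    return H + np.outer(q, q) / (q @ s) - np.outer(Hs, Hs) / (s @ Hs)

# Toy quasi-Newton loop on f(x) = 0.5 * x^T Q x (gradient Q x, minimum at 0).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: Q @ x
x = np.array([1.0, 1.0])
H = np.eye(2)                         # initial Hessian approximation
for _ in range(20):
    g_old = grad(x)
    z = -np.linalg.solve(H, g_old)    # quasi-Newton direction (QP solution)
    x = x + z                         # unit step length for simplicity
    s, q = z, grad(x) - g_old
    if abs(q @ s) > 1e-12:            # skip the update on tiny steps
        H = bfgs_update(H, s, q)
print(x)  # close to [0, 0]
```

The update preserves symmetry and, when q^T s > 0, positive definiteness of H, which is what makes each QP subproblem well posed.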
We report the results of the two comparative algorithms on the 10 benchmark functions and take the results of the SQP algorithm from [18]. In Table 13, we report the average (Avg), standard deviation (SD) and success rate (%Suc) over 30 runs. A run is successful if the algorithm reaches the global minimum within an error of 10^-4 before 20,000 function evaluations. The results in Table 13 show that the proposed SGWO algorithm outperforms the SQP algorithm on 7 of the 10 functions, while the SQP results are better for functions FM3, FM5 and FM6. We conclude from this comparison that the proposed SGWO outperforms the SQP algorithm on most of the tested minimax problems.

5. Conclusion and future work. In this paper, we propose a new hybrid grey wolf algorithm with the Nelder-Mead method in order to solve integer programming and minimax problems. The proposed algorithm is called the Simplex Grey Wolf Optimizer (SGWO) algorithm. The purpose of invoking the Nelder-Mead method within the standard grey wolf algorithm is to overcome the slow convergence of the standard grey wolf optimizer (GWO) by refining the best solution obtained by the standard GWO, instead of keeping the algorithm running for more iterations without improvement (or with only slow improvement) in the results. In order to verify the robustness and effectiveness of the proposed algorithm, we applied it to 7 integer programming and 10 minimax problems.
The experimental results show that the proposed algorithm is a promising algorithm and has a powerful ability to solve integer programming and minimax problems faster than other algorithms in most cases.
Our future work will explore the following directions:
• Solving constrained optimization and engineering problems, such as the design of a tension/compression spring [1], a welded beam [31], a gear train [33] and a pressure vessel [33], with our proposed algorithm.
• Solving other combinatorial problems, as well as large scale integer programming and minimax problems, by extending and modifying our proposed algorithm.
• Studying the complexity and convergence of the proposed algorithm mathematically, as suggested by one of the referees.