On a hybrid particle swarm optimization algorithm

Research on variants of the Particle Swarm Optimization technique has continued for the last several decades, with ongoing efforts to develop ever more efficient algorithms. In this paper a newly developed hybrid Particle Swarm Optimization algorithm (to be known as Pari PSO) is proposed. The algorithm is constructed by taking the contribution of gbest as 65% and the contribution of pbest as 35%, which is a novel philosophy for updating the velocity equation. The proposed algorithm has been tested on several benchmark problems, and the results have been compared with those obtained using Standard Particle Swarm Optimization (SPSO) and Mean Particle Swarm Optimization (MPSO). On the basis of the results obtained it is concluded that the proposed algorithm performs better than SPSO and MPSO in most cases in terms of efficiency, computation time, reliability, accuracy and stability.


Introduction
Particle Swarm Optimization (PSO) was developed by Kennedy and Eberhart (1995) and Kennedy et al. (2001), based on swarm behavior such as fish and bird schooling in nature. The PSO algorithm comprises a collection of particles that move around the search space, influenced by their own best past location and the best past location of the whole swarm or of a close neighbor. In each iteration a particle's velocity is updated using (Eq. 1):

v_i(k+1) = v_i(k) + c1 × rand() × (pbest_i − x_i(k)) + c2 × rand() × (gbest − x_i(k)) (1)

where v_i(k+1) is the new velocity of the i-th particle, c1 and c2 are the weighting coefficients for the personal best and global best positions respectively, x_i(k) is the i-th particle's position at time k, pbest_i is the i-th particle's best known position, and gbest is the best position known to the swarm. The rand() function generates uniformly distributed random numbers in [0, 1]. Variants of this update equation consider the best positions within a particle's local neighborhood at time k.
A particle's position is updated using (Eq. 2):

x_i(k+1) = x_i(k) + v_i(k+1) (2)

where x_i(k) is the i-th particle's position at time k and v_i(k) is the old velocity of the i-th particle. Shi and Eberhart (1998) introduced the inertia weight into this theory. The inertia weight is applied to the velocity of the last iteration, and the velocity update equation becomes (Eq. 3):

v_i(k+1) = w × v_i(k) + c1 × rand() × (pbest_i − x_i(k)) + c2 × rand() × (gbest − x_i(k)) (3)

Clerc (1999) introduced the concept of the constriction factor. The corresponding velocity update is (Eq. 4):

v_i(k+1) = χ × [v_i(k) + c1 × rand() × (pbest_i − x_i(k)) + c2 × rand() × (gbest − x_i(k))] (4)

where χ is called the constriction factor and φ = c1 + c2 > 4. Generally φ is taken as 4.1, so χ = 0.729. Experimental results with equation (4) show that, compared with the PSO algorithm with inertia weight, the convergence of the PSO algorithm with the constriction factor is much quicker. In fact, when proper values of w, c1 and c2 are chosen, the two calculation methods are identical, so the PSO algorithm with a constriction factor can be regarded as a special case of the PSO algorithm with inertia weight. Meanwhile, properly selected parameters can further improve the performance of either method.
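The inertia-weight update in Eqs. (1)-(3) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, default parameter values and velocity clamp are choices made here for the example:

```python
import random

def spso_step(x, v, pbest, gbest, w=0.8, c1=1.6, c2=1.6, vmax=2.0):
    """One SPSO update for a single particle (Eqs. 1-3), applied per dimension."""
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = (w * v[d]                                      # momentum term
              + c1 * random.random() * (pbest[d] - x[d])    # cognitive (pbest) term
              + c2 * random.random() * (gbest[d] - x[d]))   # social (gbest) term
        vd = max(-vmax, min(vmax, vd))                      # bound the velocity
        new_v.append(vd)
        new_x.append(x[d] + vd)                             # position update, Eq. (2)
    return new_x, new_v
```

With c1 = c2 = 0 and w = 1 the particle simply coasts with its current velocity, which makes the momentum interpretation of the first term easy to check.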

Literature review
Several interesting variations of the PSO algorithm have recently been proposed by various researchers.
Multi-Swarm Cooperative Particle Swarm Optimizer is developed by Niu et al. (2007). MCPSO is based on a master-slave model, in which a population consists of one master swarm and several slave swarms. The slave swarms execute a single PSO or its variants independently to maintain the diversity of particles, while the master swarm evolves based on its own knowledge and also the knowledge of the slave swarms. According to the coevolutionary relationship between master swarm and slave swarms, two versions of MCPSO are proposed, namely the competitive version of MCPSO (COM-MCPSO) and the collaborative version of MCPSO (COL-MCPSO), where the master swarm enhances its particles based on an antagonistic scenario or a synergistic scenario, respectively. The performance of the proposed algorithms has been compared with the standard PSO (SPSO) and its variants to demonstrate the superiority of MCPSO.
Quadratic Interpolation Particle Swarm Optimization (QIPSO) was developed by Pant et al. (2007). The QIPSO algorithm makes use of a multiparent, quadratic crossover/reproduction operator defined on the BPSO algorithm. The authors compared the results with Basic Particle Swarm Optimization.
Mean particle swarm optimization for function optimization has been introduced by Deep and Bansal (2009). The method is based on a novel philosophy by modifying the velocity update equation. This is done by replacing two terms of original velocity update equation by two new terms based on the linear combination of pbest and gbest. Its performance is compared with the standard PSO (SPSO) by testing on benchmark problems. Based on the numerical and graphical analyses of results it is shown that the MeanPSO outperforms the SPSO, in terms of efficiency, reliability, accuracy and stability.
A Competitive Learning Model is introduced by Murugesan and Palaniswami (2012). Hybridization of this algorithm using swarm intelligence techniques further improves its efficiency. Various works on the hybridization of Particle Swarm Optimization (PSO) with Simple Competitive Learning (SCL) have been proposed and are found to be efficient in image segmentation.
A Modified Hybrid Particle Swarm Optimization (MHPSO) algorithm has been developed by Said Labed et al. (2011). This approach combines principles of Particle Swarm Optimization (PSO), the crossover operation of the Genetic Algorithm, and the 2-opt improvement heuristic. The main feature of this approach is that it avoids a major problem of metaheuristics, namely parameter setting.
A New Disc-Based Particle Swarm Optimization is developed by Yadav and Deep (2012). With the help of this approach authors have solved complex optimization problems. The reliability of the algorithms is validated statistically on several benchmark problems and also compared with the existing versions of PSO.
One half global best position particle swarm optimization has been introduced by Singh and Singh (2011). The performance of this algorithm has been tested through numerical and graphical results. The results obtained are compared with the standard PSO (SPSO) for scalable and non-scalable problems.
The results indicate that the new approach is better than SPSO in terms of efficiency, reliability, accuracy and stability.
Personal best position particle swarm optimization has been introduced by . The proposed approach uses a novel philosophy of modifying the velocity update equation of the Standard Particle Swarm Optimization approach. The modification is made by dropping the gbest term from the velocity update equation of SPSO and thus relying on pbest only. The performance of the proposed algorithm (Personal Best Position Particle Swarm Optimization, PBPPSO) has been tested on several benchmark problems. It is concluded that PBPPSO performs better than SPSO in terms of accuracy and quality of solution.
A new version of the particle swarm optimization algorithm has been developed by . The algorithm was developed by combining two different approaches to PSO, i.e., Standard Particle Swarm Optimization and Mean Particle Swarm Optimization. Numerical experiments on well-known scalable and non-scalable test problems have shown the superiority of the newly proposed Hybrid Particle Swarm Optimization (HPSO) approach over the classical SPSO algorithm in terms of convergence speed and quality of the obtained solutions.
A Modified PSO algorithm has also been developed by Ghatei et al. (2012). In this approach, a range for acceptable answers is defined, equivalent to the parameter used in the GDA called "water level". This range shrinks or grows over time depending on whether the algorithm is being used for minimization or maximization. The algorithm has been tested on some standard functions and its performance has been compared with standard PSO. Test results indicate that the proposed method significantly improves the ability of PSO to escape local optima and increases the accuracy and the convergence rate.

New proposed algorithm: Pari PSO
The objective of developing a new algorithm was to reduce the number of clock cycles needed to find the minimum functional value and hence to make the method more economical. To achieve this, many numerical experiments were performed. In this algorithm the velocity equation has been updated as (Eq. 5):

v_i(k+1) = w × v_i(k) + c1 × rand() × (0.35 × pbest_i − x_i(k)) + c2 × rand() × (0.65 × gbest − x_i(k)) (5)

In the velocity update equation of this new PSO, the first term represents the current velocity of the particle and can be thought of as a momentum term. The second term, proportional to the vector (0.35 × pbest_i − x_i(k)), is responsible for the attraction of the particle's current position toward the positive direction of its own best position (pbest). The third term, proportional to the vector (0.65 × gbest − x_i(k)), is responsible for the attraction of the particle's current position toward the global best position (gbest).
The (original) process for implementing the global version of Pari PSO is as follows:

ALGORITHM — Pari PSO
For each particle
    Initialize particle
End For
Do
    For each particle
        Calculate fitness value
        If the fitness value is better than its personal best, set the current value as the new pbest
    End For
    Choose the particle with the best fitness value of all as gbest
    For each particle
        Calculate particle velocity according to Equation (5)
        Update particle position according to Equation (2)
    End For
While the maximum number of iterations or the minimum error criterion is not attained
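The loop above can be sketched in Python. This is an illustrative sketch built around the Eq. (5) update with the 0.35/0.65 weights, not the authors' original C implementation; the function signature, default parameter values, and the position clamp are assumptions made for this example:

```python
import random

def pari_pso(f, dim, n_particles=30, iters=300, w=0.8, c1=1.6, c2=1.6,
             xmin=-100.0, xmax=100.0, vmax=2.0):
    """Minimise f over [xmin, xmax]^dim using the Pari PSO velocity update (Eq. 5)."""
    X = [[random.uniform(xmin, xmax) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                      # personal best positions
    pval = [f(x) for x in X]                       # personal best values
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]             # global best position / value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vd = (w * V[i][d]
                      + c1 * random.random() * (0.35 * pbest[i][d] - X[i][d])
                      + c2 * random.random() * (0.65 * gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, vd))          # bound the velocity
                X[i][d] = min(xmax, max(xmin, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pval[i]:                       # update pbest, then gbest
                pval[i], pbest[i] = fx, X[i][:]
                if fx < gval:
                    gval, gbest = fx, X[i][:]
    return gbest, gval
```

For example, on the two-dimensional sphere function f(x) = x1² + x2² the returned best value quickly approaches zero.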

Remark
The name of this algorithm has been coined by the first author in the lingering memories of his beloved daughter Late Ms. Pari.

SPSO parameters settings
The parameters of SPSO given in the literature are:
1. The number of particles should be low, around 20-40.
2. The speed at which a particle can move (maximum change in its position per iteration) should be bounded, for example to a percentage of the size of the domain.
3. A local bias (local neighborhood) factor can be introduced, where neighbors are determined based on the Euclidean distance between particle positions.
4. Particles that leave the boundary of the problem space may be penalized, reflected back into the domain, or biased to return toward a position inside the problem domain. Alternatively, a wrapping strategy may be used at the edge of the domain, creating a loop, toroid or related geometrical structure at the chosen dimensionality.
5. An inertia or momentum coefficient can be introduced to limit the change in velocity (weights 0.4 to 0.9 and momentum coefficient 1.4 to 2.0).
6. The maximum number of function evaluations is fixed at 30,000.
7. The dynamic range for each element of a particle is defined as (-100, 100); that is, the particle cannot move out of this range in each dimension, so Xmax = 100.
8. Maximum error = 0.1 to 0.9.
9. The proposed method is tested with the same parameters as SPSO.

The test problems
Every new PSO technique has to be tested on benchmark problems. Keeping this in view, the proposed algorithm has been tested on 28 benchmark problems (15 scalable and 13 non-scalable). These problems vary in difficulty level and problem size. The performance of SPSO, MPSO and Pari PSO is evaluated on these benchmark problems, which have been divided into two problem sets.
Problem Set I (Scalable Problems): those problems in which the dimension of the problem can be increased or decreased at will. In general, the complexity of the problem increases as the problem size is increased.
Problem Set II (Non-Scalable Problems): those in which the problem size is fixed, but the problems have many local as well as global optima.

Analysis
In SPSO, MPSO and the newly proposed Pari PSO, the balance between the local and global exploration abilities is mainly controlled by the inertia weight, and experiments were performed to illustrate this. Setting the maximum velocity to two, it was found that SPSO, MPSO and Pari PSO with an inertia weight in the range [0.4, 0.9] on average have a better performance; that is, they have a large chance of finding the global optimum within a reasonable number of iterations. A time-decreasing inertia weight is found to be better than a fixed inertia weight of 0.8 with acceleration coefficients of 1.6.
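A time-decreasing inertia weight is commonly implemented as a linear decrease over the iterations. The exact schedule is not specified here, so the sketch below assumes the conventional linear decrease over the weight range [0.4, 0.9] stated in the parameter settings:

```python
def inertia_weight(iteration, max_iterations, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: w_start at iteration 0, w_end at the last.

    Early iterations (large w) favor global exploration; late iterations
    (small w) favor local refinement around the best positions found.
    """
    return w_start - (w_start - w_end) * iteration / max_iterations
```

This value would simply replace the fixed w in the velocity update at each iteration.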
A number of criteria are used to compare the performance of SPSO and MPSO with Pari PSO. The percentage of success is used to evaluate the reliability. The average number of function evaluations of the successful runs and the average computational time of the successful runs are used to evaluate the cost.
For Problem Set-I, the quality of the solution obtained is measured by the minimum, mean and standard deviation of the objective function values over thirty runs. This is shown in Tables 2, 8 and 14. The corresponding information for Problem Set-II is shown in Tables 5, 11 and 17 respectively. Similarly, the time-decreasing performance of SPSO, MPSO and Pari PSO is reported in Tables 3, 6, 9, 12, 15 and 18 respectively. The new Pari Particle Swarm Optimization algorithm was first tested with the parameter setting: inertia weight 0.6 and 0.7, swarm size 30, function evaluations 30,000, acceptable error 0.9 and acceleration coefficients 1.4 and 1.5. The results obtained by the new approach under this setting are listed in Tables 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and 11, and are also illustrated in Figs. 1 to 6. These results indicate that the new approach does not yield the global optimal point in 100% of cases. Finally, the new approach was tested with the parameter setting: inertia weight 0.8, swarm size 30, function evaluations 30,000, acceptable error 0.9 and acceleration coefficient 1.6. For this parameter setting the results indicate that Pari PSO is the most efficient at finding the global optimal point, compared with SPSO and MPSO, in terms of efficiency, computation time, reliability, accuracy and stability on several types of benchmark problems.

Experimental results and discussion
The performance of the proposed PSO model is tested on a number of analytical benchmark functions which have been extensively used in the literature to compare PSO-type metaheuristic algorithms. This paper utilizes the benchmark function set shown in Set-I and Set-II: the new algorithm was tested on 28 benchmark problems (15 scalable and 13 non-scalable). The Standard Particle Swarm Optimization implementation was written in C and compiled using the Borland C++ Version 4.5 compiler. For the purpose of comparison, all the simulations use the parameter settings of the SPSO implementation except the inertia weight w, the acceleration coefficients, the swarm size and the maximum velocity allowed. The swarm size (number of particles) varies from 20 to 30, the inertia weight from 0.4 to 0.9 and the acceleration coefficients between 1.4 and 2.0. The dynamic range for each element of a particle has been defined as (-100, 100), i.e., the particle cannot move out of this range in each dimension, so Xmax = 100. The maximum number of iterations allowed is 30,000. If the SPSO, Mean PSO or Pari PSO implementation cannot find an acceptable solution within 30,000 iterations, it is ruled that it fails to find the global optimum in that run.
A number of criteria have been used to evaluate the performance of SPSO, Mean PSO and Pari PSO. The percentage of success is used to evaluate the reliability. The average number of function evaluations of the successful runs and the average computational time of the successful runs are used to evaluate the cost. For Problem Set-I, conclusions have been drawn on the basis of the minimum, mean, success rate and standard deviation of the objective function values over fifty runs. The corresponding information for Problem Set-II has been drawn on a similar basis. The new approach has been tested with different parameter settings. When testing the new approach with swarm size 30, function evaluations 30,000, inertia weight 0.6 and 0.7, acceptable error 0.9 and acceleration coefficients 1.4 and 1.5, the new approach Pari PSO, SPSO and MPSO all failed to find the global optimal result on several scalable and non-scalable problems.
For the parameter setting swarm size 30, function evaluations 30,000, inertia weight 0.8, acceptable error 0.9 and acceleration coefficient 1.6, the proposed algorithm has been tested on the given benchmark problems. With this parameter setting the optimal solution was obtained in most cases. These results are shown in Tables 14, 15, 17 and 18. The results in Table 14 indicate that the new approach solved thirteen scalable problems with 100% success, outperforming SPSO and MPSO on all of them. All approaches failed to find the global optimal solution for one scalable problem. In the fifteenth problem the performance of Pari PSO is better than SPSO but not better than MPSO.
Moreover, the results in Table 15 show that Pari PSO finds the optimal point in fewer clock cycles than SPSO and MPSO on thirteen scalable problems. It thus takes less CPU time and is the most economical of these methods. Only in one problem does it take less time than SPSO but more time than MPSO.
The results in Table 17 show that the new approach solved eight non-scalable problems with 100% success, outperforming SPSO and MPSO. All approaches failed to solve two non-scalable problems successfully. In the remaining three problems the proposed approach is better than SPSO but not better than MPSO.

Conclusion
A new version of the Particle Swarm Optimization (PSO) has been introduced in this paper. The method will be known as Pari Particle Swarm Optimization (Pari PSO).

The performance of this approach has been compared with SPSO and MPSO in terms of efficiency, computation time, reliability, accuracy, stability and number of clock cycles.
The test results show that the proposed approach significantly improves the ability of PSO to find the global optimal solution and also increases the accuracy and convergence rate.