Research Article
Dynamic Parameters Optimization for Enhancing Performance and Stability of PSO

Amir Khosravani-Rad, Ramin Ayanzadeh and Elaheh Raisi

Trends in Applied Sciences Research, 2014, 9(5), 238-245.


In this study, a new method for optimal control of particle swarm optimization parameters based on fuzzy rules is presented. In the proposed method, to prevent premature convergence, the social and personal learning coefficients are updated according to the convergence rate of the algorithm. In other words, fuzzy linguistic variables and membership functions are employed to steer the swarm toward the global optimum. Several computational simulations were carried out to demonstrate the performance and stability of this method. Simulation results reveal superior optimality and stability and lower computational cost of the new algorithm compared to traditional metaheuristics such as standard particle swarm optimization and genetic algorithms, which justifies its advantages for particle swarm optimization algorithms.

ASCI-ID: 95-568

Setayeshi and Fadaei, 2011; Jang et al., 2005). Optimization problems arise whenever words like best, worst, cheapest, lowest, etc. are involved; their purpose is to find the optimal or near-optimal solution with respect to some goals among all feasible and potential solutions (Shahamatnia et al., 2011). Optimization problems are of fundamental importance in mathematics and computer science, including multiprocessor scheduling, job-shop scheduling, network and telecommunication routing, large-scale integrated circuit design, time-cost trade-off, training neural networks and timetabling (Jang et al., 2005; Russell and Norvig, 2006). To solve an optimization problem, the following steps must be taken.

Problem formulation: In this stage, the state space (search space), which comprises all states of the problem variables or, more loosely, the space of all feasible solutions, is created. Each point in the search space represents one potential solution. In training an artificial neural network, for instance, the system variables are the weights of the neurons (Ayanzadeh et al., 2011a; Jang et al., 2005). In scheduling problems, orders of tasks are the system variables and the set of all possible orders forms the task graph. In fact, the search space includes all possible and sometimes infeasible permutations (Russell and Norvig, 2006).

Defining an objective function: The objective function (cost function) receives a point in the state space as input and returns a real number as output; different solutions are then compared with respect to these outputs. In other words, the objective function assigns a quantitative value to measure qualitative features of the optimization problem. The purpose of optimization is to find a solution that minimizes/maximizes the objective function. In multi-objective optimization problems, several features must be maximized or minimized simultaneously. These features are often conflicting, which leads to many difficulties (Ayanzadeh et al., 2009b, 2011b; Marzban et al., 2014; Russell and Norvig, 2006).
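As a concrete illustration, a minimal objective function can be sketched as follows (the sphere function used here is a hypothetical example for exposition, not one of the benchmarks studied in this paper):

```python
# A hypothetical single-objective cost function: the sphere function.
# It maps a point in the search space to a real number by which
# candidate solutions are ranked (lower is better for minimization).
def sphere(x):
    return sum(xi * xi for xi in x)

# Two candidate solutions are compared via their objective values:
a, b = [1.0, 2.0], [0.5, 0.5]
better = a if sphere(a) < sphere(b) else b
```

The same interface (point in, real number out) is all that PSO or any other metaheuristic requires of a problem.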

Finding the optimum solution: Optimization is finding the best solution among all possible (and impossible) answers. If the state space is finite and small, searching among all potential solutions to find the best one can be performed in acceptable time; however, for large-scale problems the state space is huge, infinite or continuous, so exhaustively finding the optimum solution becomes practically impossible (Haupt and Haupt, 2004; Russell and Norvig, 2006). For many real-world problems, exhaustive search would take several years even on high-performance computational machines. In computer science, such problems are known as NP-complete (Russell and Norvig, 2006). Moreover, optimization methods may have to satisfy constraints under which parts of the state space are infeasible. These constraints are divided into hard and soft constraints: hard constraints “must” be satisfied, whereas soft constraints are only “advised” to be satisfied (Ayanzadeh et al., 2009a; Jaberi et al., 2012; Russell and Norvig, 2006).

Many methods have been introduced to solve optimization problems (Eiben and Smith, 2003; Engelbrecht, 2007; Mitchell, 2002). Most mathematical methods in applied mathematics compute the first and second derivatives of the objective function (Jang et al., 2005). Obviously, the objective function may not be differentiable in real-world applications, so mathematical approaches encounter limitations. To overcome these restrictions, numerical approaches have been proposed, including but not limited to heuristics such as hill climbing and simulated annealing (Cano et al., 2007; Hernandez et al., 2008; Jacobson et al., 2006; Johnson and Jacobson, 2002) and metaheuristics such as evolutionary algorithms (Eiben and Smith, 2003; Haupt and Haupt, 2004; Mitchell, 2002); swarm intelligence techniques (Engelbrecht, 2007), cellular and learning automata (Setayeshi and Fadaei, 2011), artificial immune systems (Engelbrecht, 2007) and memetic algorithms (Shahamatnia et al., 2011) are further examples of numerical optimization algorithms.

Particle Swarm Optimization (PSO) is a popular swarm intelligence method (like honey bee optimization and ant colony optimization) that applies both personal and global/social knowledge to optimize a cost function (Engelbrecht, 2007). Moreover, particle swarm optimization finds more optimal solutions in continuous state spaces compared to traditional metaheuristics such as genetic algorithms (Jaberi et al., 2013; Kennedy and Eberhart, 1995).

Besides all the advantages that PSO brings, it suffers from some drawbacks, such as premature convergence. In other words, although PSO performs well in continuous search spaces, its convergence speed is quite high, so the algorithm is likely to be trapped in local optima (Ayanzadeh et al., 2010).

In this sense, several modifications have been proposed. Settles and Soule (2005) injected genetic operators such as crossover and mutation into the structure of standard PSO to introduce the breeding PSO technique. In the same way, Ratnaweera et al. (2004) proposed hierarchical PSO (HPSO) and mutation PSO (MPSO) to prevent premature convergence. Employing a memory buffer to delay the impact of global knowledge on particles’ position vectors was another study that provided a desirable enhancement of standard PSO (Xiang et al., 2007). On this basis, in this study a novel fuzzy-based approach is introduced to avoid premature convergence in particle swarm optimization. The approach employs fuzzy sets for optimal and dynamic parameter adjustment; in effect, fuzzy sets are exploited to dynamically control the personal and social learning coefficients.

Particle swarm optimization: Particle swarm optimization was developed by Kennedy and Eberhart (1995) for problem solving and optimization applications. It is a population-based technique inspired by the social behavior and living style of animals. In some sense, this algorithm is the most reputable swarm intelligence approach for optimization purposes, particularly in continuous state spaces (Engelbrecht, 2007).

Similar to all other metaheuristics and swarm intelligence approaches, the PSO method optimizes the objective function in order to find the optimal solution. The swarm (population) is made up of potential solutions, called particles. Like other numerical optimization techniques, PSO is initialized with a random population. Particles then traverse the state space; during this movement, each particle updates its own velocity and position vectors based on the best experience of its own and of the whole population (Engelbrecht, 2007). Updating particles to accelerate the convergence rate is performed similarly to the arithmetic crossover operator in evolutionary algorithms. Although PSO is a general metaheuristic for optimization applications, it is mainly applicable to continuous search spaces; to apply the algorithm in discrete state spaces, some modifications of the standard formulation are required (Engelbrecht, 2007).

Each particle in this method consists of position (x), velocity (v) and personal memory vectors. The personal memory keeps track of the best position of each particle since the initial iteration. The velocity and position vectors are updated as follows:

$$v_j(t+1) = v_j(t) + C_1 r_1 \left[ x_j^{pBest}(t) - x_j(t) \right] + C_2 r_2 \left[ x_j^{gBest}(t) - x_j(t) \right] \qquad (1)$$

$$x_j(t+1) = x_j(t) + v_j(t+1) \qquad (2)$$

where v_j(t) represents the velocity of particle j at time t. In the same way, x_j(t) denotes the position of particle j at time t, x_j^{gBest}(t) is the best global position known to particle j until time t and x_j^{pBest}(t) is the best personal position of particle j until time t based on its memory; r_1 and r_2 are uniform random numbers and C_1 and C_2 are the personal and social learning coefficients, which are positive constants. Equation 1 consists of three major parts; the second and the third specify personal and social learning, respectively. Therefore, the C_1 and C_2 parameters define the impact of personal and social knowledge on the optimization process (Engelbrecht, 2007).
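The velocity and position updates described above can be sketched in Python (a minimal illustration, not the paper's implementation; the list-based vector representation and the default coefficient values of 2.0 are assumptions):

```python
import random

# One PSO update step: velocity (Eq. 1), then position (Eq. 2).
def update_particle(x, v, p_best, g_best, c1=2.0, c2=2.0):
    r1, r2 = random.random(), random.random()   # uniform in [0, 1)
    new_v = [v[i]
             + c1 * r1 * (p_best[i] - x[i])     # personal (cognitive) term
             + c2 * r2 * (g_best[i] - x[i])     # social term
             for i in range(len(x))]
    new_x = [x[i] + new_v[i] for i in range(len(x))]  # position update
    return new_x, new_v
```

Note that when a particle already sits at both its personal and global best, the two learning terms vanish and the particle simply coasts on its previous velocity.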


Previous experiments show that as the complexity of the problem (state space size) increases, the efficiency of particle swarm optimization decreases noticeably. In fact, PSO’s performance degrades exponentially with the complexity of the search space. Furthermore, it has been shown that static coefficients (offline parameter adjustment) cannot prevent premature convergence (Ratnaweera et al., 2004; Settles and Soule, 2005; Xiang et al., 2007). To deal with this drawback, a novel approach is proposed to enhance the performance and stability of particle swarm optimization by dynamically adjusting the personal and social learning coefficients online during the optimization process.

In this method, the rate of convergence in each iteration is compared to the rate of convergence in the previous iteration. That is, the decrement/increment rate of the objective function for the global best particle in the current iteration, relative to the previous iteration, is calculated. To evaluate the coefficients, two fuzzy sets named "low convergence" and "high convergence" are defined. Figure 1 illustrates the membership functions of these two fuzzy sets (Ayanzadeh et al., 2012; Dahmani and Benyettou, 2004; Mamlook, 2006; Gholami et al., 2014; Jaberi et al., 2011).

In the case of a low convergence rate, the social and personal learning coefficients are increased and decreased, respectively. On the contrary, when the convergence rate is high, the personal and social learning coefficients are increased and decreased, respectively. These modifications are performed on the basis of the low and high membership functions.
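The adjustment rule can be sketched as follows. This is a hedged illustration only: the exact membership functions of Fig. 1 are not reproduced here, so the simple linear memberships, the normalized convergence rate in [0, 1] and the step size `delta` are all assumptions.

```python
# Assumed linear membership functions over a normalized convergence rate.
def membership_low(rate):
    return max(0.0, min(1.0, 1.0 - rate))

def membership_high(rate):
    return max(0.0, min(1.0, rate))

def adjust_coefficients(c1, c2, rate, delta=0.1):
    """c1: personal learning coefficient, c2: social learning coefficient."""
    low, high = membership_low(rate), membership_high(rate)
    # Low convergence: increase social, decrease personal (weighted by `low`).
    c2 += delta * low
    c1 -= delta * low
    # High convergence: increase personal, decrease social (weighted by `high`).
    c1 += delta * high
    c2 -= delta * high
    return c1, c2
```

The fuzzy memberships make the adjustment gradual: intermediate convergence rates partially activate both rules rather than switching abruptly between them.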

Clearly, the personal and social learning parameters must be initialized in advance, so several scenarios are possible. For example, the learning coefficients can be initialized with constant or random values (usually with a uniform distribution). However, given the online coefficient adjustment, the initial values appear to have no remarkable impact on the performance and stability of the algorithm.



Simulation results are discussed to evaluate the performance of the proposed method. For this purpose, two types of experiments were performed: visual and statistical evaluations.

Visual evaluations: In this section, the optimization progress for minimizing the Ackley function with different numbers of variables is shown. Figures 2-5 depict the progress of optimizing the Ackley function with the proposed method for n = 2, 5, 10 and 20, respectively.
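For reference, the n-dimensional Ackley benchmark in its standard form (parameters a = 20, b = 0.2, c = 2π, with global minimum 0 at the origin) can be sketched as:

```python
import math

# Standard Ackley test function: a multimodal benchmark whose many
# shallow local minima make it a common premature-convergence test.
def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(c * xi) for xi in x) / n
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e
```

The dimension n corresponds to the number of variables varied across Fig. 2-5.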

Fig. 1: Membership functions of low and high convergence

Fig. 2: Minimization process for Ackley function with n = 2

Fig. 3: Minimization process for Ackley function with n = 5

Fig. 4: Minimization process for Ackley function with n = 10

It can be seen that utilizing fuzzy sets for online and dynamic parameter adjustment in particle swarm optimization reduces the probability of premature convergence. Strictly speaking, the proposed method lowers the convergence velocity as the complexity is raised.

Fig. 5: Minimization process for Ackley function with n = 20

Table 1: Statistical indicators for Ackley function optimization with n = 2

Table 2: Statistical indicators for Ackley function optimization with n = 5

Table 3: Statistical indicators for Ackley function optimization with n = 10

Table 4: Statistical indicators for Ackley function optimization with n = 20

Statistical evaluations: Due to the stochastic behaviour of particle swarm optimization, different results are obtained on every run. In the statistical experiments, the performance and stability of the proposed approach are compared with other traditional techniques. Each simulation was rerun 30 times to extract statistical indicators; specifically, the best (to evaluate accuracy and efficiency), average (to evaluate quality) and variance (to evaluate stability) indicators are estimated.
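The three indicators can be extracted as sketched below. `run_optimizer` is a hypothetical stand-in for one complete optimization run returning its best objective value; the placeholder body is an assumption for illustration only.

```python
import random
import statistics

def run_optimizer(seed):
    # Placeholder for one full (stochastic) PSO run; a real run would
    # return the best objective value found for the given random seed.
    random.seed(seed)
    return random.uniform(0.0, 1.0)

results = [run_optimizer(s) for s in range(30)]   # 30 independent reruns
best = min(results)                               # accuracy / efficiency
average = statistics.mean(results)                # quality
variance = statistics.pvariance(results)          # stability
```

Reporting the variance alongside the best and average values is what allows the stability comparison across algorithms in Tables 1-4.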

Tables 1-4 present the evaluated statistical indicators for minimizing the multi-variable Ackley function for n = 2, 5, 10 and 20, respectively. As shown, the newly proposed method generates more desirable and stable results compared to genetic algorithms and standard particle swarm optimization.


Particle swarm optimization is a reputable solution for optimization problems that has attracted a lot of attention in recent decades thanks to its special properties. Over continuous ranges PSO is particularly effective and easy to implement: there are few parameters to adjust and, compared to traditional metaheuristics such as genetic algorithms, no crossover or mutation calculations are required. One major flaw of this method is premature convergence; expressly, the algorithm locates local and global optima in the initial iterations but then converges to one of these initial optima.

In this study, a novel method is presented to dynamically adjust the social and personal learning coefficients. In the proposed method, the personal learning coefficient is increased when the algorithm is trapped in a local optimum and, when the convergence of the algorithm is high, the social learning coefficient is decreased. Several computational simulations were performed to verify and validate the performance and stability of the proposed method. Simulation results show that the proposed method is more stable and effective in comparison with standard particle swarm optimization and genetic algorithms.

" class="btn btn-success" target="_blank">View Fulltext

Related Articles

Strategies of Differential Evolution for Optimum Cropping Pattern

Trends in Applied Sciences Research, 2010, 5(1), 1-15.

Response Surface Method as an Efficient Tool for Medium Optimisation

Trends in Applied Sciences Research, 2011, 6(2), 121-129.

Fatigue Analysis of Hydraulic Pump Gears of JD 1165 Harvester Combine through Finite Element Method

Trends in Applied Sciences Research, 2011, 6(2), 174-181.

Review on Horizontal Axis Wind Turbine Rotor Design and Optimization

Trends in Applied Sciences Research, 2011, 6(4), 309-344.

Analytical Methods for Quality Assessment of Biodiesel from Animal and Vegetable Oils

Trends in Applied Sciences Research, 2011, 6(6), 537-553.

Development of Delivery Cargoes for Debriding Enzymes Effective in Wound Healing

Trends in Applied Sciences Research, 2011, 6(8), 863-876.

A New Reinforcement Learning Optimization Method for Capacitor Allocation Considering Variable Load

Trends in Applied Sciences Research, 2012, 7(3), 210-220.

Biodegradation Potential of Tropical Hydrocarbon Degrading Providencia stuartii

Trends in Applied Sciences Research, 2020, 15(3), 253-259.

Considering the Effect of Series Capacitor in Optimal Coordination of Directional Over-current Relays

Trends in Applied Sciences Research, 2012, 7(6), 421-433.

Cited By

A comparative effect of presenting vocabularies in semantically related and unrelated sets on Iranian EFL learners’ short term retention

International Journal of Research Studies in Education, 2016, 5(4), . DOI: 10.5861/ijrse.2016.1260

Preparation of activated carbon from peanut shell by conventional pyrolysis and microwave irradiation-pyrolysis to remove organic dyes from aqueous solutions

Journal of Environmental Chemical Engineering, 2016, 4(1), 266. DOI: 10.1016/j.jece.2015.11.018

Facile photocatalytic reactor development using nano-TiO2 immobilized mosquito net and energy efficient UVLED for industrial dyes effluent treatment

Journal of Environmental Chemical Engineering, 2016, 4(1), 319. DOI: 10.1016/j.jece.2015.11.024

Magnetically modified peanut husks as an effective sorbent of heavy metals

Journal of Environmental Chemical Engineering, 2016, 4(1), 549. DOI: 10.1016/j.jece.2015.10.039

Evaluating the suitability of maggot meal as a partial substitute of soya bean on the productive traits, digestibility indices and organoleptic properties of broiler meat

Journal of Animal Physiology and Animal Nutrition, 2016, (), n/a. DOI: 10.1111/jpn.12419

Effect of dietary probiotic, prebiotic and synbiotic supplementation on performance, immune responses, intestinal morphology and bacterial populations in broilers

Journal of Animal Physiology and Animal Nutrition, 2016, (), n/a. DOI: 10.1111/jpn.12431

Effect of Dates of Sowing and Soil Moisture Level in Different Growth Stages and Yield Dynamics of Fenugreek (Trigonella foenum-graecum L.)

National Academy Science Letters, 2016, (), . DOI: 10.1007/s40009-016-0428-2

In vitro regeneration of Melastoma malabatricum Linn. through organogenesis and assessment of clonal and biochemical fidelity using RAPD and HPLC

Plant Cell, Tissue and Organ Culture (PCTOC), 2016, 124(3), 517. DOI: 10.1007/s11240-015-0911-3

Europium incorporation dynamics and some physical investigations within ZnO sprayed thin films

Materials Science in Semiconductor Processing, 2016, 43(), 8. DOI: 10.1016/j.mssp.2015.11.005

Effects of molarity on structural, optical, morphological and CO2 gas sensing properties of nanostructured copper oxide films deposited by spray pyrolysis

Materials Science in Semiconductor Processing, 2016, 43(), 214. DOI: 10.1016/j.mssp.2015.12.019

Climate change trends in Malta and related beliefs, concerns and attitudes toward adaptation among Gozitan farmers

European Journal of Agronomy, 2016, 74(), 18. DOI: 10.1016/j.eja.2015.11.011

Gold-Catalyzed Enantio- and Diastereoselective Syntheses of Left Fragments of Azadirachtin/Meliacarpin-Type Limonoids

The Journal of Organic Chemistry, 2016, 81(3), 751. DOI: 10.1021/acs.joc.5b02560

Removal of methyl orange from synthetic wastewater using nano-MgO and nano-MgO/UV combination

Desalination and Water Treatment, 2016, 57(18), 8330. DOI: 10.1080/19443994.2015.1017744

Modeling of effluent quality parameters in a submerged membrane bioreactor with simultaneous upward and downward aeration treating municipal wastewater using hybrid models

Desalination and Water Treatment, 2016, 57(18), 8068. DOI: 10.1080/19443994.2015.1021852