MANY-OBJECTIVE PARTICLE SWARM OPTIMIZATION BASED ON PARALLEL CELL COORDINATE SYSTEM
Gary G. Yen, Ph.D., FIEEE, FIET
Professor, Oklahoma State University
Past President, IEEE Computational Intelligence Society
Founding Editor-in-Chief, IEEE Computational Intelligence Magazine

ieee-wcci2014.org - Paper Submission Deadline: December 20, 2013
ieee-wcci2016.org - July 24-29, 2016

Multiobjective Optimization
- Optimization problems that involve more than one objective function
- Very common, yet difficult, problems in science, engineering, and business management
- Non-conflicting objectives: a single optimal solution satisfies all objectives simultaneously, so the problem reduces to a single-objective problem (SOP)
- Competing objectives: the objectives cannot all be optimized simultaneously
- For a MOP, our goal is to search for a set of "acceptable" solutions, each of which may be only suboptimal for any one objective
- In operations research/management terms: multiple criteria decision making (MCDM)

Why MOP? Buying an Automobile
- Objectives: reduce cost while maximizing comfort
- Which solution (1, A, B, C, 2) is best?
- No solution in this set makes both objectives better than any other solution in the set
- There is no single optimal solution, only a trade-off between the conflicting objectives, cost and comfort
Mathematical Definition
A mathematical model formulates the optimization problem:
- Design variables: decision and objective vectors
- Constraints: equality and inequality; a greater-than-or-equal-to inequality constraint can be converted to a less-than-or-equal-to constraint by multiplying by -1
- Objective function: maximization can be converted to minimization by the duality principle

  min_x  y = f(x, e)          (objective vector)
  s.t.   h(x, e) = 0          (equality constraints)
         g(x, e) <= 0         (inequality constraints)
         x^L <= x <= x^U      (variable bounds)

where y is the objective vector, x the decision vector, and e the environment states.

Pareto Optimality
Formal definition: consider the minimization of the n components f_k, k = 1, ..., n, of a vector function f of a vector variable x in a universe U, where f(x) = (f_1(x), ..., f_n(x)). A decision vector x_v in U is said to be Pareto-optimal if and only if there is no x_u in U for which u = f(x_u) = (u_1, ..., u_n) dominates v = f(x_v) = (v_1, ..., v_n); that is, there is no x_u such that

  u_i <= v_i for all i = 1, ..., n,  and  u_i < v_i for at least one i.

[Figure: nondominated set (Pareto front) with points A, B, C in the (f1, f2) objective space, and the corresponding Pareto-optimal set A, B, C in the (x1, x2) decision space]

Pareto Optimal Set and Nondominated Set
- Pareto-optimal set: the set of all Pareto-optimal decision vectors, which yields a set of nondominated solutions
- Nondominated set: the corresponding set of objective vectors, i.e., the Pareto front

ZDT Test Suite
[Figure: ZDT test instances]
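The dominance relation defined above is easy to check directly. A minimal sketch for the minimization case (the function name and list-based vectors are illustrative, not from the slides):

```python
# Pareto-dominance test for minimization: u dominates v if u is no worse
# in every objective and strictly better in at least one.
def dominates(u, v):
    assert len(u) == len(v)
    return all(ui <= vi for ui, vi in zip(u, v)) and \
           any(ui < vi for ui, vi in zip(u, v))

# Among points on the front neither dominates the other, but a point
# worse in both objectives is dominated.
A, B, D = [1.0, 4.0], [2.0, 3.0], [2.5, 4.5]
print(dominates(A, B))  # False: A and B are mutually nondominated
print(dominates(A, D))  # True: A is better in both objectives
```

Note that `dominates(A, A)` is also False: a vector never dominates itself, which is why a front of mutually nondominated points can coexist in the archive.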
Why Population-Based Heuristics?
- Unorthodox, stochastic, population-based parallel search heuristics may be more suitable for MOPs
- Classification of EAs: Genetic Algorithm; Genetic Programming; Evolutionary Programming; Evolution Strategy; Ant Colony Optimization; Artificial Immune System; Particle Swarm Optimization; Differential Evolution; Memetic Algorithm
- They can handle complex problems with discontinuities, multimodality, disjoint feasible spaces, and dynamism

Research Issues for MOPs
- Modifying the fitness assignment
- Enhancing convergence
- Preserving diversity
- Managing the population
- Handling constraints and uncertainty

Progression in EMO Developments
- The EMO community has progressed from evolutionary and nature-inspired computational metaphors, to solving single-objective optimization problems, to constrained optimization problems, to dynamic optimization problems, to multi-objective optimization problems, and now to many-objective optimization problems.
- Multi-objective optimization problems (MOPs) with a large number of objectives (in general more than five) are referred to as MaOPs.
- On problems with many objectives, nearly all algorithms perform poorly because selection pressure is lost when fitness evaluation is based solely on Pareto domination.

With an increasing number of objectives, several challenges must be addressed:
- The Pareto-dominance relation becomes ineffective, which deteriorates the convergence ability of MOEAs
- An exponentially large number of solutions is required to approximate the whole Pareto front
- Computational complexity must be balanced against the quality of the solutions found
- Visualizing a high-dimensional front is genuinely difficult
- Metrics are needed to quantify the performance of the designs

Research Issues for MaOPs
- Objective reduction
- Non-Pareto-based techniques
- Incorporation of preference information
- Gradient information

Modifications of MOEAs for MaOPs
- Pareto domination revisions: dominance area control, ε-dominance, k-optimality, grid dominance, fuzzy-based Pareto dominance (FD-NSGA-II, FD-SPEA2; He & Yen, TEVC, 2013)
- Decomposition methods: MOEA/D (Zhang & Li, TEVC, 2007); NSGA-III (Deb & Jain, TEVC, 2013)
- Grid-based approaches: TDEA (Pierro et al., TEVC, 2007); ε-MOEA (Deb et al., Evol. Comput., 2005); GrEA (Yang et al., TEVC, 2013)
- Indicator-based methods: SMS-EMOA (Beume et al., Eur. J. Oper. Res., 2007); HypE (Bader & Zitzler, Evol. Comput., 2011)
State-of-the-Art MaOEAs
[Figure: overview of state-of-the-art MaOEAs]

On the PSO Side
- PSO is a meta-heuristic inspired by the social behavior of bird flocking and fish schooling; its relative simplicity and its success as a single-objective optimizer have motivated researchers to extend PSO from SOPs to MOPs.
- However, apart from the issue common to all MOEAs of maintaining an archive, there are two issues particular to MOPSO:
  - Managing convergence and diversity: the fast convergence of PSO incurs a rapid loss of diversity during the evolutionary process
  - Selecting the global best (gBest) and personal best (pBest): there is no absolute best solution, only a set of nondominated solutions
- Many mechanisms have been proposed in existing MOPSOs, in terms of leader selection, archive maintenance, and perturbation, to tackle these issues.
- However, few MOPSOs are designed to dynamically adjust the balance between exploration and exploitation according to feedback obtained by interacting with the evolutionary environment.
- The challenges in MOPSO for managing convergence and diversity:
  - updating the archive
  - selecting gBest and pBest
  - self-adapting the flight parameters
  - perturbing upon stagnation

Motivations
- A mechanism (different from grid-based approaches) for:
  - assessing diversity, to select a particle's global best and to update the archive
  - evaluating the evolutionary environment, to dynamically adjust the evolutionary strategies
- Parallel coordinates are a popular way of visualizing high-dimensional geometry and analyzing multivariate data.
- Transform a multi-objective space into a 2-D grid to evaluate the distribution of an approximate Pareto front.

Parallel Cell Coordinate System
- Map the individuals in the global archive from the Cartesian coordinate system into the Parallel Cell Coordinate System (PCCS): K nondominated individuals in an M-objective space are mapped into K-by-M cells.
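The mapping itself can be sketched in a few lines. The ceil-based normalization below is an assumption consistent with "K nondominated individuals into K-by-M cells" (each objective's archive range divided into K equal cells); the names `pccs_map` and `cells` are illustrative:

```python
# Map K archive members (objective vectors of length M) onto a K x M grid
# of integer cell coordinates in 1..K, normalizing each objective by the
# archive's own min/max in that objective.
import math

def pccs_map(archive):
    K, M = len(archive), len(archive[0])
    f_min = [min(p[m] for p in archive) for m in range(M)]
    f_max = [max(p[m] for p in archive) for m in range(M)]
    cells = []
    for p in archive:
        row = []
        for m in range(M):
            span = f_max[m] - f_min[m]
            # A degenerate objective (all members equal) falls into cell 1.
            c = math.ceil(K * (p[m] - f_min[m]) / span) if span > 0 else 1
            row.append(max(c, 1))  # the per-objective minimum maps to cell 1
        cells.append(row)
    return cells

archive = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]  # K = 3 members, M = 2 objectives
print(pccs_map(archive))  # [[1, 3], [2, 2], [3, 1]]
```

Note how a member that is best in one objective and worst in the other lands in cells 1 and K, so the grid encodes the order relation of the front at cell resolution.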
- Estimate density to update the archive and to select the diversity gBest (d_gBest), the member with minimal density:
  - The distance between two vectors, named the parallel cell distance, is measured as the sum, over all objectives, of the numbers of cells by which they are separated.
  - The density of P_i in the hyperspace formed by the archive is measured from the parallel cell distances between P_i and all other members P_j (j = 1, 2, ..., K; j != i) of the archive.
- Rank the nondominated solutions in the archive to select the convergence gBest (c_gBest), the member with minimal potential:

  Potential(P_i) = Σ_{m=1..M} L(P_{i,m})

  The potential quantifies a nondominated solution among its competitors in the archive by combining the order relation along the optimization direction with its degree, in units of cells, in the PCCS.
- Detect the evolutionary environment by entropy:

  Entropy(t) = - Σ_{k=1..K} Σ_{m=1..M} [Cell_{k,m}(t) / (KM)] · log[Cell_{k,m}(t) / (KM)]
  ΔEntropy(t) = Entropy(t) - Entropy(t-1)

  - An abrupt change of entropy indicates a convergence status: a new solution dominates some old solutions in the archive, and the population makes progress or breaks through a local Pareto front.
  - A mild change indicates a diversity status: a new solution with a better density replaces an old solution in the archive.
  - No change indicates a stagnation status.

[Figure: curves of Entropy and ΔEntropy detected on ZDT4, a problem with many local Pareto fronts]
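The PCCS quantities above can be sketched directly on the integer cell coordinates. This is a hedged sketch: the density is taken here as the inverse of the total parallel cell distance (so the most isolated member has minimal density), and Cell_{k,m} is interpreted as the number of members whose m-th coordinate equals k; both interpretations are assumptions, and all names are illustrative:

```python
# PCCS quantities over K x M integer cell coordinates (values in 1..K).
import math

def parallel_cell_distance(ci, cj):
    # Sum over objectives of the number of cells two members lie apart.
    return sum(abs(a - b) for a, b in zip(ci, cj))

def density(i, cells):
    # ASSUMPTION: inverse of the total parallel cell distance to all other
    # members, so crowded members score high and isolated members score low;
    # d_gBest is the member with minimal density.
    total = sum(parallel_cell_distance(cells[i], cj)
                for j, cj in enumerate(cells) if j != i)
    return 1.0 / total if total > 0 else float("inf")

def potential(ci):
    # Potential(P_i) = sum_m L(P_i,m); c_gBest minimizes it.
    return sum(ci)

def entropy(cells):
    # ASSUMPTION: Cell_{k,m} = count of members whose m-th coordinate is k,
    # so the K*M counts sum to K*M and the terms form a probability mass.
    K, M = len(cells), len(cells[0])
    hist = [[0] * M for _ in range(K)]
    for row in cells:
        for m, c in enumerate(row):
            hist[c - 1][m] += 1
    total = K * M
    return -sum((h / total) * math.log(h / total)
                for col in hist for h in col if h > 0)
```

On the example grid `[[1, 3], [2, 2], [3, 1]]` every cell count is 1, so the entropy is maximal (log 6), matching the intuition that a uniformly spread archive has high entropy.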
Proposed pccsAMOPSO
- Updating the archive: complexity O(ML^2), where M is the number of objectives and L is the number of members in the archive.
- Selecting gBest from a leader group of M c_gBests and M d_gBests:
  - the type of candidate, c_gBest or d_gBest, is decided probabilistically, and a candidate for each particle is then randomly drawn from the chosen type
  - the threshold is the maximal probable variation of the entropy
- Selecting pBest from pArchive, which is bounded at a quarter of the size of gArchive to decrease the cost of pArchive maintenance; pBest is selected according to the minimal hyperbox that, in each dimension d, encloses the particle's updated position x_{k,d} + v_{k,d}, the pBest candidate, and gBest.
- Self-adaptive PSO flight: the inertia weight ω(t) and the acceleration coefficients c1(t) and c2(t) are adapted at each iteration from their previous values ω(t-1), c1(t-1), c2(t-1) according to the sign of ΔEntropy(t) and an entropy step size, and are held unchanged when ΔEntropy(t) = 0.

[Figure: an example of the self-adaptive parameters obtained by pccsAMOPSO on ZDT4, a problem with many local Pareto fronts]
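The slides give piecewise update rules for ω(t), c1(t), and c2(t). As a purely illustrative sketch of entropy-feedback parameter control — the threshold, bounds, and step size below are assumptions, not the paper's formulas:

```python
# HYPOTHETICAL entropy-feedback flight-parameter control.  This only
# illustrates the idea that the parameters are nudged according to the
# detected status; every constant here is an assumption.
def adapt_parameters(w, c1, c2, delta_entropy, step=0.05, threshold=0.01):
    if abs(delta_entropy) > threshold:
        # Abrupt change -> convergence status: lean toward exploitation.
        w = max(0.4, w - step)
        c1, c2 = max(1.5, c1 - step), min(2.5, c2 + step)
    elif delta_entropy != 0.0:
        # Mild change -> diversity status: lean toward exploration.
        w = min(0.9, w + step)
        c1, c2 = min(2.5, c1 + step), max(1.5, c2 - step)
    # delta_entropy == 0 -> stagnation: parameters are held; pccsAMOPSO
    # instead triggers the perturbation operator described next.
    return w, c1, c2
```

The held-parameters branch mirrors the slide's "no change indicates a stagnation status", where the perturbation operator, not the flight parameters, provides the escape mechanism.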
- Perturbing a particle, to accelerate convergence or to escape from a local Pareto front, uses the elitism learning strategy from Z. H. Zhan, J. Zhang, Y. Li, and H. S. Chung, "Adaptive particle swarm optimization," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 6, pp. 1362-1381, Dec. 2009:
  - randomly perturb one dimension d of gBest:

    gBest_d = gBest_d + (x_d^upper - x_d^lower) · Gaussian(0, lr^2)

  - the perturbation range is damped by the Gaussian function G(0, lr^2), whose learning rate lr is linearly decreased from 1.0 down to 0.1.

[Figure: the integrated algorithm of pccsAMOPSO]
[Figure: pccsAMOPSO on DTLZ2 with 3 objectives]

Experiment
- Peer algorithms:
  - sigmaMOPSO: sigma-value method (Mostaghim and Teich, 2003)
  - agMOPSO: adaptive grid (Padhye, 2009)
  - cdMOPSO: crowding distance (Coello, Pulido and Lechuga, 2004)
  - clusterMOPSO: clustering the population (Mostaghim and Teich, 2003)
  - pdMOPSO: ROUND + RAND + PROB (Alvarez-Benitez, Everson and Fieldsend, 2005)
- Test instances:
  - ZDT series (2 objectives): five instances
  - DTLZ series (3 objectives): seven instances
- Metrics: IGD and hypervolume (HV), with the HV reference point set at 11 in each objective.

  HV(PF_known) = Λ( ∪_i a_i | x_i ∈ PF_known )

  where a_i is the hyperbox bounded by member x_i and the reference point, and Λ(·) denotes the Lebesgue measure.

[Figure: HV obtained by 1-pccsAMOPSO, 2-sigmaMOPSO, 3-agMOPSO, 4-cdMOPSO, 5-clusterMOPSO, 6-pdMOPSO]
[Figure: IGD obtained by 1-pccsAMOPSO, 2-sigmaMOPSO, 3-agMOPSO, 4-cdMOPSO, 5-clusterMOPSO, 6-pdMOPSO]

Scores of merit on the 12 test instances for HV (proposed MOPSO compared against each peer):

             sigmaMOPSO  agMOPSO  cdMOPSO  clusterMOPSO  pdMOPSO  NSGA-II  SPEA2  MOEA/D
Better (+)       10         12       9          12          12       12      10      9
Same (=)          2          0       0           0           0        0       1      1
Worse (-)         0          0       3           0           0        0       1      2
Score            10         12       6          12          12       12       9      7

  IGD(A, P*) = (1 / |P*|) Σ_{v ∈ P*} d(v, A)

  where P* is a set of points uniformly sampled along the true Pareto front, A is the approximation set, and d(v, A) is the minimum distance from v to A.

The accuracy and stability of an algorithm are indicated by the mean and the variance, respectively, of its ranks among all peer algorithms on all test instances. The proposed MOPSO remarkably dominates all the other algorithms in terms of both the mean and the variance of ranks on the 12 test instances.

Archive Strategy
[Figure: the approximate Pareto fronts obtained by PCCS, adaptive grid, and crowding distance on DTLZ2]
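The IGD metric used above can be computed in a few lines. A minimal sketch with Euclidean distance (the function names are illustrative):

```python
# Inverted generational distance: the average distance from each point of a
# reference set P* (sampled on the true Pareto front) to its nearest
# neighbor in the approximation set A.  Lower is better; 0 means A covers P*.
import math

def igd(P_star, A):
    def dist(u, w):
        return math.sqrt(sum((ui - wi) ** 2 for ui, wi in zip(u, w)))
    return sum(min(dist(v, a) for a in A) for v in P_star) / len(P_star)

P_star = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(P_star, P_star))                         # 0.0
print(igd(P_star, [[0.0, 1.0], [1.0, 0.0]]) > 0)   # True: the gap at [0.5, 0.5] is penalized
```

Because every reference point contributes its nearest-neighbor distance, IGD penalizes both poor convergence and poor spread, which is why it is paired with HV in the comparison above.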
- The fronts obtained by PCCS are more uniform and better converged, as measured by IGD. [Figure: IGD of PCCS vs. adaptive grid vs. crowding distance]

Conclusions
- A new method for estimating density and assessing the evolutionary environment, named PCCS, is devised for selecting gBest, updating the archive, and providing feedback information about the evolutionary environment.
- An integrated, adaptive MOPSO based on PCCS, abbreviated pccsAMOPSO, is proposed, with strategies including a leader group, self-adaptive parameters, and a perturbation operator for balancing convergence and diversity.
- The proposed algorithm significantly outperforms the other five state-of-the-art MOPSOs on the 12 test problems in terms of HV and IGD.