James C. Spall. Introduction to Stochastic Search and Optimization. Estimation, Simulation, and Control - Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. 2003. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/.
Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies: Similarities and Differences. In Hans-Paul Schwefel and Reinhard Männer, editors, Proceedings of the International Conference on Parallel Problem Solving from Nature (PPSN I), October 1-3, 1990, Dortmund, Germany, volume 496 of Lecture Notes in Computer Science, pages 455-469, Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-54148-6. https://doi.org/10.1007/BFb0029787.
Bases: Algorithm2
The EA is a population-based algorithm using unary and binary operators.
It starts with a list of mu randomly initialized solutions. In each step, it retains the mu best solutions and generates lambda_ new solutions from them, using the unary operator (op1) with probability 1-br and the binary search operator (op2) at rate br. From the joint set of mu+lambda_ solutions, it again selects the best mu ones for the next iteration, and so on.
If mu=1, lambda_=1, and br=0, then this algorithm is exactly equivalent to the RLS if the same unary and nullary operators are used.
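To make this concrete, here is a minimal, hypothetical sketch of the (mu+lambda) loop in plain Python. The callables op0, op1, op2, and evaluate as well as the loop bound max_iterations are stand-ins, not the actual moptipy API; solutions are assumed to be immutable values for simplicity.

```python
import random

def ea(op0, op1, op2, evaluate, mu, lambda_, br, max_iterations):
    """Sketch of the (mu+lambda) EA; smaller objective values are better."""
    # start with mu randomly initialized, evaluated solutions
    population = [(x, evaluate(x)) for x in (op0() for _ in range(mu))]
    for _ in range(max_iterations):
        population.sort(key=lambda rec: rec[1])  # best solutions first
        parents = population[:mu]                # retain the mu best
        offspring = []
        for _ in range(lambda_):
            if random.random() < br:             # binary operator at rate br
                a, b = random.sample(parents, 2)
                new_x = op2(a[0], b[0])
            else:                                # unary operator, prob. 1 - br
                new_x = op1(random.choice(parents)[0])
            offspring.append((new_x, evaluate(new_x)))
        population = parents + offspring         # mu + lambda_ compete
    return min(population, key=lambda rec: rec[1])
```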
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
Fitness Assignment Processes assign scalar fitnesses to solutions.
A Fitness Assignment Process uses the information of a set of instances of FRecord to compute their scalar fitness. This fitness is then used by Selection algorithms. Selection is important in, e.g., Evolutionary Algorithms (moptipy.algorithms.so.general_ea.GeneralEA), where it is used in two places: as survival selection, it chooses which points will be allowed to remain in the population and, hence, survive into the mating pool for the next generation; as mating selection, it chooses the inputs of the search operations from the mating pool.
The following Fitness Assignment Processes have been implemented so far:
Direct copies the objective values (f) of the solution records directly over to the fitness.
Rank ranks the solutions by their objective values and uses the ranks as fitness.
RankAndIteration also uses the rank of the objective values as fitness. Additionally, if two solutions have the same objective value but one of them is newer, then the newer one receives the better fitness. This is done by accessing the iteration counter (it) of the solution records.
FFA performs Frequency Fitness Assignment, which is suitable for problems with few different objective values and large computational budgets.
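As an illustration of such a process, a rank-style assignment as described for Rank might look roughly like the following sketch. The record objects are assumed to expose an objective value f and a writable fitness attribute, mirroring the FRecord fields named above; this is not the library code.

```python
def assign_rank_fitness(records):
    """Sketch: use the rank of each record's objective value as fitness."""
    ordered = sorted(records, key=lambda rec: rec.f)  # best (smallest) f first
    previous_f = None
    rank = 0
    for index, rec in enumerate(ordered):
        if rec.f != previous_f:   # records with equal f share the same rank
            rank = index + 1
            previous_f = rec.f
        rec.fitness = rank        # smaller fitness = better
```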
Bases: Record, FitnessRecord
A point x in the search space with its quality and fitness.
the fitness assigned to the solution x
Bases: Component
The base class for fitness assignment processes.
Log the information of this fitness assignment process to the process.
An instance of Process is given to this method after the algorithm has completed its work. The fitness assignment process may then store some data as a separate log section via add_log_section() if it wants to. Implementing this method is optional; it is only invoked if has_log() returns True.
A fully configurable, general (mu+lambda) Evolutionary Algorithm.
This evolutionary algorithm begins by sampling mu solutions using the nullary search operation op0. In each iteration, it then uses mu existing solutions as input for the search operations, where, for each solution to be sampled, the binary operation op2 is used with probability br and, otherwise, the unary operator op1 is used. The inputs of both operators are chosen from the mu solutions using mating selection. After lambda_ new solutions have been created this way (and have been evaluated as well), a fitness assignment process (Fitness) assigns fitness values to them based on their objective values (f), maybe also using the index of the iteration (it) in which they were created. The survival selection survival then chooses, from the joint set of mu+lambda_ solutions, the mu solutions for the next iteration. Both mating and survival selection are instances of class Selection.
This algorithm is equivalent to EA, but allows for using a customized fitness assignment step (Fitness) as well as customizable survival and mating selection (Selection).
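A hypothetical sketch of one iteration, showing where the fitness assignment and the two selection steps plug in; Rec and the callables are illustrative stand-ins for the FRecord, Fitness, and Selection machinery, not the actual API.

```python
import random
from dataclasses import dataclass

@dataclass
class Rec:
    """An illustrative solution record: point x, objective f, iteration it."""
    x: object
    f: float
    it: int
    fitness: float = 0.0

def general_ea_step(population, fitness_assign, mating_select, survival_select,
                    op1, op2, evaluate, mu, lambda_, br, iteration):
    """Sketch of one iteration of the general (mu+lambda) EA."""
    offspring = []
    for _ in range(lambda_):
        if random.random() < br:                 # binary operator at rate br
            a, b = mating_select(population, 2)  # mating selection
            new_x = op2(a.x, b.x)
        else:                                    # otherwise the unary operator
            new_x = op1(mating_select(population, 1)[0].x)
        offspring.append(Rec(new_x, evaluate(new_x), iteration))
    joint = population + offspring               # mu + lambda_ records
    fitness_assign(joint)                        # fitness may use f and it
    return survival_select(joint, mu)            # survival selection keeps mu
```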
Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Handbook of Evolutionary Computation. 1997. Computational Intelligence Library. New York, NY, USA: Oxford University Press, Inc. ISBN: 0-7503-0392-1
James C. Spall. Introduction to Stochastic Search and Optimization. Estimation, Simulation, and Control - Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. 2003. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/.
Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies: Similarities and Differences. In Hans-Paul Schwefel and Reinhard Männer, editors, Proceedings of the International Conference on Parallel Problem Solving from Nature (PPSN I), October 1-3, 1990, Dortmund, Germany, volume 496 of Lecture Notes in Computer Science, pages 455-469, Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-54148-6. https://doi.org/10.1007/BFb0029787.
Bases: EA
The fully customizable (mu+lambda) EA.
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
A fully configurable, general (mu+lambda) Memetic Algorithm.
This Memetic Algorithm implementation compares to the one in ma like the general Evolutionary Algorithm from general_ea compares to the simple one in ea: it adds survival and mating selection as well as a fitness assignment procedure.
It begins by sampling mu solutions using the nullary search operation op0. Each of these solutions is refined for ls_fes objective function evaluations using the local search ls. In each iteration, this algorithm then uses mu existing solutions as input for the binary search operation op2. The inputs of the operator are chosen from the mu solutions using mating selection. Each of the lambda_ new solutions created this way is again refined for ls_fes objective function evaluations using the local search ls. Then, a fitness assignment process (Fitness) assigns fitness values to them based on their objective values (f), maybe also using the index of the iteration (it) in which they were created. The survival selection survival then chooses, from the joint set of mu+lambda_ solutions, the mu solutions for the next iteration. Both mating and survival selection are instances of class Selection.
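Relative to the general EA step sketched above, the only structural change is that each solution created by op2 is refined by the local search before it enters the population. The sketch below reuses the illustrative Rec record and selection callables from that sketch and assumes ls is a plain callable that refines a starting point for a given evaluation budget; the real moptipy API differs (see ma).

```python
def general_ma_step(population, fitness_assign, mating_select, survival_select,
                    op2, ls, evaluate, mu, lambda_, ls_fes, iteration):
    """Sketch of one iteration of the general (mu+lambda) MA."""
    offspring = []
    for _ in range(lambda_):
        a, b = mating_select(population, 2)          # mating selection
        new_x = op2(a.x, b.x)                        # binary operator only
        new_x, new_f = ls(new_x, evaluate, ls_fes)   # refine for ls_fes FEs
        offspring.append(Rec(new_x, new_f, iteration))
    joint = population + offspring
    fitness_assign(joint)                            # fitness may use f and it
    return survival_select(joint, mu)                # survival selection
```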
Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program Report C3P 826. 1989. Pasadena, CA, USA: California Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P). https://www.researchgate.net/publication/2354457
Carlos Cotta, Luke Mathieson, and Pablo Moscato. Memetic Algorithms. In Rafael Martí, Panos M. Pardalos, and Mauricio G. C. Resende, editors, Handbook of Heuristics. Part III: Metaheuristics, pages 607-638. 2018. Cham, Switzerland: Springer. ISBN: 978-3-319-07123-7. doi: https://doi.org/10.1007/978-3-319-07153-4_29-1. https://www.researchgate.net/publication/315660932
William Eugene Hart, James E. Smith, and Natalio Krasnogor, editors. Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing (STUDFUZZ), volume 166. 2005. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-22904-9. doi: https://doi.org/10.1007/3-540-32363-5
Ferrante Neri, Carlos Cotta, and Pablo Moscato. Handbook of Memetic Algorithms. Volume 379 of Studies in Computational Intelligence (SCI). 2012. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-642-23246-6. doi: https://doi.org/10.1007/978-3-642-23247-3.
L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution, The Baldwin Effect and Function Optimization. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Proceedings of the Third Conference on Parallel Problem Solving from Nature; International Conference on Evolutionary Computation (PPSN III), October 9-14, 1994, Jerusalem, Israel, pages 5-15. Volume 866/1994 of Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag GmbH. ISBN 0387584846. doi: https://doi.org/10.1007/3-540-58484-6_245. https://www.researchgate.net/publication/2521727
Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Handbook of Evolutionary Computation. 1997. Computational Intelligence Library. New York, NY, USA: Oxford University Press, Inc. ISBN: 0-7503-0392-1
James C. Spall. Introduction to Stochastic Search and Optimization. Estimation, Simulation, and Control - Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. 2003. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/.
Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies: Similarities and Differences. In Hans-Paul Schwefel and Reinhard Männer, editors, Proceedings of the International Conference on Parallel Problem Solving from Nature (PPSN I), October 1-3, 1990, Dortmund, Germany, volume 496 of Lecture Notes in Computer Science, pages 455-469, Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-54148-6. https://doi.org/10.1007/BFb0029787.
Bases: MA
The fully customizable (mu+lambda) MA.
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
The modified Greedy (2+1) EAmod Evolutionary Algorithm.
The Greedy (2+1) EAmod maintains a population of two individuals. Both solutions, x0 and x1, are initially sampled independently and at random. Each iteration consists of two steps, a crossover step and a mutation step. The binary operator (crossover) is only applied if x0 and x1 have the same objective value, in which case it produces the offspring z1. If x0 and x1 have different objective values, then z1 is set to the better of the two parents. Then, the final offspring z2 is derived by applying the unary mutation operator to z1. If z2 is at least as good as the better one of x0 and x1, it is accepted. If x0 and x1 are equally good, one of them is chosen at random to be replaced by z2. Otherwise, the worse one is replaced.
This is the implementation of a general black-box version of the Greedy (2+1) GAmod by Carvalho Pinto and Doerr. The original algorithm is a genetic algorithm, i.e., an EA with a bit string-based search space and a mutation operator flipping a Binomially distributed number of bits and performing uniform crossover. Here, we implement a general EA into which you can plug any crossover or mutation operator. Furthermore, the algorithm by Carvalho Pinto and Doerr is, in turn, a modified version of Sudholt's Greedy (2+1) GA, with some improvements for efficiency.
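A hedged sketch of one iteration, following the description above; op1, op2, and evaluate are placeholders for the plugged-in operators and the objective function (smaller objective values are better).

```python
import random

def greedy_2plus1_step(x0, f0, x1, f1, op1, op2, evaluate):
    """Sketch of one iteration of the Greedy (2+1) EAmod."""
    # crossover step: only applied if both parents are equally good
    if f0 == f1:
        z1 = op2(x0, x1)
    else:                       # otherwise z1 is the better parent
        z1 = x0 if f0 < f1 else x1
    # mutation step: derive the final offspring from z1
    z2 = op1(z1)
    f2 = evaluate(z2)
    if f2 <= min(f0, f1):       # at least as good as the better parent
        if f0 == f1:            # equally good parents: replace one at random
            if random.random() < 0.5:
                x0, f0 = z2, f2
            else:
                x1, f1 = z2, f2
        elif f0 > f1:           # otherwise replace the worse parent
            x0, f0 = z2, f2
        else:
            x1, f1 = z2, f2
    return x0, f0, x1, f1
```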
Eduardo Carvalho Pinto and Carola Doerr. Towards a More Practice-Aware Runtime Analysis of Evolutionary Algorithms. 2017. arXiv:1812.00493v1 [cs.NE] 3 Dec 2018. [Online]. http://arxiv.org/pdf/1812.00493.pdf.
Dirk Sudholt. Crossover Speeds up Building-Block Assembly. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation (GECCO’12), July 7-11, 2012, Philadelphia, Pennsylvania, USA, pages 689-702. ACM, 2012. https://doi.org/10.1145/2330163.2330260.
Bases: Algorithm2
The modified Greedy (2+1) Evolutionary Algorithm.
The implementation of the basic hill climbing algorithm hc.
The algorithm starts by applying the nullary search operator, an implementation of op0(), to sample one fully random solution. This is the first best-so-far solution. In each step, it applies the unary operator, an implementation of op1(), to the best-so-far solution to obtain a new, similar solution. If this new solution is strictly better than the current best-so-far solution, it replaces this solution. Otherwise, it is discarded.
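A minimal sketch of this loop, with op0, op1, and evaluate as placeholders for the configured operators and objective function:

```python
def hill_climber(op0, op1, evaluate, max_steps):
    """Sketch of hc: accept only strictly improving moves."""
    best_x = op0()                    # one fully random solution
    best_f = evaluate(best_x)
    for _ in range(max_steps):
        new_x = op1(best_x)           # a new, similar solution
        new_f = evaluate(new_x)
        if new_f < best_f:            # strictly better: accept
            best_x, best_f = new_x, new_f
        # otherwise, the new solution is discarded
    return best_x, best_f
```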
The hill climbing algorithm is a simple local search that only accepts strictly improving moves. It is thus similar to the randomized local search (rls) implemented in RLS, which, however, accepts non-deteriorating moves. We also provide hcr, implemented in class HillClimberWithRestarts, a variant of the hill climber that restarts automatically if a certain number of moves did not improve the current best-so-far solution.
Stuart Jonathan Russell and Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA). 2nd edition. 2002. Upper Saddle River, NJ, USA: Prentice Hall International Inc. ISBN: 0-13-080302-2
Steven S. Skiena. The Algorithm Design Manual. 2nd edition. 2008. London, UK: Springer-Verlag. ISBN: 978-1-84800-069-8. http://doi.org/10.1007/978-1-84800-070-4.
David Stifler Johnson, Christos H. Papadimitriou, and Mihalis Yannakakis. How Easy Is Local Search? Journal of Computer and System Sciences. 37(1):79-100. August 1988. http://doi.org/10.1016/0022-0000(88)90046-3 http://www2.karlin.mff.cuni.cz/~krajicek/jpy2.pdf
James C. Spall. Introduction to Stochastic Search and Optimization. April 2003. Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/
Holger H. Hoos and Thomas Stützle. Stochastic Local Search: Foundations and Applications. 2005. ISBN: 1493303732. In The Morgan Kaufmann Series in Artificial Intelligence. Amsterdam, The Netherlands: Elsevier.
Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/
Thomas Weise. Global Optimization Algorithms - Theory and Application. 2009. http://www.it-weise.de/projects/book.pdf
Bases: Algorithm1
The stochastic hill climbing algorithm only accepts improving moves.
In each step, a hill climber creates a modified copy new_x of the current best solution best_x. If new_x is better than best_x, it becomes the new best_x. Otherwise, it is discarded.
The implementation of the hill climbing algorithm with restarts hcr.
This algorithm basically works like the normal hill climber hc (HillClimber), but it restarts automatically if no move was successful for max_moves_without_improvement iterative steps. It therefore maintains an internal counter count, which is set to zero at the beginning of each restart and reset to zero any time a move successfully improves the best-so-far solution of the current restart. If a search move, i.e., an application of the unary operator, yields a new solution that is not better than the best-so-far solution of the current restart, count is incremented. If count >= max_moves_without_improvement, the algorithm begins a new restart with a new random solution.
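A sketch of this counter logic under the same placeholder conventions as the hill climber sketch above:

```python
def hill_climber_with_restarts(op0, op1, evaluate,
                               max_moves_without_improvement, max_fes):
    """Sketch of hcr: restart after too many unsuccessful moves."""
    best_x, best_f = None, float("inf")    # best solution over all restarts
    fes = 0                                # objective function evaluations
    while fes < max_fes:
        x = op0()                          # new random solution per restart
        f = evaluate(x)
        fes += 1
        count = 0                          # moves without improvement
        while count < max_moves_without_improvement and fes < max_fes:
            new_x = op1(x)
            new_f = evaluate(new_x)
            fes += 1
            if new_f < f:                  # success: accept and reset count
                x, f = new_x, new_f
                count = 0
            else:
                count += 1
        if f < best_f:                     # remember the overall best
            best_x, best_f = x, f
    return best_x, best_f
```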
Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/
Bases: Algorithm1
The stochastic hill climbing algorithm only accepts improving moves.
In each step, a hill climber creates a modified copy new_x of the current best solution best_x. If new_x is better than best_x, it becomes the new best_x. Otherwise, it is discarded. If no improvement is made for max_moves_without_improvement steps, the algorithm restarts.
Log all parameters of this component as key-value pairs.
logger (KeyValueLogSection) – the logger for the parameters
A simple implementation of a (mu+lambda) Memetic Algorithm.
A memetic algorithm (MA) works similarly to a (mu+lambda) EA, but it refines all results of the search operators with a local search ls executed for ls_fes objective function evaluations. It also only employs the nullary search operator (to create the initial random solutions) and the binary search operator (to combine two selected parental solutions).
This is the general form of the Memetic Algorithm of the type "EA + something else." A specialized version combining an ea with rls can be found in marls.
This Memetic Algorithm implementation begins by sampling mu solutions using the nullary search operation op0. Each of these initial solutions is used as a starting point of a local search ls, which is executed for ls_fes objective function evaluations. In each iteration, it then uses the mu existing solutions as input for the binary search operator op2 to create lambda_ new solutions, each of which is again used as a starting point of a local search ls executed for ls_fes objective function evaluations. The results of the local searches enter the population and, in the next iteration, the mu best of the mu + lambda_ solutions in the population are retained.
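A minimal, hypothetical sketch of this loop; ls is modeled as a plain callable that refines a starting point for a given evaluation budget and returns the refined solution and its objective value, which abstracts away the Process and operator plumbing described below.

```python
import random

def ma(op0, op2, ls, evaluate, mu, lambda_, ls_fes, max_iterations):
    """Sketch of the simple (mu+lambda) MA described above."""
    # sample and locally refine mu random initial solutions
    population = [ls(op0(), evaluate, ls_fes) for _ in range(mu)]
    for _ in range(max_iterations):
        offspring = []
        for _ in range(lambda_):
            a, b = random.sample(population, 2)   # two parental solutions
            offspring.append(ls(op2(a[0], b[0]), evaluate, ls_fes))
        # keep the mu best of the mu + lambda_ solutions
        population = sorted(population + offspring,
                            key=lambda rec: rec[1])[:mu]
    return population[0]
```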
Due to the Process and subprocesses API of moptipy, you can use almost arbitrary algorithms as local search ls. The only requirement is that it is a subclass of Algorithm0 and uses an instance of moptipy.operators.op0_forward.Op0Forward as nullary search operator (op0). This allows the MA to set a solution as the starting point for the inner algorithm ls.
It should be noted that the inner algorithm ls by no means needs to be a local search. Any algorithm that fulfills the above requirements could be used. For example, if we were conducting numerical optimization, it would be totally OK to plug an instance of the Sequential Least Squares Programming (SLSQP) algorithm into the memetic algorithm.
Further reading on how to use one algorithm as a sub-algorithm of another can be found in the documentation of from_starting_point(), for_fes(), and moptipy.operators.op0_forward.Op0Forward.
Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program Report C3P 826. 1989. Pasadena, CA, USA: California Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P). https://www.researchgate.net/publication/2354457
Carlos Cotta, Luke Mathieson, and Pablo Moscato. Memetic Algorithms. In Rafael Martí, Panos M. Pardalos, and Mauricio G. C. Resende, editors, Handbook of Heuristics. Part III: Metaheuristics, pages 607-638. 2018. Cham, Switzerland: Springer. ISBN: 978-3-319-07123-7. doi: https://doi.org/10.1007/978-3-319-07153-4_29-1. https://www.researchgate.net/publication/315660932
William Eugene Hart, James E. Smith, and Natalio Krasnogor, editors. Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing (STUDFUZZ), volume 166. 2005. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-22904-9. doi: https://doi.org/10.1007/3-540-32363-5
Ferrante Neri, Carlos Cotta, and Pablo Moscato. Handbook of Memetic Algorithms. Volume 379 of Studies in Computational Intelligence (SCI). 2012. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-642-23246-6. doi: https://doi.org/10.1007/978-3-642-23247-3.
L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution, The Baldwin Effect and Function Optimization. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Proceedings of the Third Conference on Parallel Problem Solving from Nature; International Conference on Evolutionary Computation (PPSN III), October 9-14, 1994, Jerusalem, Israel, pages 5-15. Volume 866/1994 of Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag GmbH. ISBN 0387584846. doi: https://doi.org/10.1007/3-540-58484-6_245. https://www.researchgate.net/publication/2521727
Bases: Algorithm0
An MA is a population-based algorithm using binary operators.
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
Final[Algorithm0] – the local search algorithm
A (mu+lambda) Memetic Algorithm using Randomized Local Search (RLS).
A memetic algorithm (MA) works similarly to a (mu+lambda) EA, but it refines all results of the search operators with a local search, in this case RLS, which is executed for ls_fes objective function evaluations. It also only employs the nullary search operator (to create the initial random solutions) and the binary search operator (to combine two selected parental solutions), leaving the application of the unary search operator to the RLS.
A general implementation of a Memetic Algorithm into which arbitrary algorithms can be plugged is given in ma. Here, the RLS part and the EA part of the MA are directly merged in a hard-coded fashion. If the general MA is configured equivalently to the MARLS here, i.e., uses the same search operators and the same mu, lambda_, and ls_fes, then both algorithms will perform exactly the same search steps.
Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program Report C3P 826. 1989. Pasadena, CA, USA: California Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P). https://www.researchgate.net/publication/2354457
Carlos Cotta, Luke Mathieson, and Pablo Moscato. Memetic Algorithms. In Rafael Martí, Panos M. Pardalos, and Mauricio G. C. Resende, editors, Handbook of Heuristics. Part III: Metaheuristics, pages 607-638. 2018. Cham, Switzerland: Springer. ISBN: 978-3-319-07123-7. doi: https://doi.org/10.1007/978-3-319-07153-4_29-1. https://www.researchgate.net/publication/315660932
William Eugene Hart, James E. Smith, and Natalio Krasnogor, editors. Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing (STUDFUZZ), volume 166. 2005. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-22904-9. doi: https://doi.org/10.1007/3-540-32363-5
Ferrante Neri, Carlos Cotta, and Pablo Moscato. Handbook of Memetic Algorithms. Volume 379 of Studies in Computational Intelligence (SCI). 2012. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-642-23246-6. doi: https://doi.org/10.1007/978-3-642-23247-3.
L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution, The Baldwin Effect and Function Optimization. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Proceedings of the Third Conference on Parallel Problem Solving from Nature; International Conference on Evolutionary Computation (PPSN III), October 9-14, 1994, Jerusalem, Israel, pages 5-15. Volume 866/1994 of Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag GmbH. ISBN 0387584846. doi: https://doi.org/10.1007/3-540-58484-6_245. https://www.researchgate.net/publication/2521727
Bases: Algorithm2
A Memetic Algorithm as Evolutionary Algorithm + Randomized Local Search.
This class implements a Memetic Algorithm (MA) basically as an Evolutionary Algorithm (EA), which only uses the nullary and binary search operators, and that refines each newly generated solution by applying a Randomized Local Search (RLS), which employs the unary search operator.
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
A simple implementation of a Plant Propagation Algorithm (PPA).
This is a simple implementation of the Plant Propagation Algorithm, PPA for short, with some tweaks and modifications. Our PPA implementation works as follows:
It starts with a set of m randomly sampled solutions in the list lst. Each solution x is evaluated and its objective value f(x) is remembered.
In the main loop…
First, the range [fmin, fmax] of the objective values of the first m solutions in lst is determined. We set frange = fmax - fmin, where fmax is the largest objective value of any of the first m solutions in lst and fmin is the smallest one. If frange > 0, then the fitness z(x) of each element is set to (f(x) - fmin) / frange. Otherwise, i.e., if all solutions in lst have the same objective value, we set z(x) to a random number uniformly distributed in [0,1), drawn separately for each solution.
For each of the first m solutions x in lst, we create 1 + int(nmax * r * (1 - z(x))) offspring, where nmax is the maximum number of offspring per solution and r is again an independently drawn random number uniformly distributed in [0,1). In other words, solutions with a fitness close to zero will produce more offspring. If the solutions in the list lst have different objective values, this means that better solutions produce more offspring. Each such offspring is the result of the application of a unary operator with step size, i.e., an instance of Op1WithStepSize. The step size is set to r * max_step * z(x), where r again is a freshly and independently drawn random number uniformly distributed in [0,1). This means that better solutions are modified with smaller step sizes and worse solutions are modified more strongly. max_step is a parameter of the algorithm that determines the maximum permissible step size; it is always from the interval [0,1]. Examples for such operators are given in op1_swap_exactly_n, op1_swap_try_n, and op1_flip_m. The new solutions are appended to lst and their objective values are computed.
The list is then sorted by objective values in ascending order, meaning that the best solutions are up front.
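Putting these steps together, a hedged sketch of the loop; op0, op1_with_step_size, evaluate, and max_iterations are illustrative stand-ins, while m, nmax, and max_step mirror the parameters named above (m records survive each generation).

```python
import random

def ppa(op0, op1_with_step_size, evaluate, m, nmax, max_step, max_iterations):
    """Sketch of this PPA variant (minimization)."""
    lst = [(x, evaluate(x)) for x in (op0() for _ in range(m))]
    for _ in range(max_iterations):
        fs = [f for _, f in lst]           # objective values of the m records
        fmin, fmax = min(fs), max(fs)
        frange = fmax - fmin
        for x, f in lst[:m]:
            if frange > 0:                 # normalized fitness: 0 best, 1 worst
                z = (f - fmin) / frange
            else:                          # all equal: random fitness in [0,1)
                z = random.random()
            # better solutions (small z) produce more offspring ...
            for _ in range(1 + int(nmax * random.random() * (1 - z))):
                # ... and are modified with smaller step sizes
                step = random.random() * max_step * z
                new_x = op1_with_step_size(x, step)
                lst.append((new_x, evaluate(new_x)))
        lst.sort(key=lambda rec: rec[1])   # best solutions up front
        del lst[m:]                        # only m records survive
    return lst[0]
```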
The main differences between this procedure and the “standard-PPA” are as follows:
The algorithm is implemented for minimization and all equations are modified accordingly.
After normalizing the objective values in the population, the tanh-based scaling is not applied. Instead, the normalized objective values, where 0 is best and 1 is worst, are used directly to determine the number of offspring per record and the step length.
The fitness of a record equals its normalized objective value (in [0, 1]), unless all records have the same objective value, in which case the fitness of each record is set to a random number uniformly distributed in [0, 1). If all elements in the population have the same objective value, normalizing is not possible as it would lead to a division by zero. One could use a constant value, say 0.5, in this case, but there is no guarantee that this would be a good choice. We therefore use random values from [0, 1) instead. These may sometimes be suitable, sometimes not. But at least they likely are not always a bad choice, which might happen in some scenarios with 0.5 or any other constant.
The decisions regarding the number of offspring per selected record and the step-width of the search moves are made only based on this fitness (and, again, not on the tanh scaling which is not used). Since we normally do not know the characteristics of the objective function in advance, I think that we also often do not know whether a tanh scaling (that emphasizes objective values close to the best and close to the worst) is necessary or a good idea. It could be good in some cases, but it might as well be a bad choice in others. For now, I have thus not implemented this and just use the raw normalized objective values.
As unary operators, we employ instances of the class Op1WithStepSize, which provides a unary operator with a step size between 0 (smallest possible modification) and 1 (largest possible modification) and will scale appropriately between the two extremes. Often, instances of this class will determine the number or magnitude of changes based on an exponential scaling (see exponential_step_size()) of the step length. The idea is that small step sizes should be emphasized and that really big step sizes are rarely needed. This thus effectively takes the place of the tanh scaling.
Maximum step lengths, i.e., the parameter max_step, are not explicitly used in all of the papers.
In order to understand the behavior of the algorithm, consider the following case. Assume that we set the maximum number (nmax) of offspring per solution to 1 and the number m of solutions to survive selection to 1 as well. In this case, the PPA has exactly the same "population structure" as the Randomized Local Search (rls), namely it preserves the best-so-far solution and generates one new solution in each step, which then competes with that best-so-far solution. The two algorithms then only differ in their search operator: the step length of the unary operator used in the PPA depends on the relative objective value, while that of the RLS does not. However, there are many situations where the two could still be equivalent. For example, if the current and the new solution have different objective values, normalizing the objective values means that the better of the two has normalized objective value 0. This equates to the shortest possible step length. In this case, for example, the step-length based operator op1_swap_try_n behaves exactly like the op1_swap2 operator, and the step-length based op1_flip_m operator behaves like op1_flip1. Of course, if both the current and the new solution have the same objective value, then we use a random number from [0,1) as normalized objective value, so the operators would not behave the same. Then again, one could set the maximum step length max_step to 0. In this case, the step length is always zero and most of our step-length based operators will behave like their fixed small-step counterparts, as mentioned above. In other words, if we set both m and nmax to 1 and set max_step to 0, our PPA behaves like rls (if the search operators are appropriately chosen).
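As a purely hypothetical illustration of this equivalence (the constructor signatures below are assumed for the sake of the example, not taken from the library):

```python
# Assumed constructor signatures, for illustration only.
# m=1, nmax=1, max_step=0.0 reproduces the RLS population structure.
ppa_as_rls = PPA(op0, op1_with_step_size, m=1, nmax=1, max_step=0.0)
rls_algo = RLS(op0, op1)  # equivalent if op1 matches the zero-step operator
```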
Now rls is also known as the (1+1) EA and, indeed, it is a special case of the (mu+lambda) EA implemented in ea. I think that, with some appropriate parameter settings, we can probably construct setups of both algorithms with larger populations that should be equivalent or close-to-equivalent in the big picture.
Below, you can find references on the PPA.
Abdellah Salhi and Eric Serafin Fraga. Nature-Inspired Optimisation Approaches and the New Plant Propagation Algorithm. Proceeding of the International Conference on Numerical Analysis and Optimization (ICeMATH’2011), June 6-8, 2011, Yogyakarta, Indonesia, volume 1, pages K2-1–K2-8. ISBN: 978-602-98919-1-1. https://doi.org/10.13140/2.1.3262.0806. https://repository.essex.ac.uk/9974/1/paper.pdf.
Misha Paauw and Daan van den Berg. Paintings, Polygons and Plant Propagation. In Anikó Ekárt, Antonios Liapis, and María Luz Castro Pena, editors, Proceedings of the 8th International Conference on Computational Intelligence in Music, Sound, Art and Design (EvoMUSART’19, Part of EvoStar), April 24-26, 2019, Leipzig, Germany, Lecture Notes in Computer Science (LNCS), volume 11453, pages 84-97. ISBN: 978-3-030-16666-3. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-16667-0_6. https://www.researchgate.net/publication/332328080.
Muhammad Sulaiman, Abdellah Salhi, Eric Serafin Fraga, Wali Khan Mashwa, and Muhammad M. Rashi. A Novel Plant Propagation Algorithm: Modifications and Implementation. Science International (Lahore) 28(1):201-209, #2330, January/February 2016. http://www.sci-int.com/pdf/4066579081%20a%20201-209%20PPA%20Science%20international_Wali.pdf. https://arxiv.org/pdf/1412.4290.pdf
Hussein Fouad Almazini, Salah Mortada, Hassan Fouad Abbas Al-Mazini, Hayder Naser Khraibet AL-Behadili, and Jawad Alkenani. Improved Discrete Plant Propagation Algorithm for Solving the Traveling Salesman Problem. IAES International Journal of Artificial Intelligence (IJ-AI) 11(1):13-22. March 2022. http://doi.org/10.11591/ijai.v11.i1.pp13-22. https://www.researchgate.net/publication/357484222.
Birsen İrem Selamoğlu and Abdellah Salhi. The Plant Propagation Algorithm for Discrete Optimisation: The Case of the Travelling Salesman Problem. In Xin-She Yang, editor, Nature-Inspired Computation in Engineering, pages 43-61. Studies in Computational Intelligence (SCI), Volume 637. March 2016. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-30235-5_3. https://www.researchgate.net/publication/299286896.
Marleen de Jonge and Daan van den Berg. Parameter Sensitivity Patterns in the Plant Propagation Algorithm. In Juan Julián Merelo Guervós, Jonathan M. Garibaldi, Christian Wagner, Thomas Bäck, Kurosh Madani, and Kevin Warwick, editors, Proceedings of the 12th International Joint Conference on Computational Intelligence (IJCCI’20), November 2-4, 2020, Budapest, Hungary, pages 92-99. Setúbal, Portugal: SciTePress. https://doi.org/10.5220/0010134300920099. https://www.researchgate.net/publication/346829569.
Ege de Bruin. Escaping Local Optima by Preferring Rarity with the Frequency Fitness Assignment. Master’s Thesis at Vrije Universiteit Amsterdam, Amsterdam, the Netherlands. 2022.
Wouter Vrielink and Daan van den Berg. Parameter control for the Plant Propagation Algorithm. In Antonio M. Mora and Anna Isabel Esparcia-Alcázar, editors, Late-Breaking Abstracts of EvoStar’21, April 7-9, 2021, online conference. https://arxiv.org/pdf/2106.11804.pdf. https://www.researchgate.net/publication/350328314.
Levi Koppenhol, Nielis Brouwer, Danny Dijkzeul, Iris Pijning, Joeri Sleegers, and Daan van den Berg. Exactly Characterizable Parameter Settings in a Crossoverless Evolutionary Algorithm. In Jonathan E. Fieldsend and Markus Wagner, editors, Genetic and Evolutionary Computation Conference (GECCO’22) Companion Volume, July 9-13, 2022, Boston, MA, USA, pages 1640-1649. New York, NY, USA: ACM. https://doi.org/10.1145/3520304.3533968. https://www.researchgate.net/publication/362120506.
Bases: Algorithm1
The Plant Propagation Algorithm (PPA).
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
the number of records to survive in each generation
the maximum step length
the maximum number of offspring per solution per iteration
A simple record for storing a solution with its quality.
Bases: object
A point in the search space, its quality, and creation time.
A record stores a point in the search space x together with the corresponding objective value f. It also stores an "iteration index" it, i.e., the time when the point was created or modified.
This allows for representing and storing solutions in a population. If the population is sorted, then records with better objective value will be moved to the beginning of the list. Ties are broken such that younger individuals (with higher it value) are preferred.
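The described sort order corresponds to a key function like this small sketch, using the f and it attributes named above:

```python
def sort_population(population):
    """Sort records: better (smaller) f first; ties favor younger records."""
    population.sort(key=lambda rec: (rec.f, -rec.it))
```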
Final – the point in the search space
The implementation of the Randomized Local Search algorithm rls.
The algorithm starts by applying the nullary search operator, an implementation of op0(), to sample one fully random solution. This is the first best-so-far solution. In each step, it applies the unary operator, an implementation of op1(), to the best-so-far solution to obtain a new, similar solution. If this new solution is not worse than the current best-so-far solution, it replaces this solution. Otherwise, it is discarded.
The rls algorithm is a simple local search that accepts all non-deteriorating moves. It is thus similar to the simple hill climber hc implemented in HillClimber, which, however, accepts only strictly improving moves. rls is also equivalent to a (mu+lambda) EA without crossover as implemented in EA if the same unary and nullary operators are used and mu=1, lambda_=1, and br=0. rls, however, will be faster, as it does not represent a population of solutions as a list of objects but can directly utilize local variables.
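A minimal sketch; compared to the hill climber, only the acceptance condition changes from strictly better (<) to not worse (<=):

```python
def rls(op0, op1, evaluate, max_steps):
    """Sketch of rls: accept all non-deteriorating moves."""
    best_x = op0()                    # one fully random solution
    best_f = evaluate(best_x)
    for _ in range(max_steps):
        new_x = op1(best_x)           # a new, similar solution
        new_f = evaluate(new_x)
        if new_f <= best_f:           # not worse: accept
            best_x, best_f = new_x, new_f
    return best_x, best_f
```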
Strictly speaking, the name "Randomized Local Search" only partially fits the algorithm we implement here. Take the discrete search domain, where the search spaces are bit strings of a fixed length n, as an example. On this domain, the name "Randomized Local Search" and the abbreviation rls have a fixed meaning: it is the algorithm that starts with a random solution and flips exactly one randomly chosen bit in each step. This corresponds to our rls algorithm with the operator Op1Flip1. However, an algorithm that starts with a random solution and flips a number of bits sampled from a Binomial distribution is called (1+1) EA. This algorithm corresponds again to our rls algorithm, but this time with the operator Op1MoverNflip. In other words, we can implement (at least) two algorithms with well-known and fixed names by plugging different operators into our rls approach. One of them is called RLS, the other one is called (1+1) EA. This is somewhat confusing but results from the general nature of our basic framework. Regardless of what we do, we will have some form of name clash here. We advise the user of our algorithms to be careful with respect to literature and scientific conventions when using our framework.
Frank Neumann and Ingo Wegener. Randomized Local Search, Evolutionary Algorithms, and the Minimum Spanning Tree Problem. Theoretical Computer Science. 378(1):32-40, June 2007. https://doi.org/10.1016/j.tcs.2006.11.002, https://eldorado.tu-dortmund.de/bitstream/2003/5454/1/165.pdf
Holger H. Hoos and Thomas Stützle. Stochastic Local Search: Foundations and Applications. 2005. ISBN: 1493303732. In The Morgan Kaufmann Series in Artificial Intelligence. Amsterdam, The Netherlands: Elsevier.
Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/
Bases: Algorithm1
The RLS is a simple local search accepting all non-worsening moves.
In each step, an RLS creates a modified copy new_x of the current best solution best_x. If new_x is not worse than best_x, it becomes the new best_x. Otherwise, it is discarded.
The simulated annealing algorithm with configurable temperature schedule.
A basic randomized local search (rls) maintains one interesting solution and derives one new solution from it using the unary search operator (Op1). The new solution replaces the current solution if it is not worse, i.e., better or equally good. Simulated Annealing is similar to the rls, but it sometimes also accepts solutions that are worse than the current one. It does so with a probability that becomes smaller the worse the new solution is and the smaller the current "temperature" is.
Simulated Annealing applies a so-called temperature schedule (see temperature_schedule), which basically is a function that relates the index of the algorithm iteration (i.e., the index of the current objective function evaluation) to a temperature. It is therefore a function accepting an integer value as input and returning a float temperature. This function is usually monotonically decreasing over time, meaning that the initial "temperature" is high and then becomes smaller. The algorithm is therefore more likely to accept worse solutions at its beginning, whereas it behaves basically like an rls at the end of its computational budget (if configured correctly).
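A hedged sketch using the classical Metropolis acceptance criterion, i.e., accepting a worsening move with probability exp((f_current - f_new) / T); the exact criterion and the schedule interface in moptipy may differ. The temperature schedule is modeled as a plain callable mapping the step index to a float.

```python
import math
import random

def simulated_annealing(op0, op1, evaluate, temperature, max_steps):
    """Sketch of SA; `temperature` maps a step index to a float temperature."""
    cur_x = op0()
    cur_f = evaluate(cur_x)
    best_x, best_f = cur_x, cur_f          # track the best-so-far solution
    for step in range(1, max_steps + 1):
        new_x = op1(cur_x)
        new_f = evaluate(new_x)
        t = temperature(step)              # usually decreases over time
        # accept if not worse, or with the Metropolis probability
        if new_f <= cur_f or (t > 0 and
                random.random() < math.exp((cur_f - new_f) / t)):
            cur_x, cur_f = new_x, new_f    # accept the (possibly worse) move
            if cur_f < best_f:
                best_x, best_f = cur_x, cur_f
    return best_x, best_f
```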
Simulated Annealing was independently developed by several researchers [1-4]. The idea is inspired by Metropolis’ approximation of how annealing can be simulated [5].
Scott Kirkpatrick, C. Daniel Gelatt, Jr., and Mario P. Vecchi. Optimization by Simulated Annealing. Science Magazine. 220(4598):671-680. May 13, 1983. doi: https://doi.org/10.1126/science.220.4598.671. https://www.researchgate.net/publication/6026283
Vladimír Černý. Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm. Journal of Optimization Theory and Applications. 45(1):41-51. January 1985. doi: https://doi.org/10.1007/BF00940812. http://mkweb.bcgsc.ca/papers/cerny-travelingsalesman.pdf.
Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code Optimization. ACM SIGMICRO Newsletter. 13(4):143-148. December 1982. Also in Proceedings of the 15th Annual Workshop on Microprogramming (MICRO 15), October 5-7, 1982, Palo Alto, CA, USA, New York, NY, USA: ACM. doi: http://doi.org/10.1145/1014194.800944.
Martin Pincus. Letter to the Editor - A Monte Carlo Method for the Approximate Solution of Certain Types of Constrained Optimization Problems. Operations Research. 18(6):1225-1228. November/December 1970. doi: https://doi.org/10.1287/opre.18.6.1225.
Nicholas Metropolis, Arianna W. Rosenbluth, Marshall Nicholas Rosenbluth, Augusta H. Teller, Edward Teller. Equation of State Calculations by Fast Computing Machines. The Journal of Chemical Physics. 21(6):1087-1092. June 1953. doi: https://doi.org/10.1063/1.1699114. http://scienze-como.uninsubria.it/bressanini/montecarlo-history/mrt2.pdf.
Bases: Algorithm1
Simulated Annealing is an RLS sometimes accepting worsening moves.
Log all parameters of the simulated annealing algorithm.
logger (KeyValueLogSection) – the logger for the parameters
Final[TemperatureSchedule] – the temperature schedule