moptipy.algorithms.so package

The single-objective optimization algorithms of the moptipy package.

Submodules

moptipy.algorithms.so.ea module

A simple implementation of a (mu+lambda) Evolutionary Algorithm.

This is the basic mu+lambda-EA, which works as follows:

  1. Start with a list lst of mu random records and lambda blank records.

  2. In each iteration:

    2.1. Use the first mu records as input to the search operators to generate lambda new points in the search space. For each new point to be created, the binary operator is applied with probability 0<=br<=1 and the unary operator is used otherwise.

    2.2. Sort the list lst according to the objective values of the records. Ties are broken by preferring younger solutions over older ones. Sorting uses the __lt__ dunder method of class Record. This moves the best solutions to the front of the list. The tie-breaking method both encourages drift and ensures compatibility with RLS.

If mu=1, lambda=1, and br=0, then this algorithm is exactly equivalent to RLS if the same unary and nullary operators are used. It is only slightly slower due to the additional overhead of maintaining a list of records. This compatibility is achieved by the tie-breaking strategy of step 2.2 above: RLS prefers the new solution over the current one if the new solution is either better or equally good. The latter case cannot be achieved by just sorting the list without considering the iteration at which a solution was created, since sorting in Python is stable (equal elements remain in the order in which they are encountered in the original list) and the new solutions occupy the last lambda entries of the list. This is fixed by the tie breaking implemented in the __lt__ dunder method of class Record.
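The loop above can be sketched in plain Python as follows. This is an illustrative sketch, not the actual moptipy implementation: f, op0, op1, and op2 are hypothetical callables standing in for the objective function and the nullary, unary, and binary operators, and a fixed iteration budget replaces the black-box Process API.

    from dataclasses import dataclass
    from random import Random

    @dataclass
    class Record:
        x: object  # the point in the search space
        f: float   # the objective value of x
        it: int    # the iteration index at which x was created

        def __lt__(self, other: "Record") -> bool:
            # better objective values come first; ties are broken by
            # preferring younger records (larger it), as in step 2.2
            return (self.f, -self.it) < (other.f, -other.it)

    def ea(f, op0, op1, op2, mu: int, lambda_: int, br: float,
           iterations: int, rng: Random):
        # step 1: mu random records (mu >= 2 is required if br > 0)
        lst = [Record(x, f(x), 0) for x in (op0(rng) for _ in range(mu))]
        for it in range(1, iterations + 1):
            for _ in range(lambda_):   # step 2.1: create lambda_ new points
                if rng.random() < br:  # binary operator with probability br
                    a, b = rng.sample(lst[:mu], 2)
                    x = op2(rng, a.x, b.x)
                else:                  # unary operator otherwise
                    x = op1(rng, rng.choice(lst[:mu]).x)
                lst.append(Record(x, f(x), it))
            lst.sort()                 # step 2.2: best records to the front
            del lst[mu:]               # only the mu best survive
        return lst[0].x                # the best solution found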

  1. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, eds., Handbook of Evolutionary Computation. 1997. Computational Intelligence Library. New York, NY, USA: Oxford University Press, Inc. ISBN: 0-7503-0392-1

  2. James C. Spall. Introduction to Stochastic Search and Optimization. Estimation, Simulation, and Control - Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. 2003. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/.

  3. Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies: Similarities and Differences. In Hans-Paul Schwefel and Reinhard Männer, editors, Proceedings of the International Conference on Parallel Problem Solving from Nature (PPSN I), October 1-3, 1990, Dortmund, Germany, volume 496 of Lecture Notes in Computer Science, pages 455-469, Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-54148-6. https://doi.org/10.1007/BFb0029787.

class moptipy.algorithms.so.ea.EA(op0, op1=None, op2=None, mu=1, lambda_=1, br=None, name='ea')[source]

Bases: Algorithm2

The EA is a population-based algorithm using unary and binary operators.

It starts with a list of mu randomly initialized solutions. In each step, it retains the mu best solutions and generates lambda_ new solutions from them, using the unary operator (op1) with probability 1-br and the binary search operator (op2) at rate br. From the joint set of mu+lambda_ solutions, it again selects the best mu ones for the next iteration, and so on.

If mu=1, lambda_=1, and br=0, then this algorithm is exactly equivalent to the RLS if the same unary and nullary operator are used.

br: Final[float]

the rate at which the binary operator is applied

lambda_: Final[int]

the number of offspring per generation

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

mu: Final[int]

the number of records to survive in each generation

solve(process)[source]

Apply the EA to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.fea1plus1 module

The FFA-based version of the (1+1)-EA: the (1+1)-FEA.

This algorithm is based on RLS, i.e., the (1+1)-EA, but uses Frequency Fitness Assignment (FFA) as fitness assignment process. FFA replaces all objective values with their encounter frequencies in the selection decisions. The more often an objective value is encountered, the higher its encounter frequency becomes. Local optima therefore slowly receive worse and worse fitness.

Most of the existing metaheuristic algorithms have in common that they maintain a set Si of one or multiple solutions and derive a set Ni of one or multiple new solutions in each iteration i. From the joint set Pi = Si + Ni of old and new solutions, they then select the set Si+1 of solutions to be propagated to the next iteration, and so on. This selection decision is made mainly based on the objective values f(x) of the solutions x in Pi, and solutions with better objective values tend to be preferred over solutions with worse objective values.

Frequency Fitness Assignment (FFA) completely breaks with this most fundamental concept of optimization. FFA was first proposed by Weise et al. as a “plug-in” for metaheuristics intended to prevent premature convergence. It therefore maintains a frequency table H for objective values. Before the metaheuristic chooses the set Si+1 from Pi, it increments the encounter frequencies of the objective value of each solution in Pi, i.e., performs H[yj] <- H[yj] + 1 for each xj in Pi, where yj = f(xj). In its selection decisions, the algorithm then uses the frequency fitness H[yj] instead of the objective values yj.

Here we integrate FFA into the randomized local search algorithm RLS, which is also known as the (1+1) EA. In its original form, RLS maintains a single solution and derives a slightly modified copy from it in every iteration. If the modified copy is not worse than the original solution, it replaces it. “Not worse” here means that its objective value needs to be better than or equal to, i.e., <=, that of the maintained current best solution. The RLS with FFA (here called the (1+1) FEA) replaces this comparison of objective values with a comparison of the frequencies of the objective values. Of course, following the definition of FFA, the frequencies are first incremented (both of the current and the new solution) and then compared.

The algorithm is here implemented in two different ways: If the objective function is always integer valued and the difference between its upper and lower bound is not too high, then we count the frequency fitness by using a numpy array. This means that frequency updates and getting frequency values is very fast. If the objective function is not always integer or if the difference between its maximum and minimum is too large, then we will use a collections.Counter to back the frequency table instead. This will be slower and probably require more memory, but it may be the only way to accommodate the frequency table. Of course, this will still fail if there are too many different objective values and the memory consumed is simply too high.
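A minimal sketch of this idea in plain Python, using the collections.Counter variant of the frequency table (f, op0, op1, and the fixed step budget are hypothetical stand-ins; the real implementation works against the black-box Process API and may switch to an array-backed table):

    from collections import Counter
    from random import Random

    def fea11(f, op0, op1, steps: int, rng: Random):
        h = Counter()                 # H: objective value -> frequency
        best_x = op0(rng)             # nullary operator: random start
        best_f = f(best_x)
        for _ in range(steps):
            new_x = op1(rng, best_x)  # unary operator: modified copy
            new_f = f(new_x)
            h[best_f] += 1            # FFA: increment both frequencies ...
            h[new_f] += 1
            if h[new_f] <= h[best_f]:          # ... then compare frequencies
                best_x, best_f = new_x, new_f  # instead of objective values
        return best_x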

FFA is also implemented as a fitness assignment process (fitness) in module ffa.

  1. Thomas Weise, Zhize Wu, Xinlu Li, and Yan Chen. Frequency Fitness Assignment: Making Optimization Algorithms Invariant under Bijective Transformations of the Objective Function Value. IEEE Transactions on Evolutionary Computation 25(2):307-319. April 2021. Preprint available at arXiv:2001.01416v5 [cs.NE] 15 Oct 2020. https://dx.doi.org/10.1109/TEVC.2020.3032090

  2. Thomas Weise, Zhize Wu, Xinlu Li, Yan Chen, and Jörg Lässig. Frequency Fitness Assignment: Optimization without Bias for Good Solutions can be Efficient. IEEE Transactions on Evolutionary Computation (TEVC). 2022. Early Access. https://dx.doi.org/10.1109/TEVC.2022.3191698

  3. Thomas Weise, Mingxu Wan, Ke Tang, Pu Wang, Alexandre Devert, and Xin Yao. Frequency Fitness Assignment. IEEE Transactions on Evolutionary Computation (IEEE-EC), 18(2):226-243, April 2014. https://dx.doi.org/10.1109/TEVC.2013.2251885

  4. Thomas Weise, Yan Chen, Xinlu Li, and Zhize Wu. Selecting a diverse set of benchmark instances from a tunable model problem for black-box discrete optimization algorithms. Applied Soft Computing Journal (ASOC), 92:106269, June 2020. https://dx.doi.org/10.1016/j.asoc.2020.106269

  5. Thomas Weise, Xinlu Li, Yan Chen, and Zhize Wu. Solving Job Shop Scheduling Problems Without Using a Bias for Good Solutions. In Genetic and Evolutionary Computation Conference Companion (GECCO’21 Companion), July 10-14, 2021, Lille, France. ACM, New York, NY, USA. ISBN 978-1-4503-8351-6. https://dx.doi.org/10.1145/3449726.3463124

  6. Thomas Weise, Mingxu Wan, Ke Tang, and Xin Yao. Evolving Exact Integer Algorithms with Genetic Programming. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC’14), Proceedings of the 2014 World Congress on Computational Intelligence (WCCI’14), pages 1816-1823, July 6-11, 2014, Beijing, China. Los Alamitos, CA, USA: IEEE Computer Society Press. ISBN: 978-1-4799-1488-3. https://dx.doi.org/10.1109/CEC.2014.6900292

class moptipy.algorithms.so.fea1plus1.FEA1plus1(op0, op1, log_h_tbl=True)[source]

Bases: Algorithm1

The FFA-based version of the (1+1)-EA: the (1+1)-FEA.

This algorithm applies Frequency Fitness Assignment (FFA). This means that it does not select solutions based on whether they are better or worse. Instead, it selects the solution whose objective value has been encountered during the search less often. The word “best” therefore is not used in the traditional sense, i.e., that one solution is better than another one in terms of its objective value. Instead, the current best solution is always the one whose objective value we have seen the least often.

In each step, the (1+1)-FEA creates a modified copy new_x of the current best solution best_x. It then increments the frequency fitness of both solutions by 1. If the frequency fitness of new_x is not bigger than that of best_x, it becomes the new best_x. Otherwise, it is discarded.

This algorithm implementation requires that objective values are integers and have lower and upper bounds that are not too far away from each other. A more general version is available as a fitness assignment process (fitness) that can be plugged into a general EA (general_ea) in module ffa.

log_h_tbl: Final[bool]

True if we should log the H table, False otherwise

solve(process)[source]

Apply the (1+1)-FEA to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.fea1plus1.H_LOG_SECTION: Final[str] = 'H'

the log section for the frequency table

moptipy.algorithms.so.fea1plus1.SWITCH_TO_MAP_RANGE: Final[int] = 67108864

The difference between the upper and lower bound of the objective function at which we switch from using arrays as backing store for the frequency table H to maps.
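This decision could be sketched as follows (make_h_table is a hypothetical helper for illustration, not part of moptipy):

    import numpy as np
    from collections import Counter

    SWITCH_TO_MAP_RANGE = 67_108_864  # = 2 ** 26

    def make_h_table(lower_bound: int, upper_bound: int):
        # pick the backing store for the frequency table H
        if (upper_bound - lower_bound) <= SWITCH_TO_MAP_RANGE:
            # small integer range: a flat array indexed by (y - lower_bound)
            # makes frequency lookups and updates very fast
            return np.zeros(upper_bound - lower_bound + 1, dtype=np.uint64)
        # large range: a hash-based Counter is slower and may need more
        # memory, but only grows with the distinct objective values seen
        return Counter()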

moptipy.algorithms.so.fea1plus1.log_h(process, indices, h, print_index)[source]

Convert a frequency table H to a string and log it to a process.

The frequency table is logged as a single line of text into a section H delimited by the lines BEGIN_H and END_H. The line consists of 2*n semicolon separated values. Each such value pair consists of an objective value y and its observed frequency H[y]. The former is either an integer or a float and the latter is an integer.
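A hypothetical example (values invented): if the objective value 10 was encountered four times, 11 once, and 17 twice, the section would look like:

    BEGIN_H
    10;4;11;1;17;2
    END_H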

Parameters:
Return type:

None

moptipy.algorithms.so.fitness module

Fitness Assignment Processes assign scalar fitnesses to solutions.

A Fitness Assignment Process uses the information of a set of instances of FRecord to compute their scalar fitness. This fitness is then used by Selection algorithms. Selection is important in, e.g., Evolutionary Algorithms (GeneralEA), where it is used in two places: As survival selection, it chooses which points will be allowed to remain in the population and, hence, survive into the mating pool for the next generation. As mating selection, it chooses the inputs of the search operations from the mating pool.

The following Fitness Assignment Processes have been implemented so far:

  • Direct copies the objective values (f) of the solution records directly over to the fitness.

  • Rank ranks the solutions by their objective values and uses the ranks as fitness (see the sketch after this list).

  • RankAndIteration also uses the rank of the objective values in the fitness. Additionally, if two solutions have the same objective value but one of them is newer, then the newer one will receive the better fitness. This is done by accessing the iteration counter (it) of the solution records.

  • FFA performs the Frequency Fitness Assignment which is suitable for problems with few different objective values and large computational budgets.
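The following sketch shows the flavor of a rank-based assignment in plain Python. It is a simplified stand-in, not the actual Rank implementation; it assumes that records carry f and fitness attributes and that smaller fitness values are better:

    from random import Random

    def assign_rank_fitness(p, random: Random) -> None:
        # sort records by objective value and use the rank as fitness;
        # records with equal objective value share the same rank
        # (the random source is unused here, but assign_fitness receives one)
        p.sort(key=lambda r: r.f)
        rank, last_f = 0, None
        for i, rec in enumerate(p):
            if rec.f != last_f:       # new objective value: advance the rank
                rank, last_f = i + 1, rec.f
            rec.fitness = rank        # smaller fitness = better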

class moptipy.algorithms.so.fitness.FRecord(x, f)[source]

Bases: Record, FitnessRecord

A point x in the search space with its quality and fitness.

fitness: int | float

the fitness assigned to the solution x

class moptipy.algorithms.so.fitness.Fitness[source]

Bases: Component

The base class for fitness assignment processes.

assign_fitness(p, random)[source]

Assign a fitness to each element in the list p.

Parameters:
Return type:

None

log_information_after_run(process)[source]

Log the information of this fitness assignment process to the process.

An instance of Process is given to this method after the algorithm has completed its work. The fitness assignment process then may store some data as a separate log section via add_log_section() if it wants to. Implementing this method is optional. This method is only invoked if has_log() returns True.

Return type:

None

moptipy.algorithms.so.fitness.check_fitness(fitness)[source]

Check whether an object is a valid instance of Fitness.

Parameters:

fitness (Fitness) – the Fitness object

Return type:

Fitness

Returns:

the object

Raises:

TypeError – if fitness is not an instance of Fitness

moptipy.algorithms.so.general_ea module

A fully configurable, general (mu+lambda) Evolutionary Algorithm.

This evolutionary algorithm begins by sampling mu solutions using the nullary search operation op0. In each iteration, it then uses mu existing solutions as input for the search operations, where, for each solution to be sampled, the binary operator op2 is used with probability br and the unary operator op1 is used otherwise. The inputs of both operators are chosen from the mu solutions using mating selection. After lambda_ new solutions have been created this way (and have been evaluated as well), a fitness assignment process (Fitness) assigns fitness values to them based on their objective values (f), maybe also using the index of the iteration (it) in which they were created. The survival selection survival then chooses, from the joint set of mu+lambda_ solutions, the mu solutions for the next iteration. Both mating and survival selection are instances of class Selection.

This algorithm is equivalent to EA, but allows for using a customized fitness assignment step (Fitness) as well as customizable survival and mating selection (Selection).
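One generation of this scheme could be sketched as follows. This is a simplified stand-in, not the moptipy implementation: f, op1, op2, assign_fitness, mating, and survival are hypothetical callables replacing the Fitness and Selection components.

    from random import Random
    from types import SimpleNamespace

    def general_ea_generation(pop, f, op1, op2, br, lambda_, mu,
                              assign_fitness, mating, survival,
                              rng: Random, it: int):
        # one generation: variation via mating selection, then
        # fitness-based survival selection
        offspring = []
        for _ in range(lambda_):
            if rng.random() < br:           # binary operator at rate br
                a, b = mating(pop, 2, rng)  # mating selection picks parents
                x = op2(rng, a.x, b.x)
            else:                           # unary operator otherwise
                x = op1(rng, mating(pop, 1, rng)[0].x)
            offspring.append(SimpleNamespace(x=x, f=f(x), it=it, fitness=0))
        joint = pop + offspring
        assign_fitness(joint, rng)          # e.g., rank- or FFA-based fitness
        return survival(joint, mu, rng)     # mu records for next iteration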

  1. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, eds., Handbook of Evolutionary Computation. 1997. Computational Intelligence Library. New York, NY, USA: Oxford University Press, Inc. ISBN: 0-7503-0392-1

  2. James C. Spall. Introduction to Stochastic Search and Optimization. Estimation, Simulation, and Control - Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. 2003. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/.

  3. Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies: Similarities and Differences. In Hans-Paul Schwefel and Reinhard Männer, editors, Proceedings of the International Conference on Parallel Problem Solving from Nature (PPSN I), October 1-3, 1990, Dortmund, Germany, volume 496 of Lecture Notes in Computer Science, pages 455-469, Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-54148-6. https://doi.org/10.1007/BFb0029787.

class moptipy.algorithms.so.general_ea.GeneralEA(op0, op1=None, op2=None, mu=1, lambda_=1, br=None, fitness=None, survival=None, mating=None, name='generalEa')[source]

Bases: EA

The fully customizable (mu+lambda) EA.

fitness: Final[Fitness]

the fitness assignment process

initialize()[source]

Initialize the algorithm.

Return type:

None

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

mating: Final[Selection]

the mating selection algorithm

solve(process)[source]

Apply the (mu+lambda) EA to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

survival: Final[Selection]

the survival selection algorithm

moptipy.algorithms.so.general_ma module

A fully configurable, general (mu+lambda) Memetic Algorithm.

This Memetic Algorithm implementation relates to the one in ma as the general Evolutionary Algorithm from general_ea relates to the simple one in ea: it adds survival and mating selection as well as a fitness assignment procedure.

It begins by sampling mu solutions using the nullary search operation op0. Each of these solutions is refined for ls_fes objective function evaluations using the local search ls. In each iteration, this algorithm then uses mu existing solutions as input for the binary search operation op2. The inputs of the operator are chosen from the mu solutions using mating selection. Each of the lambda_ new solutions created this way is again refined for ls_fes objective function evaluations using the local search ls. Then, a fitness assignment process (Fitness) assigns fitness values to them based on their objective values (f), maybe also using the index of the iteration (it) in which they were created. The survival selection survival then chooses, from the joint set of mu+lambda_ solutions, the mu solutions for the next iteration. Both mating and survival selection are instances of class Selection.

  1. Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program Report C3P 826. 1989. Pasadena, CA, USA: California Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P). https://www.researchgate.net/publication/2354457

  2. Carlos Cotta, Luke Mathieson, and Pablo Moscato. Memetic Algorithms. In Rafael Martí, Panos M. Pardalos, and Mauricio G. C. Resende, editors, Handbook of Heuristics. Part~III: Metaheuristics, pages 607-638. 2018. Cham, Switzerland: Springer. ISBN: 978-3-319-07123-7. doi: https://doi.org/10.1007/978-3-319-07153-4_29-1 https://www.researchgate.net/publication/315660932

  3. William Eugene Hart, James E. Smith, and Natalio Krasnogor, editors. Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing (STUDFUZZ), volume 166. 2005. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-22904-9. doi: https://doi.org/10.1007/3-540-32363-5

  4. Ferrante Neri, Carlos Cotta, and Pablo Moscato. Handbook of Memetic Algorithms. Volume 379 of Studies in Computational Intelligence (SCI). 2012. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-642-23246-6 doi https://doi.org/10.1007/978-3-642-23247-3.

  5. L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution, The Baldwin Effect and Function Optimization. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Proceedings of the Third Conference on Parallel Problem Solving from Nature; International Conference on Evolutionary Computation (PPSN III), October 9-14, 1994, Jerusalem, Israel, pages 5-15. Volume 866/1994 of Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag GmbH. ISBN 0387584846. doi: https://doi.org/10.1007/3-540-58484-6_245. https://www.researchgate.net/publication/2521727

  6. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, eds., Handbook of Evolutionary Computation. 1997. Computational Intelligence Library. New York, NY, USA: Oxford University Press, Inc. ISBN: 0-7503-0392-1

  7. James C. Spall. Introduction to Stochastic Search and Optimization. Estimation, Simulation, and Control - Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. 2003. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/.

  8. Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies: Similarities and Differences. In Hans-Paul Schwefel and Reinhard Männer, editors, Proceedings of the International Conference on Parallel Problem Solving from Nature (PPSN I), October 1-3, 1990, Dortmund, Germany, volume 496 of Lecture Notes in Computer Science, pages 455-469, Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-54148-6. https://doi.org/10.1007/BFb0029787.

class moptipy.algorithms.so.general_ma.GeneralMA(op0, op2, ls, mu=2, lambda_=1, ls_fes=1000, fitness=None, survival=None, mating=None, name='generalMa')[source]

Bases: MA

The fully customizable (mu+lambda) MA.

fitness: Final[Fitness]

the fitness assignment process

initialize()[source]

Initialize the algorithm.

Return type:

None

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

mating: Final[Selection]

the mating selection algorithm

solve(process)[source]

Apply the (mu+lambda) MA to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

survival: Final[Selection]

the survival selection algorithm

moptipy.algorithms.so.greedy_2plus1_ea_mod module

The modified Greedy (2+1) EAmod Evolutionary Algorithm.

The Greedy (2+1) EAmod maintains a population of two individuals. Both solutions, x0 and x1, are initially sampled independently and at random. Each iteration consists of two steps, a crossover step and a mutation step. The binary operator (crossover) is only applied if x0 and x1 have the same objective value, producing offspring z1. If x0 and x1 have different objective values, then z1 is set to be the better of the two parents. Then, the final offspring z2 is derived by applying the unary mutation operator to z1. If z2 is at least as good as the better one of x0 and x1, then it will be accepted. If x0 and x1 are equally good, one of them is randomly chosen to be replaced by z2. Otherwise, the worse one is replaced.

This is the implementation of a general black-box version of the Greedy (2+1) GAmod by Carvalho Pinto and Doerr. The original algorithm is a genetic algorithm, i.e., an EA with a bit string-based search space and a mutation operator flipping a binomially distributed number of bits and performing uniform crossover. Here, we implement a general EA where you can plug in any crossover or mutation operator. Furthermore, the algorithm by Carvalho Pinto and Doerr is, in turn, a modified version of Sudholt's Greedy (2+1) GA, with some improvements for efficiency.
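A single iteration of this scheme could be sketched as follows (plain Python; f, op1, op2, and rng are hypothetical stand-ins, and the real implementation caches objective values instead of re-evaluating them):

    def greedy_2plus1_iteration(x0, x1, f, op1, op2, rng):
        f0, f1 = f(x0), f(x1)
        if f0 == f1:               # crossover only on equal quality
            z1 = op2(rng, x0, x1)
        else:                      # otherwise take the better parent
            z1 = x0 if f0 < f1 else x1
        z2 = op1(rng, z1)          # mutation step
        if f(z2) <= min(f0, f1):   # accept if at least as good as the best
            if f0 == f1:           # equal parents: replace one at random
                if rng.random() < 0.5:
                    x0 = z2
                else:
                    x1 = z2
            elif f0 > f1:          # otherwise replace the worse parent
                x0 = z2
            else:
                x1 = z2
        return x0, x1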

  1. Eduardo Carvalho Pinto and Carola Doerr. Towards a More Practice-Aware Runtime Analysis of Evolutionary Algorithms. 2017. arXiv:1812.00493v1 [cs.NE] 3 Dec 2018. [Online]. http://arxiv.org/pdf/1812.00493.pdf.

  2. Dirk Sudholt. Crossover Speeds up Building-Block Assembly. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation (GECCO’12), July 7-11, 2012, Philadelphia, Pennsylvania, USA, pages 689-702. ACM, 2012. https://doi.org/10.1145/2330163.2330260.

class moptipy.algorithms.so.greedy_2plus1_ea_mod.GreedyTwoPlusOneEAmod(op0, op1, op2)[source]

Bases: Algorithm2

The modified Greedy (2+1) Evolutionary Algorithm.

solve(process)[source]

Apply the EA to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.hill_climber module

The implementation of the basic hill climbing algorithm hc.

The algorithm starts by applying the nullary search operator, an implementation of op0(), to sample one fully random solution. This is the first best-so-far solution. In each step, it applies the unary operator, an implementation of op1(), to the best-so-far solution to obtain a new, similar solution. If this new solution is strictly better than the current best-so-far solution, it replaces this solution. Otherwise, it is discarded.

The hill climbing algorithm is a simple local search that only accepts strictly improving moves. It is thus similar to the randomized local search (rls) implemented in RLS, which, however, accepts non-deteriorating moves. We also provide hcr, a variant of the hill climber that restarts automatically if a certain number of moves was not able to improve the current best-so-far solution, in class HillClimberWithRestarts.
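The core loop can be sketched in a few lines of plain Python (f, op0, op1, and the fixed step budget are hypothetical stand-ins; the real class works against the black-box Process API):

    def hill_climber(f, op0, op1, steps, rng):
        best_x = op0(rng)             # nullary operator: random start
        best_f = f(best_x)
        for _ in range(steps):
            new_x = op1(rng, best_x)  # unary operator: modified copy
            new_f = f(new_x)
            if new_f < best_f:        # strict improvement required
                best_x, best_f = new_x, new_f
        return best_x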

  1. Stuart Jonathan Russell and Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA). 2nd edition. 2002. Upper Saddle River, NJ, USA: Prentice Hall International Inc. ISBN: 0-13-080302-2

  2. Steven S. Skiena. The Algorithm Design Manual. 2nd edition. 2008. London, UK: Springer-Verlag. ISBN: 978-1-84800-069-8. http://doi.org/10.1007/978-1-84800-070-4.

  3. David Stifler Johnson, Christos H. Papadimitriou, and Mihalis Yannakakis. How Easy Is Local Search? Journal of Computer and System Sciences. 37(1):79-100. August 1988. http://doi.org/10.1016/0022-0000(88)90046-3 http://www2.karlin.mff.cuni.cz/~krajicek/jpy2.pdf

  4. James C. Spall. Introduction to Stochastic Search and Optimization. April 2003. Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization, volume 6. Chichester, West Sussex, UK: Wiley Interscience. ISBN: 0-471-33052-3. http://www.jhuapl.edu/ISSO/

  5. Holger H. Hoos and Thomas Stützle. Stochastic Local Search: Foundations and Applications. 2005. ISBN: 1493303732. In The Morgan Kaufmann Series in Artificial Intelligence. Amsterdam, The Netherlands: Elsevier.

  6. Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/

  7. Thomas Weise. Global Optimization Algorithms - Theory and Application. 2009. http://www.it-weise.de/projects/book.pdf

class moptipy.algorithms.so.hill_climber.HillClimber(op0, op1)[source]

Bases: Algorithm1

The stochastic hill climbing algorithm only accepts improving moves.

In each step, a hill climber creates a modified copy new_x of the current best solution best_x. If new_x is better than best_x, it becomes the new best_x. Otherwise, it is discarded.

solve(process)[source]

Apply the hill climber to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.hill_climber_with_restarts module

The implementation of the hill climbing algorithm with restarts hcr.

This algorithm basically works like the normal hill climber hc (HillClimber), but it will restart automatically if no move was successful for max_moves_without_improvement iterative steps. It therefore maintains an internal counter count which is set to zero at the beginning of each restart and which is also set to zero again any time a move successfully improved the best-so-far solution of the current restart. If a search move, i.e., an application of the unary operator, yielded a new solution which is not better than the best-so-far solution of the current restart, count is incremented. If count >= max_moves_without_improvement, the algorithm begins a new restart with a new random solution.
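The counter logic can be sketched as follows (plain Python, hypothetical f, op0, op1, and step budget; the overall best solution across restarts would be tracked by the surrounding Process):

    def hill_climber_with_restarts(f, op0, op1, steps, max_moves, rng):
        best_x = op0(rng)                # random start of the first restart
        best_f = f(best_x)
        count = 0                        # moves without improvement
        for _ in range(steps):
            new_x = op1(rng, best_x)
            new_f = f(new_x)
            if new_f < best_f:           # improvement: accept, reset counter
                best_x, best_f, count = new_x, new_f, 0
            else:
                count += 1
                if count >= max_moves:   # too long without improvement:
                    best_x = op0(rng)    # restart from a random solution
                    best_f = f(best_x)
                    count = 0
        return best_x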

  1. Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/

class moptipy.algorithms.so.hill_climber_with_restarts.HillClimberWithRestarts(op0, op1, max_moves_without_improvement)[source]

Bases: Algorithm1

The stochastic hill climbing algorithm only accepts improving moves.

In each step, a hill climber creates a modified copy new_x of the current best solution best_x. If new_x is better than best_x, it becomes the new best_x. Otherwise, it is discarded. If no improvement is made for max_moves_without_improvement steps, the algorithm restarts.

log_parameters_to(logger)[source]

Log all parameters of this component as key-value pairs.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

max_moves_without_improvement: Final[int]

the maximum moves without improvement

solve(process)[source]

Apply the hill climber with restarts to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.ma module

A simple implementation of a (mu+lambda) Memetic Algorithm.

A memetic algorithm (MA) works similarly to a (mu+lambda) EA, but it refines all results of the search operators with a local search ls executed for ls_fes objective function evaluations. It also only employs the nullary search operator (to create the initial random solutions) and the binary search operator (to combine two selected parental solutions).

This is the general form of the Memetic Algorithm of the type “EA+something else.” A specialized version combining an ea with rls can be found in marls.

This Memetic Algorithm implementation begins by sampling mu solutions using the nullary search operation op0. Each of these initial solutions is used as a starting point of a local search ls, which is executed for ls_fes objective function evaluations. In each iteration, it then uses the mu existing solutions as input for the binary search operator op2 to create lambda_ new solutions, each of which is again used as a starting point of a local search ls executed for ls_fes objective function evaluations. The results of the local searches enter the population, and in the next iteration, the mu best of the mu + lambda_ solutions in the population are retained.

Due to the Process and subprocesses API of moptipy, you can use almost arbitrary algorithms as local search ls. The only requirement is that it is a subclass of Algorithm0 and uses an instance of moptipy.operators.op0_forward.Op0Forward as its nullary search operator (op0). This allows the MA to set a solution as the starting point for the inner algorithm ls.

It should be noted that it is by no means required that the inner algorithm ls needs to be a local search. Any algorithm that fulfills the above requirements could be used. For example, if we were conducting numerical optimization, it would be totally OK to plug an instance of the Sequential Least Squares Programming (SLSQP) algorithm into the memetic algorithm…

Further reading on how to use one algorithm as a sub-algorithm of another one can be found in the documentation of from_starting_point(), for_fes(), and moptipy.operators.op0_forward.Op0Forward.
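A compact sketch of this loop in plain Python (f, op0, and op2 are hypothetical stand-ins, and ls is a hypothetical callable that runs the inner local search from a given starting point for ls_fes objective function evaluations and returns the refined solution):

    from random import Random

    def ma(f, op0, op2, ls, mu, lambda_, ls_fes, iterations, rng: Random):
        # initial population: mu random solutions, each refined by ls
        pop = []
        for _ in range(mu):
            x = ls(rng, op0(rng), ls_fes)
            pop.append((f(x), x))
        for _ in range(iterations):
            for _ in range(lambda_):      # lambda_ recombinations per round
                a, b = rng.sample(pop, 2)  # two parental solutions
                x = ls(rng, op2(rng, a[1], b[1]), ls_fes)
                pop.append((f(x), x))
            pop.sort(key=lambda rec: rec[0])  # best objective values first
            del pop[mu:]                  # keep the mu best
        return pop[0][1]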

  1. Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program Report C3P 826. 1989. Pasadena, CA, USA: California Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P). https://www.researchgate.net/publication/2354457

  2. Carlos Cotta, Luke Mathieson, and Pablo Moscato. Memetic Algorithms. In Rafael Martí, Panos M. Pardalos, and Mauricio G. C. Resende, editors, Handbook of Heuristics. Part~III: Metaheuristics, pages 607-638. 2018. Cham, Switzerland: Springer. ISBN: 978-3-319-07123-7. doi: https://doi.org/10.1007/978-3-319-07153-4_29-1 https://www.researchgate.net/publication/315660932

  3. William Eugene Hart, James E. Smith, and Natalio Krasnogor, editors. Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing (STUDFUZZ), volume 166. 2005. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-22904-9. doi: https://doi.org/10.1007/3-540-32363-5

  4. Ferrante Neri, Carlos Cotta, and Pablo Moscato. Handbook of Memetic Algorithms. Volume 379 of Studies in Computational Intelligence (SCI). 2012. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-642-23246-6 doi https://doi.org/10.1007/978-3-642-23247-3.

  5. L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution, The Baldwin Effect and Function Optimization. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Proceedings of the Third Conference on Parallel Problem Solving from Nature; International Conference on Evolutionary Computation (PPSN III), October 9-14, 1994, Jerusalem, Israel, pages 5-15. Volume 866/1994 of Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag GmbH. ISBN 0387584846. doi: https://doi.org/10.1007/3-540-58484-6_245. https://www.researchgate.net/publication/2521727

class moptipy.algorithms.so.ma.MA(op0, op2, ls, mu=2, lambda_=1, ls_fes=1000, name='ma')[source]

Bases: Algorithm0

An MA is a population-based algorithm using binary operators.

initialize()[source]

Initialize the memetic algorithm.

Return type:

None

lambda_: Final[int]

the number of offspring per generation

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

ls: Final[Algorithm0]

the local search algorithm

ls_fes: Final[int]

the number of FEs per local search run

mu: Final[int]

the number of records to survive in each generation

op2: Final[Op2]

the binary search operator

solve(process)[source]

Apply the MA to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.marls module

A (mu+lambda) Memetic Algorithm using Randomized Local Search (RLS).

A memetic algorithm (MA) works similarly to a (mu+lambda) EA, but it refines all results of the search operators with a local search, in this case RLS, which is executed for ls_fes objective function evaluations. It also only employs the nullary search operator (to create the initial random solutions) and the binary search operator (to combine two selected parental solutions), leaving the application of the unary search operator to the RLS.

A general implementation of a Memetic Algorithm into which arbitrary algorithms can be plugged is given in ma. Here, the RLS part and the EA part of the MA are directly merged in a hard-coded fashion. If the general MA is configured equivalently to the MARLS here, i.e., uses the same search operators, same mu, lambda_, and ls_fes, then both algorithms will do exactly the same search steps.

  1. Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program Report C3P 826. 1989. Pasadena, CA, USA: California Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P). https://www.researchgate.net/publication/2354457

  2. Carlos Cotta, Luke Mathieson, and Pablo Moscato. Memetic Algorithms. In Rafael Martí, Panos M. Pardalos, and Mauricio G. C. Resende, editors, Handbook of Heuristics. Part~III: Metaheuristics, pages 607-638. 2018. Cham, Switzerland: Springer. ISBN: 978-3-319-07123-7. doi: https://doi.org/10.1007/978-3-319-07153-4_29-1 https://www.researchgate.net/publication/315660932

  3. William Eugene Hart, James E. Smith, and Natalio Krasnogor, editors. Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing (STUDFUZZ), volume 166. 2005. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-22904-9. doi: https://doi.org/10.1007/3-540-32363-5

  4. Ferrante Neri, Carlos Cotta, and Pablo Moscato. Handbook of Memetic Algorithms. Volume 379 of Studies in Computational Intelligence (SCI). 2012. Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-642-23246-6 doi https://doi.org/10.1007/978-3-642-23247-3.

  5. L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution, The Baldwin Effect and Function Optimization. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Proceedings of the Third Conference on Parallel Problem Solving from Nature; International Conference on Evolutionary Computation (PPSN III), October 9-14, 1994, Jerusalem, Israel, pages 5-15. Volume 866/1994 of Lecture Notes in Computer Science, Berlin, Germany: Springer-Verlag GmbH. ISBN 0387584846. doi: https://doi.org/10.1007/3-540-58484-6_245. https://www.researchgate.net/publication/2521727

class moptipy.algorithms.so.marls.MARLS(op0, op1, op2, mu=1, lambda_=1, ls_fes=1000, name='marls')[source]

Bases: Algorithm2

A Memetic Algorithm as Evolutionary Algorithm + Randomized Local Search.

This class implements a Memetic Algorithm (MA) basically as an Evolutionary Algorithm (EA), which only uses the nullary and binary search operators, and that refines each newly generated solution by applying a Randomized Local Search (RLS), which employs the unary search operator.

lambda_: Final[int]

the number of offspring per generation

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

ls_fes: Final[int]

the number of FEs per local search run

mu: Final[int]

the number of records to survive in each generation

solve(process)[source]

Apply the MA=EA+RLS to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.ppa module

A simple implementation of a Plant Propagation Algorithm (PPA).

This is a simple implementation of the Plant Propagation Algorithm, PPA for short, with some tweaks and modifications. Our PPA implementation works as follows:

  1. It starts with a set of m randomly sampled solutions in the list lst. Each solution x is evaluated and its objective value f(x) is remembered.

  2. In the main loop…

    1. First, the range [fmin,fmax] of the objective values of the first m solutions in lst is determined. We set frange = fmax - fmin, where fmax is the largest objective value of any of the first m solutions in lst and fmin is the smallest one. If frange > 0, then the fitness z(x) of each element is (f(x) - fmin) / frange. Otherwise, i.e., if all solutions in lst have the same objective value, we set z(x) to a random number uniformly distributed in [0,1), drawn separately for each solution.

    2. For each of the first m solutions x in lst, we create 1 + int(nmax * r * (1 - z(x))) offspring, where nmax is the maximum number of offspring per solution and r is an independently drawn random number uniformly distributed in [0,1). In other words, solutions with a fitness close to zero will produce more offspring. If the solutions in the list lst have different objective values, this means that better solutions produce more offspring. Each such offspring is the result of the application of a unary operator with step size, i.e., an instance of Op1WithStepSize. The step size is set to r * max_step * z(x), where r again is a freshly and independently drawn random number uniformly distributed in [0,1). This means that better solutions are modified with smaller step sizes and worse solutions are modified more strongly. max_step is a parameter of the algorithm that determines the maximum permissible step size. It is always from the interval [0,1]. Examples for such operators are given in op1_swap_exactly_n, op1_swap_try_n, or op1_flip_m. The new solutions are appended to lst and their objective values are computed (see the sketch after these steps).

    3. The list is then sorted by objective values in ascending order, meaning that the best solutions are up front.
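The fitness, offspring-count, and step-size computations of steps 2.1 and 2.2 can be sketched as follows (plain Python; ppa_offspring_plan is a hypothetical helper for illustration, not part of moptipy):

    def ppa_offspring_plan(pop, m, nmax, max_step, rng):
        # pop: list of (x, f(x)) pairs, best solutions first; returns, for
        # each of the first m entries, its fitness z and the step size for
        # each offspring to be created from it
        fs = [fx for _, fx in pop[:m]]
        fmin, fmax = min(fs), max(fs)
        frange = fmax - fmin
        plan = []
        for x, fx in pop[:m]:
            # normalized objective value in [0, 1]; random if all are equal
            z = (fx - fmin) / frange if frange > 0 else rng.random()
            n_offspring = 1 + int(nmax * rng.random() * (1.0 - z))
            steps = [rng.random() * max_step * z for _ in range(n_offspring)]
            plan.append((x, z, steps))
        return plan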

The main differences between this procedure and the “standard-PPA” are as follows:

  1. The algorithm is implemented for minimization and all equations are modified accordingly.

  2. After normalizing the objective values in the population, the tanh-based scaling is not applied. Instead, the normalized objective values, where 0 is best and 1 is worst, are used directly to determine the number of offspring per record and the step length.

  3. The fitness of a record equals its normalized objective value (in [0, 1]), unless all records have the same objective value, in which case the fitness of each record is set to a random number uniformly distributed in [0, 1). If all elements in the population have the same objective value, normalizing is not possible as it would lead to a division by zero. One could use a constant value, say 0.5, in this case, but there is no guarantee that this would be a good choice. We therefore use random values from [0, 1) instead. These may sometimes be suitable, sometimes not. But at least they likely are not always a bad choice, which might happen in some scenarios with 0.5 or any other constant.

  4. The decisions regarding the number of offspring per selected record and the step-width of the search moves are made only based on this fitness (and, again, not on the tanh scaling which is not used). Since we normally do not know the characteristics of the objective function in advance, I think that we also often do not know whether a tanh scaling (that emphasizes objective values close to the best and close to the worst) is necessary or a good idea. It could be good in some cases, but it might as well be a bad choice in others. For now, I have thus not implemented this and just use the raw normalized objective values.

  5. As unary operators, we employ instances of the class Op1WithStepSize, which provides a unary operator with a step size between 0 (smallest possible modification) to 1 (largest possible modification) and will scale appropriately between the two extremes. Often, instances of this class will determine the number or magnitude of changes based on an exponential scaling (see exponential_step_size()) of the step length. The idea is that small step sizes should be emphasized and that really big step sizes are often rarely needed. This thus effectively takes the place of the tanh scaling.

  6. The maximum step length, i.e., the parameter max_step, is not explicitly used in some of the papers.

In order to understand the behavior of the algorithm, consider the following case. Assume that we set the maximum number (nmax) of offspring per solution to 1 and the number m of solutions to survive selection to 1 as well. In this case, the PPA has exactly the same “population structure” as the Randomized Local Search (rls), namely it preserves the best-so-far solution and generates one new solution in each step, which then competes with that best-so-far solution. The two algorithms then only differ in their search operator: The step-length of the unary operator used in PPA depends on the relative objective value and the one of the RLS does not. However, there are many situations where the two could still be equivalent. For example, if the current and new solution have different objective values, normalizing the objective value will mean that the best of the two has normalized objective value “0”. This equates to the shortest possible step length. In this case, for example, the step-length based operator op1_swap_try_n behaves exactly like the op1_swap2 operator and the step-length based op1_flip_m operator behaves like the op1_flip1. Of course, if both the current and the new solution have the same objective value, then we use a random number from [0,1) as normalized objective value, so the operators would not behave the same. Then again, one could set the maximum step length max_step to 0. In this case, the step length is always zero and most of our step-length based operations will behave like fixed small step-length based counterparts, as mentioned above. So in other words, if we set both m and nmax to 1 and set max_step to 0, our PPA behaves like rls (if the search operators are appropriately chosen).

Now rls is also known as the (1+1) EA and indeed, it is a special case of the (mu+lambda) EA implemented in ea. I think that, with some appropriate parameter settings, we can probably construct some setups of both algorithms with larger populations that should be equivalent or close-to-equivalent in the big picture.

Below, you can find references on the PPA.

  1. Abdellah Salhi and Eric Serafin Fraga. Nature-Inspired Optimisation Approaches and the New Plant Propagation Algorithm. Proceeding of the International Conference on Numerical Analysis and Optimization (ICeMATH’2011), June 6-8, 2011, Yogyakarta, Indonesia, volume 1, pages K2-1–K2-8. ISBN: 978-602-98919-1-1. https://doi.org/10.13140/2.1.3262.0806. https://repository.essex.ac.uk/9974/1/paper.pdf.

  2. Misha Paauw and Daan van den Berg. Paintings, Polygons and Plant Propagation. In Anikó Ekárt, Antonios Liapis, and María Luz Castro Pena, editors, Proceedings of the 8th International Conference on Computational Intelligence in Music, Sound, Art and Design (EvoMUSART’19, Part of EvoStar), April 24-26, 2019, Leipzig, Germany, Lecture Notes in Computer Science (LNCS), volume 11453, pages 84-97. ISBN: 978-3-030-16666-3. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-16667-0_6. https://www.researchgate.net/publication/332328080.

  3. Muhammad Sulaiman, Abdellah Salhi, Eric Serafin Fraga, Wali Khan Mashwa, and Muhammad M. Rashi. A Novel Plant Propagation Algorithm: Modifications and Implementation. Science International (Lahore) 28(1):201-209, #2330, January/February 2016. http://www.sci-int.com/pdf/4066579081%20a%20201-209%20PPA%20Science%20international_Wali.pdf. https://arxiv.org/pdf/1412.4290.pdf

  4. Hussein Fouad Almazini, Salah Mortada, Hassan Fouad Abbas Al-Mazini, Hayder Naser Khraibet AL-Behadili, and Jawad Alkenani. Improved Discrete Plant Propagation Algorithm for Solving the Traveling Salesman Problem. IAES International Journal of Artificial Intelligence (IJ-AI) 11(1):13-22. March 2022. http://doi.org/10.11591/ijai.v11.i1.pp13-22. https://www.researchgate.net/publication/357484222.

  5. Birsen İrem Selamoğlu and Abdellah Salhi. The Plant Propagation Algorithm for Discrete Optimisation: The Case of the Travelling Salesman Problem. In Xin-She Yang, editor, Nature-Inspired Computation in Engineering, pages 43-61. Studies in Computational Intelligence (SCI), Volume 637. March 2016. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-30235-5_3. https://www.researchgate.net/publication/299286896.

  6. Marleen de Jonge and Daan van den Berg. Parameter Sensitivity Patterns in the Plant Propagation Algorithm. In Juan Julián Merelo Guervós, Jonathan M. Garibaldi, Christian Wagner, Thomas Bäck, Kurosh Madani, and Kevin Warwick, editors, Proceedings of the 12th International Joint Conference on Computational Intelligence (IJCCI’20), November 2-4, 2020, Budapest, Hungary, pages 92-99. Setúbal, Portugal: SciTePress. https://doi.org/10.5220/0010134300920099. https://www.researchgate.net/publication/346829569.

  7. Ege de Bruin. Escaping Local Optima by Preferring Rarity with the Frequency Fitness Assignment. Master’s Thesis at Vrije Universiteit Amsterdam, Amsterdam, the Netherlands. 2022.

  8. Wouter Vrielink and Daan van den Berg. Parameter control for the Plant Propagation Algorithm. In Antonio M. Mora and Anna Isabel Esparcia-Alcázar, editors, Late-Breaking Abstracts of EvoStar’21, April 7-9, 2021, online conference. https://arxiv.org/pdf/2106.11804.pdf. https://www.researchgate.net/publication/350328314.

  9. Levi Koppenhol, Nielis Brouwer, Danny Dijkzeul, Iris Pijning, Joeri Sleegers, and Daan van den Berg. Exactly Characterizable Parameter Settings in a Crossoverless Evolutionary Algorithm. In Jonathan E. Fieldsend and Markus Wagner, editors, Genetic and Evolutionary Computation Conference (GECCO’22) Companion Volume, July 9-13, 2022, Boston, MA, USA, pages 1640-1649. New York, NY, USA: ACM. https://doi.org/10.1145/3520304.3533968. https://www.researchgate.net/publication/362120506.

class moptipy.algorithms.so.ppa.PPA(op0, op1, m=30, nmax=5, max_step=0.3, name='ppa')[source]

Bases: Algorithm1

The Plant Propagation Algorithm (PPA).

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

m: Final[int]

the number of records to survive in each generation

max_step: Final[float]

the maximum step length

nmax: Final[int]

the maximum number of offspring per solution per iteration

solve(process)[source]

Apply the PPA to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.record module

A simple record for storing a solution with its quality.

class moptipy.algorithms.so.record.Record(x, f)[source]

Bases: object

A point in the search space, its quality, and creation time.

A record stores a point in the search space x together with the corresponding objective value f. It also stores an “iteration index” it, i.e., the time when the point was created or modified.

This allows for representing and storing solutions in a population. If the population is sorted, then records with better objective value will be moved to the beginning of the list. Ties are broken such that younger individuals (with higher it value) are preferred.

f: int | float

the objective value corresponding to x

it: int

the iteration index when x was sampled

x: Final

the point in the search space

moptipy.algorithms.so.rls module

The implementation of the Randomized Local Search algorithm rls.

The algorithm starts by applying the nullary search operator, an implementation of op0(), to sample one fully random solution. This is the first best-so-far solution. In each step, it applies the unary operator, an implementation of op1(), to the best-so-far solution to obtain a new, similar solution. If this new solution is not worse than the current best-so-far solution, it replaces this solution. Otherwise, it is discarded.

The rls algorithm is a simple local search that accepts all non-deteriorating moves. It is thus similar to the simple hill climber hc implemented in HillClimber, which, however, accepts only strictly improving moves. rls is also equivalent to a (mu+lambda)-EA without crossover as implemented in EA if the same unary and nullary operator are used and mu=1, lambda=1, and br=0. rls, however, will be faster as it does not represent a population of solutions as a list of objects but can directly utilize local variables.
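The loop can be sketched as follows (plain Python, hypothetical f, op0, op1, and step budget); compared with the hill climber, only the acceptance condition differs:

    def rls(f, op0, op1, steps, rng):
        best_x = op0(rng)             # nullary operator: random start
        best_f = f(best_x)
        for _ in range(steps):
            new_x = op1(rng, best_x)  # unary operator: modified copy
            new_f = f(new_x)
            if new_f <= best_f:       # non-deteriorating moves are accepted
                best_x, best_f = new_x, new_f
        return best_x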

Strictly speaking, the name “Randomized Local Search” only fits partially to the algorithm we implement here. Take the discrete search domain, where the search spaces are bit strings of a fixed length n, as an example. The name “Randomized Local Search” and the abbreviation rls has a fixed meaning on this domain: It is the algorithm that starts with a random solution and flips exactly one randomly chosen bit in each step. This corresponds to our rls algorithm with the operator Op1Flip1. However, an algorithm that starts with a random solution and flips a number of bits sampled from a Binomial distribution is called (1+1) EA. Now this algorithm corresponds again to our rls algorithm, but this time with operator Op1MoverNflip. In other words, we can implement (at least) two algorithms with well-known and fixed names by plugging different operators into our rls approach. One of them is called RLS, the other one is called (1+1) EA. Now this is somewhat confusing but results from the general nature of our basic framework. Regardless of what we do, we will have some form of name clash here. We advise the user of our algorithms to be careful with respect to literature and scientific conventions when using our framework.

  1. Frank Neumann and Ingo Wegener. Randomized Local Search, Evolutionary Algorithms, and the Minimum Spanning Tree Problem. Theoretical Computer Science. 378(1):32-40, June 2007. https://doi.org/10.1016/j.tcs.2006.11.002, https://eldorado.tu-dortmund.de/bitstream/2003/5454/1/165.pdf

  2. Holger H. Hoos and Thomas Stützle. Stochastic Local Search: Foundations and Applications. 2005. ISBN: 1493303732. In The Morgan Kaufmann Series in Artificial Intelligence. Amsterdam, The Netherlands: Elsevier.

  3. Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/

class moptipy.algorithms.so.rls.RLS(op0, op1)[source]

Bases: Algorithm1

The RLS is a simple local search accepting all non-worsening moves.

In each step, an RLS creates a modified copy new_x of the current best solution best_x. If new_x is not worse than best_x, it becomes the new best_x. Otherwise, it is discarded.

solve(process)[source]

Apply the RLS to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.so.simulated_annealing module

The simulated annealing algorithm with configurable temperature schedule.

A basic randomized local search (rls) maintains one interesting solution and derives one new solution from it using the unary search operator (Op1). The new solution replaces the current solution if it is not worse, i.e., better or equally good. Simulated Annealing is similar to the rls, but it sometimes also accepts solutions that are worse than the current one. It does so with a probability that becomes smaller the worse the new solution is and also becomes smaller the smaller the current “temperature” is.

Simulated Annealing applies a so-called temperature schedule (see temperature_schedule), which basically is a function that relates the index of the algorithm iteration (i.e., the index of the current objective function evaluation) to a temperature. It therefore is a function accepting an integer value as input and returning a float temperature. This function is usually monotonically decreasing over time, meaning that the initial “temperature” is high and then becomes smaller. The algorithm therefore is more likely to accept worse solutions at its beginning, whereas it behaves basically like an rls at the end of its computational budget (if configured correctly).
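The acceptance decision is commonly realized via the Metropolis criterion, sketched below for minimization (this concrete rule is an assumption for illustration; the moptipy implementation may differ in details):

    from math import exp
    from random import Random

    def accept(new_f: float, cur_f: float, temperature: float,
               rng: Random) -> bool:
        # always accept moves that are not worse
        if new_f <= cur_f:
            return True
        if temperature <= 0.0:
            return False
        # accept a worsening move with probability
        # exp(-(new_f - cur_f) / temperature): the worse the move and the
        # colder the temperature, the less likely the acceptance
        return rng.random() < exp(-(new_f - cur_f) / temperature)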

Simulated Annealing was independently developed by several researchers [1-4]. The idea is inspired by Metropolis’ approximation of how annealing can be simulated [5].

  1. Scott Kirkpatrick, C. Daniel Gelatt, Jr., and Mario P. Vecchi. Optimization by Simulated Annealing. Science Magazine. 220(4598):671-680. May 13, 1983. doi: https://doi.org/10.1126/science.220.4598.671. https://www.researchgate.net/publication/6026283

  2. Vladimír Černý. Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm. Journal of Optimization Theory and Applications. 45(1):41-51. January 1985. doi: https://doi.org/10.1007/BF00940812. http://mkweb.bcgsc.ca/papers/cerny-travelingsalesman.pdf.

  3. Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code Optimization. ACM SIGMICRO Newsletter. 13(4):143-148. December 1982. Also in Proceedings of the 15th Annual Workshop on Microprogramming (MICRO 15), October 5-7, 1982, Palo Alto, CA, USA, New York, NY, USA: ACM. doi: http://doi.org/10.1145/1014194.800944.

  4. Martin Pincus. Letter to the Editor - A Monte Carlo Method for the Approximate Solution of Certain Types of Constrained Optimization Problems. Operations Research. 18(6):1225-1228. November/December 1970. doi: https://doi.org/10.1287/opre.18.6.1225.

  5. Nicholas Metropolis, Arianna W. Rosenbluth, Marshall Nicholas Rosenbluth, Augusta H. Teller, Edward Teller. Equation of State Calculations by Fast Computing Machines. The Journal of Chemical Physics. 21(6):1087-1092. June 1953. doi: https://doi.org/10.1063/1.1699114. http://scienze-como.uninsubria.it/bressanini/montecarlo-history/mrt2.pdf.

class moptipy.algorithms.so.simulated_annealing.SimulatedAnnealing(op0, op1, schedule)[source]

Bases: Algorithm1

Simulated Annealing is an RLS sometimes accepting worsening moves.

initialize()[source]

Initialize the algorithm.

Return type:

None

log_parameters_to(logger)[source]

Log all parameters of the simulated annealing algorithm.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

schedule: Final[TemperatureSchedule]

the temperature schedule

solve(process)[source]

Apply the simulated annealing algorithm to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None