moptipy.algorithms package

In this package, we provide implementations of metaheuristic algorithms.

We divide them into single-objective algorithms, which are provided in the package moptipy.algorithms.so, and multi-objective algorithms, which can be found in package moptipy.algorithms.mo.

Methods that are based on random sampling and random walks are unaffected by the number of objective functions and can therefore be found directly in this package.

The general black-box methods are given directly in this package; more specialized algorithms are placed in sub-packages corresponding to their requirements for the search and solution spaces.

Submodules

moptipy.algorithms.luby_restarts module

Perform restarts of an algorithm based on the Luby sequence.

Luby et al. showed that, for Las Vegas algorithms with a known runtime distribution, there is an optimal stopping time that minimizes the expected running time. Even if the distribution is unknown, there is a universal sequence of run lengths, (1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, …), which is an optimal restarting strategy up to constant factors. While this result only holds for Las Vegas algorithms, the same restart lengths may also be used for optimization, e.g., if we aim to find a globally optimal solution.

  1. M. Luby, A. Sinclair, and D. Zuckerman. Optimal Speedup of Las Vegas Algorithms. Information Processing Letters 47(4):173-180. September 1993. https://doi.org/10.1016/0020-0190(93)90029-9
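The universal sequence can be computed directly from its recursive definition. A minimal sketch (not part of moptipy):

```python
def luby(i: int) -> int:
    """Return the i-th element (1-based) of the Luby sequence."""
    k = i.bit_length()          # smallest k such that i < 2 ** k
    if i == (1 << k) - 1:       # i == 2 ** k - 1: the element is 2 ** (k - 1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise, recurse into the prefix

print([luby(i) for i in range(1, 16)])
# [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]
```

Each block of the sequence repeats everything seen so far and then appends the next power of two, which is why the same prefix keeps reappearing.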

class moptipy.algorithms.luby_restarts.T

the type variable for single- and multi-objective algorithms.

alias of TypeVar(‘T’, ~moptipy.api.algorithm.Algorithm, ~moptipy.api.mo_algorithm.MOAlgorithm)

moptipy.algorithms.luby_restarts.luby_restarts(algorithm, base_fes=64, log_restarts=False)[source]

Perform restarts of an algorithm in the Luby fashion.

The restart run length in FEs is determined by the Luby sequence (1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, 1, 1, 2, 1, 1, 2, 4, 1, …) multiplied by base_fes. Restarts are performed until a termination criterion is met.

Parameters:
  • algorithm (TypeVar(T, Algorithm, MOAlgorithm)) – the algorithm

  • base_fes (int, default: 64) – the base number of objective function evaluations; each restart runs for base_fes times the corresponding Luby sequence value

  • log_restarts (bool, default: False) – should we log the restarts?

Return type:

TypeVar(T, Algorithm, MOAlgorithm)
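For example, with the default base_fes=64, the first restarts would receive the FE budgets below. This is only a sketch of the budget computation (reusing the recursive Luby definition), not the moptipy implementation:

```python
def luby(i: int) -> int:
    """Return the i-th element (1-based) of the Luby sequence."""
    k = i.bit_length()
    if i == (1 << k) - 1:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

base_fes = 64  # the default of luby_restarts
budgets = [base_fes * luby(i) for i in range(1, 8)]
print(budgets)  # [64, 64, 128, 64, 64, 128, 256]
```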

moptipy.algorithms.random_sampling module

The random sampling algorithm rs.

The random sampling algorithm keeps creating random solutions until the computational budget is exhausted or any other termination criterion is met. In a loop that is repeated until the termination criterion should_terminate() becomes True, it

  • applies the nullary search operator, an implementation of op0(), to sample exactly one single random solution. It then

  • evaluates the solution by passing it to evaluate().

This algorithm has a very bad performance. It makes no use of the information that is learned during the search. Every single solution is created completely independently of anything that has happened before in the algorithm. Only if the problem is completely random, has no structure, or is entirely deceptive can this algorithm be of any use.
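The loop above can be sketched without the moptipy API as follows; here sample, evaluate, and the fixed FE budget are stand-ins for op0(), evaluate(), and should_terminate():

```python
import random

def random_sampling(sample, evaluate, budget_fes):
    """Toy random sampling: keep the best of budget_fes random solutions."""
    best_x, best_f = None, float("inf")
    for _ in range(budget_fes):     # stand-in for should_terminate()
        x = sample()                # nullary operator: a fresh random point
        f = evaluate(x)             # objective value (minimization)
        if f < best_f:              # bookkeeping only; nothing is learned
            best_x, best_f = x, f
    return best_x, best_f

# Example: minimize the number of ones in a random 16-bit string.
rng = random.Random(42)
x, f = random_sampling(
    sample=lambda: [rng.randint(0, 1) for _ in range(16)],
    evaluate=sum, budget_fes=200)
```

Note that each call to sample() is independent of all previous ones, which is exactly why the algorithm wastes the information gathered during the run.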

Basically, this algorithm is an iterated version of the 1rs algorithm implemented in SingleRandomSample.

  1. Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/

class moptipy.algorithms.random_sampling.RandomSampling(op0)[source]

Bases: Algorithm0

In each step, random sampling creates a new, random solution.

solve(process)[source]

Apply the random sampling approach to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.random_walk module

A random walk algorithm implementation.

A random walk starts with a random point in the search space created with the nullary search operator. In each step, it applies the unary search operator to move to a new point. It does not care whether the new point is better or worse; it always accepts it.

Of course, it still calls the objective function and informs the moptipy.api.process.Process about each new point so that, at the end, we can obtain the best point that was visited. During the course of its run, however, it walks around the search space randomly and without direction.
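A toy version of this behavior (simplified, not the moptipy implementation): the walk always accepts the modified copy, while the best point ever visited is remembered on the side:

```python
import random

def flip_one(x, rng):
    """Unary operator: copy x and flip one random bit."""
    y = list(x)
    y[rng.randrange(len(y))] ^= 1
    return y

def random_walk(sample, move, evaluate, budget_fes):
    """Toy random walk: always accept the new point; remember the best."""
    x = sample()                    # nullary operator: the starting point
    best_x, best_f = x, evaluate(x)
    for _ in range(budget_fes - 1):
        x = move(x)                 # unary operator; always accepted
        f = evaluate(x)
        if f < best_f:              # bookkeeping only, no selection pressure
            best_x, best_f = x, f
    return best_x, best_f

rng = random.Random(1)
best_x, best_f = random_walk(
    sample=lambda: [rng.randint(0, 1) for _ in range(16)],
    move=lambda x: flip_one(x, rng),
    evaluate=sum, budget_fes=500)
```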

class moptipy.algorithms.random_walk.RandomWalk(op0, op1)[source]

Bases: Algorithm1

Perform a random walk through the search space.

In each step, a random walk creates a modified copy of the current solution and accepts it as the starting point for the next step.

solve(process)[source]

Apply the random walk to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

moptipy.algorithms.restarts module

Perform restarts of an algorithm that terminates too early.

class moptipy.algorithms.restarts.T

the type variable for single- and multi-objective algorithms.

alias of TypeVar(‘T’, ~moptipy.api.algorithm.Algorithm, ~moptipy.api.mo_algorithm.MOAlgorithm)

moptipy.algorithms.restarts.restarts(algorithm)[source]

Perform restarts of an algorithm until the termination criterion is met.

Parameters:

algorithm (TypeVar(T, Algorithm, MOAlgorithm)) – the algorithm

Return type:

TypeVar(T, Algorithm, MOAlgorithm)
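A toy model of such a plain restart wrapper (hypothetical names; the real function wraps the algorithm's solve() method): the inner, early-terminating algorithm is simply re-run until the overall termination criterion is met:

```python
def solve_with_restarts(solve_once, should_terminate):
    """Re-run solve_once until should_terminate() becomes True."""
    runs = 0
    while not should_terminate():
        solve_once()  # one complete run of the inner algorithm
        runs += 1
    return runs

# Stand-in termination criterion: permit exactly five runs.
state = {"checks_left": 5}
def budget_exhausted():
    state["checks_left"] -= 1
    return state["checks_left"] < 0

runs = solve_with_restarts(lambda: None, budget_exhausted)
print(runs)  # 5
```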

moptipy.algorithms.single_random_sample module

The 1rs “algorithm” creates one single random solution.

The single random sample algorithm applies the nullary search operator, an implementation of op0(), to sample exactly one single random solution. It then evaluates this solution by passing it to evaluate() and terminates.
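As a toy sketch (with sample and evaluate as stand-ins for op0() and evaluate()):

```python
import random

def single_random_sample(sample, evaluate):
    """Toy 1rs: create one random solution, evaluate it, and stop."""
    x = sample()            # nullary operator: one random solution
    return x, evaluate(x)   # one single objective evaluation; done

rng = random.Random(7)
x, f = single_random_sample(
    sample=lambda: [rng.randint(0, 1) for _ in range(8)],
    evaluate=sum)
```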

This is a very bad optimization algorithm. We only use it in our book to illustrate one basic concept for solving optimization problems: the generation of random solutions. The single random sampling algorithm here is actually very wasteful: since it generates exactly one single solution, it does not use its computational budget well. Even if you granted it 10’000 years, it would still only generate one solution. Even if it could generate and test thousands or millions of solutions, it would not do so. Nevertheless, after applying this “algorithm,” you will have one valid solution remembered in the optimization process (embodied as the instance process of Process).

This concept of random sampling is then refined in RandomSampling as the rs algorithm, which repeats generating random solutions until its allotted runtime is exhausted.

  1. Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/

class moptipy.algorithms.single_random_sample.SingleRandomSample(op0)[source]

Bases: Algorithm0

An algorithm that creates one single random solution.

solve(process)[source]

Apply single random sampling to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None