Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/
Bases: Algorithm0
In each step, random sampling creates a new, random solution.
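The core of this rs loop can be sketched as follows. This is a minimal illustration only, assuming moptipy-style interfaces (Process.create(), Process.get_random(), Process.evaluate(), Process.should_terminate(), and Op0.op0(random, dest)); the actual library code may differ in its details.

    def random_sampling_solve(process, op0):
        # Sketch: keep sampling fresh random solutions until the budget
        # is exhausted; the Process remembers the best evaluated solution.
        random = process.get_random()        # random number generator
        x = process.create()                 # container for one solution
        while not process.should_terminate():
            op0.op0(random, x)               # sample a new random solution
            process.evaluate(x)              # evaluate and register it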
A random walk algorithm implementation.
A random walk starts with a random point in the search space created with the nullary search operator. In each step, it applies the unary search operator to move to a new point. It does not care whether the new point is better or worse; it always accepts it.
Of course, it still needs to call the objective function and inform the moptipy.api.process.Process about each new point so that, at the end, we can obtain the best point that was visited. During the course of its run, however, it walks around the search space randomly and without direction.
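As a rough illustration, assuming the same moptipy-style interfaces as above plus a unary operator Op1.op1(random, dest, x), one walk step after another could look like this; the real implementation may differ:

    def random_walk_solve(process, op0, op1):
        random = process.get_random()        # random number generator
        x_cur = process.create()             # current point of the walk
        x_new = process.create()             # container for the next point
        op0.op0(random, x_cur)               # start at a random point
        process.evaluate(x_cur)              # inform the Process about it
        while not process.should_terminate():
            op1.op1(random, x_new, x_cur)    # derive a modified copy of x_cur
            process.evaluate(x_new)          # the Process tracks the best point
            x_cur, x_new = x_new, x_cur      # always accept the new point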
Bases: Algorithm1
Perform a random walk through the search space.
In each step, a random walk creates a modified copy of the current solution and accepts it as the starting point for the next step.
Perform restarts of an algorithm that terminates too early.
The type variable for single- and multi-objective algorithms.
alias of TypeVar('T', moptipy.api.algorithm.Algorithm, moptipy.api.mo_algorithm.MOAlgorithm)
Perform restarts of an algorithm until the termination criterion is met.
Parameters: algorithm (TypeVar(T, Algorithm, MOAlgorithm)) – the algorithm
Return type: TypeVar(T, Algorithm, MOAlgorithm)
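The restart idea itself can be sketched as a thin wrapper around the inner algorithm; the class and method names below are purely illustrative and are not claimed to match the actual moptipy implementation:

    class RestartWrapper:
        """Illustrative only: restart an inner algorithm until the budget is spent."""

        def __init__(self, inner):
            self.inner = inner  # the algorithm that terminates too early

        def solve(self, process):
            # Invoke the inner algorithm again and again from scratch.
            # The Process keeps the best solution found over all restarts
            # and ends the loop once the termination criterion is met.
            while not process.should_terminate():
                self.inner.solve(process)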
The 1rs “algorithm” creates one single random solution.
The single random sample algorithm applies the nullary search operator, an implementation of op0(), to sample exactly one single random solution. It evaluates this solution by passing it to evaluate() and then terminates.
This is a very, very bad optimization algorithm. We only use it in our book to illustrate one basic concept for solving optimization problems: the generation of random solutions. The single random sampling algorithm here is actually very wasteful: since it generates exactly one solution, it does not use its computational budget well. Even if you grant it 10’000 years, it will still only generate one solution; even if it could generate and test thousands or millions of solutions, it will not do so. Nevertheless, after applying this “algorithm,” you will have one valid solution remembered in the optimization process (embodied as the instance process of Process).
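In pseudo-Python, and under the same interface assumptions as in the sketches above, the whole “algorithm” boils down to three lines (this is a sketch, not the library code):

    def single_random_sample_solve(process, op0):
        x = process.create()               # allocate a solution container
        op0.op0(process.get_random(), x)   # sample exactly one random solution
        process.evaluate(x)                # evaluate it; the Process remembers it
        # and that is all: no further solutions are generated, however large the budget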
This concept of random sampling is then refined in the RandomSampling class as the rs algorithm, which repeatedly generates random solutions until its allotted runtime is exhausted.
Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/
Bases: Algorithm0
An algorithm that creates one single random solution.