moptipy.algorithms.modules.selections package¶
The set of selection algorithms. Selection algorithms choose elements from a pool of records based on their fitness and on random numbers. They are commonly used in Evolutionary Algorithms (GeneralEA).
Currently, the following selection algorithms have been implemented:
- Best selection picks the n best solutions from an array. It is the survival selection scheme used in a (mu+lambda) EA.
- RandomWithoutReplacement randomly chooses n solutions without replacement. This is the mating selection scheme used in a (mu+lambda) EA.
- FitnessProportionateSUS performs fitness proportionate selection for minimization using Stochastic Universal Sampling. It also allows you to specify the selection probability for the worst element.
- TournamentWithReplacement performs tournament selection with a specified tournament size, with replacement.
- TournamentWithoutReplacement performs tournament selection with a specified tournament size, without replacement.
Submodules¶
moptipy.algorithms.modules.selections.best module¶
Choose the n best elements from the source array.
“Best” selection is the standard way to perform survival selection in a (mu+lambda) Evolutionary Algorithm (EA). It basically sorts the input array based on fitness and then copies the first n elements to the output array. The algorithm is not random but deterministic, i.e., it will always select each of the n best records exactly once.
This selection is also often called “mu+lambda,” “mu,lambda,” or “top-n” selection. It is similar to the selection scheme in Eshelman’s CHC algorithm. A randomized version of this algorithm is called “truncation selection” in the paper of Blickle and Thiele.
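The procedure can be summarized in a few lines of Python. The following is an illustrative sketch only, not the moptipy source: it assumes that each record exposes a fitness attribute to be minimized and that dest is a callable accepting one record at a time, following the select signature documented below.

```python
# Sketch of deterministic best selection (illustrative, not moptipy code).
def best_select(source, dest, n):
    # Sort ascending by fitness (moptipy minimizes) and pass each of the
    # n best records to the destination collector exactly once.
    for record in sorted(source, key=lambda r: r.fitness)[:n]:
        dest(record)
```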
Peter J. B. Hancock. An Empirical Comparison of Selection Methods in Evolutionary Algorithms. In Terence Claus Fogarty, editor, Selected Papers from the AISB Workshop on Evolutionary Computing (AISB EC’94), April 11-13, 1994, Leeds, UK, volume 865 of Lecture Notes in Computer Science, pages 80-94, Berlin/Heidelberg, Germany: Springer, ISBN: 978-3-540-58483-4. https://dx.doi.org/10.1007/3-540-58483-8_7. Conference organized by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).
Larry J. Eshelman. The CHC Adaptive Search Algorithm: How to Have Safe Search When Engaging in Nontraditional Genetic Recombination. In Gregory J. E. Rawlins, editor, Foundations of Genetic Algorithms, volume 1, 1991, pages 265-283. San Francisco, CA, USA: Morgan Kaufmann. https://doi.org/10.1016/B978-0-08-050684-5.50020-3
Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies: Similarities and Differences. In Hans-Paul Schwefel and Reinhard Männer, editors, Proceedings of the International Conference on Parallel Problem Solving from Nature (PPSN I), October 1-3, 1990, Dortmund, Germany, volume 496 of Lecture Notes in Computer Science, pages 455-469, Berlin/Heidelberg, Germany: Springer. ISBN: 978-3-540-54148-6. https://doi.org/10.1007/BFb0029787.
Uday Kumar Chakraborty and Kalyanmoy Deb and Mandira Chakraborty. Analysis of Selection Algorithms: A Markov Chain Approach. Evolutionary Computation, 4(2):133-167. Summer 1996. Cambridge, MA, USA: MIT Press. doi:10.1162/evco.1996.4.2.133. https://dl.acm.org/doi/pdf/10.1162/evco.1996.4.2.133
Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes used in Genetic Algorithms. Second edition, December 1995. TIK-Report 11 from the Eidgenössische Technische Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engineering and Networks Laboratory (TIK), Zürich, Switzerland. ftp://ftp.tik.ee.ethz.ch/pub/publications/TIK-Report11.ps
- class moptipy.algorithms.modules.selections.best.Best[source]¶
Bases: Selection
The best selection: select each of the best n elements once.
- select(source, dest, n, random)[source]¶
Perform deterministic best selection without replacement.
- Parameters:
  - source (list[FitnessRecord]) – the list with the records to select from
  - dest (Callable[[FitnessRecord], Any]) – the destination collector to invoke for each selected record
  - n (int) – the number of records to select
  - random (Generator) – the random number generator
- Return type:
  None
moptipy.algorithms.modules.selections.fitness_proportionate_sus module¶
Fitness Proportionate Selection with Stochastic Universal Sampling.
In Fitness Proportionate Selection, the chance of a solution to be selected is proportional to its fitness. This selection scheme is designed for maximization, whereas all optimization in moptipy is done as minimization. Therefore, some adjustments are necessary, which we will discuss later. Let us first introduce the idea of fitness proportionate selection for maximization.
This idea goes back to the original Genetic Algorithm by Holland. Let us say that there are N elements and the element at index i has fitness v(i). The probability to select this element is then P(i) = v(i) / [sum_j=0^(N-1) v(j)], i.e., the fitness of the element divided by the overall fitness sum. Let's say we have a population with the fitnesses 1, 2, 3, 4. The probability of each element of being selected is then its fitness divided by the overall fitness sum (here 1+2+3+4=10), i.e., we get the probabilities 1/10=0.1, 0.2, 0.3, 0.4. These nicely add up to 1.
This can be implemented as follows: First, we copy the fitness values of the individuals to an array a and get, in the above example, a = [1, 2, 3, 4]. Then, we turn this array into a cumulative sum, i.e., replace each element by the sum of itself and all previous elements. We get a = [1, 3, 6, 10]. The last value, 10, is the overall sum S of all fitnesses. Whenever we want to select an element, we draw a random number r uniformly distributed in [0, S). We perform a binary search to get the right-most insertion index i of r in a, i.e., the index i with a[i - 1] <= r < a[i]. If you draw r=0.5, then i=0; for r=1, you get i=1; and for r=9, you get i=3. The element at index i is then the one that has been selected.
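The cumulative-sum bookkeeping and the binary search can be sketched with numpy as follows; the numbers reproduce the running example (this is an illustration, not the moptipy implementation):

```python
import numpy as np

a = np.cumsum([1.0, 2.0, 3.0, 4.0])  # cumulative fitness sums: [1, 3, 6, 10]
s = a[-1]                            # overall fitness sum S = 10

# side="right" returns the right-most insertion index i with a[i-1] <= r < a[i].
assert np.searchsorted(a, 0.5, side="right") == 0
assert np.searchsorted(a, 1.0, side="right") == 1
assert np.searchsorted(a, 9.0, side="right") == 3

rng = np.random.default_rng()
r = rng.uniform(0.0, s)                       # draw r uniformly from [0, S)
i = int(np.searchsorted(a, r, side="right"))  # index of the selected element
```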
In the classical Roulette Wheel Selection as used in Holland's original Genetic Algorithm, we perform this sampling procedure (draw a random number r, find the index i where it belongs, and return the corresponding element) once for each of the n offspring we want to sample. An alternative is Stochastic Universal Sampling (SUS) by Baker. Here, the idea is that we only generate a single random number r(0) from within [0, S). The next number r(1) is not random, but r(1) = (r(0) + S/n) mod S, and in general r(i) = (r(i-1) + S/n) mod S, where mod denotes the modulo operation. In other words, after drawing the initial random sample, we take steps of equal length along the Roulette Wheel. Alternatively put, instead of a Roulette Wheel with a single choice point that we spin n times, we have a wheel with n choice points that we spin a single time. This avoids random jitter and also requires us to draw only a single random number. We implement fitness proportionate selection with Stochastic Universal Sampling here.
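Under these definitions, SUS can be sketched as follows (a minimal illustration assuming the cumulative fitness sums from above; not the moptipy implementation):

```python
import numpy as np

def sus_sample(cum_fitness, n, rng):
    """Return n indices chosen by Stochastic Universal Sampling."""
    s = float(cum_fitness[-1])  # total fitness S
    step = s / n                # distance between the n choice points
    r = rng.uniform(0.0, s)     # the only random draw: the first pointer
    indices = []
    for _ in range(n):
        indices.append(int(np.searchsorted(cum_fitness, r, side="right")))
        r = (r + step) % s      # advance to the next equidistant pointer
    return indices

# Example: draw 4 selections from the cumulative sums [1, 3, 6, 10].
print(sus_sample(np.array([1.0, 3.0, 6.0, 10.0]), 4, np.random.default_rng()))
```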
But this is for maximization of fitness, while we conduct minimization.
At first glance, we can turn our minimization problem into a maximization problem by simply subtracting each fitness value from the maximum fitness value max_{all i} v(i). However, this has two big downsides.
Let us take the same example as before, the fitnesses 1, 2, 3, 4. Under maximization, the fitness 4 was best, but now it is worst. It is also the maximum fitness. So let us emulate fitness proportionate selection for minimization by simply subtracting each fitness from the maximum (here: 4). We get the adjusted fitnesses 3, 2, 1, 0, the fitness sum 6, and the probabilities 1/2, 1/3, 1/6, 0, which still nicely add up to 1. However, now we have one element with a 0% chance of being selected, namely the one with the worst original fitness 4. This is strange, because under maximization, the worst element had fitness 1 and still had a non-zero probability of being selected. Furthermore, a second problem occurs if all elements are the same, say, 1, 1, 1, 1. We would then adjust them all to zero, have a zero fitness sum, and get 0/0 as selection probabilities. That is not permissible.
The second problem can easily be solved by simply defining that, if all N elements are identical, they should all get the same probability 1/N of being selected.
The first problem becomes clearer when we realize that under “normal” fitness proportionate selection for maximization, the reference point “0” is actually arbitrarily chosen and hard-coded. If we have 1, 2, 3, 4, 10 as fitnesses and transform them to the probabilities 0.05, 0.1, 0.15, 0.2, 0.5, we do so implicitly based on their “distance” to 0. If we added some offset to them, say 1, i.e., calculated with 2, 3, 4, 5, 11, we would get the fitness sum 25 and compute the probabilities 0.08, 0.12, 0.16, 0.2, and 0.44 instead. In other words, if we choose a different reference point, e.g., -1 instead of 0, we get different probabilities. And while 0 seems a natural choice as reference point, it is actually just arbitrary. The only actual condition on a reference point for maximization is that it must be less than or equal to the smallest occurring fitness.
If we do minimization instead of maximization, we do not have a “natural” reference point. The only condition on the reference point is that it must be greater than or equal to the largest occurring fitness. Choosing the maximum fitness value is just an arbitrary choice, and it results in the solution with this fitness getting zero chance to reproduce.
If we can choose an arbitrary reference point for minimization, how do we choose it? Our FitnessProportionateSUS has a parameter min_prob, which corresponds to the minimal selection probability that any element should have (of course, if we have N elements in the population, it must hold that 0 <= min_prob < 1/N). Based on this probability, we compute the offset of the fitness values. We do it as follows:
The idea is that, in maximization, we had P(i) = v(i) / [sum_j=0^(N-1) v(j)]. Now if v(i) = 0, we would get P(i) = 0 as well. But we want P(i) = min_prob, so we need to add an offset to each v(i). The probability of the element with v(i) = 0 then becomes P(i) = min_prob = offset / [sum_j=0^(N-1) (v(j) + offset)], which simplifies to min_prob = offset / [N * offset + sum_j=0^(N-1) v(j)]. Let us set S = sum_j=0^(N-1) v(j) to make this easier to read and we get min_prob = offset / (N * offset + S). Solving for offset gives us offset = S * (min_prob / (1.0 - (min_prob * N))). In other words, for any allowable minimum selection probability 0 <= min_prob < 1/N, we can compute an offset to add to each fitness value that will result in the worst solution having exactly this selection probability. The probabilities of the other solutions will be larger, roughly proportional to their adjusted fitness.
For minimization, we first compute the maximum (i.e., worst) fitness max_fitness and adjust each fitness by subtracting it from max_fitness. For an input array 1, 2, 3, 4 we thus get 3, 2, 1, 0. Let S be the sum of the adjusted fitnesses, so in the above example, S = 3 + 2 + 1 + 0 = 6. We can now compute the offset to be added to each adjusted fitness to achieve the goal probability distribution as follows: offset = S * (min_prob / (1.0 - (min_prob * N))). If we had chosen min_prob = 0, then offset = 0 and the probability for the worst element to be selected remains 0. If we choose min_prob = 0.01, then we get offset = 0.0625. The selection probability of the worst element, with original fitness 4 and adjusted fitness 0, becomes (0 + 0.0625) / (6 + (4 * 0.0625)) = 0.0625 / 6.25 = 0.01.
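The same computation as straight-line Python, reproducing the numbers above:

```python
fitnesses = [1.0, 2.0, 3.0, 4.0]                  # original (minimization)
n_recs = len(fitnesses)                           # N = 4
max_fitness = max(fitnesses)                      # worst fitness: 4
adjusted = [max_fitness - f for f in fitnesses]   # -> [3, 2, 1, 0]
s = sum(adjusted)                                 # S = 6

min_prob = 0.01
offset = s * (min_prob / (1.0 - (min_prob * n_recs)))  # -> 0.0625
weights = [a + offset for a in adjusted]  # -> [3.0625, 2.0625, 1.0625, 0.0625]
print(weights[-1] / sum(weights))         # worst element: exactly 0.01
```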
As a side-effect of this choice of dynamic offsetting, our fitness proportionate selection scheme becomes invariant under all translations of the objective function value. The original fitness proportionate selection schemes, regardless of being of the Roulette Wheel or Stochastic Universal Sampling variant, do not have this property (see, for instance, de la Maza and Tidor).
John Henry Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Ann Arbor, MI, USA: University of Michigan Press. 1975. ISBN: 0-472-08460-7
David Edward Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc. 1989. ISBN: 0-201-15767-5
James E. Baker. Reducing Bias and Inefficiency in the Selection Algorithm. In John J. Grefenstette, editor, Proceedings of the Second International Conference on Genetic Algorithms and their Application (ICGA’87), July 1987, Cambridge, MA, USA, pages 14-21. Hillsdale, NJ, USA: Lawrence Erlbaum Associates. ISBN: 0-8058-0158-8
Peter J. B. Hancock. An Empirical Comparison of Selection Methods in Evolutionary Algorithms. In Terence Claus Fogarty, editor, Selected Papers from the AISB Workshop on Evolutionary Computing (AISB EC’94), April 11-13, 1994, Leeds, UK, volume 865 of Lecture Notes in Computer Science, pages 80-94, Berlin/Heidelberg, Germany: Springer, ISBN: 978-3-540-58483-4. https://dx.doi.org/10.1007/3-540-58483-8_7. Conference organized by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).
Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes used in Genetic Algorithms. Second edition, December 1995. TIK-Report 11 from the Eidgenössische Technische Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engineering and Networks Laboratory (TIK), Zürich, Switzerland. ftp://ftp.tik.ee.ethz.ch/pub/publications/TIK-Report11.ps
Uday Kumar Chakraborty and Kalyanmoy Deb and Mandira Chakraborty. Analysis of Selection Algorithms: A Markov Chain Approach. Evolutionary Computation, 4(2):133-167. Summer 1996. Cambridge, MA, USA: MIT Press. doi:10.1162/evco.1996.4.2.133. https://dl.acm.org/doi/pdf/10.1162/evco.1996.4.2.133
Michael de la Maza and Bruce Tidor. An Analysis of Selection Procedures with Particular Attention Paid to Proportional and Boltzmann Selection. In Stephanie Forrest, editor, Proceedings of the Fifth International Conference on Genetic Algorithms (ICGA’93), July 17-21, 1993, Urbana-Champaign, IL, USA, pages 124-131. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. ISBN: 1-55860-299-2
- class moptipy.algorithms.modules.selections.fitness_proportionate_sus.FitnessProportionateSUS(min_prob=0.0)[source]¶
Bases: Selection
Fitness Proportionate Selection with Stochastic Universal Sampling.
- log_parameters_to(logger)[source]¶
Log the parameters of the algorithm to a logger.
- Parameters:
  logger (KeyValueLogSection) – the logger for the parameters
- Return type:
  None
- select(source, dest, n, random)[source]¶
Perform fitness proportionate selection with Stochastic Universal Sampling.
- Parameters:
  - source (list[FitnessRecord]) – the list with the records to select from
  - dest (Callable[[FitnessRecord], Any]) – the destination collector to invoke for each selected record
  - n (int) – the number of records to select
  - random (Generator) – the random number generator
- Return type:
  None
moptipy.algorithms.modules.selections.random_without_repl module¶
Select n records at random without replacement.
This is the standard mating selection scheme in an Evolutionary Algorithm.
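With numpy's Generator.choice, the scheme fits in a few lines; this is a sketch only, not necessarily how this module implements it:

```python
import numpy as np

def random_without_replacement(source, dest, n, random):
    # Draw n distinct indices uniformly at random and emit those records.
    for i in random.choice(len(source), n, replace=False):
        dest(source[int(i)])

# Example: select 2 of 4 records (plain strings stand in for records here).
random_without_replacement(list("abcd"), print, 2, np.random.default_rng())
```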
- class moptipy.algorithms.modules.selections.random_without_repl.RandomWithoutReplacement[source]¶
Bases: Selection
Select random elements without replacement.
- select(source, dest, n, random)[source]¶
Select n random elements from source without replacement.
- Parameters:
  - source (list[FitnessRecord]) – the list with the records to select from
  - dest (Callable[[FitnessRecord], Any]) – the destination collector to invoke for each selected record
  - n (int) – the number of records to select
  - random (Generator) – the random number generator
- Return type:
  None
moptipy.algorithms.modules.selections.tournament_with_repl module¶
Tournament selection with replacement in the tournaments.
For each slot in the destination, a tournament with size randomly chosen participating solutions is conducted. The solution with the best fitness wins and is copied to the destination. The solutions are drawn with replacement for each tournament, meaning that one solution may enter a given tournament several times. A solution may also be selected multiple times overall.
Tournament selection without replacement is implemented in moptipy.algorithms.modules.selections.tournament_without_repl.
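A compact sketch of this scheme, assuming records with a fitness attribute to be minimized and the select signature documented below (illustrative, not the moptipy source):

```python
import numpy as np

def tournament_with_replacement(source, dest, n, random, size=2):
    m = len(source)
    for _ in range(n):  # one tournament per slot in the destination
        # Contestants are drawn with replacement: the same record may
        # enter the same tournament several times.
        contestants = random.integers(0, m, size)
        # The record with the smallest (best) fitness wins the tournament.
        dest(min((source[i] for i in contestants), key=lambda r: r.fitness))
```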
Peter J. B. Hancock. An Empirical Comparison of Selection Methods in Evolutionary Algorithms. In Terence Claus Fogarty, editor, Selected Papers from the AISB Workshop on Evolutionary Computing (AISB EC’94), April 11-13, 1994, Leeds, UK, volume 865 of Lecture Notes in Computer Science, pages 80-94, Berlin/Heidelberg, Germany: Springer, ISBN: 978-3-540-58483-4. https://dx.doi.org/10.1007/3-540-58483-8_7. Conference organized by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).
Uday Kumar Chakraborty and Kalyanmoy Deb and Mandira Chakraborty. Analysis of Selection Algorithms: A Markov Chain Approach. Evolutionary Computation, 4(2):133-167. Summer 1996. Cambridge, MA, USA: MIT Press. doi:10.1162/evco.1996.4.2.133. https://dl.acm.org/doi/pdf/10.1162/evco.1996.4.2.133
Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes used in Genetic Algorithms. Second edition, December 1995. TIK-Report 11 from the Eidgenössische Technische Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engineering and Networks Laboratory (TIK), Zürich, Switzerland. ftp://ftp.tik.ee.ethz.ch/pub/publications/TIK-Report11.ps
Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection with Replacement using Apparent Added Noise. In Lee Spector, Erik D. Goodman, Annie Wu, William Benjamin Langdon, and Hans-Michael Voigt, eds., Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (GECCO’01), July 7-11, 2001, San Francisco, CA, USA, page 781. San Francisco, CA, United States: Morgan Kaufmann Publishers Inc. ISBN: 978-1-55860-774-3. https://dl.acm.org/doi/pdf/10.5555/2955239.2955378
- class moptipy.algorithms.modules.selections.tournament_with_repl.TournamentWithReplacement(size=2)[source]¶
Bases: Selection
Tournament selection with replacement in the tournaments.
- log_parameters_to(logger)[source]¶
Log the parameters of the algorithm to a logger.
- Parameters:
  logger (KeyValueLogSection) – the logger for the parameters
- Return type:
  None
- select(source, dest, n, random)[source]¶
Perform tournament selection with replacement.
- Parameters:
  - source (list[FitnessRecord]) – the list with the records to select from
  - dest (Callable[[FitnessRecord], Any]) – the destination collector to invoke for each selected record
  - n (int) – the number of records to select
  - random (Generator) – the random number generator
- Return type:
  None
moptipy.algorithms.modules.selections.tournament_without_repl module¶
Tournament selection without replacement in the tournaments.
For each slot in the destination, a tournament with size randomly chosen participating solutions is conducted. The solution with the best fitness wins and is copied to the destination.
The tournaments are without replacement, following the algorithm by Goldberg, Korb, and Deb. This means that a solution can only take part in another tournament after all other solutions have joined one. In other words, we are drawing solutions without replacement. As a consequence, if we have m solutions and want to select n of them by conducting tournaments of size s, each solution will take part in at least floor(s*n / m) tournaments and in at most ceil(s*n / m) of them.
We implement this drawing of unique random indices as a partial Fisher-Yates shuffle. The indices from which the tournament contestants are chosen are kept in an array forming a permutation. Initially, all unused = m indices are available. Whenever we draw a new index from the permutation, we pick it from a random position in 0..unused-1, swap it to position unused-1, and decrement unused by one. Once unused reaches 0, we reset it to m. This differs from the originally proposed implementation of tournament selection without replacement in that it does not first permute all the indices. It will be faster in cases where only a few small tournaments are conducted, e.g., during mating selection. It will be slower when many large tournaments are conducted, e.g., during survival selection.
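The partial Fisher-Yates drawing combined with the tournaments can be sketched as follows (again assuming records with a fitness attribute to be minimized; an illustration, not the moptipy source):

```python
def tournament_without_replacement(source, dest, n, random, size=2):
    m = len(source)
    perm = list(range(m))  # index permutation used for drawing contestants
    unused = m             # number of indices not yet drawn in this pass
    for _ in range(n):     # one tournament per slot in the destination
        best = None
        for _ in range(size):
            if unused == 0:  # every solution has joined a tournament:
                unused = m   # start a fresh pass over all indices
            j = int(random.integers(0, unused))  # random unused position
            idx = perm[j]
            # Swap the drawn index out of the unused region so that it
            # cannot be drawn again during this pass.
            perm[j], perm[unused - 1] = perm[unused - 1], perm[j]
            unused -= 1
            record = source[idx]
            if best is None or record.fitness < best.fitness:
                best = record
        dest(best)  # the tournament winner goes to the destination
```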
Tournament selection with replacement is implemented in moptipy.algorithms.modules.selections.tournament_with_repl.
David Edward Goldberg and Bradley Korb and Kalyanmoy Deb. Messy Genetic Algorithms: Motivation, Analysis, and First Results. Complex Systems 3(5):493-530. 1989. https://wpmedia.wolfram.com/uploads/sites/13/2018/02/03-5-5.pdf
Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection with Replacement using Apparent Added Noise. In Lee Spector, Erik D. Goodman, Annie Wu, William Benjamin Langdon, and Hans-Michael Voigt, eds., Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (GECCO’01), July 7-11, 2001, San Francisco, CA, USA, page 781. San Francisco, CA, United States: Morgan Kaufmann Publishers Inc. ISBN: 978-1-55860-774-3. https://dl.acm.org/doi/pdf/10.5555/2955239.2955378
Peter J. B. Hancock. An Empirical Comparison of Selection Methods in Evolutionary Algorithms. In Terence Claus Fogarty, editor, Selected Papers from the AISB Workshop on Evolutionary Computing (AISB EC’94), April 11-13, 1994, Leeds, UK, volume 865 of Lecture Notes in Computer Science, pages 80-94, Berlin/Heidelberg, Germany: Springer, ISBN: 978-3-540-58483-4. https://dx.doi.org/10.1007/3-540-58483-8_7. Conference organized by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).
Uday Kumar Chakraborty and Kalyanmoy Deb and Mandira Chakraborty. Analysis of Selection Algorithms: A Markov Chain Approach. Evolutionary Computation, 4(2):133-167. Summer 1996. Cambridge, MA, USA: MIT Press. doi:10.1162/evco.1996.4.2.133. https://dl.acm.org/doi/pdf/10.1162/evco.1996.4.2.133
Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes used in Genetic Algorithms. Second edition, December 1995. TIK-Report 11 from the Eidgenössische Technische Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engineering and Networks Laboratory (TIK), Zürich, Switzerland. ftp://ftp.tik.ee.ethz.ch/pub/publications/TIK-Report11.ps
Sir Ronald Aylmer Fisher and Frank Yates. Statistical Tables for Biological, Agricultural and Medical Research. Sixth Edition, March 1963. London, UK: Oliver & Boyd. ISBN: 0-02-844720-4. https://digital.library.adelaide.edu.au/dspace/bitstream/2440/10701/1/stat_tab.pdf
- class moptipy.algorithms.modules.selections.tournament_without_repl.TournamentWithoutReplacement(size=2)[source]¶
Bases: Selection
Tournament selection without replacement in the tournaments.
- log_parameters_to(logger)[source]¶
Log the parameters of the algorithm to a logger.
- Parameters:
  logger (KeyValueLogSection) – the logger for the parameters
- Return type:
  None
- select(source, dest, n, random)[source]¶
Perform tournament selection without replacement.
- Parameters:
  - source (list[FitnessRecord]) – the list with the records to select from
  - dest (Callable[[FitnessRecord], Any]) – the destination collector to invoke for each selected record
  - n (int) – the number of records to select
  - random (Generator) – the random number generator
- Return type:
  None