Bases: Op0
Shuffle permutations randomly.
An operator deleting an element from a permutation and inserting it elsewhere.
The operator is similar to a combination of the rol and ror operators in [1]. If the permutation represents, e.g., a tour in the Traveling Salesperson Problem (TSP), then such a rotation replaces two edges (if two neighboring elements are rotated, i.e., swapped) or three edges (if at least three elements take part in the rotation). It may therefore be considered as a possible 3-opt move on the TSP.
An alternative way to look at this is given in [2, 3, 4], where the operator is called “Insertion Mutation”: An element of the permutation is removed from its current position (i1) and inserted at another position (i2). The operator is therefore called “position based mutation operator” in [5].
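To make the insertion move concrete, here is a small self-contained sketch. It is illustrative only: the function name insert_element and its index convention are hypothetical and may differ from the rotation-based implementation documented below.

import numpy as np

def insert_element(perm: np.ndarray, i1: int, i2: int) -> np.ndarray:
    """Remove the element at index i1 and re-insert it at index i2."""
    as_list = list(perm)
    value = as_list.pop(i1)    # delete the element from its current position
    as_list.insert(i2, value)  # insert it again at the target position
    return np.array(as_list)

print(insert_element(np.array([0, 1, 2, 3, 4, 5]), 1, 4))  # [0 2 3 4 1 5]

Interpreted as a TSP tour, such a move replaces at most three edges, which is why it can be seen as a special 3-opt move.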
Thomas Weise, Raymond Chiong, Ke Tang, Jörg Lässig, Shigeyoshi Tsutsui, Wenxiang Chen, Zbigniew Michalewicz, and Xin Yao. Benchmarking Optimization Algorithms: An Open Source Framework for the Traveling Salesman Problem. IEEE Computational Intelligence Magazine (CIM) 9(3):40-52, August 2014. https://doi.org/10.1109/MCI.2014.2326101
Pedro Larrañaga, Cindy M. H. Kuijpers, Roberto H. Murga, I. Inza, and S. Dizdarevic. Genetic Algorithms for the Travelling Salesman Problem: A Review of Representations and Operators. Artificial Intelligence Review, 13(2):129-170, April 1999. Kluwer Academic Publishers, The Netherlands. https://doi.org/10.1023/A:1006529012972
David B. Fogel. An Evolutionary Approach to the Traveling Salesman Problem. Biological Cybernetics 60(2):139-144, December 1988. https://doi.org/10.1007/BF00202901
Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs, Berlin, Germany: Springer-Verlag GmbH. 1996. ISBN:3-540-58090-5. https://doi.org/10.1007/978-3-662-03315-9
Gilbert Syswerda. Schedule Optimization Using Genetic Algorithms. In Lawrence Davis, (ed.), Handbook of Genetic Algorithms, pages 332-349. 1991. New York, NY, USA: Van Nostrand Reinhold.
Yuezhong Wu, Thomas Weise, and Raymond Chiong. Local Search for the Traveling Salesman Problem: A Comparative Study. In Proceedings of the 14th IEEE Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC’15), July 6-8, 2015, Beijing, China, pages 213-220, Los Alamitos, CA, USA: IEEE Computer Society Press, ISBN: 978-1-4673-7289-3. https://doi.org/10.1109/ICCI-CC.2015.7259388
Dan Xu, Thomas Weise, Yuezhong Wu, Jörg Lässig, and Raymond Chiong. An Investigation of Hybrid Tabu Search for the Traveling Salesman Problem. In Proceedings of the 10th International Conference on Bio-Inspired Computing — Theories and Applications (BIC-TA’15), September 25-28, 2015, Hefei, Anhui, China, volume 562 of Communications in Computer and Information Science. Berlin/Heidelberg: Springer-Verlag, pages 523-537, ISBN 978-3-662-49013-6. https://doi.org/10.1007/978-3-662-49014-3_47
Weichen Liu, Thomas Weise, Yuezhong Wu, and Raymond Chiong. Hybrid Ejection Chain Methods for the Traveling Salesman Problem. In Proceedings of the 10th International Conference on Bio-Inspired Computing — Theories and Applications (BIC-TA’15), September 25-28, 2015, Hefei, Anhui, China, volume 562 of Communications in Computer and Information Science. Berlin/Heidelberg: Springer-Verlag, pages 268-282, ISBN 978-3-662-49013-6. https://doi.org/10.1007/978-3-662-49014-3_25
Yuezhong Wu, Thomas Weise, and Weichen Liu. Hybridizing Different Local Search Algorithms with Each Other and Evolutionary Computation: Better Performance on the Traveling Salesman Problem. In Proceedings of the 18th Genetic and Evolutionary Computation Conference (GECCO’16), July 20-24, 2016, Denver, CO, USA, pages 57-58, New York, NY, USA: ACM. ISBN: 978-1-4503-4323-7. https://doi.org/10.1145/2908961.2909001
Bases: Op1
Delete an element from a permutation and insert it elsewhere.
This operation keeps drawing two random numbers, i1 and i2. If i1 < i2, then the permutation is rotated one step to the left between i1 and i2. If i1 > i2, then it is rotated one step to the right between i1 and i2. If the permutation has changed, the process stops. This corresponds to extracting the element at index i1 and re-inserting it at index i2. This operator also works for permutations with repetitions.
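A minimal sketch of this retry loop, assuming the try_single_rotate function documented below is in scope (it returns True if and only if the array did not change):

import numpy as np

def insert1_sketch(random: np.random.Generator, dest: np.ndarray, x: np.ndarray) -> None:
    """Keep drawing index pairs until a single rotation actually changes dest."""
    np.copyto(dest, x)
    n = len(dest)
    while True:
        i1 = int(random.integers(n))
        i2 = int(random.integers(n))
        # try_single_rotate returns False as soon as dest was modified
        if not try_single_rotate(dest, i1, i2):
            return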
Copy x into dest and then rotate a subsequence by one step.
>>> rand = np.random.default_rng(10)
>>> xx = np.array(range(10), int)
>>> out = np.empty(len(xx), int)
>>> rotate(rand, out, xx)
>>> print(out)
[0 1 2 3 4 5 6 8 9 7]
>>> rotate(rand, out, xx)
>>> print(out)
[0 1 2 3 4 5 6 8 7 9]
>>> rotate(rand, out, xx)
>>> print(out)
[0 5 1 2 3 4 6 7 8 9]
>>> rotate(rand, out, xx)
>>> print(out)
[0 1 2 3 4 8 5 6 7 9]
Rotate a portion of an array to the left or right in place.
If i1 < i2, then a left rotation by one step is performed. In other words, the element at index i1 + 1 goes to index i1, the element at index i1 + 2 goes to index i1 + 1, and so on. The last element, i.e., the one at index i2, goes to index i2 - 1. Finally, the element that originally was at index i1 goes to index i2. If any element in the array has changed, this function returns False, otherwise True.
If i1 > i2, then a right rotation by one step is performed. In other words, the element at index i1 - 1 goes to index i1, the element at index i1 - 2 goes to index i1 - 1, and so on. The element at index i2 goes to index i2 + 1. Finally, the element that originally was at index i1 goes to index i2. If any element in the array has changed, this function returns False, otherwise True.
This corresponds to extracting the element at index i1 and re-inserting it at index i2.
Returns: whether the array was unchanged
False: if the array arr is now different from before
True: if the array arr has not changed
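For illustration, a straightforward re-implementation of this behavior could look as follows; this is a sketch only, and the doctests below exercise the actual try_single_rotate:

import numpy as np

def try_single_rotate_sketch(arr: np.ndarray, i1: int, i2: int) -> bool:
    """Rotate arr by one step between i1 and i2 in place; True if unchanged."""
    if i1 == i2:
        return True                             # nothing to do
    lo, hi = min(i1, i2), max(i1, i2)
    before = arr[lo:hi + 1].copy()              # remember the affected slice
    moved = arr[i1]                             # the element to re-insert at i2
    if i1 < i2:                                 # left rotation by one step
        arr[i1:i2] = arr[i1 + 1:i2 + 1].copy()
    else:                                       # right rotation by one step
        arr[i2 + 1:i1 + 1] = arr[i2:i1].copy()
    arr[i2] = moved
    # True if nothing changed, which can happen for permutations with repetitions
    return bool(np.array_equal(before, arr[lo:hi + 1]))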
>>> import numpy as np
>>> dest = np.array(range(10))
>>> print(dest)
[0 1 2 3 4 5 6 7 8 9]
>>> try_single_rotate(dest, 3, 4)
False
>>> print(dest)
[0 1 2 4 3 5 6 7 8 9]
>>> try_single_rotate(dest, 3, 4)
False
>>> print(dest)
[0 1 2 3 4 5 6 7 8 9]
>>> try_single_rotate(dest, 4, 3)
False
>>> print(dest)
[0 1 2 4 3 5 6 7 8 9]
>>> try_single_rotate(dest, 4, 3)
False
>>> print(dest)
[0 1 2 3 4 5 6 7 8 9]
>>> try_single_rotate(dest, 3, 6)
False
>>> print(dest)
[0 1 2 4 5 6 3 7 8 9]
>>> try_single_rotate(dest, 6, 3)
False
>>> print(dest)
[0 1 2 3 4 5 6 7 8 9]
>>> try_single_rotate(dest, 0, len(dest) - 1)
False
>>> print(dest)
[1 2 3 4 5 6 7 8 9 0]
>>> try_single_rotate(dest, len(dest) - 1, 0)
False
>>> print(dest)
[0 1 2 3 4 5 6 7 8 9]
>>> try_single_rotate(dest, 7, 7)
True
>>> dest = np.array([0, 1, 2, 3, 3, 3, 3, 3, 8, 9])
>>> try_single_rotate(dest, 7, 7)
True
>>> try_single_rotate(dest, 4, 6)
True
>>> print(dest)
[0 1 2 3 3 3 3 3 8 9]
>>> try_single_rotate(dest, 6, 4)
True
>>> print(dest)
[0 1 2 3 3 3 3 3 8 9]
>>> try_single_rotate(dest, 4, 7)
True
>>> print(dest)
[0 1 2 3 3 3 3 3 8 9]
>>> try_single_rotate(dest, 6, 7)
True
>>> print(dest)
[0 1 2 3 3 3 3 3 8 9]
>>> try_single_rotate(dest, 4, 8)
False
>>> print(dest)
[0 1 2 3 3 3 3 8 3 9]
>>> try_single_rotate(dest, 8, 4)
False
>>> print(dest)
[0 1 2 3 3 3 3 3 8 9]
>>> try_single_rotate(dest, 9, 4)
False
>>> print(dest)
[0 1 2 3 9 3 3 3 3 8]
>>> try_single_rotate(dest, 4, 9)
False
>>> print(dest)
[0 1 2 3 3 3 3 3 8 9]
An operator swapping two elements in a permutation.
This is a unary search operator which first copies the source string x to the destination string dest. Then it draws an index i1 randomly. It keeps drawing a second random index i2 until dest[i1] != dest[i2], i.e., until the elements at the two indices are different. This will always be true for actual permutations if i1 != i2, but for permutations with repetitions, even if i1 != i2, sometimes dest[i1] == dest[i2]. As soon as the elements at i1 and i2 are different, they are swapped.
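A compact sketch of this procedure (illustrative; the documented swap_2 below is the actual implementation, and the name swap_2_sketch is made up here):

import numpy as np

def swap_2_sketch(random: np.random.Generator, dest: np.ndarray, x: np.ndarray) -> None:
    """Swap two positions of dest that hold different values."""
    np.copyto(dest, x)                   # assumes x holds at least two distinct values
    n = len(dest)
    i1 = int(random.integers(n))
    v1 = dest[i1]
    while True:
        i2 = int(random.integers(n))
        v2 = dest[i2]
        if v1 != v2:                     # keep drawing until the values differ
            break
    dest[i1], dest[i2] = v2, v1          # perform the swap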
This operator is very well known for permutations and has been used by many researchers, e.g., for the Traveling Salesperson Problem (TSP). In the papers by Larrañaga et al. (1999) and Banzhaf (1990), it is called “Exchange Mutation.” It is also referred to as the “swap mutation” (Oliver et al. 1987), the point mutation operator (Ambati et al. 1991), the reciprocal exchange mutation operator (Michalewicz 1992), or the order-based mutation operator by Syswerda (1991).
Pedro Larrañaga, Cindy M. H. Kuijpers, Roberto H. Murga, I. Inza, and S. Dizdarevic. Genetic Algorithms for the Travelling Salesman Problem: A Review of Representations and Operators. Artificial Intelligence Review, 13(2):129-170, April 1999. Kluwer Academic Publishers, The Netherlands. https://doi.org/10.1023/A:1006529012972
Wolfgang Banzhaf. The “Molecular” Traveling Salesman. Biological Cybernetics, 64(1):7-14, November 1990, https://doi.org/10.1007/BF00203625
I.M. Oliver, D.J. Smith, and J.R.C. Holland. A Study of Permutation Crossover Operators on the Traveling Salesman Problem. In Proceedings of the Second International Conference on Genetic Algorithms and their Application (ICGA’87), October 1987, pages 224-230, https://dl.acm.org/doi/10.5555/42512.42542
Balamurali Krishna Ambati, Jayakrishna Ambati, and Mazen Moein Mokhtar. Heuristic Combinatorial Optimization by Simulated Darwinian Evolution: A Polynomial Time Algorithm for the Traveling Salesman Problem. Biological Cybernetics, 65(1):31-35, May 1991, https://doi.org/10.1007/BF00197287
Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs, Berlin, Germany: Springer-Verlag GmbH. 1996. ISBN:3-540-58090-5. https://doi.org/10.1007/978-3-662-03315-9
Gilbert Syswerda. Schedule Optimization Using Genetic Algorithms. In Lawrence Davis, (ed.), Handbook of Genetic Algorithms, pages 332-349. 1991. New York, NY, USA: Van Nostrand Reinhold.
Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/
This operator performs one swap. It is similar to Op1SwapN, which performs a random number of swaps.
Bases: Op1
A unary search operation that swaps two (different) elements.
In other words, it performs exactly one swap on a permutation. It spans a neighborhood of a rather limited size but is easy and fast.
Copy x into dest and swap two different values in dest.
>>> rand = np.random.default_rng(10)
>>> xx = np.array(range(10), int)
>>> out = np.empty(len(xx), int)
>>> swap_2(rand, out, xx)
>>> print(out)
[0 1 2 3 4 5 6 9 8 7]
>>> sum(out != xx)
2
>>> swap_2(rand, out, xx)
>>> print(out)
[0 1 7 3 4 5 6 2 8 9]
>>> sum(out != xx)
2
>>> swap_2(rand, out, xx)
>>> print(out)
[0 1 2 3 4 8 6 7 5 9]
>>> sum(out != xx)
2
An operator trying to change exactly n elements in a permutation.
This is an operator with a step size (moptipy.api.operators.Op1WithStepSize), and the step size determines how many elements of a permutation should be changed. If you are working on permutations without repetitions, the operator in op1_swap_try_n is the better and faster choice. On normal permutations, it will be equivalent to op1_swap_exactly_n, but faster. On permutations with repetitions, it will still be faster but enforces the number of swaps less precisely.
Let's say we have a “normal” permutation where each element occurs once. The permutation has length n; let's say that its blueprint is [0, 1, 2, 3, 4, …, n-1]. Then, with one application of the operator, we can change no fewer than two elements (step_size=0.0) and no more than n (step_size=1.0). Clearly, we cannot change only one element, because what different value could we put there? We would violate the “permutation nature.” Obviously, we also cannot change more than n elements.
The operator then computes m by using exponential_step_size(). This ensures that m=2 for step_size=0.0 and that m is maximal for step_size=1.0. In between these two, it interpolates exponentially. This means that small values of step_size will lead to few swap moves, regardless of the length of the permutation, and many swaps are performed only for step_size values close to 1. The operator then picks m random indices and permutes the elements at them such that none remains at its current position. That is rather easy to do. Unfortunately, Permutations also allows permutations with repetitions. Here, things get dodgy.
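Before turning to permutations with repetitions, a note on the step-size mapping above: the exact formula of exponential_step_size() is not reproduced here, but an exponential interpolation that matches the stated endpoints (2 at step_size=0.0, the maximum mx at step_size=1.0) could, purely as an illustration, look like this:

def exponential_step_size_sketch(step_size: float, mx: int) -> int:
    """Map step_size in [0, 1] exponentially onto a move size in [2, mx]."""
    return int(round(2.0 * (mx / 2.0) ** step_size))

print(exponential_step_size_sketch(0.0, 100))  # 2
print(exponential_step_size_sketch(0.5, 100))  # 14
print(exponential_step_size_sketch(1.0, 100))  # 100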
Let’s say we have the blueprint [0, 1, 1, 1, 1]. Then we can only change exactly two elements.
>>> print(get_max_changes([0, 1, 1, 1, 1]))
2
If we have the blueprint [0, 0, 1, 1, 1], then we can change two elements. We can also change four elements. But there is no way to change three elements!
>>> print(get_max_changes([0, 0, 1, 1, 1]))
4
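This claim is easy to verify by brute force: enumerate all rearrangements of the blueprint and collect how many positions differ from the original (a quick check, not part of the library):

from itertools import permutations

blueprint = (0, 0, 1, 1, 1)
possible = {sum(a != b for a, b in zip(blueprint, other))
            for other in set(permutations(blueprint))}
print(sorted(possible))  # [0, 2, 4] -- changing exactly 3 positions is impossible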
So in the general case, we determine the maximum number mx of positions that can be changed with one move via the function get_max_changes(). We can still translate a step_size to a number m by computing m = 2 + int(round(step_size * (mx - 2))). However, it is not clear whether all moves from m=2 to m=mx are actually possible.
And even if they are possible, they might be very hard to sample randomly. Some moves touching a large number m of positions may be restricted to swapping the elements at very specific indices in a very specific order. Finding such a move by chance may be rather unlikely.
Indeed, a search operator should do random moves. I did not find a way to construct moves according to a policy which is both random and can yield any move. So I divide the operator into two steps:
First, the function find_move() tries to find a move for a given m in a random fashion. This function tries to find a sequence of indices for a cyclic swap in which each element differs from both its successor and its predecessor. It builds such index sequences iteratively. This may fail: as in the example above, for some values of m it might just not be possible to construct a suitable sequence, either because none exists or because building it randomly has too low a chance of success. Hence, the function tries to build at most max_trials sequences. While building a sequence, it also remembers the longest-so-far cyclic move. If that one is shorter than m but all max_trials trials are exhausted, it returns this sequence instead. In summary, this function tries to find the longest cyclic swap that is not longer than m and returns it; the result may be shorter than m. If we deal with permutations where each value occurs only once, this function is guaranteed to find a sequence of length m in the first trial, i.e., it does not waste runtime.
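One random attempt at growing such an index sequence could look like the following rough sketch; the helper name one_cyclic_move_attempt is hypothetical, and the real find_move() additionally keeps the longest sequence found over max_trials attempts:

import numpy as np

def one_cyclic_move_attempt(x: np.ndarray, m: int,
                            random: np.random.Generator) -> list:
    """Greedily grow a cyclic move of length at most m; it may come up short."""
    order = list(random.permutation(len(x)))   # candidate indices in random order
    seq = [order.pop()]                        # start the cycle at some index
    for idx in order:
        if len(seq) >= m:
            break
        if x[idx] != x[seq[-1]]:               # value must differ from its predecessor
            seq.append(idx)
    # close the cycle: the last value must also differ from the first one
    while len(seq) > 1 and x[seq[-1]] == x[seq[0]]:
        seq.pop()
    return seq if len(seq) >= 2 else []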
Once a move has been found, the function apply_move() applies it. As said, find_move() discovers cyclic changes that are reasonably random. However, cyclic changes are not the only possible moves of length m. For example, if we have the permutation blueprint [0, 1, 2, 3, 4], a move of length 4 could be to exchange 0 with 1 and to swap 2 with 3. find_move() cannot find this move, but it could find a cyclic swap of 0 to 1, 1 to 2, 2 to 3, and 3 to 0. So it could find the right indices for such a move, just restricted to the cyclic swapping. What apply_move() therefore tries to do is to permute the indices discovered by find_move() randomly and check whether this would still yield a feasible move changing exactly m locations. Any shuffling of the elements at the selected positions which avoids putting the original values back into their original positions would do. Sometimes, most such shuffles work out, e.g., if we work on the space of permutations where each element occurs once. Sometimes, the only sequence that works is indeed the cyclic move, and shuffling cannot work. So apply_move() again has a limit for the maximum number of attempts to find a shuffle that works out. If it finds one, it applies it as the move. If it does not find one and the trials are exhausted, it randomly chooses whether to apply the cyclic move as a cycle to the left or as a cycle to the right. Either way, it will change exactly m positions of the permutation, as prescribed by the move. Cycling one step in either direction will always work, since each element is different from both its predecessor and its successor. (Cycling more than one step (but fewer than m) could potentially fail in permutations with repetitions, because there is no guarantee that any element is different from its successor's successor.)
Now the overall operator just plugs these two functions together. It also adds one slight improvement: if we demand to change a number q of locations for the first time and find_move() fails to find a move of length q but instead offers one of length p < q, then the operator remembers this. The next time we ask to change q positions, it will directly try to change only p. This memory is reset in initialize().
A similar but much more light-weight and faster operator is given in op1_swap_try_n. That operator also tries to perform a given number of swaps, but puts in much less effort to achieve this goal, i.e., it will only perform a single attempt.
Bases: Op1WithStepSize
A unary search operator that swaps n (different) elements.
Initialize this operator.
Log the parameters of this operator to the given logger.
logger (KeyValueLogSection) – the logger for the parameters
Final[int]: the maximum number of attempts to apply a random permutation of the move before giving up and applying it as a cyclic swap
Final[int]: the maximum number of possible changes
Final[int]: the maximum number of attempts to find a move with the exact step size
Copy x into dest and then swap several different values.
Apply a given move copying from source x to dest.
move must be the return value of find_move().
>>> import numpy as npx
>>> from numpy.random import default_rng
>>> rand = default_rng(12)
>>> xx = npx.array([0, 1, 2, 3, 3, 3, 3, 3, 3], int)
>>> dd = npx.empty(len(xx), int)
>>> mv = npx.array([5, 2, 8, 1, 4, 0], int)
>>> print(len(mv))
6
>>> apply_move(xx, dd, mv, rand, 0)
>>> print(dd)
[3 3 3 3 0 2 3 3 1]
>>> print(sum(dd != xx))
6
>>> apply_move(xx, dd, mv, rand, 1000)
>>> print(dd)
[3 3 3 3 2 1 3 3 0]
>>> print(sum(dd != xx))
6
>>> xx = npx.array([1, 3, 5, 9, 4, 2, 11, 7], int)
>>> dd = npx.empty(len(xx), int)
>>> mv = npx.array([4, 0, 5, 3, 1, 7, 2, 6], int)
>>> print(len(mv))
8
>>> apply_move(xx, dd, mv, rand, 0)
>>> print(dd)
[ 4 9 7 2 11 1 5 3]
>>> print(sum(dd != xx))
8
>>> xx = npx.array([1, 3, 5, 9, 4, 2, 11, 7], int)
>>> apply_move(xx, dd, mv, rand, 10)
>>> print(dd)
[ 5 4 2 11 7 3 9 1]
>>> print(sum(dd != xx))
8
Find a move of at most step_size changes and store it in indices.
For any pure permutation x, we can basically find any move between step_size=2 and len(x). If we have permutations with repetitions, the upper limit becomes smaller, because it may not be possible to change all elements at once. For example, in [0, 0, 1, 1, 1, 1, 1], we can change at most four elements. For this purpose, get_max_changes() exists. It tells us the upper limit of changes, which would be four here:
>>> print(get_max_changes([0, 0, 1, 1, 1, 1, 1]))
4
Yet, this does not necessarily mean that we can make all moves including 2, 3, and 4 changes. If you look at the above example, you may find that it is impossible to change exactly 3 locations. We can actually only do step sizes of 2 and 4, but not 3, 5, 6, or 7. In reality, even if it were possible to make a big move, it might be very unlikely to find it in a random fashion. If you look at the code of get_max_changes(), it tries to find the longest possible cyclic move by working through the elements of the permutation in a very specific order. Using a different (or random) order would yield shorter moves.
Our method find_move() therefore tries to find the longest possible cyclic swap involving at most step_size indices. It may find such a move that uses exactly step_size indices. But it may also find a shorter move, because either the perfect complete feasible move touching step_size indices does not exist, or because finding it in a random fashion may take too long. (For this purpose, max_trials exists to limit the search for the right move.)
For normal permutations, it will find the move of exactly the right length. For permutations with repetitions, it has a decent chance to find a move of the length step_size if a) such a move exists and b) step_size is not too big. Otherwise, it should find a shorter but still reasonably large move.
x (ndarray) – the permutation to be modified in the end
indices (ndarray) – the set of indices, which must be the canonical permutation of 0…len(x)-1
step_size (int) – the step size, i.e., the number of elements to change
random (Generator) – the random number generator
max_trials (int) – the maximum number of trials
temp (ndarray) – a temporary array
Returns: the discovered move
>>> import numpy as npx
>>> from numpy.random import default_rng
>>> gen = default_rng(12)
>>> perm = npx.array([0, 1, 2, 3, 3, 3, 3, 3, 3], int)
>>> use_indices = npx.array(range(len(perm)), int)
>>> want_size = len(perm)
>>> print(want_size)
9
>>> print(get_max_changes(perm))
6
>>> use_temp = npx.empty(len(perm), int)
>>> res = find_move(perm, use_indices, want_size, gen, 1000, use_temp)
>>> print(res)
[5 2 8 1 4 0]
>>> perm2 = perm.copy()
>>> perm2[res] = perm2[npx.roll(res, 1)]
>>> print(f"{perm} vs. {perm2}")
[0 1 2 3 3 3 3 3 3] vs. [3 3 3 3 1 0 3 3 2]
>>> print(sum(perm != perm2))
6
>>> perm = npx.array([1, 3, 5, 9, 4, 2, 11, 7], int)
>>> want_size = len(perm)
>>> print(want_size)
8
>>> use_indices = npx.array(range(len(perm)), int)
>>> use_temp = npx.empty(len(perm), int)
>>> res = find_move(perm, use_indices, want_size, gen, 1, use_temp)
>>> print(res)
[4 0 5 3 1 7 2 6]
>>> print(len(res))
8
>>> perm2 = perm.copy()
>>> perm2[res] = perm2[npx.roll(res, 1)]
>>> print(f"{perm} vs. {perm2}")
[ 1 3 5 9 4 2 11 7] vs. [ 4 9 7 2 11 1 5 3]
>>> print(sum(perm != perm2))
8
Get the maximum number of changes possible for a given permutation.
Returns: the maximum number of changes possible
>>> get_max_changes("1")
0
>>> get_max_changes("11")
0
>>> get_max_changes("12")
2
>>> get_max_changes("123")
3
>>> get_max_changes("1233")
4
>>> get_max_changes("12333")
4
>>> get_max_changes("1234")
4
>>> get_max_changes("12344")
5
>>> get_max_changes("123444")
6
>>> get_max_changes("1234444")
6
>>> get_max_changes("12334444")
8
>>> get_max_changes("123344445")
9
>>> get_max_changes("1233444455")
10
>>> get_max_changes("12334444555")
11
>>> get_max_changes("123344445555")
12
>>> get_max_changes("1233444455555")
13
>>> get_max_changes("12334444555555")
14
>>> get_max_changes("123344445555555")
15
>>> get_max_changes("1233444455555555")
16
>>> get_max_changes("12334444555555555")
16
>>> get_max_changes("112233")
6
>>> get_max_changes("11223344")
8
>>> get_max_changes("112233445")
9
>>> get_max_changes("1122334455")
10
>>> get_max_changes("11223344555")
11
>>> get_max_changes("112233445555555")
15
>>> get_max_changes("1122334455555555")
16
>>> get_max_changes("11223344555555555")
16
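All of the doctests above are consistent with a simple counting rule: if the most frequent value occupies at most half of the positions, every position can change; otherwise each of the remaining elements can at best trade places with one instance of the dominant value. A sketch of that rule (not necessarily how the library computes it):

from collections import Counter

def max_changes_sketch(perm) -> int:
    """Upper bound on the number of positions changeable by a single move."""
    n = len(perm)
    if n == 0:
        return 0
    c_max = max(Counter(perm).values())   # multiplicity of the most frequent value
    return n if 2 * c_max <= n else 2 * (n - c_max)

print(max_changes_sketch("12334444555555555"))  # 16, as in the doctest above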
An operator trying to swap a given number of elements in a permutation.
This operator is similar to op1_swap_exactly_n in that it tries to swap a fixed number of elements in a permutation. It implements the Op1WithStepSize interface. Different from op1_swap_exactly_n, however, it does so less strictly: it applies a simple best-effort approach, and if that does not work out, then so be it. It is therefore faster, but adheres less strictly to the given step_size.
This operator will always swap the right number of elements on normal permutations. On permutations with repetitions, it enforces the number of swaps less strongly compared to op1_swap_exactly_n, but it will be faster either way.
Bases: Op1WithStepSize
An operator trying to swap a given number of elements.
Initialize this operator.
Copy x into dest and then swap several different values.
Copy x into dest and then swap several different values.
>>> xx = np.array(range(10), int)
>>> out = np.empty(len(xx), xx.dtype)
>>> idxs = np.array(range(len(xx)), int)
>>> gen = np.random.default_rng(10)
>>> swap_try_n(gen, out, xx, 0.1, idxs)
>>> print(out)
[0 1 2 3 4 5 6 8 7 9]
>>> swap_try_n(gen, out, xx, 0.1, idxs)
>>> print(out)
[0 2 1 3 4 5 6 7 8 9]
>>> swap_try_n(gen, out, xx, 1.0, idxs)
>>> print(out)
[3 7 4 5 8 6 9 0 1 2]
An operator swapping several elements in a permutation.
This operator works similarly to Op1Swap2, but instead of swapping two elements (i.e., performing one swap), it will perform a random number of swaps.
First, the operator copies the source string x into the destination dest. It then chooses a random index i1. It remembers the element at that index in two variables, last and first. (last will always hold the value of the element that will be overwritten next, while first never changes and remembers the first overwritten element, i.e., the value that we need to store back into the array in the end.) We also initialize a variable continue_after with True. This variable always tells us whether we need to continue and draw another random index.
We then begin with the main loop, which is iterated as long as continue_after is True. Directly as first action in this loop, we set continue_after = ri(2) <= 0. ri(2) will return a random integer in {0, 1}. ri(2) <= 0 will therefore be True with probability 0.5.
We now will draw a new random index i2. We load the element at index i2 into variable current. If current == last, we immediately go back and draw another random i2. The reason is that we want to store current at i1, i.e., where last is currently stored. If current == last, this would change nothing. Furthermore, if continue_after is False, then current must also be different from first: If we exit the main loop, then we will store first into the place where we found current. Remember, we will overwrite the element at index i1 with current, so the very first element we overwrite must eventually be stored back into the array.
OK, if we have obtained an index i2 whose corresponding element current fulfills all requirements, we can set dest[i1] = last = current, i.e., remember current in last and also store it at index i1. Next, we will overwrite the element at index i2, so we set i1 = i2. If continue_after is True, the loop will continue. Otherwise, it will stop.
In the latter case, we store first back into the array at the last index i1 and do dest[i1] = first.
As a result, the operator will swap exactly two elements with probability 0.5, three elements with probability 0.25, four elements with probability 0.125, and so on.
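Putting the description above together, a self-contained sketch of this operator could look like this (illustrative; swap_n below is the actual implementation):

import numpy as np

def swap_n_sketch(random: np.random.Generator, dest: np.ndarray, x: np.ndarray) -> None:
    """Perform a cyclic sequence of swaps of random length, as described above."""
    np.copyto(dest, x)                       # assumes at least two distinct values in x
    n = len(dest)
    ri = random.integers                     # shorthand, as in the text above
    i1 = int(ri(n))                          # index whose element is overwritten first
    first = last = dest[i1]                  # first: value to put back at the end
    continue_after = True
    while continue_after:
        continue_after = ri(2) <= 0          # continue with probability 0.5
        while True:                          # draw a suitable second index
            i2 = int(ri(n))
            current = dest[i2]
            if current == last:              # would change nothing at i1
                continue
            if (not continue_after) and (current == first):
                continue                     # final step: i2 must later accept first
            break
        dest[i1] = last = current            # move current to index i1
        i1 = i2                              # index i2 will be overwritten next
    dest[i1] = first                         # close the cycle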
Thomas Weise. Optimization Algorithms. 2021. Hefei, Anhui, China: Institute of Applied Optimization (IAO), School of Artificial Intelligence and Big Data, Hefei University. http://thomasweise.github.io/oa/
Bases: Op1
A unary search operation that swaps several (different) elements.
It is similar to swap2, but instead may perform a random number of swaps. After each swap, it decides with probability 0.5 whether or not to perform another swap.
Copy x into dest and then swap several different values.
>>> rand = np.random.default_rng(10)
>>> xx = np.array(range(10), int)
>>> out = np.empty(len(xx), int)
>>> swap_n(rand, out, xx)
>>> print(out)
[0 1 7 3 4 5 6 2 8 9]
>>> swap_n(rand, out, xx)
>>> print(out)
[0 1 8 3 4 5 6 7 2 9]
>>> swap_n(rand, out, xx)
>>> print(out)
[0 5 2 3 4 8 6 7 1 9]
A binary operator that merges two permutations by adding elements in sequence.
Assume we have two permutations x0 and x1. For each position i in the destination string dest, we randomly select the permutation x from which the value should come (so either x=x0 or x=x1). We then store the first value of x not yet marked as done in dest[i] and mark that value as done in both x0 and x1.
This operator may be considered as a generalized version of the Alternating Position Crossover operator (AP) for the Traveling Salesperson Problem by Larrañaga et al. (1997). The original AP operator, as described by Larrañaga et al., simply creates an offspring by alternately selecting the next element of the first parent and the next element of the second parent, omitting the elements already present in the offspring. For example, if x0 is 12345678 and x1 is 37516824, the AP operator gives the offspring 13275468. Exchanging the parents results in 31725468.
Our generalized version randomly decides which of the two parent permutations to use each time, hopefully resulting in a greater variety of possible results. It also does not skip over a parent if its next element is already used, but instead picks the next not-yet-used element from that parent. As mnemonic for the operator, we use gap. Larrañaga et al. used AP for the version of the operator that strictly alternates between parents.
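A sketch of this generalized sequence merge; it is illustrative only (and quadratic in the permutation length), and the name gap_sketch is made up here:

import numpy as np

def gap_sketch(random: np.random.Generator, dest: np.ndarray,
               x0: np.ndarray, x1: np.ndarray) -> None:
    """Fill dest by repeatedly taking the first unused value of a random parent."""
    n = len(dest)
    parents = [x0, x1]
    done = [np.zeros(n, dtype=bool), np.zeros(n, dtype=bool)]   # per-parent flags
    for i in range(n):
        p = int(random.integers(2))                    # pick the contributing parent
        src, flags = parents[p], done[p]
        j = next(k for k in range(n) if not flags[k])  # first value not yet done
        value = src[j]
        dest[i] = value
        for q in range(2):                             # mark one occurrence of value
            for k in range(n):                         # as done in both parents
                if (not done[q][k]) and parents[q][k] == value:
                    done[q][k] = True
                    break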
Pedro Larrañaga, Cindy M. H. Kuijpers, Mikel Poza, and Roberto H. Murga. Decomposing Bayesian Networks: Triangulation of the Moral Graph with Genetic Algorithms, Statistics and Computing, 7(1):19-34, March 1997, https://doi.org/10.1023/A:1018553211613
Pedro Larrañaga, Cindy M. H. Kuijpers, Roberto H. Murga, Iñaki Inza, and S. Dizdarevic. Genetic Algorithms for the Travelling Salesman Problem: A Review of Representations and Operators. Artificial Intelligence Review, 13(2):129-170, April 1999. Kluwer Academic Publishers, The Netherlands. https://doi.org/10.1023/A:1006529012972
Bases: Op2
A binary operator trying to preserve the value sequence.
Apply a sequence mix from x0 and x1 to dest.
The Order-based Crossover operator.
Larrañaga et al. describe this operator as follows:
The order-based crossover operator by Syswerda (1991) selects at random several positions in a parent tour, and the order of the cities in the selected positions of this parent is imposed on the other parent. For example, consider the parents 12345678 and 24687531. Suppose that in the second parent the second, third, and sixth positions are selected. The values in these positions are 4, 6, and 5 respectively. In the first parent these cities are present at the fourth, fifth and sixth positions. Now the offspring is equal to parent 1 except in the fourth, fifth and sixth positions: 123xxx78. We add the missing cities to the offspring in the same order in which they appear in the second parent tour. This results in 12346578. Exchanging the role of the first parent and the second parent gives, using the same selected positions, 24387561.
We implement this operator such that each position has the same chance to be chosen from either parent, i.e., the total number of positions copied from the parents is binomially distributed with p=0.5, but we ensure that at least two positions are copied from each parent (as the result would otherwise necessarily equal one of the parents). We also switch the roles of the two parents in our implementation.
As mnemonic for the operator, we use ox2, similar to Larrañaga et al., who used OX2.
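A sketch of the textbook OX2 as described in the quotation above, restricted to permutations without repetitions; the name ox2_sketch is made up here, and the actual operator selects positions and switches parent roles as stated in the implementation note above:

import numpy as np

def ox2_sketch(random: np.random.Generator, dest: np.ndarray,
               x0: np.ndarray, x1: np.ndarray) -> None:
    """Impose the order of randomly chosen positions of x1 onto a copy of x0."""
    n = len(dest)
    np.copyto(dest, x0)
    sel = random.random(n) < 0.5                 # choose each position with p = 0.5
    while sel.sum() < 2:                         # ensure at least two chosen positions
        sel[int(random.integers(n))] = True
    picked = x1[sel]                             # chosen values, in the order of x1
    picked_set = {int(v) for v in picked}
    slots = [i for i in range(n) if int(x0[i]) in picked_set]   # where x0 holds them
    for i, v in zip(slots, picked):              # re-fill these slots in x1's order
        dest[i] = v

If the second, third, and sixth positions happen to be the selected ones, this sketch reproduces the offspring 12346578 from the quotation for the parents 12345678 and 24687531.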
Gilbert Syswerda. Schedule Optimization Using Genetic Algorithms. In Lawrence Davis, (ed.), Handbook of Genetic Algorithms, pages 332-349. 1991. New York, NY, USA: Van Nostrand Reinhold.
Pedro Larrañaga, Cindy M. H. Kuijpers, Roberto H. Murga, Iñaki Inza, and S. Dizdarevic. Genetic Algorithms for the Travelling Salesman Problem: A Review of Representations and Operators. Artificial Intelligence Review, 13(2):129-170, April 1999. Kluwer Academic Publishers, The Netherlands. https://doi.org/10.1023/A:1006529012972
Bases: Op2
The order-based crossover operator.
Initialize this operator.
Apply the order-based crossover from x0 and x1 to dest.