Coverage for moptipy/algorithms/so/ppa.py: 96%
80 statements
1"""
2A simple implementation of a Plant Propagation Algorithm (PPA).
4This is a simple implementation of the Plant Propagation Algorithm, PPA for
5short, with some tweaks and modifications.
6Our PPA implementation works as follows:
81. It starts with a set of :attr:`~moptipy.algorithms.so.ppa.PPA.m` randomly
9 sampled solutions in the list `lst`. Each solution `x` is evaluated and its
10 objective value `f(x)` is remembered.
2. In the main loop...

   a. First, the range `[fmin,fmax]` of the objective values of the first
      :attr:`~moptipy.algorithms.so.ppa.PPA.m` solutions in `lst` is
      determined. We set `frange = fmax - fmin`, where `fmax` is the largest
      objective value of any of the first `m` solutions in `lst` and `fmin`
      is the smallest one. If `frange > 0`, then the fitness `z(x)` of each
      element is `(f(x) - fmin) / frange`. Otherwise, i.e., if all solutions
      in `lst` have the same objective value, we set `z(x)` to be a random
      number uniformly distributed in `[0,1)` and drawn separately for each
      solution.
   b. For each of the first :attr:`~moptipy.algorithms.so.ppa.PPA.m` solutions
      `x` in `lst`, we create `1 + int(nmax * r * (1 - z(x)))` offspring,
      where :attr:`~moptipy.algorithms.so.ppa.PPA.nmax` is the maximum number
      of offspring per solution and `r` is again an independently drawn random
      number uniformly distributed in `[0,1)`. In other words, solutions with
      a fitness close to zero will produce more offspring. If the solutions in
      the list `lst` have different objective values, then this means that
      better solutions produce more offspring.
      Each such offspring is the result of the application of a unary operator
      with step size, i.e., an instance of
      :class:`~moptipy.api.operators.Op1WithStepSize`. The step size is set to
      `r * max_step * z(x)`, where `r` again is a freshly and independently
      drawn random number uniformly distributed in `[0,1)`. This means that
      better solutions are modified with smaller step sizes and worse
      solutions are modified more strongly.
      :attr:`~moptipy.algorithms.so.ppa.PPA.max_step` is a parameter of the
      algorithm that determines the maximum permissible step size. It is
      always from the interval `[0,1]`.
      Examples for such operators are given in
      :mod:`~moptipy.operators.permutations.op1_swap_exactly_n`,
      :mod:`~moptipy.operators.permutations.op1_swap_try_n`, or
      :mod:`~moptipy.operators.bitstrings.op1_flip_m`.
      The new solutions are appended to `lst` and their objective values are
      computed.
   c. The list is then sorted by objective values in ascending order, meaning
      that the best solutions are up front.
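
For illustration, the following sketch traces one pass of step 2 with made-up
objective values; the variable names are hypothetical and only mirror the
quantities described above::

    import random

    fs = [10.0, 14.0, 18.0]  # made-up objective values of the m = 3 survivors
    nmax, max_step = 5, 0.3  # maximum offspring per solution, maximum step
    fmin, fmax = min(fs), max(fs)
    frange = fmax - fmin     # here: 8.0
    for fx in fs:
        # fitness in [0, 1): 0 is best; random if all values are identical
        z = random.random() if frange <= 0 else (fx - fmin) / frange
        # better (smaller) z -> more offspring
        n_offspring = 1 + int(nmax * random.random() * (1.0 - z))
        # better (smaller) z -> smaller step size
        step = random.random() * max_step * z
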
The main differences between this procedure and the "standard-PPA" are as
follows:

A. The algorithm is implemented for minimization and all equations are
   modified accordingly.
B. After normalizing the objective values in the population, the `tanh`-based
   scaling is *not* applied.
   Instead, the normalized objective values, where `0` is best and `1` is
   worst, are used directly to determine the number of offspring per record
   and the step length.
C. The fitness of a record equals its normalized objective value
   (in `[0, 1]`), unless all records have the same objective value, in which
   case the fitness of each record is set to a random number uniformly
   distributed in `[0, 1)`.
   If all elements in the population have the same objective value,
   normalizing is not possible as it would lead to a division by zero.
   One could use a constant value, say `0.5`, in this case, but there is no
   guarantee that this would be a good choice.
   We therefore use random values from `[0, 1)` instead.
   These may sometimes be suitable, sometimes not.
   But at least they are likely not *always* a bad choice, which might happen
   in some scenarios with `0.5` or any other constant.
D. The decisions regarding the number of offspring per selected record and the
   step-width of the search moves are based only on this fitness (and, again,
   not on the `tanh` scaling, which is not used).
   Since we normally do not know the characteristics of the objective function
   in advance, I think that we also often do not know whether a `tanh` scaling
   (that emphasizes objective values close to the best and close to the worst)
   is necessary or a good idea.
   It could be good in some cases, but it might as well be a bad choice in
   others.
   For now, I have thus not implemented this and just use the raw normalized
   objective values.
E. As unary operators, we employ instances of the class
   :class:`~moptipy.api.operators.Op1WithStepSize`, which provides a unary
   operator with a step size between `0` (smallest possible modification) and
   `1` (largest possible modification) and will scale appropriately between
   the two extremes.
   Often, instances of this class will determine the number or magnitude of
   changes based on an exponential scaling (see
   :func:`~moptipy.operators.tools.exponential_step_size`) of the step length,
   as illustrated in the sketch after this list.
   The idea is that small step sizes should be emphasized and that really big
   step sizes are rarely needed.
   This thus effectively takes the place of the `tanh` scaling.
F. Maximum step lengths, i.e., the parameter
   :attr:`~moptipy.algorithms.so.ppa.PPA.max_step`, are not explicitly used
   in some of the papers.
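
The following toy function is only a rough sketch of the *idea* behind such
exponential step-size scaling; the exact formula used by
:func:`~moptipy.operators.tools.exponential_step_size` may differ in its
details::

    def exp_step_sketch(step: float, min_steps: int, max_steps: int) -> int:
        # Map a step size from [0, 1] to an integer number of changes:
        # step = 0 yields min_steps, step = 1 yields max_steps, and the
        # growth in between is exponential rather than linear, so small
        # change counts dominate for most step sizes.
        return int(round(min_steps * (max_steps / min_steps) ** step))

    # e.g., with min_steps=1 and max_steps=100:
    #   exp_step_sketch(0.0, 1, 100) == 1
    #   exp_step_sketch(0.5, 1, 100) == 10
    #   exp_step_sketch(1.0, 1, 100) == 100
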
In order to understand the behavior of the algorithm, consider the following
case. Assume that we set the maximum number
(:attr:`~moptipy.algorithms.so.ppa.PPA.nmax`) of offspring per solution to `1`
and the number :attr:`~moptipy.algorithms.so.ppa.PPA.m` of solutions to
survive selection to `1` as well. In this case, the PPA has exactly the same
"population structure" as the Randomized Local Search
(:mod:`~moptipy.algorithms.so.rls`), namely it preserves the best-so-far
solution and generates one new solution in each step, which then competes with
that best-so-far solution. The two algorithms then only differ in their search
operator: The step-length of the unary operator used in the PPA depends on the
relative objective value, whereas the one used in the RLS does not. However,
there are many situations where the two could still be equivalent. For
example, if the current and new solution have different objective values,
normalizing the objective value will mean that the best of the two has
normalized objective value `0`. This equates to the shortest possible step
length. In this case, for example, the step-length based operator
:mod:`~moptipy.operators.permutations.op1_swap_try_n` behaves exactly like the
:mod:`~moptipy.operators.permutations.op1_swap2` operator and the step-length
based :mod:`~moptipy.operators.bitstrings.op1_flip_m` operator behaves like
:mod:`~moptipy.operators.bitstrings.op1_flip1`.
Of course, if both the current and the new solution have the same objective
value, then we use a random number from `[0,1)` as normalized objective value,
so the operators would not behave the same. Then again, one could set the
maximum step length :attr:`~moptipy.algorithms.so.ppa.PPA.max_step` to `0`.
In this case, the step length is always zero and most of our step-length based
operators will behave like their fixed, smallest-step counterparts, as
mentioned above. So in other words, if we set both
:attr:`~moptipy.algorithms.so.ppa.PPA.m` and
:attr:`~moptipy.algorithms.so.ppa.PPA.nmax` to `1` and set
:attr:`~moptipy.algorithms.so.ppa.PPA.max_step` to `0`, our PPA behaves like
:mod:`~moptipy.algorithms.so.rls` (if the search operators are appropriately
chosen).
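
A minimal sketch of such a matched pair of setups on bit strings could look as
follows, assuming that the classes `Op0Random`, `Op1Flip1`, and `Op1FlipM`
live in the modules referenced above and are constructible without arguments,
and that :class:`~moptipy.algorithms.so.rls.RLS` takes the two operators as
constructor arguments::

    from moptipy.algorithms.so.ppa import PPA
    from moptipy.algorithms.so.rls import RLS
    from moptipy.operators.bitstrings.op0_random import Op0Random
    from moptipy.operators.bitstrings.op1_flip1 import Op1Flip1
    from moptipy.operators.bitstrings.op1_flip_m import Op1FlipM

    # With m=1, nmax=1, and max_step=0.0, the PPA keeps a single
    # best-so-far solution and derives exactly one offspring from it per
    # iteration, always with step size 0 (Op1FlipM then flips one bit).
    ppa_like_rls = PPA(Op0Random(), Op1FlipM(), m=1, nmax=1, max_step=0.0)
    # RLS with a single-bit-flip operator has the same structure.
    rls = RLS(Op0Random(), Op1Flip1())
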

Now :mod:`~moptipy.algorithms.so.rls` is also known as the (1+1) EA and,
indeed, it is a special case of the (mu+lambda) EA implemented in
:mod:`~moptipy.algorithms.so.ea`.
I think that, with some appropriate settings of the parameters, we can
probably construct some setups of both algorithms with larger populations that
should be equivalent or close-to-equivalent in the big picture.
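
As a general usage sketch, the PPA could be plugged into a moptipy experiment
roughly as follows; the example assumes that the bit-string space, the OneMax
objective, and the `Execution` API are available under the paths and with the
setter names shown::

    from moptipy.algorithms.so.ppa import PPA
    from moptipy.api.execution import Execution
    from moptipy.examples.bitstrings.onemax import OneMax
    from moptipy.operators.bitstrings.op0_random import Op0Random
    from moptipy.operators.bitstrings.op1_flip_m import Op1FlipM
    from moptipy.spaces.bitstrings import BitStrings

    n = 32  # the problem size
    ex = Execution()
    ex.set_solution_space(BitStrings(n))
    ex.set_objective(OneMax(n))
    ex.set_algorithm(PPA(Op0Random(), Op1FlipM(), m=10, nmax=4))
    ex.set_rand_seed(1)
    ex.set_max_fes(10_000)
    with ex.execute() as process:
        print(process.get_best_f())  # objective value of the best solution
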
Below, you can find references on the PPA.

1. Abdellah Salhi and Eric Serafin Fraga. Nature-Inspired Optimisation
   Approaches and the New Plant Propagation Algorithm. *Proceedings of the
   International Conference on Numerical Analysis and Optimization
   (ICeMATH'2011),* June 6-8, 2011, Yogyakarta, Indonesia, volume 1,
   pages K2-1--K2-8. ISBN: 978-602-98919-1-1.
   https://doi.org/10.13140/2.1.3262.0806.
   https://repository.essex.ac.uk/9974/1/paper.pdf.
2. Misha Paauw and Daan van den Berg. Paintings, Polygons and Plant
   Propagation. In Anikó Ekárt, Antonios Liapis, and María Luz Castro Pena,
   editors, *Proceedings of the 8th International Conference on Computational
   Intelligence in Music, Sound, Art and Design (EvoMUSART'19, Part of
   EvoStar)*, April 24-26, 2019, Leipzig, Germany, Lecture Notes in Computer
   Science (LNCS), volume 11453, pages 84-97. ISBN: 978-3-030-16666-3. Cham,
   Switzerland: Springer. https://doi.org/10.1007/978-3-030-16667-0_6.
   https://www.researchgate.net/publication/332328080.
3. Muhammad Sulaiman, Abdellah Salhi, Eric Serafin Fraga, Wali Khan Mashwa,
   and Muhammad M. Rashi. A Novel Plant Propagation Algorithm: Modifications
   and Implementation. *Science International (Lahore)* 28(1):201-209, #2330,
   January/February 2016. http://www.sci-int.com/pdf/4066579081%20a%20201-\
209%20PPA%20Science%20international_Wali.pdf.
   https://arxiv.org/pdf/1412.4290.pdf.
4. Hussein Fouad Almazini, Salah Mortada, Hassan Fouad Abbas Al-Mazini, Hayder
   Naser Khraibet AL-Behadili, and Jawad Alkenani. Improved Discrete Plant
   Propagation Algorithm for Solving the Traveling Salesman Problem. *IAES
   International Journal of Artificial Intelligence (IJ-AI)* 11(1):13-22,
   March 2022. http://doi.org/10.11591/ijai.v11.i1.pp13-22.
   https://www.researchgate.net/publication/357484222.
5. Birsen İrem Selamoğlu and Abdellah Salhi. The Plant Propagation Algorithm
   for Discrete Optimisation: The Case of the Travelling Salesman Problem.
   In Xin-She Yang, editor, *Nature-Inspired Computation in Engineering,*
   pages 43-61. Studies in Computational Intelligence (SCI), Volume 637.
   March 2016. Cham, Switzerland: Springer.
   https://doi.org/10.1007/978-3-319-30235-5_3.
   https://www.researchgate.net/publication/299286896.
6. Marleen de Jonge and Daan van den Berg. Parameter Sensitivity Patterns in
   the Plant Propagation Algorithm. In Juan Julián Merelo Guervós,
   Jonathan M. Garibaldi, Christian Wagner, Thomas Bäck, Kurosh Madani, and
   Kevin Warwick, editors, *Proceedings of the 12th International Joint
   Conference on Computational Intelligence* (IJCCI'20), November 2-4, 2020,
   Budapest, Hungary, pages 92-99. Setúbal, Portugal: SciTePress.
   https://doi.org/10.5220/0010134300920099.
   https://www.researchgate.net/publication/346829569.
7. Ege de Bruin. Escaping Local Optima by Preferring Rarity with the
   Frequency Fitness Assignment. Master's Thesis at Vrije Universiteit
   Amsterdam, Amsterdam, the Netherlands. 2022.
8. Wouter Vrielink and Daan van den Berg. Parameter Control for the Plant
   Propagation Algorithm. In Antonio M. Mora and Anna Isabel Esparcia-Alcázar,
   editors, *Late-Breaking Abstracts of EvoStar'21*, April 7-9, 2021, online
   conference. https://arxiv.org/pdf/2106.11804.pdf.
   https://www.researchgate.net/publication/350328314.
9. Levi Koppenhol, Nielis Brouwer, Danny Dijkzeul, Iris Pijning, Joeri
   Sleegers, and Daan van den Berg. Exactly Characterizable Parameter Settings
   in a Crossoverless Evolutionary Algorithm. In Jonathan E. Fieldsend and
   Markus Wagner, editors, *Genetic and Evolutionary Computation Conference
   (GECCO'22) Companion Volume*, July 9-13, 2022, Boston, MA, USA,
   pages 1640-1649. New York, NY, USA: ACM.
   https://doi.org/10.1145/3520304.3533968.
   https://www.researchgate.net/publication/362120506.
197"""
198from math import isfinite
199from typing import Callable, Final, cast
201from numpy.random import Generator
202from pycommons.types import check_int_range, type_error
204from moptipy.algorithms.so.record import Record
205from moptipy.api.algorithm import Algorithm1
206from moptipy.api.operators import Op0, Op1WithStepSize
207from moptipy.api.process import Process
208from moptipy.utils.logger import KeyValueLogSection
209from moptipy.utils.strings import PART_SEPARATOR, num_to_str_for_name
212# start book
class PPA(Algorithm1):
    """The Plant Propagation Algorithm (PPA)."""

    def solve(self, process: Process) -> None:
        """
        Apply the PPA to an optimization problem.

        :param process: the black-box process object
        """
        m: Final[int] = self.m  # m: the number of best solutions kept
        nmax: Final[int] = self.nmax  # maximum offspring per solution
        list_len: Final[int] = (nmax + 1) * m
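        # (nmax + 1) * m slots suffice: the m surviving records plus at
        # most nmax offspring for each of them.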
        # initialization of some variables omitted in book for brevity
        # end book
        random: Final[Generator] = process.get_random()  # random gen
        create: Final[Callable] = process.create  # create x container
        evaluate: Final[Callable] = process.evaluate  # the objective
        op0: Final[Callable] = self.op0.op0  # the nullary operator
        op1: Final[Callable] = cast("Op1WithStepSize",
                                    self.op1).op1  # the unary operator
        should_terminate: Final[Callable] = process.should_terminate
        r01: Final[Callable[[], float]] = cast(  # random floats
            "Callable[[], float]", random.random)
        max_step: Final[float] = self.max_step
        # start book
        # create list of m random records and enough empty records
        lst: Final[list] = [None] * list_len  # pre-allocate list
        f: int | float = 0  # variable to hold objective values
        for i in range(list_len):  # fill list of length (nmax + 1) * m
            x = create()  # by creating point in search space
            if i < m:  # only the first m records are initialized by
                op0(random, x)  # applying nullary operator = randomize
                if should_terminate():  # should we quit?
                    return  # computational budget exhausted -> quit
                f = evaluate(x)  # continue? ok, evaluate new solution
            lst[i] = Record(x, f)  # create and store record

        it: int = 0  # the iteration counter
        while True:  # lst: keep records 0..m-1, overwrite the rest
            it += 1  # step iteration counter
            fmin = fmax = lst[0].f  # get range of objective values
            for i in range(m):  # iterate over selected individuals
                fval = lst[i].f  # get objective value
                if fval < fmin:  # is it less than minimum?
                    fmin = fval  # yes -> update the minimum
                elif fval > fmax:  # no! is it more than maximum then?
                    fmax = fval  # yes -> update maximum
            frange = fmax - fmin  # compute the range of objective values
            all_same = (not isfinite(frange)) or (frange <= 0.0)
            total = m  # the total population length (so far: m)
            for i in range(m):  # generate offspring for each survivor
                rec = lst[i]  # get parent record
                fit = r01() if all_same else ((rec.f - fmin) / frange)
                x = rec.x  # the parent x
                for _ in range(1 + int((1.0 - fit) * r01() * nmax)):
                    if should_terminate():  # should we quit?
                        return  # yes - then return
                    dest = lst[total]  # get next destination record
                    total += 1  # remember we have now one more
                    dest.it = it  # set iteration counter
                    op1(random, dest.x, x, fit * max_step * r01())
                    dest.f = evaluate(dest.x)  # evaluate new point
            ls = lst[0:total]  # get sub-list of elements in population
            ls.sort()  # sort these used elements
            lst[0:total] = ls  # write the sorted sub-list back
# end book

    def __init__(self, op0: Op0, op1: Op1WithStepSize, m: int = 30,
                 nmax: int = 5, max_step: float = 0.3,
                 name: str = "ppa") -> None:
        """
        Create the Plant Propagation Algorithm (PPA).

        :param op0: the nullary search operator
        :param op1: the unary search operator
        :param m: the number of best solutions to survive in each generation
        :param nmax: the maximum number of offspring per solution
        :param max_step: the maximum step length, from the interval `[0, 1]`
        :param name: the base name of the algorithm
        """
        if not isinstance(op1, Op1WithStepSize):
            raise type_error(op1, "op1", Op1WithStepSize)

        #: the number of records to survive in each generation
        self.m: Final[int] = check_int_range(m, "m", 1, 1_000_000)
        #: the maximum number of offspring per solution per iteration
        self.nmax: Final[int] = check_int_range(
            nmax, "nmax", 1, 1_000_000)
        if not isinstance(max_step, float):
            raise type_error(max_step, "max_step", float)
        if (not isfinite(max_step)) or (max_step < 0.0) or (max_step > 1.0):
            raise ValueError(f"max_step={max_step}, but must be in [0,1].")
        #: the maximum step length
        self.max_step: Final[float] = max_step

        name = f"{name}{PART_SEPARATOR}{m}{PART_SEPARATOR}{nmax}"
        if max_step != 1.0:
            name = f"{name}{PART_SEPARATOR}{num_to_str_for_name(max_step)}"
        super().__init__(name, op0, op1)

    def log_parameters_to(self, logger: KeyValueLogSection) -> None:
        """
        Log the parameters of the algorithm to a logger.

        :param logger: the logger for the parameters
        """
        super().log_parameters_to(logger)
        logger.key_value("m", self.m)
        logger.key_value("nmax", self.nmax)
        logger.key_value("maxStep", self.max_step)