
Randomized local search (RLS) is maybe the simplest metaheuristic optimization algorithm. It maintains a single current solution and, in each step, samples one new solution by applying a unary search operator to it. It accepts the new solution as the current solution if it is not worse. If we plug Frequency Fitness Assignment (FFA) into this algorithm, we obtain the FRLS algorithm.

The Algorithm

We first provide FRLS as Algorithm 1 and then discuss it step-by-step. It should be noted upfront that this algorithm will only work if the objective function is discrete, i.e., takes on a finite (and ideally small) number of different values.

Algorithm 1. The randomized local search with FFA, i.e., the FRLS algorithm.
$H\gets (0, 0, \dots, 0)$.
Sample solution $x_c$ uniformly at random from $\mathbb{X}$.
$y_c\gets f(x_c)$.
$x_b\gets x_c$; $y_b\gets y_c$.
While $\lnot$ shouldTerminate do
 $x_n\gets\mathit{op1}(x_c)$; $y_n\gets f(x_n)$.
 $H[y_c]\gets H[y_c]+1$; $H[y_n]\gets H[y_n]+1$.
 If $H[y_n] \leq H[y_c]$ then
  $x_c\gets x_n$; $y_c\gets y_n$.
  If $y_n < y_b$ then
   $x_b\gets x_n$; $y_b\gets y_n$.
Return $x_b, y_b$.

Let us first discuss the most important changes compared to a plain local search like RLS. FRLS uses a frequency table $H$ to determine how interesting solutions are. $H$ counts how often each objective value has been encountered during the selection step. It can be implemented as a simple array or as a hash map. We denote the initialization of this data structure by symbolically writing $H\gets (0, 0, \dots, 0)$.
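Concretely, if the objective function is known to map into a small integer range, $H$ can be a plain array; for arbitrary hashable objective values, a hash map that defaults to zero matches the symbolic initialization. A minimal Python sketch (the bound `MAX_F` is a made-up example value):

```python
from collections import defaultdict

# Dense variant: usable when f takes integer values in 0..MAX_F.
MAX_F = 100  # hypothetical upper bound on the objective value
H_array = [0] * (MAX_F + 1)

# Sparse variant: works for any hashable objective values; absent keys
# read as 0, matching the symbolic initialization H <- (0, 0, ..., 0).
H_map = defaultdict(int)

H_array[42] += 1  # increment the frequency counter of objective value 42
H_map[42] += 1
```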

The creation and evaluation of the initial solution $x_c$ and of the new solution $x_n$ sampled in each iteration stay the same as in any other local search. When the algorithm starts, $x_c$ is sampled uniformly at random from the solution space $\mathbb{X}$. Its objective value $y_c$ is computed by evaluating the objective function $f$, i.e., $y_c\gets f(x_c)$. In each iteration of the main loop, a new candidate solution $x_n$ is sampled by applying the unary search operator $\mathit{op1}$ to $x_c$. The new solution is evaluated and its objective value is stored in $y_n$. This is exactly what RLS does as well.
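As a concrete illustration, suppose $\mathbb{X}$ is the space of bit strings of length $n$ and $\mathit{op1}$ flips one randomly chosen bit. This choice of solution space and operator is just an example and not prescribed by the algorithm:

```python
import random

def op1(x):
    """Unary search operator: copy bit string x and flip one random bit."""
    y = list(x)                   # copy, so that the current solution stays untouched
    i = random.randrange(len(y))  # pick one position uniformly at random
    y[i] = 1 - y[i]               # flip that bit
    return y

n = 8
x_c = [random.randint(0, 1) for _ in range(n)]  # initial solution, uniform over X
x_n = op1(x_c)                                  # new candidate solution
```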

However, before the selection step, i.e., before we decide which of the two solutions we want to keep, two new statements appear: $H[y_c]\gets H[y_c]+1$ and $H[y_n]\gets H[y_n]+1$ increment the frequency counters of the objective values $y_c$ and $y_n$ of $x_c$ and $x_n$, respectively. In other words, $H[y_c]$ now counts how often a solution with objective value $y_c$ has taken part in a selection decision, and $H[y_n]$ does the same for $y_n$.

The selection decision also changes: Instead of comparing $y_n\leq y_c$, we now compare $H[y_n] \leq H[y_c]$. If this comparison yields true, then $x_n$ replaces $x_c$ and $y_c$ is overwritten with $y_n$. What does this mean? In RLS, we select the new solution $x_n$ if it is better than or equally good as the current solution $x_c$. In FRLS, we select the new solution $x_n$ if its objective value $y_n$ has been encountered at most as often in the selection decisions as the objective value $y_c$ of the current solution $x_c$. Put simply, we accept the new solution if its objective value is at least as rare as that of the current solution.
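The two acceptance rules can be placed side by side. This sketch assumes, as in Algorithm 1, that the frequency counters have already been incremented before the decision is made:

```python
def rls_accepts(y_n, y_c):
    """RLS: accept the new solution if it is not worse (minimization)."""
    return y_n <= y_c

def frls_accepts(H, y_n, y_c):
    """FRLS: accept if y_n's encounter frequency is not larger than y_c's."""
    return H[y_n] <= H[y_c]

# A worse solution whose objective value is rare is accepted by FRLS:
worse_but_rare = frls_accepts({5: 3, 9: 1}, y_n=9, y_c=5)  # True, although 9 > 5
```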

This means that, in FRLS, it does not really matter whether the new solution is better than the current solution. Indeed, the algorithm may well accept worse solutions and, in experiments, “optimizes” into the “wrong” direction about half of the time. This apparent waste of time is, however, exactly what gives the algorithm its exploration capability.

It also means that the best solution that this algorithm discovers is likely to get lost and overwritten later on. This can never happen in RLS. But in FRLS, it can. Thus we need the additional bookkeeping in Algorithm 1: we remember the best-ever encountered solution in an additional variable $x_b$ and its objective value in a variable $y_b$. These two values are returned when the algorithm has finished, e.g., exhausted its computational budget. They are initialized right at the beginning of the algorithm. We only need to check whether they should be updated when $H[y_n] \leq H[y_c]$: if $y_n$ is encountered for the first time, then $H[y_n]=1$ after the increment, so this condition always holds and $y_n$ is compared against $y_b$. Otherwise, $y_n$ has already been encountered at least once before; it might have been a new best-so-far value back then, but it cannot be one now.

Neither $x_b$ nor $y_b$ influences the search in any way. They are used strictly to remember the best-so-far solution and have no impact on where the search will go next; that is decided entirely by the frequency table $H$. FRLS thus no longer searches for better solutions, but for solutions whose objective values have been encountered less often so far.
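Putting everything together, Algorithm 1 can be sketched as the following Python function. Only the frequency table $H$, the $H$-based acceptance, and the $x_b$/$y_b$ bookkeeping are FRLS-specific; the bit-string space, the one-bit-flip operator, and the toy objective (minimize the number of zero bits) are illustrative assumptions:

```python
import random
from collections import defaultdict

def frls(f, sample, op1, max_steps):
    """Randomized local search with FFA (FRLS), following Algorithm 1."""
    H = defaultdict(int)             # H <- (0, 0, ..., 0)
    x_c = sample()                   # initial solution, sampled from X
    y_c = f(x_c)
    x_b, y_b = x_c, y_c              # best-so-far solution and objective value
    for _ in range(max_steps):       # budget-based termination criterion
        x_n = op1(x_c)               # new candidate solution
        y_n = f(x_n)
        H[y_c] += 1                  # both objective values take part in the
        H[y_n] += 1                  # selection, so both counters are incremented
        if H[y_n] <= H[y_c]:         # frequency-based acceptance, not y_n <= y_c
            x_c, y_c = x_n, y_n
            if y_n < y_b:            # best-so-far can only improve in this branch
                x_b, y_b = x_n, y_n
    return x_b, y_b

# Toy instance: bit strings of length n, minimize the number of zero bits.
def flip_one_bit(x):
    y = list(x)
    y[random.randrange(len(y))] ^= 1
    return y

n = 16
best_x, best_y = frls(
    f=lambda x: x.count(0),
    sample=lambda: [random.randint(0, 1) for _ in range(n)],
    op1=flip_one_bit,
    max_steps=5000,
)
```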
