
Metaheuristic Optimization

Frequency Fitness Assignment (FFA) is a technique that redefines how we do metaheuristic optimization. Metaheuristic algorithms, like Evolutionary Algorithms (EAs), local search, Simulated Annealing (SA), or Tabu Search (TS), proceed iteratively according to the cycle given in Algorithm 1. At each iteration $i$, they maintain a set $S_i$ of interesting candidate solutions $x\in S_i$ from the solution space $\mathbb{X}$. They use these selected solutions in one way or another to sample a set $N_i$ of new solutions. This may happen via a unary search operator (called “mutation” in EA-lingo) or a binary operator (often called “crossover” or “recombination”) or in any other imaginable way. Either way, we get a set of new solutions $N_i$. Then, $S_i$ and $N_i$ are combined into a set $P_i=S_i\cup N_i$ and the set $S_{i+1}\subseteq P_i$ for the next iteration is chosen from it. Now, normally, the better a solution $x\in P_i$ relative to the other members of $P_i$, the higher its chance to be selected into $S_{i+1}$.

Algorithm 1. The normal cycle of metaheuristic algorithms.
Sample set $S_1$ of initial solutions from the solution space $\mathbb{X}$.
For $i$ from $1$ to $\dots$
 Create set $N_i$ of new solutions based on $S_i$.
 $P_i\gets S_i\cup N_i$.
 Select set $S_{i+1}$ from $P_i$ according to some policy.
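The cycle of Algorithm 1 can be sketched in a few lines of Python. This is a minimal illustration, not a definitive implementation: the helper names (`sample_initial`, `create_new`, `select`) and the toy problem of minimizing $f(x)=x^2$ over the integers are my own choices, not taken from the text.

```python
import random

def metaheuristic_cycle(sample_initial, create_new, select, iterations):
    """Generic skeleton of Algorithm 1: keep a set S of solutions,
    derive new ones N from it, and select the next S from P = S u N."""
    S = sample_initial()                     # set S_1 of initial solutions
    for _ in range(iterations):
        N = create_new(S)                    # set N_i of new solutions
        P = S + N                            # P_i = S_i u N_i
        S = select(P)                        # choose S_{i+1} from P_i
    return S

# Toy instantiation: minimize f(x) = x^2 over the integers.
random.seed(1)
f = lambda x: x * x
S = metaheuristic_cycle(
    sample_initial=lambda: [random.randint(-100, 100) for _ in range(4)],
    create_new=lambda S: [x + random.choice((-1, 1)) for x in S],  # mutate
    select=lambda P: sorted(P, key=f)[:4],   # keep the 4 best of P
    iterations=300)
best = min(S, key=f)
```

Because the selection here keeps the best members of $P_i$, which contains $S_i$, the best objective value in $S$ can never get worse from one iteration to the next.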

And this makes a lot of sense. This is how trial-and-error works: We try different solutions for a problem and see how well they work. Then we continue with slightly modified versions of the solution that we liked best, in order to find even better ones. This iterative procedure of trial and error with a bias for better solutions is the foundation of metaheuristic optimization.

Randomized Local Search (RLS)

Let us consider the simplest metaheuristic, namely Randomized Local Search (RLS). RLS begins by sampling one candidate solution $x_c$ from the solution space $\mathbb{X}$ uniformly at random. It computes the objective value $y_c$ of $x_c$ by evaluating the objective function $f:\mathbb{X}\to\mathbb{R}$. Let’s assume that $f$ is subject to minimization, i.e., represents some sort of cost or amount of consumed resource. $y_c\gets f(x_c)$ then is the quality of the current solution $x_c$: the smaller $y_c$, the better $x_c$. In the scheme given in Algorithm 1, we would basically have $S_i=\{x_c\}$.

Now, in its main loop, RLS creates a slightly modified copy $x_n$ of $x_c$. It does so by applying a unary operator $\mathit{op1}$ to $x_c$. $\mathit{op1}$ is usually randomized, meaning that it copies $x_c$ and then randomly changes one or multiple components of that copy. Either way, we get a new solution $x_n\gets\mathit{op1}(x_c)$. This solution is then also evaluated and we obtain its objective value $y_n\gets f(x_n)$. In the scheme given in Algorithm 1, we would basically have $N_i=\{x_n\}$.

In its selection step, RLS retains $x_n$ if it is not worse than $x_c$. This means that, if $y_n\leq y_c$, then $x_c\gets x_n$ and $y_c\gets y_n$. After the complete computational budget is exhausted, the algorithm returns the best solution that it has discovered (namely $x_c$) as well as its quality $y_c$. This process is illustrated as Algorithm 2.

Algorithm 2. The randomized local search (RLS).
Sample solution $x_c$ uniformly at random from $\mathbb{X}$.
$y_c\gets f(x_c)$.
Until computational budget exhausted repeat
 $x_n\gets\mathit{op1}(x_c)$; $y_n\gets f(x_n)$.
 If $y_n \leq y_c$ then
 $x_c\gets x_n$; $y_c\gets y_n$.
Return $x_c, y_c$.
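Algorithm 2 translates almost line by line into Python. The concrete problem below, minimizing the number of 1-bits in a 16-bit string with a single-bit-flip operator, is my own example to make the sketch runnable; it is not part of the article.

```python
import random

def rls(f, sample, op1, budget):
    """Randomized Local Search (Algorithm 2): keep the new solution
    whenever it is not worse than the current one."""
    x_c = sample()                           # initial solution
    y_c = f(x_c)
    for _ in range(budget):
        x_n = op1(x_c)                       # slightly modified copy
        y_n = f(x_n)
        if y_n <= y_c:                       # accept if not worse
            x_c, y_c = x_n, y_n
    return x_c, y_c

# Example: minimize the number of 1-bits in a bit string of length 16.
random.seed(42)
n = 16
f = sum                                      # objective: count of 1s
sample = lambda: [random.randint(0, 1) for _ in range(n)]

def op1(x):                                  # flip exactly one random bit
    x = list(x)
    i = random.randrange(n)
    x[i] ^= 1
    return x

x_best, y_best = rls(f, sample, op1, 10_000)
```

On this simple landscape every accepted step strictly reduces the number of 1-bits, so with a budget of 10,000 steps the search reaches the all-zeros optimum.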

Randomized Local Search with FFA (FRLS)

Now let’s plug FFA into the RLS algorithm, which yields FRLS. We first provide FRLS as Algorithm 3 and then discuss it step by step. It should be noted upfront that this algorithm will only work if the objective function is discrete, i.e., takes on a finite (and ideally small) number of different values.

Algorithm 3. The randomized local search with FFA, i.e., the FRLS algorithm.
$H\gets (0, 0, \dots, 0)$.
Sample solution $x_c$ uniformly at random from $\mathbb{X}$.
$y_c\gets f(x_c)$.
$x_b\gets x_c$; $y_b\gets y_c$.
Until computational budget exhausted repeat
 $x_n\gets\mathit{op1}(x_c)$; $y_n\gets f(x_n)$.
 $H[y_c]\gets H[y_c]+1$; $H[y_n]\gets H[y_n]+1$.
 If $H[y_n] \leq H[y_c]$ then
 $x_c\gets x_n$; $y_c\gets y_n$.
 If $y_n < y_b$ then
 $x_b\gets x_n$; $y_b\gets y_n$.
Return $x_b, y_b$.
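A Python sketch of Algorithm 3, reusing the bit-string example from above. The parameter `y_max` (the largest possible objective value, needed to size the frequency table $H$) is my own addition for the sketch; the article simply writes $H$ as a table over all objective values.

```python
import random

def frls(f, sample, op1, budget, y_max):
    """RLS with Frequency Fitness Assignment (Algorithm 3).
    Assumes f takes integer values in 0..y_max."""
    H = [0] * (y_max + 1)                    # encounter frequency per value
    x_c = sample()
    y_c = f(x_c)
    x_b, y_b = x_c, y_c                      # best-so-far, for reporting only
    for _ in range(budget):
        x_n = op1(x_c)
        y_n = f(x_n)
        H[y_c] += 1                          # both values are encountered
        H[y_n] += 1
        if H[y_n] <= H[y_c]:                 # select by frequency, not quality
            x_c, y_c = x_n, y_n
        if y_n < y_b:                        # quality only tracks the result
            x_b, y_b = x_n, y_n
    return x_b, y_b, H

# Same toy problem: minimize the number of 1-bits in a 16-bit string.
random.seed(42)
n = 16
f = sum
sample = lambda: [random.randint(0, 1) for _ in range(n)]

def op1(x):
    x = list(x)
    x[random.randrange(n)] ^= 1
    return x

x_b, y_b, H = frls(f, sample, op1, 10_000, y_max=n)
```

Note that the acceptance decision never looks at $y_n$ and $y_c$ directly, only at how often these objective values have been seen; the objective values are used solely to maintain the best-so-far solution $x_b$ that is returned at the end.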