Randomized local search (RLS) is perhaps the simplest metaheuristic optimization algorithm. It maintains a single best-so-far solution $x_c$ and, in each step, samples one new solution $x_n$ by applying a unary search operator to $x_c$. It accepts the new solution $x_n$ as the current solution $x_c$ if it is not worse. This simple search already works surprisingly well on a wide range of optimization problems. It can be viewed as the baseline that any new algorithm must significantly outperform to be worth its salt.

The Algorithm

Algorithm 1. The randomized local search (RLS).
Sample solution $x_c$ uniformly at random from $\mathbb{X}$.
$y_c\gets f(x_c)$.
While $\lnot$ shouldTerminate do
 $x_n\gets\mathit{op1}(x_c)$; $y_n\gets f(x_n)$.
 If $y_n \leq y_c$ then
  $x_c\gets x_n$; $y_c\gets y_n$.
Return $x_c, y_c$.
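A direct Python translation of Algorithm 1 might look as follows. This is only a sketch: the bit-string solution space, the flip-one-bit operator, and the fixed budget of 1000 objective function evaluations in the demo are illustrative assumptions, not prescribed by the pseudocode.

```python
import random

def rls(f, op1, sample, max_fes, rng):
    """Randomized local search (Algorithm 1), minimizing f."""
    x_c = sample(rng)                # sample x_c uniformly at random from X
    y_c = f(x_c)                     # evaluate the objective function
    for _ in range(max_fes - 1):     # loop until the budget is exhausted
        x_n = op1(x_c, rng)          # derive a slightly modified copy of x_c
        y_n = f(x_n)
        if y_n <= y_c:               # accept x_n if it is not worse
            x_c, y_c = x_n, y_n
    return x_c, y_c

# Demo: minimize the number of 1-bits in a string of n bits.
n = 16
rng = random.Random(42)
sample = lambda r: [r.randint(0, 1) for _ in range(n)]  # uniform bit string

def flip_one(x, r):
    """Hypothetical unary operator: copy x, then flip one random bit."""
    y = list(x)                      # first copy x_c ...
    y[r.randrange(len(y))] ^= 1      # ... then change one component
    return y

best_x, best_y = rls(sum, flip_one, sample, 1000, rng)
print(best_y)  # the budget easily suffices to reach the optimum 0 here
```

Passing `f`, `op1`, and `sample` as parameters keeps the search logic independent of the solution representation, mirroring how the pseudocode treats them as black boxes.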

RLS begins by sampling one candidate solution $x_c$ from the solution space $\mathbb{X}$ uniformly at random. It computes the objective value $y_c$ of $x_c$ by evaluating the objective function $f:\mathbb{X}\to\mathbb{R}$. Let’s assume that $f$ is subject to minimization, i.e., that it represents some sort of cost or amount of consumed resources. $y_c\gets f(x_c)$ then is the quality of the current solution $x_c$: the smaller $y_c$, the better $x_c$.

Now, in its main loop, RLS creates a slightly modified copy $x_n$ of $x_c$. It does so by applying a unary operator $\mathit{op1}$ to $x_c$. $\mathit{op1}$ is usually randomized, meaning that it first copies $x_c$ and then randomly changes one or multiple components of that copy. Either way, we get a new solution $x_n\gets\mathit{op1}(x_c)$. This solution is then also evaluated and we obtain its objective value $y_n\gets f(x_n)$.
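To make the "first copy, then randomly change components" pattern concrete, here is a sketch of such a unary operator for permutation-based solution spaces (as used, e.g., when tours or schedules are encoded as permutations). The operator and its name `op1_swap` are my own illustration, not part of the original post.

```python
import random

def op1_swap(perm, rng=random):
    """Hypothetical unary operator op1 for permutations:
    copy perm, then swap the elements at two random positions."""
    x = list(perm)                       # first copy x_c ...
    i, j = rng.sample(range(len(x)), 2)  # pick two distinct positions
    x[i], x[j] = x[j], x[i]              # ... then randomly change the copy
    return x

print(op1_swap([0, 1, 2, 3, 4], random.Random(7)))
```

Note that the original `perm` is left untouched: $x_c$ must survive unchanged in case the selection step rejects $x_n$.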

In its selection step, RLS keeps $x_n$ if it is not worse than $x_c$: if $y_n\leq y_c$, then $x_c\gets x_n$ and $y_c\gets y_n$. Once the computational budget is exhausted, the algorithm returns the best solution it has discovered (namely $x_c$) together with its quality $y_c$. This process is illustrated in Algorithm 1.

If we plug Frequency Fitness Assignment (FFA) into this algorithm, we obtain the FRLS. But this is a story for another page.

Plugging Frequency Fitness Assignment into Metaheuristics


Frequency Fitness Assignment (FFA, 频率适应度分配) is a technique that fundamentally changes how (metaheuristic) optimization algorithms work. The goal of this post is to explore how this technique can be plugged into an existing algorithm. We first discuss optimization and the general pattern of metaheuristics. We then discuss the simplest local search algorithm – randomized local search, or RLS for short. We plug FFA into this algorithm and obtain the FRLS. We finally list some properties of FRLS as well as some related works.
