
Frequency Fitness Assignment (FFA, 频率适应度分配) is an “algorithm plugin” that fundamentally changes how the selection step in optimization algorithms works. Recently, we discussed how FFA can be plugged into the simple randomized local search algorithm (RLS), yielding the FRLS. According to my claim above, the FRLS should work fundamentally differently from the RLS. That’s a pretty bold claim. Let me substantiate it a bit.

Before I do that, let me briefly illustrate the RLS as Algorithm 1 and the FRLS as Algorithm 2.

Algorithm 1. The randomized local search (RLS).
Sample solution $x_c$ uniformly at random from $\mathbb{X}$.
$y_c\gets f(x_c)$.
While $\lnot$shouldTerminate repeat
 $x_n\gets\mathit{Op1}(x_c)$; $y_n\gets f(x_n)$.
 If $y_n \leq y_c$ then
 $x_c\gets x_n$; $y_c\gets y_n$.
Return $x_c, y_c$.

The RLS starts by sampling an initial best-so-far solution $x_c$ randomly from the search space $\mathbb{X}$. In each iteration, it applies the unary search operator $\mathit{Op1}(x_c)$ to it and obtains a modified copy $x_n$ of $x_c$. If $x_n$ is not worse than $x_c$, then it is accepted and replaces $x_c$. Otherwise, it is discarded. This is repeated until the algorithm terminates.
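To make the loop concrete, here is a minimal Python sketch of Algorithm 1 on a toy problem (minimizing the number of one-bits in a bit string). The function and parameter names are my own choices for illustration, not from the algorithm statement; a step budget stands in for the termination criterion.

```python
import random

def rls(f, n, max_steps, seed=None):
    """Randomized local search (Algorithm 1) over bit strings of length n."""
    rng = random.Random(seed)
    x_c = [rng.randint(0, 1) for _ in range(n)]  # random initial solution
    y_c = f(x_c)
    for _ in range(max_steps):                   # step budget as "shouldTerminate"
        x_n = x_c[:]                             # Op1: copy and flip one random bit
        x_n[rng.randrange(n)] ^= 1
        y_n = f(x_n)
        if y_n <= y_c:                           # accept if not worse
            x_c, y_c = x_n, y_n
    return x_c, y_c

# Toy objective: minimize the number of ones; the optimum is the all-zeros string.
x_best, y_best = rls(lambda x: sum(x), n=16, max_steps=2000, seed=1)
```

With this budget, the sketch reliably descends to the all-zeros optimum on such a small instance.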

Algorithm 2. The randomized local search with FFA, i.e., the FRLS algorithm.
$H\gets (0, 0, \dots, 0)$.
Sample solution $x_c$ uniformly at random from $\mathbb{X}$.
$y_c\gets f(x_c)$.
$x_b\gets x_c$; $y_b\gets y_c$.
While $\lnot$shouldTerminate repeat
 $x_n\gets\mathit{Op1}(x_c)$; $y_n\gets f(x_n)$.
 $H[y_c]\gets H[y_c]+1$; $H[y_n]\gets H[y_n]+1$.
 If $H[y_n] \leq H[y_c]$ then
 $x_c\gets x_n$; $y_c\gets y_n$.
 If $y_n < y_b$ then
 $x_b\gets x_n$; $y_b\gets y_n$.
Return $x_b, y_b$.

The FRLS works in the same way, but it replaces the selection decision: $x_n$ gets accepted if its objective value $y_n$ has been encountered in selection decisions no more often than the objective value $y_c$ of $x_c$. This looks very strange. Here I will not discuss why this is done. All I will discuss is the resulting invariance property.
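As a minimal sketch on the same toy bit-string problem (again with assumed names), Algorithm 2 only adds the frequency table $H$ and the separately tracked best-so-far pair, and swaps the acceptance test:

```python
import random
from collections import defaultdict

def frls(f, n, max_steps, seed=None):
    """RLS with FFA (Algorithm 2): selection compares frequencies, not values."""
    rng = random.Random(seed)
    H = defaultdict(int)                 # frequency table H, conceptually (0, 0, ..., 0)
    x_c = [rng.randint(0, 1) for _ in range(n)]
    y_c = f(x_c)
    x_b, y_b = x_c[:], y_c               # best-so-far solution, kept separately
    for _ in range(max_steps):
        x_n = x_c[:]                     # Op1: copy and flip one random bit
        x_n[rng.randrange(n)] ^= 1
        y_n = f(x_n)
        H[y_c] += 1
        H[y_n] += 1
        if H[y_n] <= H[y_c]:             # accept the less frequently seen value
            x_c, y_c = x_n, y_n
        if y_n < y_b:                    # H never guides the best-so-far update
            x_b, y_b = x_n[:], y_n
    return x_b, y_b

# Minimizing the number of ones; FFA still discovers the all-zeros optimum.
x_b, y_b = frls(lambda x: sum(x), n=8, max_steps=200_000, seed=3)
```

Note that the best-so-far pair $(x_b, y_b)$ must be tracked explicitly, because the FFA-guided current solution $x_c$ may wander away from good objective values again.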

Invariance under Injective Transformations of the Objective Function Value

FFA makes algorithms invariant under all injective transformations of the objective function value. Let’s say you are applying the FRLS to an objective function $f(x)$, where $x\in\mathbb{X}$ are the candidate solutions and $\mathbb{X}$ is the search space. The algorithm will begin with a random solution $x_0$ and then iteratively sample a new solution $x_i$ in each iteration $i$. Thus, it takes some path $x_0, x_1, x_2, x_3, \dots$ through the search space. Actually, this holds for all metaheuristics, regardless of how they sample and select solutions. Each run of the algorithm can be viewed as a path traveled through the search space.

Now, if you replace $f(x)$ with a different objective function $g(x)$, then FRLS will take exactly the same path $x_0, x_1, x_2, x_3, \dots$ if an injective function $h$ exists such that $g(x)=h(f(x))\forall x\in\mathbb{X}$.
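This claim is easy to check empirically. The sketch below (a self-contained restatement of the FRLS loop that records the visited solutions; all names are illustrative) runs FRLS with the same random seed on $f$ and on $g=h\circ f$ for an injective, deliberately non-order-preserving $h$, and compares the two paths:

```python
import random
from collections import defaultdict

def frls_path(f, n, steps, seed):
    """Run the FRLS loop and record the sequence of current solutions."""
    rng = random.Random(seed)
    H = defaultdict(int)
    x_c = [rng.randint(0, 1) for _ in range(n)]
    y_c = f(x_c)
    trace = [tuple(x_c)]
    for _ in range(steps):
        x_n = x_c[:]
        x_n[rng.randrange(n)] ^= 1       # Op1: flip one random bit
        y_n = f(x_n)
        H[y_c] += 1
        H[y_n] += 1
        if H[y_n] <= H[y_c]:
            x_c, y_c = x_n, y_n
        trace.append(tuple(x_c))
    return trace

f = lambda x: sum(x)                     # values in {0, ..., 12} for n = 12
h = lambda y: (37 * y) % 101             # injective on {0, ..., 12}, not order-preserving
g = lambda x: h(f(x))

# Same seed, hence the same random decisions -- and exactly the same path.
assert frls_path(f, 12, 1000, seed=42) == frls_path(g, 12, 1000, seed=42)
```

The reason is that $h$ maps objective values one-to-one, so the frequency table under $g$ is just a relabeled copy of the one under $f$, and every acceptance decision comes out identically.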

Invariance under Order-Preserving Transformations

The original algorithm RLS samples a new solution in each step and accepts it if it is not worse than the current solution. This algorithm is also invariant under some mappings $h$, but these have to be order-preserving. In other words, RLS will take the same path through the search space for two objective functions $f$ and $g$ if it holds that $f(x_a)\leq f(x_b) \Leftrightarrow g(x_a)\leq g(x_b)\forall x_a,x_b\in\mathbb{X}$. This is a nice and strong invariance property.

But it is much weaker than the one offered by FRLS. FRLS takes the same path through the search space for two objective functions $f$ and $g$ if it holds that $f(x_a)=f(x_b) \Leftrightarrow g(x_a)=g(x_b)\forall x_a,x_b\in\mathbb{X}$. The only condition for this invariance property is that $g(x)=h(f(x))$ for an injective function $h$. What does this mean?

Well, again, any order-preserving function $h$ is by default injective, so if RLS behaves the same on two problems, so does FRLS. However, $h(\nu)=-\nu$ is also an injective function. FRLS will follow the same path through the search space regardless of whether we optimize $f(x)$ or $g(x)=-f(x)$.
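Both claims can be checked side by side. In the sketch below (assumed names), one loop serves as RLS and, with the frequency table switched on, as FRLS:

```python
import random
from collections import defaultdict

def path(f, n, steps, seed, ffa):
    """Record the path of current solutions; ffa=False gives RLS, ffa=True gives FRLS."""
    rng = random.Random(seed)
    H = defaultdict(int)
    x_c = [rng.randint(0, 1) for _ in range(n)]
    y_c = f(x_c)
    trace = [tuple(x_c)]
    for _ in range(steps):
        x_n = x_c[:]
        x_n[rng.randrange(n)] ^= 1        # Op1: flip one random bit
        y_n = f(x_n)
        if ffa:
            H[y_c] += 1
            H[y_n] += 1
            accept = H[y_n] <= H[y_c]     # FFA-based selection
        else:
            accept = y_n <= y_c           # objective-based selection
        if accept:
            x_c, y_c = x_n, y_n
        trace.append(tuple(x_c))
    return trace

f = lambda x: sum(x)
mono = lambda x: f(x) ** 3 + 5            # h(v) = v^3 + 5: order-preserving on f's values
neg = lambda x: -f(x)                     # h(v) = -v: injective, but order-reversing

assert path(f, 12, 300, 7, ffa=False) == path(mono, 12, 300, 7, ffa=False)  # RLS: same path
assert path(f, 12, 300, 7, ffa=False) != path(neg, 12, 300, 7, ffa=False)   # RLS: diverges
assert path(f, 12, 300, 7, ffa=True) == path(neg, 12, 300, 7, ffa=True)     # FRLS: same path
```

Under $h(\nu)=-\nu$, every RLS acceptance decision is inverted (a bit flip always changes the sum by exactly one), so the RLS paths diverge at the very first step, while the FRLS paths stay identical.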

Invariance under Permutation

FFA is only suitable for objective functions that are discrete, i.e., that can be mapped to the natural numbers $\mathbb{N}$. Another class of injective mappings that can be applied to such functions are permutations. Let’s say that we have an objective function $f:\mathbb{X}\to\{1,2,3,\dots,m\}$. Then we could create any permutation $\Pi$ of the first $m$ natural numbers and optimize $\Pi[f(x)]$ instead. And FRLS would still take the same path through the search space.
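A concrete check (illustrative sketch with assumed names; the FRLS loop is restated so the snippet stands alone): shuffle the value range of a toy objective into a random permutation and compare the paths.

```python
import random
from collections import defaultdict

def frls_path(f, n, steps, seed):
    """FRLS loop that records the sequence of current solutions."""
    rng = random.Random(seed)
    H = defaultdict(int)
    x_c = [rng.randint(0, 1) for _ in range(n)]
    y_c = f(x_c)
    trace = [tuple(x_c)]
    for _ in range(steps):
        x_n = x_c[:]
        x_n[rng.randrange(n)] ^= 1       # Op1: flip one random bit
        y_n = f(x_n)
        H[y_c] += 1
        H[y_n] += 1
        if H[y_n] <= H[y_c]:
            x_c, y_c = x_n, y_n
        trace.append(tuple(x_c))
    return trace

n = 10
f = lambda x: sum(x)                 # values in {0, ..., n}
perm = list(range(n + 1))
random.Random(0).shuffle(perm)       # a random permutation Pi of f's value range
pi_f = lambda x: perm[f(x)]          # optimize Pi[f(x)] instead of f(x)

assert frls_path(f, n, 500, seed=5) == frls_path(pi_f, n, 500, seed=5)
```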

Invariance under Encryption

Cryptography offers another form of injective mappings. Normally, cryptographic algorithms $\mathcal{A}$ are applied to some form of text $T$ or binary data. The algorithm usually receives $T$ as input together with some password $P$ and returns the encrypted data $C=\mathcal{A}(T,P)$. $C$ usually does not correlate with $T$ in any form. Actually, this is the whole purpose of encryption. Of course, encryption is bijective, i.e., knowing $P$ and $C$, you can reproduce $T$. Bijective functions are a subclass of injective functions.

You could apply the FRLS to $\mathcal{A}(f(x), P)$ instead of $f(x)$. And it would take the same path through the search space.
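A toy stand-in already illustrates this (sketch with assumed names; XOR with a fixed key is its own inverse and therefore bijective on the non-negative integers, though it is of course not real encryption):

```python
import random
from collections import defaultdict

def frls_path(f, n, steps, seed):
    """FRLS loop that records the sequence of current solutions."""
    rng = random.Random(seed)
    H = defaultdict(int)
    x_c = [rng.randint(0, 1) for _ in range(n)]
    y_c = f(x_c)
    trace = [tuple(x_c)]
    for _ in range(steps):
        x_n = x_c[:]
        x_n[rng.randrange(n)] ^= 1       # Op1: flip one random bit
        y_n = f(x_n)
        H[y_c] += 1
        H[y_n] += 1
        if H[y_n] <= H[y_c]:
            x_c, y_c = x_n, y_n
        trace.append(tuple(x_c))
    return trace

f = lambda x: sum(x)
key = 0b101101
enc_f = lambda x: f(x) ^ key             # "encrypted" objective values
assert frls_path(f, 10, 500, seed=9) == frls_path(enc_f, 10, 500, seed=9)
```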

Other Algorithms that are Invariant under Injective Mappings

How weird is that? Besides FFA-based algorithms, only three other algorithms have this strong invariance property:

  • Random Sampling, i.e., in each step, create a completely random solution,
  • Random Walks, i.e., create a new solution in each step and accept it regardless of its quality, and
  • Exhaustive Enumeration, i.e., just test every possible solution, one after the other.

None of them are good optimization methods. But algorithms using FFA can, in some domains, deliver better solutions than their original, objective-guided host algorithms.

Notice that, while we explained this invariance property here using the example of the FRLS, it holds for all algorithms where the selection decision is guided purely by FFA. You could plug FFA into other metaheuristics as well, and they would become invariant under all injective transformations of the objective function value.

Further Reading

  • Thomas Weise (汤卫思), Zhize WU (吴志泽), Xinlu LI (李新路), Yan CHEN (陈岩), and Jörg Lässig: Frequency Fitness Assignment: Optimization without Bias for Good Solutions can be Efficient. IEEE Transactions on Evolutionary Computation (TEVC) 27(4):980-992. August 2023.
  • Thomas Weise (汤卫思), Zhize WU (吴志泽), Xinlu LI (李新路), and Yan CHEN (陈岩): Frequency Fitness Assignment: Making Optimization Algorithms Invariant under Bijective Transformations of the Objective Function Value. IEEE Transactions on Evolutionary Computation (TEVC) 25(2):307-319. April 2021.