This is the academic website of Dr. Thomas Weise (汤卫思), full professor at Hefei University (合肥大学) located in the beautiful city of Hefei (合肥市) in the Anhui Province (安徽省) in China.
My research area is metaheuristic optimization and operations research.
I mainly focus on discrete and combinatorial problems from domains such as logistics, production planning, scheduling, and packing.
My current main research interest is a technique called Frequency Fitness Assignment (FFA, 频率适应度分配), which is an “Algorithm Plugin” that removes the bias towards better solutions from its host algorithm and makes it invariant under all injective transformations of the objective function value.
This is the strongest theoretically possible invariance property; the only other algorithms that possess it are random sampling, random walks, and exhaustive enumeration, which are all very bad optimization methods.
Yet, if we plug FFA into the (1+1) EA, for instance, we obtain a speed-up of three orders of magnitude on the $\mathcal{NP}$-hard MaxSat problem.
While I currently focus on finding the limits, advantages, drawbacks, and use cases of FFA, I also contribute to a wide range of topics in optimization.
Besides doing research, I supervise great students, teach classes, write open source software, sometimes give talks, and contribute to the research community as reviewer and editor.
Recent Posts
2 minute read
Our paper When Frequency Fitness Assignment Fails: Trapped States in Frequency-Guided Local Search has been accepted at the Genetic and Evolutionary Computation Conference (GECCO’2026) taking place from July 13 to 17, 2026 in San José, Costa Rica.
3 minute read
Sometimes we have a PDF document but want to convert it to a series of JPG or PNG images, page by page. This can be done via Ghostscript. Here I provide the little Bash script pdf2imgs.sh, which wraps around Ghostscript and does this in the Linux terminal.
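For illustration, a minimal sketch of the kind of Ghostscript call that such a script wraps; the file names and the resolution are placeholder assumptions, not the script's actual parameters:

```bash
# Render each page of input.pdf as a 300 DPI PNG named page-001.png,
# page-002.png, ... (input.pdf and the settings are hypothetical examples).
gs -dBATCH -dNOPAUSE -dSAFER \
   -sDEVICE=png16m -r300 \
   -sOutputFile="page-%03d.png" \
   "input.pdf"
```

Using -sDEVICE=jpeg instead of png16m would produce JPG files instead.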
3 minute read
Sometimes we want to execute a command or program that may use too many CPU cores or too much memory and thus potentially crash our system. Instead of waiting for this to happen, we can limit the number of CPU cores as well as the amount of memory that the process may use or allocate. If the program can run smoothly within such limits, it will complete normally. If it tries to allocate more memory than we permit, it will most likely simply crash. If we have 8 logical CPU cores but permit the program to use only 4, then the computer can never become too slow because of the computational load. Regardless of what happens, our computer will remain fully usable.
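One way to impose such limits with standard Linux tools, sketched here with a placeholder command name and placeholder limits (the post may use a different mechanism, e.g., cgroups via systemd-run):

```bash
# Restrict mycommand (a placeholder) to CPU cores 0-3 and to 4 GiB of
# virtual memory; the subshell keeps the ulimit from affecting our shell.
(
  ulimit -v $((4 * 1024 * 1024))   # ulimit -v is given in KiB
  taskset -c 0-3 mycommand
)
```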
1 minute read
Sometimes we want to find out whether a text document contains only ASCII characters or whether there is some non-ASCII character in it. You see, ASCII is an old 7-bit encoding for text. Hence, it supports a very limited range of different characters and certainly not fancy stuff like “ä”, let alone “你好”. Most tools today understand the Unicode character set encoded with UTF-8, which is more or less backward-compatible with ASCII and can store all such characters. Some software, however, still takes offense if characters appear that are outside the ASCII range, e.g., in the names of functions in some programming languages. Other software can deal with Unicode well, but dislikes certain characters in certain places. If we get some strange LaTeX error when compiling a paper draft, it can sometimes help to check whether a “。” instead of a “.” accidentally slipped in somewhere.
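A quick way to perform such a check, assuming GNU grep and a placeholder file name file.tex:

```bash
# Print every line (with its line number) that contains a byte outside
# the 7-bit ASCII range; exits with status 1 if the file is pure ASCII.
grep --color=always -nP '[^\x00-\x7F]' file.tex
```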
3 minute read
Most graphics that we use in publications or reports are vector graphics, such as SVGs, PDFs, sometimes also WMFs, EMFs, EPSs, or PS files. It is often necessary to convert between them, most often between PDF and SVG. This can be done via Inkscape. Here I provide the little Bash script vec2vec.sh, which does this in the Linux terminal. It basically just executes Inkscape with the correct parameters.
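A single conversion of this kind with the Inkscape 1.x command line could look as follows (file names are placeholders; older Inkscape 0.9x versions used different flags):

```bash
# Convert input.pdf to output.svg via Inkscape's command-line interface.
inkscape "input.pdf" --export-filename="output.svg"
```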
3 minute read
Sometimes we have a PDF document that we want to submit through a web form, but it is too big. Often, this size is caused by the images inside the PDF. We can reduce their resolution, i.e., their “dots per inch” (DPI) value, and then the PDF gets smaller. This can be done via Ghostscript. Here I provide the little Bash script scalePdf.sh, which does this in the Linux terminal. It basically just executes Ghostscript with the correct parameters.
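A sketch of the kind of Ghostscript invocation such a script could wrap, assuming placeholder file names and a target resolution of 150 DPI:

```bash
# Rewrite in.pdf as out.pdf, downsampling all embedded images to 150 DPI.
gs -dBATCH -dNOPAUSE -dSAFER -sDEVICE=pdfwrite \
   -dDownsampleColorImages=true -dColorImageResolution=150 \
   -dDownsampleGrayImages=true  -dGrayImageResolution=150 \
   -dDownsampleMonoImages=true  -dMonoImageResolution=150 \
   -sOutputFile="out.pdf" "in.pdf"
```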
3 minute read
Sometimes you have a Microsoft Office document, say an MS Word or MS PowerPoint file, and want to convert it to PDF. If you do not have Microsoft Office installed, then you can do this via LibreOffice, which is free open source. Here I provide the little Bash script office2pdf.sh, which does this in the Linux terminal. It basically just executes LibreOffice with the correct parameters.
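The core of such a conversion is a single LibreOffice call; a minimal sketch with a placeholder file name:

```bash
# Convert document.docx to document.pdf in the current directory;
# --headless runs LibreOffice without opening its GUI.
libreoffice --headless --convert-to pdf --outdir . "document.docx"
```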
2 minute read
In this recent post, I gave a small script for packing files and folders into tar.xz archives. To the best of my knowledge, this offers the best compression ratio. However, sadly, not everybody knows how to deal with tar.xz archives and there might be situations where we must provide data as zip archives. These usually have a much worse compression ratio, but are fast to compress and uncompress and widely accepted. I here present the small Linux/Bash script zipCompress.sh, which is drop-in compatible with xzCompress.sh but instead produces zip archives. It attempts to produce the strongest zip compression possible with the built-in tools. You can download the script from here and the complete collection of my personal scripts is available here.
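For reference, the essential step of such a script, with placeholder names (the real zipCompress.sh adds more logic around it):

```bash
# Recursively pack myFolder into archive.zip with the strongest
# deflate setting that the standard zip tool offers.
zip -9 -r "archive.zip" "myFolder"
```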
3 minute read
Often, I want to compress one or multiple files or directories into an archive. Maybe I want to upload experimental results to zenodo or exchange them with my students. Maybe I want to offer a package with slides of one of my classes. Then I always use tar.xz archives. They take a longer time to create, but the compression is usually best, often much better than what zip archives can produce.
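The basic command behind this approach, with placeholder names and assuming GNU tar with xz installed:

```bash
# Pack myFolder into archive.tar.xz; XZ_OPT=-9e selects the slowest
# but strongest xz compression preset.
XZ_OPT="-9e" tar -cJf "archive.tar.xz" "myFolder"
```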
2 minute read
Sometimes, we want to download a file via the Linux terminal. Downloading files may always fail due to connection issues. So we want a command that tries again and again until it succeeds. Here I post a small Bash script that does exactly that, which you may store as download.sh. The script takes one or multiple URLs as parameters and downloads the corresponding files. You could, for example, type download.sh "https://thomasweise.github.io/programmingWithPython/programmingWithPython.pdf" and it will download our book on the Python programming language, while download.sh "http://thomasweise.github.io/scripts/scripts.tar.xz" downloads my personal script collection. The script uses wget internally, but adds an infinite loop around it. It first tries 10 times to download the file without rate limitation and then limits the connection speed in follow-up attempts. If the URL is simply invalid or the file does not exist, the script loops forever. Here you can download this script and the complete collection of my personal scripts is available here.
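Reduced to a minimal sketch, the retry pattern at the heart of such a script looks like this (the URL is a placeholder; the real download.sh adds rate limiting and support for multiple URLs):

```bash
# Retry the download until wget succeeds; -c resumes partial downloads.
until wget -c "https://example.com/file.pdf"; do
  sleep 5   # brief pause before the next attempt
done
```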
3 minute read
I use Linux for my work and try to keep my system up-to-date. This means that, whenever I shut down my computer, I run a little Bash script update.sh that updates all installed packages, both deb and snap packages. Sometimes, for whatever reason, there may be a problem with some inconsistent package state. My script basically applies all methods I know to fix such a state. The script is very heavy-handed and loops over the update process four times. If one update requires another to happen first and this can only be resolved by running the update process several times, then that is no problem. It also sleeps for one second between every two update steps, just in case. I usually run something like sudo update.sh && shutdown now, which then shuts down my computer. So I do not really need to care how long the script takes anyway. Here you can download this script and the complete collection of my personal scripts is available here.
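For illustration, one pass of the kind of update sequence described above might look like this, assuming a Debian/Ubuntu system with snap and root privileges (the actual update.sh wraps this in a loop and adds further fixes):

```bash
# One heavy-handed update pass: refresh package lists, repair broken
# package states, upgrade everything, then update the snap packages.
apt-get update
sleep 1
apt-get -y --fix-broken install
sleep 1
apt-get -y dist-upgrade
sleep 1
snap refresh
```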
3 minute read
Assume that we have a command that we want to execute in a Linux terminal. We know that the command may sometimes fail, but it should eventually succeed. So we would try the command again and again until that happens. A download is one example: if we know that the URL is reachable, then the command should succeed, but maybe there is a lost connection or some other temporary disturbance. Another such command could be a build process that downloads and installs dependencies.
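A simple shell pattern for this kind of retry logic, sketched with a hypothetical build.sh as the flaky command and an assumed cap of 10 attempts:

```bash
# Retry ./build.sh up to 10 times, pausing between failed attempts.
for attempt in $(seq 1 10); do
  ./build.sh && break                          # stop on the first success
  echo "attempt ${attempt} failed, retrying..." >&2
  sleep 10
done
```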
1 minute read
Sometimes, we need to delete empty files and empty directories under Linux. This happens, for example, when we want to restart an experiment with moptipy. Here I post a small Bash script that you may store as file deleteZeroSizeFiles.sh. It does exactly that: It searches the current directory and all subdirectories for empty files, i.e., files of size zero, and deletes all of them. Then, it recursively looks for empty directories, i.e., directories that contain neither files nor other directories, and deletes them as well. All files and directories that get deleted are also printed, so you can see what happened. Here you can download this script and the complete collection of my personal scripts is available here.
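The core of such a cleanup can be expressed in two find commands; this is a sketch of the idea, not necessarily the exact contents of the script:

```bash
# Delete all zero-size files, printing each path as it is removed.
find . -type f -empty -print -delete
# Delete all empty directories; -delete implies a depth-first traversal,
# so directories that only contained empty directories are removed too.
find . -type d -empty -print -delete
```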
4 minute read
Frequency Fitness Assignment (FFA, 频率适应度分配) is an “algorithm plugin” that fundamentally changes how the selection step in metaheuristics works. Recently, we discussed how FFA can be plugged into the simple local search algorithm RLS, yielding the FRLS. According to my claim above, the FRLS should work fundamentally differently from the RLS. That’s a pretty bold claim. Let me substantiate it a bit.
12 minute read
Frequency Fitness Assignment (FFA, 频率适应度分配) is a technique that fundamentally changes how (metaheuristic) optimization algorithms work. The goal of this post is to explore how this technique can be plugged into an existing algorithm. We first discuss optimization and the general pattern behind metaheuristics. We then discuss the simplest local search algorithm – randomized local search, or RLS for short. We plug FFA into this algorithm and obtain the FRLS. We finally list some properties of the FRLS as well as some related work.
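To make the idea concrete, here is a toy sketch of the FRLS scheme in Bash, minimizing the number of zeros in a bit string; the problem, the string length, and the budget are illustrative assumptions, not the setup from the post:

```bash
#!/bin/bash
# Toy FRLS: RLS plus Frequency Fitness Assignment on a OneMax-style task.
N=16
declare -A H                 # H[y] = how often objective value y was seen

f() {                        # objective function: count the zeros (minimize)
  local s="$1" c=0 i
  for ((i = 0; i < N; i++)); do
    [[ "${s:i:1}" == "0" ]] && ((c++))
  done
  echo "$c"
}

x=""                         # random initial bit string
for ((i = 0; i < N; i++)); do x+="$((RANDOM % 2))"; done
yx="$(f "$x")"

for ((step = 1; step <= 10000; step++)); do
  i=$((RANDOM % N))          # RLS move: flip one randomly chosen bit
  xp="${x:0:i}$((1 - ${x:i:1}))${x:i+1}"
  yxp="$(f "$xp")"
  ((H[$yx]++)); ((H[$yxp]++))        # FFA: update encounter frequencies
  if ((H[$yxp] <= H[$yx])); then     # accept the less frequent value
    x="$xp"; yx="$yxp"
  fi
  ((yx == 0)) && break               # optimum found
done
echo "result: ${x} with objective value ${yx} after ${step} steps"
```

Notice that the acceptance decision compares only encounter frequencies, never the objective values themselves, which is exactly what makes the method invariant under injective transformations of the objective function.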
13 minute read
Some time ago, I discussed why global optimization with an Evolutionary Algorithm (EA) is not necessarily better than local search. Indeed, I have been asked “Why should I use an EA?” quite a few times. Thus, today, it is time to write down a few ideas about why you may or may not benefit from using an EA. I tried to be objective, which is not entirely easy since I work in that domain.
22 minute read
My research area is metaheuristic optimization, i.e., algorithms that can find good approximate solutions for computationally hard problems in feasible time. The Traveling Salesperson Problem (TSP) is an example of such an optimization task. In a TSP, $n$ cities and the distances between them are given, and the goal is to find the shortest round-trip tour that visits each city exactly once and then returns to its starting point. The TSP is $\mathcal{NP}$-hard, meaning that every currently known algorithm that finds the exact, globally best solution for all possible TSP instances needs a runtime that grows exponentially with $n$ on the worst-case instances. And this is unlikely to change. In other words, if a TSP instance has $n$ cities, then the worst-case runtime of every known exact TSP solver is in the order of $2^n$. Well, today we have algorithms that can exactly solve a wide range of problems with tens of thousands of nodes and approximate the solution of million-node problems with an error of less than one part per thousand. This is pretty awesome, but the worst-case runtime to find the exact (optimal) solution is still exponential.
15 minute read
Sometimes, when discussing the benefits of Evolutionary Algorithms (EAs), their special variant Genetic Algorithms (GAs) with bit-string based search spaces, or other metaheuristic global search algorithms in general, statements like the following may be made: “If you use a huge population size with a large computational budget for both global search metaheuristics (with enough diversity) and local search metaheuristics, the global search approach will no doubt outperform the local search.” I cannot really agree with this statement, in particular if it is applied to black-box metaheuristics (those that make no use of problem-specific knowledge).
18 minute read
The center of my research is optimization. But what is optimization? Basically, optimization is the art of making good decisions. It provides us with a set of tools, mostly from the areas of computer science and mathematics, which are applicable in virtually all fields, ranging from business, industry, biology, physics, medicine, data mining, and engineering to even art.
17 minute read
The inspiration gleaned from observing nature has led to several important advances in the field of optimization. Still, it seems to me that a lot of work is based on such inspiration alone, which may divert attention away from practical and algorithmic concerns. As a result, there is a growing number of specialized terminologies used in the fields of Evolutionary Computation (EC) and Swarm Intelligence (SI), which I consider a problem for clarity in research. With this article, I would like to formulate my thoughts in the hope of contributing to a fruitful debate.