This is the academic website of Dr. Thomas Weise (汤卫思), full professor at Hefei University (合肥大学) located in the beautiful city of Hefei (合肥市) in the Anhui Province (安徽省) in China.

My research area is metaheuristic optimization and operations research. I mainly focus on discrete and combinatorial problems from domains such as transport and logistics, production planning, scheduling, and packing. Within these domains, I currently pursue two main interests.

My first research interest is a technique called Frequency Fitness Assignment (FFA, 频率适应度分配), an "algorithm plugin" that can be incorporated into existing optimization algorithms. Instead of letting its host algorithm compare raw objective values, FFA lets it compare how often each objective value has been encountered so far. This removes the bias towards better solutions from the host algorithm and makes it invariant under all injective transformations of the objective function value. This is the strongest theoretically possible invariance; the only other algorithms that possess it are random sampling, random walks, and exhaustive enumeration, all of which are very poor optimizers. Yet if we plug FFA into the (1+1) EA, for instance, it can speed the algorithm up by three orders of magnitude on the NP-hard MaxSat problem. My goal is to research both the positive and negative aspects of FFA. Some of my publications on this topic are:

  • Thomas Weise (汤卫思), Zhize WU (吴志泽), Xinlu LI (李新路), Yan CHEN (陈岩), and Jörg Lässig: Frequency Fitness Assignment: Optimization without Bias for Good Solutions can be Efficient. IEEE Transactions on Evolutionary Computation (TEVC) 27(4):980-992. August 2023. [infos]
  • Thomas Weise (汤卫思), Zhize WU (吴志泽), Xinlu LI (李新路), and Yan CHEN (陈岩): Frequency Fitness Assignment: Making Optimization Algorithms Invariant under Bijective Transformations of the Objective Function Value. IEEE Transactions on Evolutionary Computation (TEVC) 25(2):307-319. April 2021. [infos]
  • Thomas Weise (汤卫思), Mingxu WAN (万明绪), Ke TANG (唐珂), Pu WANG, Alexandre Devert, and Xin YAO (姚新): Frequency Fitness Assignment. IEEE Transactions on Evolutionary Computation (TEVC) 18(2):226-243. April 2014. [infos]
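To give a feel for how simple the FFA plugin is, here is a minimal sketch of a (1+1) EA with FFA, sometimes called the (1+1) FEA, on an illustrative bit-string problem. The function names and the toy objective are my own choices for this sketch, not taken from any particular implementation: the algorithm maintains a frequency table H over objective values and accepts the offspring if its objective value has been encountered at most as often as the parent's.

```python
import random
from collections import defaultdict

def count_zeros(x):
    # Illustrative toy objective (to be minimized): number of zeros in the bit string.
    return len(x) - sum(x)

def fea_1p1(f, n, max_steps, seed=1):
    """A minimal (1+1) EA with Frequency Fitness Assignment (sketch).

    Selection compares the encounter frequencies H of objective values
    instead of the objective values themselves, which makes the search
    invariant under injective transformations of f.
    """
    rng = random.Random(seed)
    H = defaultdict(int)                      # encounter frequency per objective value
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    best, best_f = list(x), fx                # best-so-far, tracked separately
    for _ in range(max_steps):
        y = list(x)
        y[rng.randrange(n)] ^= 1              # flip one random bit
        fy = f(y)
        H[fx] += 1                            # update the frequencies of both
        H[fy] += 1                            # objective values encountered
        if H[fy] <= H[fx]:                    # select by frequency, not by objective
            x, fx = y, fy
        if fy < best_f:
            best, best_f = list(y), fy
    return best, best_f

best, best_f = fea_1p1(count_zeros, 16, 5000)
```

Note that the frequency-based acceptance makes the search drift towards rarely-seen objective values rather than towards better ones, which is why the best-so-far solution must be recorded separately.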

My second research interest is benchmarking, which is perhaps the most important tool for experimental research on metaheuristics. We need proper tools (such as moptipy) to run replicable and self-documenting experiments. With such tools, we can gather large amounts of meaningful data, and the question then arises of how to make sense of it. This, again, can be done with tools, statistics, or even artificial intelligence. Some of my publications on this topic are:

  • Thomas Weise (汤卫思) and Zhize WU (吴志泽): Replicable Self-Documenting Experiments with Arbitrary Search Spaces and Algorithms. Companion of the Genetic and Evolutionary Computation Conference (GECCO'2023), July 15-19, 2023, Lisbon, Portugal, pages 1891-1899. New York, NY, USA: ACM. [infos]
  • Thomas Weise (汤卫思), Yan CHEN (陈岩), Xinlu LI (李新路), and Zhize WU (吴志泽): Selecting a diverse set of benchmark instances from a tunable model problem for black-box discrete optimization algorithms. Applied Soft Computing Journal (ASOC) 92:106269. June 2020. [infos]
  • Thomas Weise (汤卫思), Xiao-Feng WANG (王晓峰), Qi QI (齐琪), Bin LI (李斌), and Ke TANG (唐珂): Automatically discovering clusters of algorithm and problem instance behaviors as well as their causes from experimental data, algorithm setups, and instance features. Applied Soft Computing Journal (ASOC), 73:366–382, December 2018. [infos]

Besides doing research, I supervise students, teach classes in the international curricula of our university, and contribute to the research community as a reviewer and editor.