moptipy.api package

The API for implementing optimization algorithms and optimization problems.

This package provides two things:

  • the basic, abstract API for implementing optimization algorithms and problems

  • the abstraction and implementation of black-box processes in the form of the process API and its implementations

The former helps us to implement and plug together different components of optimization problems and optimization algorithms. The latter allows us to apply algorithms to problems and to collect the results in a transparent way. It also permits logging of the algorithm progress or even the change of dynamic parameters.

The moptipy API has the following major elements for implementing optimization algorithms and their components. You are encouraged to read more about the logic behind this structure in our free online book at https://thomasweise.github.io/oa.

  • component provides the base class Component from which all components of optimization algorithms and optimization problems inherit. It offers the basic method log_parameters_to() for writing all parameters and configuration elements of a component to a KeyValueLogSection of a log file (embodied by an instance of Logger) and the method initialize(), which must be overwritten to do any initialization necessary before a run of the optimization process.

  • algorithm provides the base class Algorithm from which any optimization algorithm will inherit and which defines the optimization algorithm API, most importantly the method solve() which must be overwritten to implement the algorithm behavior. Here you can also find subclasses such as Algorithm0, Algorithm1, and Algorithm2 for algorithms that use nullary; nullary and unary; as well as nullary, unary, and binary search operators, respectively.

  • encoding provides the base class Encoding which can be used to implement the decoding procedure decode() which translates elements of a search space to elements of the solution space. Such an encoding may be needed if the two spaces differ, as they do, for instance, in our Job Shop Scheduling Problem example (see ob_encoding).

  • mo_algorithm provides the base class MOAlgorithm for multi-objective optimization algorithms, which is a subclass of Algorithm.

  • mo_archive provides the base class MOArchivePruner for pruning archives of non-dominated solutions during a multi-objective optimization process. Since multi-objective optimization problems may have many possible solutions, but we can only return/retain a finite number of them, it may be necessary to decide which solution to keep and which to dispose of. This is the task of the archive pruner.

  • objective provides the base class Objective for objective functions, i.e., for implementing the criteria rating how good a solution is. Objective functions are subject to minimization and provide a method evaluate() which computes an integer or floating point value rating one solution. They may additionally have a lower_bound() and/or an upper_bound().

  • mo_problem provides the base class MOProblem, which represents a multi-objective optimization problem. This is a subclass of Objective and at the same time, a collection of multiple instances of Objective. Each multi-objective problem can compute a vector of objective values via f_evaluate(), but also implements the single-objective evaluate() method returning a default scalarization of the objective vector. This makes multi-objective optimization compatible with single-objective optimization. While actual multi-objective algorithms can work in a truly multi-objective fashion, logging can be unified and even single-objective methods can be applied. To allow for the representation of preferences, the default Pareto domination relationship can be overwritten by providing a custom implementation of f_dominates().

  • operators provides the base classes Op0, Op1, and Op2 for nullary, unary, and binary search operators, respectively. These can be used to implement the methods op0(), op1(), and op2() that are used by metaheuristic optimization algorithms to sample solutions and to derive solutions from existing ones.

  • space provides the base class Space for implementing the functionality of search and solution spaces. An instance of Space offers methods such as create() for creating a data structure for holding a point in the search space (with undefined contents), copy() for copying one data structure to another one, to_str() and from_str() to convert a data structure to and from a string, is_equal() to check whether one data structure equals another one, and validate() to verify whether the contents of a data structure are valid. With these methods, optimization algorithms can create and copy the data structure containers holding solutions as needed, without requiring any information about the actual contents and layout of these structures. The search operators (see above) can then handle the actual processing of the data structures. At the same time, the string conversion routines allow for storing the results of algorithms in log files. A minimal sketch of such a space follows after this list.
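The following is a minimal, hypothetical sketch of such a space: a space whose points are fixed-length lists of digits. It only uses the method names listed above; the argument order of copy() (destination first, source second) and the presence of an n_points() method are assumptions about the Space API, not guarantees.

from moptipy.api.space import Space


class DigitList(Space):
    """A sketched space whose points are lists of n digits in 0..9."""

    def __init__(self, n: int) -> None:
        """Remember the number n of digits per point."""
        super().__init__()
        self.n = n

    def create(self) -> list[int]:
        """Create a container for one point; its contents are undefined."""
        return [0] * self.n

    def copy(self, dest: list[int], source: list[int]) -> None:
        """Copy source into dest (argument order assumed: dest, source)."""
        dest[:] = source

    def to_str(self, x: list[int]) -> str:
        """Convert a point to a string, e.g., for storing it in a log file."""
        return "".join(str(d) for d in x)

    def from_str(self, text: str) -> list[int]:
        """Parse a point back from its string representation."""
        return [int(c) for c in text]

    def is_equal(self, x1: list[int], x2: list[int]) -> bool:
        """Check whether two points are identical."""
        return x1 == x2

    def validate(self, x: list[int]) -> None:
        """Raise an error if x is not a valid point of this space."""
        if len(x) != self.n:
            raise ValueError(f"point must have length {self.n}, has {len(x)}.")
        if any((d < 0) or (d > 9) for d in x):
            raise ValueError(f"all elements must be digits, but got {x!r}.")

    def n_points(self) -> int:
        """Return the number of different points in this space (assumed API)."""
        return 10 ** self.n

    def __str__(self) -> str:
        """Return the short name of this space."""
        return f"digits{self.n}"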

The algorithm and experiment execution API has the following major elements:

  • execution provides the builder class Execution that is used to construct one application of an optimization algorithm to an optimization problem. It configures the termination criteria, what information should be logged, and which algorithm to apply to which problem. Its method execute() returns an instance of Process that contains the state of the optimization after it is completed, i.e., after the algorithm was executed. Log files follow the specification at https://thomasweise.github.io/moptipy/#log-file-sections.

  • The module logging holds mainly string constants that identify sections and keys for the data to be written to log files.

  • mo_execution provides the builder class MOExecution, which is the multi-objective equivalent of Execution.

  • experiment offers the function run_experiment() which allows you to execute a structured experiment applying a set of optimization algorithms to a set of optimization problems in a reproducible fashion, in parallel or in a distributed way. It will create a folder structure as prescribed in https://thomasweise.github.io/moptipy/#file-names-and-folder-structure. The data from such a folder structure can then be read in by the experiment evaluation tools in package evaluation and discussed in https://thomasweise.github.io/moptipy/#evaluating-experiments.

  • process provides the class Process, which is the basis for all optimization processes. It offers the functionality of an objective function, the search space, the termination criterion, and the random number generator to the optimization algorithm. At the same time, it collects the best-so-far solution and writes log files (if needed) for the user.

  • mo_process provides the class MOProcess, which is the multi-objective equivalent of Process. It also collects an archive of solutions.

  • subprocesses offers several functions to slice off a part of the computational budget of a process so that an algorithm can use a sub-algorithm, to wrap processes for running external algorithms that do not respect the moptipy API, or to run sub-algorithms starting from a specified initial solution.

Submodules

moptipy.api.algorithm module

The base classes for implementing optimization algorithms.

All optimization algorithms implemented based on the moptipy API inherit from Algorithm. If you implement a new algorithm, you will want to override the following methods:

  1. solve() implements the algorithm itself. It receives an instance of Process as a parameter that allows for the creation and evaluation of candidate solutions and that provides a random number generator. The optimization algorithm then will sample solutions and pass them to evaluate() to obtain their objective values, striving to sample better and better solutions.

  2. The dunder method __str__ should be overridden to return a short mnemonic name of the algorithm.

  3. log_parameters_to() needs to be overridden if the algorithm has any parameters. This method should then store the values of all the parameters to the logging context. It should also invoke the log_parameters_to() routines of all sub-components of the algorithm.

  4. initialize() needs to be overridden to reset/initialize all internal data structures and to invoke the initialize() methods of all components (such as search operators) of the algorithm.

Notice that we already provide specialized algorithm sub-classes for several common scenarios, such as:

  1. Algorithm0 for algorithms that have a nullary search operator (Op0).

  2. Algorithm1 for algorithms that have a nullary (Op0) and a unary (Op1) search operator.

  3. Algorithm2 for algorithms that have a nullary (Op0), a unary (Op1), and a binary (Op2) search operator.

  4. MOAlgorithm for multi-objective optimization.

These classes automatically invoke the log_parameters_to() and initialize() routines of their operators.
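As an illustration, here is a minimal sketch of an Algorithm1 subclass implementing a simple hill climber. It only relies on the Process methods documented below (create(), evaluate(), get_random(), should_terminate()) and assumes the usual operator call signatures op0(random, dest) and op1(random, dest, x).

from moptipy.api.algorithm import Algorithm1
from moptipy.api.process import Process


class SketchedHillClimber(Algorithm1):
    """A sketched hill climber accepting new solutions that are not worse."""

    def solve(self, process: Process) -> None:
        """Sample a random start, then repeatedly try a unary move."""
        random = process.get_random()      # the random generator of this run
        best_x = process.create()          # container for the best-so-far point
        new_x = process.create()           # container for the newly sampled point

        self.op0.op0(random, best_x)       # create a random initial point
        best_f = process.evaluate(best_x)  # ... and evaluate it

        while not process.should_terminate():
            self.op1.op1(random, new_x, best_x)  # derive a new point from best_x
            new_f = process.evaluate(new_x)
            if new_f <= best_f:                  # accept if not worse
                best_f = new_f
                best_x, new_x = new_x, best_x    # swap the containers

Because the sketch builds on Algorithm1, the __str__(), initialize(), and log_parameters_to() handling for the two operators is already taken care of by the base classes; only the search loop in solve() had to be written.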

If you implement a new algorithm, you can and should test it with the pre-defined unit test routine validate_algorithm() or one of its specialized versions:

  1. for bit-string-based search spaces, based on validate_algorithm_on_bitstrings(): a. validate_algorithm_on_onemax(), b. validate_algorithm_on_leadingones()

  2. for the JSSP based on validate_algorithm_on_1_jssp(): a. validate_algorithm_on_jssp()

  3. for real-valued vector search spaces, based on validate_algorithm_on_vectors(): a. validate_algorithm_on_ackley()

class moptipy.api.algorithm.Algorithm[source]

Bases: Component

A base class for implementing optimization algorithms.

solve(process)[source]

Apply this optimization algorithm to the given process.

Parameters:

process (Process) – the process which provides methods to access the search space, the termination criterion, and a source of randomness. It also wraps the objective function, remembers the best-so-far solution, and takes care of creating log files (if this is wanted).

Return type:

None

class moptipy.api.algorithm.Algorithm0(name, op0)[source]

Bases: Algorithm

An algorithm with a nullary search operator.

initialize()[source]

Initialize the algorithm.

Return type:

None

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

name: Final[str]

the name of this optimization algorithm, which is also the return value of __str__()

op0: Final[Op0]

The nullary search operator.

class moptipy.api.algorithm.Algorithm1(name, op0, op1)[source]

Bases: Algorithm0

An algorithm with a unary search operator.

initialize()[source]

Initialize the algorithm.

Return type:

None

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

op1: Final[Op1]

The unary search operator.

class moptipy.api.algorithm.Algorithm2(name, op0, op1, op2)[source]

Bases: Algorithm1

An algorithm with a binary and unary operator.

initialize()[source]

Initialize the algorithm.

Return type:

None

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

op2: Final[Op2]

The binary search operator.

moptipy.api.algorithm.check_algorithm(algorithm)[source]

Check whether an object is a valid instance of Algorithm.

Parameters:

algorithm (Any) – the algorithm object

Return type:

Algorithm

Returns:

the object

Raises:

TypeError – if algorithm is not an instance of Algorithm

>>> check_algorithm(Algorithm())
Algorithm
>>> try:
...     check_algorithm('A')
... except TypeError as te:
...     print(te)
algorithm should be an instance of moptipy.api.algorithm.Algorithm but is str, namely 'A'.
>>> try:
...     check_algorithm(None)
... except TypeError as te:
...     print(te)
algorithm should be an instance of moptipy.api.algorithm.Algorithm but is None.

moptipy.api.component module

Provides the base class for all components of the moptipy API.

All elements of the moptipy API inherit from Component. If you implement a new such component, you can test it using the pre-defined unit test routine validate_component().
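For illustration, here is a minimal sketch of a component with one hypothetical parameter. It overrides initialize() to reset its internal state and log_parameters_to() to store its configuration, always chaining up to the superclass first; the call logger.key_value() is assumed to be the KeyValueLogSection method for writing one key-value pair.

from moptipy.api.component import Component


class SketchedSchedule(Component):
    """A sketched component with a single hypothetical parameter t0."""

    def __init__(self, t0: float) -> None:
        """Store the start value t0 of this schedule."""
        super().__init__()
        self.t0 = t0            # the configured parameter, logged below
        self.__current = t0     # the mutable per-run state

    def initialize(self) -> None:
        """Reset the internal state before a new run."""
        super().initialize()
        self.__current = self.t0

    def log_parameters_to(self, logger) -> None:
        """Write the parameters of this component to a KeyValueLogSection."""
        super().log_parameters_to(logger)   # logs the name and class
        logger.key_value("t0", self.t0)     # key_value() is assumed here

    def __str__(self) -> str:
        """Return the short name of this component."""
        return f"schedule{self.t0}"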

class moptipy.api.component.Component[source]

Bases: object

The base class for all components of the moptipy API.

initialize()[source]

Initialize this component before a new run.

Before every run of the optimization algorithm, its initialize method is called. The algorithm in turn must call all the initialize methods of all of its components.

Return type:

None

log_parameters_to(logger)[source]

Log all parameters of this component as key-value pairs.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

>>> from moptipy.utils.logger import InMemoryLogger
>>> with InMemoryLogger() as l:
...     with l.key_values("C") as kv:
...         Component().log_parameters_to(kv)
...     text = l.get_log()
>>> text[-2]
'class: moptipy.api.component.Component'
>>> text[-3]
'name: Component'
>>> len(text)
4

moptipy.api.encoding module

The base class for implementing encodings.

Sometimes, in optimization, the search space and the space of possible solutions are different. For example, in the Job Shop Scheduling Problem (JSSP), the search operators of the optimization algorithm may process permutations with repetitions (Permutations) while the objective function rates Gantt charts. The Gantt charts are the solutions that the user wants, but they are harder to deal with for search operators. However, we can easily develop search operators for permutations and permutations can be mapped to Gantt charts (see OperationBasedEncoding). A combination of search space, solution space, and encoding can be configured when setting up an experiment Execution, which then takes care of showing the search space to the optimization algorithm while providing the objective function and user with the elements of the solution spaces.

All encodings inherit from the class Encoding. If you implement a new such encoding, you may want to test it using the pre-defined unit test routine validate_encoding().
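As a minimal, hypothetical sketch, the following encoding assumes that the search space holds real-valued numpy vectors and the solution space holds integer numpy vectors of the same length; decode() simply rounds each coordinate.

import numpy as np

from moptipy.api.encoding import Encoding


class RoundingEncoding(Encoding):
    """A sketched encoding mapping real vectors to integer vectors."""

    def decode(self, x, y) -> None:
        """Overwrite y with the element-wise rounded values of x."""
        # x (the point in the search space) remains unchanged, y is overwritten
        np.rint(x, out=y, casting="unsafe")

    def __str__(self) -> str:
        """Return the short name of this encoding."""
        return "rounding"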

class moptipy.api.encoding.Encoding[source]

Bases: Component

The encoding translates from a search space to a solution space.

decode(x, y)[source]

Translate from search- to solution space.

Map a point x from the search space to a point y in the solution space.

Parameters:
  • x – the point in the search space, remaining unchanged.

  • y – the destination data structure for the point in the solution space, whose contents will be overwritten

Return type:

None

moptipy.api.encoding.check_encoding(encoding, none_is_ok=True)[source]

Check whether an object is a valid instance of Encoding.

Parameters:
  • encoding (Any) – the object

  • none_is_ok (bool, default: True) – is it ok if None is passed in?

Return type:

Encoding | None

Returns:

the object

Raises:

TypeError – if encoding is not an instance of Encoding

>>> check_encoding(Encoding())
Encoding
>>> check_encoding(Encoding(), True)
Encoding
>>> check_encoding(Encoding(), False)
Encoding
>>> try:
...     check_encoding('A')
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is str, namely 'A'.
>>> try:
...     check_encoding('A', True)
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is str, namely 'A'.
>>> try:
...     check_encoding('A', False)
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is str, namely 'A'.
>>> print(check_encoding(None))
None
>>> print(check_encoding(None, True))
None
>>> try:
...     check_encoding(None, False)
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is None.

moptipy.api.execution module

The algorithm execution API.

class moptipy.api.execution.Execution[source]

Bases: object

Define all the components of an experiment and then execute it.

This class follows the builder pattern. It allows us to step-by-step store all the parameters needed to execute an experiment. Via the method execute(), we can then run the experiment and obtain the instance of Process after the execution of the algorithm. From this instance, we can query the final result of the algorithm application.
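A minimal usage sketch is shown below. The concrete components (the bit-string space, the OneMax objective, the RLS algorithm, and the two bit-flip operators) are assumptions about what moptipy ships and under which module paths; the Execution methods themselves are the ones documented in this section.

from moptipy.algorithms.so.rls import RLS
from moptipy.api.execution import Execution
from moptipy.examples.bitstrings.onemax import OneMax
from moptipy.operators.bitstrings.op0_random import Op0Random
from moptipy.operators.bitstrings.op1_flip1 import Op1Flip1
from moptipy.spaces.bitstrings import BitStrings

space = BitStrings(16)                 # solutions: bit strings of length 16
execution = Execution()
execution.set_solution_space(space)
execution.set_objective(OneMax(16))    # minimize the number of 0 bits
execution.set_algorithm(RLS(Op0Random(), Op1Flip1()))
execution.set_max_fes(1000)            # stop after at most 1000 evaluations
execution.set_rand_seed(42)            # fixed seed for a reproducible run
execution.set_log_file("rls_onemax.txt")   # optional: write a log file

with execution.execute() as process:   # run the algorithm, obtain the Process
    print(process.get_best_f())        # objective value of the best solution
    best = space.create()
    process.get_copy_of_best_y(best)   # copy the best solution into `best`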

execute()[source]

Execute the experiment and return the process after the run.

The optimization process constructed with this object is executed. This means that first, an instance of Process is constructed. Then, the method solve() is applied to this instance. In other words, the optimization algorithm is executed until it terminates. Finally, this method returns the Process instance after algorithm completion. This instance then can be queried for the final result of the run (via get_copy_of_best_y()), the objective value of this final best solution (via get_best_f()), and other information.

If a log file path was supplied to set_log_file(), then the information gathered from the optimization process will be written to the file after the with block using the process is left. See https://thomasweise.github.io/moptipy/#log-file-sections for documentation of the log file structure and sections.

Return type:

Process

Returns:

the process after the run, i.e., in the state where it can be queried for the result

set_algorithm(algorithm)[source]

Set the algorithm to be used for this experiment.

Parameters:

algorithm (Algorithm) – the algorithm

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_encoding(encoding)[source]

Set the encoding to be used for this experiment.

This is the function translating from the search space to the solution space.

Parameters:

encoding (Encoding | None) – the encoding, or None if none shall be used

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_goal_f(goal_f)[source]

Set the goal objective value after which the process can stop.

If this method is called multiple times, then the largest value is retained.

Parameters:

goal_f (int | float) – the goal objective value.

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_log_all_fes(log_all_fes=True)[source]

Set whether all objective function evaluations (FEs) should be logged.

If all FEs are logged, then the PROGRESS section will be added to the log files, as documented at https://thomasweise.github.io/moptipy/#the-section-progress.

Parameters:

log_all_fes (bool, default: True) – should all FEs be logged?

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_log_file(log_file)[source]

Set the log file to write to.

If a path to a log file is provided, the contents of this file will be filled based on the structure documented at https://thomasweise.github.io/moptipy/#log-file-sections, which includes the algorithm parameters, the instance features, the system settings, the final solution, the corresponding point in the search space, etc.

This method can be called arbitrarily often.

Parameters:

log_file (str | None) – the log file

Return type:

TypeVar(TExecution, bound= Execution)

set_log_improvements(log_improvements=True)[source]

Set whether improvements should be logged.

If improvements are logged, then the PROGRESS section will be added to the log files, as documented at https://thomasweise.github.io/moptipy/#the-section-progress.

Parameters:

log_improvements (bool, default: True) – should improvements be logged?

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_max_fes(max_fes, force_override=False)[source]

Set the maximum FEs.

This is the number of candidate solutions that an optimization process is allowed to evaluate. If this method is called multiple times, then the smallest limit is used unless force_override is True.

Parameters:
  • max_fes (int) – the maximum FEs

  • force_override (bool, default: False) – use the value given in max_fes regardless of what was specified before

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_max_time_millis(max_time_millis, force_override=False)[source]

Set the maximum time in milliseconds.

This is the maximum time that the process is allowed to run. If this method is called multiple times, the shortest time is used unless force_override is True.

Parameters:
  • max_time_millis (int) – the maximum time in milliseconds

  • force_override (bool, default: False) – use the value given in max_time_millis regardless of what was specified before

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_objective(objective)[source]

Set the objective function to be used for this experiment.

This is the function rating the quality of candidate solutions.

Parameters:

objective (Objective) – the objective function

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_rand_seed(rand_seed)[source]

Set the seed to be used for initializing the random number generator.

Parameters:

rand_seed (int | None) – the random seed, or None if a seed should automatically be chosen when the experiment is executed

Return type:

TypeVar(TExecution, bound= Execution)

set_search_space(search_space)[source]

Set the search space to be used for this experiment.

This is the space from which the algorithm samples points.

Parameters:

search_space (Space | None) – the search space, or None if none shall be used, i.e., if search and solution space are the same

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

set_solution_space(solution_space)[source]

Set the solution space to be used for this experiment.

This is the space managing the data structure holding the candidate solutions.

Parameters:

solution_space (Space) – the solution space

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

class moptipy.api.execution.TExecution

The execution type variable for returning Self.

alias of TypeVar(‘TExecution’, bound=Execution)

moptipy.api.experiment module

The experiment execution API.

Via the function run_experiment(), you can execute a complex experiment where multiple optimization algorithms are applied to multiple problem instances, where log files with the results and progress information about the runs are collected, and where multiprocessing is used to parallelize the experiment execution. Experiments are replicable, as random seeds are automatically generated from the problem instance names in a deterministic fashion.

The log files are structured according to the documentation in https://thomasweise.github.io/moptipy/#file-names-and-folder-structure and their contents follow the specification given in https://thomasweise.github.io/moptipy/#log-file-sections.
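A compact usage sketch follows. As in the Execution example, the concrete problem, space, algorithm, and operators are assumptions about moptipy's standard components; the structure of instances (callables creating problem instances) and setups (callables turning an instance into an Execution) is the one described for run_experiment() below.

from moptipy.algorithms.so.rls import RLS
from moptipy.api.execution import Execution
from moptipy.api.experiment import run_experiment
from moptipy.examples.bitstrings.onemax import OneMax
from moptipy.operators.bitstrings.op0_random import Op0Random
from moptipy.operators.bitstrings.op1_flip1 import Op1Flip1
from moptipy.spaces.bitstrings import BitStrings

n = 16  # the problem size shared by the space and the objective


def make_setup(instance: OneMax) -> Execution:
    """Turn one problem instance into a fully configured Execution."""
    return (Execution()
            .set_solution_space(BitStrings(n))
            .set_objective(instance)     # the instance itself is the Objective
            .set_algorithm(RLS(Op0Random(), Op1Flip1()))
            .set_max_fes(1000))


run_experiment(base_dir="results",             # log files are stored here
               instances=[lambda: OneMax(n)],  # callables creating instances
               setups=[make_setup],            # callables creating Executions
               n_runs=5)                       # five runs per combination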

class moptipy.api.experiment.Parallelism(value)[source]

Bases: IntEnum

An enumeration of parallel thread counts.

The value of each element of this enumeration equals the number of threads that would be used. Thus, the values are different on different systems. Currently, only Linux is supported for parallelism. For all other systems, you need to manually start the program as often as you want it to run in parallel. On other systems, all values of this enumeration are 1.

>>> Parallelism.SINGLE_THREAD.value
1

ACCURATE_TIME_MEASUREMENTS = 1

Use as many threads as accurate time measurement permits. This amounts to using one logical core on each physical core while leaving one physical core unoccupied (but always using at least one thread, obviously). If you have four physical cores with two logical cores each, this would mean using three threads on Linux. On Windows, parallelism is not supported yet, so this would mean using only one core.

PERFORMANCE = 3

Use all but one logical core. This will mess up time measurements but should produce the maximum performance while not impeding the system in doing tasks such as garbage collection or other bookkeeping and overhead work. This is the most reasonable option if you want to execute one experiment as quickly as possible. If you have four physical cores with two logical cores each, this would mean using seven threads on Linux. On Windows, parallelism is not supported yet, so this would mean using only one core.

RECKLESS = 4

Use every single logical core available, which may mess up your system. We run the experiment as quickly as possible, but the system may not be usable while the experiment is running. Background tasks like garbage collection, moving the mouse cursor, accepting user input, or network communication may come to a halt. Seriously, why would you use this? If you have four physical cores with two logical cores each, this would mean using eight threads on Linux. On Windows, parallelism is not supported yet, so this would mean using only one core.

SINGLE_THREAD = 1

use only a single thread

moptipy.api.experiment.run_experiment(base_dir, instances, setups, n_runs=11, n_threads=Parallelism.SINGLE_THREAD, perform_warmup=True, warmup_fes=20, perform_pre_warmup=True, pre_warmup_fes=20, on_completion=<function __no_complete>)[source]

Run an experiment and store the log files into the given folder.

This function will automatically run an experiment, i.e., apply a set setups of algorithm setups to a set instances of problem instances for n_runs each. It will collect log files and store them into an appropriate folder structure under the path base_dir. It will automatically draw random seeds for all algorithm runs using moptipy.utils.nputils.rand_seeds_from_str() based on the names of the problem instances to solve. This yields replicable experiments, i.e., running the experiment program twice will yield exactly the same runs in exactly the same file structure (give or take clock-time-dependent issues, which obviously cannot be controlled in a deterministic fashion).

This function will use n_threads separate processes to parallelize the whole experiment (if you do not specify n_threads, it will be chosen automatically).

Note for Windows users: The parallelization will not work under Windows. However, you can achieve almost the same effect and performance as for n_threads=N if you set n_threads=1 and simply start the program N times separately (in separate terminals and in parallel). Of course, all N processes must have the same base_dir parameter. They will then automatically share the workload.

Parameters:
  • base_dir (str) – the base directory where to store the results

  • instances (Iterable[Callable[[], Any]]) – an iterable of callables, each of which should return an object representing a problem instance, whose __str__ representation is a valid name

  • setups (Iterable[Callable[[Any], Execution]]) – an iterable of callables, each receiving an instance (as returned by instances) as input and producing an moptipy.api.execution.Execution as output

  • n_runs (Union[int, Iterable[int]], default: 11) – the number of runs per algorithm-instance combination

  • n_threads (int, default: <Parallelism.SINGLE_THREAD: 1>) – the number of parallel threads of execution to use. This parameter only works under Linux! It should be set to 1 under all other operating systems. Under Linux, by default, we will use the number of physical cores - 1 processes. The default value for n_threads is computed in ACCURATE_TIME_MEASUREMENTS, which will be different for different machines(!). We will try to distribute the threads over different logical and physical cores to minimize their interactions. If n_threads is less than or equal to the number of physical cores, then multiple logical cores will be assigned to each process. If fewer threads than the number of physical cores are spawned, we will leave one physical core unoccupied. This core may be used by the operating system or other processes for their work, thus reducing the interference of the OS with our experiments.

  • perform_warmup (bool, default: True) – should we perform a warm-up for each instance? If this parameter is True, then before the very first run of a thread on an instance, we will execute the algorithm for just a few function evaluations without logging and discard the results. The idea is that during this warm-up, things such as JIT compilation or complicated parsing can take place. While this cannot mitigate time measurement problems for JIT compilations taking place late in runs, it can at least somewhat solve the problem of delayed first FEs caused by compilation and parsing.

  • warmup_fes (int, default: 20) – the number of the FEs for the warm-up runs

  • perform_pre_warmup (bool, default: True) – should we do one warmup run for each instance before we begin with the actual experiments? This complements the warmups defined by perform_warmup. It could be that, for some reason, JIT or other activities may lead to stalls between multiple processes when code is encountered for the first time. This may or may not still cause strange timing issues even if perform_warmup=True. We therefore can do one complete round of warmups before starting the actual experiment. After that, we perform one garbage collection run and then freeze all objects surviving it to prevent them from future garbage collection runs. All processes that execute the experiment in parallel will complete their pre-warmup and only after all of them have completed it, the actual experiment will begin. I am not sure whether this makes sense or not, but it also would not hurt.

  • pre_warmup_fes (int, default: 20) – the FEs for the pre-warmup runs

  • on_completion (Callable[[Any, Path, Process], None], default: <function __no_complete>) – a function to be called for every completed run, receiving the instance, the path to the log file (before it is created), and the Process of the run as parameters

Return type:

Path

Returns:

the canonicalized path to base_dir

moptipy.api.logging module

Shared constants and functions for dealing with logs.

moptipy.api.logging.ERROR_SECTION_PREFIX: Final[str] = 'ERROR_'

the common name prefix of all error sections

moptipy.api.logging.FILE_SUFFIX: Final[str] = '.txt'

the file suffix to be used for log files

moptipy.api.logging.KEY_ALGORITHM: Final[str] = 'algorithm'

the key for algorithms

moptipy.api.logging.KEY_ARCHIVE_F: Final[str] = 'f'

the scalarized objective value column of the archive when used without a numeric suffix; with a numeric suffix i, it is the result of the i-th objective function

moptipy.api.logging.KEY_ARCHIVE_MAX_SIZE: Final[str] = 'archiveMaxSize'

the maximum archive size (after pruning)

moptipy.api.logging.KEY_ARCHIVE_PRUNE_LIMIT: Final[str] = 'archivePruneLimit'

the pruning limit of the archive size

moptipy.api.logging.KEY_ARCHIVE_SIZE: Final[str] = 'archiveSize'

the number of archived non-dominated solutions

moptipy.api.logging.KEY_BEST_F: Final[str] = 'bestF'

the best encountered objective value

moptipy.api.logging.KEY_BEST_FS: Final[str] = 'bestFs'

the vector of objective values of the best solution

moptipy.api.logging.KEY_CLASS: Final[str] = 'class'

the class of an object

moptipy.api.logging.KEY_COMMAND_LINE: Final[str] = 'commandLine'

the command line of the process

moptipy.api.logging.KEY_CPU_AFFINITY: Final[str] = 'cpuAffinity'

the affinity of the process to logical CPUs

moptipy.api.logging.KEY_EXCEPTION_STACK_TRACE: Final[str] = 'exceptionStackTrace'

the key for the exception stack trace

moptipy.api.logging.KEY_EXCEPTION_TYPE: Final[str] = 'exceptionType'

the key for the exception type

moptipy.api.logging.KEY_EXCEPTION_VALUE: Final[str] = 'exceptionValue'

the key for the exception value

moptipy.api.logging.KEY_F_LOWER_BOUND: Final[str] = 'lowerBound'

the default log key for the lower bound of objective function values

moptipy.api.logging.KEY_F_UPPER_BOUND: Final[str] = 'upperBound'

the default log key for the upper bound of objective function values

moptipy.api.logging.KEY_GOAL_F: Final[str] = 'goalF'

the goal objective value of a black-box process

moptipy.api.logging.KEY_HW_BYTE_ORDER: Final[str] = 'byteOrder'

the key for the byte order

moptipy.api.logging.KEY_HW_CPU_MHZ: Final[str] = 'cpuMhz'

the clock speed of the CPUs

moptipy.api.logging.KEY_HW_CPU_NAME: Final[str] = 'cpu'

the key for the cpu name

moptipy.api.logging.KEY_HW_MACHINE: Final[str] = 'machine'

the key for the machine

moptipy.api.logging.KEY_HW_MEM_SIZE: Final[str] = 'memSize'

the key for the memory size

moptipy.api.logging.KEY_HW_N_LOGICAL_CPUS: Final[str] = 'nLogicalCpus'

the number of logical CPUs

moptipy.api.logging.KEY_HW_N_PHYSICAL_CPUS: Final[str] = 'nPhysicalCpus'

the number of physical CPUs

moptipy.api.logging.KEY_INNER_CLASS: Final[str] = 'innerClass'

the inner class of an object

moptipy.api.logging.KEY_INSTANCE: Final[str] = 'instance'

the key for the instance

moptipy.api.logging.KEY_LAST_IMPROVEMENT_FE: Final[str] = 'lastImprovementFE'

the FE when the best objective value was reached

moptipy.api.logging.KEY_LAST_IMPROVEMENT_TIME_MILLIS: Final[str] = 'lastImprovementTimeMillis'

the time in milliseconds when the best objective value was reached

moptipy.api.logging.KEY_MAX_FES: Final[str] = 'maxFEs'

the maximum FEs of a black-box process

moptipy.api.logging.KEY_MAX_TIME_MILLIS: Final[str] = 'maxTimeMillis'

the maximum runtime in milliseconds of a black-box process

moptipy.api.logging.KEY_NAME: Final[str] = 'name'

the default log key for names of objects

moptipy.api.logging.KEY_NODE_IP: Final[str] = 'ipAddress'

the ip address of the node on which the session is running

moptipy.api.logging.KEY_NODE_NAME: Final[str] = 'node'

the node on which the session is running

moptipy.api.logging.KEY_OS_NAME: Final[str] = 'name'

the operating system name

moptipy.api.logging.KEY_OS_RELEASE: Final[str] = 'release'

the operating system release

moptipy.api.logging.KEY_OS_VERSION: Final[str] = 'version'

the operating system version

moptipy.api.logging.KEY_PROCESS_ID: Final[str] = 'processId'

the pid of the process

moptipy.api.logging.KEY_PYTHON_IMPLEMENTATION: Final[str] = 'implementation'

the python implementation

moptipy.api.logging.KEY_PYTHON_VERSION: Final[str] = 'version'

the python version

moptipy.api.logging.KEY_RAND_BIT_GENERATOR_TYPE: Final[str] = 'randBitGenType'

the type of the bit generator used by the random generator

moptipy.api.logging.KEY_RAND_GENERATOR_TYPE: Final[str] = 'randGenType'

the random generator type

moptipy.api.logging.KEY_RAND_SEED: Final[str] = 'randSeed'

the random seed

moptipy.api.logging.KEY_SESSION_START: Final[str] = 'start'

the time when the session was started

moptipy.api.logging.KEY_SPACE_NUM_VARS: Final[str] = 'nvars'

the number of decision variables

moptipy.api.logging.KEY_TOTAL_FES: Final[str] = 'totalFEs'

the total number of consumed FEs

moptipy.api.logging.KEY_TOTAL_TIME_MILLIS: Final[str] = 'totalTimeMillis'

the total number of consumed milliseconds

moptipy.api.logging.KEY_WORKING_DIRECTORY: Final[str] = 'workingDirectory'

the working directory of the process

moptipy.api.logging.PREFIX_SECTION_ARCHIVE: Final[str] = 'ARCHIVE_'

the archive section prefix

moptipy.api.logging.PROGRESS_CURRENT_F: Final[str] = 'f'

the current objective value column for the progress CSV

moptipy.api.logging.PROGRESS_FES: Final[str] = 'fes'

the FEs column for the progress CSV

moptipy.api.logging.PROGRESS_TIME_MILLIS: Final[str] = 'timeMS'

the time millis column for the progress CSV

moptipy.api.logging.SCOPE_ALGORITHM: Final[str] = 'a'

the scope of the optimization algorithm

moptipy.api.logging.SCOPE_ENCODING: Final[str] = 'g'

the scope of the encoding

moptipy.api.logging.SCOPE_HARDWARE: Final[str] = 'hardware'

the hardware scope in the sys-info section

moptipy.api.logging.SCOPE_OBJECTIVE_FUNCTION: Final[str] = 'f'

the scope of the objective function

moptipy.api.logging.SCOPE_OP0: Final[str] = 'op0'

the scope of the nullary search operator

moptipy.api.logging.SCOPE_OP1: Final[str] = 'op1'

the scope of the unary search operator

moptipy.api.logging.SCOPE_OP2: Final[str] = 'op2'

the scope of the binary search operator

moptipy.api.logging.SCOPE_OS: Final[str] = 'os'

the operating system in the sys-info section

moptipy.api.logging.SCOPE_PROCESS: Final[str] = 'p'

the scope of the process parameters

moptipy.api.logging.SCOPE_PRUNER: Final[str] = 'ap'

the scope of the multi-objective archive pruner

moptipy.api.logging.SCOPE_PYTHON: Final[str] = 'python'

the python scope in the sys-info section

moptipy.api.logging.SCOPE_SEARCH_SPACE: Final[str] = 'x'

the scope of the search space

moptipy.api.logging.SCOPE_SESSION: Final[str] = 'session'

information about the current session

moptipy.api.logging.SCOPE_SOLUTION_SPACE: Final[str] = 'y'

the scope of the solution space

moptipy.api.logging.SCOPE_VERSIONS: Final[str] = 'version'

the versions scope in the sys-info section

moptipy.api.logging.SECTION_ARCHIVE_QUALITY: Final[str] = 'ARCHIVE_QUALITIES'

the archive quality section suffix

moptipy.api.logging.SECTION_ERROR_BEST_F: Final[str] = 'ERROR_BEST_F_MISMATCH'

the section indicating a mismatch of the computed and registered best f

moptipy.api.logging.SECTION_ERROR_INVALID_X: Final[str] = 'ERROR_INVALID_X'

the section indicating an invalid point in the search space

moptipy.api.logging.SECTION_ERROR_INVALID_Y: Final[str] = 'ERROR_INVALID_Y'

the section indicating an invalid candidate solution

moptipy.api.logging.SECTION_ERROR_IN_CONTEXT: Final[str] = 'ERROR_IN_CONTEXT'

the section indicating an error that occurred in the context of the process. this may be an error in the algorithm or, more likely, in the processing of the result.

moptipy.api.logging.SECTION_ERROR_IN_LOG: Final[str] = 'ERROR_IN_LOG'

the section indicating an error caught during log writing

moptipy.api.logging.SECTION_ERROR_IN_RUN: Final[str] = 'ERROR_IN_RUN'

the section indicating an error during the algorithm run

moptipy.api.logging.SECTION_ERROR_TIMING: Final[str] = 'ERROR_TIMING'

the section indicating that the time measurement has an error

moptipy.api.logging.SECTION_FINAL_STATE: Final[str] = 'STATE'

the end state

moptipy.api.logging.SECTION_PROGRESS: Final[str] = 'PROGRESS'

the progress csv section

moptipy.api.logging.SECTION_RESULT_X: Final[str] = 'RESULT_X'

the resulting point in the search space

moptipy.api.logging.SECTION_RESULT_Y: Final[str] = 'RESULT_Y'

the resulting point in the solution space

moptipy.api.logging.SECTION_SETUP: Final[str] = 'SETUP'

the setup section

moptipy.api.logging.SECTION_SYS_INFO: Final[str] = 'SYS_INFO'

the automatically generated system info section

moptipy.api.logging.SUFFIX_SECTION_ARCHIVE_X: Final[str] = '_X'

the archive X section suffix

moptipy.api.logging.SUFFIX_SECTION_ARCHIVE_Y: Final[str] = '_Y'

the archive Y section suffix

moptipy.api.mo_algorithm module

The base classes for multi-objective optimization algorithms.

A multi-objective optimization algorithm is an optimization algorithm that can optimize multiple, possibly conflicting, objective functions at once. All such algorithms inherit from MOAlgorithm. When developing or implementing new algorithms, you would still follow the concepts discussed in module algorithm.

If you implement a new multi-objective algorithm, you can test it via the pre-defined unit test routine validate_mo_algorithm().
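The following is a minimal sketch of a multi-objective algorithm: pure random sampling that flushes everything it saw into the archive at the end, as suggested under check_in(). The nullary operator op0 passed to the constructor and its op0(random, dest) call signature are the same assumptions as for single-objective algorithms.

from moptipy.api.mo_algorithm import MOAlgorithm
from moptipy.api.mo_archive import MORecord
from moptipy.api.mo_process import MOProcess
from moptipy.api.operators import Op0


class SketchedMORandomSampling(MOAlgorithm):
    """A sketched multi-objective algorithm: pure random sampling."""

    def __init__(self, op0: Op0) -> None:
        """Store the nullary operator used to create random points."""
        super().__init__()
        self.op0 = op0

    def __str__(self) -> str:
        """Return the short name of this algorithm."""
        return "moRandomSampling"

    def initialize(self) -> None:
        """Initialize this algorithm and its operator."""
        super().initialize()
        self.op0.initialize()

    def solve_mo(self, process: MOProcess) -> None:
        """Sample random points, then flush all of them into the archive."""
        random = process.get_random()
        records: list[MORecord] = []        # everything sampled by this run
        while not process.should_terminate():
            x = process.create()            # a fresh container per sample
            fs = process.f_create()         # ... and a fresh objective vector
            self.op0.op0(random, x)         # create a random point
            process.f_evaluate(x, fs)       # evaluate all objectives at once
            records.append(MORecord(x, fs))
        # hand everything over; only non-dominated records enter the archive
        process.check_in_all(records, True)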

class moptipy.api.mo_algorithm.MOAlgorithm[source]

Bases: Algorithm

A base class for multi-objective optimization algorithms.

solve(process)[source]

Forward to solve_mo() and cast process to MOProcess.

Parameters:

process (Process) – the process to solve. Must be an instance of MOProcess.

Return type:

None

solve_mo(process)[source]

Apply the multi-objective optimization algorithm to the given process.

Parameters:

process (MOProcess) – the multi-objective process which provides methods to access the search space, the termination criterion, and a source of randomness. It also wraps the objective function, remembers the best-so-far solution, and takes care of creating log files (if this is wanted).

Return type:

None

moptipy.api.mo_archive module

An archive of solutions to a multi-objective problem.

class moptipy.api.mo_archive.MOArchivePruner[source]

Bases: Component

A strategy for pruning an archive of solutions.

prune(archive, n_keep, size)[source]

Prune an archive.

After invoking this method, the first n_keep entries in archive are selected to be preserved. The remaining entries (at indices n_keep…len(archive)-1) can be deleted.

Pruning therefore is basically just a method of sorting the archive according to a preference order of solutions. It will not delete any element from the list. The caller can do that afterwards if she wants.

This base class just provides a simple FIFO scheme.

Parameters:
  • archive (list[MORecord]) – the archive, i.e., a list of tuples of solutions and their objective vectors

  • n_keep (int) – the number of solutions to keep

  • size (int) – the current size of the archive

Return type:

None
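A minimal sketch of a custom pruner is shown below: it sorts the records so that those with the smallest first objective value come first, i.e., the caller will keep the n_keep records that are best on the first criterion. This is a hypothetical preference order used purely for illustration.

from moptipy.api.mo_archive import MOArchivePruner, MORecord


class FirstObjectivePruner(MOArchivePruner):
    """A sketched pruner preferring records with a small first objective."""

    def prune(self, archive: list[MORecord], n_keep: int, size: int) -> None:
        """Reorder the first `size` records so the preferred ones come first."""
        # sort in place by the first objective value; the caller then keeps
        # the records at indices 0..n_keep-1 and may discard the rest
        archive[0:size] = sorted(archive[0:size], key=lambda rec: rec.fs[0])

    def __str__(self) -> str:
        """Return the short name of this pruner."""
        return "firstObjective"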

class moptipy.api.mo_archive.MORecord(x, fs)[source]

Bases: object

A record for the multi-objective archive.

The default sorting order of multi-objective records is lexicographic based on the objective value vector.

>>> import numpy as npx
>>> mr1 = MORecord("xxx", npx.array([1, 2, 3], int))
>>> print(mr1.x)
xxx
>>> print(mr1.fs)
[1 2 3]
>>> print(mr1)
fs=1;2;3, x=xxx
>>> mr2 = MORecord("yyy", npx.array([1, 2, 1], int))
>>> print(mr2.x)
yyy
>>> print(mr2.fs)
[1 2 1]
>>> mr1 < mr2
False
>>> mr2 < mr1
True

fs: Final[ndarray]

the vector of objective values

x: Final[Any]

the point in the search space

moptipy.api.mo_archive.check_mo_archive_pruner(pruner)[source]

Check whether an object is a valid instance of MOArchivePruner.

Parameters:

pruner (Any) – the multi-objective archive pruner

Return type:

MOArchivePruner

Returns:

the object

Raises:

TypeError – if pruner is not an instance of MOArchivePruner

>>> check_mo_archive_pruner(MOArchivePruner())
fifo
>>> try:
...     check_mo_archive_pruner('A')
... except TypeError as te:
...     print(te)
pruner should be an instance of moptipy.api.mo_archive.MOArchivePruner but is str, namely 'A'.
>>> try:
...     check_mo_archive_pruner(None)
... except TypeError as te:
...     print(te)
pruner should be an instance of moptipy.api.mo_archive.MOArchivePruner but is None.

moptipy.api.mo_execution module

The multi-objective algorithm execution API.

class moptipy.api.mo_execution.MOExecution[source]

Bases: Execution

Define all the components of a multi-objective experiment and execute it.

Different from Execution, this class here allows us to construct multi-objective optimization processes, i.e., such that have more than one optimization goal.

execute()[source]

Create a multi-objective process, apply algorithm, and return result.

This method is the multi-objective equivalent of the execute() method. It returns a multi-objective process after applying the multi-objective algorithm.

Return type:

MOProcess

Returns:

the instance of MOProcess after applying the algorithm.

set_archive_max_size(size)[source]

Set the upper limit for the archive size (after pruning).

The internal archive of the multi-objective optimization process retains non-dominated solutions encountered during the search. Since there can be infinitely many such solutions, the archive could grow without bound if left untouched. Therefore, we define two size limits: the maximum archive size (defined by this method) and the pruning limit. Once the archive grows beyond the pruning limit, it is cut down to the archive size limit.

Parameters:

size (int) – the maximum archive size

Return type:

TypeVar(TMOExecution, bound= MOExecution)

Returns:

this execution

set_archive_pruner(pruner)[source]

Set the pruning strategy for downsizing the archive.

Parameters:

pruner (MOArchivePruner) – the archive pruner

Return type:

TypeVar(TMOExecution, bound= MOExecution)

Returns:

this execution

set_archive_pruning_limit(limit)[source]

Set the size limit of the archive above which pruning is performed.

If the size of the archive grows above this limit, the archive will be pruned down to the archive size limit.

Parameters:

limit (int) – the archive pruning limit

Return type:

TypeVar(TMOExecution, bound= MOExecution)

Returns:

this execution

set_objective(objective)[source]

Set the objective function in form of a multi-objective problem.

Parameters:

objective (Objective) – the objective function

Return type:

TypeVar(TExecution, bound= Execution)

Returns:

this execution

class moptipy.api.mo_execution.TMOExecution

The execution type variable for returning Self.

alias of TypeVar(‘TMOExecution’, bound=MOExecution)

moptipy.api.mo_problem module

The base classes for multi-objective optimization problems.

This module provides the ability to evaluate solutions according to multiple criteria. The evaluation results are stored in a numpy array and are also scalarized to a single value.

Basically, a multi-objective problem provides three essential components:

  1. It can evaluate a candidate solution according to multiple optimization objectives. Each objective returns one value, subject to minimization, and all the values are stored in a single numpy array. This is done by f_evaluate()

  2. It provides a criterion deciding whether one such objective vector dominates (i.e., is strictly better than) another one. This is done by f_dominates(). The default definition adheres to the standard “domination” definition in multi-objective optimization: A vector a dominates a vector b if it is not worse in any objective value and better in at least one. But if need be, you can overwrite this behavior.

  3. A scalarization approach: When evaluating a solution, the result is not just the objective vector itself, but also a single scalar value. This is needed to create compatibility with single-objective optimization. As a matter of fact, a MOProblem is actually a subclass of Objective. This means that via this scalarization, all multi-objective problems can also be considered as single-objective problems. This means that single-objective algorithms can be applied to them as-is. It also means that log files are compatible. Multi-objective algorithms can just ignore the scalarization result and focus on the domination relationship. Often, a weighted sum approach (WeightedSum) may be the method of choice for scalarization. A minimal sketch of such a problem follows after this list.
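To make the three components above concrete, here is a minimal sketch of a bi-objective problem over real-valued numpy vectors. Both objectives and the plain-sum scalarization are hypothetical choices for illustration; the overridden methods are the ones documented for MOProblem below, and the default f_dominates() is left untouched.

import numpy as np

from moptipy.api.mo_problem import MOProblem


class SketchedBiObjective(MOProblem):
    """A sketched bi-objective problem with a plain-sum scalarization."""

    def f_dimension(self) -> int:
        """Return the number of objective functions."""
        return 2

    def f_dtype(self) -> np.dtype:
        """Both objectives are real-valued."""
        return np.dtype(np.float64)

    def is_always_integer(self) -> bool:
        """The scalarized value is a float, not an int."""
        return False

    def f_evaluate(self, x, fs: np.ndarray) -> float:
        """Fill fs with both objective values and return their sum."""
        fs[0] = float(np.sum(np.square(x)))      # criterion 1: squared length
        fs[1] = float(np.sum(np.abs(x - 1.0)))   # criterion 2: distance to all-ones
        return float(fs[0] + fs[1])              # the scalarization result

    def __str__(self) -> str:
        """Return the short name of this problem."""
        return "sketchedBiObjective"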

class moptipy.api.mo_problem.MOProblem[source]

Bases: Objective

The base class for multi-objective optimization problems.

A multi-objective optimization problem is defined as a set of Objective functions. Each candidate solution is evaluated using each of the objectives, i.e., is rated by a vector of objective values. This vector is the basis for deciding which candidate solutions to keep and which to discard.

In multi-objective optimization, this decision is based on “domination.” A solution a dominates a solution b if it is not worse in any objective and better in at least one. This comparison behavior is implemented in method f_dominates() and can be overwritten if need be.

In our implementation, we prescribe that each multi-objective optimization problem must also be accompanied by a scalarization function, i.e., a function that represents the vector of objective values as a single scalar value. The whole multi-objective problem can then be viewed also as a single objective function itself. The method evaluate() first evaluates all of the objective functions and obtains the vector of objective values. It then scalarizes the result into a single scalar quality and returns it. Multi-objective algorithms may instead use the method f_evaluate(), which also allows a vector to be passed in which will then be filled with the results of the individual objective functions.

This makes multi-objective optimization with moptipy compatible with single-objective optimization. In other words, all optimization methods implemented for single-objective processes Process will work out-of-the-box with the multi-objective version MOProcess.

Warning: We use instances of numpy.ndarray to represent the vectors of objective values. This necessitates that each objective function, if it is integer-valued (is_always_integer() is True), has a range that fits well into a 64-bit integer. Specifically, it must be possible to compute “a - b” without overflow or loss of sign for any two objective values “a” and “b” within the confines of a numpy signed 64-bit integer.

evaluate(x)[source]

Evaluate a solution x and return its scalarized objective value.

This method computes all objective values for a given solution and then returns the scalarized result. The objective values themselves are directly discarded and not used. It makes a multi-objective problem compatible with single-objective optimization.

Parameters:

x – the candidate solution

Return type:

float | int

Returns:

the scalarized objective value

f_create()[source]

Create a vector to receive the objective values.

This array will be of the length returned by f_dimension() and of the dtype of f_dtype().

Return type:

ndarray

Returns:

a vector to receive the objective values

f_dimension()[source]

Obtain the number of objective functions.

Return type:

int

Returns:

the number of objective functions

f_dominates(a, b)[source]

Check if an objective vector dominates or is dominated by another one.

Usually, one vector is said to dominate another one if it is not worse in any objective and better in at least one. This behavior is implemented in moptipy.api.mo_utils.dominates() and this is also the default behavior of this method. However, depending on your concrete optimization task, you may overwrite this behavior.

Parameters:
  • a (ndarray) – the first objective vector

  • b (ndarray) – the second objective vector

Return type:

int

Returns:

an integer value indicating the domination relationship

Retval -1:

if a dominates b

Retval 1:

if b dominates a

Retval 2:

if b equals a

Retval 0:

if a and b are mutually non-dominated, i.e., if neither a dominates b nor b dominates a, and b is also different from a

f_dtype()[source]

Get the data type used in f_create().

This data type will be an integer data type if all the objective functions are integer-valued. If the bounds of the objective values are known, then this type will be “big enough” to allow the subtraction “a - b” of any two objective vectors “a” and “b” to be computed without overflow or loss of sign. At most, however, this data type will be a 64-bit integer. If any one of the objective functions returns floating point data, this data type will be a floating point type.

Return type:

dtype

Returns:

the data type used by f_create().

f_evaluate(x, fs)[source]

Perform the multi-objective evaluation of a solution.

This method fills the objective vector fs with the results of the objective functions evaluated on x. It then returns the scalarized result, i.e., a single scalar value computed based on all values in fs.

Parameters:
  • x – the solution to be evaluated

  • fs (ndarray) – the array to receive the objective values

Return type:

int | float

Returns:

the scalarization result

f_validate(x)[source]

Validate the objective vector.

Parameters:

x (ndarray) – the numpy vector

Raises:
  • TypeError – if x is not an element of this space.

  • ValueError – if the shape of the vector is wrong or any of its element is not finite.

Return type:

None

class moptipy.api.mo_problem.MOSOProblemBridge(objective)[source]

Bases: MOProblem

A bridge between multi-objective and single-objective optimization.

f_dtype()[source]

Get the objective vector dtype.

Return type:

dtype

f_evaluate(x, fs)[source]

Evaluate the candidate solution.

Parameters:
  • x – the solution

  • fs (ndarray) – the objective vector, will become [res]

Return type:

int | float

Returns:

the objective value res

f_validate(x)[source]

Validate the objective vector.

Parameters:

x (ndarray) – the numpy array with the objective values

Return type:

None

initialize()[source]

Initialize the MO-problem bridge.

Return type:

None

log_parameters_to(logger)[source]

Log the parameters of this function to the provided destination.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

moptipy.api.mo_problem.check_mo_problem(mo_problem)[source]

Check whether an object is a valid instance of MOProblem.

Parameters:

mo_problem (Any) – the multi-objective optimization problem

Return type:

MOProblem

Returns:

the mo-problem

Raises:

TypeError – if mo_problem is not an instance of MOProblem

>>> check_mo_problem(MOProblem())
moProblem
>>> try:
...     check_mo_problem(1)
... except TypeError as te:
...     print(te)
multi-objective optimziation problem should be an instance of moptipy.api.mo_problem.MOProblem but is int, namely '1'.
>>> try:
...     check_mo_problem(None)
... except TypeError as te:
...     print(te)
multi-objective optimziation problem should be an instance of moptipy.api.mo_problem.MOProblem but is None.

moptipy.api.mo_process module

Processes offer data to both the user and the optimization algorithm.

They provide the information about the optimization process and its current state as handed to the optimization algorithm and, after the algorithm has finished, to the user. This is the multi-objective version of the Process-API. It supports having multiple objective functions. It also provides a single core objective value, which is the scalarized result of several objective values.

class moptipy.api.mo_process.MOProcess[source]

Bases: MOProblem, Process

A multi-objective Process API variant.

This class encapsulates an optimization process using multiple objective functions. It inherits all of its methods from the single-objective process Process and extends its API towards multi-objective optimization by providing the functionality of the class MOProblem.

check_in(x, fs, prune_if_necessary=False)[source]

Check a solution into the archive.

This method is intended to be invoked after the optimization algorithm has finished its work. The algorithm should, by itself, maintain the set of interesting solutions during its course. Once it has completed all of its computations, it should flush these solutions to the process using this method. All the non-dominated solutions preserved in the archive will then become available via get_archive() to the code starting the optimization procedure.

If you have a sequence of MORecord instances that you want to flush into the archive, you can use the convenience method check_in_all() for that purpose.

Parameters:
  • x (Any) – the point in the search space

  • fs (ndarray) – the vector of objective values

  • prune_if_necessary (bool, default: False) – should we prune the archive if it becomes too large? False means that the archive may grow unbounded

Return type:

bool

Returns:

True if the solution was non-dominated and has actually entered the archive, False if it has not entered the archive

check_in_all(recs, prune_if_necessary=False)[source]

Check in all the elements of an Iterable of MORecord instances.

This is a convenience wrapper around check_in().

Parameters:
  • recs (Iterable[MORecord]) – an iterable sequence of solution + quality vector records

  • prune_if_necessary (bool, default: False) – should we prune the archive if it becomes too large? False means that the archive may grow unbounded

Return type:

None

get_archive()[source]

Get the archive of non-dominated solutions.

Return type:

list[MORecord]

Returns:

a list containing all non-dominated solutions currently in the archive

get_copy_of_best_fs(fs)[source]

Get a copy of the objective vector of the current best solution.

This always corresponds to the best-so-far solution based on the scalarization of the objective vector. It is the best solution that the process has seen so far, the current best solution.

You should only call this method if you are either sure that you have invoked evaluate(), have invoked f_evaluate(), or have called has_best() before and it returned True.

Parameters:

fs (ndarray) – the destination vector to be overwritten

Return type:

None

register(x, f)[source]

Unavailable during multi-objective optimization.

Return type:

None

moptipy.api.mo_utils module

Utilities for multi-objective optimization.

moptipy.api.mo_utils.dominates(a, b)[source]

Check if one objective vector dominates or is dominated by another one.

Parameters:
  • a (ndarray) – the first objective vector

  • b (ndarray) – the second objective vector

Return type:

int

Returns:

an integer value indicating the domination relationship

Retval -1:

if a dominates b

Retval 1:

if b dominates a

Retval 2:

if b equals a

Retval 0:

if a and b are mutually non-dominated, i.e., if neither a dominates b nor b dominates a, and b is also different from a

>>> from numpy import array
>>> dominates(array([1, 1, 1]), array([2, 2, 2]))
-1
>>> dominates(array([1.0, 1.0, 2.0]), array([2.0, 2.0, 1.0]))
0
>>> dominates(array([2, 2, 2]), array([1, 1, 1]))
1
>>> dominates(array([2, 2, 2]), array([2, 2, 2]))
2
moptipy.api.mo_utils.lexicographic(a, b)[source]

Compare two arrays lexicographically.

Parameters:
  • a (ndarray) – the first objective vector

  • b (ndarray) – the second objective vector

Return type:

int

Returns:

-1 if a is lexicographically less than b, 1 if b is less than a, 0 otherwise

Retval -1:

if a is lexicographically less than b

Retval 1:

if b is lexicographically less than a

Retval 0:

if a equals b

>>> from numpy import array
>>> lexicographic(array([1, 1, 1]), array([2, 2, 2]))
-1
>>> lexicographic(array([1, 1, 1]), array([1, 1, 2]))
-1
>>> lexicographic(array([2, 2, 2]), array([1, 1, 1]))
1
>>> lexicographic(array([2, 2, 2]), array([2, 2, 1]))
1
>>> lexicographic(array([2, 2, 2]), array([2, 2, 2]))
0

moptipy.api.objective module

The base class for implementing objective functions.

An objective function evaluates the quality of a candidate solution of an optimization problem. Solutions with smaller objective values are better, i.e., objective functions are subject to minimization. All objective functions inherit from Objective. If you implement a new objective function, you can test it via the pre-defined unit test routine validate_objective().
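
As a minimal illustration (the class ZeroCount below is a made-up example, not part of moptipy), an objective function that counts the zeros in a bit string represented as a list of 0/1 integers could be sketched as follows:

>>> from moptipy.api.objective import Objective
>>> class ZeroCount(Objective):
...     """Hypothetical objective: count the zeros in a list of 0/1 values."""
...     def evaluate(self, x):
...         return len(x) - sum(x)
...     def lower_bound(self):
...         return 0
...     def is_always_integer(self):
...         return True
>>> ZeroCount().evaluate([1, 0, 1, 1])
1

Such a new objective function could then be fed to the unit test routine validate_objective() mentioned above.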

class moptipy.api.objective.Objective[source]

Bases: Component

An objective function subject to minimization.

An objective function represents one optimization criterion that is used for rating the solution quality. All objective functions in our system are subject to minimization, meaning that smaller values are better.

evaluate(x)[source]

Evaluate a solution x and return its objective value.

The return value is either an integer or a float and must be finite. Smaller objective values are better, i.e., all objective functions are subject to minimization.

Parameters:

x – the candidate solution

Return type:

float | int

Returns:

the objective value

is_always_integer()[source]

Return True if evaluate() will always return an int value.

Return type:

bool

Returns:

True if evaluate() will always return an int or False if also a float may be returned.

log_parameters_to(logger)[source]

Log the parameters of this function to the provided destination.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

lower_bound()[source]

Get the lower bound of the objective value.

This function returns a theoretical limit for how good a solution could be at best. If no real limit is known, the function should return -inf.

Return type:

float | int

Returns:

the lower bound of the objective value

upper_bound()[source]

Get the upper bound of the objective value.

This function returns a theoretical limit for how bad a solution could be at worst. If no real limit is known, the function should return inf.

Return type:

float | int

Returns:

the upper bound of the objective value

moptipy.api.objective.check_objective(objective)[source]

Check whether an object is a valid instance of Objective.

Parameters:

objective (Any) – the objective

Return type:

Objective

Returns:

the objective

Raises:

TypeError – if objective is not an instance of Objective

>>> check_objective(Objective())
Objective
>>> try:
...     check_objective('A')
... except TypeError as te:
...     print(te)
objective function should be an instance of moptipy.api.objective.Objective but is str, namely 'A'.
>>> try:
...     check_objective(None)
... except TypeError as te:
...     print(te)
objective function should be an instance of moptipy.api.objective.Objective but is None.

moptipy.api.operators module

The base classes for implementing search operators.

Nullary search operators are used to sample the initial starting points of the optimization processes. They inherit from class Op0. The pre-defined unit test routine validate_op0() can and should be used to test all the nullary operators that are implemented.

Unary search operators accept one point in the search space as input and generate a new, similar point as output. They inherit from class Op1. The pre-defined unit test routine validate_op1() can and should be used to test all the unary operators that are implemented.

The basic unary operators Op1 have no parameter telling them how much of the input point to change. They may do a hard-coded number of modifications (as, e.g., Op1Swap2 does) or may apply a random number of modifications (like Op1SwapN). There is a sub-class of unary operators named Op1WithStepSize where a parameter step_size with a value from the closed interval [0.0, 1.0] can be supplied. If step_size=0.0, such an operator should perform the smallest possible modification and for step_size=1.0, it should perform the largest possible modification.

Binary search operators accept two points in the search space as input and generate a new point that should be similar to both inputs as output. They inherit from class Op2. The pre-defined unit test routine validate_op2() can and should be used to test all the binary operators that are implemented.
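
As an illustration of the operator API (the class Op1FlipFirst below is a made-up example, not one of the pre-defined moptipy operators), a unary operator for bit strings stored as lists of 0/1 integers could be sketched as follows:

>>> from numpy.random import default_rng
>>> from moptipy.api.operators import Op1
>>> class Op1FlipFirst(Op1):
...     """Hypothetical unary operator: flip the first bit of a 0/1 list."""
...     def op1(self, random, dest, x):
...         dest[:] = x            # copy the source point into the destination
...         dest[0] = 1 - dest[0]  # apply a smallest possible modification
>>> out = [0, 0, 0]
>>> Op1FlipFirst().op1(default_rng(), out, [1, 0, 1])
>>> out
[0, 0, 1]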

class moptipy.api.operators.Op0[source]

Bases: Component

A base class to implement a nullary search operator.

op0(random, dest)[source]

Apply the nullary search operator to fill object dest.

Afterwards dest will hold a valid point in the search space. Often, this would be a point uniformly randomly sampled from the search space, but it could also be the result of a heuristic or even a specific solution.

Parameters:
  • random (Generator) – the random number generator

  • dest – the destination data structure

Return type:

None

class moptipy.api.operators.Op1[source]

Bases: Component

A base class to implement a unary search operator.

op1(random, dest, x)[source]

Fill dest with a modified copy of x.

Parameters:
  • random (Generator) – the random number generator

  • dest – the destination data structure

  • x – the source point in the search space

Return type:

None

class moptipy.api.operators.Op1WithStepSize[source]

Bases: Op1

A unary search operator with a step size.

op1(random, dest, x, step_size=0.0)[source]

Copy x to dest but apply a modification with a given step_size.

This operator is similar to Op1.op1() in that it stores a modified copy of x into dest. The difference is that you can also specify how much that copy should be different: The parameter step_size can take on any value in the interval [0.0, 1.0], including the two boundary values. A step_size of 0.0 indicates the smallest possible move (for which dest will still be different from x) and step_size=1.0 will lead to the largest possible move.

The step_size may be interpreted differently by different operators: Some may interpret it as an exact requirement and enforce steps of the exact specified size, see, for example module op1_flip_m. Others might interpret it stochastically as an expectation. Yet others may interpret it as a goal step width and try to realize it in a best effort kind of way, but may also do smaller or larger steps if the best effort fails, see for example module op1_swap_exactly_n. What all operators should, however, have in common is that at step_size=0.0, they should try to perform a smallest possible change and at step_size=1.0, they should try to perform a largest possible change. For all values in between, step sizes should grow with rising step_size. This should allow algorithms that know nothing about the nature of the search space or the operator’s moves to still tune between small and large moves based on a policy which makes sense in a black-box setting.

Every implementation of Op1WithStepSize must specify a reasonable default value for this parameter to ensure compatibility with Op1.op1(). In this base class, we set the default to 0.0.

Finally, if a step_size value is passed in which is outside the interval [0, 1], the behavior of this method is undefined. It may throw an exception or not. It may also enter an infinite loop.

Parameters:
  • random (Generator) – the random number generator

  • dest – the destination data structure

  • x – the source point in the search space

  • step_size (float, default: 0.0) – the step size parameter for the unary operator

Return type:

None
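
To illustrate the step_size semantics (the class Op1FlipSome below is a made-up sketch, not one of the pre-defined moptipy operators), a step-size-aware operator for bit strings stored as lists of 0/1 integers might map step_size to the number of bits to flip:

>>> from numpy.random import default_rng
>>> from moptipy.api.operators import Op1WithStepSize
>>> class Op1FlipSome(Op1WithStepSize):
...     """Hypothetical operator: flip 1 to n bits, scaled by step_size."""
...     def op1(self, random, dest, x, step_size=0.0):
...         dest[:] = x
...         flips = 1 + int(round(step_size * (len(dest) - 1)))
...         for i in random.choice(len(dest), flips, replace=False):
...             dest[i] = 1 - dest[i]
>>> out = [0, 0, 0, 0, 0, 0]
>>> Op1FlipSome().op1(default_rng(42), out, [0, 0, 0, 0, 0, 0], step_size=0.0)
>>> sum(out)  # smallest possible move: exactly one bit flipped
1
>>> out = [0, 0, 0, 0, 0, 0]
>>> Op1FlipSome().op1(default_rng(42), out, [0, 0, 0, 0, 0, 0], step_size=1.0)
>>> sum(out)  # largest possible move: all bits flipped
6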

class moptipy.api.operators.Op2[source]

Bases: Component

A base class to implement a binary search operator.

op2(random, dest, x0, x1)[source]

Fill dest with a combination of x0 and x1.

Parameters:
  • random (Generator) – the random number generator

  • dest – the destination data structure

  • x0 – the first source point in the search space

  • x1 – the second source point in the search space

Return type:

None

moptipy.api.operators.check_op0(op0)[source]

Check whether an object is a valid instance of Op0.

Parameters:

op0 (Any) – the (supposed) instance of Op0

Return type:

Op0

Returns:

the object op0

Raises:

TypeError – if op0 is not an instance of Op0

>>> check_op0(Op0())
Op0
>>> try:
...     check_op0('A')
... except TypeError as te:
...     print(te)
op0 should be an instance of moptipy.api.operators.Op0 but is str, namely 'A'.
>>> try:
...     check_op0(None)
... except TypeError as te:
...     print(te)
op0 should be an instance of moptipy.api.operators.Op0 but is None.
moptipy.api.operators.check_op1(op1)[source]

Check whether an object is a valid instance of Op1.

Parameters:

op1 (Any) – the (supposed) instance of Op1

Return type:

Op1

Returns:

the object

Raises:

TypeError – if op1 is not an instance of Op1

>>> check_op1(Op1())
Op1
>>> try:
...     check_op1('A')
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1 but is str, namely 'A'.
>>> try:
...     check_op1(None)
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1 but is None.
moptipy.api.operators.check_op1_with_step_size(op1)[source]

Check whether an object is a valid instance of Op1WithStepSize.

Parameters:

op1 (Any) – the (supposed) instance of Op1WithStepSize

Return type:

Op1WithStepSize

Returns:

the object op1

Raises:

TypeError – if op1 is not an instance of Op1WithStepSize

>>> check_op1_with_step_size(Op1WithStepSize())
Op1WithStepSize
>>> try:
...     check_op1_with_step_size('A')
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1WithStepSize but is str, namely 'A'.
>>> try:
...     check_op1_with_step_size(Op1())
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1WithStepSize but is moptipy.api.operators.Op1, namely 'Op1'.
>>> try:
...     check_op1_with_step_size(None)
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1WithStepSize but is None.
moptipy.api.operators.check_op2(op2)[source]

Check whether an object is a valid instance of Op2.

Parameters:

op2 (Any) – the (supposed) instance of Op2

Return type:

Op2

Returns:

the object op2

Raises:

TypeError – if op2 is not an instance of Op2

>>> check_op2(Op2())
Op2
>>> try:
...     check_op2('A')
... except TypeError as te:
...     print(te)
op2 should be an instance of moptipy.api.operators.Op2 but is str, namely 'A'.
>>> try:
...     check_op2(None)
... except TypeError as te:
...     print(te)
op2 should be an instance of moptipy.api.operators.Op2 but is None.

moptipy.api.process module

Processes offer data to both the user and the optimization algorithm.

They provide the information about the optimization process and its current state as handed to the optimization algorithm and, after the algorithm has finished, to the user. They also supply the optimization algorithm with everything it needs to run, e.g., random numbers (get_random()), they evaluate solutions (evaluate()), and they tell it when to stop (should_terminate()).

The idea behind this interface is to treat optimization algorithms as so-called anytime algorithms. An anytime algorithm will begin with a guess about what the solution for a problem could be. It will then iteratively sample and evaluate (evaluate()) new solutions, i.e., new and hopefully better guesses. It can be stopped at any time, e.g., by the termination criterion, should_terminate() and then return the best guess of the solution (get_copy_of_best_y(), get_best_f()).
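
A sketch of such an anytime loop is given below. It is an illustrative outline, not a complete moptipy algorithm: the class SketchLocalSearch and its constructor arguments are made up for this example, and it simply assumes that op0 is a nullary and op1 a unary search operator.

    from moptipy.api.algorithm import Algorithm

    class SketchLocalSearch(Algorithm):
        """A hypothetical local search, shown only to illustrate the Process API."""

        def __init__(self, op0, op1):
            super().__init__()
            self.op0 = op0   # a nullary operator (Op0) creating the initial guess
            self.op1 = op1   # a unary operator (Op1) deriving similar guesses

        def solve(self, process):
            random = process.get_random()          # the only permitted source of randomness
            best_x = process.create()              # container for the best-so-far point
            new_x = process.create()               # container for the next guess
            self.op0.op0(random, best_x)           # sample the initial guess
            best_f = process.evaluate(best_x)      # evaluate it (this counts one FE)
            while not process.should_terminate():  # anytime behavior: stop whenever told to
                self.op1.op1(random, new_x, best_x)   # derive a new, similar guess
                new_f = process.evaluate(new_x)
                if new_f <= best_f:                # accept improving and equally good guesses
                    best_f = new_f
                    process.copy(best_x, new_x)    # remember the new best point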

The process API also collects all the information about the optimization process, performs in-memory logging if wanted, and can write a standardized, text-based log file for each run of an algorithm in a clear folder structure. By storing information about the algorithm, the problem, and the system, as well as the random seed, this allows for self-documenting and replicable experiments.

The class Process is a base class from which all optimization processes are derived. It is for the standard single-objective optimization case. A multi-objective variant is given in module mo_process as class MOProcess.

Furthermore, processes also lend themselves to “forking” off some of the computational budget of an algorithm to sub-algorithms. For this purpose, the module subprocesses provides specialized routines, such as for_fes(), which creates sub-processes that forward all method calls to the original process but will perform at most a given number of objective function evaluations, or from_starting_point(), which creates a sub-process that has the current-best solution pre-set to a given point in the search space and its quality. Finally, without_should_terminate() wraps a process in such a way that the termination criterion should_terminate() does not need to be checked explicitly, which makes it suitable for invoking externally implemented optimization algorithms that do not know or care about the moptipy API.

  1. Mark S. Boddy and Thomas L. Dean. Solving Time-Dependent Planning Problems. Report CS-89-03, February 1989. Providence, RI, USA: Brown University, Department of Computer Science. ftp://ftp.cs.brown.edu/pub/techreports/89/cs89-03.pdf

class moptipy.api.process.Process[source]

Bases: Space, Objective, AbstractContextManager

Processes offer data to the optimization algorithm and the user.

A Process presents the objective function and search space to an optimization algorithm. Since it wraps the actual objective function, it can see all evaluated solutions and remember the best-so-far solution. It can also count the FEs and the runtime that has passed. Therefore, it also presents the termination criterion to the optimization algorithm. It also provides a random number generator to the algorithm. It can write log files with the progress of the search and the end result. Finally, it provides the end result to the user, who can access it after the algorithm has finished.

add_log_section(title, text)[source]

Add a section to the log, if a log is written (otherwise ignore it).

When creating the experiment Execution, you can specify a log file via method set_log_file(). Then, the results of your algorithm and the system configuration will be stored as text in this file. Each type of information will be stored in a different section. The end state with the final solution quality, for instance, will be stored in a section named STATE. Each section begins with the line BEGIN_XXX and ends with the line END_XXX, where XXX is the name of the section. Between these two lines, all the contents of the section are stored.

This method here allows you to add a custom section to your log file. This can happen in your implementation of the method solve() of your algorithm. (Ideally at its end.) Of course, invoking this method only makes sense if there actually is a log file. You can check for this by calling has_log().

You can specify a custom section name (which must consist of upper-case characters) and a custom section body text. Of course, the name of this section must not clash with any other section name. Neither the section name nor the section body should contain strings such as BEGIN_ or END_, as these would corrupt the structure of your log files. Of course, you can add a section with a given name only once, because otherwise there would be a name clash. Sections added this way will be appended at the end of the log file. This way, you have all the standard log data and your additional information in one consistent file.

Be advised: Adding sections costs time and memory. You do not want to do such a thing in a loop. If your algorithm should store additional data, it makes sense to gather this data in an efficient way during the run and only flush it to a section at the end of the run.

Parameters:
  • title (str) – the title of the log section

  • text (str) – the text to log

Return type:

None
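
A sketch of the recommended pattern is given below; the section name MY_STATS and its content are made up for this illustration.

    def finish_run(process, num_restarts):
        # flush gathered data into one custom section at the very end of a run
        if process.has_log():                   # only worthwhile if a log file is written
            process.add_log_section(
                "MY_STATS",                     # hypothetical upper-case section name
                f"restarts: {num_restarts}")    # the text body of the section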

evaluate(x)[source]

Evaluate a solution x and return its objective value.

This method implements the evaluate() method of the moptipy.api.objective.Objective function interface, but on Process level.

The return value is either an integer or a float and must be finite. Smaller objective values are better, i.e., all objective functions are subject to minimization.

This method here is usually a wrapper that internally invokes the actual Objective function, but it does more: While it does use the evaluate() method of the objective function to compute the quality of a candidate solution, it also internally increments the counter for the objective function evaluations (FEs) that have passed. You can request the number of these FEs via get_consumed_fes() (and also the time that has passed via get_consumed_time_millis(), but this is unrelated to the evaluate() method).

Still, counting the FEs like this allows us to know when, e.g., the computational budget in terms of a maximum permitted number of FEs has been exhausted, in which case should_terminate() will become True.

Also, since this method will see all objective values and the corresponding candidate solutions, it is able to internally remember the best solution you have ever created and its corresponding objective value. Therefore, the optimization Process can provide both to you via the methods has_best(), get_copy_of_best_x(), get_copy_of_best_y(), and get_best_f(). At the same time, if a goal objective value or lower bound for the objective function is specified and one solution is seen that has such a quality, should_terminate() will again become True.

Finally, this method also performs all logging, e.g., of improving moves, in memory if logging is activated. (See set_log_file(), set_log_improvements(), and set_log_all_fes().)

In some cases, you may not need to invoke the original objective function via this wrapper to obtain the objective value of a solution. Indeed, in some cases you know the objective value because of the way you constructed the solution. However, you still need to tell our system the objective value and provide the solution to ensure the correct counting of FEs, the correct preservation of the best solution, and the correct setting of the termination criterion. For these situations, you will call register() instead of evaluate().

Parameters:

x – the candidate solution

Return type:

float | int

Returns:

the objective value

get_best_f()[source]

Get the objective value of the current best solution.

This always corresponds to the best-so-far solution, i.e., the best solution that you have passed to evaluate() or register() so far. It is NOT the best possible objective value for the optimization problem. It is the best objective value that the process has seen so far, the current best objective value.

You should only call this method if you are either sure that you have invoked evaluate() or register() before, or if you have called has_best() before and it returned True.

Return type:

int | float

Returns:

the objective value of the current best solution.

get_consumed_fes()[source]

Obtain the number of consumed objective function evaluations.

This is the number of calls to evaluate().

Return type:

int

Returns:

the number of objective function evaluations so far

get_consumed_time_millis()[source]

Obtain an approximation of the consumed runtime in milliseconds.

Returns:

the consumed runtime measured in milliseconds.

Return type:

int

get_copy_of_best_x(x)[source]

Get a copy of the current best point in the search space.

This always corresponds to the point in the search space encoding the best-so-far solution, i.e., the best point in the search space that you have passed to evaluate() or register() so far. It is NOT the best global optimum for the optimization problem. It corresponds to the best solution that the process has seen so far, the current best solution.

Even if the optimization algorithm using this process does not preserve this solution in a special variable and has already lost it again, this method will still return it. The optimization process encapsulated by this process object will always remember it.

This also means that your algorithm implementations do not need to store the best-so-far solution anywhere if doing so would be complicated. They can obtain it simply from this method whenever needed.

You should only call this method if you are either sure that you have invoked evaluate() or register() before, or if you have called has_best() before and it returned True.

For understanding the relationship between the search space and the solution space, see module encoding.

Parameters:

x – the destination data structure to be overwritten

Return type:

None

get_copy_of_best_y(y)[source]

Get a copy of the current best point in the solution space.

This always corresponds to the best-so-far solution, i.e., the best solution that you have passed to evaluate() or register() so far. It is NOT the global optimum for the optimization problem. It is the best solution that the process has seen so far, the current best solution.

You should only call this method if you are either sure that you have invoked evaluate() or register() before, or if you have called has_best() before and it returned True.

Parameters:

y – the destination data structure to be overwritten

Return type:

None

get_last_improvement_fe()[source]

Get the FE at which the last improvement was made.

You should only call this method if you are either sure that you have invoked evaluate() or register() before, or if you have called has_best() before and it returned True.

Return type:

int

Returns:

the function evaluation when the last improvement was made

Raises:

ValueError – if no FE was performed yet

get_last_improvement_time_millis()[source]

Get the time in milliseconds at which the last improvement was made.

You should only call this method if you are either sure that you have invoked evaluate() or register() before, or if you have called has_best() before and it returned True.

Return type:

int

Returns:

the time in milliseconds when the last improvement was made

Raises:

ValueError – if no FE was performed yet

get_log_basename()[source]

Get the basename of the log, if any.

If a log file is associated with this process, then this function returns the name of the log file without the file suffix. If no log file is associated with the process, then None is returned.

This can be used to store additional information during the run of the optimization algorithm. However, treat this carefully, as some files with the same base name may exist or be generated by other modules.

Return type:

str | None

Returns:

the path to the log file without the file suffix if a log file is associated with the process, or None otherwise

get_max_fes()[source]

Obtain the maximum number of permitted objective function evaluations.

If no limit is set, None is returned.

Return type:

int | None

Returns:

the maximum number of objective function evaluations, or None if no limit is specified.

get_max_time_millis()[source]

Obtain the maximum runtime permitted in milliseconds.

If no limit is set, None is returned.

Return type:

int | None

Returns:

the maximum runtime permitted in milliseconds, or None if no limit is specified.

get_random()[source]

Obtain the random number generator.

The optimization algorithm and all of its components must only use this random number generator for all their non-deterministic decisions. In order to guarantee reproducible runs, there must not be any other source of randomness. This generator can be seeded in the set_rand_seed() method of the Execution builder object.

Return type:

Generator

Returns:

the random number generator

has_best()[source]

Check whether a current best solution is available.

As soon as one objective function evaluation has been performed, the black-box process can provide a best-so-far solution. Then, this method returns True. Otherwise, it returns False. This means that this method returns True if and only if you have called either evaluate() or register() at least once.

Return type:

bool

Returns:

True if the current-best solution can be queried.

has_log()[source]

Will any information of this process be logged?

Only if this method returns True, invoking add_log_section() makes any sense. Otherwise, the data would just be discarded.

Retval True:

if the process is associated with a log output

Retval False:

if no information is stored in a log output

Return type:

bool

initialize()[source]

Raise an error because this method shall never be called.

Raises:

ValueError – always

Return type:

None

register(x, f)[source]

Register a solution x with externally-evaluated objective value.

This function is equivalent to evaluate(), but receives the objective value as parameter. In some problems, algorithms can compute the objective value of a solution very efficiently without passing it to the objective function.

For example, on the Traveling Salesperson Problem with n cities, if you have a tour of known length and swap two cities in it, then you can compute the new overall tour length in O(1) instead of the O(n) you would need to pay for a full evaluation. In such a case, you could use register() instead of evaluate().

x must be provided if f marks an improvement. In this case, the contents of x will be copied to an internal variable remembering the best-so-far solution. If f is not an improvement, you may pass in None for x or just any valid point in the search space.

For each candidate solution you construct, you must call either evaluate() or register(). This is because these two functions also count the objective function evaluations (FEs) that have passed. This is needed to check the termination criterion, for instance.

Parameters:
  • x – the candidate solution

  • f (int | float) – the objective value

Return type:

None
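
The sketch below illustrates this pattern for a hypothetical objective that counts the zero bits in a 0/1 list: after flipping one bit, the new objective value follows from the old one by a simple delta update, so register() can be used instead of evaluate().

    def flip_and_register(process, x, f, index):
        # flip one bit of x and compute the new objective value incrementally
        x[index] = 1 - x[index]
        new_f = f - 1 if x[index] == 1 else f + 1  # one zero bit less or one more
        # tell the process about the solution and its externally computed quality;
        # this counts one FE and updates the best-so-far solution if necessary
        process.register(x, new_f)
        return new_f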

should_terminate()[source]

Check whether the optimization process should terminate.

If this function returns True, the optimization process must not perform any objective function evaluations anymore. It will automatically become True when a termination criterion is hit or if anyone calls terminate(), which happens also at the end of a with statement.

Generally, the termination criterion is configured by the methods set_max_fes(), set_max_time_millis(), and set_goal_f() of the Execution builder. Furthermore, if the objective function has a finite lower_bound(), then this lower bound is also used as goal objective value if no goal objective value is specified via set_goal_f(). should_terminate() then returns True as soon as any one of the configured criteria is met, i.e., the process terminates when the earliest one of the criteria is met.

Return type:

bool

Returns:

True if the process should terminate, False if not

terminate()[source]

Terminate this process.

This function is automatically called at the end of the with statement, but can also be called by the algorithm when it is finished and is also invoked automatically when a termination criterion is hit. After the first time this method is invoked, should_terminate() becomes True.

Return type:

None

moptipy.api.process.check_goal_f(goal_f, none_is_ok=False)[source]

Check the goal objective value.

This is a small utility method that validates whether a goal objective value is valid.

Parameters:
  • goal_f (int | float | None) – the goal objective value

  • none_is_ok (bool, default: False) – is None ok?

Return type:

int | float | None

Returns:

the goal objective value, or None

Raises:
  • TypeError – if goal_f is None (and None is not allowed) or neither an int nor a float

  • ValueError – if goal_f is invalid

moptipy.api.process.check_max_fes(max_fes, none_is_ok=False)[source]

Check the maximum FEs.

This is a small utility method that validates whether a maximum for the objective function evaluations (FEs) is valid.

Parameters:
  • max_fes (int | None) – the maximum FEs

  • none_is_ok (bool, default: False) – is None ok?

Return type:

int | None

Returns:

the maximum FEs, or None

Raises:
  • TypeError – if max_fes is None (and None is not allowed) or not an int

  • ValueError – if max_fes is invalid

moptipy.api.process.check_max_time_millis(max_time_millis, none_is_ok=False)[source]

Check the maximum time in milliseconds.

This is a small utility method that validates whether a maximum for the milliseconds that can be used as runtime limit is valid.

Parameters:
  • max_time_millis (int | None) – the maximum time in milliseconds

  • none_is_ok (bool, default: False) – is None ok?

Return type:

int | None

Returns:

the maximum time in milliseconds, or None

Raises:
  • TypeError – if max_time_millis is None (and None is not allowed) or not an int

  • ValueError – if max_time_millis is invalid

moptipy.api.space module

Provide the functionality to access search and solution spaces.

A Space is the abstraction of the data structures for solutions and points in the search space that need to be generated, copied, and stored during the optimization process. This allows us to develop black-box algorithms while still being able to properly remember the best solutions, storing them as text strings in log files, and to validate whether they are correct.

All search or solution spaces in moptipy inherit from Space. If you implement a new space, you should test it with the pre-defined unit test routine validate_space().

The following pre-defined spaces are currently available:

  • BitStrings, the space of n-dimensional bit strings

  • IntSpace, a space of n-dimensional integer strings, where each element is between predefined inclusive bounds min_value…max_value.

  • Permutations is a special version of the IntSpace where all elements are permutations of a base string blueprint. This means that it can represent permutations both with and without repetitions. Depending on the base string, each element may occur an element-specific number of times. For the base string (-1, -1, 2, 7, 7, 7), for example, -1 may occur twice, 2 can occur once, and 7 three times.

  • VectorSpace is the space of n-dimensional floating point number vectors.

class moptipy.api.space.Space[source]

Bases: Component

A class to represent both search and solution spaces.

The space basically defines a container data structure and basic operations that we can apply to them. For example, a solution space contains all the possible solutions to an optimization problem. All of them are instances of one data structure. An optimization algorithm as well as a black-box process needs to be able to create and copy such objects. In order to store the solutions we found in a text file, we must further be able to translate them to strings. We should also be able to parse such strings. It is also important to detect whether two objects are the same and whether the contents of an object are valid. All of this functionality is offered by the Space class.
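
As a minimal illustration (the class IntListSpace below is a made-up example, not one of the pre-defined moptipy spaces), a space of fixed-length integer lists could be sketched as follows:

>>> from moptipy.api.space import Space
>>> class IntListSpace(Space):
...     """Hypothetical space of lists of n integers (illustration only)."""
...     def __init__(self, n):
...         super().__init__()
...         self.n = n
...     def create(self):
...         return [0] * self.n
...     def copy(self, dest, source):
...         dest[:] = source
...     def to_str(self, x):
...         return ";".join(str(xx) for xx in x)
...     def from_str(self, text):
...         x = [int(s) for s in text.split(";")]
...         self.validate(x)
...         return x
...     def is_equal(self, x1, x2):
...         return x1 == x2
...     def validate(self, x):
...         if not isinstance(x, list):
...             raise TypeError(f"x must be a list but is {type(x)}.")
...         if len(x) != self.n:
...             raise ValueError(f"x must have length {self.n}.")
>>> space = IntListSpace(3)
>>> space.to_str(space.create())
'0;0;0'
>>> space.from_str("1;2;3")
[1, 2, 3]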

copy(dest, source)[source]

Copy one instance of the data structure to another one.

Notice that the first parameter of this method is the destination, which will be overwritten by copying the contents of the second parameter over it.

Parameters:
  • dest – the destination data structure, whose contents will be overwritten with those from source

  • source – the source data structure, which remains unchanged and whose contents will be copied to dest

Return type:

None

create()[source]

Generate an instance of the data structure managed by the space.

The state/contents of this data structure are undefined. It may not pass the validate() method.

Return type:

Any

Returns:

the new instance

from_str(text)[source]

Transform a string text to one element of the space.

This method should be implemented as the inverse of to_str(). It should check the validity of the result before returning it. It may not always be possible to implement this method, but you should try.

Parameters:

text (str) – the input string

Return type:

Any

Returns:

the element in the space corresponding to text

is_equal(x1, x2)[source]

Check if the contents of two instances of the data structure are equal.

Parameters:
  • x1 – the first instance

  • x2 – the second instance

Return type:

bool

Returns:

True if the contents are equal, False otherwise

n_points()[source]

Get the approximate number of different elements in the space.

This operation can help us when doing tests of the space API implementations. If we know how many points exist in the space, then we can judge whether a method that randomly generates points is sufficiently random, for instance.

By default, this method simply returns 2. If you have a better approximation of the size of the space, then you should override it.

Return type:

int

Returns:

the approximate scale of the space

to_str(x)[source]

Obtain a textual representation of an instance of the data structure.

This method should convert an element of the space to a string representation that is parseable by from_str() and should ideally not be too verbose. For example, when converting a list or array x of integers to a string, one could simply do “;”.join([str(xx) for xx in x]), which would convert it to a semicolon-separated list without any wasted space.

Notice that this method is used by the Logger when storing the final optimization results in the log files in form of a TextLogSection created via text(). By implementing from_str() and to_str() as exact inverses of each other, you can thus ensure that you can always automatically load the results of your optimization runs from the log files created via set_log_file() of the Execution class.

Parameters:

x – the instance

Return type:

str

Returns:

the string representation of x

validate(x)[source]

Check whether a given point in the space is valid.

This function should be implemented such that it very carefully checks whether the argument x is a valid element of this space. It should check the Python data type of x and the type of its components and raise a TypeError if it does not match the exact requirements. It should also check the value of each element of x whether it is permitted. Once this function returns without throwing an exception, the user can rely on that the data structure x is correct.

For example, if we have a space of permutations of the values from 1 to n, where the elements are represented as numpy.ndarray objects, then this function should first check whether x is indeed an instance of numpy.ndarray. If not, it should raise a TypeError. Then it could check whether the length of x is indeed n and raise a ValueError otherwise. It could then check whether each element of x is from 1..n and occurs exactly one time (and otherwise raise a ValueError). Moreover, it should also check whether the numpy dtype of x is an appropriate integer data type and raise a ValueError otherwise. Hence, if this function checks an element x and does not raise any error, the user can rely on that this element x is, indeed, a fully valid permutation.

In our system, we use this method for example at the end of optimization processes. Every solution that is written to a log file must pass through this method. In other words, we ensure that only valid solutions are stored. If the optimization algorithm or a search operator has a bug and sometimes may produce invalid data structures, this a) helps us in finding the bug and b) prevents us from storing invalid solutions. It is also strongly encouraged that the from_str() method of any Space implementation should run its results through validate(). Since from_str() can be used to, e.g., parse the data from the result sections of log files, this ensures that no corrupted or otherwise invalid data is parsed into an application. See https://thomasweise.github.io/moptipy/#data-formats for more information on log files.

Parameters:

x – the point

Raises:
  • TypeError – if the point x (or one of its elements, if applicable) has the wrong data type

  • ValueError – if the point x is invalid and/or simply is not an element of this space

Return type:

None

moptipy.api.space.check_space(space, none_is_ok=False)[source]

Check whether an object is a valid instance of Space.

Parameters:
  • space (Any) – the object

  • none_is_ok (bool, default: False) – is it ok if None is passed in?

Return type:

Space | None

Returns:

the object

Raises:

TypeError – if space is not an instance of Space

>>> check_space(Space())
Space
>>> check_space(Space(), True)
Space
>>> check_space(Space(), False)
Space
>>> try:
...     check_space('A')
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is str, namely 'A'.
>>> try:
...     check_space('A', True)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is str, namely 'A'.
>>> try:
...     check_space('A', False)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is str, namely 'A'.
>>>
>>> try:
...     check_space(None)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is None.
>>> print(check_space(None, True))
None
>>> try:
...     check_space(None, False)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is None.

moptipy.api.subprocesses module

Different ways to transform and slice processes.

In this module, we provide some routines that can be used to slice off computational budgets of a given process for running algorithms. The following functions are included:

  1. for_fes() allows for creating a sub-process that forwards all method calls to the original process but will perform at most a given number of objective function evaluations.

  2. from_starting_point() creates a sub-process that has the current-best solution pre-set to a given point in the search space and its quality. If the best solution is improved upon, the provided point will be overwritten in place.

  3. without_should_terminate() wraps a process in such a way that the termination criterion should_terminate() does not need to be checked anymore. Instead, once the optimization must stop, it will throw an internal exception and catch it again. This makes it possible to pass evaluate() to externally implemented algorithms that do not care about the moptipy API.

The utility function get_remaining_fes() returns a number representing the remaining objective function evaluations of a given Process. If that process does not have an FE-based termination criterion, it will instead return a very big number.
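
A sketch of the budget-slicing idea is given below; algorithm and process are placeholders for an Algorithm instance and an active Process.

    from moptipy.api.subprocesses import for_fes, get_remaining_fes

    def run_with_half_budget(algorithm, process):
        # give the algorithm at most half of the still remaining FE budget
        budget = max(1, get_remaining_fes(process) // 2)
        sub = for_fes(process, budget)   # sub forwards all calls to process ...
        algorithm.solve(sub)             # ... but signals termination after budget FEs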

class moptipy.api.subprocesses.T

The type variable for single- and multi-objective processes.

alias of TypeVar(‘T’, ~moptipy.api.process.Process, ~moptipy.api.mo_process.MOProcess)

moptipy.api.subprocesses.for_fes(process, max_fes)[source]

Create a sub-process that can run for the given number of FEs.

Parameters:
  • process (TypeVar(T, Process, MOProcess)) – the original process

  • max_fes (int) – the maximum number of objective function evaluations

Return type:

TypeVar(T, Process, MOProcess)

Returns:

the sub-process that will terminate after max_fes FEs and that forwards all other calls to the process.

moptipy.api.subprocesses.from_starting_point(owner, in_and_out_x, f)[source]

Create a sub-process searching from one starting point.

This process is especially useful in conjunction with class Op0Forward. This class allows forwarding the nullary search operator to the function get_copy_of_best_x(). This way, the first point that is sampled by a local search can be the point specified as in_and_out_x, which effectively seeds the local search.

To dovetail with this seeding mechanism, no FEs are counted at the beginning of the process as long as all points to be evaluated are equal to in_and_out_x. As soon as the first point different from in_and_out_x is evaluated, FE counting starts.

Equally-good solutions will also be accepted, i.e., stored into in_and_out_x. This costs a little bit of runtime, but would normally be the preferred behavior: On many problems, making neutral moves (i.e., drifting) will be beneficial over only accepting strict improvements. This is why rls outperforms the normal hill_climber on the jssp.

Parameters:
  • owner (Process) – the owning process

  • in_and_out_x (Any) – the input solution record, which will be overwritten with the best encountered solution

  • f (int | float) – the objective value corresponding to in_and_out_x

Return type:

Process
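
A sketch of how a run could be seeded with a known solution is given below; algorithm, seed_x, and seed_f are placeholders for an Algorithm instance, a point in the search space, and its objective value.

    from moptipy.api.subprocesses import from_starting_point

    def refine(owner, algorithm, seed_x, seed_f):
        # search from the known solution; improvements overwrite seed_x in place
        sub = from_starting_point(owner, seed_x, seed_f)
        algorithm.solve(sub)
        return sub.get_best_f()   # the (possibly improved) objective value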

moptipy.api.subprocesses.get_remaining_fes(process)[source]

Get a finite number representing the remaining FEs of a process.

If the process has the maximum objective function evaluations (FEs) set (see get_max_fes()), then this method returns the maximum FEs minus the consumed FEs (see get_consumed_fes()). Otherwise, i.e., if get_max_fes() returns None, this function returns a very large number, namely 9223372036854775807, i.e., (2 ** 63) - 1. This number is so high that it will always be impossible to consume it in terms of FEs. But it is also finite in any case. When trying to slice off budgets or to compute things based on the remaining budget, this makes it unnecessary for us to deal with special cases.

Parameters:

process (Process) – the process

Return type:

int

Returns:

an integer representing the remaining FEs of the process. If no FE limit is imposed by process, a very large number will be returned.

>>> from moptipy.api.process import Process as Proc
>>> class X(Proc):
...     def get_max_fes(self):
...         return None
...     def get_consumed_fes(self):
...         return 123
>>> get_remaining_fes(X())
9223372036854775807
>>> class Y(X):
...     def get_max_fes(self):
...         return 456
>>> get_remaining_fes(Y())
333
moptipy.api.subprocesses.without_should_terminate(algorithm, process)[source]

Apply an algorithm that does not call should_terminate to a process.

If we use an algorithm from an external library, this algorithm may ignore the proper usage of our API. With this method, we try to find a way to make sure that these calls are consistent with the termination criterion of the moptipy API.

Before calling evaluate(), register(), or f_evaluate(), an optimization algorithm must check via should_terminate() whether it should instead stop. If the algorithm was told to stop but invokes the evaluation routines anyway, an exception will be thrown and the process force-terminates. If the algorithm does not call should_terminate() even though it is supposed to stop, the results of evaluate() may be arbitrary (or positive infinity).

This function here can be used to deal with algorithms that do not invoke should_terminate(). It will invoke this method by itself before evaluate(), register(), and f_evaluate() and terminate the algorithm with an exception if necessary. It will then catch and swallow that exception.

Thus, we can now use algorithms that ignore our termination criteria and still force them to terminate when they should.

Some algorithms using this system are implemented in scipy and pdfo. These modules import external algorithms from other libraries which, of course, know nothing about how our moptipy works. They only accept the objective function and cannot handle the beautiful should_terminate()-based termination criteria. By using without_should_terminate(), however, we can still safely use them within moptipy compliant scenarios.

Parameters:
  • algorithm – the optimization algorithm to apply, as a callable that accepts the process as its argument

  • process – the process to which the algorithm should be applied

Return type:

None
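
A sketch of the wrapping pattern is given below. The function external_random_sampling stands in for an externally implemented algorithm and assumes that the search space consists of mutable real vectors; the final call is commented out because it needs an active process.

    from moptipy.api.subprocesses import without_should_terminate

    def external_random_sampling(proc):
        # an "external"-style algorithm: it never checks should_terminate()
        # and would loop forever if not stopped from the outside
        random = proc.get_random()
        x = proc.create()                  # assumed to be a mutable real vector
        while True:
            x[:] = random.random(len(x))   # sample a random point in [0, 1)^n
            proc.evaluate(x)               # the wrapper stops this loop once the budget is exhausted

    # applied to an (assumed) active process over a real-vector space:
    # without_should_terminate(external_random_sampling, process)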