moptipy.api package¶
The API for implementing optimization algorithms and problems.
This package provides two things:
the basic, abstract API for implementing optimization algorithms and problems
the abstraction and implementation of black-box processes in the form of the process API and its implementations
The former helps us to implement and plug together different components of optimization problems and optimization algorithms. The latter allows us to apply algorithms to problems and to collect the results in a transparent way. It also permits logging of the algorithm progress or even the change of dynamic parameters.
The moptipy API has the following major elements for implementing optimization algorithms and their components. You are encouraged to read more about the logic behind this structure in our free online book at https://thomasweise.github.io/oa.
component provides the base class Component, from which all components of algorithms and optimization problems inherit. It offers the basic method log_parameters_to() for writing all parameters and configuration elements of a component to a KeyValueLogSection of a log file (embodied by an instance of Logger) and the method initialize(), which must be overwritten to do any initialization necessary before a run of the optimization process.
algorithm provides the base class Algorithm, from which any optimization algorithm will inherit and which defines the optimization algorithm API, most importantly the method solve(), which must be overwritten to implement the algorithm behavior. Here you can also find subclasses such as Algorithm0, Algorithm1, and Algorithm2 for algorithms that use nullary; nullary and unary; as well as nullary, unary, and binary search operators.
encoding provides the base class Encoding, which can be used to implement the decoding procedure decode() that translates elements of a search space to elements of the solution space. Such an encoding may be needed if the two spaces differ, as they do, for instance, in our Job Shop Scheduling Problem example (see ob_encoding).
mo_algorithm provides the base class MOAlgorithm for multi-objective optimization algorithms, which is a subclass of Algorithm.
mo_archive provides the base class MOArchivePruner for pruning archives of non-dominated solutions during a multi-objective optimization process. Since multi-objective optimization problems may have many possible solutions, but we can only return/retain a finite number of them, it may be necessary to decide which solutions to keep and which to dispose of. This is the task of the archive pruner.
objective provides the base class Objective for objective functions, i.e., for implementing the criteria that rate how good a solution is. Objective functions are subject to minimization and provide a method evaluate(), which computes an integer or floating point value rating one solution. They may additionally have a lower_bound() and/or an upper_bound().
mo_problem provides the base class MOProblem, which represents a multi-objective optimization problem. This is a subclass of Objective and, at the same time, a collection of multiple instances of Objective. Each multi-objective problem can compute a vector of objective values via f_evaluate(), but also implements the single-objective evaluate() method, returning a default scalarization of the objective vector. This makes multi-objective optimization compatible with single-objective optimization. While actual multi-objective algorithms can work in a truly multi-objective fashion, logging can be unified and even single-objective methods can be applied. To allow for the representation of preferences, the default Pareto domination relationship can be overwritten by providing a custom implementation of f_dominates().
operators provides the base classes Op0, Op1, and Op2 for nullary, unary, and binary search operators, respectively. These can be used to implement the methods op0(), op1(), and op2() that are used by metaheuristic optimization algorithms to sample solutions and to derive new solutions from existing ones.
space provides the base class Space for implementing the functionality of search and solution spaces. An instance of Space offers methods such as create() for creating a data structure for holding a point in the search space (with undefined contents), copy() for copying one data structure to another one, to_str() and from_str() to convert a data structure to and from a string, is_equal() to check whether one data structure equals another one, and validate() to verify whether the contents of a data structure are valid. With these methods, optimization algorithms can create and copy the data structure containers holding solutions as needed, without requiring any information about the actual contents and layout of these structures. The search operators (see above) can then handle the actual processing of the data structures. At the same time, the string conversion routines allow for storing the results of algorithms in log files.
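To illustrate how these pieces interact, here is a hedged sketch (not from this page; the helper function name is made up and the argument order of copy() is assumed to be destination-first) of how algorithm-side code can work with any Space purely through its generic methods:

def show_space_usage(space):
    # `space` is assumed to be some Space implementation
    x1 = space.create()            # an empty container for a point
    x2 = space.create()            # a second container
    # ... a nullary operator would normally fill x1 with content here ...
    space.copy(x2, x1)             # copy the contents of x1 into x2 (assumed order: dest, source)
    assert space.is_equal(x1, x2)  # both containers now hold equal points
    text = space.to_str(x1)        # serialize the point, e.g., for a log file
    x3 = space.from_str(text)      # ... and parse it back from the string
    space.validate(x3)             # raises an error if the contents are invalid
    return x3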
The algorithm and experiment execution API has the following major elements:
execution provides the builder class Execution that is used to construct one application of an optimization algorithm to an optimization problem. It configures the termination criteria, what information should be logged, and which algorithm to apply to which problem. Its method execute() returns an instance of Process that contains the state of the optimization after it is completed, i.e., after the algorithm was executed. Log files follow the specification at https://thomasweise.github.io/moptipy/#log-file-sections.
The module logging holds mainly string constants that identify sections and keys for the data to be written to log files.
mo_execution provides the builder class MOExecution, which is the multi-objective equivalent of Execution.
experiment offers the function run_experiment(), which allows you to execute a structured experiment applying a set of optimization algorithms to a set of optimization problems in a reproducible fashion, in parallel or in a distributed way. It will create a folder structure as prescribed in https://thomasweise.github.io/moptipy/#file-names-and-folder-structure. The data from such a folder structure can then be read in by the experiment evaluation tools in package evaluation and discussed in https://thomasweise.github.io/moptipy/#evaluating-experiments.
process provides the class Process, which is the basis for all optimization processes. It offers the functionality of an objective function, the search space, the termination criterion, and the random number generator to the optimization algorithm. At the same time, it collects the best-so-far solution and writes log files (if needed) for the user.
mo_process provides the class MOProcess, which is the multi-objective equivalent of Process. It also collects an archive of solutions.
subprocesses offers several functions to slice off computational budget of a process to allow an algorithm to use a sub-algorithm, to wrap processes for running external algorithms that do not respect the moptipy API, or to run sub-algorithms starting with a specified initial solution.
Submodules¶
moptipy.api.algorithm module¶
The base classes for implementing optimization algorithms.
All optimization algorithms implemented based on the moptipy API inherit from Algorithm
. If you implement a new algorithm, you will want to override the following methods:
solve() implements the algorithm itself. It receives an instance of Process as parameter, which allows for the creation and evaluation of candidate solutions and provides a random number generator. The optimization algorithm will then sample solutions and pass them to evaluate() to obtain their objective values, striving to sample better and better solutions.
The dunder method __str__ should be overridden to return a short mnemonic name of the algorithm.
log_parameters_to() needs to be overridden if the algorithm has any parameters. This method should then store the values of all the parameters to the logging context. It should also invoke the log_parameters_to() routines of all sub-components of the algorithm.
initialize() needs to be overridden to reset/initialize all internal data structures and to invoke the initialize() methods of all components (such as search operators) of the algorithm.
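As a concrete illustration of these overrides, below is a minimal, hedged sketch of a pure random-sampling algorithm (not part of moptipy itself; the operator attribute and the mnemonic name are made up):

from moptipy.api.algorithm import Algorithm
from moptipy.api.process import Process
from moptipy.utils.logger import KeyValueLogSection


class RandomSampling(Algorithm):
    """A hypothetical sketch: keep sampling and evaluating random points."""

    def __init__(self, op0) -> None:
        super().__init__()
        self.op0 = op0                     # the nullary operator to use

    def solve(self, process: Process) -> None:
        x = process.create()               # container for a point in the search space
        random = process.get_random()      # the random number generator of this run
        while not process.should_terminate():
            self.op0.op0(random, x)        # sample a random point ...
            process.evaluate(x)            # ... the Process remembers the best one

    def initialize(self) -> None:
        super().initialize()
        self.op0.initialize()              # also initialize the sub-component

    def log_parameters_to(self, logger: KeyValueLogSection) -> None:
        super().log_parameters_to(logger)
        # scalar parameters, if any, would be stored here, e.g.:
        # logger.key_value("mySetting", self.my_setting)

    def __str__(self) -> str:
        return "rs"                        # short mnemonic name

In practice, extending Algorithm0 (see below) is simpler, since it already performs the operator bookkeeping in initialize() and log_parameters_to().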
Notice that we already provide specialized algorithm sub-classes for several common scenarios, such as:
Algorithm0 for algorithms that have a nullary search operator (Op0).
Algorithm1 for algorithms that have a nullary (Op0) and a unary (Op1) search operator.
Algorithm2 for algorithms that have a nullary (Op0), a unary (Op1), and a binary (Op2) search operator.
MOAlgorithm for multi-objective optimization problems.
These classes automatically invoke the log_parameters_to()
and initialize()
routines of their operators.
If you implement a new algorithm, you can and should test it with the pre-defined unit test routine validate_algorithm() or one of its specialized versions:
for bit-string based search spaces, based on validate_algorithm_on_bitstrings(): a. validate_algorithm_on_onemax(), b. validate_algorithm_on_leadingones()
for the JSSP, based on validate_algorithm_on_1_jssp(): a. validate_algorithm_on_jssp()
for real-valued vector search spaces, based on validate_algorithm_on_vectors(): a. validate_algorithm_on_ackley()
- class moptipy.api.algorithm.Algorithm[source]¶
Bases:
Component
A base class for implementing optimization algorithms.
- class moptipy.api.algorithm.Algorithm0(name, op0)[source]¶
Bases:
Algorithm
An algorithm with a nullary search operator.
- log_parameters_to(logger)[source]¶
Log the parameters of the algorithm to a logger.
- Parameters:
logger (
KeyValueLogSection
) – the logger for the parameters
- Return type:
- class moptipy.api.algorithm.Algorithm1(name, op0, op1)[source]¶
Bases:
Algorithm0
An algorithm with a unary search operator.
- log_parameters_to(logger)[source]¶
Log the parameters of the algorithm to a logger.
- Parameters:
logger (
KeyValueLogSection
) – the logger for the parameters
- Return type:
- class moptipy.api.algorithm.Algorithm2(name, op0, op1, op2)[source]¶
Bases:
Algorithm1
An algorithm with a binary and unary operator.
- log_parameters_to(logger)[source]¶
Log the parameters of the algorithm to a logger.
- Parameters:
logger (
KeyValueLogSection
) – the logger for the parameters
- Return type:
- moptipy.api.algorithm.check_algorithm(algorithm)[source]¶
Check whether an object is a valid instance of
Algorithm
.
- Parameters:
algorithm (
Any
) – the algorithm object
- Return type:
- Returns:
the object
- Raises:
>>> check_algorithm(Algorithm())
Algorithm
>>> try:
...     check_algorithm('A')
... except TypeError as te:
...     print(te)
algorithm should be an instance of moptipy.api.algorithm.Algorithm but is str, namely 'A'.
>>> try:
...     check_algorithm(None)
... except TypeError as te:
...     print(te)
algorithm should be an instance of moptipy.api.algorithm.Algorithm but is None.
moptipy.api.component module¶
Provides the base class for all components of the moptipy API.
All elements of the moptipy API inherit from Component
. If you implement a new such component, you can test it using the pre-defined unit test routine validate_component()
.
- class moptipy.api.component.Component[source]¶
Bases:
object
The base class for all components of the moptipy API.
- initialize()[source]¶
Initialize this component before a new run.
Before every run of the optimization algorithm, its initialize method is called. The algorithm in turn must call all the initialize methods of all of its components.
- Return type:
- log_parameters_to(logger)[source]¶
Log all parameters of this component as key-value pairs.
- Parameters:
logger (
KeyValueLogSection
) – the logger for the parameters
- Return type:
>>> from moptipy.utils.logger import InMemoryLogger
>>> with InMemoryLogger() as l:
...     with l.key_values("C") as kv:
...         Component().log_parameters_to(kv)
...     text = l.get_log()
>>> text[-2]
'class: moptipy.api.component.Component'
>>> text[-3]
'name: Component'
>>> len(text)
4
moptipy.api.encoding module¶
The base class for implementing encodings.
Sometimes, in optimization, the search space and the space of possible solutions are different. For example, in the Job Shop Scheduling Problem (JSSP), the search operators of the optimization algorithm may process permutations with repetitions (Permutations
) while the objective function rates Gantt
charts. The Gantt charts are the solutions that the user wants, but they are harder to deal with for search operators. However, we can easily develop search operators for permutations and permutations can be mapped to Gantt charts (see OperationBasedEncoding
). A combination of search space, solution space, and encoding can be configured when setting up an experiment Execution
, which then takes care of showing the search space to the optimization algorithm while providing the objective function and user with the elements of the solution spaces.
All encodings inherit from the class Encoding
. If you implement a new such encoding, you may want to test it using the pre-defined unit test routine validate_encoding()
.
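As a hedged illustration (a classic "random keys" style decoder, not taken from this page), a custom encoding only needs to implement decode():

import numpy as np

from moptipy.api.encoding import Encoding


class RandomKeys(Encoding):
    """Decode a vector of real-valued keys into a permutation via argsort."""

    def decode(self, x: np.ndarray, y: np.ndarray) -> None:
        # x (the point in the search space) remains unchanged;
        # the contents of y (the solution) are overwritten.
        y[:] = np.argsort(x)

    def __str__(self) -> str:
        return "randomKeys"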
- class moptipy.api.encoding.Encoding[source]¶
Bases:
Component
The encoding translates from a search space to a solution space.
- decode(x, y)[source]¶
Translate from search- to solution space.
Map a point x from the search space to a point y in the solution space.
- Parameters:
x – the point in the search space, remaining unchanged.
y – the destination data structure for the point in the solution space, whose contents will be overwritten
- Return type:
- moptipy.api.encoding.check_encoding(encoding, none_is_ok=True)[source]¶
Check whether an object is a valid instance of
Encoding
.
- Parameters:
- Return type:
- Returns:
the object
- Raises:
>>> check_encoding(Encoding())
Encoding
>>> check_encoding(Encoding(), True)
Encoding
>>> check_encoding(Encoding(), False)
Encoding
>>> try:
...     check_encoding('A')
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is str, namely 'A'.
>>> try:
...     check_encoding('A', True)
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is str, namely 'A'.
>>> try:
...     check_encoding('A', False)
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is str, namely 'A'.
>>> print(check_encoding(None))
None
>>> print(check_encoding(None, True))
None
>>> try:
...     check_encoding(None, False)
... except TypeError as te:
...     print(te)
encoding should be an instance of moptipy.api.encoding.Encoding but is None.
moptipy.api.execution module¶
The algorithm execution API.
- class moptipy.api.execution.Execution[source]¶
Bases:
object
Define all the components of an experiment and then execute it.
This class follows the builder pattern. It allows us to step-by-step store all the parameters needed to execute an experiment. Via the method execute(), we can then run the experiment and obtain the instance of Process after the execution of the algorithm. From this instance, we can query the final result of the algorithm application.
- execute()[source]¶
Execute the experiment and return the process after the run.
The optimization process constructed with this object is executed. This means that first, an instance of Process is constructed. Then, the method solve() is applied to this instance. In other words, the optimization algorithm is executed until it terminates. Finally, this method returns the Process instance after algorithm completion. This instance can then be queried for the final result of the run (via get_copy_of_best_y()), the objective value of this final best solution (via get_best_f()), and other information.
If a log file path was supplied to set_log_file(), then the information gathered from the optimization process will be written to the file after the with block using the process is left. See https://thomasweise.github.io/moptipy/#log-file-sections for a documentation of the log file structure and sections.
- Return type:
- Returns:
the process after the run, i.e., in the state where it can be queried for the result
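A hedged usage sketch of the builder (the concrete space, problem, algorithm, and operator classes, as well as set_solution_space(), come from other moptipy packages and are assumptions here; any compatible components work the same way):

from moptipy.algorithms.so.rls import RLS
from moptipy.api.execution import Execution
from moptipy.examples.bitstrings.onemax import OneMax
from moptipy.operators.bitstrings.op0_random import Op0Random
from moptipy.operators.bitstrings.op1_flip1 import Op1Flip1
from moptipy.spaces.bitstrings import BitStrings

space = BitStrings(32)                    # search space == solution space here
problem = OneMax(32)                      # the objective function, to be minimized
algorithm = RLS(Op0Random(), Op1Flip1())  # a simple randomized local search

with Execution().set_solution_space(space)\
        .set_objective(problem)\
        .set_algorithm(algorithm)\
        .set_rand_seed(1)\
        .set_max_fes(1000)\
        .execute() as process:            # run the algorithm, obtain the Process
    best_f = process.get_best_f()         # objective value of the best solution
    best_y = process.create()
    process.get_copy_of_best_y(best_y)    # a copy of the best solution itself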
- set_algorithm(algorithm)[source]¶
Set the algorithm to be used for this experiment.
- Parameters:
algorithm (
Algorithm
) – the algorithm
- Return type:
Self
- Returns:
this execution
- set_encoding(encoding)[source]¶
Set the encoding to be used for this experiment.
This is the function translating from the search space to the solution space.
- set_goal_f(goal_f)[source]¶
Set the goal objective value after which the process can stop.
If this method is called multiple times, then the largest value is retained.
- set_log_all_fes(log_all_fes=True)[source]¶
Set whether all objective function evaluations (FEs) should be logged.
If all FEs are logged, then the PROGRESS section will be added to the log files, as documented at https://thomasweise.github.io/moptipy/#the-section-progress.
- Parameters:
log_all_fes (
bool
, default:True
) – if all FEs should be logged?
- Return type:
Self
- Returns:
this execution
- set_log_file(log_file)[source]¶
Set the log file to write to.
If a path to a log file is provided, the contents of this file will be filled based on the structure documented at https://thomasweise.github.io/moptipy/#log-file-sections, which includes the algorithm parameters, the instance features, the system settings, the final solution, the corresponding point in the search space, etc.
This method can be called arbitrarily often.
- set_log_improvements(log_improvements=True)[source]¶
Set whether improvements should be logged.
If improvements are logged, then the PROGRESS section will be added to the log files, as documented at https://thomasweise.github.io/moptipy/#the-section-progress.
- Parameters:
log_improvements (
bool
, default:True
) – if improvements should be logged?
- Return type:
Self
- Returns:
this execution
- set_max_fes(max_fes, force_override=False)[source]¶
Set the maximum FEs.
This is the number of candidate solutions an optimization is allowed to evaluate. If this method is called multiple times, then the shortest limit is used unless force_override is True.
- set_max_time_millis(max_time_millis, force_override=False)[source]¶
Set the maximum time in milliseconds.
This is the maximum time that the process is allowed to run. If this method is called multiple times, the shortest time is used unless force_override is True.
- set_objective(objective)[source]¶
Set the objective function to be used for this experiment.
This is the function rating the quality of candidate solutions.
- Parameters:
objective (
Objective
) – the objective function
- Return type:
Self
- Returns:
this execution
- set_rand_seed(rand_seed)[source]¶
Set the seed to be used for initializing the random number generator.
moptipy.api.experiment module¶
The experiment execution API.
Via the function run_experiment()
, you can execute a complex experiment where multiple optimization algorithms are applied to multiple problem instances, where log files with the results and progress information about the runs are collected, and where multiprocessing is used to parallelize the experiment execution. Experiments are replicable, as random seeds are automatically generated based on problem instance names in a replicable fashion.
The log files are structured according to the documentation in https://thomasweise.github.io/moptipy/#file-names-and-folder-structure and their contents follow the specification given in https://thomasweise.github.io/moptipy/#log-file-sections.
- moptipy.api.experiment.run_experiment(base_dir, instances, setups, n_runs=11, perform_warmup=True, warmup_fes=20, perform_pre_warmup=True, pre_warmup_fes=20, on_completion=<function __no_complete>)[source]¶
Run an experiment and store the log files into the given folder.
This function will automatically run an experiment, i.e., apply a set setups of algorithm setups to a set instances of problem instances for n_runs each. It will collect log files and store them into an appropriate folder structure under the path base_dir. It will automatically draw random seeds for all algorithm runs using
moptipy.utils.nputils.rand_seeds_from_str()
based on the names of the problem instances to solve. This yields replicable experiments, i.e., running the experiment program twice will yield exactly the same runs in exactly the same file structure (give or take clock-time dependent issues, which obviously cannot be controlled in a deterministic fashion).
- Parameters:
base_dir (str) – the base directory where to store the results
instances (Iterable[Callable[[], Any]]) – an iterable of callables, each of which should return an object representing a problem instance, whose __str__ representation is a valid name
setups (Iterable[Callable[[Any], Execution]]) – an iterable of callables, each receiving an instance (as returned by instances) as input and producing an moptipy.api.execution.Execution as output
n_runs (Union[int, Iterable[int]], default: 11) – the number of runs per algorithm-instance combination
perform_warmup (bool, default: True) – should we perform a warm-up for each instance? If this parameter is True, then before the very first run of a thread on an instance, we will execute the algorithm for just a few function evaluations without logging and discard the results. The idea is that during this warm-up, things such as JIT compilation or complicated parsing can take place. While this cannot mitigate time measurement problems for JIT compilations taking place late in runs, it can at least somewhat solve the problem of delayed first FEs caused by compilation and parsing.
warmup_fes (int, default: 20) – the number of the FEs for the warm-up runs
perform_pre_warmup (bool, default: True) – should we do one warmup run for each instance before we begin with the actual experiments? This complements the warmups defined by perform_warmup. It could be that, for some reason, JIT or other activities may lead to stalls between multiple processes when code is encountered for the first time. This may or may not still cause strange timing issues even if perform_warmup=True. We therefore can do one complete round of warmups before starting the actual experiment. After that, we perform one garbage collection run and then freeze all objects surviving it to prevent them from future garbage collection runs. All processes that execute the experiment in parallel will complete their pre-warmup and only after all of them have completed it, the actual experiment will begin. I am not sure whether this makes sense or not, but it also would not hurt.
pre_warmup_fes (int, default: 20) – the FEs for the pre-warmup runs
on_completion (Callable[[Any, Path, Process], None], default: <function __no_complete>) – a function to be called for every completed run, receiving the instance, the path to the log file (before it is created), and the Process of the run as parameters
- Return type:
- Returns:
the canonicalized path to base_dir
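A hedged sketch of a small experiment, reusing the (assumed) bit-string components from the Execution example above; the attribute OneMax.n and the module paths are assumptions:

from moptipy.algorithms.so.rls import RLS
from moptipy.api.execution import Execution
from moptipy.api.experiment import run_experiment
from moptipy.examples.bitstrings.onemax import OneMax
from moptipy.operators.bitstrings.op0_random import Op0Random
from moptipy.operators.bitstrings.op1_flip1 import Op1Flip1
from moptipy.spaces.bitstrings import BitStrings


def rls_setup(instance: OneMax) -> Execution:
    # build one Execution of RLS on the given problem instance
    return Execution()\
        .set_solution_space(BitStrings(instance.n))\
        .set_objective(instance)\
        .set_algorithm(RLS(Op0Random(), Op1Flip1()))\
        .set_max_fes(1000)


# two problem instances (as callables) x one algorithm setup, 5 runs each
run_experiment(base_dir="./results",
               instances=[lambda: OneMax(16), lambda: OneMax(32)],
               setups=[rls_setup],
               n_runs=5)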
moptipy.api.logging module¶
Shared constants and functions for dealing with logs.
- moptipy.api.logging.ERROR_SECTION_PREFIX: Final[str] = 'ERROR_'¶
the common name prefix of all error sections
- moptipy.api.logging.KEY_ARCHIVE_F: Final[str] = 'f'¶
the objective value column of the archive: without a numeric suffix, it is the scalarized objective value; with a numeric suffix i, it is the result of the i-th objective function
- moptipy.api.logging.KEY_ARCHIVE_MAX_SIZE: Final[str] = 'archiveMaxSize'¶
the maximum archive size (after pruning)
- moptipy.api.logging.KEY_ARCHIVE_PRUNE_LIMIT: Final[str] = 'archivePruneLimit'¶
the pruning limit of the archive size
- moptipy.api.logging.KEY_ARCHIVE_SIZE: Final[str] = 'archiveSize'¶
the number of archived non-dominated solutions
- moptipy.api.logging.KEY_BEST_FS: Final[str] = 'bestFs'¶
the vector of objective values of the best solution
- moptipy.api.logging.KEY_CPU_AFFINITY: Final[str] = 'cpuAffinity'¶
the affinity of the process to logical CPUs
- moptipy.api.logging.KEY_EXCEPTION_STACK_TRACE: Final[str] = 'exceptionStackTrace'¶
the key for the exception stack trace
- moptipy.api.logging.KEY_EXCEPTION_VALUE: Final[str] = 'exceptionValue'¶
the key for the exception value
- moptipy.api.logging.KEY_F_LOWER_BOUND: Final[str] = 'lowerBound'¶
the default log key for the lower bound of objective function values
- moptipy.api.logging.KEY_F_UPPER_BOUND: Final[str] = 'upperBound'¶
the default log key for the upper bound of objective function values
- moptipy.api.logging.KEY_GOAL_F: Final[str] = 'goalF'¶
the goal objective value of a black-box process
- moptipy.api.logging.KEY_HW_N_PHYSICAL_CPUS: Final[str] = 'nPhysicalCpus'¶
the number of physical CPUs
- moptipy.api.logging.KEY_LAST_IMPROVEMENT_FE: Final[str] = 'lastImprovementFE'¶
the FE when the best objective value was reached
- moptipy.api.logging.KEY_LAST_IMPROVEMENT_TIME_MILLIS: Final[str] = 'lastImprovementTimeMillis'¶
the time in milliseconds when the best objective value was reached
- moptipy.api.logging.KEY_MAX_TIME_MILLIS: Final[str] = 'maxTimeMillis'¶
the maximum runtime in milliseconds of a black-box process
- moptipy.api.logging.KEY_NODE_IP: Final[str] = 'ipAddress'¶
the ip address of the node on which the session is running
- moptipy.api.logging.KEY_PYTHON_IMPLEMENTATION: Final[str] = 'implementation'¶
the python implementation
- moptipy.api.logging.KEY_RAND_BIT_GENERATOR_TYPE: Final[str] = 'randBitGenType'¶
the type of the bit generator used by the random generator
- moptipy.api.logging.KEY_TOTAL_TIME_MILLIS: Final[str] = 'totalTimeMillis'¶
the total number of consumed milliseconds
- moptipy.api.logging.KEY_WORKING_DIRECTORY: Final[str] = 'workingDirectory'¶
the working directory of the process
- moptipy.api.logging.PROGRESS_CURRENT_F: Final[str] = 'f'¶
the current objective value column for the progress CSV
- moptipy.api.logging.PROGRESS_TIME_MILLIS: Final[str] = 'timeMS'¶
the time millis column for the progress CSV
- moptipy.api.logging.SCOPE_HARDWARE: Final[str] = 'hardware'¶
the hardware scope in the sys-info section
- moptipy.api.logging.SCOPE_VERSIONS: Final[str] = 'version'¶
the versions scope in the sys-info section
- moptipy.api.logging.SECTION_ARCHIVE_QUALITY: Final[str] = 'ARCHIVE_QUALITIES'¶
the archive quality section suffix
- moptipy.api.logging.SECTION_ERROR_BEST_F: Final[str] = 'ERROR_BEST_F_MISMATCH'¶
the section indicating a mismatch of the computed and registered best f
- moptipy.api.logging.SECTION_ERROR_INVALID_X: Final[str] = 'ERROR_INVALID_X'¶
the section indicating an invalid point in the search space
- moptipy.api.logging.SECTION_ERROR_INVALID_Y: Final[str] = 'ERROR_INVALID_Y'¶
the section indicating an invalid candidate solution
- moptipy.api.logging.SECTION_ERROR_IN_CONTEXT: Final[str] = 'ERROR_IN_CONTEXT'¶
the section indicating an error that occurred in the context of the process. This may be an error in the algorithm or, more likely, in the processing of the result.
- moptipy.api.logging.SECTION_ERROR_IN_LOG: Final[str] = 'ERROR_IN_LOG'¶
the section indicating an error caught during log writing
- moptipy.api.logging.SECTION_ERROR_IN_RUN: Final[str] = 'ERROR_IN_RUN'¶
the section indicating an error during the algorithm run
- moptipy.api.logging.SECTION_ERROR_TIMING: Final[str] = 'ERROR_TIMING'¶
the section indicating that the time measurement has an error
- moptipy.api.logging.SECTION_RESULT_X: Final[str] = 'RESULT_X'¶
the resulting point in the search space
- moptipy.api.logging.SECTION_RESULT_Y: Final[str] = 'RESULT_Y'¶
the resulting point in the solution space
moptipy.api.mo_algorithm module¶
The base classes for multi-objective optimization algorithms.
A multi-objective optimization algorithm is an optimization algorithm that can optimize multiple, possibly conflicting, objective functions at once. All such algorithms inherit from MOAlgorithm
. When developing or implementing new algorithms, you would still follow the concepts discussed in module algorithm
.
If you implement a new multi-objective algorithm, you can test it via the pre-defined unit test routine validate_mo_algorithm()
.
- class moptipy.api.mo_algorithm.MOAlgorithm[source]¶
Bases:
Algorithm
A base class for multi-objective optimization algorithms.
- solve(process)[source]¶
Forward to
solve_mo()
and cast process to MOProcess.
- solve_mo(process)[source]¶
Apply the multi-objective optimization algorithm to the given process.
- Parameters:
process (
MOProcess
) – the multi-objective process which provides methods to access the search space, the termination criterion, and a source of randomness. It also wraps the objective function, remembers the best-so-far solution, and takes care of creating log files (if this is wanted).
- Return type:
moptipy.api.mo_archive module¶
An archive of solutions for a multi-objective problem.
- class moptipy.api.mo_archive.MOArchivePruner[source]¶
Bases:
Component
A strategy for pruning an archive of solutions.
- prune(archive, n_keep, size)[source]¶
Prune an archive.
After invoking this method, the first n_keep entries in archive are selected to be preserved. The remaining entries (at indices n_keep…len(archive)-1) can be deleted.
Pruning therefore is basically just a method of sorting the archive according to a preference order of solutions. It will not delete any element from the list. The caller can do that afterwards if she wants.
This base class just provides a simple FIFO scheme.
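A hedged sketch of a custom pruner (not part of moptipy; the preference criterion is made up): it reorders the archive so that the records with the smallest first objective value come first, relying only on the prune() contract described above.

from moptipy.api.mo_archive import MOArchivePruner, MORecord


class KeepBestOnFirstObjective(MOArchivePruner):
    """Prefer the records that are best on the first objective."""

    def prune(self, archive: list[MORecord], n_keep: int, size: int) -> None:
        # sort the used part of the archive in place; the caller will treat
        # the first n_keep entries as the ones to be preserved
        archive[0:size] = sorted(archive[0:size], key=lambda rec: rec.fs[0])

    def __str__(self) -> str:
        return "keepBestOnF0"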
- class moptipy.api.mo_archive.MORecord(x, fs)[source]¶
Bases:
object
A record for the multi-objective archive.
The default sorting order of multi-objective records is lexicographic based on the objective value vector.
>>> import numpy as npx
>>> mr1 = MORecord("xxx", npx.array([1, 2, 3], int))
>>> print(mr1.x)
xxx
>>> print(mr1.fs)
[1 2 3]
>>> print(mr1)
fs=1;2;3, x=xxx
>>> mr2 = MORecord("yyy", npx.array([1, 2, 1], int))
>>> print(mr2.x)
yyy
>>> print(mr2.fs)
[1 2 1]
>>> mr1 < mr2
False
>>> mr2 < mr1
True
- moptipy.api.mo_archive.check_mo_archive_pruner(pruner)[source]¶
Check whether an object is a valid instance of
MOArchivePruner
.
- Parameters:
pruner (
Any
) – the multi-objective archive pruner
- Return type:
- Returns:
the object
- Raises:
TypeError – if pruner is not an instance of
MOArchivePruner
>>> check_mo_archive_pruner(MOArchivePruner())
fifo
>>> try:
...     check_mo_archive_pruner('A')
... except TypeError as te:
...     print(te)
pruner should be an instance of moptipy.api.mo_archive.MOArchivePruner but is str, namely 'A'.
>>> try:
...     check_mo_archive_pruner(None)
... except TypeError as te:
...     print(te)
pruner should be an instance of moptipy.api.mo_archive.MOArchivePruner but is None.
moptipy.api.mo_execution module¶
The multi-objective algorithm execution API.
- class moptipy.api.mo_execution.MOExecution[source]¶
Bases:
Execution
Define all the components of a multi-objective experiment and execute it.
Different from
Execution
, this class here allows us to construct multi-objective optimization processes, i.e., processes that have more than one optimization goal.
- execute()[source]¶
Create a multi-objective process, apply algorithm, and return result.
This method is the multi-objective equivalent of the
execute()
method. It returns a multi-objective process after applying the multi-objective algorithm.
- set_archive_max_size(size)[source]¶
Set the upper limit for the archive size (after pruning).
The internal archive of the multi-objective optimization process retains non-dominated solutions encountered during the search. Since there can be infinitely many such solutions, the archive could grow without bound if left untouched. Therefore, we define two size limits: the maximum archive size (defined by this method) and the pruning limit. Once the archive grows beyond the pruning limit, it is cut down to the archive size limit.
- Parameters:
size (
int
) – the maximum archive size
- Return type:
Self
- Returns:
this execution
- set_archive_pruner(pruner)[source]¶
Set the pruning strategy for downsizing the archive.
- Parameters:
pruner (
MOArchivePruner
) – the archive pruner
- Return type:
Self
- Returns:
this execution
- set_archive_pruning_limit(limit)[source]¶
Set the size limit of the archive above which pruning is performed.
If the size of the archive grows above this limit, the archive will be pruned down to the archive size limit.
- Parameters:
limit (
int
) – the archive pruning limit
- Return type:
Self
- Returns:
this execution
moptipy.api.mo_problem module¶
The base classes for multi-objective optimization problems.
This class provides the ability to evaluate solutions according to multiple criteria. The evaluation results are stored in a numpy array and also are scalarized to a single value.
Basically, a multi-objective problem provides three essential components:
It can evaluate a candidate solution according to multiple optimization objectives. Each objective returns one value, subject to minimization, and all the values are stored in a single numpy array. This is done by f_evaluate().
It provides a criterion deciding whether one such objective vector dominates (i.e., is strictly better than) another one. This is done by f_dominates(). The default definition adheres to the standard “domination” definition in multi-objective optimization: A vector a dominates a vector b if it is not worse in any objective value and better in at least one. But if need be, you can overwrite this behavior.
A scalarization approach: When evaluating a solution, the result is not just the objective vector itself, but also a single scalar value. This is needed to create compatibility with single-objective optimization. As a matter of fact, a MOProblem is actually a subclass of Objective. This means that, via this scalarization, all multi-objective problems can also be considered as single-objective problems and that single-objective algorithms can be applied to them as-is. It also means that log files are compatible. Multi-objective algorithms can just ignore the scalarization result and focus on the domination relationship. Often, a weighted sum approach (WeightedSum) may be the method of choice for scalarization.
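A hedged sketch of the scalarization idea using the MOSOProblemBridge documented below (the toy Ones objective and the bit-list input are assumptions): a single objective is wrapped so that it can be used through the multi-objective API.

from moptipy.api.mo_problem import MOSOProblemBridge
from moptipy.api.objective import Objective


class Ones(Objective):
    """A toy objective: count the 1 bits in a length-4 bit list."""

    def evaluate(self, x) -> int:
        return int(sum(x))

    def lower_bound(self) -> int:
        return 0

    def upper_bound(self) -> int:
        return 4

    def is_always_integer(self) -> bool:
        return True

    def __str__(self) -> str:
        return "ones"


mo_problem = MOSOProblemBridge(Ones())      # a MOProblem with one objective
fs = mo_problem.f_create()                  # vector receiving the objective values
mo_problem.f_evaluate([1, 0, 1, 1], fs)     # multi-objective view: fills fs
scalar = mo_problem.evaluate([1, 0, 1, 1])  # single-objective (scalarized) view
print(mo_problem.f_dimension(), fs, scalar)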
- class moptipy.api.mo_problem.MOProblem[source]¶
Bases:
Objective
The base class for multi-objective optimization problems.
A multi-objective optimization problem is defined as a set of Objective functions. Each candidate solution is evaluated using each of the objectives, i.e., is rated by a vector of objective values. This vector is the basis for deciding which candidate solutions to keep and which to discard.
In multi-objective optimization, this decision is based on “domination.” A solution a dominates a solution b if it is not worse in any objective and better in at least one. This comparison behavior is implemented in method f_dominates() and can be overwritten if need be.
In our implementation, we prescribe that each multi-objective optimization problem must also be accompanied by a scalarization function, i.e., a function that represents the vector of objective values as a single scalar value. The whole multi-objective problem can then also be viewed as a single objective function itself. The method evaluate() first evaluates all of the objective functions and obtains the vector of objective values. It then scalarizes the result into a single scalar quality and returns it. Multi-objective algorithms may instead use the method f_evaluate(), which also allows a vector to be passed in which will then be filled with the results of the individual objective functions.
This makes multi-objective optimization with moptipy compatible with single-objective optimization. In other words, all optimization methods implemented for single-objective processes (Process) will work out-of-the-box with the multi-objective version MOProcess.
Warning: We use instances of numpy.ndarray to represent the vectors of objective values. This necessitates that each objective function has, if it is integer-valued (is_always_integer() is True), a range that fits well into at least a 64-bit integer. Specifically, it must be possible to compute “a - b” without overflow or loss of sign for any two objective values “a” and “b” within the confines of a numpy signed 64-bit integer.
- evaluate(x)[source]¶
Evaluate a solution x and return its scalarized objective value.
This method computes all objective values for a given solution and then returns the scalarized result. The objective values themselves are directly discarded and not used. It makes a multi-objective problem compatible with single-objective optimization.
- f_create()[source]¶
Create a vector to receive the objective values.
This array will be of the length returned by
f_dimension()
and of the dtype of f_dtype().
- Return type:
- Returns:
a vector to receive the objective values
- f_dimension()[source]¶
Obtain the number of objective functions.
- Return type:
- Returns:
the number of objective functions
- f_dominates(a, b)[source]¶
Check if an objective vector dominates or is dominated by another one.
Usually, one vector is said to dominate another one if it is not worse in any objective and better in at least one. This behavior is implemented in
moptipy.api.mo_utils.dominates()
and this is also the default behavior of this method. However, depending on your concrete optimization task, you may overwrite this behavior.
- Parameters:
- Return type:
- Returns:
an integer value indicating the domination relationship
- Retval -1:
if a dominates b
- Retval 1:
if b dominates a
- Retval 2:
if b equals a
- Retval 0:
if a and b are mutually non-dominated, i.e., if neither a dominates b nor b dominates a, and b is also different from a
- f_dtype()[source]¶
Get the data type used in
f_create()
.
This data type will be an integer data type if all the objective functions are integer-valued. If the bounds of the objective values are known, then this type will be “big enough” to allow the subtraction “a - b” of any two objective vectors “a” and “b” to be computed without overflow or loss of sign. At most, however, this data type will be a 64-bit integer. If any one of the objective functions returns floating point data, this data type will be a floating point type.
- Return type:
- Returns:
the data type used by
f_create()
.
- class moptipy.api.mo_problem.MOSOProblemBridge(objective)[source]¶
Bases:
MOProblem
A bridge between multi-objective and single-objective optimization.
- log_parameters_to(logger)[source]¶
Log the parameters of this function to the provided destination.
- Parameters:
logger (
KeyValueLogSection
) – the logger for the parameters
- Return type:
- moptipy.api.mo_problem.check_mo_problem(mo_problem)[source]¶
Check whether an object is a valid instance of
MOProblem
.
- Parameters:
mo_problem (
Any
) – the multi-objective optimization problem
- Return type:
- Returns:
the mo-problem
- Raises:
>>> check_mo_problem(MOProblem())
moProblem
>>> try:
...     check_mo_problem(1)
... except TypeError as te:
...     print(te)
multi-objective optimziation problem should be an instance of moptipy.api.mo_problem.MOProblem but is int, namely 1.
>>> try:
...     check_mo_problem(None)
... except TypeError as te:
...     print(te)
multi-objective optimziation problem should be an instance of moptipy.api.mo_problem.MOProblem but is None.
moptipy.api.mo_process module¶
Processes offer data to both the user and the optimization algorithm.
They provide the information about the optimization process and its current state as handed to the optimization algorithm and, after the algorithm has finished, to the user. This is the multi-objective version of the Process API. It supports having multiple objective functions. It also provides a single core objective value, which is the scalarized result of the several objective values.
- class moptipy.api.mo_process.MOProcess[source]¶
A multi-objective Process API variant.
This class encapsulates an optimization process using multiple objective functions. It inherits all of its methods from the single-objective process Process and extends its API towards multi-objective optimization by providing the functionality of the class MOProblem.
- check_in(x, fs, prune_if_necessary=False)[source]¶
Check a solution into the archive.
This method is intended to be invoked after the optimization algorithm has finished its work. The algorithm should, by itself, maintain the set of interesting solutions during its course. Once it has completed all of its computations, it should flush these solutions to the process using this method. All the non-dominated solutions preserved in the archive will then become available via get_archive() to the code starting the optimization procedure.
If you have a sequence of MORecord instances that you want to flush into the archive, you can use the convenience method check_in_all() for that purpose.
- Parameters:
- Return type:
- Returns:
True if the solution was non-dominated and has actually entered the archive, False if it has not entered the archive
- check_in_all(recs, prune_if_necessary=False)[source]¶
Check in all the elements of an Iterable of MORecord instances.
This is a convenience wrapper around
check_in()
.
- get_copy_of_best_fs(fs)[source]¶
Get a copy of the objective vector of the current best solution.
This always corresponds to the best-so-far solution based on the scalarization of the objective vector. It is the best solution that the process has seen so far, the current best solution.
You should only call this method if you are either sure that you have invoked
evaluate()
, have invoked f_evaluate(), or have called has_best() before and it returned True.
moptipy.api.mo_utils module¶
Utilities for multi-objective optimization.
- moptipy.api.mo_utils.dominates(a, b)[source]¶
Check if one objective vector dominates or is dominated by another one.
- Parameters:
- Return type:
- Returns:
an integer value indicating the domination relationship
- Retval -1:
if a dominates b
- Retval 1:
if b dominates a
- Retval 2:
if b equals a
- Retval 0:
if a and b are mutually non-dominated, i.e., if neither a dominates b nor b dominates a, and b is also different from a
>>> from numpy import array
>>> dominates(array([1, 1, 1]), array([2, 2, 2]))
-1
>>> dominates(array([1.0, 1.0, 2.0]), array([2.0, 2.0, 1.0]))
0
>>> dominates(array([2, 2, 2]), array([1, 1, 1]))
1
>>> dominates(array([2, 2, 2]), array([2, 2, 2]))
2
- moptipy.api.mo_utils.lexicographic(a, b)[source]¶
Compare two arrays lexicographically.
- Parameters:
- Return type:
- Returns:
-1 if a is lexicographically less than b, 1 if b is less than a, 0 otherwise
- Retval -1:
if a is lexicographically less than b
- Retval 1:
if b is lexicographically less than a
- Retval 0:
if b equals a
>>> from numpy import array
>>> lexicographic(array([1, 1, 1]), array([2, 2, 2]))
-1
>>> lexicographic(array([1, 1, 1]), array([1, 1, 2]))
-1
>>> lexicographic(array([2, 2, 2]), array([1, 1, 1]))
1
>>> lexicographic(array([2, 2, 2]), array([2, 2, 1]))
1
>>> lexicographic(array([2, 2, 2]), array([2, 2, 2]))
0
moptipy.api.objective module¶
The base class for implementing objective functions.
An objective function evaluates the quality of a candidate solution of an optimization problem. Solutions with smaller objective values are better, i.e., objective functions are subject to minimization. All objective functions inherit from Objective
. If you implement a new objective function, you can test it via the pre-defined unit test routine validate_objective()
.
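A minimal hedged sketch of a custom objective function (the real-vector input format is an assumption for illustration):

from moptipy.api.objective import Objective


class SumOfSquares(Objective):
    """Rate a real vector by its squared length; smaller values are better."""

    def evaluate(self, x) -> float:
        return float(sum(v * v for v in x))  # finite, subject to minimization

    def lower_bound(self) -> float:
        return 0.0                           # no vector can score below zero

    def is_always_integer(self) -> bool:
        return False                         # evaluate() returns floats

    def __str__(self) -> str:
        return "sumOfSquares"                # short mnemonic name for log files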
- class moptipy.api.objective.Objective[source]¶
Bases:
Component
An objective function subject to minimization.
An objective function represents one optimization criterion that is used for rating the solution quality. All objective functions in our system are subject to minimization, meaning that smaller values are better.
- evaluate(x)[source]¶
Evaluate a solution x and return its objective value.
The return value is either an integer or a float and must be finite. Smaller objective values are better, i.e., all objective functions are subject to minimization.
- is_always_integer()[source]¶
Return True if
evaluate()
will always return an int value.- Return type:
- Returns:
True if
evaluate()
will always return an int or False if also a float may be returned.
- log_parameters_to(logger)[source]¶
Log the parameters of this function to the provided destination.
- Parameters:
logger (
KeyValueLogSection
) – the logger for the parameters
- Return type:
- moptipy.api.objective.check_objective(objective)[source]¶
Check whether an object is a valid instance of
Objective
.
- Parameters:
objective (
Any
) – the objective
- Return type:
- Returns:
the objective
- Raises:
>>> check_objective(Objective())
Objective
>>> try:
...     check_objective('A')
... except TypeError as te:
...     print(te)
objective function should be an instance of moptipy.api.objective.Objective but is str, namely 'A'.
>>> try:
...     check_objective(None)
... except TypeError as te:
...     print(te)
objective function should be an instance of moptipy.api.objective.Objective but is None.
moptipy.api.operators module¶
The base classes for implementing search operators.
Nullary search operators are used to sample the initial starting points of the optimization processes. They inherit from class Op0
. The pre-defined unit test routine validate_op0()
can and should be used to test all the nullary operators that are implemented.
Unary search operators accept one point in the search space as input and generate a new, similar point as output. They inherit from class Op1
. The pre-defined unit test routine validate_op1()
can and should be used to test all the unary operators that are implemented.
The basic unary operators Op1
have no parameter telling them how much of the input point to change. They may do a hard-coded number of modifications (as, e.g., Op1Swap2
does) or may apply a random number of modifications (like Op1SwapN
). There is a sub-class of unary operators named Op1WithStepSize
where a parameter step_size with a value from the closed interval [0.0, 1.0] can be supplied. If step_size=0.0, such an operator should perform the smallest possible modification and for step_size=1.0, it should perform the largest possible modification.
Binary search operators accept two points in the search space as input and generate a new point that should be similar to both inputs as output. They inherit from class Op2
. The pre-defined unit test routine validate_op2()
can and should be used to test all the binary operators that are implemented.
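A hedged sketch of custom nullary and unary operators over numpy boolean vectors (the representation is an assumption; real operators are defined per search space):

import numpy as np
from numpy.random import Generator

from moptipy.api.operators import Op0, Op1


class Op0RandomBits(Op0):
    """Nullary operator: fill dest with uniformly random bits."""

    def op0(self, random: Generator, dest: np.ndarray) -> None:
        dest[:] = random.integers(0, 2, len(dest)) == 1

    def __str__(self) -> str:
        return "randomBits"


class Op1FlipOne(Op1):
    """Unary operator: copy x into dest and flip exactly one bit."""

    def op1(self, random: Generator, dest: np.ndarray, x: np.ndarray) -> None:
        np.copyto(dest, x)              # dest becomes a copy of x ...
        i = random.integers(len(dest))  # ... then pick one random position
        dest[i] = not dest[i]           # ... and flip it

    def __str__(self) -> str:
        return "flipOne"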
- class moptipy.api.operators.Op0[source]¶
Bases:
Component
A base class to implement a nullary search operator.
- class moptipy.api.operators.Op1[source]¶
Bases:
Component
A base class to implement a unary search operator.
- class moptipy.api.operators.Op1WithStepSize[source]¶
Bases:
Op1
A unary search operator with a step size.
- op1(random, dest, x, step_size=0.0)[source]¶
Copy x to dest but apply a modification with a given step_size.
This operator is similar to
Op1.op1()
in that it stores a modified copy of x into dest. The difference is that you can also specify how much that copy should be different: The parameter step_size can take on any value in the interval [0.0, 1.0], including the two boundary values. A step_size of 0.0 indicates the smallest possible move (for which dest will still be different from x) and step_size=1.0 will lead to the largest possible move.
The step_size may be interpreted differently by different operators: Some may interpret it as an exact requirement and enforce steps of the exact specified size, see, for example, module op1_flip_m. Others might interpret it stochastically as an expectation. Yet others may interpret it as a goal step width and try to realize it in a best-effort kind of way, but may also do smaller or larger steps if the best effort fails, see, for example, module op1_swap_exactly_n. What all operators should, however, have in common is that at step_size=0.0, they should try to perform the smallest possible change and at step_size=1.0, they should try to perform the largest possible change. For all values in between, step sizes should grow with rising step_size. This should allow algorithms that know nothing about the nature of the search space or the operator’s moves to still tune between small and large moves based on a policy which makes sense in a black-box setting.
Every implementation of Op1WithStepSize must specify a reasonable default value for this parameter to ensure compatibility with Op1.op1(). In this base class, we set the default to 0.0.
Finally, if a step_size value is passed in which is outside the interval [0, 1], the behavior of this method is undefined. It may throw an exception or not. It may also enter an infinite loop.
- class moptipy.api.operators.Op2[source]¶
Bases:
Component
A base class to implement a binary search operator.
- moptipy.api.operators.check_op0(op0)[source]¶
Check whether an object is a valid instance of
Op0
.
- Parameters:
- Return type:
- Returns:
the object op0
- Raises:
>>> check_op0(Op0())
Op0
>>> try:
...     check_op0('A')
... except TypeError as te:
...     print(te)
op0 should be an instance of moptipy.api.operators.Op0 but is str, namely 'A'.
>>> try:
...     check_op0(None)
... except TypeError as te:
...     print(te)
op0 should be an instance of moptipy.api.operators.Op0 but is None.
- moptipy.api.operators.check_op1(op1)[source]¶
Check whether an object is a valid instance of
Op1
.
- Parameters:
- Return type:
- Returns:
the object
- Raises:
>>> check_op1(Op1())
Op1
>>> try:
...     check_op1('A')
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1 but is str, namely 'A'.
>>> try:
...     check_op1(None)
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1 but is None.
- moptipy.api.operators.check_op1_with_step_size(op1)[source]¶
Check whether an object is a valid instance of
Op1WithStepSize
.
- Parameters:
op1 (
Any
) – the (supposed) instance of Op1WithStepSize
- Return type:
- Returns:
the object op1
- Raises:
TypeError – if op1 is not an instance of
Op1WithStepSize
>>> check_op1_with_step_size(Op1WithStepSize())
Op1WithStepSize
>>> try:
...     check_op1_with_step_size('A')
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1WithStepSize but is str, namely 'A'.
>>> try:
...     check_op1_with_step_size(Op1())
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1WithStepSize but is moptipy.api.operators.Op1.
>>> try:
...     check_op1_with_step_size(None)
... except TypeError as te:
...     print(te)
op1 should be an instance of moptipy.api.operators.Op1WithStepSize but is None.
- moptipy.api.operators.check_op2(op2)[source]¶
Check whether an object is a valid instance of
Op2
.
- Parameters:
- Return type:
- Returns:
the object op2
- Raises:
>>> check_op2(Op2())
Op2
>>> try:
...     check_op2('A')
... except TypeError as te:
...     print(te)
op2 should be an instance of moptipy.api.operators.Op2 but is str, namely 'A'.
>>> try:
...     check_op2(None)
... except TypeError as te:
...     print(te)
op2 should be an instance of moptipy.api.operators.Op2 but is None.
moptipy.api.process module¶
Processes offer data to both the user and the optimization algorithm.
They provide the information about the optimization process and its current state as handed to the optimization algorithm and, after the algorithm has finished, to the user. They also supply the optimization algorithm with everything it needs to run, e.g., random numbers (get_random()
), they evaluate solutions (evaluate()
), and they tell it when to stop (should_terminate()
).
The idea behind this interface is to treat optimization algorithms as so-called anytime algorithms. An anytime algorithm will begin with a guess about what the solution for a problem could be. It will then iteratively sample and evaluate (evaluate()
) new solutions, i.e., new and hopefully better guesses. It can be stopped at any time, e.g., by the termination criterion, should_terminate()
and then return the best guess of the solution (get_copy_of_best_y()
, get_best_f()
).
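A hedged sketch of this anytime pattern as it would appear inside an algorithm's solve() implementation (op0 and op1 stand for some nullary/unary operators and are assumptions):

from moptipy.api.process import Process


def anytime_loop(process: Process, op0, op1) -> None:
    random = process.get_random()          # the run's random number generator
    best_x = process.create()              # container for the current guess
    new_x = process.create()               # container for the next guess
    op0.op0(random, best_x)                # create an initial guess ...
    best_f = process.evaluate(best_x)      # ... and evaluate it
    while not process.should_terminate():  # stop whenever we are told to
        op1.op1(random, new_x, best_x)     # derive a new, similar guess
        new_f = process.evaluate(new_x)    # the process tracks the best anyway
        if new_f <= best_f:                # accept the new guess if not worse
            best_f = new_f
            best_x, new_x = new_x, best_x  # swap the containers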
The process API also collects all the information about the optimization process, performs in-memory logging if wanted, and can write a standardized, text-based log file for each run of an algorithm in a clear folder structure. By storing information about the algorithm, the problem, and the system, as well as the random seed, this allows for self-documenting and replicable experiments.
The class Process
is a base class from which all optimization processes are derived. It is for the standard single-objective optimization case. A multi-objective variant is given in module mo_process
as class MOProcess
.
Furthermore, processes also lend themselves to “forking” off some of the computational budget of an algorithm to sub-algorithms. For this purpose, the module subprocesses provides specialized routines, such as for_fes() for creating sub-processes that forward all method calls to the original process but will perform at most a given number of objective function evaluations, or from_starting_point(), which creates a sub-process that has the current-best solution pre-set to a given point in the search space and its quality. without_should_terminate() wraps a process in such a way that the termination criterion should_terminate() does not need to be checked by the algorithm itself but is enforced automatically, which is suitable for invoking externally implemented optimization algorithms that do not know/care about the moptipy API.
Mark S. Boddy and Thomas L. Dean. Solving Time-Dependent Planning Problems. Report CS-89-03, February 1989. Providence, RI, USA: Brown University, Department of Computer Science. ftp://ftp.cs.brown.edu/pub/techreports/89/cs89-03.pdf
- class moptipy.api.process.Process[source]¶
Bases:
Space
,Objective
,AbstractContextManager
Processes offer data to the optimization algorithm and the user.
A Process presents the objective function and search space to an optimization algorithm. Since it wraps the actual objective function, it can see all evaluated solutions and remember the best-so-far solution. It can also count the FEs and the runtime that has passed. Therefore, it also presents the termination criterion to the optimization algorithm. It also provides a random number generator to the algorithm. It can write log files with the progress of the search and the end result. Finally, it provides the end result to the user, who can access it after the algorithm has finished.
- add_log_section(title, text)[source]¶
Add a section to the log, if a log is written (otherwise ignore it).
When creating the experiment Execution, you can specify a log file via the method set_log_file(). Then, the results of your algorithm and the system configuration will be stored as text in this file. Each type of information will be stored in a different section. The end state with the final solution quality, for instance, will be stored in a section named STATE. Each section begins with the line BEGIN_XXX and ends with the line END_XXX, where XXX is the name of the section. Between these two lines, all the contents of the section are stored.
This method here allows you to add a custom section to your log file. This can happen in your implementation of the method solve() of your algorithm (ideally at its end). Of course, invoking this method only makes sense if there actually is a log file. You can check for this by calling has_log().
You can specify a custom section name (which must be in upper case characters) and a custom section body text. Of course, the name of this section must not clash with any other section name. Neither the section name nor the section body should contain strings like BEGIN_ or END_. You do not want to mess up your log files. Of course, you can add a section with a given name only once, because otherwise there would be a name clash. Anyway, if you add sections like this, they will be appended at the end of the log file. This way, you have all the standard log data and your additional information in one consistent file.
Be advised: Adding sections costs time and memory. You do not want to do such a thing in a loop. If your algorithm should store additional data, it makes sense to gather this data in an efficient way during the run and only flush it to a section at the end of the run.
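A hedged sketch of this advice (the section name and the gathered strings are made up): collect data cheaply during the run and flush it into a single custom section at the end of solve():

from moptipy.api.process import Process


def solve_with_notes(process: Process) -> None:
    notes: list[str] = []         # gather custom data cheaply during the run
    # ... the actual optimization loop would run here, appending to notes ...
    notes.append("restarts: 3")
    if process.has_log():         # only pay the cost if a log is written at all
        process.add_log_section("MY_ALGORITHM_NOTES", "\n".join(notes))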
- evaluate(x)[source]¶
Evaluate a solution x and return its objective value.
This method implements the
evaluate()
method of themoptipy.api.objective.Objective
function interface, but onProcess
level.The return value is either an integer or a float and must be finite. Smaller objective values are better, i.e., all objective functions are subject to minimization.
This method here is usually a wrapper that internally invokes the actual
Objective
function, but it does more: While it does use theevaluate()
method of the objective function to compute the quality of a candidate solution, it also internally increments the counter for the objective function evaluations (FEs) that have passed. You can request the number of these FEs viaget_consumed_fes()
(and also the time that has passed viaget_consumed_time_millis()
, but this is unrelated to theevaluate()
method).Still, counting the FEs like this allows us to know when, e.g., the computational budget in terms of a maximum permitted number of FEs has been exhausted, in which case
should_terminate()
will become True.Also, since this method will see all objective values and the corresponding candidate solutions, it is able to internally remember the best solution you have ever created and its corresponding objective value. Therefore, the optimization
Process
can provide both to you via the methodshas_best()
,get_copy_of_best_x()
,get_copy_of_best_y()
, andget_best_f()
. At the same time, if a goal objective value or lower bound for the objective function is specified and one solution is seen that has such a quality,should_terminate()
will again become True.Finally, this method also performs all logging, e.g., of improving moves, in memory if logging is activated. (See
set_log_file()
,set_log_improvements()
, andset_log_all_fes()
.)In some cases, you may not need to invoke the original objective function via this wrapper to obtain the objective value of a solution. Indeed, in some cases you know the objective value because of the way you constructed the solution. However, you still need to tell our system the objective value and provide the solution to ensure the correct counting of FEs, the correct preservation of the best solution, and the correct setting of the termination criterion. For these situations, you will call
register()
instead ofevaluate()
.
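To see how evaluate() interacts with the other Process methods, here is a hedged sketch of a trivial random-sampling solve() implementation; the nullary operator self.op0 is an assumption in the style of Algorithm0, while everything else uses only the methods documented here:

def solve(self, process) -> None:
    # Minimal random-sampling sketch; self.op0 (a nullary search operator) is assumed.
    x = process.create()                  # the Process also acts as the search Space
    random = process.get_random()         # the only permitted source of randomness
    while not process.should_terminate(): # always check before spending an FE
        self.op0.op0(random, x)           # sample a random point into x
        process.evaluate(x)               # counts the FE and remembers the best solution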
- get_best_f()[source]¶
Get the objective value of the current best solution.
This always corresponds to the best-so-far solution, i.e., the best solution that you have passed to
evaluate()
orregister()
so far. It is NOT the best possible objective value for the optimization problem. It is the best objective value that the process has seen so far, the current best objective value. You should only call this method if you are sure that you have invoked evaluate() or register() at least once, or if you have called has_best() before and it returned True.
- get_consumed_fes()[source]¶
Obtain the number of consumed objective function evaluations.
This is the number of calls to
evaluate()
.- Return type:
- Returns:
the number of objective function evaluations so far
- get_consumed_time_millis()[source]¶
Obtain an approximation of the consumed runtime in milliseconds.
- Returns:
the consumed runtime measured in milliseconds.
- Return type:
- get_copy_of_best_x(x)[source]¶
Get a copy of the current best point in the search space.
This always corresponds to the point in the search space encoding the best-so-far solution, i.e., the best point in the search space that you have passed to
evaluate()
orregister()
so far. It is NOT the global optimum of the optimization problem. It corresponds to the best solution that the process has seen so far, the current best solution. Even if the optimization algorithm using this process does not preserve this solution in a special variable and has already lost it again, this method will still return it. The optimization process encapsulated by this process object will always remember it.
This also means that your algorithm implementations do not need to store the best-so-far solution anywhere if doing so would be complicated. They can obtain it simply from this method whenever needed.
You should only call this method if you are sure that you have invoked evaluate() or register() at least once, or if you have called has_best() before and it returned True. For understanding the relationship between the search space and the solution space, see module
encoding
.- Parameters:
x – the destination data structure to be overwritten
- Return type:
- get_copy_of_best_y(y)[source]¶
Get a copy of the current best point in the solution space.
This always corresponds to the best-so-far solution, i.e., the best solution that you have passed to
evaluate()
orregister()
so far. It is NOT the global optimum for the optimization problem. It is the best solution that the process has seen so far, the current best solution. You should only call this method if you are sure that you have invoked evaluate() or register() at least once, or if you have called has_best() before and it returned True.- Parameters:
y – the destination data structure to be overwritten
- Return type:
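For illustration, a hedged sketch of how the best-so-far solution can be retrieved through these accessors, using only the methods documented in this class:

# Minimal sketch, e.g., at the end of solve(); `process` is the Process instance.
if process.has_best():                    # at least one FE has been performed
    best_y = process.create()             # the Process is also a Space and can create containers
    process.get_copy_of_best_y(best_y)    # copy the best-so-far solution into best_y
    best_f = process.get_best_f()         # and obtain its objective value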
- get_last_improvement_fe()[source]¶
Get the FE at which the last improvement was made.
You should only call this method if you are sure that you have invoked evaluate() or register() at least once, or if you have called has_best() before and it returned True.- Return type:
- Returns:
the function evaluation when the last improvement was made
- Raises:
ValueError – if no FE was performed yet
- get_last_improvement_time_millis()[source]¶
Get the time in milliseconds at which the last improvement was made.
You should only call this method if you are sure that you have invoked evaluate() or register() at least once, or if you have called has_best() before and it returned True.- Return type:
- Returns:
the time in milliseconds when the last improvement was made
- Raises:
ValueError – if no FE was performed yet
- get_log_basename()[source]¶
Get the basename of the log, if any.
If a log file is associated with this process, then this function returns the name of the log file without the file suffix. If no log file is associated with the process, then None is returned.
This can be used to store additional information during the run of the optimization algorithm. However, treat this carefully, as some files with the same base name may exist or be generated by other modules.
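A short hedged sketch of this intended use; the companion file name chosen below is hypothetical:

# Hypothetical companion file placed next to the log file.
base = process.get_log_basename()
if base is not None:                      # only if a log file is actually written
    with open(base + "_trace.csv", "w") as out:
        out.write("fe;f\n")               # illustrative content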
- get_max_fes()[source]¶
Obtain the maximum number of permitted objective function evaluations.
If no limit is set, None is returned.
- get_max_time_millis()[source]¶
Obtain the maximum runtime permitted in milliseconds.
If no limit is set, None is returned.
- get_random()[source]¶
Obtain the random number generator.
The optimization algorithm and all of its components must only use this random number generator for all their non-deterministic decisions. In order to guarantee reproducible runs, there must not be any other source of randomness. This generator can be seeded in the
set_rand_seed()
method of theExecution
builder object.- Return type:
- Returns:
the random number generator
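A brief hedged sketch; the calls below assume the generator follows the numpy.random.Generator interface:

random = process.get_random()    # the one and only RNG an algorithm may use
i = random.integers(10)          # e.g., pick an index in 0..9
p = random.random()              # or draw a float in [0, 1)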
- has_best()[source]¶
Check whether a current best solution is available.
As soon as one objective function evaluation has been performed, the black-box process can provide a best-so-far solution. Then, this method returns True. Otherwise, it returns False. This means that this method returns True if and only if you have called either
evaluate()
orregister()
at least once.- Return type:
- Returns:
True if the current-best solution can be queried.
- has_log()[source]¶
Will any information of this process be logged?
Only if this method returns True, invoking
add_log_section()
makes any sense. Otherwise, the data would just be discarded.- Retval TrueTrue:
if the process is associated with a log output
- Retval False:
if no information is stored in a log output
- Return type:
- initialize()[source]¶
Raise an error because this method shall never be called.
- Raises:
ValueError – always
- Return type:
- register(x, f)[source]¶
Register a solution x with externally-evaluated objective value.
This function is equivalent to
evaluate()
, but receives the objective value as a parameter. In some problems, algorithms can compute the objective value of a solution very efficiently without passing it to the objective function. For example, on the Traveling Salesperson Problem with n cities, if you have a tour of known length and swap two cities in it, then you can compute the new overall tour length in O(1) instead of the O(n) that a full evaluation would cost. In such a case, you can use register() instead of evaluate().
x must be provided if f marks an improvement. In this case, the contents of x will be copied to an internal variable remembering the best-so-far solution. If f is not an improvement, you may pass in None for x or just any valid point in the search space.
For each candidate solution you construct, you must call either
evaluate()
orregister()
. This is because these two functions also count the objective function evaluations (FEs) that have passed. This is needed to check the termination criterion, for instance.
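To make the Traveling Salesperson example concrete, here is a hedged sketch; dist, tour, and tour_length are hypothetical state that the algorithm maintains itself, and the O(1) update assumes a symmetric distance matrix and non-adjacent positions i and j. The caller is expected to have checked should_terminate() beforehand.

def swap_and_register(process, dist, tour, tour_length, i, j):
    # Swap the cities at positions i and j and update the tour length in O(1).
    n = len(tour)
    old = new = 0
    for k in (i, j):                      # only the four edges touching i and j change
        old += dist[tour[k - 1]][tour[k]] + dist[tour[k]][tour[(k + 1) % n]]
    tour[i], tour[j] = tour[j], tour[i]   # apply the move in place
    for k in (i, j):
        new += dist[tour[k - 1]][tour[k]] + dist[tour[k]][tour[(k + 1) % n]]
    tour_length += new - old              # O(1) update instead of an O(n) re-evaluation
    process.register(tour, tour_length)   # counts the FE and keeps the best tour
    return tour_length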
- should_terminate()[source]¶
Check whether the optimization process should terminate.
If this function returns True, the optimization process must not perform any objective function evaluations anymore. It will automatically become True when a termination criterion is hit or if anyone calls
terminate()
, which happens also at the end of a with statement.Generally, the termination criterion is configured by the methods
set_max_fes()
,set_max_time_millis()
, andset_goal_f()
of theExecution
builder. Furthermore, if the objective function has a finitelower_bound()
, then this lower bound is also used as goal objective value if no goal objective value is specified viaset_goal_f()
.should_terminate()
then returns True as soon as any one of the configured criteria is met, i.e., the process terminates when the earliest one of the criteria is met.- Return type:
- Returns:
True if the process should terminate, False if not
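A hedged sketch of how these termination criteria are typically configured on the Execution builder (the concrete limits are illustrative; a complete setup would of course also configure the algorithm, the spaces, and the objective function):

from moptipy.api.execution import Execution   # the builder for optimization runs

ex = Execution()
ex.set_max_fes(10_000)           # stop after at most 10000 objective function evaluations
ex.set_max_time_millis(2_000)    # ... or after two seconds of runtime
ex.set_goal_f(0)                 # ... or as soon as a solution reaching quality 0 is found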
- terminate()[source]¶
Terminate this process.
This function is automatically called at the end of the with statement, but can also be called by the algorithm when it is finished and is also invoked automatically when a termination criterion is hit. After the first time this method is invoked,
should_terminate()
becomes True.- Return type:
- moptipy.api.process.check_goal_f(goal_f, none_is_ok=False)[source]¶
Check the goal objective value.
This is a small utility method that validates whether a goal objective value is valid.
- Parameters:
- Return type:
- Returns:
the goal objective value, or None
- Raises:
TypeError – if goal_f is None (and None is not allowed) or neither an int nor a float
ValueError – if goal_f is invalid
- moptipy.api.process.check_max_fes(max_fes, none_is_ok=False)[source]¶
Check the maximum FEs.
This is a small utility method that validates whether a maximum for the objective function evaluations (FEs) is valid.
- moptipy.api.process.check_max_time_millis(max_time_millis, none_is_ok=False)[source]¶
Check the maximum time in milliseconds.
This is a small utility method that validates whether a maximum for the milliseconds that can be used as runtime limit is valid.
- Parameters:
- Return type:
- Returns:
the maximum time in milliseconds, or None
- Raises:
TypeError – if max_time_millis is None (and None is not allowed) or not an int
ValueError – if max_time_millis is invalid
moptipy.api.space module¶
Provide the functionality to access search and solution spaces.
A Space
is the abstraction of the data structures for solutions and points in the search space that need to be generated, copied, and stored during the optimization process. This allows us to develop black-box algorithms while still being able to properly remember the best solutions, store them as text strings in log files, and validate whether they are correct.
All search or solution spaces in moptipy inherit from Space
. If you implement a new space, you should test it with the pre-defined unit test routine validate_space()
.
The following pre-defined spaces are currently available (a brief instantiation sketch follows the list):
BitStrings
, the space of n-dimensional bit stringsIntSpace
, a space of n-dimensional integer strings, where each element is between predefined inclusive bounds min_value…max_value.Permutations
is a special version of theIntSpace
where all elements are permutations of a base stringblueprint
. This means that it can represent permutations both with and without repetitions. Depending on the base string, each element may occur an element-specific number of times. For the base string (-1, -1, 2, 7, 7, 7), for example, -1 may occur twice, 2 can occur once, and 7 three times.VectorSpace
is the space of n-dimensional floating point number vectors.
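As a hedged illustration of how such spaces could be instantiated; the module paths under moptipy.spaces and the constructor/factory calls below are assumptions based on the descriptions above:

from moptipy.spaces.bitstrings import BitStrings       # assumed module path
from moptipy.spaces.permutations import Permutations   # assumed module path

bits = BitStrings(16)            # bit strings of dimension 16
perm = Permutations.standard(5)  # assumed factory: permutations of 0..4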
- class moptipy.api.space.Space[source]¶
Bases:
Component
A class to represent both search and solution spaces.
The space basically defines a container data structure and basic operations that we can apply to it. For example, a solution space contains all the possible solutions to an optimization problem. All of them are instances of one data structure. An optimization algorithm as well as a black-box process needs to be able to create and copy such objects. In order to store the solutions we found in a text file, we must further be able to translate them to strings. We should also be able to parse such strings. It is also important to detect whether two objects are the same and whether the contents of an object are valid. All of this functionality is offered by the Space class.
- copy(dest, source)[source]¶
Copy one instance of the data structure to another one.
Notice that the first parameter of this method is the destination, which will be overwritten by copying the contents of the second parameter over it.
- Parameters:
dest – the destination data structure, whose contents will be overwritten with those from source
source – the source data structure, which remains unchanged and whose contents will be copied to dest
- Return type:
- create()[source]¶
Generate an instance of the data structure managed by the space.
The state/contents of this data structure are undefined. It may not pass the
validate()
method.- Return type:
- Returns:
the new instance
- from_str(text)[source]¶
Transform a string text to one element of the space.
This method should be implemented as the inverse of to_str(). It should check the validity of the result before returning it. It may not always be possible to implement this method, but you should try.
- is_equal(x1, x2)[source]¶
Check if the contents of two instances of the data structure are equal.
- Parameters:
x1 – the first instance
x2 – the second instance
- Return type:
- Returns:
True if the contents are equal, False otherwise
- n_points()[source]¶
Get the approximate number of different elements in the space.
This operation can help us when doing tests of the space API implementations. If we know how many points exist in the space, then we can judge whether a method that randomly generates points is sufficiently random, for instance.
By default, this method simply returns 2. If you have a better approximation of the size of the space, then you should override it.
- Return type:
- Returns:
the approximate scale of the space
- to_str(x)[source]¶
Obtain a textual representation of an instance of the data structure.
This method should convert an element of the space to a string representation that is parseable by from_str() and should ideally not be too verbose. For example, when converting a list or array x of integers to a string, one could simply do ";".join([str(xx) for xx in x]), which yields a semicolon-separated list without any wasted space.
Notice that this method is used by the
Logger
when storing the final optimization results in the log files in form of aTextLogSection
created viatext()
. By implementing from_str() and to_str() as exact inverses of each other, you can thus ensure that you can always automatically load the results of your optimization runs from the log files created via set_log_file()
of theExecution
class.- Parameters:
x – the instance
- Return type:
- Returns:
the string representation of x
- validate(x)[source]¶
Check whether a given point in the space is valid.
This function should be implemented such that it very carefully checks whether the argument x is a valid element of this space. It should check the Python data type of x and the type of its components and raise a TypeError if it does not match the exact requirements. It should also check whether the value of each element of x is permitted. Once this function returns without throwing an exception, the user can rely on the data structure x being correct.
For example, if we have a space of
permutations
of the values from 1 to n, where the elements are represented asnumpy.ndarray
objects, then this function should first check whether x is indeed an instance ofnumpy.ndarray
. If not, it should raise a TypeError. Then it could check whether the length of x is indeed n and raise a ValueError otherwise. It could then check whether each element of x is from 1..n and occurs exactly once (and otherwise raise a ValueError). Moreover, it should also check whether the numpy dtype of x is an appropriate integer data type and raise a ValueError otherwise. Hence, if this function checks an element x and does not raise any error, the user can rely on this element x being, indeed, a fully valid permutation. In our system, we use this method, for example, at the end of optimization processes. Every solution that is written to a log file must pass through this method. In other words, we ensure that only valid solutions are stored. If the optimization algorithm or a search operator has a bug and sometimes produces invalid data structures, this a) helps us find the bug and b) prevents us from storing invalid solutions. It is also strongly encouraged that the
from_str()
method of anySpace
implementation should run its results throughvalidate()
. Sincefrom_str()
can be used to, e.g., parse the data from the result sections of log files, this ensures that no corrupted or otherwise invalid data is parsed into an application. See https://thomasweise.github.io/moptipy/#data-formats for more information on log files.- Parameters:
x – the point
- Raises:
TypeError – if the point x (or one of its elements, if applicable) has the wrong data type
ValueError – if the point x is invalid and/or simply is not an element of this space
- Return type:
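Putting these methods together, here is a hedged sketch of a tiny Space implementation for fixed-length lists of small integers; it is purely illustrative and not one of the pre-defined moptipy spaces:

from moptipy.api.space import Space

class IntListSpace(Space):
    """An illustrative space of lists holding n integers from 0..9."""

    def __init__(self, n: int) -> None:
        super().__init__()
        self.n = n

    def create(self) -> list[int]:
        return [0] * self.n                   # contents are undefined by contract

    def copy(self, dest: list[int], source: list[int]) -> None:
        dest[:] = source                      # overwrite dest with the contents of source

    def to_str(self, x: list[int]) -> str:
        return ";".join(str(xx) for xx in x)  # compact, parseable representation

    def from_str(self, text: str) -> list[int]:
        x = [int(s) for s in text.split(";")]
        self.validate(x)                      # never return unvalidated data
        return x

    def is_equal(self, x1: list[int], x2: list[int]) -> bool:
        return x1 == x2

    def validate(self, x: list[int]) -> None:
        if not isinstance(x, list):
            raise TypeError(f"x must be a list, but is {type(x)}.")
        if len(x) != self.n:
            raise ValueError(f"x must have length {self.n}, but has {len(x)}.")
        for xx in x:
            if not isinstance(xx, int) or not 0 <= xx <= 9:
                raise ValueError(f"invalid element {xx!r} in x.")

    def n_points(self) -> int:
        return 10 ** self.n                   # ten choices per position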
- moptipy.api.space.check_space(space, none_is_ok=False)[source]¶
Check whether an object is a valid instance of
Space
.- Parameters:
- Return type:
- Returns:
the object
- Raises:
>>> check_space(Space())
Space
>>> check_space(Space(), True)
Space
>>> check_space(Space(), False)
Space
>>> try:
...     check_space('A')
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is str, namely 'A'.
>>> try:
...     check_space('A', True)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is str, namely 'A'.
>>> try:
...     check_space('A', False)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is str, namely 'A'.
>>> try:
...     check_space(None)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is None.
>>> print(check_space(None, True))
None
>>> try:
...     check_space(None, False)
... except TypeError as te:
...     print(te)
space should be an instance of moptipy.api.space.Space but is None.
moptipy.api.subprocesses module¶
Different ways to transform and slice processes.
In this module, we provide some routines that can be used to slice off parts of the computational budget of a given process for running (sub-)algorithms. The following functions are included:
for_fes()
allows for creating a sub-process that forwards all method calls to the original process but will perform at most a given number of objective function evaluations.from_starting_point()
creates a sub-process that has the current-best solution pre-set to a given point in the search space and its quality. If the best solution is improved upon, the provided point will be overwritten in place.without_should_terminate()
wraps a process in such a way that the termination criterionshould_terminate()
does not need to be checked anymore. Instead, once the optimization must stop, it will throw an internal exception and catch it again. This makes it possible to passevaluate()
to externally implemented algorithms that do not care about the moptipy API.
The utility function get_remaining_fes()
returns a number representing the remaining objective function evaluations of a given Process
. If that process does not have an FE-based termination criterion, it will instead return a very big number.
- class moptipy.api.subprocesses.T¶
the type variable for single- and multi-objective processes.
alias of TypeVar(‘T’, ~moptipy.api.process.Process, ~moptipy.api.mo_process.MOProcess)
- moptipy.api.subprocesses.for_fes(process, max_fes)[source]¶
Create a sub-process that can run for the given number of FEs.
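A hedged usage sketch: since every Process is a context manager, the sliced budget can be used in a with block; inner_algorithm is a hypothetical algorithm instance and the budget of 1000 FEs is illustrative.

from moptipy.api.subprocesses import for_fes

# Grant a hypothetical inner algorithm at most 1000 FEs of the remaining budget.
with for_fes(process, 1000) as sub:
    inner_algorithm.solve(sub)   # sub forwards all calls but enforces the FE cap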
- moptipy.api.subprocesses.from_starting_point(owner, in_and_out_x, f)[source]¶
Create a sub-process searching from one starting point.
This process is especially useful in conjunction with class
Op0Forward
. This class allows forwarding the nullary search operator to the functionget_copy_of_best_x()
. This way, the first point sampled by a local search can be the point specified as in_and_out_x, which effectively seeds the local search. To dovetail with this seeding mechanism, no FEs are counted at the beginning of the process as long as all points to be evaluated are equal to in_and_out_x. As soon as the first point different from in_and_out_x is evaluated, FE counting starts.
Equally-good solutions will also be accepted, i.e., stored into in_and_out_x. This costs a little bit of runtime, but is normally the preferred behavior: On many problems, making neutral moves (i.e., drifting) is more beneficial than only accepting strict improvements. This is why
rls
outperforms the normalhill_climber
on thejssp
.
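A hedged usage sketch under similar assumptions; x0 is a point in the search space with known quality f0, and local_search is a hypothetical algorithm instance:

from moptipy.api.subprocesses import from_starting_point

# Seed a hypothetical local search with the starting point x0 of known quality f0.
with from_starting_point(process, x0, f0) as sub:
    local_search.solve(sub)
# any improvement found by local_search has now been written back into x0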
- moptipy.api.subprocesses.get_remaining_fes(process)[source]¶
Get a finite number representing the remaining FEs of a process.
If the process has the maximum objective function evaluations (FEs) set (see
get_max_fes()
), then this method returns the maximum FEs minus the consumed FEs (seeget_consumed_fes()
). Otherwise, i.e., ifget_max_fes()
returns None, this function returns a very large number, namely 9223372036854775807, i.e., (2 ** 63) - 1. This number is so high that it will always be impossible to consume it in terms of FEs, but it is still finite. When slicing off budgets or computing things based on the remaining budget, this makes it unnecessary for us to deal with special cases.- Parameters:
process (
Process
) – the process- Return type:
- Returns:
an integer representing the remaining FEs of the process. If no FE limit is imposed by process, a very large number will be returned.
>>> from moptipy.api.process import Process as Proc
>>> class X(Proc):
...     def get_max_fes(self):
...         return None
...     def get_consumed_fes(self):
...         return 123
>>> get_remaining_fes(X())
9223372036854775807
>>> class Y(X):
...     def get_max_fes(self):
...         return 456
>>> get_remaining_fes(Y())
333
- moptipy.api.subprocesses.without_should_terminate(algorithm, process)[source]¶
Apply an algorithm that does not call should_terminate to a process.
If we use an algorithm from an external library, this algorithm may ignore the proper usage of our API. With this method, we try to make sure that its evaluation calls remain consistent with the termination criterion of the moptipy API.
Before calling
evaluate()
,register()
, orf_evaluate()
, an optimization algorithm must check if it should instead stop viashould_terminate()
. If the algorithm called should_terminate()
and was told to stop but invoked the evaluation routines anyway, an exception will be thrown and the process force-terminates. If the algorithm did not call should_terminate()
but was supposed to stop, the results ofevaluate()
may be arbitrary (or positive infinity).This function here can be used to deal with processes that do not invoke
should_terminate()
. It will invoke this method by itself beforeevaluate()
,register()
, andf_evaluate()
and terminate the algorithm with an exception if necessary. It will then catch the exception and bury it.Thus, we can now use algorithms that ignore our termination criteria and still force them to terminate when they should.
Some algorithms using this system are implemented in
scipy
andpdfo
. These modules import external algorithms from other libraries which, of course, know nothing about how our moptipy works. They only accept the objective function and cannot handle the beautifulshould_terminate()
-based termination criteria. By usingwithout_should_terminate()
, however, we can still safely use them within moptipy compliant scenarios.
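A hedged sketch of the call pattern; run_external stands in for any routine that only knows about the objective function and never checks should_terminate() by itself:

from moptipy.api.subprocesses import without_should_terminate

def run_external(proc) -> None:
    # Stand-in for an external optimizer: it only ever calls proc.evaluate(...).
    x = proc.create()
    random = proc.get_random()
    while True:                   # would normally loop forever ...
        random.shuffle(x)         # hypothetical move; assumes x supports shuffling
        proc.evaluate(x)          # ... but the wrapper stops it once the budget is exhausted

without_should_terminate(run_external, process)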