moptipy.algorithms.so.vector package

Numerical optimization algorithms working on real-valued vectors.

The search space here is always VectorSpace, which prescribes a box in the space of n-dimensional real vectors that is constrained by a lower and an upper bound in each dimension. Algorithms working on such spaces can use the rich mathematical relationships and operations defined over continuous spaces.

Submodules

moptipy.algorithms.so.vector.cmaes_lib module

Provides the CMA-ES family of algorithms from the library cmaes.

The Covariance Matrix Adaptation Evolution Strategy, CMA-ES for short, is a very efficient optimization algorithm for small- and mid-scale numerical/continuous optimization problems.

Here, we wrap our moptipy API around the beautiful library cmaes by Masashi Shibata and Masahiro Nomura at https://pypi.org/project/cmaes/. They provide a CMA-ES implementation based on the ask-tell interface. In this interface, you repeatedly ask the model for sample points in the search space and evaluate them. Then you feed the points and their corresponding objective values back to the CMA-ES algorithm so that it can update its model, and the cycle repeats.
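
For illustration, here is a small sketch of this ask-tell loop using the cmaes library directly, i.e., without moptipy, following the quick-start example from its project page:

  import numpy as np
  from cmaes import CMA

  def quadratic(x1: float, x2: float) -> float:
      """A simple two-dimensional test function."""
      return (x1 - 3.0) ** 2 + (10.0 * (x2 + 2.0)) ** 2

  optimizer = CMA(mean=np.zeros(2), sigma=1.3)  # the initial model
  for generation in range(50):
      solutions = []
      for _ in range(optimizer.population_size):
          x = optimizer.ask()               # sample a point from the model
          solutions.append((x, quadratic(x[0], x[1])))
      optimizer.tell(solutions)             # feed results back to the model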

  1. Nikolaus Hansen and Andreas Ostermeier. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation. 9(2):159-195. Summer 2001. https://dx.doi.org/10.1162/106365601750190398

  2. Nikolaus Hansen. The CMA Evolution Strategy: A Tutorial. arXiv:1604.00772, 2016. https://arxiv.org/abs/1604.00772

  3. Raymond Ros and Nikolaus Hansen. A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. In Günter Rudolph, Thomas Jansen, Nicola Beume, Simon Lucas, and Carlo Poloni, eds., Proceedings of the 10th International Conference on Parallel Problem Solving From Nature (PPSN X), September 13-17, 2008, Dortmund, Germany, pages 296-305. Volume 5199 of Lecture Notes in Computer Science. Berlin/Heidelberg, Germany: Springer. http://dx.doi.org/10.1007/978-3-540-87700-4_30 https://hal.inria.fr/inria-00287367/document

  4. Nikolaus Hansen. Benchmarking a BI-Population CMA-ES on the BBOB-2009 Function Testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, July 8-12, 2009, Montreal, Québec, Canada, pages 2389-2396. New York, USA: ACM. http://dx.doi.org/10.1145/1570256.1570333 https://hal.inria.fr/inria-00382093/document

class moptipy.algorithms.so.vector.cmaes_lib.BiPopCMAES(space, log_restarts=False)[source]

Bases: CMAES

The bi-population CMA-ES based on the class CMA from the library cmaes.

This algorithm combines two restart strategies for the normal CMA-ES under the hood: one where the population size increases exponentially, and one where varying small population sizes are used.

We here implement the bi-population CMA-ES algorithm in exactly the same way as the authors of the cmaes library do on https://pypi.org/project/cmaes/.

  1. Nikolaus Hansen. Benchmarking a BI-Population CMA-ES on the BBOB-2009 Function Testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, July 8-12, 2009, Montreal, Québec, Canada, pages 2389-2396. New York, USA: ACM. http://dx.doi.org/10.1145/1570256.1570333 https://hal.inria.fr/inria-00382093/document
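
The following sketch shows how BiPopCMAES could be plugged into moptipy's Execution builder to minimize a simple sphere function. The constructor arguments of VectorSpace (dimension, lower bound, upper bound) and the minimal Objective subclass are assumptions here; consult the documentation of those classes for the exact signatures.

  import numpy as np
  from moptipy.algorithms.so.vector.cmaes_lib import BiPopCMAES
  from moptipy.api.execution import Execution
  from moptipy.api.objective import Objective
  from moptipy.spaces.vectorspace import VectorSpace

  class Sphere(Objective):
      """The sphere function f(x) = sum(x_i ** 2)."""
      def evaluate(self, x) -> float:
          return float(np.square(x).sum())

  space = VectorSpace(3, -5.0, 5.0)    # a 3-dimensional box [-5, 5]^3
  ex = (Execution()
        .set_solution_space(space)
        .set_objective(Sphere())
        .set_algorithm(BiPopCMAES(space))
        .set_max_fes(10_000))          # terminate after 10000 FEs
  with ex.execute() as process:
      print(process.get_best_f())      # best objective value found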

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

log_restarts: Final[bool]

should we log the FEs at which restarts happen?

solve(process)[source]

Apply the bi-population CMA-ES to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

class moptipy.algorithms.so.vector.cmaes_lib.CMAES(space)[source]

Bases: Algorithm

A wrapper for the CMA algorithm from cmaes.

  1. Nikolaus Hansen and Andreas Ostermeier. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation. 9(2):159-195. Summer 2001. https://dx.doi.org/10.1162/106365601750190398

  2. Nikolaus Hansen. The CMA Evolution Strategy: A Tutorial. arXiv:1604.00772, 2016. https://arxiv.org/abs/1604.00772

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

solve(process)[source]

Apply the external cmaes implementation to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

space: Final[VectorSpace]

the vector space defining the dimensions and bounds

class moptipy.algorithms.so.vector.cmaes_lib.SepCMAES(space)[source]

Bases: CMAES

The separable CMA-ES based on the class SepCMA from the library cmaes.

This is a variant of the CMA-ES in which the covariance matrix is constrained to be diagonal. Because fewer parameters need to be learned, the learning rate for the covariance matrix can be increased. This algorithm is suitable for problems of larger scale, i.e., with high dimension, where the full CMA-ES may become rather slow in terms of runtime. In such cases, the loss of solution quality caused by the implicit assumption that the objective function is separable may be acceptable in exchange for the gain in speed. By learning only the diagonal of the covariance matrix, the algorithm implicitly assumes that there are no mutual influences between the different decision variables. Of course, if the optimization problem actually is separable, the algorithm will be faster than the normal CMA-ES while reaching the same solution quality.
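
As a small illustration of what separability means here (this example is not part of the library): a separable function decomposes into independent per-coordinate terms, while, e.g., a rotated variant of the same function couples the decision variables:

  import numpy as np

  def separable_sphere(x: np.ndarray) -> float:
      """f(x) = sum_i x_i ** 2: each variable contributes independently."""
      return float(np.sum(x ** 2))

  rng = np.random.default_rng(1)
  q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # a random rotation matrix

  def rotated_sphere(x: np.ndarray) -> float:
      """The rotation couples the variables: no longer separable."""
      return separable_sphere(q @ x)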

  1. Raymond Ros and Nikolaus Hansen. A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. In Günter Rudolph, Thomas Jansen, Nicola Beume, Simon Lucas, and Carlo Poloni, eds., Proceedings of the 10th International Conference on Parallel Problem Solving From Nature (PPSN X), September 13-17, 2008, Dortmund, Germany, pages 296-305. Volume 5199 of Lecture Notes in Computer Science. Berlin/Heidelberg, Germany: Springer. http://dx.doi.org/10.1007/978-3-540-87700-4_30 https://hal.inria.fr/inria-00287367/document

solve(process)[source]

Apply the separable CMA-ES version to an optimization problem.

Parameters:

process (Process) – the optimization problem to solve

Return type:

None

moptipy.algorithms.so.vector.pdfo module

Provides the algorithm BOBYQA from the library pdfo.

The library “Powell’s Derivative-Free Optimization solvers” (pdfo) at https://www.pdfo.net provides an implementation of the “Bound Optimization BY Quadratic Approximation” algorithm, or BOBYQA for short. The library is dedicated to the late Professor M. J. D. Powell FRS (1936-2015) and maintained by Tom M. Ragonneau and Zaikun Zhang. Here, we wrap it into a class that complies with our moptipy API. This class offers no additional logic but directly defers to the corresponding function in the pdfo library.
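
For orientation, a direct call to the pdfo library, outside of moptipy, may look roughly like the following sketch. The pdfo entry point and the method="bobyqa" option are documented at https://www.pdfo.net; treat the exact bounds format used here as an assumption and consult that documentation for details.

  import numpy as np
  from pdfo import pdfo
  from scipy.optimize import Bounds

  def sphere(x: np.ndarray) -> float:
      """A simple smooth test function."""
      return float(np.sum(np.square(x)))

  res = pdfo(sphere, x0=np.array([2.0, -1.5]), method="bobyqa",
             bounds=Bounds([-5.0, -5.0], [5.0, 5.0]))
  print(res.x, res.fun)  # best point and best objective value found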

  1. Michael James David Powell. The BOBYQA Algorithm for Bound Constrained Optimization without Derivatives. Technical Report DAMTP 2009/NA06. Department of Applied Mathematics and Theoretical Physics, Cambridge University, Cambridge, UK, 2009. https://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf

  2. Tom M. Ragonneau and Zaikun Zhang. PDFO: a cross-platform package for Powell’s derivative-free optimization solvers. arXiv preprint arXiv:2302.13246v1 [math.OC], February 26, 2023. Ithaca, NY, USA: Cornell University Library. https://arxiv.org/pdf/2302.13246v1

class moptipy.algorithms.so.vector.pdfo.BOBYQA(op0, space)[source]

Bases: Algorithm0

A wrapper for the bobyqa algorithm from pdfo.

The Bound Optimization BY Quadratic Approximation (BOBYQA) algorithm was developed by Michael James David Powell and is provided by the pdfo library.

  1. Michael James David Powell. The BOBYQA Algorithm for Bound Constrained Optimization without Derivatives. Technical Report DAMTP 2009/NA06. Department of Applied Mathematics and Theoretical Physics, Cambridge University, Cambridge, UK, 2009. https://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

solve(process)[source]

Apply the external bobyqa implementation to an optimization problem.

Parameters:

process (Process) – the black-box process object

Return type:

None

space: Final[VectorSpace]

the vector space defining the dimensions and bounds

moptipy.algorithms.so.vector.scipy module

A set of numerical optimization algorithms from SciPy.

The function scipy.optimize.minimize() provides a set of very efficient numerical/continuous optimization methods. Here we wrap a set of them into our moptipy Process API. All algorithms provided in this module are imported and wrapped from SciPy (https://scipy.org).

By using the without_should_terminate() tool, we can enforce the termination criteria set via the Execution builder on external algorithms while piping all of their function evaluations through the evaluate() routine of the optimization Process. This way, we can make these external algorithms usable within moptipy in a transparent manner.
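
The following conceptual sketch, which is not moptipy's actual implementation, shows the general idea behind this pattern: every objective call of the external solver is routed through process.evaluate(), and a control-flow exception unwinds out of the solver once the termination criterion fires.

  class _Terminated(Exception):
      """Raised internally to unwind out of the external solver."""

  def run_external(external_minimize, process, x0):
      def f(x):
          if process.should_terminate():  # budget or goal reached?
              raise _Terminated
          return process.evaluate(x)      # a logged, counted FE
      try:
          external_minimize(f, x0)
      except _Terminated:
          pass  # the best-so-far solution is already stored in process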

class moptipy.algorithms.so.vector.scipy.BGFS(op0, space)[source]

Bases: SciPyAlgorithmWrapper

The wrapper for the BFGS algorithm in SciPy.

This is the quasi-Newton method by C. G. Broyden, Roger Fletcher, D. Goldfarb, and David F. Shanno (BFGS).

  1. Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. Chapter 6, Page 136. https://doi.org/10.1007/978-0-387-40065-5.

  2. Roger Fletcher. Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons. 1987. ISBN 978-0-471-91547-8.

  3. C. G. Broyden. The convergence of a class of double-rank minimization algorithms. Journal of the Institute of Mathematics and Its Applications. 6(1):76-90. March 1970. http://dx.doi.org/10.1093/imamat/6.1.76

class moptipy.algorithms.so.vector.scipy.CG(op0, space)[source]

Bases: SciPyAlgorithmWrapper

The wrapper for the Conjugate Gradient algorithm in SciPy.

  1. Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. Chapter 5, Page 101.

class moptipy.algorithms.so.vector.scipy.DE(space)[source]

Bases: Algorithm

The Differential Evolution Algorithm as implemented by SciPy.

At this point, we do not expose the many parameters of the function scipy.optimize.differential_evolution(). We only use the default settings. This may change in future releases.

  1. Rainer Storn and Kenneth Price. Differential Evolution - A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. Journal of Global Optimization 11(4):341-359. December 1997. https://doi.org/10.1023/A:1008202821328. https://www.researchgate.net/publication/227242104
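
Since the wrapper uses the default settings, the underlying SciPy call is, for orientation, essentially of the following form:

  from scipy.optimize import differential_evolution

  # Minimize the sphere function over the box [-5, 5]^3, default settings.
  result = differential_evolution(
      lambda x: float(sum(xi * xi for xi in x)),
      bounds=[(-5.0, 5.0)] * 3)
  print(result.x, result.fun)  # best point and best objective value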

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

solve(process)[source]

Apply the algorithm from SciPy to an optimization problem.

Basically, this wraps scipy.optimize.differential_evolution() into our process API and invokes it.

Parameters:

process (Process) – the black-box process object

Return type:

None

space: Final[VectorSpace]

the vector space defining the dimensions and bounds

class moptipy.algorithms.so.vector.scipy.NelderMead(op0, space)[source]

Bases: SciPyAlgorithmWrapper

The Downhill Simplex, a.k.a. the Nelder-Mead algorithm.

The function scipy.optimize.minimize() with parameter “Nelder-Mead” performs continuous optimization using the Downhill Simplex algorithm, also known as the Nelder-Mead algorithm. Here, we wrap it into our API.
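
For orientation, the underlying SciPy call has the following form:

  from scipy.optimize import minimize

  # Minimize a shifted quadratic, starting from the origin.
  res = minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                 x0=[0.0, 0.0], method="Nelder-Mead")
  print(res.x)  # approximately [1.0, -2.0]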

SciPy provides the following references:

  1. Fuchang Gao and Lixing Han. Implementing the Nelder-Mead Simplex Algorithm with Adaptive Parameters. Computational Optimization and Applications. 51(1):259-277. January 2012. https://doi.org/10.1007/s10589-010-932

  2. J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. The Computer Journal. 7(4):308-313. January 1965. Oxford University Press (OUP). http://dx.doi.org/10.1093/COMJNL/7.4.308 https://people.duke.edu/~hpgavin/cee201/Nelder+Mead-ComputerJournal-1965.pdf

  3. M. H. Wright. Direct Search Methods: Once Scorned, Now Respectable. In D.F. Griffiths and G.A. Watson (Eds.) Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis. Harlow, UK: Addison Wesley Longman, pp. 191-208.

  4. Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. Chapter 9.5, Page 238. https://doi.org/10.1007/978-0-387-40065-5.

class moptipy.algorithms.so.vector.scipy.Powell(op0, space)[source]

Bases: SciPyAlgorithmWrapper

Powell’s Algorithm.

The function scipy.optimize.minimize() with parameter “Powell” for continuous optimization.

  1. Michael James David Powell. An Efficient Method for Finding the Minimum of a Function of Several Variables without Calculating Derivatives. The Computer Journal. 7(2):155-162. 1964. https://doi.org/10.1093/comjnl/7.2.155

class moptipy.algorithms.so.vector.scipy.SLSQP(op0, space)[source]

Bases: SciPyAlgorithmWrapper

The Sequential Least Squares Programming (SLSQP) algorithm in SciPy.

  1. Dieter Kraft. Algorithm 733: TOMP-Fortran modules for optimal control calculations. ACM Transactions on Mathematical Software. 20(3):262-281. September 1994. https://doi.org/10.1145/192115.192124

class moptipy.algorithms.so.vector.scipy.SciPyAlgorithmWrapper(name, op0, space)[source]

Bases: Algorithm0

A wrapper for the SciPy API.

An instance of this class may be re-used, but it must only be used for problems of the same dimension.
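
A minimal construction sketch: the nullary operator op0 supplies the initial point for the wrapped SciPy method. Here we assume moptipy's Op0Uniform operator, which samples a uniform random point inside the box; check the moptipy documentation for the exact module path and constructor signatures.

  from moptipy.algorithms.so.vector.scipy import Powell
  from moptipy.operators.vectors.op0_uniform import Op0Uniform
  from moptipy.spaces.vectorspace import VectorSpace

  space = VectorSpace(2, -5.0, 5.0)             # a 2-dimensional box
  algorithm = Powell(Op0Uniform(space), space)  # op0 creates the start point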

log_parameters_to(logger)[source]

Log the parameters of the algorithm to a logger.

Parameters:

logger (KeyValueLogSection) – the logger for the parameters

Return type:

None

solve(process)[source]

Apply the algorithm from SciPy to an optimization problem.

Basically, this wraps a specific configuration of scipy.optimize.minimize() into our process API and invokes it.

Parameters:

process (Process) – the black-box process object

Return type:

None

space: Final[VectorSpace]

the vector space defining the dimensions and bounds

class moptipy.algorithms.so.vector.scipy.TNC(op0, space)[source]

Bases: SciPyAlgorithmWrapper

The Truncated Newton Method from SciPy.

  1. Stephen G. Nash. Newton-Type Minimization via the Lanczos Method. SIAM Journal on Numerical Analysis. 21(4):770-783. August 1984. https://dx.doi.org/10.1137/0721052.

  2. Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. https://doi.org/10.1007/978-0-387-40065-5.