https://pypi.org/project/cmaes/. They provide a CMA-ES implementation based on the ask-tell interface. In this interface, you repeatedly query sample points in the search space from the model and evaluate them. You then feed the points and their corresponding objective values back to the CMA-ES algorithm so that it can update its model, and the cycle repeats.
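For illustration, a minimal sketch of this ask-tell cycle with the cmaes library could look as follows; the simple sphere objective and all parameter values are illustrative assumptions, not part of this module:

    import numpy as np
    from cmaes import CMA

    def sphere(x: np.ndarray) -> float:
        """A toy objective function: the sum of squared coordinates."""
        return float(np.sum(x ** 2))

    optimizer = CMA(mean=np.zeros(3), sigma=1.0)  # initial mean and step size
    for _generation in range(50):
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()               # query a sample point from the model
            solutions.append((x, sphere(x)))  # evaluate it
        optimizer.tell(solutions)             # feed points and values back to update the model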
Nikolaus Hansen and Andreas Ostermeier. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation. 9(2):159-195. Summer 2001. https://dx.doi.org/10.1162/106365601750190398
Nikolaus Hansen. The CMA Evolution Strategy: A Tutorial. arXiv:1604.00772, 2016. https://arxiv.org/abs/1604.00772
Raymond Ros and Nikolaus Hansen. A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. In Günter Rudolph, Thomas Jansen, Nicola Beume, Simon Lucas, and Carlo Poloni, eds., Proceedings of the 10th International Conference on Parallel Problem Solving From Nature (PPSN X), September 13-17, 2008, Dortmund, Germany, pages 296-305. Volume 5199 of Lecture Notes in Computer Science. Berlin/Heidelberg, Germany: Springer. http://dx.doi.org/10.1007/978-3-540-87700-4_30 https://hal.inria.fr/inria-00287367/document
Nikolaus Hansen. Benchmarking a BI-Population CMA-ES on the BBOB-2009 Function Testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, July 8-12, 2009, Montreal, Québec, Canada, pages 2389-2396. New York, USA: ACM. http://dx.doi.org/10.1145/1570256.1570333 https://hal.inria.fr/inria-00382093/document
Bases: CMAES
The bi-population CMA-ES based on Class CMA from Library cmaes.
This algorithm combines two restart strategies for the normal CMA-ES under the hood: one where the population size increases exponentially, and one where varying small population sizes are used.
Here, we implement the bi-population CMA-ES algorithm in exactly the same way as the authors of the cmaes library do at https://pypi.org/project/cmaes/.
Nikolaus Hansen. Benchmarking a BI-Population CMA-ES on the BBOB-2009 Function Testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, July 8-12, 2009, Montreal, Québec, Canada, pages 2389-2396. New York, USA: ACM. http://dx.doi.org/10.1145/1570256.1570333 https://hal.inria.fr/inria-00382093/document
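As a rough, much simplified sketch of the restart idea (not the exact implementation of this class), one can alternate between a regime whose population size grows exponentially and a regime with varying small population sizes; the objective, the regime schedule, and all constants below are illustrative assumptions:

    import numpy as np
    from cmaes import CMA

    def sphere(x: np.ndarray) -> float:
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(1)
    dim, base_popsize = 5, 8
    large_popsize = base_popsize
    for restart in range(6):
        if restart % 2 == 0:   # "large" regime: population size grows exponentially
            large_popsize *= 2
            popsize = large_popsize
        else:                  # "small" regime: a varying small population size
            popsize = base_popsize + int(rng.integers(0, base_popsize))
        opt = CMA(mean=rng.uniform(-1.0, 1.0, dim), sigma=1.0,
                  population_size=popsize)
        while not opt.should_stop():   # run this CMA-ES instance until it stalls
            solutions = []
            for _ in range(opt.population_size):
                x = opt.ask()
                solutions.append((x, sphere(x)))
            opt.tell(solutions)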
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
Final[bool] – should we log the FEs when the restarts happen or not?
Bases: Algorithm
A wrapper for the CMA algorithm from cmaes.
Nikolaus Hansen and Andreas Ostermeier. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation. 9(2):159-195. Summer 2001. https://dx.doi.org/10.1162/106365601750190398
Nikolaus Hansen. The CMA Evolution Strategy: A Tutorial. arXiv:1604.00772, 2016. https://arxiv.org/abs/1604.00772
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
Apply the bi-population CMA-ES to an optimization problem.
Final[VectorSpace] – the vector space defining the dimensions and bounds
Bases: CMAES
The Separable CMA-ES based on Class SepCMA from Library cmaes.
This is a variant of the CMA-ES in which the covariance matrix is constrained to be diagonal. Since there are then fewer parameters to learn, the learning rate for the covariance matrix can be increased. This algorithm is suitable for problems of larger scale, i.e., of high dimension, where the pure CMA-ES may become rather slow in terms of runtime. In that case, the loss of solution quality caused by the underlying assumption that the objective function is separable is acceptable compared to the gain in speed. By learning only the diagonal of the covariance matrix, the algorithm implicitly assumes that there are no mutual influences between the different decision variables. Of course, if the optimization problem is indeed separable, the algorithm will be faster than the normal CMA-ES at the same solution quality.
Raymond Ros and Nikolaus Hansen. A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. In Günter Rudolph, Thomas Jansen, Nicola Beume, Simon Lucas, and Carlo Poloni, eds., Proceedings of the 10th International Conference on Parallel Problem Solving From Nature (PPSN X), September 13-17, 2008, Dortmund, Germany, pages 296-305. Volume 5199 of Lecture Notes in Computer Science. Berlin/Heidelberg, Germany: Springer. http://dx.doi.org/10.1007/978-3-540-87700-4_30 https://hal.inria.fr/inria-00287367/document
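As a hedged sketch, the SepCMA class of the cmaes library can be driven with the same ask-tell cycle as the full CMA-ES; the separable objective and the dimension below are illustrative assumptions:

    import numpy as np
    from cmaes import SepCMA

    def sphere(x: np.ndarray) -> float:
        return float(np.sum(x ** 2))   # a separable objective suits SepCMA well

    optimizer = SepCMA(mean=np.zeros(40), sigma=2.0)  # a higher-dimensional problem
    while not optimizer.should_stop():
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            solutions.append((x, sphere(x)))
        optimizer.tell(solutions)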
Provides the Algorithm bobyqa from the Library pdfo.
The library “Powell’s Derivative-Free Optimization solvers” (pdfo) at https://www.pdfo.net provides an implementation of the “Bound Optimization BY Quadratic Approximation” algorithm, or BOBYQA for short. The library is dedicated to the late Professor M. J. D. Powell FRS (1936–2015) and maintained by Tom M. Ragonneau and Zaikun Zhang. Here, we wrap it into a class that complies with our moptipy API. This class offers no additional logic but directly defers to the function in the pdfo library.
Michael James David Powell. The BOBYQA Algorithm for Bound Constrained Optimization without Derivatives. Technical Report DAMTP 2009/NA06. Department of Applied Mathematics and Theoretical Physics, Cambridge University, Cambridge, UK, 2009. https://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
Tom M. Ragonneau and Zaikun Zhang. PDFO: a cross-platform package for Powell’s derivative-free optimization solvers. arXiv preprint arXiv:2302.13246v1 [math.OC], February 26, 2023. Ithaca, NY, USA: Cornell University Library. https://arxiv.org/pdf/2302.13246v1
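A minimal sketch of invoking BOBYQA directly through the pdfo library (roughly what the wrapper class defers to) could look as follows; the objective, bounds, and option values are illustrative assumptions:

    import numpy as np
    from pdfo import pdfo

    def sphere(x) -> float:
        return float(np.sum(np.asarray(x) ** 2))

    x0 = np.array([0.5, -0.5, 0.25])            # starting point
    bounds = np.array([[-1.0, 1.0]] * len(x0))  # one (lower, upper) row per variable
    result = pdfo(sphere, x0, method="bobyqa", bounds=bounds,
                  options={"maxfev": 500})      # cap the number of function evaluations
    print(result.x, result.fun)                 # best point found and its objective value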
Bases: Algorithm0
A wrapper for the bobyqa algorithm from pdfo.
The Bound Optimization BY Quadratic Approximation (BOBYQA) developed by Michael James David Powell and published by the pdfo library.
Michael James David Powell. The BOBYQA Algorithm for Bound Constrained Optimization without Derivatives. Technical Report DAMTP 2009/NA06. Department of Applied Mathematics and Theoretical Physics, Cambridge University, Cambridge, UK, 2009. https://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
Apply the external bobyqa implementation to an optimization problem.
Final[VectorSpace] – the vector space defining the dimensions and bounds
A set of numerical optimization algorithms from SciPy.
The function scipy.optimize.minimize() provides a set of very efficient numerical/continuous optimization methods. Here we wrap several of them into our moptipy Process API. All algorithms provided in this module are imported and wrapped from SciPy (https://scipy.org).
By using the without_should_terminate() tool, we can enforce the termination criteria set via the Execution builder on external algorithms while piping all their function evaluations through the evaluate() routine of the optimization Process. This way, we can make these external algorithms usable within moptipy in a transparent manner.
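The following is only an illustrative sketch of that idea, not moptipy's actual mechanism: every function evaluation of the external SciPy algorithm is piped through a wrapper that counts evaluations, remembers the best solution, and aborts the run via a (hypothetical) exception once the budget is exhausted:

    import numpy as np
    from scipy.optimize import minimize

    class BudgetExhausted(Exception):
        """Raised once the permitted number of function evaluations is used up."""

    def run_with_budget(objective, x0, budget: int, method: str = "Nelder-Mead"):
        best = {"x": None, "f": np.inf}   # best solution seen so far
        evaluations = 0

        def wrapped(x):
            nonlocal evaluations
            if evaluations >= budget:     # termination criterion reached
                raise BudgetExhausted
            evaluations += 1
            f = objective(x)
            if f < best["f"]:             # remember the best point encountered
                best["x"], best["f"] = np.copy(x), f
            return f

        try:
            minimize(wrapped, x0, method=method)
        except BudgetExhausted:
            pass                          # stop the external algorithm early
        return best["x"], best["f"]

    x, f = run_with_budget(lambda v: float(np.sum(v ** 2)),
                           np.array([1.0, -2.0]), budget=100)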
Bases: SciPyAlgorithmWrapper
The wrapper for the BFGS algorithm in SciPy.
This is the quasi-Newton method by C. G. Broyden, Roger Fletcher, D. Goldfarb, and David F. Shanno (BFGS).
Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. Chapter 6, Page 136. https://doi.org/10.1007/978-0-387-40065-5.
Roger Fletcher. Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons. 1987. ISBN 978-0-471-91547-8.
C. G. Broyden. The convergence of a class of double-rank minimization algorithms. Journal of the Institute of Mathematics and Its Applications. 6(1):76-90. March 1970. http://dx.doi.org/10.1093/imamat/6.1.76
Bases: SciPyAlgorithmWrapper
The wrapper for the Conjugate Gradient algorithm in SciPy.
Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. Chapter 5, Page 101.
Bases: Algorithm
The Differential Evolution Algorithm as implemented by SciPy.
At this point, we do not expose the many parameters of the function scipy.optimize.differential_evolution(). We only use the default settings. This may change in future releases.
Rainer Storn and Kenneth Price. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization 11(4):341-359. December 1997. https://doi.org/10.1023/A:1008202821328. https://www.researchgate.net/publication/227242104
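A minimal sketch of calling SciPy's differential evolution with its default settings, as described above, could look as follows; the objective, bounds, and seed are illustrative assumptions:

    import numpy as np
    from scipy.optimize import differential_evolution

    def sphere(x) -> float:
        return float(np.sum(np.asarray(x) ** 2))

    bounds = [(-5.0, 5.0)] * 4  # box constraints for four variables
    result = differential_evolution(sphere, bounds, seed=42)  # all other parameters at default
    print(result.x, result.fun)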
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
Apply the algorithm from SciPy to an optimization problem.
Basically, this wraps a specific configuration of scipy.optimize.minimize() into our process API and invokes it.
Final[VectorSpace] – the vector space defining the dimensions and bounds
Bases: SciPyAlgorithmWrapper
The Downhill Simplex, a.k.a. the Nelder-Mead Algorithm.
The function scipy.optimize.minimize() with parameter “Nelder-Mead” performs continuous optimization using the Downhill Simplex algorithm, a.k.a. the Nelder-Mead algorithm. Here we wrap it into our API.
SciPy provides the following reference:
Fuchang Gao and Lixing Han. Implementing the Nelder-Mead Simplex Algorithm with Adaptive Parameters. Computational Optimization and Applications. 51(1):259-277. January 2012. https://doi.org/10.1007/s10589-010-9329-3
J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. The Computer Journal. 7(4):308-313. January 1965. Oxford University Press (OUP). http://dx.doi.org/10.1093/COMJNL/7.4.308 https://people.duke.edu/~hpgavin/cee201/Nelder+Mead-ComputerJournal-1965.pdf
M. H. Wright. Direct Search Methods: Once Scorned, Now Respectable. In D.F. Griffiths and G.A. Watson (Eds.) Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis. Harlow, UK: Addison Wesley Longman, pp. 191-208.
Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. Chapter 9.5, Page 238. https://doi.org/10.1007/978-0-387-40065-5.
Bases: SciPyAlgorithmWrapper
Powell’s Algorithm.
The function scipy.optimize.minimize() with parameter “Powell” for continuous optimization.
Michael James David Powell. An Efficient Method for Finding the Minimum of a Function of Several Variables without Calculating Derivatives. The Computer Journal. 7(2):155-162. 1964. https://doi.org/10.1093/comjnl/7.2.155
Bases: SciPyAlgorithmWrapper
The Sequential Least Squares Programming (SLSQP) algorithm in SciPy.
Dieter Kraft. Algorithm 733: TOMP-Fortran modules for optimal control calculations. ACM Transactions on Mathematical Software. 20(3):262-281. September 1994. https://doi.org/10.1145/192115.192124
Bases: Algorithm0
A wrapper for the SciPy API.
An instance of this class may be re-used, but it must only be used for problems of the same dimension.
Log the parameters of the algorithm to a logger.
logger (KeyValueLogSection) – the logger for the parameters
Apply the algorithm from SciPy to an optimization problem.
Basically, this wraps a specific configuration of scipy.optimize.minimize() into our process API and invokes it.
Final[VectorSpace] – the vector space defining the dimensions and bounds
Bases: SciPyAlgorithmWrapper
The Truncated Newton Method from SciPy.
Stephen G. Nash. Newton-Type Minimization via the Lanczos Method. SIAM Journal on Numerical Analysis. 21(4):770-783. August 1984. https://dx.doi.org/10.1137/0721052.
Jorge Nocedal and Stephen J. Wright. Numerical Optimization. In Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer. 2006. Second Edition. ISBN: 978-0-387-30303-1. https://doi.org/10.1007/978-0-387-40065-5.