A bilevel approach to parameter tuning of optimization algorithms using evolutionary computing: Understanding optimization algorithms through optimization
University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre. (Produktion och automatiseringsteknik, Production and Automation Engineering)
2018 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Most optimization problems found in the real world cannot be solved using analytical methods. For these types of difficult optimization problems, an alternative approach is needed. Metaheuristics are a category of optimization algorithms that do not guarantee that an optimal solution will be found, but instead search for the best solutions using some general heuristics. Metaheuristics have been shown to be effective at finding “good-enough” solutions to a wide variety of difficult problems. Most metaheuristics involve control parameters that can be used to modify how the heuristic performs its search. This is necessary because different problems may require different search strategies to be solved effectively. The control parameters allow the optimization algorithm to be adapted to the problem at hand. It is, however, difficult to predict what the optimal control parameters are for any given problem. The problem of finding these optimal control parameter values is known as parameter tuning and is the main topic of this thesis. This thesis uses a bilevel optimization approach to solve parameter tuning problems. In this approach, the parameter tuning problem itself is formulated as an optimization problem and solved with an optimization algorithm. The parameter tuning problem formulated as a bilevel optimization problem is challenging because of nonlinear objective functions, interacting variables, multiple local optima, and noise. However, it is precisely on this kind of difficult optimization problem that evolutionary algorithms, which are a subclass of metaheuristics, have been shown to be effective. That is the motivation for using evolutionary algorithms for the upper-level optimization (i.e., the tuning algorithm) of the bilevel optimization approach.
Solving the parameter tuning problem using a bilevel optimization approach is also computationally expensive, since a complete optimization run has to be completed for every evaluation of a set of control parameter values. It is therefore important that the tuning algorithm be as efficient as possible, so that the parameter tuning problem can be solved to a satisfactory level with relatively few evaluations. Even so, bilevel optimization experiments can take a long time to run on a single computer. There is, however, considerable parallelization potential in the bilevel optimization approach, since many of the optimizations are independent of one another. This thesis has three primary aims: first, to present a bilevel optimization framework and software architecture for parallel parameter tuning; second, to use this framework and software architecture to evaluate and configure evolutionary algorithms as tuners and compare them with other parameter tuning methods; and, finally, to use parameter tuning experiments to gain new insights into and understanding of how optimization algorithms work and how they can be used to their maximum potential. The proposed framework and software architecture have been implemented and deployed on more than one hundred computers, running many thousands of parameter tuning experiments comprising many millions of optimizations. This illustrates that this design and implementation approach can handle large parameter tuning experiments. Two types of evolutionary algorithms, i.e. differential evolution (DE) and a genetic algorithm (GA), have been evaluated as tuners against the parameter tuning algorithm irace. The aspects of algorithm configuration and noise handling for DE and the GA as related to the parameter tuning problem were also investigated. The results indicate that dynamic resampling strategies outperform static resampling strategies.
It was also shown that the GA needs an explicit exploration and exploitation strategy in order not to become stuck in local optima. The comparison with irace shows that both DE and the GA can significantly outperform it on a variety of different tuning problems.
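The bilevel structure described in the abstract can be illustrated with a minimal sketch (all names, the toy hill climber, and the sphere test problem are hypothetical stand-ins, not the algorithms tuned in the thesis): the upper level searches over control parameter values, and every evaluation of a candidate requires one or more complete lower-level optimization runs, replicated to handle noise.

```python
import random

def sphere(x):
    # Lower-level objective: a simple sphere test function.
    return sum(v * v for v in x)

def lower_level_run(step_size, seed, dim=5, budget=200):
    """Run one complete lower-level optimization (a (1+1) hill climber
    whose control parameter is its mutation step size) and return the
    best objective value found."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(budget):
        y = [v + rng.gauss(0, step_size) for v in x]
        fy = sphere(y)
        if fy < fx:
            x, fx = y, fy
    return fx

def tune(step_sizes, replications=5):
    """Upper level: score each candidate control parameter value by
    averaging over replicated lower-level runs (noise handling).
    Each evaluation is a full optimization run, which is what makes
    bilevel parameter tuning so computationally expensive."""
    best_param, best_score = None, float("inf")
    for s in step_sizes:
        score = sum(lower_level_run(s, seed)
                    for seed in range(replications)) / replications
        if score < best_score:
            best_param, best_score = s, score
    return best_param, best_score

param, score = tune([0.01, 0.1, 0.5, 1.0, 2.0])
```

Because the replicated lower-level runs are independent of one another, they are exactly the part that the thesis parallelizes across many computers.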

Place, publisher, year, edition, pages
Skövde: University of Skövde, 2018, p. 210
Series
Dissertation Series ; 25
National Category
Information Systems, Social aspects
Research subject
Production and Automation Engineering
Identifiers
URN: urn:nbn:se:his:diva-16368
ISBN: 978-91-984187-7-4 (print)
OAI: oai:DiVA.org:his-16368
DiVA id: diva2:1261356
Public defence
2018-09-24, ASSAR Industrial Innovation Arena, Skövde, 10:00
Opponent
Supervisors
Available from: 2018-11-15 Created: 2018-11-07 Last updated: 2018-11-15. Bibliographically approved
List of papers
1. Parameter tuned CMA-ES on the CEC'15 expensive problems
2015 (English). In: Evolutionary Computation, IEEE conference proceedings, 2015, p. 1950-1957. Conference paper, Published paper (Refereed)
Abstract [en]

Evolutionary optimization algorithms have parameters that are used to adapt the search strategy to suit different optimization problems. Selecting the optimal parameter values for a given problem is difficult without a priori knowledge. Experimental studies can provide this knowledge by finding the best parameter values for a specific set of problems. This knowledge can also be constructed into heuristics (rules of thumb) that can adapt the parameters to the problem. The aim of this paper is to assess the heuristics of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization algorithm. This is accomplished by tuning the CMA-ES parameters so as to maximize its performance on the CEC'15 problems, using a bilevel optimization approach that searches for the optimal parameter values. The optimized parameter values are compared against the parameter values suggested by the heuristics. The difference between specialized and generalized parameter values is also investigated.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
Keywords
Parameter tuning, CMA-ES
National Category
Computer and Information Sciences
Research subject
Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-11599 (URN), 10.1109/CEC.2015.7257124 (DOI), 000380444801129 (), 2-s2.0-84963626635 (Scopus ID), 978-1-4799-7492-4 (ISBN)
Conference
2015 IEEE Congress on Evolutionary Computation (CEC)
Available from: 2015-10-12 Created: 2015-10-12 Last updated: 2018-11-07. Bibliographically approved
2. Parameter Tuning of MOEAs Using a Bilevel Optimization Approach
2015 (English). In: Evolutionary Multi-Criterion Optimization: 8th International Conference, EMO 2015, Guimarães, Portugal, March 29 - April 1, 2015, Proceedings, Part I / [ed] António Gaspar-Cunha, Carlos Henggeler Antunes & Carlos Coello Coello, Springer, 2015, p. 233-247. Conference paper, Published paper (Refereed)
Abstract [en]

The performance of an Evolutionary Algorithm (EA) can be greatly influenced by its parameters. The optimal parameter settings are also not necessarily the same across different problems. Finding the optimal set of parameters is therefore a difficult and often time-consuming task. This paper presents results of parameter tuning experiments on the NSGA-II and NSGA-III algorithms using the ZDT test problems. The aim is to gain new insights into the characteristics of the optimal parameter settings and to study whether the parameters have the same effect on both NSGA-II and NSGA-III. The experiments also aim to test whether the rule of thumb that the mutation probability should be set to one divided by the number of decision variables is a good heuristic on the ZDT problems. A comparison of the performance of NSGA-II and NSGA-III on the ZDT problems is also made.
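The rule of thumb the paper tests, setting the per-variable mutation probability to one divided by the number of decision variables, can be sketched as follows. The Gaussian mutation operator here is a hypothetical stand-in for illustration, not the polynomial mutation NSGA-II typically uses.

```python
import random

def mutate(solution, pm=None, rng=random):
    """Mutate each variable independently with probability pm.
    If pm is left unset, the 1/n rule of thumb is applied, so that
    on average one variable per solution is mutated."""
    n = len(solution)
    if pm is None:
        pm = 1.0 / n  # the heuristic under test
    return [x + rng.gauss(0, 0.1) if rng.random() < pm else x
            for x in solution]

rng = random.Random(1)
parent = [0.5] * 30  # e.g. a 30-variable ZDT individual
child = mutate(parent, rng=rng)
changed = sum(1 for a, b in zip(parent, child) if a != b)
```

With 30 variables, pm defaults to 1/30, so typically zero or one variables change per call; the paper's tuning experiments check whether this default is actually close to optimal on the ZDT problems.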

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9018
Keywords
Parameter tuning, NSGA-II, NSGA-III, ZDT, Bilevel optimization, Multi-objective problems
National Category
Computer and Information Sciences
Research subject
Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-11371 (URN), 10.1007/978-3-319-15934-8_16 (DOI), 000361702100016 (), 2-s2.0-84925342559 (Scopus ID), 978-3-319-15933-1 (ISBN), 978-3-319-15934-8 (ISBN)
Conference
8th International Conference on Evolutionary Multi-Criterion Optimization, 29 March-1 April 2015, Guimarães, Portugal
Available from: 2015-08-18 Created: 2015-08-18 Last updated: 2018-11-07. Bibliographically approved
3. Tuning of Multiple Parameter Sets in Evolutionary Algorithms
2016 (English). In: GECCO'16: Proceedings of the 2016 genetic and evolutionary computation conference, Association for Computing Machinery (ACM), 2016, p. 533-540. Conference paper, Published paper (Refereed)
Abstract [en]

Evolutionary optimization algorithms typically use one or more parameters that control their behavior. These parameters, which are often kept constant, can be tuned to improve the performance of the algorithm on specific problems. However, past studies have indicated that the performance can be further improved by adapting the parameters during runtime. A limitation of these studies is that they only control, at most, a few parameters, thereby missing potentially beneficial interactions between them. Instead of finding a direct control mechanism, the novel approach in this paper is to use different parameter sets in different stages of an optimization. These multiple parameter sets, which remain static within each stage, are tuned through extensive bilevel optimization experiments that approximate the optimal adaptation of the parameters. The algorithmic performance obtained with tuned multiple parameter sets is compared against that obtained with a single parameter set. For the experiments in this paper, the parameters of NSGA-II are tuned when applied to the ZDT, DTLZ and WFG test problems. The results show that using multiple parameter sets can significantly increase the performance over a single parameter set.
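The stage-switching idea in the abstract can be sketched in a few lines (the even budget split, stage count, and parameter values below are illustrative assumptions, not the tuned values from the paper): within each stage the parameter set stays static, and the active set is chosen from how much of the evaluation budget has been consumed.

```python
def parameters_for_evaluation(evals_done, budget, stage_params):
    """Return the parameter set active at this point of the run.
    Stages split the evaluation budget evenly; within a stage the
    parameters remain static, approximating an adaptation schedule."""
    n_stages = len(stage_params)
    stage = min(evals_done * n_stages // budget, n_stages - 1)
    return stage_params[stage]

# Hypothetical tuned sets: more explorative early, more exploitative late.
stages = [
    {"crossover": 0.9, "mutation": 0.20},
    {"crossover": 0.9, "mutation": 0.05},
    {"crossover": 0.7, "mutation": 0.01},
]
```

In the bilevel setting, the upper level tunes all stage dictionaries jointly, which is how interactions between parameters across stages are captured.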

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2016
Keywords
evolutionary algorithms, parameter tuning, multiple parameters, multi-objective optimization
National Category
Computer Sciences
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-13056 (URN), 10.1145/2908812.2908899 (DOI), 000382659200069 (), 2-s2.0-84985916855 (Scopus ID), 978-1-4503-4206-3 (ISBN)
Conference
Genetic and Evolutionary Computation Conference (GECCO), Denver, USA, July 20-24, 2016.
Available from: 2016-10-27 Created: 2016-10-27 Last updated: 2018-11-07. Bibliographically approved
4. Towards Optimal Algorithmic Parameters for Simulation-Based Multi-Objective Optimization
2016 (English). In: 2016 IEEE Congress on Evolutionary Computation (CEC), New York: IEEE, 2016, p. 5162-5169. Conference paper, Published paper (Refereed)
Abstract [en]

The use of optimization to solve a simulation-based multi-objective problem produces a set of solutions that provide information about the trade-offs that have to be considered by the decision maker. An incomplete or sub-optimal set of solutions will negatively affect the quality of any subsequent decisions. The parameters that control the search behavior of an optimization algorithm can be used to minimize this risk. However, choosing good parameter settings for a given optimization algorithm and problem combination is difficult. The aim of this paper is to take a step towards optimal parameter settings for optimization of simulation-based problems. Two parameter tuning methods, Latin Hypercube Sampling and Genetic Algorithms, are used to maximize the performance of NSGA-II applied to a simulation-based problem with discrete variables. The strengths and weaknesses of both methods are analyzed. The effect of the number of decision variables and the function budget on the optimal parameter settings is also studied.

Place, publisher, year, edition, pages
New York: IEEE, 2016
Series
IEEE Congress on Evolutionary Computation
National Category
Computer Sciences
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-13331 (URN), 10.1109/CEC.2016.7748344 (DOI), 000390749105045 (), 2-s2.0-85008258262 (Scopus ID), 978-1-5090-0623-6 (ISBN), 978-1-5090-0622-9 (ISBN), 978-1-5090-0624-3 (ISBN)
Conference
2016 IEEE Congress on Evolutionary Computation, CEC 2016, Vancouver, Canada, July 24-29, 2016
Available from: 2017-01-20 Created: 2017-01-20 Last updated: 2018-11-07. Bibliographically approved
5. On the Trade-off Between Runtime and Evaluation Efficiency In Evolutionary Algorithms
(English). In: Evolutionary Computation, ISSN 1063-6560, E-ISSN 1530-9304. Article in journal (Refereed). Submitted
Abstract [en]

Evolutionary optimization algorithms typically use one or more parameters that control their behavior. These parameters, which are often kept constant, can be tuned to improve the performance of the algorithm on specific problems. However, past studies have indicated that the performance can be further improved by adapting the parameters during runtime. A limitation of these studies is that they only control, at most, a few parameters, thereby missing potentially beneficial interactions between them. Instead of finding a direct control mechanism, the novel approach in this paper is to use different parameter sets in different stages of an optimization. These multiple parameter sets, which remain static within each stage, are tuned through extensive bilevel optimization experiments that approximate the optimal adaptation of the parameters. The algorithmic performance obtained with tuned multiple parameter sets is compared against that obtained with a single parameter set. For the experiments in this paper, the parameters of NSGA-II are tuned when applied to the ZDT, DTLZ and WFG test problems. The results show that using multiple parameter sets can significantly increase the performance over a single parameter set.

Place, publisher, year, edition, pages
MIT Press
National Category
Computer Sciences
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-16272 (URN)
Available from: 2018-10-04 Created: 2018-10-04 Last updated: 2018-11-07. Bibliographically approved
6. Parameter Tuning Evolutionary Algorithms for Runtime versus Cost Trade-off in a Cloud Computing Environment
2018 (English). In: Simulation Modelling Practice and Theory, ISSN 1569-190X, Vol. 89, p. 195-205. Article in journal (Refereed). Published
Abstract [en]

The runtime of an evolutionary algorithm can be reduced by increasing the number of parallel evaluations. However, increasing the number of parallel evaluations can also result in wasted computational effort, since there is a greater probability of creating solutions that do not contribute to convergence towards the global optimum. A trade-off therefore arises between the runtime and computational effort for different levels of parallelization of an evolutionary algorithm. When the computational effort is translated into cost, the trade-off can be restated as runtime versus cost. This trade-off is particularly relevant for cloud computing environments, where the computing resources can be exactly matched to the level of parallelization of the algorithm, and the cost is proportional to the runtime and how many instances are used. This paper empirically investigates this trade-off for two different evolutionary algorithms, NSGA-II and differential evolution (DE), when applied to a multi-objective discrete-event simulation-based (DES) problem. Both generational and steady-state asynchronous versions of both algorithms are included. The approach is to perform parameter tuning on a simplified version of the DES model. A subset of the best configurations from each tuning experiment is then evaluated on a cloud computing platform. The results indicate that, for the included DES problem, the steady-state asynchronous version of each algorithm provides a better runtime versus cost trade-off than the generational versions and that DE outperforms NSGA-II.
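The runtime-versus-cost trade-off can be made concrete with a toy cost model (the waste factor, pricing, and function below are illustrative assumptions, not figures or methods from the paper): adding parallel evaluations shortens wall-clock runtime, but the wasted evaluations inflate the total work, and cost is proportional to runtime times the number of instances.

```python
def runtime_and_cost(total_evals, parallel, eval_time=1.0,
                     price_per_instance=1.0, waste_factor=0.1):
    """Toy model of parallelizing an evolutionary algorithm in the
    cloud. Wasted evaluations are assumed to grow linearly with the
    level of parallelization; cost = runtime * instances * price."""
    effective_evals = total_evals * (1 + waste_factor * (parallel - 1))
    batches = -(-effective_evals // parallel)  # ceiling division
    runtime = batches * eval_time
    cost = runtime * parallel * price_per_instance
    return runtime, cost

serial = runtime_and_cost(1000, parallel=1)   # long but cheap
wide = runtime_and_cost(1000, parallel=10)    # fast but more expensive
```

Under this model, wider parallelization always trades money for time; the paper's contribution is measuring where that trade-off actually lies for tuned generational versus steady-state asynchronous algorithms on a real cloud platform.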

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Computer Sciences
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-16273 (URN), 10.1016/j.simpat.2018.10.003 (DOI), 000450450400013 (), 2-s2.0-85055089735 (Scopus ID)
Available from: 2018-10-04 Created: 2018-10-04 Last updated: 2019-02-05. Bibliographically approved
7. A Parallel Computing Software Architecture for the Bilevel Parameter Tuning of Optimization Algorithms
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Most optimization algorithms extract important algorithmic design decisions as control parameters. This is necessary because different problems can require different search strategies to be solved effectively. The control parameters allow the optimization algorithm to be adapted to the problem at hand. It is, however, difficult to predict what the optimal control parameters are for any given problem. Finding these optimal control parameter values is referred to as the parameter tuning problem. One approach to solving the parameter tuning problem is to use bilevel optimization, where the parameter tuning problem itself is formulated as an optimization problem involving algorithmic performance as the objective(s). In this paper, we present a framework and architecture that can be used to solve large-scale parameter tuning problems using a bilevel optimization approach. The proposed framework is used to show that evolutionary algorithms are competitive as tuners against irace, which is a state-of-the-art tuning method. Two evolutionary algorithms, differential evolution (DE) and a genetic algorithm (GA), are evaluated as tuner algorithms using the proposed framework and software architecture. The importance of replicating optimizations and avoiding local optima is also investigated. The architecture is deployed and tested by running millions of optimizations using a computing cluster. The results indicate that the evolutionary algorithms can consistently find better control parameter values than irace. The GA, however, needs to be configured with an explicit exploration and exploitation strategy in order to avoid local optima.

National Category
Computer Sciences
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-16274 (URN)
Available from: 2018-10-04 Created: 2018-10-04 Last updated: 2019-06-18

Open Access in DiVA

No full text in DiVA

Other links

https://www.his.se/PageFiles/55354/Martin-Andersson-avhandling.pdf

Authority records

Andersson, Martin
