Högskolan i Skövde

his.se Publications
Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization of Stochastic Systems: Improving the efficiency of time-constrained optimization
Siegmund, Florian
University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre. (Simulation-based Optimization) ORCID iD: 0000-0003-3432-5068
2016 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In preference-based Evolutionary Multi-objective Optimization (EMO), the decision maker is looking for a diverse, but locally focused non-dominated front in a preferred area of the objective space, as close as possible to the true Pareto-front. Since solutions found outside the area of interest are considered less important or even irrelevant, the optimization can focus its efforts on the preferred area and find the solutions that the decision maker is looking for more quickly, i.e., with fewer simulation runs. This is particularly important if the available time for optimization is limited, as is the case in many real-world applications. Although previous studies of this kind of guided search with preference information, for example with the R-NSGA-II algorithm, have shown positive results, only very few of them considered the stochastic outputs of simulated systems.

In the literature, this phenomenon of stochastic evaluation functions is sometimes called noisy optimization. If an EMO algorithm is run without any countermeasure against noisy evaluation functions, its performance will deteriorate compared to the case in which the true mean objective values are known. While, in general, static resampling of solutions to reduce the uncertainty of all evaluated design solutions can allow EMO algorithms to avoid this problem, it will significantly increase the required simulation time/budget, as many samples will be wasted on candidate solutions that are inferior. In comparison, a Dynamic Resampling (DR) strategy can allow the exploration and exploitation trade-off to be optimized, since the required accuracy of the objective values varies between solutions. In a dense, converged population, it is important to know the accurate objective values, whereas noisy objective values are less harmful when an algorithm is exploring the objective space, especially early in the optimization process. Therefore, a well-designed Dynamic Resampling strategy which resamples solutions carefully, according to their resampling need, can help an EMO algorithm achieve better results than a static resampling allocation.
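As a minimal sketch of the contrast between static and Dynamic Resampling described above (the simulate objective, the standard-error target and the sample limits below are illustrative assumptions, not taken from the thesis):

```python
import random
import statistics

def simulate(design, noise_sd=1.0):
    """Hypothetical stochastic evaluation: a placeholder true objective plus Gaussian noise."""
    true_value = sum(x * x for x in design)
    return true_value + random.gauss(0.0, noise_sd)

def static_resampling(design, samples=10):
    """Spend the same number of simulation runs on every solution."""
    values = [simulate(design) for _ in range(samples)]
    return statistics.mean(values), samples

def dynamic_resampling(design, se_target=0.25, min_samples=2, max_samples=30):
    """Keep sampling a solution only until the standard error of its mean
    objective value falls below se_target (or max_samples is reached)."""
    values = [simulate(design) for _ in range(min_samples)]
    while len(values) < max_samples:
        se = statistics.stdev(values) / len(values) ** 0.5
        if se <= se_target:          # accurate enough, stop early
            break
        values.append(simulate(design))
    return statistics.mean(values), len(values)

if __name__ == "__main__":
    design = [0.3, 0.7]
    print("static: ", static_resampling(design))
    print("dynamic:", dynamic_resampling(design))
```

In this sketch, inferior or low-noise solutions stop early, so the saved simulation runs can be spent on additional candidate solutions.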

While there are abundant studies in Simulation-based Optimization that consider Dynamic Resampling, the survey conducted in this study found no related work that considers how combinations of Dynamic Resampling and preference-based guided search can further enhance the performance of EMO algorithms, especially if the problems under study involve computationally expensive evaluations, such as production systems simulation. The aim of this thesis is therefore to study, design and compare new combinations of preference-based EMO algorithms with various DR strategies, in order to improve the solution quality found by simulation-based multi-objective optimization with stochastic outputs, under a limited function evaluation or simulation budget. Specifically, based on the advantages and flexibility offered by interactive, reference point-based approaches, the thesis studies the performance enhancements of R-NSGA-II when augmented with various DR strategies of increasing statistical sophistication, as well as with several adaptive features in terms of optimization parameters. The research results clearly show that optimization results can be improved if a hybrid DR strategy is used and adaptive algorithm parameters are chosen according to the noise level and problem complexity. In the case of a limited simulation budget, the results support the conclusion that both decision maker preferences and DR should be used at the same time to achieve the best results in simulation-based multi-objective optimization.

Abstract [sv]

In preference-based evolutionary multi-objective optimization, the decision maker tries to find solutions that are focused around a chosen preferred region of the objective space and that lie as close to the optimal Pareto-front as possible. Since solutions outside the preferred region are considered less interesting, or even unimportant, the optimization can concentrate on the interesting part of the objective space and find relevant solutions faster, which means that fewer solutions need to be evaluated. This is a major advantage in simulation-based multi-objective optimization with long simulation times, since the number of different configurations that can be simulated and evaluated is very limited. Earlier studies that used focused multi-objective optimization guided by user preferences, e.g. with the R-NSGA-II algorithm, have shown positive results, but only a few of them have taken the stochastic behaviour of the simulated systems into account.

In the literature, optimization with stochastic evaluation functions is sometimes called "noisy optimization". If an optimization algorithm does not take into account that the evaluated objective values are stochastic, its performance will be lower than if it had access to the true objective values. Static repeated evaluation of solutions, with the aim of reducing the uncertainty of all evaluated solutions, helps optimization algorithms avoid this problem, but at the same time leads to a considerable increase in the number of required simulations and thereby in the optimization time. This is problematic because many simulations are spent unnecessarily on inferior solutions, for which exact objective values do not help to improve the optimization result. Repeated evaluation reduces uncertainty and helps the optimization, but it also has a price: if several simulations are used for each solution, the number of different solutions that can be simulated decreases and the search space cannot be explored as thoroughly, given that the total number of simulations is limited. Dynamic repeated evaluation, on the other hand, can make the optimization's trade-off between exploration and exploitation of the search space more efficient, based on the fact that the required precision of the objective values varies between the different solutions in the objective space. In a dense and converged population of solutions it is important to know the exact objective values, whereas uncertain objective values are less harmful at an early stage of the optimization process, when the algorithm is exploring the objective space. A dynamic resampling strategy with a careful allocation of the evaluations can therefore achieve better results than a static allocation.

Although there is an abundance of studies in simulation-based optimization that use dynamic resampling, no related studies have been found that investigate how combinations of dynamic resampling and preference-based guidance can further improve the performance of multi-objective optimization algorithms. In particular, studies on the optimization of problems with long simulation times, such as simulation of production systems, are lacking. The goal of the thesis is therefore to study, design and compare new combinations of preference-based optimization algorithms and dynamic resampling strategies. The purpose is to improve the results of simulation-based multi-objective optimization with stochastic objective values when the number of evaluations or the optimization time is limited. The thesis has focused in particular on investigating performance-enhancing measures for the R-NSGA-II algorithm in combination with dynamic resampling, based on the advantages and flexibility offered by interactive, reference point-based algorithms. Examples of such measures are dynamic resampling algorithms with improved statistical handling of uncertainty and adaptive optimization parameters. The results of the thesis clearly show that the optimization results can be improved if hybrid dynamic resampling algorithms are used and adaptive optimization parameters are chosen according to the noise level and the complexity of the optimization problem. For the cases where the simulation time is limited, the conclusion of the thesis is that both user preferences and dynamic resampling should be used simultaneously to achieve the best results in simulation-based multi-objective optimization.

Place, publisher, year, edition, pages
Skövde: Högskolan i Skövde, 2016, p. 300
Series
Dissertation Series ; 11 (2016)
Keywords [en]
Evolutionary multi-objective optimization, simulation-based optimization, guided search, preference-based optimization, reference point, decision support, noise, stochastic systems, dynamic resampling, budget allocation, sequential sampling, hybrid, ranking and selection
National Category
Information Systems; Robotics
Research subject
Natural sciences; Technology; Production and Automation Engineering
Identifiers
URN: urn:nbn:se:his:diva-13088. ISBN: 978-91-982690-1-7 (print). OAI: oai:DiVA.org:his-13088. DiVA id: diva2:1045646
Public defence
2016-12-12, Skövde, 13:00 (English)
Funder
Knowledge Foundation; VINNOVA
Available from: 2016-11-11. Created: 2016-11-10. Last updated: 2018-05-08. Bibliographically approved.
List of papers
1. R-HV: A Metric for Computing Hyper-volume for Reference Point-based EMOs
2015 (English). In: Swarm, Evolutionary, and Memetic Computing: 5th International Conference, SEMCCO 2014, Bhubaneswar, India, December 18-20, 2014, Revised Selected Papers / [ed] Bijaya Ketan Panigrahi, Ponnuthurai Nagaratnam Suganthan & Swagatam Das, Springer, 2015, p. 98-110. Chapter in book (Refereed)
Abstract [en]

For evaluating the performance of a multi-objective optimization aimed at finding the entire efficient front, a number of metrics, such as hypervolume, inverse generational distance, etc., exist. However, for evaluating an EMO algorithm aimed at finding a subset of the efficient frontier, the existing metrics are inadequate; there are few performance metrics for evaluating a partial preferred efficient set. In this paper, we suggest a metric which can be used for such purposes for both attainable and unattainable reference points. Results on a number of two-objective problems reveal its working principle and its importance in assessing different algorithms. The results are promising and encouraging for its further use.
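The R-HV metric itself is not reproduced in this abstract; the following sketch only illustrates the general idea of restricting a hypervolume-style measure to a preferred region around a reference point, using a naive Monte Carlo estimate over an assumed box-shaped region (all names and parameters are illustrative assumptions, not the paper's definition):

```python
import random

def dominates(a, b):
    """Minimization: a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def preferred_region_hypervolume(front, reference_point, box_radius, nadir, n=100_000):
    """Monte Carlo estimate of the volume dominated by 'front', counted only
    inside a box around the decision maker's reference point."""
    lower = [r - box_radius for r in reference_point]
    upper = [min(r + box_radius, nad) for r, nad in zip(reference_point, nadir)]
    hits = 0
    for _ in range(n):
        point = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        if any(dominates(sol, point) for sol in front):
            hits += 1
    box_volume = 1.0
    for lo, hi in zip(lower, upper):
        box_volume *= hi - lo
    return box_volume * hits / n

if __name__ == "__main__":
    front = [(0.2, 0.8), (0.4, 0.5), (0.7, 0.3)]
    print(preferred_region_hypervolume(front, reference_point=(0.4, 0.5),
                                       box_radius=0.3, nadir=(1.0, 1.0)))
```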

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349; 8947
Keywords
Evolutionary multi-objective optimization, performance metric, hyper-volume, reference point
National Category
Computer and Information Sciences; Robotics
Research subject
Natural sciences; Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-10464 (URN); 10.1007/978-3-319-20294-5_9 (DOI); 000365045900009 (); 2-s2.0-84946116194 (Scopus ID); 978-3-319-20293-8 (ISBN); 978-3-319-20294-5 (ISBN)
Conference
5th International Conference on Swarm, Evolutionary, and Memetic Computing, 18th-20th December 2014 (SEMCCO14), Bhubaneswar, Odisha, India
Funder
VINNOVA
Available from: 2014-12-19. Created: 2014-12-19. Last updated: 2018-11-27. Bibliographically approved.
2. Reference point-based evolutionary multi-objective optimization for industrial systems simulation
2012 (English). In: Proceedings of the 2012 Winter Simulation Conference (WSC) / [ed] C. Laroque; J. Himmelspach; R. Pasupathy; O. Rose; A. M. Uhrmacher, IEEE conference proceedings, 2012. Conference paper, Published paper (Refereed)
Abstract [en]

In Multi-objective Optimization the goal is to present a set of Pareto-optimal solutions to the decision maker (DM). One of these solutions is then chosen according to the DM's preferences. Given that the DM has some general idea of what type of solution is preferred, a more efficient optimization can be run. This can be accomplished by letting the optimization algorithm make use of this preference information and guide the search towards better solutions that correspond to the preferences. One example of such an algorithm is the reference point-based NSGA-II algorithm (R-NSGA-II), in which user-specified reference points can be used to guide the search in the objective space and to control the diversity of the focused Pareto-set. In this paper, the applicability of the R-NSGA-II algorithm in solving industrial-scale simulation-based optimization problems is illustrated through a case study of the improvement of a production line.
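A heavily simplified sketch of the reference point-guided selection idea (the actual R-NSGA-II additionally uses epsilon-based clustering to control diversity; the helper names below are assumptions for illustration):

```python
import math

def dominates(a, b):
    """Minimization: a dominates b if it is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(objs):
    """Assign rank 0 to the non-dominated solutions, rank 1 to the next layer, and so on."""
    ranks = [0] * len(objs)
    remaining = set(range(len(objs)))
    rank = 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

def preference_order(objs, reference_points):
    """Sort indices by Pareto rank first, then by distance to the nearest reference point."""
    ranks = pareto_ranks(objs)
    def ref_distance(v):
        return min(math.dist(v, r) for r in reference_points)
    return sorted(range(len(objs)), key=lambda i: (ranks[i], ref_distance(objs[i])))

if __name__ == "__main__":
    population = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.2), (0.6, 0.7)]
    print(preference_order(population, reference_points=[(0.4, 0.4)]))
```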

Place, publisher, year, edition, pages
IEEE conference proceedings, 2012
Series
Winter Simulation Conference, ISSN 0891-7736, E-ISSN 1558-4305
National Category
Computer Sciences
Research subject
Technology
Identifiers
urn:nbn:se:his:diva-7155 (URN); 10.1109/WSC.2012.6465130 (DOI); 000319225502011 (); 2-s2.0-84874746413 (Scopus ID); 978-1-4673-4782-2 (ISBN); 978-1-4673-4779-2 (ISBN); 978-1-4673-4780-8 (ISBN); 978-1-4673-4781-5 (ISBN)
Conference
2012 Winter Simulation Conference, WSC 2012, December 9-12, Berlin
Available from: 2013-02-07. Created: 2013-02-07. Last updated: 2023-05-09. Bibliographically approved.
3. Dynamic Resampling for Guided Evolutionary Multi-Objective Optimization of Stochastic Systems
2013 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

In Multi-objective Optimization many solutions have to be evaluated in order to provide the decision maker with a diverse Pareto-front. In Simulation-based Optimization the number of optimization function evaluations is very limited. If preference information is available, however, the available function evaluations can be used more effectively by guiding the optimization towards interesting, preferred regions. One such algorithm for guided search is the R-NSGA-II algorithm. It takes reference points provided by the decision maker and guides the optimization towards areas of the Pareto-front close to the reference points.

In Simulation-based Optimization the modeled systems are often stochastic, and a reliable quality assessment of system configurations by resampling requires many simulation runs. Therefore, optimization practitioners make use of dynamic resampling algorithms that distribute the available function evaluations intelligently among the solutions to be evaluated. Criteria for sampling allocation can be, among others, objective value variability, closeness to the Pareto-front indicated by elapsed time, or the dominance relations between different solutions based on the distances between objective vectors and their variability.

In our work we combine R-NSGA-II with several resampling algorithms based on the above-mentioned criteria. Due to the preference information, R-NSGA-II has fitness information based on the distance to reference points at its disposal. We propose a resampling strategy that allocates more samples to solutions close to a reference point.

Previously, we proposed extensions of R-NSGA-II that adapt algorithm parameters such as population size, population diversity, or the strength of the Pareto-dominance relation continuously to the characteristics of the optimization problem. We show how resampling algorithms can be integrated with those extensions.

The applicability of the proposed algorithms is shown in a case study of an industrial production line for car manufacturing.
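Two of the allocation criteria mentioned above, elapsed optimization time and distance to a reference point, could be turned into per-solution sampling budgets roughly as follows (a sketch with assumed bounds, not the exact allocation functions of the papers):

```python
import math

def time_based_samples(evaluations_used, evaluation_budget, b_min=1, b_max=15):
    """Allocate more samples late in the run, when elapsed time is taken as an
    indicator that the population is close to the Pareto-front."""
    progress = min(evaluations_used / evaluation_budget, 1.0)
    return b_min + round(progress * (b_max - b_min))

def distance_based_samples(objective_vector, reference_point, d_max=1.0, b_min=1, b_max=15):
    """Allocate more samples to solutions close to the decision maker's reference point."""
    closeness = max(0.0, 1.0 - math.dist(objective_vector, reference_point) / d_max)
    return b_min + round(closeness * (b_max - b_min))

if __name__ == "__main__":
    print(time_based_samples(2500, 10000))                  # early in the run: few samples
    print(distance_based_samples((0.45, 0.5), (0.4, 0.4)))  # close to the reference point: many samples
```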

Keywords
Evolutionary multi-objective optimization, guided search, reference point, resampling, simulation-based optimization, stochastic systems
National Category
Computer and Information Sciences; Robotics
Research subject
Natural sciences; Technology
Identifiers
urn:nbn:se:his:diva-10462 (URN)
Conference
22nd International Conference on Multiple Criteria Decision Making 2013, 17-21 June 2013, Málaga, Spain
Funder
VINNOVA
Available from: 2014-12-19. Created: 2014-12-19. Last updated: 2018-01-11. Bibliographically approved.
4. Adaptive Guided Evolutionary Multi-Objective Optimization
2013 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

In Multi-objective Optimization many solutions have to be evaluated in order to provide the decision maker with a diverse Pareto-front. In Simulation-based Optimization the number of optimization function evaluations is very limited. If preference information is available, however, the available function evaluations can be used more effectively by guiding the optimization towards interesting, preferred regions. One such algorithm for guided search is the Reference-point guided NSGA-II. It takes reference points provided by the decision maker and guides the optimization towards areas of the Pareto-front close to the reference points.

We propose several extensions of R-NSGA-II. In the beginning of the optimization runtime the population is spread out in the objective space, while towards the end of the runtime most solutions are close to the reference points. The purpose of a large population is to avoid local optima and to explore the search space, which is less important once the algorithm has converged to the reference points. Therefore, we reduce the population size towards the end of the runtime. R-NSGA-II controls the objective-space diversity through the epsilon parameter. We reduce the diversity in the population as it approaches the reference points. In a previous study we showed that R-NSGA-II keeps a high diversity until late in the optimization run, which is caused by the Pareto-fitness. This slows down the progress towards the reference points. We constrain the Pareto-fitness to force a faster convergence. For the same reason, an approach is presented that delays the use of the Pareto-fitness: initially, the fitness is based only on reference point distance and diversity; later, when the population has converged towards the Pareto-front, Pareto-fitness is considered as the primary and distance as the secondary fitness.
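A sketch of how such runtime-dependent parameter adaptation could look; the linear schedules and the concrete start and end values are assumptions for illustration only:

```python
def adaptive_parameters(evaluations_used, evaluation_budget,
                        pop_start=100, pop_end=20,
                        eps_start=0.05, eps_end=0.001):
    """Shrink the population size and the epsilon diversity parameter linearly
    over the run: broad exploration early, tight focus on the reference points late."""
    t = min(evaluations_used / evaluation_budget, 1.0)
    population_size = round(pop_start + t * (pop_end - pop_start))
    epsilon = eps_start + t * (eps_end - eps_start)
    return population_size, epsilon

if __name__ == "__main__":
    for used in (0, 5000, 10000):
        print(used, adaptive_parameters(used, 10000))
```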

Keywords
Evolutionary multi-objective optimization, simulation-based optimization, guided search, reference point, adaptive
National Category
Computer and Information Sciences; Robotics
Research subject
Natural sciences; Technology
Identifiers
urn:nbn:se:his:diva-10467 (URN)
Conference
22nd International Conference on Multiple Criteria Decision Making 2013, 17-21 June 2013, Málaga, Spain
Funder
VINNOVA
Available from: 2014-12-19. Created: 2014-12-19. Last updated: 2018-01-11. Bibliographically approved.
5. Finding a preferred diverse set of Pareto-optimal solutions for a limited number of function calls
2012 (English). In: 2012 IEEE Congress on Evolutionary Computation, IEEE, 2012, p. 2417-2424. Conference paper, Published paper (Refereed)
Abstract [en]

Evolutionary Multi-objective Optimization aims at finding a diverse set of Pareto-optimal solutions from which the decision maker can choose the solution that best fits his or her preferences. In the case of limited time (a limited number of function evaluations) for optimization, this preference information may be used to speed up the search by making the algorithm focus directly on interesting areas of the objective space. The R-NSGA-II algorithm (1) uses reference points, specified according to the preferences of the user, towards which the search is guided. In this paper, we propose an extension to R-NSGA-II that limits the Pareto-fitness to speed up the search for a limited number of function calls. It avoids automatically selecting all solutions of the first front of the candidate set into the next population. In this way, non-preferred Pareto-optimal solutions are not considered, thereby accelerating the search process. With focusing comes the necessity to maintain diversity. In R-NSGA-II this is achieved with the help of a clustering algorithm which keeps the found solutions above a minimum distance ε. In this paper, we propose a self-adaptive ε approach that autonomously provides the decision maker with a more diverse solution set if the found Pareto-set is situated further away from a reference point. Similarly, the approach also varies the diversity inside the Pareto-set. This helps the decision maker to get a better overview of the available solutions and supports decisions about how to adapt the reference points.
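A minimal sketch of the self-adaptive ε idea, where the diversity parameter grows with the distance between the found solution set and the reference point (the scaling factor and lower bound are assumptions, not values from the paper):

```python
import math

def self_adaptive_epsilon(front, reference_point, scale=0.1, eps_min=0.001):
    """Couple the epsilon diversity parameter to the distance between the found
    solution set and the reference point: far away -> more diverse presentation."""
    distance = min(math.dist(sol, reference_point) for sol in front)
    return max(eps_min, scale * distance)

if __name__ == "__main__":
    front = [(0.6, 0.7), (0.7, 0.6)]
    print(self_adaptive_epsilon(front, reference_point=(0.2, 0.2)))
```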

Place, publisher, year, edition, pages
IEEE, 2012
Series
IEEE transactions on evolutionary computation, ISSN 1089-778X, E-ISSN 1941-0026
Keywords
Evolutionary multi-objective optimization, decision making, guided search, reference point, diversity, Pareto-optimal
National Category
Computer and Information Sciences
Research subject
Technology
Identifiers
urn:nbn:se:his:diva-6421 (URN); 10.1109/CEC.2012.6256654 (DOI); 000312859303086 (); 2-s2.0-84866852568 (Scopus ID); 978-1-4673-1508-1 (ISBN); 978-1-4673-1510-4 (ISBN); 978-1-4673-1509-8 (ISBN)
Conference
IEEE World Congress on Computational Intelligence 2012 - Congress on Evolutionary Computation (CEC), Brisbane, June 10-15, 2012
Funder
Vinnova
Note

This study was partially funded by VINNOVA, Sweden, through the FFI-HSO project and Volvo Car Corporation. The authors gratefully acknowledge their provision of research funding.

Available from: 2012-10-10. Created: 2012-10-02. Last updated: 2023-05-03. Bibliographically approved.
6. A Comparative Study of Dynamic Resampling Strategies for Guided Evolutionary Multi-Objective Optimization
2013 (English). In: 2013 IEEE Congress on Evolutionary Computation, CEC 2013, IEEE conference proceedings, 2013, p. 1826-1835. Conference paper, Published paper (Refereed)
Abstract [en]

In Evolutionary Multi-objective Optimization many solutions have to be evaluated to provide the decision maker with a diverse choice of solutions along the Pareto-front, in particular for high-dimensional optimization problems. In Simulation-based Optimization the modeled systems are complex and require long simulation times. In addition, the evaluated systems are often stochastic, and reliable quality assessment of system configurations by resampling requires many simulation runs. As a countermeasure to the high number of simulation runs required due to the multiple optimization objectives, the optimization can be focused on interesting parts of the Pareto-front, as is done by the Reference point-guided NSGA-II algorithm (R-NSGA-II) [9]. The number of evaluations needed for the resampling of solutions can be reduced by intelligent resampling algorithms that allocate only as much sampling budget as is needed in the different situations occurring during the optimization run. In this paper we propose and compare resampling algorithms that support the R-NSGA-II algorithm on optimization problems with stochastic evaluation functions. © 2013 IEEE.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2013
Keywords
Evolutionary multi-objective optimization, simulation-based optimization, guided search, reference point, decision support, stochastic systems, dynamic resampling
National Category
Engineering and Technology; Computer Sciences
Research subject
Technology
Identifiers
urn:nbn:se:his:diva-8531 (URN); 10.1109/CEC.2013.6557782 (DOI); 000326235301106 (); 2-s2.0-84881589703 (Scopus ID); 978-1-4799-0453-2 (ISBN); 978-1-4799-0452-5 (ISBN); 978-1-4799-0454-9 (ISBN)
Conference
2013 IEEE Congress on Evolutionary Computation, June 20-23, Cancún, México
Available from: 2013-10-07. Created: 2013-10-07. Last updated: 2018-01-11. Bibliographically approved.
7. Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems
2015 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

In Multi-objective Optimization many solutions have to be evaluated in order to provide the decision maker with a diverse choice of solutions along the Pareto-front. In Simulation-based Optimization the number of optimization function evaluations is usually very limited due to the long execution times of the simulation models. If preference information is available, however, the available number of function evaluations can be used more effectively. The optimization can be performed as a guided, focused search which returns solutions close to interesting, preferred regions of the Pareto-front. One such algorithm for guided search is the Reference-point guided Non-dominated Sorting Genetic Algorithm II, R-NSGA-II. It is a population-based Evolutionary Algorithm that finds a set of non-dominated solutions in a single optimization run. R-NSGA-II takes reference points in the objective space provided by the decision maker and guides the optimization towards areas of the Pareto-front close to the reference points.

In Simulation-based Optimization the modeled and simulated systems are often stochastic, and a common method to handle objective noise is resampling. Reliable quality assessment of system configurations by resampling requires many simulation runs. Therefore, the optimization process can benefit from Dynamic Resampling algorithms that distribute the available function evaluations among the solutions in the best possible way. Solutions can vary in their sampling need. For example, solutions with highly variable objective values have to be sampled more often to reduce the standard error of their objective values. Dynamic Resampling algorithms assign as many samples to them as are needed to reduce the uncertainty about their objective values below a certain threshold. Another criterion the number of samples can be based on is a solution's closeness to the Pareto-front. Solutions that are close to the Pareto-front are likely to be members of the final result set. It is therefore important to have accurate knowledge of their objective values available, in order to be able to tell which solutions are better than others. Usually, the distance to the Pareto-front is not known, but another criterion can be used as an indication for it instead: the elapsed optimization time. A third example of a resampling criterion is the dominance relations between different solutions. The optimization algorithm has to determine for pairs of solutions which one is better. Here, both the distances between objective vectors and the variance of the objective values have to be considered, which requires a more advanced resampling technique. This is a Ranking and Selection problem.

If R-NSGA-II is applied in a scenario with a stochastic fitness function, resampling algorithms have to be used to support it in the best way and to avoid a performance degradation due to uncertain knowledge about the objective values of solutions. In our work we combine R-NSGA-II with several resampling algorithms that are based on the above-mentioned resampling criteria, or combinations thereof, and evaluate which criteria the sampling allocation is best based on in which situations.

Due to the preference information, R-NSGA-II has an important piece of fitness information about the solutions at its disposal: the distance to the reference points. We propose a resampling strategy that allocates more samples to solutions close to a reference point. This idea is then extended with a resampling technique that compares solutions based on their distance to the reference point. We base this algorithm on a classical Ranking and Selection algorithm, Optimal Computing Budget Allocation, and show how OCBA can be applied to support R-NSGA-II. We show the applicability of the proposed algorithms in a case study of an industrial production line for car manufacturing.
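For reference, the classic OCBA sampling ratios for selecting the design with the smallest mean of a single performance measure can be sketched as follows; using the distance to the reference point as that performance measure is one way to connect OCBA to R-NSGA-II, but the paper's exact adaptation is not reproduced here (the inputs below are hypothetical):

```python
import math

def ocba_allocation(means, stdevs, total_budget):
    """Classic OCBA sampling ratios for identifying the design with the smallest
    mean: non-best designs get budget proportional to (sigma_i / delta_i)^2,
    the best design gets sigma_b * sqrt(sum_i (N_i / sigma_i)^2)."""
    k = len(means)
    best = min(range(k), key=lambda i: means[i])
    weights = [0.0] * k
    for i in range(k):
        if i == best:
            continue
        delta = means[i] - means[best]
        weights[i] = (stdevs[i] / max(delta, 1e-12)) ** 2
    weights[best] = stdevs[best] * math.sqrt(sum((weights[i] / stdevs[i]) ** 2
                                                 for i in range(k) if i != best))
    total = sum(weights)
    return [total_budget * w / total for w in weights]

if __name__ == "__main__":
    # Hypothetical distances to a reference point (means) and their standard deviations.
    print(ocba_allocation(means=[0.30, 0.35, 0.60], stdevs=[0.05, 0.10, 0.08],
                          total_budget=100))
```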

Series
COIN Report ; 2015020
Keywords
Evolutionary multi-objective optimization, guided search, preference-based optimization, reference point, dynamic resampling, budget allocation, decision support, simulation-based optimization, stochastic systems
National Category
Computer and Information Sciences; Robotics
Research subject
Natural sciences; Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-11494 (URN)
Conference
23rd International Conference on Multiple Criteria Decision Making MCDM 2015, August 3-7, 2015, Hamburg, Germany
Funder
Knowledge Foundation
Available from: 2015-09-07. Created: 2015-09-07. Last updated: 2018-03-29. Bibliographically approved.
8. Hybrid Dynamic Resampling for Guided Evolutionary Multi-Objective Optimization
2015 (English). In: Evolutionary Multi-Criterion Optimization: 8th International Conference, EMO 2015, Guimarães, Portugal, March 29 – April 1, 2015. Proceedings, Part I / [ed] António Gaspar-Cunha; Carlos Henggeler Antunes; Carlos Coello Coello, Springer International Publishing Switzerland, 2015, p. 366-380. Conference paper, Published paper (Refereed)
Abstract [en]

In Guided Evolutionary Multi-objective Optimization the goal is to find a diverse, but locally focused non-dominated front in a decision maker's area of interest, as close as possible to the true Pareto-front. The optimization can focus its efforts towards the preferred area and achieve a better result [9, 17, 7, 13]. The modeled and simulated systems are often stochastic, and a common method to handle the objective noise is resampling. The given preference information makes it possible to define better resampling strategies which further improve the optimization result. In this paper, resampling strategies are proposed that base the sampling allocation on multiple factors, and thereby combine multiple resampling strategies proposed by the authors in [15]. These factors are, for example, the Pareto-rank of a solution and its distance to the decision maker's area of interest. The proposed hybrid Dynamic Resampling strategy DR2 is evaluated on the Reference point-guided NSGA-II optimization algorithm (R-NSGA-II) [9].
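The following is not the DR2 strategy itself, only an illustration of how two normalized allocation factors, Pareto rank and reference-point distance, could be combined into a single sampling need (all constants and names are assumptions):

```python
import math

def hybrid_samples(pareto_rank, max_rank, objective_vector, reference_point,
                   d_max=1.0, b_min=1, b_max=20):
    """Combine two factors in [0, 1]: a better Pareto rank and a smaller distance
    to the reference point both increase the number of samples a solution gets."""
    rank_factor = 1.0 - pareto_rank / max(max_rank, 1)
    distance_factor = max(0.0, 1.0 - math.dist(objective_vector, reference_point) / d_max)
    need = rank_factor * distance_factor
    return b_min + round(need * (b_max - b_min))

if __name__ == "__main__":
    # A non-dominated solution near the reference point receives close to b_max samples.
    print(hybrid_samples(pareto_rank=0, max_rank=5,
                         objective_vector=(0.45, 0.45), reference_point=(0.4, 0.4)))
```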

Place, publisher, year, edition, pages
Springer International Publishing Switzerland, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9018
Keywords
evolutionary multi-objective optimization, guided search, reference point, dynamic resampling, budget allocation
National Category
Computer and Information Sciences; Robotics
Research subject
Natural sciences; Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-10814 (URN); 10.1007/978-3-319-15934-8_25 (DOI); 000361702100025 (); 2-s2.0-84925337390 (Scopus ID); 978-3-319-15933-1 (ISBN); 978-3-319-15934-8 (ISBN)
Conference
8th International Conference on Evolutionary Multi-Criterion Optimization, March 2015, Guimarães, Portugal
Projects
FFI-HSO
Funder
Knowledge Foundation; Vinnova
Note

Springer Cham

This study was partially funded by VINNOVA, Sweden, through the FFI-HSO project. The authors gratefully acknowledge their provision of research funding.

Available from: 2015-03-30. Created: 2015-03-30. Last updated: 2022-09-28. Bibliographically approved.
9. Hybrid Dynamic Resampling Algorithms for Evolutionary Multi-objective Optimization of Invariant-Noise Problems
2016 (English). In: Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016, Proceedings, Part II / [ed] Giovanni Squillero, Paolo Burelli, 2016, Vol. 9598, p. 311-326. Conference paper, Published paper (Refereed)
Abstract [en]

In Simulation-based Evolutionary Multi-objective Optimization (EMO) the available time for optimization is usually limited. Since many real-world optimization problems are stochastic models, the optimization algorithm has to employ a noise compensation technique for the objective values. This article analyzes Dynamic Resampling algorithms for handling the objective noise. Dynamic Resampling improves the accuracy of the objective values by spending more time evaluating the solutions multiple times, which tightens the optimization time limit even further. This circumstance can be exploited to design Dynamic Resampling algorithms whose sampling allocation strategy takes the time limit into account. In our previous work, we investigated Time-based Hybrid Resampling algorithms for Preference-based EMO. In this article, we extend our studies to general EMO, which aims to find a converged and diverse set of alternative solutions along the whole Pareto-front of the problem. We focus on problems with an invariant noise level, i.e., a flat noise landscape.

Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9598
Keywords
Evolutionary multi-objective optimization, Simulation-based optimization, Noise, Dynamic resampling, Budget allocation, Hybrid
National Category
Robotics; Information Systems
Research subject
Natural sciences; Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-12074 (URN); 10.1007/978-3-319-31153-1_21 (DOI); 000467438600021 (); 2-s2.0-84962257415 (Scopus ID); 978-3-319-31152-4 (ISBN); 978-3-319-31153-1 (ISBN)
Conference
19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016
Projects
BlixtSim, IDSS
Funder
Knowledge Foundation
Available from: 2016-03-29. Created: 2016-03-29. Last updated: 2020-02-14. Bibliographically approved.

Open Access in DiVA

Dissertation Florian Siegmund (12615 kB)
File information
File name: FULLTEXT01.pdf
File size: 12615 kB
Checksum (SHA-512): 7e1721973b3c3ff6fdc111116c1a6a7cfbd29196f3117606ec36153387426a9d20ab3cc3505829c8c36e0c93835094cca436802d8218ffa3c5779f5bb38a60d2
Type: fulltext (application/pdf)
