Högskolan i Skövde

his.se Publications
1 - 13 of 13
  • 1.
    Deb, Kalyanmoy
    et al.
    Department of Electrical and Computer Engineering, Michigan State University, USA.
    Siegmund, Florian
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    R-HV: A Metric for Computing Hyper-volume for Reference Point-based EMOs, 2015. In: Swarm, Evolutionary, and Memetic Computing: 5th International Conference, SEMCCO 2014, Bhubaneswar, India, December 18-20, 2014, Revised Selected Papers / [ed] Bijaya Ketan Panigrahi, Ponnuthurai Nagaratnam Suganthan & Swagatam Das, Springer, 2015, p. 98-110. Chapter in book (Refereed)
    Abstract [en]

    For evaluating the performance of a multi-objective optimization algorithm in finding the entire efficient front, a number of metrics, such as hypervolume and inverse generational distance, exist. However, for evaluating an EMO algorithm in finding a subset of the efficient frontier, the existing metrics are inadequate: few performance metrics exist for evaluating a partial preferred efficient set. In this paper, we suggest a metric which can be used for such purposes for both attainable and unattainable reference points. Results on a number of two-objective problems reveal its working principle and its importance in assessing different algorithms. The results are promising and encouraging for its further use.
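
    As context for the metric described above: R-HV builds on the standard hypervolume measure. The sketch below is not the paper's R-HV procedure, only a minimal 2-D hypervolume computation (minimization, mutually non-dominated points, measured against a bounding point that is worse in both objectives) to make the underlying quantity concrete.

    ```python
    # Minimal sketch (not the paper's R-HV procedure): standard 2-D hypervolume
    # of a mutually non-dominated set, measured against a bounding point `bound`
    # that is worse than all solutions in both (minimized) objectives.
    def hypervolume_2d(points, bound):
        # Sort by the first objective; each point then contributes one rectangle.
        pts = sorted(points)
        hv, prev_f2 = 0.0, bound[1]
        for f1, f2 in pts:
            hv += (bound[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
        return hv

    # Example: front {(1, 4), (2, 2), (4, 1)} with bounding point (5, 5).
    print(hypervolume_2d([(1, 4), (2, 2), (4, 1)], (5, 5)))  # -> 11.0
    ```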

  • 2.
    Ng, Amos H. C.
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Siegmund, Florian
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Deb, Kalyanmoy
    Michigan State University, East Lansing, Michigan, USA.
    Reference point based evolutionary multi-objective optimization with dynamic resampling for production systems improvement, 2018. In: Journal of Systems and Information Technology, ISSN 1328-7265, E-ISSN 1758-8847, Vol. 20, no. 4, p. 489-512. Article in journal (Refereed)
    Abstract [en]

    Purpose

    Stochastic simulation is a popular tool among practitioners and researchers alike for quantitative analysis of systems. Recent advancement in research on formulating production systems improvement problems into multi-objective optimizations has provided the possibility to predict the optimal trade-offs between improvement costs and system performance before making the final decision for implementation. However, the fact that stochastic simulations rely on running a large number of replications to cope with the randomness and obtain accurate statistical estimates of the system outputs has posed a serious issue for using this kind of multi-objective optimization in practice, especially with complex models. Therefore, the purpose of this study is to investigate the performance enhancements of a reference point based evolutionary multi-objective optimization algorithm in practical production systems improvement problems, when combined with various dynamic re-sampling mechanisms.

    Design/methodology/approach

    Many algorithms consider the preferences of decision makers to converge to optimal trade-off solutions faster. There also exist advanced dynamic resampling procedures to avoid wasting a multitude of simulation replications on non-optimal solutions. However, very few attempts have been made to study the advantages of combining these two approaches to further enhance the performance of computationally expensive optimizations for complex production systems. Therefore, this paper proposes combinations of preference-based guided search with dynamic resampling mechanisms in an evolutionary multi-objective optimization algorithm, to lower both the computational cost of re-sampling and the total number of simulation evaluations.

    Findings

    This paper shows the performance enhancements of the reference-point based algorithm, R-NSGA-II, when augmented with three different dynamic resampling mechanisms with increasing degrees of statistical sophistication, namely time-based, distance-rank and optimal computing budget allocation, when applied to two real-world production system improvement studies. The results show that the more stochasticity the simulation models exhibit, the more the statistically advanced dynamic resampling mechanisms enhance the performance of the optimization process.

    Originality/value

    Contributions of this paper include combining decision makers’ preferences with dynamic resampling procedures; performance evaluations on two real-world production system improvement studies; and illustrating that statistically advanced dynamic resampling mechanisms are needed for noisy models.
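
    One of the resampling mechanisms named in the findings is time-based dynamic resampling. The sketch below shows one plausible form of such a rule, where the number of replications per solution grows as the simulation budget is consumed; the bounds and the acceleration exponent are illustrative assumptions, not the article's exact settings.

    ```python
    # Hedged sketch of a time-based dynamic resampling rule of the kind the
    # abstract mentions: the number of replications spent per solution grows as
    # the overall simulation budget is used up, so late (near-converged)
    # solutions are estimated more accurately than early, exploratory ones.
    # The bounds and the acceleration exponent are illustrative assumptions.
    def time_based_samples(used_budget, total_budget, b_min=1, b_max=15, accel=2.0):
        elapsed = min(used_budget / total_budget, 1.0)  # fraction of budget spent
        return b_min + round((b_max - b_min) * elapsed ** accel)

    # Example: few replications early in the run, many late in the run.
    print(time_based_samples(1_000, 20_000))   # -> 1
    print(time_based_samples(19_000, 20_000))  # -> 14
    ```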

  • 3.
    Siegmund, Florian
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization of Stochastic Systems: Improving the efficiency of time-constrained optimization, 2016. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In preference-based Evolutionary Multi-objective Optimization (EMO), the decision maker is looking for a diverse, but locally focused non-dominated front in a preferred area of the objective space, as close as possible to the true Pareto-front. Since solutions found outside the area of interest are considered less important or even irrelevant, the optimization can focus its efforts on the preferred area and find the solutions that the decision maker is looking for more quickly, i.e., with fewer simulation runs. This is particularly important if the available time for optimization is limited, as is the case in many real-world applications. Although previous studies on using this kind of guided search with preference information, for example with the R-NSGA-II algorithm, have shown positive results, only very few of them considered the stochastic outputs of simulated systems.

    In the literature, this phenomenon of stochastic evaluation functions is sometimes called noisy optimization. If an EMO algorithm is run without any countermeasure to noisy evaluation functions, the performance will deteriorate, compared to the case if the true mean objective values are known. While, in general, static resampling of solutions to reduce the uncertainty of all evaluated design solutions can allow EMO algorithms to avoid this problem, it will significantly increase the required simulation time/budget, as many samples will be wasted on candidate solutions which are inferior. In comparison, a Dynamic Resampling (DR) strategy can allow the exploration and exploitation trade-off to be optimized, since the required accuracy about objective values varies between solutions. In a dense, converged population, it is important to know the accurate objective values, whereas noisy objective values are less harmful when an algorithm is exploring the objective space, especially early in the optimization process. Therefore, a well-designed Dynamic Resampling strategy which resamples each solution carefully, according to its resampling need, can help an EMO algorithm achieve better results than a static resampling allocation.

    While there are abundant studies in Simulation-based Optimization that considered Dynamic Resampling, the survey done in this study has found that there is no related work that considered how combinations of Dynamic Resampling and preference-based guided search can further enhance the performance of EMO algorithms, especially if the problems under study involve computationally expensive evaluations, like production systems simulation. The aim of this thesis is therefore to study, design and then to compare new combinations of preference-based EMO algorithms with various DR strategies, in order to improve the solution quality found by simulation-based multi-objective optimization with stochastic outputs, under a limited function evaluation or simulation budget. Specifically, based on the advantages and flexibility offered by interactive, reference point-based approaches, studies of the performance enhancements of R-NSGA-II when augmented with various DR strategies, with increasing degrees of statistical sophistication, as well as several adaptive features in terms of optimization parameters, have been made. The research results have clearly shown that optimization results can be improved, if a hybrid DR strategy is used and adaptive algorithm parameters are chosen according to the noise level and problem complexity. In the case of a limited simulation budget, the results allow the conclusions that both decision maker preferences and DR should be used at the same time to achieve the best results in simulation-based multi-objective optimization.
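
    To make the resampling idea above concrete, the sketch below shows a minimal standard-error-based dynamic resampling loop: a solution is replicated only until the standard error of its estimated mean objective falls below a tolerance or a per-solution cap is reached. The simulate(design) call and the thresholds are assumed placeholders, not interfaces from the thesis.

    ```python
    import statistics

    # Minimal sketch of the standard-error idea behind Dynamic Resampling:
    # replicate a solution only until the standard error of its estimated mean
    # objective drops below a tolerance, or a per-solution cap is reached.
    # `simulate(design)` stands for one stochastic evaluation of a design
    # (assumed placeholder interface); the routine handles one objective.
    def resample_until_confident(simulate, design, tol=0.05, n_min=3, n_max=30):
        samples = [simulate(design) for _ in range(n_min)]
        while len(samples) < n_max:
            std_err = statistics.stdev(samples) / len(samples) ** 0.5
            if std_err <= tol:
                break
            samples.append(simulate(design))
        return statistics.mean(samples), len(samples)
    ```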

  • 4.
    Siegmund, Florian
    et al.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Bernedixen, Jacob
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Pehrsson, Leif
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Mechanical Engineering, Indian Institute of Technology Kanpur, India.
    Reference point-based evolutionary multi-objective optimization for industrial systems simulation, 2012. In: Proceedings of the 2012 Winter Simulation Conference (WSC) / [ed] C. Laroque; J. Himmelspach; R. Pasupathy; O. Rose; A. M. Uhrmacher, IEEE conference proceedings, 2012. Conference paper (Refereed)
    Abstract [en]

    In Multi-objective Optimization the goal is to present a set of Pareto-optimal solutions to the decision maker (DM). One of these solutions is then chosen according to the DM's preferences. Given that the DM has some general idea of what type of solution is preferred, a more efficient optimization could be run. This can be accomplished by letting the optimization algorithm make use of this preference information and guide the search towards better solutions that correspond to the preferences. One example of such an algorithm is the reference point-based NSGA-II algorithm (R-NSGA-II), by which user-specified reference points can be used to guide the search in the objective space and the diversity of the focused Pareto-set can be controlled. In this paper, the applicability of the R-NSGA-II algorithm in solving industrial-scale simulation-based optimization problems is illustrated through a case study of the improvement of a production line.
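
    The sketch below is a much-simplified illustration of the selection idea summarized above: within a non-dominated front, solutions are preferred by their distance to a user-specified reference point, and an epsilon-based thinning step controls the diversity of the focused set. It omits normalization, multiple fronts and multiple reference points, and is not the full R-NSGA-II algorithm.

    ```python
    import math

    # Simplified illustration of the R-NSGA-II selection idea: within one
    # non-dominated front, prefer solutions near a reference point, and use an
    # epsilon-based thinning step to keep only one representative of very
    # similar solutions (controls the spread of the focused front).
    # Not the full algorithm; normalization, multiple fronts and multiple
    # reference points are omitted.
    def ref_point_distance(obj, ref):
        return math.dist(obj, ref)

    def select_from_front(front, ref, epsilon, k):
        ranked = sorted(front, key=lambda obj: ref_point_distance(obj, ref))
        chosen = []
        for obj in ranked:
            # Keep obj only if it is at least epsilon away from all kept points.
            if all(max(abs(a - b) for a, b in zip(obj, c)) >= epsilon for c in chosen):
                chosen.append(obj)
            if len(chosen) == k:
                break
        return chosen

    # Example: keep 2 solutions near the reference point (0, 0), at least 0.5 apart.
    front = [(1.0, 4.0), (1.2, 3.9), (2.0, 2.0), (4.0, 1.0)]
    print(select_from_front(front, ref=(0.0, 0.0), epsilon=0.5, k=2))
    ```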

  • 5.
    Siegmund, Florian
    et al.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Electrical and Computer Engineering, Michigan State University, USA.
    Karlsson, Alexander
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Dynamic Resampling for Guided Evolutionary Multi-Objective Optimization of Stochastic Systems, 2013. Conference paper (Refereed)
    Abstract [en]

    In Multi-objective Optimization many solutions have to be evaluated in order to provide the decision maker with a diverse Pareto-front. In Simulation-based Optimization the number of optimization function evaluations is very limited. If preference information is available, however, the available function evaluations can be used more effectively by guiding the optimization towards interesting, preferred regions. One such algorithm for guided search is the R-NSGA-II algorithm. It takes reference points provided by the decision maker and guides the optimization towards areas of the Pareto-front close to the reference points.

    In Simulation-based Optimization the modeled systems are often stochastic, and a reliable quality assessment of system configurations by resampling requires many simulation runs. Therefore, optimization practitioners make use of dynamic resampling algorithms that distribute the available function evaluations intelligently among the solutions to be evaluated. Criteria for sampling allocation can be, among others, objective value variability, closeness to the Pareto-front as indicated by elapsed time, or the dominance relations between different solutions based on distances between objective vectors and their variability.

    In our work we combine R-NSGA-II with several resampling algorithms based on the above-mentioned criteria. Due to the preference information, R-NSGA-II has fitness information based on the distance to reference points at its disposal. We propose a resampling strategy that allocates more samples to solutions close to a reference point.

    Previously, we proposed extensions of R-NSGA-II that adapt algorithm parameters like population size, population diversity, or the strength of the Pareto-dominance relation continuously to optimization problem characteristics. We show how resampling algorithms can be integrated with those extensions.

    The applicability of the proposed algorithms is shown in a case study of an industrial production line for car manufacturing.

  • 6.
    Siegmund, Florian
    et al.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Electrical and Computer Engineering, Michigan State University, USA.
    Ng, Amos H. C.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Adaptive Guided Evolutionary Multi-Objective Optimization, 2013. Conference paper (Refereed)
    Abstract [en]

    In Multi-objective Optimization many solutions have to be evaluated in order to provide the decision maker with a diverse Pareto-front. In Simulation-based Optimization the number of optimization function evaluations is very limited. If preference information is available, however, the available function evaluations can be used more effectively by guiding the optimization towards interesting, preferred regions. One such algorithm for guided search is the Reference-point guided NSGA-II. It takes reference points provided by the decision maker and guides the optimization towards areas of the Pareto-front close to the reference points.

    We propose several extensions of R-NSGA-II. In the beginning of the optimization runtime the population is spread out in the objective space, while towards the end of the runtime most solutions are close to the reference points. The purpose of a large population is to avoid local optima and to explore the search space, which is less important once the algorithm has converged to the reference points. Therefore, we reduce the population size towards the end of the runtime. R-NSGA-II controls the objective space diversity through the epsilon parameter; we reduce the diversity in the population as it approaches the reference points. In a previous study we showed that R-NSGA-II keeps a high diversity until late in the optimization run, which is caused by the Pareto-fitness and slows down the progress towards the reference points. We constrain the Pareto-fitness to force a faster convergence. For the same reason, an approach is presented that delays the use of the Pareto-fitness: initially, the fitness is based only on reference point distance and diversity; later, when the population has converged towards the Pareto-front, Pareto-fitness is considered as primary and distance as secondary fitness.
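
    A hedged sketch of the kind of time-based parameter adaptation described above: population size and the epsilon diversity parameter shrink as the optimization budget is used up, and Pareto-fitness is switched on only after an initial phase. The schedules and constants are illustrative assumptions, not the paper's exact settings.

    ```python
    # Hedged sketch of the time-based parameter adaptation the abstract
    # describes: shrink the population size and the diversity parameter epsilon
    # as the run progresses, and delay the use of Pareto-fitness until an
    # initial phase of pure reference-point-driven search is over.
    # Shapes and constants below are illustrative assumptions.
    def adapted_parameters(elapsed, pop_start=100, pop_end=20,
                           eps_start=0.05, eps_end=0.001, pareto_after=0.3):
        # elapsed is the fraction of the optimization budget already used, in [0, 1]
        pop_size = round(pop_start + (pop_end - pop_start) * elapsed)
        epsilon = eps_start + (eps_end - eps_start) * elapsed
        use_pareto_fitness = elapsed >= pareto_after
        return pop_size, epsilon, use_pareto_fitness

    print(adapted_parameters(0.1))  # roughly (92, 0.0451, False)
    print(adapted_parameters(0.8))  # roughly (36, 0.0108, True)
    ```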

  • 7.
    Siegmund, Florian
    et al.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India.
    A Comparative Study of Dynamic Resampling Strategies for Guided Evolutionary Multi-Objective Optimization, 2013. In: 2013 IEEE Congress on Evolutionary Computation, CEC 2013, IEEE conference proceedings, 2013, p. 1826-1835. Conference paper (Refereed)
    Abstract [en]

    In Evolutionary Multi-objective Optimization many solutions have to be evaluated to provide the decision maker with a diverse choice of solutions along the Pareto-front, in particular for high-dimensional optimization problems. In Simulation-based Optimization the modeled systems are complex and require long simulation times. In addition, the evaluated systems are often stochastic, and reliable quality assessment of system configurations by resampling requires many simulation runs. As a countermeasure for the high number of simulation runs caused by multiple optimization objectives, the optimization can be focused on interesting parts of the Pareto-front, as is done by the Reference point-guided NSGA-II algorithm (R-NSGA-II) [9]. The number of evaluations needed for the resampling of solutions can be reduced by intelligent resampling algorithms that allocate just as much sampling budget as is needed in different situations during the optimization run. In this paper we propose and compare resampling algorithms that support the R-NSGA-II algorithm on optimization problems with stochastic evaluation functions. © 2013 IEEE.

  • 8.
    Siegmund, Florian
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Electrical and Computer Engineering, Michigan State University, USA.
    A Comparative Study of Fast Adaptive Preference-Guided Evolutionary Multi-objective Optimization, 2017. In: Evolutionary Multi-Criterion Optimization: 9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017, Proceedings / [ed] Heike Trautmann, Günter Rudolph, Kathrin Klamroth, Oliver Schütze, Margaret Wiecek, Yaochu Jin, and Christian Grimme, Springer, 2017, Vol. 10173, p. 560-574. Conference paper (Refereed)
    Abstract [en]

    In Simulation-based Evolutionary Multi-objective Optimization, the number of simulation runs is very limited, since the complex simulation models require long execution times. With the help of preference information, the optimization result can be improved by guiding the optimization towards relevant areas in the objective space with, for example, the Reference Point-based NSGA-II algorithm (R-NSGA-II). Since the Pareto-relation is the primary fitness function in R-NSGA-II, the algorithm focuses on exploring the objective space with high diversity. Only after the population has converged close to the Pareto-front does the influence of the reference point distance as secondary fitness criterion increase, and the algorithm converges towards the preferred area on the Pareto-front.

    In this paper, we propose a set of extensions of R-NSGA-II which adaptively control the algorithm behavior, in order to converge faster towards the reference point. The adaptation can be based on criteria such as elapsed optimization time or the reference point distance, or a combination thereof. In order to evaluate the performance of the adaptive extensions of R-NSGA-II, a performance metric for reference point-based EMO algorithms based on the Hypervolume measure, the Focused Hypervolume metric, is used. It measures convergence and diversity of the population in the preferred area around the reference point. The results are evaluated on two benchmark problems of different complexity and a simplistic production line model.
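
    The Focused Hypervolume metric itself is defined in the paper; the sketch below only illustrates the general idea in a simplified 2-D form, by restricting the front to a preferred region of assumed radius around the reference point and computing an ordinary hypervolume of that subset (the 2-D routine repeats the sketch under entry 1).

    ```python
    import math

    # Rough illustration of the idea behind a focused hypervolume measure (NOT
    # the paper's exact definition): keep only the solutions inside a preferred
    # region of radius `radius` around the reference point, then compute the
    # ordinary 2-D hypervolume of that subset against a bounding point.
    # Minimization and a mutually non-dominated input set are assumed.
    def hypervolume_2d(points, bound):
        pts = sorted(points)
        hv, prev_f2 = 0.0, bound[1]
        for f1, f2 in pts:
            hv += (bound[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
        return hv

    def focused_hv(points, ref_point, radius, bound):
        focused = [p for p in points if math.dist(p, ref_point) <= radius]
        return hypervolume_2d(focused, bound) if focused else 0.0
    ```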

  • 9.
    Siegmund, Florian
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Electrical and Computer Engineering, Michigan State University, USA.
    A Ranking and Selection Strategy for Preference-based Evolutionary Multi-objective Optimization of Variable-Noise Problems, 2016. In: 2016 IEEE Congress on Evolutionary Computation (CEC), IEEE conference proceedings, 2016, p. 3035-3044. Conference paper (Refereed)
    Abstract [en]

    In simulation-based Evolutionary Multi-objective Optimization the number of simulation runs is very limited, since the complex simulation models require long execution times. With the help of preference information, the optimization result can be improved by guiding the optimization towards relevant areas in the objective space, for example with the R-NSGA-II algorithm [9], which uses a reference point specified by the decision maker. When stochastic systems are simulated, the uncertainty of the objective values might degrade the optimization performance. By sampling the solutions multiple times, this uncertainty can be reduced. However, resampling methods reduce the overall number of evaluated solutions, which potentially worsens the optimization result. In this article, a Dynamic Resampling strategy is proposed which identifies the solutions closest to the reference point that guides the population of the Evolutionary Algorithm. We apply a single-objective Ranking and Selection resampling algorithm in the selection step of R-NSGA-II, which considers the stochastic reference point distance and its variance to identify the best solutions. We propose and evaluate different ways to integrate the sampling allocation method into the Evolutionary Algorithm. On the one hand, the Dynamic Resampling algorithm is made adaptive to support the EA selection step, and it is customized to be used in the time-constrained optimization scenario. Furthermore, it is controlled by other resampling criteria, in the same way as other hybrid DR algorithms. On the other hand, R-NSGA-II is modified to rely more on the scalar reference point distance as fitness function. The results are evaluated on a benchmark problem with a variable noise landscape.
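
    As an illustration of the Ranking and Selection building block mentioned above, the sketch below performs a one-shot Optimal Computing Budget Allocation (OCBA) step over the scalar reference-point distance (to be minimized): candidates whose estimated distance is close to the current best and whose estimates are noisy receive more of the extra replications. The standard OCBA ratios are used; how the allocation is interleaved with the R-NSGA-II selection step is detailed in the paper.

    ```python
    # Compact sketch of a one-shot OCBA allocation over the scalar
    # reference-point distance (to be minimized): candidates whose estimated
    # distance is close to the current best and whose estimate is noisy receive
    # more extra replications. Standard OCBA ratios; rounding means the
    # returned counts may not sum exactly to extra_budget.
    def ocba_allocation(means, stds, extra_budget):
        n = len(means)
        best = min(range(n), key=lambda i: means[i])
        ratios = [0.0] * n
        for i in range(n):
            if i == best:
                continue
            delta = max(abs(means[i] - means[best]), 1e-12)
            ratios[i] = (max(stds[i], 1e-12) / delta) ** 2
        ratios[best] = max(stds[best], 1e-12) * sum(
            (ratios[i] / max(stds[i], 1e-12)) ** 2 for i in range(n) if i != best
        ) ** 0.5
        total = sum(ratios)
        return [round(extra_budget * r / total) for r in ratios]

    # Example: the noisy near-best candidate gets most of the extra replications.
    print(ocba_allocation(means=[1.0, 1.1, 2.0], stds=[0.2, 0.4, 0.3], extra_budget=30))
    # -> [10, 20, 0]
    ```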

  • 10.
    Siegmund, Florian
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Electrical and Computer Engineering, Michigan State University, USA.
    Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems, 2015. Conference paper (Refereed)
    Abstract [en]

    In Multi-objective Optimization many solutions have to be evaluated in order to provide the decision maker with a diverse choice of solutions along the Pareto-front. In Simulation-based Optimization the number of optimization function evaluations is usually very limited due to the long execution times of the simulation models. If preference information is available, however, the available number of function evaluations can be used more effectively. The optimization can be performed as a guided, focused search which returns solutions close to interesting, preferred regions of the Pareto-front. One such algorithm for guided search is the Reference-point guided Non-dominated Sorting Genetic Algorithm II, R-NSGA-II. It is a population-based Evolutionary Algorithm that finds a set of non-dominated solutions in a single optimization run. R-NSGA-II takes reference points in the objective space provided by the decision maker and guides the optimization towards areas of the Pareto-front close to the reference points.

    In Simulation-based Optimization the modeled and simulated systems are often stochastic, and a common method to handle objective noise is Resampling. Reliable quality assessment of system configurations by resampling requires many simulation runs. Therefore, the optimization process can benefit from Dynamic Resampling algorithms that distribute the available function evaluations among the solutions in the best possible way. Solutions can vary in their sampling need. For example, solutions with highly variable objective values have to be sampled more times to reduce their objective value standard error. Dynamic resampling algorithms assign as many samples to them as are needed to reduce the uncertainty about their objective values below a certain threshold. Another criterion the number of samples can be based on is a solution's closeness to the Pareto-front. For solutions that are close to the Pareto-front, it is likely that they are members of the final result set. It is therefore important to have accurate knowledge of their objective values available, in order to be able to tell which solutions are better than others. Usually, the distance to the Pareto-front is not known, but another criterion can be used as an indication for it instead: the elapsed optimization time. A third example of a resampling criterion can be the dominance relations between different solutions. The optimization algorithm has to determine, for pairs of solutions, which is the better one. Here both the distances between objective vectors and the variance of the objective values have to be considered, which requires a more advanced resampling technique. This is a Ranking and Selection problem.

    If R-NSGA-II is applied in a scenario with a stochastic fitness function, resampling algorithms have to be used to support it in the best way and to avoid a performance degradation due to uncertain knowledge about the objective values of solutions. In our work we combine R-NSGA-II with several resampling algorithms that are based on the above-mentioned resampling criteria, or combinations thereof, and evaluate which criteria the sampling allocation can best be based on, and in which situations.

    Due to the preference information, R-NSGA-II has an important piece of fitness information about the solutions at its disposal: the distance to the reference points. We propose a resampling strategy that allocates more samples to solutions close to a reference point. This idea is then extended with a resampling technique that compares solutions based on their distance to the reference point. We base this algorithm on a classical Ranking and Selection algorithm, Optimal Computing Budget Allocation (OCBA), and show how OCBA can be applied to support R-NSGA-II. We show the applicability of the proposed algorithms in a case study of an industrial production line for car manufacturing.

  • 11.
    Siegmund, Florian
    et al.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Technology and Society. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India.
    Finding a preferred diverse set of Pareto-optimal solutions for a limited number of function calls, 2012. In: 2012 IEEE Congress on Evolutionary Computation, IEEE, 2012, p. 2417-2424. Conference paper (Refereed)
    Abstract [en]

    Evolutionary Multi-objective Optimization aims at finding a diverse set of Pareto-optimal solutions from which the decision maker can choose the solution that best fits her or his preferences. In the case of limited time (i.e., a limited number of function evaluations) for optimization, this preference information may be used to speed up the search by making the algorithm focus directly on interesting areas of the objective space. The R-NSGA-II algorithm (1) uses reference points, specified according to the preferences of the user, towards which the search is guided. In this paper, we propose an extension to R-NSGA-II that limits the Pareto-fitness to speed up the search for a limited number of function calls. It avoids automatically selecting all solutions of the first front of the candidate set into the next population. In this way, non-preferred Pareto-optimal solutions are not considered, thereby accelerating the search process. With focusing comes the necessity to maintain diversity. In R-NSGA-II this is achieved with the help of a clustering algorithm which keeps the found solutions above a minimum distance ε. In this paper, we propose a self-adaptive ε approach that autonomously provides the decision maker with a more diverse solution set if the found Pareto-set is situated further away from a reference point. Similarly, the approach also varies the diversity inside the Pareto-set. This helps the decision maker to get a better overview of the available solutions and supports decisions about how to adapt the reference points.
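
    A hedged sketch of the self-adaptive ε idea described above: the further the current non-dominated set lies from the reference point, the larger ε is chosen, so the decision maker is shown a broader spread of solutions. The linear scaling rule and constants below are illustrative assumptions, not the paper's exact formula.

    ```python
    import math

    # Hedged sketch of the self-adaptive epsilon idea from the abstract: the
    # further the current non-dominated set is from the reference point, the
    # larger the diversity parameter epsilon, so the decision maker still gets
    # a broad overview. The linear scaling rule is an illustrative assumption.
    def adaptive_epsilon(front, ref_point, eps_min=0.001, scale=0.05):
        d_front = min(math.dist(obj, ref_point) for obj in front)
        return eps_min + scale * d_front

    front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
    print(adaptive_epsilon(front, ref_point=(0.0, 0.0)))  # larger eps: front far away
    print(adaptive_epsilon(front, ref_point=(2.0, 2.1)))  # smaller eps: front nearby
    ```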

  • 12.
    Siegmund, Florian
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Electrical and Computer Engineering, Michigan State University, East Lansing, USA.
    Hybrid Dynamic Resampling Algorithms for Evolutionary Multi-objective Optimization of Invariant-Noise Problems, 2016. In: Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016, Proceedings, Part II / [ed] Giovanni Squillero, Paolo Burelli, 2016, Vol. 9598, p. 311-326. Conference paper (Refereed)
    Abstract [en]

    In Simulation-based Evolutionary Multi-objective Optimization (EMO) the available time for optimization usually is limited. Since many real-world optimization problems are stochastic models, the optimization algorithm has to employ a noise compensation technique for the objective values. This article analyzes Dynamic Resampling algorithms for handling the objective noise. Dynamic Resampling improves the objective value accuracy by spending more time to evaluate the solutions multiple times, which tightens the optimization time limit even more. This circumstance can be used to design Dynamic Resampling algorithms with a better sampling allocation strategy that uses the time limit. In our previous work, we investigated Time-based Hybrid Resampling algorithms for Preference-based EMO. In this article, we extend our studies to general EMO which aims to find a converged and diverse set of alternative solutions along the whole Pareto-front of the problem. We focus on problems with an invariant noise level, i.e. a flat noise landscape.

  • 13.
    Siegmund, Florian
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Ng, Amos H. C.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Deb, Kalyanmoy
    Department of Electrical and Computer Engineering, Michigan State University, USA.
    Hybrid Dynamic Resampling for Guided Evolutionary Multi-Objective Optimization, 2015. In: Evolutionary Multi-Criterion Optimization: 8th International Conference, EMO 2015, Guimarães, Portugal, March 29 – April 1, 2015, Proceedings, Part I / [ed] António Gaspar-Cunha; Carlos Henggeler Antunes; Carlos Coello Coello, Springer International Publishing Switzerland, 2015, p. 366-380. Conference paper (Refereed)
    Abstract [en]

    In Guided Evolutionary Multi-objective Optimization the goal is to find a diverse, but locally focused non-dominated front in a decision maker's area of interest, as close as possible to the true Pareto-front. The optimization can focus its efforts on the preferred area and achieve a better result [9, 17, 7, 13]. The modeled and simulated systems are often stochastic, and a common method to handle the objective noise is Resampling. The given preference information allows better resampling strategies to be defined, which further improve the optimization result. In this paper, resampling strategies are proposed that base the sampling allocation on multiple factors, and thereby combine multiple resampling strategies proposed by the authors in [15]. These factors are, for example, the Pareto-rank of a solution and its distance to the decision maker's area of interest. The proposed hybrid Dynamic Resampling Strategy DR2 is evaluated on the Reference point-guided NSGA-II optimization algorithm (R-NSGA-II) [9].
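
    A hedged sketch of how several allocation factors can be combined into a single sampling budget, in the spirit of the hybrid strategy described above: a Pareto-rank factor and a reference-point-distance factor are multiplied together. The exact combination rule and constants used by DR2 are given in the paper.

    ```python
    # Hedged sketch of combining multiple resampling factors into one
    # allocation, in the spirit of the hybrid DR2 strategy described above:
    # a Pareto-rank factor (better rank -> more samples) is multiplied with a
    # reference-point distance factor (closer to the preferred area -> more
    # samples). The combination rule and constants are illustrative assumptions.
    def hybrid_samples(pareto_rank, ref_distance, max_rank, max_distance,
                       b_min=1, b_max=15):
        rank_factor = 1.0 - min(pareto_rank / max(max_rank, 1), 1.0)
        dist_factor = 1.0 - min(ref_distance / max(max_distance, 1e-12), 1.0)
        return b_min + round((b_max - b_min) * rank_factor * dist_factor)

    # A rank-0 solution right next to the reference point gets close to the
    # full budget; a high-rank solution far away gets only the minimum.
    print(hybrid_samples(pareto_rank=0, ref_distance=0.1, max_rank=5, max_distance=2.0))  # -> 14
    print(hybrid_samples(pareto_rank=4, ref_distance=1.8, max_rank=5, max_distance=2.0))  # -> 1
    ```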
