Publications (10 of 25)
Deb, K., Bandaru, S. & Seada, H. (2019). Generating Uniformly Distributed Points on a Unit Simplex for Evolutionary Many-Objective Optimization. In: Kalyanmoy Deb, Erik Goodman, Carlos A. Coello Coello, Kathrin Klamroth, Kaisa Miettinen, Sanaz Mostaghim, Patrick Reed (Ed.), Evolutionary Multi-Criterion Optimization: 10th International Conference, EMO 2019, East Lansing, MI, USA, March 10-13, 2019, Proceedings. Paper presented at 10th International Conference on Evolutionary Multi-Criterion Optimization, EMO 2019, East Lansing, MI, USA, March 10-13, 2019 (pp. 179-190). Cham, Switzerland: Springer, 11411
Generating Uniformly Distributed Points on a Unit Simplex for Evolutionary Many-Objective Optimization
2019 (English). In: Evolutionary Multi-Criterion Optimization: 10th International Conference, EMO 2019, East Lansing, MI, USA, March 10-13, 2019, Proceedings / [ed] Kalyanmoy Deb, Erik Goodman, Carlos A. Coello Coello, Kathrin Klamroth, Kaisa Miettinen, Sanaz Mostaghim, Patrick Reed, Cham, Switzerland: Springer, 2019, Vol. 11411, pp. 179-190. Conference paper, Published paper (Refereed)
Abstract [en]

Most of the recently proposed evolutionary many-objective optimization (EMO) algorithms start with a number of predefined reference points on a unit simplex. These algorithms use reference points to create reference directions in the original objective space and attempt to find a single representative near Pareto-optimal point around each direction. So far, most studies have used Das and Dennis’s structured approach for generating a uniformly distributed set of reference points on the unit simplex. Due to the highly structured nature of the procedure, this method does not scale well with an increasing number of objectives. In higher dimensions, most created points lie on the boundary of the unit simplex except for a few interior exceptions. Although a level-wise implementation of Das and Dennis’s approach has been suggested, EMO researchers always felt the need for a more generic approach in which any arbitrary number of uniformly distributed reference points can be created easily at the start of an EMO run. In this paper, we discuss a number of methods for generating such points and demonstrate their ability to distribute points uniformly in 3 to 15-dimensional objective spaces.
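As a concrete illustration of the two families of point-generation methods discussed in the abstract, the sketch below implements the structured Das-Dennis construction (a "stars and bars" enumeration over a fixed number of partitions) and a simple randomized alternative that produces any number of uniformly distributed simplex points by normalizing exponential draws. This is an illustrative NumPy sketch under those assumptions, not the specific generators compared in the paper; the function names and the randomized variant are not taken from the source.

```python
import numpy as np
from itertools import combinations

def das_dennis(n_partitions, n_obj):
    """Structured Das-Dennis points on the unit simplex.

    Enumerates all compositions of n_partitions into n_obj nonnegative parts
    ("stars and bars") and normalizes them, giving
    C(n_partitions + n_obj - 1, n_obj - 1) points.
    """
    points = []
    for bars in combinations(range(n_partitions + n_obj - 1), n_obj - 1):
        prev = -1
        comp = []
        for b in bars:
            comp.append(b - prev - 1)   # gap before this bar = one part
            prev = b
        comp.append(n_partitions + n_obj - 2 - prev)  # last part
        points.append(np.array(comp) / n_partitions)
    return np.array(points)

def random_simplex(n_points, n_obj, seed=None):
    """Uniform random points on the simplex: normalize i.i.d. Exp(1) draws."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(size=(n_points, n_obj))
    return x / x.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    print(das_dennis(4, 3).shape)          # (15, 3): p=4 partitions, M=3 objectives
    print(random_simplex(100, 10).shape)   # any number of points in 10 dimensions
```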

Place, publisher, year, edition, pages
Cham, Switzerland: Springer, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 11411
Keywords
Many-objective optimization, Reference points, Das and Dennis points, Diversity preservation
National Category
Other Computer and Information Science
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-16713 (URN); 10.1007/978-3-030-12598-1_15 (DOI); 2-s2.0-85063041223 (Scopus ID); 978-3-030-12597-4 (ISBN); 978-3-030-12598-1 (ISBN)
Conference
10th International Conference on Evolutionary Multi-Criterion Optimization, EMO 2019, East Lansing, MI, USA, March 10-13, 2019
Project
Knowledge-Driven Decision Support (KDDS)
Research funder
KK-stiftelsen, 41231
Note

Also part of the Theoretical Computer Science and General Issues book sub series (LNTCS, volume 11411)

Available from: 2019-03-25. Created: 2019-03-25. Last updated: 2019-05-23. Bibliographically approved.
Siegmund, F., Ng, A. H. C. & Deb, K. (2017). A Comparative Study of Fast Adaptive Preference-Guided Evolutionary Multi-objective Optimization. In: Heike Trautmann, Rudolph Günter, Kathrin Klamroth, Oliver Schütze, Margaret Wiecek, Yaochu Jin, and Christian Grimme (Ed.), Evolutionary Multi-Criterion Optimization: 9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017, Proceedings. Paper presented at 9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017 (pp. 560-574). Springer, 10173
A Comparative Study of Fast Adaptive Preference-Guided Evolutionary Multi-objective Optimization
2017 (English). In: Evolutionary Multi-Criterion Optimization: 9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017, Proceedings / [ed] Heike Trautmann, Rudolph Günter, Kathrin Klamroth, Oliver Schütze, Margaret Wiecek, Yaochu Jin, and Christian Grimme, Springer, 2017, Vol. 10173, pp. 560-574. Conference paper, Published paper (Refereed)
Abstract [en]

In Simulation-based Evolutionary Multi-objective Optimization, the number of simulation runs is very limited, since the complex simulation models require long execution times. With the help of preference information, the optimization result can be improved by guiding the optimization towards relevant areas in the objective space with, for example, the Reference Point-based NSGA-II algorithm (R-NSGA-II). Since the Pareto-relation is the primary fitness function in R-NSGA-II, the algorithm focuses on exploring the objective space with high diversity. Only after the population has converged close to the Pareto-front does the influence of the reference point distance as secondary fitness criterion increase, and the algorithm converges towards the preferred area on the Pareto-front. In this paper, we propose a set of extensions of R-NSGA-II which adaptively control the algorithm behavior in order to converge faster towards the reference point. The adaptation can be based on criteria such as elapsed optimization time or the reference point distance, or a combination thereof. To evaluate the performance of the adaptive extensions of R-NSGA-II, a performance metric for reference point-based EMO algorithms based on the Hypervolume measure, the Focused Hypervolume metric, is used. It measures convergence and diversity of the population in the preferred area around the reference point. The results are evaluated on two benchmark problems of different complexity and a simplistic production line model.
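For readers unfamiliar with how the two fitness criteria described above interact, the following sketch shows the core survival ordering: non-dominated sorting first, and distance to the decision maker's reference point as the secondary criterion within each front. It is a simplified illustration (no epsilon-clearing or adaptive control), not the authors' implementation.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return np.all(a <= b) and np.any(a < b)

def nondominated_fronts(F):
    """Naive O(n^2) non-dominated sorting; returns lists of indices."""
    remaining = set(range(len(F)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def reference_point_rank(F, ref_point):
    """Survival order sketch: primary = front index, secondary = distance to
    the decision maker's reference point (in place of crowding distance)."""
    order = []
    for front in nondominated_fronts(F):
        d = np.linalg.norm(F[front] - ref_point, axis=1)
        order.extend(front[i] for i in np.argsort(d))
    return order

if __name__ == "__main__":
    F = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2], [0.6, 0.7]])
    print(reference_point_rank(F, ref_point=np.array([0.0, 0.0])))
```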

Place, publisher, year, edition, pages
Springer, 2017
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 10173
Keywords
Evolutionary multi-objective optimization, Guided search, Preference-guided EMO, Reference point, Decision support, Adaptive
National Category
Computer Sciences
Research subject
Production and Automation Engineering; INF201 Virtual Production Development
Identifiers
urn:nbn:se:his:diva-13448 (URN); 10.1007/978-3-319-54157-0_38 (DOI); 2-s2.0-85014258475 (Scopus ID); 978-3-319-54156-3 (ISBN); 978-3-319-54157-0 (ISBN)
Conference
9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017
Research funder
KK-stiftelsen
Available from: 2017-03-24. Created: 2017-03-24. Last updated: 2019-01-24. Bibliographically approved.
Bandaru, S., Ng, A. H. C. & Deb, K. (2017). Data mining methods for knowledge discovery in multi-objective optimization: Part A - Survey. Expert systems with applications, 70, 139-159
Data mining methods for knowledge discovery in multi-objective optimization: Part A - Survey
2017 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 70, pp. 139-159. Review article (Refereed), Published
Abstract [en]

Real-world optimization problems typically involve multiple objectives to be optimized simultaneously under multiple constraints and with respect to several variables. While multi-objective optimization itself can be a challenging task, equally difficult is the ability to make sense of the obtained solutions. In this two-part paper, we deal with data mining methods that can be applied to extract knowledge about multi-objective optimization problems from the solutions generated during optimization. This knowledge is expected to provide deeper insights about the problem to the decision maker, in addition to assisting the optimization process in future design iterations through an expert system. The current paper surveys several existing data mining methods and classifies them by methodology and type of knowledge discovered. Most of these methods come from the domain of exploratory data analysis and can be applied to any multivariate data. We specifically look at methods that can generate explicit knowledge in a machine-usable form. A framework for knowledge-driven optimization is proposed, which involves both online and offline elements of knowledge discovery. One of the conclusions of this survey is that while there are a number of data mining methods that can deal with data involving continuous variables, only a few ad hoc methods exist that can provide explicit knowledge when the variables involved are of a discrete nature. Part B of this paper proposes new techniques that can be used with such datasets and applies them to discrete variable multi-objective problems related to production systems. 

Keywords
Data mining, Multi-objective optimization, Descriptive statistics, Visual data mining, Machine learning, Knowledge-driven optimization
National Category
Computer Sciences
Research subject
Technology; Production and Automation Engineering; INF201 Virtual Production Development
Identifiers
urn:nbn:se:his:diva-13267 (URN); 10.1016/j.eswa.2016.10.015 (DOI); 000389162000009 (); 2-s2.0-84995972531 (Scopus ID)
Project
KDISCO and Knowledge Driven Decision Support via Optimization (KDDS)
Research funder
KK-stiftelsen, 41231
Available from: 2016-12-29. Created: 2016-12-29. Last updated: 2019-01-24. Bibliographically approved.
Bandaru, S., Ng, A. H. C. & Deb, K. (2017). Data mining methods for knowledge discovery in multi-objective optimization: Part B - New developments and applications. Expert systems with applications, 70, 119-138
Data mining methods for knowledge discovery in multi-objective optimization: Part B - New developments and applications
2017 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 70, pp. 119-138. Journal article (Refereed), Published
Abstract [en]

The first part of this paper served as a comprehensive survey of data mining methods that have been used to extract knowledge from solutions generated during multi-objective optimization. The current paper addresses three major shortcomings of existing methods, namely, lack of interactiveness in the objective space, inability to handle discrete variables and inability to generate explicit knowledge. Four data mining methods are developed that can discover knowledge in the decision space and visualize it in the objective space. These methods are (i) sequential pattern mining, (ii) clustering-based classification trees, (iii) hybrid learning, and (iv) flexible pattern mining. Each method uses a unique learning strategy to generate explicit knowledge in the form of patterns, decision rules and unsupervised rules. The methods are also capable of taking the decision maker's preferences into account to generate knowledge unique to preferred regions of the objective space. Three realistic production systems involving different types of discrete variables are chosen as application studies. A multi-objective optimization problem is formulated for each system and solved using NSGA-II to generate the optimization datasets. Next, all four methods are applied to each dataset. In each application, the methods discover similar knowledge for specified regions of the objective space. Overall, the unsupervised rules generated by flexible pattern mining are found to be the most consistent, whereas the supervised rules from classification trees are the most sensitive to user-preferences. 
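The supervised-rule idea behind method (ii) can be pictured with a small scikit-learn sketch: label the solutions that fall in a preferred region of the objective space, then fit a shallow decision tree on the (discrete) decision variables to obtain explicit, human-readable rules. The toy dataset, the preference labeling, and the tree settings below are hypothetical; this is not the clustering-based procedure developed in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical optimization dataset: discrete decision variables X and two objectives F
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(200, 3)).astype(float)
F = np.column_stack([X.sum(axis=1), 1.0 / X.prod(axis=1)])

# Label the solutions the decision maker prefers (here: lowest quartile of objective 1)
preferred = (F[:, 0] <= np.quantile(F[:, 0], 0.25)).astype(int)

# A shallow tree over the decision space yields explicit if-then rules
tree = DecisionTreeClassifier(max_depth=3).fit(X, preferred)
print(export_text(tree, feature_names=["x1", "x2", "x3"]))
```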

Keywords
Data mining, Knowledge discovery, Multi-objective optimization, Discrete variables, Production systems, Flexible pattern mining
National Category
Computer Sciences
Research subject
Technology; Production and Automation Engineering; INF201 Virtual Production Development
Identifiers
urn:nbn:se:his:diva-13266 (URN); 10.1016/j.eswa.2016.10.016 (DOI); 000389162000008 (); 2-s2.0-84995977095 (Scopus ID)
Project
KDISCO and Knowledge Driven Decision Support via Optimization (KDDS)
Research funder
KK-stiftelsen, 41231
Available from: 2016-12-29. Created: 2016-12-29. Last updated: 2019-01-24. Bibliographically approved.
Bandaru, S. & Deb, K. (2017). Metaheuristic Techniques. In: Raghu Nandan Sengupta, Aparna Gupta, Joydeep Dutta (Ed.), Decision Sciences: Theory and Practice (pp. 693-750). Boca Raton: CRC Press
Metaheuristic Techniques
2017 (English). In: Decision Sciences: Theory and Practice / [ed] Raghu Nandan Sengupta, Aparna Gupta, Joydeep Dutta, Boca Raton: CRC Press, 2017, pp. 693-750. Chapter in book, part of anthology (Refereed)
Place, publisher, year, edition, pages
Boca Raton: CRC Press, 2017
Keywords
metaheuristics, evolutionary algorithms, swarm intelligence
National Category
Computer Sciences
Research subject
Technology; Production and Automation Engineering; INF201 Virtual Production Development
Identifiers
urn:nbn:se:his:diva-13283 (URN); 000426383600011 (); 978-1-4665-6430-5 (ISBN); 978-1-4822-8256-6 (ISBN)
Project
KDISCO and Knowledge Driven Decision Support via Optimization (KDDS)
Research funder
KK-stiftelsen, 41231
Available from: 2017-01-02. Created: 2017-01-02. Last updated: 2019-01-24. Bibliographically approved.
Siegmund, F., Ng, A. H. C. & Deb, K. (2016). A Ranking and Selection Strategy for Preference-based Evolutionary Multi-objective Optimization of Variable-Noise Problems. In: 2016 IEEE Congress on Evolutionary Computation (CEC): . Paper presented at 2016 IEEE Congress on Evolutionary Computation (IEEE CEC) held as part of the IEEE World Congress on Computational Intelligence (IEEE WCC) 2016, 24-29 July 2016, Vancouver, Canada (pp. 3035-3044). IEEE conference proceedings
A Ranking and Selection Strategy for Preference-based Evolutionary Multi-objective Optimization of Variable-Noise Problems
2016 (English). In: 2016 IEEE Congress on Evolutionary Computation (CEC), IEEE conference proceedings, 2016, pp. 3035-3044. Conference paper, Published paper (Refereed)
Abstract [en]

In simulation-based Evolutionary Multi-objective Optimization the number of simulation runs is very limited, since the complex simulation models require long execution times. With the help of preference information, the optimization result can be improved by guiding the optimization towards relevant areas in the objective space, for example with the R-NSGA-II algorithm [9], which uses a reference point specified by the decision maker. When stochastic systems are simulated, the uncertainty of the objective values might degrade the optimization performance. By sampling the solutions multiple times this uncertainty can be reduced. However, resampling methods reduce the overall number of evaluated solutions which potentially worsens the optimization result. In this article, a Dynamic Resampling strategy is proposed which identifies the solutions closest to the reference point which guides the population of the Evolutionary Algorithm. We apply a single-objective Ranking and Selection resampling algorithm in the selection step of R-NSGA-II, which considers the stochastic reference point distance and its variance to identify the best solutions. We propose and evaluate different ways to integrate the sampling allocation method into the Evolutionary Algorithm. On the one hand, the Dynamic Resampling algorithm is made adaptive to support the EA selection step, and it is customized to be used in the time-constrained optimization scenario. Furthermore, it is controlled by other resampling criteria, in the same way as other hybrid DR algorithms. On the other hand, R-NSGA-II is modified to rely more on the scalar reference point distance as fitness function. The results are evaluated on a benchmark problem with variable noise landscape.
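A minimal sketch of the distance-based allocation idea: each solution keeps a running estimate of its reference-point distance and of the noise on that estimate, and the next batch of simulation replications goes to the solution that looks both close to the reference point and still uncertain. The priority rule and batch size below are assumptions for illustration, not the OCBA-based Ranking and Selection procedure evaluated in the paper.

```python
import numpy as np

def dynamic_resampling_step(mean_dist, std_dist, n_samples, batch=5):
    """One allocation step of an illustrative distance-based resampling rule:
    favor solutions with a small estimated reference-point distance and a
    large remaining standard error on that estimate."""
    std_err = std_dist / np.sqrt(n_samples)
    priority = std_err / (mean_dist + 1e-12)   # close + uncertain => high priority
    winner = int(np.argmax(priority))
    n_samples[winner] += batch                 # spend the next batch on the winner
    return winner

if __name__ == "__main__":
    mean_dist = np.array([0.40, 0.90, 0.35, 1.50])   # estimated distances to the reference point
    std_dist  = np.array([0.10, 0.05, 0.30, 0.20])   # per-sample noise estimates
    n_samples = np.array([3, 3, 3, 3])
    for _ in range(4):
        print(dynamic_resampling_step(mean_dist, std_dist, n_samples), n_samples)
```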

Place, publisher, year, edition, pages
IEEE conference proceedings, 2016
Keywords
Evolutionary, multi-objective optimization, preference-based, guided search, reference point, dynamic resampling, budget allocation, ranking and selection, variable noise
National Category
Information Systems; Robotics and Automation
Research subject
Technology; Natural Sciences; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-13161 (URN); 10.1109/CEC.2016.7744173 (DOI); 000390749103029 (); 2-s2.0-85008255213 (Scopus ID); 978-1-5090-0623-6 (ISBN); 978-1-5090-0624-3 (ISBN); 978-1-5090-0622-9 (ISBN)
Conference
2016 IEEE Congress on Evolutionary Computation (IEEE CEC) held as part of the IEEE World Congress on Computational Intelligence (IEEE WCC) 2016, 24-29 July 2016, Vancouver, Canada
Research funder
KK-stiftelsen
Available from: 2016-11-30. Created: 2016-11-30. Last updated: 2018-03-28. Bibliographically approved.
Siegmund, F., Ng, A. H. C. & Deb, K. (2016). Hybrid Dynamic Resampling Algorithms for Evolutionary Multi-objective Optimization of Invariant-Noise Problems. In: Giovanni Squillero, Paolo Burelli (Ed.), Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016, Proceedings, Part II. Paper presented at 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016 (pp. 311-326), 9598
Hybrid Dynamic Resampling Algorithms for Evolutionary Multi-objective Optimization of Invariant-Noise Problems
2016 (English). In: Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016, Proceedings, Part II / [ed] Giovanni Squillero, Paolo Burelli, 2016, Vol. 9598, pp. 311-326. Conference paper, Published paper (Refereed)
Abstract [en]

In Simulation-based Evolutionary Multi-objective Optimization (EMO) the available time for optimization usually is limited. Since many real-world optimization problems are stochastic models, the optimization algorithm has to employ a noise compensation technique for the objective values. This article analyzes Dynamic Resampling algorithms for handling the objective noise. Dynamic Resampling improves the objective value accuracy by spending more time to evaluate the solutions multiple times, which tightens the optimization time limit even more. This circumstance can be used to design Dynamic Resampling algorithms with a better sampling allocation strategy that uses the time limit. In our previous work, we investigated Time-based Hybrid Resampling algorithms for Preference-based EMO. In this article, we extend our studies to general EMO which aims to find a converged and diverse set of alternative solutions along the whole Pareto-front of the problem. We focus on problems with an invariant noise level, i.e. a flat noise landscape.
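The time-based ingredient of such hybrid strategies can be pictured as a sampling schedule that grows with elapsed optimization time: early generations spend few replications per solution, while late generations, where accuracy matters most, spend many. The linear schedule and the budget values below are illustrative assumptions only, not the allocation rules studied in the paper.

```python
def samples_for_time(elapsed, total_budget, b_min=1, b_max=15):
    """Time-based resampling sketch (assumed linear schedule): the per-solution
    sample budget rises from b_min at the start to b_max at the time limit."""
    frac = min(max(elapsed / total_budget, 0.0), 1.0)
    return b_min + round(frac * (b_max - b_min))

# Example: 1 sample per solution early on, 15 samples right before the limit
print([samples_for_time(t, 1000) for t in (0, 250, 500, 900, 1000)])
```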

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9598
Keywords
Evolutionary multi-objective optimization, Simulation-based optimization, Noise, Dynamic resampling, Budget allocation, Hybrid
National Category
Robotics and Automation; Information Systems
Research subject
Natural Sciences; Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-12074 (URN); 10.1007/978-3-319-31153-1_21 (DOI); 000467438600021 (); 2-s2.0-84962257415 (Scopus ID); 978-3-319-31152-4 (ISBN); 978-3-319-31153-1 (ISBN)
Conference
19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016
Project
BlixtSim, IDSS
Research funder
KK-stiftelsen
Available from: 2016-03-29. Created: 2016-03-29. Last updated: 2019-09-09. Bibliographically approved.
Bandaru, S., Gaur, A., Deb, K., Khare, V., Chougule, R. & Bandyopadhyay, P. (2015). Development, analysis and applications of a quantitative methodology for assessing customer satisfaction using evolutionary optimization. Applied Soft Computing, 30, 265-278
Development, analysis and applications of a quantitative methodology for assessing customer satisfaction using evolutionary optimization
2015 (English). In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 30, pp. 265-278. Journal article (Refereed), Published
Abstract [en]

Consumer-oriented companies are getting increasingly more sensitive about customer's perception of their products, not only to get a feedback on their popularity, but also to improve the quality and service through a better understanding of design issues for further development. However, a consumer's perception is often qualitative and is achieved through third party surveys or the company's recording of after-sale feedback through explicit surveys or warranty based commitments. In this paper, we consider an automobile company's warranty records for different vehicle models and suggest a data mining procedure to assign a customer satisfaction index (CSI) to each vehicle model based on the perceived notion of the level of satisfaction of customers. Based on the developed CSI function, customers are then divided into satisfied and dissatisfied customer groups. The warranty data are then clustered separately for each group and analyzed to find possible causes (field failures) and their relative effects on customer's satisfaction (or dissatisfaction) for a vehicle model. Finally, speculative introspection has been made to identify the amount of improvement in CSI that can be achieved by the reduction of some critical field failures through better design practices. Thus, this paper shows how warranty data from customers can be utilized to have a better perception of ranking of a product compared to its competitors in the market and also to identify possible causes for making some customers dissatisfied and eventually to help percolate these issues at the design level. This closes the design cycle loop in which after a design is converted into a product, its perceived level of satisfaction by customers can also provide valuable information to help make the design better in an iterative manner. The proposed methodology is generic and novel, and can be applied to other consumer products as well.

Place, publisher, year, edition, pages
Elsevier, 2015
Keywords
Customer satisfaction index (CSI), Quantitative modeling, Evolutionary optimization, Customer relationship management (CRM)
National Category
Computer Sciences
Research subject
Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-10696 (URN); 10.1016/j.asoc.2015.01.014 (DOI); 000351296200023 (); 2-s2.0-84923173199 (Scopus ID)
Available from: 2015-02-19. Created: 2015-02-19. Last updated: 2018-03-29. Bibliographically approved.
Siegmund, F., Ng, A. H. C. & Deb, K. (2015). Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems. Paper presented at 23rd International Conference on Multiple Criteria Decision Making MCDM 2015, August 3-7, 2015, Hamburg, Germany.
Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems
2015 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

In Multi-objective Optimization many solutions have to be evaluated in order to provide the decision maker with a diverse choice of solutions along the Pareto-front. In Simulation-based Optimization the number of optimization function evaluations is usually very limited due to the long execution times of the simulation models. If preference information is available, however, the available number of function evaluations can be used more effectively. The optimization can be performed as a guided, focused search which returns solutions close to interesting, preferred regions of the Pareto-front. One such algorithm for guided search is the Reference-point guided Non-dominated Sorting Genetic Algorithm II, R-NSGA-II. It is a population-based Evolutionary Algorithm that finds a set of non-dominated solutions in a single optimization run. R-NSGA-II takes reference points in the objective space provided by the decision maker and guides the optimization towards areas of the Pareto-front close to the reference points.

In Simulation-based Optimization the modeled and simulated systems are often stochastic, and a common method to handle objective noise is resampling. Reliable quality assessment of system configurations by resampling requires many simulation runs. Therefore, the optimization process can benefit from Dynamic Resampling algorithms that distribute the available function evaluations among the solutions in the best possible way. Solutions can vary in their sampling need. For example, solutions with highly variable objective values have to be sampled more often to reduce the standard error of their objective values. Dynamic Resampling algorithms assign as many samples to them as are needed to reduce the uncertainty about their objective values below a certain threshold. Another criterion the number of samples can be based on is a solution's closeness to the Pareto-front. Solutions that are close to the Pareto-front are likely to be members of the final result set. It is therefore important to have accurate knowledge of their objective values, in order to be able to tell which solutions are better than others. Usually, the distance to the Pareto-front is not known, but another criterion can be used as an indication for it instead: the elapsed optimization time. A third example of a resampling criterion is the dominance relation between solutions. The optimization algorithm has to determine, for pairs of solutions, which is the better one. Here, both the distances between objective vectors and the variance of the objective values have to be considered, which requires a more advanced resampling technique. This is a Ranking and Selection problem.

If R-NSGA-II is applied in a scenario with a stochastic fitness function, resampling algorithms have to be used to support it in the best way and to avoid performance degradation due to uncertain knowledge about the objective values of solutions. In our work we combine R-NSGA-II with several resampling algorithms that are based on the above-mentioned resampling criteria, or combinations thereof, and evaluate which criteria the sampling allocation is best based on in which situations.

Due to the preference information, R-NSGA-II has important fitness information about the solutions at its disposal: the distance to the reference points. We propose a resampling strategy that allocates more samples to solutions close to a reference point. This idea is then extended with a resampling technique that compares solutions based on their distance to the reference point. We base this algorithm on a classical Ranking and Selection algorithm, Optimal Computing Budget Allocation (OCBA), and show how OCBA can be applied to support R-NSGA-II. We show the applicability of the proposed algorithms in a case study of an industrial production line for car manufacturing.
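The standard-error criterion mentioned above can be sketched as a simple per-solution stopping rule: resample until the standard error of the mean objective vector falls below a threshold or a per-solution cap is reached. The threshold, the cap, and the noisy test function below are illustrative assumptions, not the algorithms proposed in this work.

```python
import numpy as np

def sample_until_confident(evaluate, se_threshold, n_min=2, n_max=20):
    """Standard-error-based resampling sketch (assumed thresholds): keep
    re-evaluating a noisy solution until the standard error of its mean
    objective vector drops below se_threshold or n_max samples are used."""
    samples = [evaluate() for _ in range(n_min)]
    while len(samples) < n_max:
        se = np.std(samples, axis=0, ddof=1) / np.sqrt(len(samples))
        if np.all(se <= se_threshold):
            break
        samples.append(evaluate())
    return np.mean(samples, axis=0), len(samples)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy = lambda: np.array([1.0, 2.0]) + rng.normal(0.0, 0.3, size=2)
    print(sample_until_confident(noisy, se_threshold=0.1))
```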

Series
COIN Report ; 2015020
Keywords
Evolutionary multi-objective optimization, guided search, preference-based optimization, reference point, dynamic resampling, budget allocation, decision support, simulation-based optimization, stochastic systems
National Category
Computer and Information Sciences; Robotics and Automation
Research subject
Natural Sciences; Technology; Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-11494 (URN)
Conference
23rd International Conference on Multiple Criteria Decision Making MCDM 2015, August 3-7, 2015, Hamburg, Germany
Research funder
KK-stiftelsen
Available from: 2015-09-07. Created: 2015-09-07. Last updated: 2018-03-29. Bibliographically approved.
Bandaru, S., Aslam, T., Ng, A. & Deb, K. (2015). Generalized higher-level automated innovization with application to inventory management. European Journal of Operational Research, 243(2), 480-496
Generalized higher-level automated innovization with application to inventory management
2015 (English). In: European Journal of Operational Research, ISSN 0377-2217, E-ISSN 1872-6860, Vol. 243, no. 2, pp. 480-496. Journal article (Refereed), Published
Abstract [en]

This paper generalizes the automated innovization framework using genetic programming in the context of higher-level innovization. Automated innovization is an unsupervised machine learning technique that can automatically extract significant mathematical relationships from Pareto-optimal solution sets. These resulting relationships describe the conditions for Pareto-optimality for the multi-objective problem under consideration and can be used by scientists and practitioners as thumb rules to understand the problem better and to innovate new problem solving techniques; hence the name innovization (innovation through optimization). Higher-level innovization involves performing automated innovization on multiple Pareto-optimal solution sets obtained by varying one or more problem parameters. The automated innovization framework was recently updated using genetic programming. We extend this generalization to perform higher-level automated innovization and demonstrate the methodology on a standard two-bar bi-objective truss design problem. The procedure is then applied to a classic case of inventory management with multi-objective optimization performed at both system and process levels. The applicability of automated innovization to this area should motivate its use in other avenues of operational research.
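A small sketch of the kind of relationship automated innovization looks for: given a set of Pareto-optimal solutions, test whether a power law x1^a1 · x2^a2 ≈ c holds by fitting the exponents in log space and checking how constant c stays across the set. This least-squares shortcut is an illustrative assumption; the paper's framework uses genetic programming and handles far more general expressions.

```python
import numpy as np

def power_law_rule(X):
    """Fit a candidate invariant x1^a1 * x2^a2 * ... ~= c over a Pareto set:
    the exponent vector is the direction of least variation of log(X)."""
    logX = np.log(X)
    centered = logX - logX.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    a = vt[-1]                       # exponents (up to scale and sign)
    c = np.exp(logX @ a)             # per-solution value of the candidate invariant
    return a, c.mean(), c.std() / c.mean()   # exponents, constant, relative spread

if __name__ == "__main__":
    # Toy Pareto set that obeys x1 * x2^2 = 8 exactly
    x1 = np.linspace(0.5, 4.0, 50)
    x2 = np.sqrt(8.0 / x1)
    a, c, cv = power_law_rule(np.column_stack([x1, x2]))
    print(a / a[0], c, cv)           # exponent ratio ~ [1, 2], spread ~ 0
```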

Place, publisher, year, edition, pages
Elsevier, 2015
Keywords
Automated innovization, Higher-level innovization, Genetic programming, Inventory management, Knowledge discovery
National Category
Computer Sciences
Research subject
Production and Automation Engineering
Identifiers
urn:nbn:se:his:diva-10693 (URN); 10.1016/j.ejor.2014.11.015 (DOI); 000350834800012 (); 2-s2.0-84923494070 (Scopus ID)
Project
KDISCO
Research funder
KK-stiftelsen, 41128
Available from: 2015-02-19. Created: 2015-02-19. Last updated: 2018-03-29. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-7402-9939
