Evaluating Standard Techniques for Implicit Diversity
Department of Business and Informatics, University of Borås, Borås, Sweden.
University of Skövde, School of Humanities and Informatics; The Informatics Research Centre.
2008 (English). In: Advances in Knowledge Discovery and Data Mining: 12th Pacific-Asia Conference, PAKDD 2008 / [ed] Washio, T.; Suzuki, E.; Ting, K. M.; Inokuchi, A. Springer Berlin/Heidelberg, 2008, pp. 592-599. Conference paper (Refereed).
Abstract [en]

When performing predictive modeling, ensembles are often used to boost accuracy. The problem of how to maximize ensemble accuracy is, however, far from solved. In particular, the relationship between ensemble diversity and accuracy is not completely understood, especially for classification. More specifically, the fact that ensemble diversity and base classifier accuracy are highly correlated makes it necessary to balance these properties instead of just maximizing diversity. In this study, three standard techniques for obtaining implicit diversity in neural network ensembles are evaluated on 14 UCI data sets. The experiments show that standard resampling, i.e. dividing the training data by instances, produces more diverse models, but at the expense of base classifier accuracy, thus resulting in less accurate ensembles. Building ensembles from neural networks with heterogeneous architectures improves test set accuracy, but without actually increasing diversity. The results for resampling by features are inconclusive: the ensembles become more diverse, but the level of test set accuracy is unchanged. For the setups evaluated, ensemble training accuracy and base classifier training accuracy are positively correlated with ensemble test accuracy, but the opposite holds for diversity; i.e., ensembles with low diversity are generally more accurate.
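The three setups compared in the abstract (instance resampling, feature resampling, and heterogeneous network architectures) are all standard and straightforward to reproduce. The sketch below is a minimal illustration, not the authors' experimental code: it assumes scikit-learn's MLPClassifier in place of the paper's neural networks, a single UCI data set (breast cancer) in place of the 14 used in the study, and plain pairwise disagreement as the diversity measure.

# Minimal sketch of the three implicit-diversity setups described above.
# Assumptions (not from the paper): scikit-learn MLPs as base classifiers,
# one UCI data set instead of 14, pairwise disagreement as diversity measure.
import numpy as np
from itertools import combinations
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

def build_ensemble(setup, n_members=10):
    """Train n_members MLPs, varying data or architecture per the setup."""
    members = []
    for i in range(n_members):
        hidden = 10
        rows = np.arange(len(X_tr))
        feats = np.arange(X_tr.shape[1])
        if setup == "resample_instances":    # bootstrap the training rows
            rows = rng.randint(0, len(X_tr), len(X_tr))
        elif setup == "resample_features":   # random subset of the inputs
            feats = rng.choice(X_tr.shape[1], X_tr.shape[1] // 2, replace=False)
        elif setup == "heterogeneous":       # vary the architecture instead
            hidden = 5 + 2 * i
        clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000,
                            random_state=i)
        clf.fit(X_tr[np.ix_(rows, feats)], y_tr[rows])
        members.append((clf, feats))
    return members

def member_predictions(members, X):
    # One row of predictions per base classifier.
    return np.array([clf.predict(X[:, feats]) for clf, feats in members])

def disagreement(preds):
    """Mean pairwise disagreement: a simple implicit-diversity measure."""
    pairs = combinations(range(len(preds)), 2)
    return np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])

for setup in ["resample_instances", "resample_features", "heterogeneous"]:
    preds = member_predictions(build_ensemble(setup), X_te)
    vote = (preds.mean(axis=0) >= 0.5).astype(int)  # majority vote (0/1 labels)
    print(f"{setup:20s} diversity={disagreement(preds):.3f} "
          f"ensemble acc={np.mean(vote == y_te):.3f}")

Comparing the printed diversity and test accuracies across the three setups mirrors the trade-off the abstract describes: higher diversity from instance resampling does not by itself yield a more accurate ensemble.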

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2008. 592-599 p.
Series
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN 0302-9743 ; 5012
National Category
Computer Science
Research subject
Technology
Identifiers
URN: urn:nbn:se:his:diva-2797
DOI: 10.1007/978-3-540-68125-0_54
ISI: 000256127100053
Scopus ID: 2-s2.0-44649182764
ISBN: 978-3-540-68124-3
OAI: oai:DiVA.org:his-2797
DiVA: diva2:200963
Conference
12th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2008; Osaka, Japan; 20 May 2008 through 23 May 2008
Available from: 2009-03-02. Created: 2009-03-02. Last updated: 2013-03-17.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Löfström, Tuve; Niklasson, Lars
By organisation
School of Humanities and Informatics; The Informatics Research Centre
Computer Science
