The Problem with Ranking Ensembles Based on Training or Validation Performance
University of Borås, Sch Business & Informat.
University of Borås, Sch Business & Informat.
University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
2008 (English). In: Proceedings of the International Joint Conference on Neural Networks, IEEE Press, 2008, pp. 3221-3227. Conference paper (Refereed)
Abstract [en]

The main purpose of this study was to determine whether it is possible to use results on training or validation data to estimate ensemble performance on novel data. With the specific setup evaluated, i.e., using ensembles built from a pool of independently trained neural networks and targeting diversity only implicitly, the answer is a resounding no. Experimentation using 13 UCI datasets shows that there is in general nothing to gain in performance on novel data by choosing an ensemble based on any of the training measures evaluated here. This is despite the fact that the measures evaluated include all the most frequently used, i.e., ensemble training and validation accuracy, base classifier training and validation accuracy, ensemble training and validation AUC, and two diversity measures. The main reason is that all ensembles tend to have quite similar performance, unless we deliberately lower the accuracy of the base classifiers. The key consequence is, of course, that a data miner can do no better than picking an ensemble at random. In addition, the results indicate that it is futile to look for an algorithm aimed at optimizing ensemble performance by selecting a subset of the available base classifiers.
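
As a rough illustration of the kind of experiment the abstract describes, the sketch below builds a pool of independently trained networks, enumerates candidate ensembles, ranks them by validation accuracy, and compares the top-ranked ensemble with a randomly picked one on held-out data. This is not the authors' experimental code: scikit-learn's MLPClassifier, the synthetic dataset, the pool size of 10 and the fixed ensemble size of 5 are assumptions standing in for the paper's setup and its 13 UCI datasets.

import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)

# Train / validation / test split; the test set plays the role of "novel data".
X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Pool of independently trained networks; diversity arises only implicitly,
# from different random initialisations, with no explicit diversity objective.
pool = [MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=seed).fit(X_train, y_train)
        for seed in range(10)]

def ensemble_accuracy(members, X, y):
    # Majority-vote accuracy of an ensemble of fitted binary classifiers.
    votes = np.mean([m.predict(X) for m in members], axis=0)
    return np.mean((votes >= 0.5).astype(int) == y)

# Enumerate candidate ensembles (all subsets of size 5, for illustration only).
candidates = list(combinations(pool, 5))
val_scores = [ensemble_accuracy(c, X_val, y_val) for c in candidates]
test_scores = [ensemble_accuracy(c, X_test, y_test) for c in candidates]

# Compare the ensemble ranked best on validation data with a random pick.
best_on_val = int(np.argmax(val_scores))
random_pick = rng.randint(len(candidates))
print("test accuracy, best-on-validation ensemble:", test_scores[best_on_val])
print("test accuracy, randomly chosen ensemble:", test_scores[random_pick])
print("correlation between validation and test accuracy:", np.corrcoef(val_scores, test_scores)[0, 1])

In line with the abstract's conclusion, one would expect the two test accuracies to be very close as long as the base classifiers are of comparable quality, which is why ranking the candidates on validation performance buys nothing over a random pick.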

 

Place, publisher, year, edition, pages
IEEE Press, 2008, pp. 3221-3227.
Series
IEEE International Joint Conference on Neural Networks. Proceedings
Research subject
Technology
Identifiers
URN: urn:nbn:se:his:diva-3613
DOI: 10.1109/IJCNN.2008.4634255
ISI: 000263827202015
Scopus ID: 2-s2.0-56349145712
ISBN: 978-1-4244-1821-3
OAI: oai:DiVA.org:his-3613
DiVA: diva2:291123
Conference
2008 International Joint Conference on Neural Networks, IJCNN 2008; Hong Kong; 1 June 2008 through 8 June 2008
Available from: 2010-01-29 Created: 2010-01-29 Last updated: 2013-03-17

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus
http://hdl.handle.net/2320/3973

Search in DiVA

By author/editor
Boström, Henrik
By organisation
School of Humanities and Informatics, The Informatics Research Centre
