Post-processing Evolved Decision Trees
School of Business and Informatics, University of Borås, Borås, Sweden.
2009 (English). In: Studies in Computational Intelligence, ISSN 1860-949X, E-ISSN 1860-9503, Vol. 204, pp. 149-164. Article in journal (Other academic). Published.
Abstract [en]

Although Genetic Programming (GP) is a very general technique, it is also quite powerful; GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has been applied successfully to most major tasks, e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works by iteratively searching, one node at a time, for modifications that will result in higher accuracy. More specifically, for each interior test, the algorithm evaluates every possible split on the current attribute and chooses the best one. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets or extracted from neural network ensembles. The experiments, using 22 UCI datasets, show that the suggested post-processing technique results in higher test-set accuracies on a large majority of the datasets. In fact, the increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial for two of the remaining three.
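The post-processing step described in the abstract can be sketched as a simple hill climb over the tree's interior tests. The following is a minimal illustrative sketch, not the authors' implementation: the `Node` class, helper names, and the axis-parallel threshold representation are all assumptions made here for clarity.

```python
# Hypothetical sketch of the described post-processing: for each interior
# test, try every candidate split on that node's current attribute and keep
# the threshold that maximizes training accuracy. Since the current
# threshold is always among the candidates kept, training accuracy can only
# increase or stay the same.

class Node:
    def __init__(self, attr=None, thresh=None, left=None, right=None, label=None):
        self.attr, self.thresh = attr, thresh          # interior test: x[attr] <= thresh
        self.left, self.right, self.label = left, right, label  # label set => leaf

def predict(node, x):
    if node.label is not None:
        return node.label
    branch = node.left if x[node.attr] <= node.thresh else node.right
    return predict(branch, x)

def accuracy(tree, X, y):
    return sum(predict(tree, x) == t for x, t in zip(X, y)) / len(y)

def interior_nodes(node):
    if node.label is None:
        yield node
        yield from interior_nodes(node.left)
        yield from interior_nodes(node.right)

def post_process(tree, X, y):
    """Iterate over interior nodes, one at a time; at each node, evaluate
    every candidate threshold for the node's current attribute and keep
    the best one on the training data."""
    for node in list(interior_nodes(tree)):
        candidates = sorted({x[node.attr] for x in X})
        best_t, best_acc = node.thresh, accuracy(tree, X, y)
        for t in candidates:
            node.thresh = t
            acc = accuracy(tree, X, y)
            if acc > best_acc:
                best_t, best_acc = t, acc
        node.thresh = best_t  # restore the best threshold found
    return tree
```

Note the monotonicity guarantee holds only for training accuracy; whether it transfers to the test set is exactly what the chapter's experiments on 22 UCI datasets evaluate.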

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2009. Vol. 204, pp. 149-164.
National Category
Computer and Information Science
Research subject
Technology
Identifiers
URN: urn:nbn:se:his:diva-3210
DOI: 10.1007/978-3-642-01088-0_7
Scopus ID: 2-s2.0-65549119359
ISBN: 978-3-642-01087-3
OAI: oai:DiVA.org:his-3210
DiVA: diva2:225373
Note

ISBN 978-3-642-01087-3 (Print), ISBN 978-3-642-01088-0 (Online).

In: Foundations of Computational Intelligence Volume 4: Bio-Inspired Data Mining: Theoretical Foundations and Applications, edited by Ajith Abraham, Aboul-Ella Hassanien, André Ponce de Leon F. Carvalho. Studies in Computational Intelligence, Volume 204. ISSN 1860-949X (Print), 1860-9503 (Online).

Available from: 2009-06-26. Created: 2009-06-26. Last updated: 2015-01-23. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Niklasson, Lars
By organisation
School of Humanities and Informatics, The Informatics Research Centre
In the same journal
Studies in Computational Intelligence
Computer and Information Science
