Rule Extraction from Opaque Models: A Slightly Different Perspective
University of Skövde, School of Humanities and Informatics.
2006 (English). In: 6th International Conference on Machine Learning and Applications, IEEE Computer Society, 2006, p. 22-27. Conference paper, published paper (refereed).
Abstract [en]

When performing predictive modeling, the key criterion is always accuracy. With this in mind, complex techniques like neural networks or ensembles are normally used, resulting in opaque models that are impossible to interpret. When models need to be comprehensible, accuracy is often sacrificed by using simpler techniques that directly produce transparent models; this is termed the accuracy vs. comprehensibility tradeoff. In order to reduce this tradeoff, the opaque model can be transformed into another, interpretable, model; an activity termed rule extraction. In this paper, it is argued that rule extraction algorithms stand to gain from using oracle data, i.e., test set instances together with the corresponding predictions from the opaque model. The experiments, using 17 publicly available data sets, clearly show that rules extracted using only oracle data were significantly more accurate than both rules extracted by the same algorithm using training data and standard decision tree algorithms. In addition, the same rules were also significantly more compact, thus providing better comprehensibility. The overall implication is that rules extracted in this fashion will explain the predictions made on novel data better than rules extracted in the standard way, i.e., using training data only.
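
The core idea, extracting transparent rules from oracle data rather than from training data, can be illustrated with a short sketch. The snippet below is not taken from the paper: it assumes scikit-learn, uses a random forest as a stand-in for the opaque model and a CART decision tree as the transparent model, and uses one benchmark data set, whereas the paper applies its own rule extraction algorithm to 17 data sets. The sketch fits the transparent model to the test-set inputs labelled with the opaque model's predictions (the oracle data) and compares it with the same transparent technique trained directly on the training data.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Opaque model: a random forest stands in for the ensembles/neural networks in the paper.
    opaque = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Oracle data: test-set instances paired with the opaque model's predictions.
    oracle_labels = opaque.predict(X_test)

    # Rules extracted from oracle data: a small tree fitted to mimic the opaque model
    # on the very instances whose predictions it is meant to explain.
    extracted = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_test, oracle_labels)

    # Baseline: the same transparent technique trained directly on the training data.
    baseline = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

    print("opaque model accuracy on test set:", accuracy_score(y_test, opaque.predict(X_test)))
    print("oracle-extracted tree accuracy:   ", accuracy_score(y_test, extracted.predict(X_test)))
    print("directly trained tree accuracy:   ", accuracy_score(y_test, baseline.predict(X_test)))
    print("oracle-extracted tree size (nodes):", extracted.tree_.node_count)

Because the extracted tree only has to reproduce the opaque model's decisions on the instances actually being predicted, it can typically be kept more compact than a tree that must generalise from training data alone, which is the effect the paper reports.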

Place, publisher, year, edition, pages
IEEE Computer Society, 2006. p. 22-27
Identifiers
URN: urn:nbn:se:his:diva-1952
DOI: 10.1109/ICMLA.2006.46
ISI: 000244477800004
Scopus ID: 2-s2.0-40349090116
ISBN: 0-7695-2735-3
OAI: oai:DiVA.org:his-1952
DiVA id: diva2:32228
Available from: 2008-04-11 Created: 2008-04-11 Last updated: 2017-11-27

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Löfström, Tove; König, Richard; Niklasson, Lars
