his.se Publications
1 - 11 of 11
  • 1.
    Johansson, Ulf
    et al.
    School of Business and Informatics, University of Borås, Borås, Sweden.
    König, Rikard
    School of Business and Informatics, University of Borås, Borås, Sweden.
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Genetically Evolved kNN Ensembles (2009). In: Data Mining: Special Issue in Annals of Information Systems / [ed] Robert Stahlbock, Sven F. Crone, Stefan Lessmann, Springer Science+Business Media B.V., 2009, 1, p. 299-313. Chapter in book (Other academic)
    Abstract [en]

    Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. For the ensemble approach to work, base classifiers must not only be accurate but also diverse, i.e., they should commit their errors on different instances. Instance-based learners are, however, very robust with respect to variations of a data set, so standard resampling methods will normally produce only limited diversity. Because of this, instance-based learners are rarely used as base classifiers in ensembles. In this chapter, we introduce a method where genetic programming is used to generate kNN base classifiers with optimized k-values and feature weights. Due to the inherent inconsistency in genetic programming (i.e., different runs using identical data and parameters will still produce different solutions) a group of independently evolved base classifiers tend to be not only accurate but also diverse. In the experimentation, using 30 data sets from the UCI repository, two slightly different versions of kNN ensembles are shown to significantly outperform both the corresponding base classifiers and standard kNN with optimized k-values, with respect to accuracy and AUC.
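
    A minimal sketch of the kind of ensemble this abstract describes, assuming scikit-learn and NumPy: each member is a kNN classifier with its own k and its own feature weights, and the members decide by majority vote. In the paper the k-values and feature weights are evolved with genetic programming; the random weights below are only placeholders for the evolved ones, and all names are illustrative.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        class WeightedKNNEnsemble:
            """Majority-voting ensemble of kNN members with per-member k and feature weights."""

            def __init__(self, n_members=10, k_range=(1, 15), rng=None):
                self.n_members = n_members
                self.k_range = k_range
                self.rng = rng if rng is not None else np.random.default_rng(0)
                self.members = []  # list of (feature_weights, fitted kNN model)

            def fit(self, X, y):
                n_features = X.shape[1]
                for _ in range(self.n_members):
                    k = int(self.rng.integers(self.k_range[0], self.k_range[1] + 1))
                    w = self.rng.random(n_features)  # placeholder for GP-evolved weights
                    model = KNeighborsClassifier(n_neighbors=k).fit(X * w, y)
                    self.members.append((w, model))
                return self

            def predict(self, X):
                # stack member predictions and take a majority vote per instance
                # (class labels assumed to be non-negative integers)
                votes = np.array([m.predict(X * w) for w, m in self.members])
                return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)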

  • 2.
    Johansson, Ulf
    et al.
    University of Skövde, School of Humanities and Informatics. Department of Business and Informatics, University of Borås, Sweden.
    König, Rikard
    University of Skövde, School of Humanities and Informatics. Department of Business and Informatics, University of Borås, Sweden.
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics.
    The Truth is in There: Rule Extraction from Opaque Models Using Genetic Programming (2004). In: Proceedings of the Seventeenth International Florida Artificial Intelligence Research Society Conference, FLAIRS 2004 / [ed] Valerie Barr, Zdravko Markov, AAAI Press, 2004, p. 658-663. Conference paper (Other academic)
  • 3.
    Johansson, Ulf
    et al.
    Löfström, Tove
    University of Skövde, School of Humanities and Informatics.
    König, Richard
    University of Skövde, School of Humanities and Informatics.
    Sönströd, Cecilia
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics.
    Rule Extraction from Opaque Models: A Slightly Different Perspective (2006). In: 6th International Conference on Machine Learning and Applications, IEEE Computer Society, 2006, p. 22-27. Conference paper (Refereed)
    Abstract [en]

    When performing predictive modeling, the key criterion is always accuracy. With this in mind, complex techniques like neural networks or ensembles are normally used, resulting in opaque models impossible to interpret. When models need to be comprehensible, accuracy is often sacrificed by using simpler techniques directly producing transparent models; a tradeoff termed the accuracy vs. comprehensibility tradeoff. In order to reduce this tradeoff, the opaque model can be transformed into another, interpretable, model; an activity termed rule extraction. In this paper, it is argued that rule extraction algorithms should gain from using oracle data; i.e. test set instances, together with corresponding predictions from the opaque model. The experiments, using 17 publicly available data sets, clearly show that rules extracted using only oracle data were significantly more accurate than both rules extracted by the same algorithm, using training data, and standard decision tree algorithms. In addition, the same rules were also significantly more compact; thus providing better comprehensibility. The overall implication is that rules extracted in this fashion will explain the predictions made on novel data better than rules extracted in the standard way; i.e. using training data only.
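
    A minimal sketch of the oracle-data idea argued for above, assuming scikit-learn: instead of extracting rules from the training data, a transparent model is fitted to the opaque model's own predictions on the (unlabelled) test instances. The paper's extractor is GP-based (G-REX); the shallow decision tree here is only a simple transparent stand-in, and the function name is illustrative.

        from sklearn.tree import DecisionTreeClassifier, export_text

        def extract_rules_with_oracle(fitted_opaque_model, X_test, max_depth=4):
            """Fit a shallow, readable tree that mimics the opaque model on the test set."""
            oracle_labels = fitted_opaque_model.predict(X_test)  # oracle data: predictions on novel instances
            surrogate = DecisionTreeClassifier(max_depth=max_depth)
            surrogate.fit(X_test, oracle_labels)  # rules target the predictions, not the true labels
            return surrogate, export_text(surrogate)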

  • 4.
    Johansson, Ulf
    et al.
    Löfström, Tuve
    König, Rikard
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Accurate Neural Network Ensembles Using Genetic Programming (2006). In: Proceedings of SAIS: The 23rd Annual Workshop of the Swedish Artificial Intelligence Society, Swedish Artificial Intelligence Society - SAIS, Umeå universitet, 2006. Conference paper (Other academic)
    Abstract [en]

    In this paper we present and evaluate a novel algorithm for ensemble creation. The main idea of the algorithm is to first independently train a fixed number of neural networks (here ten) and then use genetic programming to combine these networks into an ensemble. The use of genetic programming makes it possible to not only consider ensembles of different sizes, but also to use ensembles as intermediate building blocks. The final result is therefore more correctly described as an ensemble of neural network ensembles. The experiments show that the proposed method, when evaluated on 22 publicly available data sets, obtains very high accuracy, clearly outperforming the other methods evaluated. In this study several micro techniques are used, and we believe that they all contribute to the increased performance.

    One such micro technique, aimed at reducing overtraining, is the training method, called tombola training, used during genetic evolution. When using tombola training, training data is regularly resampled into new parts, called training groups. Each ensemble is then evaluated on every training group and the actual fitness is determined solely from the result on the hardest part.
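
    A minimal sketch of the tombola-training fitness described above, assuming NumPy: the training data is resampled into fresh training groups, each candidate ensemble is scored on every group, and its fitness is the score on the hardest group. The GP machinery that actually evolves and combines the networks is omitted, and the function name is illustrative.

        import numpy as np

        def tombola_fitness(candidate_ensemble, X, y, n_groups=5, rng=None):
            """Fitness of one candidate: accuracy on the hardest resampled training group."""
            rng = rng if rng is not None else np.random.default_rng(0)
            idx = rng.permutation(len(X))
            groups = np.array_split(idx, n_groups)  # regularly resampled training groups
            scores = [np.mean(candidate_ensemble.predict(X[g]) == y[g]) for g in groups]
            return min(scores)  # fitness is determined solely by the hardest part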

  • 5.
    Johansson, Ulf
    et al.
    Löfström, Tuve
    König, Rikard
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Building Neural Network Ensembles using Genetic Programming (2006). In: The International Joint Conference on Neural Networks 2006, IEEE Press, 2006, p. 2239-2244. Conference paper (Refereed)
    Abstract [en]

    In this paper we present and evaluate a novel algorithm for ensemble creation. The main idea of the algorithm is to first independently train a fixed number of neural networks (here ten) and then use genetic programming to combine these networks into an ensemble. The use of genetic programming makes it possible to not only consider ensembles of different sizes, but also to use ensembles as intermediate building blocks. The final result is therefore more correctly described as an ensemble of neural network ensembles. The experiments show that the proposed method, when evaluated on 22 publicly available data sets, obtains very high accuracy, clearly outperforming the other methods evaluated. In this study several micro techniques are used, and we believe that they all contribute to the increased performance. One such micro technique, aimed at reducing overtraining, is the training method, called tombola training, used during genetic evolution. When using tombola training, training data is regularly resampled into new parts, called training groups. Each ensemble is then evaluated on every training group and the actual fitness is determined solely from the result on the hardest part.

  • 6.
    Johansson, Ulf
    et al.
    School of Business and Informatics, University of Borås, Sweden.
    Löfström, Tuve
    School of Business and Informatics, University of Borås, Sweden.
    König, Rikard
    School of Business and Informatics, University of Borås, Sweden.
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Genetically Evolved Trees Representing Ensembles (2006). In: Artificial intelligence and soft computing - ICAISC 2006: 8th international conference, Zakopane, Poland, June 25-29, 2006; proceedings, 2006, p. 613-622. Conference paper (Refereed)
    Abstract [en]

    We have recently proposed a novel algorithm for ensemble creation called GEMS (Genetic Ensemble Member Selection). GEMS first trains a fixed number of neural networks (here twenty) and then uses genetic programming to combine these networks into an ensemble. The use of genetic programming makes it possible for GEMS to not only consider ensembles of different sizes, but also to use ensembles as intermediate building blocks. In this paper, which is the first extensive study of GEMS, the representation language is extended to include tests partitioning the data, further increasing flexibility. In addition, several micro techniques are applied to reduce overfitting, which appears to be the main problem for this powerful algorithm. The experiments show that GEMS, when evaluated on 15 publicly available data sets, obtains very high accuracy, clearly outperforming both straightforward ensemble designs and standard decision tree algorithms.
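
    A minimal sketch of the representation this abstract describes, assuming NumPy and scikit-learn-style base models: a GEMS individual is a tree whose leaves are trained networks and whose internal nodes either average their children or, with the extension introduced here, route an instance down one branch according to a test that partitions the data. The dictionary encoding and field names are illustrative, not the paper's actual representation language.

        import numpy as np

        def evaluate(node, x):
            """Return the class-probability output of a GEMS-style tree for one instance x."""
            if node["kind"] == "model":      # leaf: a trained base network
                return node["model"].predict_proba(x.reshape(1, -1))[0]
            if node["kind"] == "average":    # inner node: combine the children's outputs
                return np.mean([evaluate(child, x) for child in node["children"]], axis=0)
            if node["kind"] == "test":       # partitioning test: send x down one branch
                branch = "left" if x[node["feature"]] <= node["threshold"] else "right"
                return evaluate(node[branch], x)
            raise ValueError(f"unknown node kind: {node['kind']}")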

  • 7.
    Johansson, Ulf
    et al.
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics.
    König, Rikard
    Accuracy vs. comprehensibility in data mining models (2004). In: Proceedings of the Seventh International Conference on Information Fusion: 28 June - 1 July 2004, Stockholm, Sweden, 2004, p. 295-300. Conference paper (Other academic)
    Abstract [en]

    This paper addresses the important issue of the tradeoff between accuracy and comprehensibility in data mining. The paper presents results which show that it is, to some extent, possible to bridge this gap. A method for rule extraction from opaque models (Genetic Rule EXtraction – G-REX) is used to show the effects on accuracy when forcing the creation of comprehensible representations. In addition, the technique of combining different classifiers into an ensemble is demonstrated on some well-known data sets. The results show that ensembles generally have very high accuracy, thus making them a good first choice when performing predictive data mining.

  • 8.
    König, Rikard
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Predictive Techniques and Methods for Decision Support in Situations with Poor Data Quality (2009). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Today, decision support systems based on predictive modeling are becoming more common, since organizations often collect more data than decision makers can handle manually. Predictive models are used to find potentially valuable patterns in the data, or to predict the outcome of some event. There are numerous predictive techniques, ranging from simple techniques, such as linear regression, to complex, powerful ones like artificial neural networks. Complex models usually obtain better predictive performance, but are opaque and thus cannot be used to explain predictions or discovered patterns. The design choice of which predictive technique to use becomes even harder since no technique outperforms all others over a large set of problems. It is even difficult to find the best parameter values for a specific technique, since these settings are also problem dependent. One way to simplify this vital decision is to combine several models, possibly created with different settings and techniques, into an ensemble. Ensembles are known to be more robust and powerful than individual models, and ensemble diversity can be used to estimate the uncertainty associated with each prediction.

    In real-world data mining projects, data is often imprecise, contains uncertainties, or is missing important values, making it impossible to create models with sufficient performance for fully automated systems. In these cases, predictions need to be manually analyzed and adjusted. Here, opaque models like ensembles have a disadvantage, since the analysis requires understandable models. To overcome this deficiency of opaque models, researchers have developed rule extraction techniques that try to extract comprehensible rules from opaque models, while retaining sufficient accuracy.

    This thesis suggests a straightforward but comprehensive method for predictive modeling in situations with poor data quality. First, ensembles are used for the actual modeling, since they are powerful, robust and require few design choices. Next, ensemble uncertainty estimations pinpoint predictions that need special attention from a decision maker. Finally, rule extraction is performed to support the analysis of uncertain predictions. Using this method, ensembles can be used for predictive modeling, in spite of their opacity and sometimes insufficient global performance, while the involvement of a decision maker is minimized.

    The main contributions of this thesis are three novel techniques that enhance the performance of the proposed method. The first technique deals with ensemble uncertainty estimation and is based on a successful approach often used in weather forecasting. The other two are improvements of a rule extraction technique, resulting in increased comprehensibility and more accurate uncertainty estimations.
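
    A minimal sketch of the uncertainty step in the method summarized above, assuming NumPy: disagreement among the ensemble members flags the predictions that need manual analysis, which the extracted rules then support. The agreement measure and threshold are illustrative choices, not the weather-forecasting-inspired technique the thesis actually proposes.

        import numpy as np

        def flag_uncertain(member_predictions, min_agreement=0.8):
            """member_predictions: (n_members, n_instances) array of class labels (non-negative ints).
            Returns a boolean array marking instances whose prediction needs manual review."""
            votes = np.asarray(member_predictions)
            flags = []
            for column in votes.T:                      # one column per instance
                counts = np.bincount(column)
                agreement = counts.max() / len(column)  # share of members voting for the majority class
                flags.append(agreement < min_agreement)
            return np.array(flags)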

  • 9.
    König, Rikard
    et al.
    University of Skövde, School of Humanities and Informatics.
    Johansson, Ulf
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics.
    Increasing rule extraction comprehensibility (2006). In: International Journal of Information Technology and Intelligent Computing, ISSN 1895-8648, Vol. 1, no 2, p. 303-314. Article in journal (Refereed)
  • 10.
    Löfström, Tuve
    et al.
    Johansson, Ulf
    Sönströd, Cecilia
    König, Rikard
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of SAIS 2007: The 24th Annual Workshop of the Swedish Artificial Intelligence Society, Borås, May 22-23, 2007 (2007). Conference proceedings (editor) (Other academic)
  • 11.
    Löfström, Tuve
    et al.
    University of Skövde, School of Humanities and Informatics.
    König, Richard
    University of Skövde, School of Humanities and Informatics.
    Johansson, Ulf
    University of Skövde, School of Humanities and Informatics.
    Niklasson, Lars
    University of Skövde, School of Humanities and Informatics.
    Strand, Mattias
    University of Skövde, School of Humanities and Informatics.
    Ziemke, Tom
    University of Skövde, School of Humanities and Informatics.
    Benefits of Relating the Retail Domain to Information Fusion (2006). In: 9th International Conference on Information Fusion: IEEE ISIF, IEEE conference proceedings, 2006, article number 4085930. Conference paper (Refereed)