Högskolan i Skövde

his.se Publications
Explainable AI methods for credit card fraud detection: Evaluation of LIME and SHAP through a User Study
University of Skövde, School of Informatics.
2021 (English). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

In the past few years, Artificial Intelligence (AI) has evolved into a powerful tool applied across disciplines to solve sophisticated problems. As AI becomes more powerful and ubiquitous, its methods often also become opaque, which can cause trust issues for the users of AI systems and fail to meet legal requirements for AI transparency. In this report, the possibility of making a credit card fraud detection support system explainable to users is investigated through a quantitative survey. A publicly available credit card dataset was used. Deep Learning and Random Forest were the two Machine Learning (ML) methods implemented and applied to the credit card fraud dataset, and their performance was evaluated in terms of accuracy, recall, sufficiency, and F1 score. After that, two explainable AI (XAI) methods, SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations), were implemented and applied to the results obtained from these two ML methods. Finally, the XAI results were evaluated through a quantitative survey. The results from the survey revealed that the XAI explanations can slightly increase the users' impression of the system's ability to reason, and that LIME had a slight advantage over SHAP in terms of explainability. Further investigation of visualizing data pre-processing and the training process is suggested to offer deeper explanations for users.
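The SHAP method named in the abstract is built on Shapley values: each feature's attribution is its average marginal contribution to the model output over all feature subsets. As a minimal, self-contained sketch of that idea (not the thesis's actual implementation), the snippet below computes exact Shapley values for a toy "fraud score" function; the feature names (amount, foreign-transaction flag, night-time flag) are hypothetical illustrations, not features of the real dataset.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x.

    For each feature i, average its marginal contribution over every
    subset S of the other features, with features outside S held at
    the baseline value. Exponential in the number of features, so
    only suitable for tiny illustrations like this one.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                z = list(baseline)
                for j in S:
                    z[j] = x[j]
                without_i = f(z)
                z[i] = x[i]
                with_i = f(z)
                phi[i] += w * (with_i - without_i)
    return phi

# Toy fraud score with an interaction term between amount and the
# foreign-transaction flag (hypothetical features for illustration).
def score(z):
    amount, foreign, night = z
    return 0.5 * amount + 0.3 * foreign + 0.4 * amount * foreign + 0.1 * night

x = [1.0, 1.0, 1.0]          # the transaction being explained
baseline = [0.0, 0.0, 0.0]   # reference ("average") transaction
phi = shapley_values(score, x, baseline)

# Efficiency axiom: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (score(x) - score(baseline))) < 1e-9
```

The interaction term is split evenly between the two interacting features, so the amount and foreign-flag attributions each absorb half of the 0.4 interaction effect; production SHAP libraries approximate this same quantity efficiently for real models such as the Random Forest used in the thesis.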

Place, publisher, year, edition, pages
2021, p. 49
Keywords [en]
Explainable AI, Local Explanations, Shapley Additive Explanations, Local Interpretable Model Agnostic Explanations, Credit Card Fraud Detection
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:his:diva-20848
OAI: oai:DiVA.org:his-20848
DiVA id: diva2:1626230
Subject / course
Information Technology
Educational program
Data Science - Master’s Programme
Available from: 2022-01-11. Created: 2022-01-11. Last updated: 2022-01-11. Bibliographically approved.

Open Access in DiVA

fulltext (1592 kB), 2868 downloads
File information
File name: FULLTEXT01.pdf
File size: 1592 kB
Checksum: SHA-512
749857f40abc0e43b624eb676c4f629bda0130cd6c61bb278f32b1294e818ff69dc68158011965e0f33dc16a66c6387514270a208aa35cdbea34562c8fa31020
Type: fulltext
Mimetype: application/pdf


Total: 2877 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 5636 hits