Högskolan i Skövde – his.se Publications
Towards Sonification in Multimodal and User-Friendly Explainable Artificial Intelligence
Embedded Intelligence for Health Care & Wellbeing, University of Augsburg, Augsburg, Germany.
Audio Research Group, Tampere University, Finland.
Department of Computing, Jönköping University, Jönköping AI Lab (JAIL), Sweden. ORCID iD: 0000-0003-2900-9335
GLAM – the Group on Language, Audio, & Music, Imperial College London, United Kingdom.
2021 (English). In: ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction / [ed] Zakia Hammal; Carlos Busso; Catherine Pelachaud; Sharon Oviatt; Albert Ali Salah; Guoying Zhao, Association for Computing Machinery (ACM), 2021, p. 788-792. Conference paper, Published paper (Refereed)
Abstract [en]

We are largely used to hearing explanations. For example, if someone thinks you are sad today, they might reply to your “why?” with “because you were so Hmmmmm-mmm-mmm”. Today’s Artificial Intelligence (AI), however, provides explanations of its decisions – if at all – largely in a visual or textual manner. While such approaches are well suited to communication via visual media, such as research papers or the screens of intelligent devices, they may not always be the best way to explain, especially when the end user is not an expert. In particular, when the AI’s task concerns Audio Intelligence, visual explanations appear less intuitive than audible, sonified ones. Sonification also has great potential for explainable AI (XAI) in systems that deal with non-audio data – for example, because it does not require visual contact or the active attention of a user.

Hence, sonified explanations of AI decisions face a challenging, yet highly promising and pioneering task. This involves incorporating innovative XAI algorithms that allow pointing back at the learning data responsible for decisions made by an AI, and decomposing the data to identify salient aspects. It further aims to identify the components of the preprocessing, the feature representation, and the learnt attention patterns that are responsible for the decisions. Finally, it targets decision-making at the model level, to provide a holistic, end-to-end explanation of the chain of processing in typical pattern recognition problems. Sonified AI explanations will need to unite methods for sonification of the identified aspects that benefit decisions, decomposition and recomposition of audio to sonify which parts of the audio were responsible for the decision, and rendering attention patterns and salient feature representations audible.

Benchmarking sonified XAI is challenging, as it requires comparison against a backdrop of existing, state-of-the-art visual and textual alternatives, as well as synergistic complementation of all modalities in user evaluations. Sonified AI explanations will also need to target different user groups, to allow personalisation of the sonification experience for different user needs and to lead to a major breakthrough in the comprehensibility of AI via hearing how decisions are made, hence supporting the trustability of tomorrow’s humane AI. Here, we introduce and motivate the general idea, and provide accompanying considerations, including milestones for the realisation of sonified XAI and foreseeable risks.
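
To make concrete what rendering salient feature representations audible could look like in practice, the following is a minimal Python sketch, not taken from the paper: all function names, parameters, and the saliency data are illustrative assumptions. It amplitude-modulates a reference tone with per-frame saliency scores, so that the regions of an input that most influenced a decision are heard louder.

    import wave
    import numpy as np

    def sonify_saliency(saliency, sample_rate=16000, frame_len=400, freq=440.0):
        # Normalise per-frame importance scores to [0, 1] so they form a gain envelope.
        s = np.asarray(saliency, dtype=np.float64)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)
        envelope = np.repeat(s, frame_len)          # one gain value per audio sample
        t = np.arange(envelope.size) / sample_rate
        tone = np.sin(2.0 * np.pi * freq * t)       # reference carrier tone
        return (envelope * tone).astype(np.float32)

    def write_wav(path, signal, sample_rate=16000):
        # Store the sonified explanation as 16-bit mono PCM.
        pcm = (np.clip(signal, -1.0, 1.0) * 32767).astype(np.int16)
        with wave.open(path, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(sample_rate)
            f.writeframes(pcm.tobytes())

    if __name__ == "__main__":
        # Hypothetical saliency scores, e.g. per-frame attention weights of a classifier.
        fake_saliency = np.abs(np.sin(np.linspace(0.0, 3.0 * np.pi, 200)))
        write_wav("saliency_sonification.wav", sonify_saliency(fake_saliency))

A full sonified-XAI system as envisioned in the paper would instead recompose the original audio or drive richer synthesis parameters, but the basic mapping from importance scores to audible dimensions follows the same pattern.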

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2021. p. 788-792
Keywords [en]
Explainable artificial intelligence, sonification, trustworthy artificial intelligence, human computer interaction, multimodality
National Category
Computer Sciences; Media Engineering; Human Computer Interaction
Identifiers
URN: urn:nbn:se:his:diva-22301
DOI: 10.1145/3462244.3479879
Scopus ID: 2-s2.0-85118971526
ISBN: 978-1-4503-8481-0 (print)
OAI: oai:DiVA.org:his-22301
DiVA, id: diva2:1739339
Conference
ICMI ’21, International Conference on Multimodal Interaction, 18–22 October 2021, Montréal, QC, Canada
Funder
EU, Horizon 2020, 826506
Note

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826506 (sustAGE).

VF-KDO

Available from: 2021-10-21 Created: 2023-02-24 Last updated: 2023-02-28 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Riveiro, Maria

