Designing for human-centered AI: Lessons learned from a case study in the clinical domain
2025 (English). In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 205, November 2025, article id 103623. Article in journal (Refereed). Published.
Abstract [en]
AI tools for supporting, or even fully automating, human decision-making have been proposed in a variety of domains, promising faster and higher-quality decisions. However, for high-stakes and critical decisions, humans are still required in the decision-making process. Despite the need for human involvement, research centers mainly on the technical issues of AI, i.e., how to develop better-performing machine learning (ML) models, setting aside the question of how to design, develop, and evaluate AI tools that are to be used in a human-AI context. This focus has led to a lack of experience with, and guidance for, designing and developing AI tools that support their users in a decision-making context while keeping the human in the loop. In this paper, we outline our work on designing, developing, and evaluating a transparent AI-based tool to be used by non-AI experts, namely healthcare professionals. The work carried out had two parallel tracks. One focused on testing and implementing a suitable ML technique for sepsis diagnostics based on real patient data and on applying explainable AI (XAI) techniques to the results, enabling healthcare professionals to better understand and trust the analysis results. The other track comprised an iterative design process for developing a user-centered, transparent, and trustworthy sepsis diagnostic tool, evaluating whether the generated XAI explanations were fit for purpose. We present the process applied to intertwine these tracks in a common multidisciplinary development process, providing guidance on how to conduct a human-centered AI (HCAI) project. We discuss lessons learned and outline future work for the development of HCAI tools to be used by non-AI experts.
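As an illustration of the first track described above (training an ML classifier on patient data and applying an XAI technique to its predictions), the following minimal sketch shows one common way such a pipeline can be set up. It is not the paper's model or code: the classifier, the choice of SHAP as the XAI method, and the feature names are assumptions and placeholders, and the data is synthetic.

```python
# Illustrative sketch only (not the miRSeps model): train a tabular classifier
# and explain an individual prediction with SHAP, one widely used XAI method.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features standing in for real patient measurements.
X = pd.DataFrame({
    "heart_rate": rng.normal(90, 15, n),
    "temperature": rng.normal(38.0, 1.0, n),
    "lactate": rng.gamma(2.0, 1.0, n),
    "wbc_count": rng.normal(11.0, 4.0, n),
})
# Synthetic label loosely coupled to the features, for demonstration only.
y = ((X["lactate"] > 2.5) & (X["heart_rate"] > 95)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Per-patient explanation: which features pushed this prediction towards
# or away from a positive classification (contributions are in log-odds).
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X_test.iloc[[0]])[0]
print(dict(zip(X.columns, np.round(contributions, 3))))
```

In a clinical tool of the kind described in the abstract, such per-prediction feature contributions would typically be translated into a presentation suitable for non-AI experts, which is where the second, user-centered design track comes in.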
Place, publisher, year, edition, pages
Elsevier, 2025. Vol. 205, November 2025, article id 103623
Keywords [en]
Human-centered AI, trust, transparency, Explainable AI, AI for clinical decision-making
National Category
Human Computer Interaction
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
URN: urn:nbn:se:his:diva-25833
DOI: 10.1016/j.ijhcs.2025.103623
ISI: 001583488400001
Scopus ID: 2-s2.0-105015863866
OAI: oai:DiVA.org:his-25833
DiVA, id: diva2:1998371
Funder
Knowledge Foundation
Note
CC BY 4.0
This article is part of a Special Issue entitled ‘HCAI’ published in the International Journal of Human-Computer Studies.
Corresponding author: tove.helldin@his.se (T. Helldin).
This work has been carried out under the grant "Future diagnostics of sepsis - miRSeps", funded by the Swedish Knowledge Foundation, Sweden. We would like to thank all the participants in our user studies, whose input enabled the iterative refinement of the sepsis diagnostic tool. We would also like to thank Anna Kjellsdotter at VGR for enabling the distribution of the survey.
Available from: 2025-09-16. Created: 2025-09-16. Last updated: 2025-11-17. Bibliographically approved.