Högskolan i Skövde

his.se Publications
Publications (10 of 77)
Rosén, J., Lindblom, J., Lamb, M. & Billing, E. (2024). Previous Experience Matters: An in-Person Investigation of Expectations in Human–Robot Interaction. International Journal of Social Robotics
Previous Experience Matters: An in-Person Investigation of Expectations in Human–Robot Interaction
2024 (English). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805. Article in journal (Refereed). Epub ahead of print
Abstract [en]

The human–robot interaction (HRI) field goes beyond the mere technical aspects of developing robots, often investigating how humans perceive robots. Human perceptions and behavior are determined, in part, by expectations. Given the impact of expectations on behavior, it is important to understand what expectations individuals bring into HRI settings and how those expectations may affect their interactions with the robot over time. For many people, social robots are not a common part of their experiences, so any expectations they have of social robots are likely shaped by other sources. As a result, individual expectations coming into HRI settings may be highly variable. Although there has been some recent interest in expectations within the field, there is an overall lack of empirical investigation into their impact on HRI, especially in-person robot interactions. To this end, a within-subject in-person study was performed in which participants were instructed to engage in open conversation with the social robot Pepper during two 2.5-minute sessions. The robot was equipped with a custom dialogue system based on the GPT-3 large language model, allowing autonomous responses to verbal input. Participants' affective changes towards the robot were assessed using three questionnaires: NARS and RAS, commonly used in HRI studies, and Closeness, based on the IOS scale. In addition to the three standard questionnaires, a custom question was administered to capture participants' views on robot capabilities. All measures were collected three times: before the interaction with the robot, after the first interaction, and after the second interaction. Results revealed that participants largely retained the expectations they had coming into the study and, in contrast to our hypothesis, none of the measured scales moved towards a common mean. Moreover, previous experience with robots was revealed to be a major factor in how participants experienced the robot in the study. These results can be interpreted as implying that expectations of robots are largely formed before interactions with the robot, and that these expectations do not necessarily change as a result of the interaction. Results reveal a strong connection to how expectations are studied in social psychology and human–human interaction, underpinning their relevance for HRI research.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Expectations, Previous experience, Social robot, Human–robot interaction, Experiment, Expectation gap, Pepper, GPT, Large language models
National Category
Robotics Human Computer Interaction Social Psychology
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-23641 (URN); 10.1007/s12369-024-01107-3 (DOI); 001172192700001 (); 2-s2.0-85186211586 (Scopus ID)
Funder
University of Skövde
Note

CC BY 4.0 DEED

Published: 29 February 2024

Open access funding provided by University of Skövde.

Available from: 2024-02-29. Created: 2024-02-29. Last updated: 2024-04-15. Bibliographically approved
Schreiter, T., Morillo-Mendez, L., Chadalavada, R. T., Rudenko, A., Billing, E., Magnusson, M., . . . Lilienthal, A. J. (2023). Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Paper presented at 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 28-31, 2023, Paradise Hotel, Busan, Korea (pp. 293-300). IEEE
Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2023, p. 293-300. Conference paper, Published paper (Refereed)
Abstract [en]

Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an “Anthropomorphic Robotic Mock Driver” (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Symposium on Robot and Human Interactive Communication proceedings, ISSN 1944-9437, E-ISSN 1944-9445
National Category
Robotics Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-23366 (URN); 10.1109/ro-man57019.2023.10309629 (DOI); 001108678600042 (); 2-s2.0-85186997577 (Scopus ID); 979-8-3503-3670-2 (ISBN); 979-8-3503-3671-9 (ISBN)
Conference
2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) August 28-31, 2023, Paradise Hotel, Busan, Korea
Note

We are grateful for the support of Chittaranjan Swaminathan, Janik Kaden and Timm Linder in setting up the software, Per Sporrong for technical assistance in configuring the hardware, and Per Lindström for creating the mock driver seat used in this study. Their contributions were invaluable to the success of this research.

Available from: 2023-11-17. Created: 2023-11-17. Last updated: 2024-04-15. Bibliographically approved
Rosén, J., Billing, E. & Lindblom, J. (2023). Applying the Social Robot Expectation Gap Evaluation Framework. In: Masaaki Kurosu; Ayako Hashizume (Ed.), Human-Computer Interaction: Thematic Area, HCI 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, July 23–28, 2023, Proceedings, Part III. Paper presented at International Conference on Human-Computer Interaction HCI 2023, Thematic Area, HCI 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, July 23–28, 2023 (pp. 169-188). Cham: Springer
Applying the Social Robot Expectation Gap Evaluation Framework
2023 (English). In: Human-Computer Interaction: Thematic Area, HCI 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, July 23–28, 2023, Proceedings, Part III / [ed] Masaaki Kurosu; Ayako Hashizume, Cham: Springer, 2023, p. 169-188. Conference paper, Published paper (Refereed)
Abstract [en]

Expectations shape our experience with the world, including our interaction with technology. There is a mismatch between what humans expect of social robots and what they are actually capable of. Expectations are dynamic and can change over time. We have previously developed a framework for studying these expectations over time in human-robot interaction (HRI). In this work, we applied the social robot expectation gap evaluation framework in an HRI scenario from a UX evaluation perspective, by analyzing a subset of data collected from a larger experiment. The framework is based on three factors of expectation: affect, cognitive processing, as well as behavior and performance. Four UX goals related to a human-robot interaction scenario were evaluated. Results show that expectations change over time, with an overall improved UX in the second interaction. Moreover, even though some UX goals were partly fulfilled, there are severe issues with the conversation between the user and the robot, ranging from the quality of the interaction to the users' utterances not being recognized by the robot. This work takes the initial steps towards disentangling how expectations work and change over time in HRI. Future work includes expanding the metrics to study expectations and to further validate the framework.

Place, publisher, year, edition, pages
Cham: Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14013
Keywords
Human-robot interaction, Social robots, Expectations, User experience, Evaluation, Expectation gap
National Category
Human Computer Interaction
Research subject
Interaction Lab (ILAB); INF302 Autonomous Intelligent Systems
Identifiers
urn:nbn:se:his:diva-23092 (URN); 10.1007/978-3-031-35602-5_13 (DOI); 2-s2.0-85173035452 (Scopus ID); 978-3-031-35602-5 (ISBN); 978-3-031-35601-8 (ISBN)
Conference
International Conference on Human-Computer Interaction HCI 2023, Thematic Area, HCI 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, July 23–28, 2023
Note

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

Available from: 2023-08-15. Created: 2023-08-15. Last updated: 2023-12-07. Bibliographically approved
Mahmoud, S., Billing, E., Svensson, H. & Thill, S. (2023). How to train a self-driving vehicle: On the added value (or lack thereof) of curriculum learning and replay buffers. Frontiers in Artificial Intelligence, 6, Article ID 1098982.
How to train a self-driving vehicle: On the added value (or lack thereof) of curriculum learning and replay buffers
2023 (English). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1098982. Article in journal (Refereed). Published
Abstract [en]

Learning only from real-world collected data can be unrealistic and time-consuming in many scenarios. One alternative is to use synthetic data as learning environments to learn rare situations, and replay buffers to speed up learning. In this work, we examine how the way the environment is created affects the training of a reinforcement learning agent, through auto-generated environment mechanisms. We take the autonomous vehicle as an application. We compare the effect of two approaches to generating training data for artificial cognitive agents. We consider the added value of curriculum learning (just as in human learning) as a way to structure novel training data that the agent has not seen before, as well as that of using a replay buffer to train further on data the agent has seen before. In other words, the focus of this paper is on the characteristics of the training data rather than on learning algorithms. We therefore use two tasks that are commonly trained early on in autonomous vehicle research: lane keeping and pedestrian avoidance. Our main results show that curriculum learning indeed offers an additional benefit over a vanilla reinforcement learning approach (using Deep Q-Learning), but that the replay buffer actually has a detrimental effect in most (but not all) combinations of data generation approaches considered here. The benefit of curriculum learning does depend on the existence of a well-defined difficulty metric by which the various training scenarios can be ordered. In the lane-keeping task, we can define it as a function of the curvature of the road: the steeper and more frequent the curves, the more difficult the task. Defining such a difficulty metric in other scenarios is not always trivial. In general, the results of this paper emphasize both the importance of considering data characterization, such as curriculum learning, and the importance of defining an appropriate difficulty metric for the task.
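The curvature-based ordering described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the scenario fields, the product-of-curvature-and-count metric, and the three-stage split are all invented for this sketch.

```python
import random

# Hypothetical sketch of a curriculum built from a difficulty metric,
# as described for the lane-keeping task: scenarios with steeper and
# more frequent curves are considered harder. All parameters invented.

def difficulty(scenario):
    """Higher mean curvature and more curves -> harder scenario."""
    return scenario["mean_curvature"] * scenario["num_curves"]

def build_curriculum(scenarios, stages=3):
    """Sort scenarios by difficulty and split into contiguous stages."""
    ordered = sorted(scenarios, key=difficulty)
    n = len(ordered)
    return [ordered[i * n // stages:(i + 1) * n // stages]
            for i in range(stages)]

scenarios = [{"mean_curvature": random.uniform(0.0, 0.2),
              "num_curves": random.randint(1, 10)} for _ in range(12)]
curriculum = build_curriculum(scenarios)
# Training would then proceed stage by stage, easiest first.
```

A replay buffer would, by contrast, re-present already-seen data regardless of this ordering, which is the contrast the paper evaluates.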

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
data generation, curriculum learning, cognitive-inspired learning, reinforcement learning, replay buffer, self-driving cars
National Category
Computer Sciences Computer Vision and Robotics (Autonomous Systems) Robotics
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-22215 (URN); 10.3389/frai.2023.1098982 (DOI); 000928959000001 (); 36762255 (PubMedID); 2-s2.0-85147654896 (Scopus ID)
Funder
EU, Horizon 2020, 731593
Note

CC BY 4.0

Received 15 November 2022, Accepted 05 January 2023, Published 25 January 2023

This article was submitted to Machine Learning and Artificial Intelligence, a section of the journal Frontiers in Artificial Intelligence

This article is part of the Research Topic Artificial Intelligence and Autonomous Systems

Correspondence: Sara Mahmoud, sara.mahmoud@his.se

Part of this work was funded under the Horizon 2020 project DREAMS4CARS, Grant No. 731593.

Available from: 2023-01-31. Created: 2023-01-31. Last updated: 2023-05-04. Bibliographically approved
Nair, V., Hemeren, P., Vignolo, A., Noceti, N., Nicora, E., Sciutti, A., . . . Sandini, G. (2023). Kinematic primitives in action similarity judgments: A human-centered computational model. IEEE Transactions on Cognitive and Developmental Systems, 15(4), 1981-1992
Kinematic primitives in action similarity judgments: A human-centered computational model
2023 (English). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 15, no 4, p. 1981-1992. Article in journal (Refereed). Published
Abstract [en]

This paper investigates the role that kinematic features play in human action similarity judgments. The results of three experiments with human participants are compared with a computational model that solves the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. Specifically, the similarity of most of the given actions could be judged based on very limited information from a single feature domain (velocity or spatial). Both velocity and spatial features were, however, necessary to reach human-level performance on the evaluated actions. The experimental results also show that human performance on an action identification task clearly relied on kinematic information rather than on action semantics. Both the model and humans are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Biological systems, Computation theory, Computational methods, Job analysis, Kinematics, Semantics, Action matching, Action similarity, Biological motion, Biological system modeling, Comparatives studies, Computational modelling, Kinematic primitive, Light display, Point light display, Task analysis, Optical flows, Biology, comparative study, computational model, Computational modeling, Data models, Dictionaries, kinematic primitives, Optical flow
National Category
Computer Sciences Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-22308 (URN); 10.1109/TCDS.2023.3240302 (DOI); 001126639000035 (); 2-s2.0-85148457281 (Scopus ID)
Note

CC BY 4.0

Corresponding author: Vipul Nair.

This work has been partially carried out at the Machine Learning Genoa (MaLGa) center, Università di Genova (IT). It has been partially supported by AFOSR, grant n. FA8655-20-1-7035, and research collaboration between University of Skövde and Istituto Italiano di Tecnologia, Genoa.

Available from: 2023-03-02. Created: 2023-03-02. Last updated: 2024-05-03. Bibliographically approved
Billing, E., Rosén, J. & Lamb, M. (2023). Language Models for Human-Robot Interaction. In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at ACM/IEEE International Conference on Human-Robot Interaction, March 13–16, 2023, Stockholm, Sweden (pp. 905-906). ACM Digital Library
Language Models for Human-Robot Interaction
2023 (English). In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM Digital Library, 2023, p. 905-906. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Recent advances in large-scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference, and the source code of this integration is shared in the hope that it will serve the community in designing and evaluating new dialogue systems for robots.
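The kind of integration the abstract describes, turning a text-completion API into a spoken turn-taking loop, can be sketched as below. This is not the authors' released code: `query_llm` is a hypothetical stand-in for the GPT-3 API call, and the robot's speech recognition and text-to-speech are assumed to happen outside this function.

```python
# Minimal sketch of wrapping a text-only language model into a spoken
# robot dialogue turn. The query_llm callable is a hypothetical stand-in
# for the actual LLM API; ASR input and TTS output are handled elsewhere.

def dialogue_turn(history, user_utterance, query_llm):
    """Append the user's utterance, query the model, append its reply.

    history is a list of "Human: ..." / "Robot: ..." strings that forms
    the running prompt, so the model sees the whole conversation so far.
    """
    history.append(f"Human: {user_utterance}")
    prompt = "\n".join(history) + "\nRobot:"
    reply = query_llm(prompt).strip()
    history.append(f"Robot: {reply}")
    return reply
```

On a real robot, the returned string would be passed to the robot's text-to-speech service, and the loop repeats with the next recognized utterance.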

Place, publisher, year, edition, pages
ACM Digital Library, 2023
National Category
Language Technology (Computational Linguistics) Computer Vision and Robotics (Autonomous Systems)
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-22328 (URN); 10.1145/3568294.3580040 (DOI); 001054975700198 (); 2-s2.0-85150449271 (Scopus ID); 978-1-4503-9970-8 (ISBN)
Conference
ACM/IEEE International Conference on Human-Robot Interaction, March 13–16, 2023, Stockholm, Sweden
Note

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Available from: 2023-03-17. Created: 2023-03-17. Last updated: 2023-10-13. Bibliographically approved
Gander, P., Holm, L. & Billing, E. (Eds.). (2023). Proceedings of the 18th SweCog Conference. Paper presented at 18th SweCog Conference, Swedish Cognitive Society, Göteborg 2023, 5 - 6 October. Skövde: Högskolan i Skövde
Proceedings of the 18th SweCog Conference
2023 (English). Conference proceedings (editor) (Refereed)
Place, publisher, year, edition, pages
Skövde: Högskolan i Skövde, 2023. p. 90
Series
SUSI, ISSN 1653-2325 ; 2023:1
National Category
Psychology (excluding Applied Psychology) Computer Sciences Neurosciences
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-23342 (URN)978-91-989038-0-5 (ISBN)
Conference
18th SweCog Conference, Swedish Cognitive Society, Göteborg 2023, 5 - 6 October
Available from: 2023-11-08. Created: 2023-11-08. Last updated: 2023-11-14. Bibliographically approved
Sandhu, G., Kilburg, A., Martin, A., Pande, C., Witschel, H. F., Laurenzi, E. & Billing, E. (2022). A Learning Tracker using Digital Biomarkers for Autistic Preschoolers. In: Knut Hinkelmann; Aurona Gerber (Ed.), Proceedings of the Society 5.0 Conference 2022 - Integrating Digital World and Real World to Resolve Challenges in Business and Society. Paper presented at Society 5.0, Integrating Digital World and Real World to Resolve Challenges in Business and Society, 2nd Conference, hybrid (online and physical) at the FHNW University of Applied Sciences and Arts Northwestern Switzerland from 20th to 22nd June 2022, Windisch, Switzerland (pp. 219-230). EasyChair
A Learning Tracker using Digital Biomarkers for Autistic Preschoolers
2022 (English). In: Proceedings of the Society 5.0 Conference 2022 - Integrating Digital World and Real World to Resolve Challenges in Business and Society / [ed] Knut Hinkelmann; Aurona Gerber, EasyChair, 2022, p. 219-230. Conference paper, Published paper (Refereed)
Abstract [en]

Preschool children, when diagnosed with Autism Spectrum Disorder (ASD), often experience a long and painful journey on their way to self-advocacy. Access to standard of care is poor, with long waiting times and the feeling of stigmatization in many social settings. Early interventions in ASD have been found to deliver promising results, but have a high cost for all stakeholders. Some recent studies have suggested that digital biomarkers (e.g., eye gaze), tracked using affordable wearable devices such as smartphones or tablets, could play a role in identifying children with special needs. In this paper, we discuss the possibility of supporting neurodiverse children with technologies based on digital biomarkers which can help to a) monitor the performance of children diagnosed with ASD and b) predict those who would benefit most from early interventions. We describe an ongoing feasibility study that uses the “DREAM dataset”, stemming from a clinical study with 61 pre-school children diagnosed with ASD, to identify digital biomarkers informative for the child’s progression on tasks such as imitation of gestures. We describe our vision of a tool that will use these prediction models and that ASD pre-schoolers could use to train certain social skills at home. Our discussion includes the settings in which this usage could be embedded.

Place, publisher, year, edition, pages
EasyChair, 2022
Series
EPiC Series in Computing, ISSN 2398-7340 ; 84
Keywords
Autism Spectrum Disorder, Digital Biomarkers, machine learning, personalized medicine
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering Nursing
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-21351 (URN); 10.29007/m2jx (DOI); 2-s2.0-85133755454 (Scopus ID)
Conference
Society 5.0, Integrating Digital World and Real World to Resolve Challenges in Business and Society, 2nd Conference, hybrid (online and physical) at the FHNW University of Applied Sciences and Arts Northwestern Switzerland from 20th to 22nd June 2022, Windisch, Switzerland
Note

"Our feasibility study is funded by the Swiss Innovation Agency Innosuisse under the grant number 60506.1 INNO-LS."

Available from: 2022-06-21. Created: 2022-06-21. Last updated: 2022-07-21. Bibliographically approved
Hanson, L., Högberg, D., Brolin, E., Billing, E., Iriondo Pascual, A. & Lamb, M. (2022). Current Trends in Research and Application of Digital Human Modeling. In: Nancy L. Black; W. Patrick Neumann; Ian Noy (Ed.), Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021): Volume V: Methods & Approaches. Paper presented at 21st Congress of the International Ergonomics Association (IEA 2021), 13-18 June (pp. 358-366). Cham: Springer
Current Trends in Research and Application of Digital Human Modeling
2022 (English). In: Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021): Volume V: Methods & Approaches / [ed] Nancy L. Black; W. Patrick Neumann; Ian Noy, Cham: Springer, 2022, p. 358-366. Conference paper, Published paper (Refereed)
Abstract [en]

The paper reports an investigation conducted during the DHM2020 Symposium regarding current trends in research and application of DHM in academia, software development, and industry. The results show that virtual reality (VR), augmented reality (AR), and digital twin are major current trends. Furthermore, results show that human diversity is considered in DHM using established methods. Results also show a shift from the assessment of static postures to assessment of sequences of actions, combined with a focus mainly on human well-being and only partly on system performance. Motion capture and motion algorithms are alternative technologies introduced to facilitate and improve DHM simulations. Results from the DHM simulations are mainly presented through pictures or animations.

Place, publisher, year, edition, pages
Cham: Springer, 2022
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370, E-ISSN 2367-3389 ; 223
Keywords
Digital Human Modeling, Trends, Research, Development, Application
National Category
Production Engineering, Human Work Science and Ergonomics
Research subject
User Centred Product Design; Interaction Lab (ILAB); VF-KDO
Identifiers
urn:nbn:se:his:diva-19959 (URN); 10.1007/978-3-030-74614-8_44 (DOI); 2-s2.0-85111461730 (Scopus ID); 978-3-030-74613-1 (ISBN); 978-3-030-74614-8 (ISBN)
Conference
21st Congress of the International Ergonomics Association (IEA 2021), 13-18 June
Funder
Knowledge Foundation, 20180167; Vinnova, 2018-05026; Knowledge Foundation, 20200003
Note

© 2022

Available from: 2021-06-22. Created: 2021-06-22. Last updated: 2023-08-16. Bibliographically approved
Lamb, M., Brundin, M., Perez Luque, E. & Billing, E. (2022). Eye-Tracking Beyond Peripersonal Space in Virtual Reality: Validation and Best Practices. Frontiers in Virtual Reality, 3, Article ID 864653.
Eye-Tracking Beyond Peripersonal Space in Virtual Reality: Validation and Best Practices
2022 (English). In: Frontiers in Virtual Reality, E-ISSN 2673-4192, Vol. 3, article id 864653. Article in journal (Refereed). Published
Abstract [en]

Recent developments in commercial virtual reality (VR) hardware with embedded eye-tracking create tremendous opportunities for human subjects researchers. Accessible eye-tracking in VR opens new opportunities for highly controlled experimental setups in which participants can engage with novel 3D digital environments. However, because VR embedded eye-tracking differs from the majority of historical eye-tracking research, in allowing both relatively unconstrained movement and varying stimulus presentation distances, there is a need for greater discussion of methods for the implementation and validation of VR-based eye-tracking tools. The aim of this paper is to provide a practical introduction to the challenges of, and methods for, 3D gaze tracking in VR, with a focus on best practices for results validation and reporting. Specifically, we first identify and define challenges and methods for collecting and analyzing 3D eye-tracking data in VR. Then, we introduce a validation pilot study with a focus on factors related to 3D gaze tracking. The pilot study both provides a reference data point for a common commercial hardware/software platform (HTC Vive Pro Eye) and illustrates the proposed methods. One outcome of this study was the observation that the accuracy and precision of collected data may depend on stimulus distance, which has consequences for studies where stimuli are presented at varying distances. We also conclude that vergence is a potentially problematic basis for estimating gaze depth in VR and should be used with caution as the field moves towards a more established method for 3D eye-tracking.
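Vergence-based gaze-depth estimation, which the abstract flags as problematic, amounts to triangulating the two eyes' gaze rays: the 3D gaze point is approximated by the point closest to both rays. A minimal geometric sketch (the function name and the eye geometry in the comments are illustrative, not from the paper):

```python
import numpy as np

# Sketch of vergence-based gaze-depth estimation: approximate the 3D
# gaze point as the midpoint of the shortest segment between the left
# and right eyes' gaze rays. Small angular noise in near-parallel rays
# produces large depth errors, which is why vergence-based depth is
# fragile at larger viewing distances.

def vergence_gaze_point(p_l, d_l, p_r, d_r):
    """Closest-approach midpoint of rays p_l + s*d_l and p_r + t*d_r."""
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # near-parallel rays: depth ill-defined
        return None
    s = (b * e - c * d) / denom    # parameter along the left-eye ray
    t = (a * e - b * d) / denom    # parameter along the right-eye ray
    return (p_l + s * d_l + p_r + t * d_r) / 2
```

The `None` branch makes the fragility concrete: as fixation distance grows, the rays approach parallel and the estimate degenerates.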

Place, publisher, year, edition, pages
Frontiers Media S.A., 2022
Keywords
eye tracking, virtual reality, gaze depth, vergence, validation
National Category
Computer Sciences Human Computer Interaction Interaction Technologies
Research subject
Interaction Lab (ILAB); User Centred Product Design
Identifiers
urn:nbn:se:his:diva-21062 (URN); 10.3389/frvir.2022.864653 (DOI); 001023339600001 (); 2-s2.0-85138010016 (Scopus ID)
Funder
Knowledge Foundation
Note

CC BY 4.0

Correspondence: Maurice Lamb, Maurice.Lamb@his.se

Published: 08 April 2022

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The software used for data collection in this project can be found at https://doi.org/10.5281/zenodo.6368107.

Funding of this project was provided through the Knowledge Foundation as a part of both the Recruitment and Strategic Knowledge Reinforcement initiative and within the Synergy Virtual Ergonomics (SVE) project (#20180167).

We want to thank the Knowledge Foundation and the associated INFINIT research environment at the University of Skövde for support through funding of both the Recruitment and Strategic Knowledge Reinforcement initiative and within the Synergy Virtual Ergonomics (SVE) project. This support is gratefully acknowledged.

Available from: 2022-04-14. Created: 2022-04-14. Last updated: 2023-08-23. Bibliographically approved
Projects
Synergy Virtual Ergonomics (SVE) [20180167]; University of Skövde

Publications
Iriondo Pascual, A. (2023). Simulation-based multi-objective optimization of productivity and worker well-being. (Doctoral dissertation). Skövde: University of Skövde
Hanson, L., Högberg, D., Brolin, E., Billing, E., Iriondo Pascual, A. & Lamb, M. (2022). Current Trends in Research and Application of Digital Human Modeling. In: Nancy L. Black; W. Patrick Neumann; Ian Noy (Ed.), Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021): Volume V: Methods & Approaches. Paper presented at 21st Congress of the International Ergonomics Association (IEA 2021), 13-18 June (pp. 358-366). Cham: Springer
Garcia Rivera, F., Högberg, D., Lamb, M. & Perez Luque, E. (2022). DHM supported assessment of the effects of using an exoskeleton during work. International Journal of Human Factors Modelling and Simulation, 7(3/4), 231-246
Marshall, R., Brolin, E., Summerskill, S. & Högberg, D. (2022). Digital Human Modelling: Inclusive Design and the Ageing Population (1 ed.). In: Sofia Scataglini; Silvia Imbesi; Gonçalo Marques (Ed.), Internet of Things for Human-Centered Design: Application to Elderly Healthcare (pp. 73-96). Singapore: Springer Nature
Iriondo Pascual, A., Lind, A., Högberg, D., Syberfeldt, A. & Hanson, L. (2022). Enabling Concurrent Multi-Objective Optimization of Worker Well-Being and Productivity in DHM Tools. In: Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm (Ed.), SPS2022: Proceedings of the 10th Swedish Production Symposium. Paper presented at 10th Swedish Production Symposium (SPS2022), Skövde, April 26–29 2022 (pp. 404-414). Amsterdam; Berlin; Washington, DC: IOS Press
Iriondo Pascual, A., Smedberg, H., Högberg, D., Syberfeldt, A. & Lämkull, D. (2022). Enabling Knowledge Discovery in Multi-Objective Optimizations of Worker Well-Being and Productivity. Sustainability, 14(9), Article ID 4894.
Lamb, M., Brundin, M., Perez Luque, E. & Billing, E. (2022). Eye-Tracking Beyond Peripersonal Space in Virtual Reality: Validation and Best Practices. Frontiers in Virtual Reality, 3, Article ID 864653.
Hanson, L., Högberg, D., Iriondo Pascual, A., Brolin, A., Brolin, E. & Lebram, M. (2022). Integrating Physical Load Exposure Calculations and Recommendations in Digitalized Ergonomics Assessment Processes. In: Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm (Ed.), SPS2022: Proceedings of the 10th Swedish Production Symposium. Paper presented at 10th Swedish Production Symposium (SPS2022), Skövde, April 26–29 2022 (pp. 233-239). Amsterdam; Berlin; Washington, DC: IOS Press
Iriondo Pascual, A., Högberg, D., Syberfeldt, A., Brolin, E., Perez Luque, E., Hanson, L. & Lämkull, D. (2022). Multi-objective Optimization of Ergonomics and Productivity by Using an Optimization Framework. In: Nancy L. Black; W. Patrick Neumann; Ian Noy (Ed.), Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021): Volume V: Methods & Approaches. Paper presented at 21st Congress of the International Ergonomics Association (IEA 2021), 13-18 June, 2021 (pp. 374-378). Cham: Springer
García Rivera, F., Lamb, M., Högberg, D. & Brolin, A. (2022). The Schematization of XR Technologies in the Context of Collaborative Design. In: Amos H. C. Ng; Anna Syberfeldt; Dan Högberg; Magnus Holm (Ed.), SPS2022: Proceedings of the 10th Swedish Production Symposium. Paper presented at 10th Swedish Production Symposium (SPS2022), Skövde, April 26–29 2022 (pp. 520-529). Amsterdam; Berlin; Washington, DC: IOS Press
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-6568-9342
