Högskolan i Skövde

his.se Publications
Publications (10 of 65)
Lagerstedt, E. & Thill, S. (2023). Conceptual Tools for Exploring Perspectives of Different Kinds of Road-Users. Paper presented at HAI ’23 Workshop — Cars As Social Agents, Gothenburg, Sweden, December 4, 2023, co-located with the 11th International Conference on Human-Agent Interaction (HAI 2023).
2023 (English) Conference paper, Published paper (Refereed)
Abstract [en]

The traffic domain is increasingly inhabited by vehicles with driving support systems and automation, to the degree that the idea of fully autonomous vehicles is gaining popularity as a credible prediction about the near future. As more aspects of driving become automated, the role of the driver, and the way drivers perceive their vehicle, surroundings, and fellow road users, change. To address some of the emerging kinds of interaction between different agents in the traffic environment, it is important to take social phenomena and abilities into account, even to the extent of considering highly automated vehicles to be social agents in their own right. To benefit from this, it is important to frame the perception of the traffic environment, as well as the road users in it, in an appropriate theoretical context. We propose that there are helpful concepts related to functional and subjective perception, derived from gestalt psychology and Umweltlehre, that can fill this theoretical need and support a better understanding of vehicles with various degrees of automation.

Keywords
Autonomous Vehicles, Human-Agent Interaction, Human-Robot Interaction, perception, social, interaction
National Category
Human Computer Interaction; Psychology; Transport Systems and Logistics
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-23444 (URN)
Conference
HAI ’23 Workshop — Cars As Social Agents, Gothenburg, Sweden, December 4, 2023, Co-located with the 11th International Conference on Human-Agent Interaction (HAI 2023)
Available from: 2023-12-08 Created: 2023-12-08 Last updated: 2023-12-11. Bibliographically approved
Mahmoud, S., Billing, E., Svensson, H. & Thill, S. (2023). How to train a self-driving vehicle: On the added value (or lack thereof) of curriculum learning and replay buffers. Frontiers in Artificial Intelligence, 6, Article ID 1098982.
2023 (English) In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1098982. Article in journal (Refereed), Published
Abstract [en]

Learning from only real-world collected data can be unrealistic and time consuming in many scenarios. One alternative is to use synthetic data and generated learning environments to cover rare situations, and replay buffers to speed up learning. In this work, we examine how the way the environment is created affects the training of a reinforcement learning agent, using auto-generated environment mechanisms. We take the autonomous vehicle as an application. We compare the effect of two approaches to generating training data for artificial cognitive agents. We consider the added value of curriculum learning—just as in human learning—as a way to structure novel training data that the agent has not seen before, as well as that of using a replay buffer to train further on data the agent has seen before. In other words, the focus of this paper is on characteristics of the training data rather than on learning algorithms. We therefore use two tasks that are commonly trained early on in autonomous vehicle research: lane keeping and pedestrian avoidance. Our main results show that curriculum learning indeed offers an additional benefit over a vanilla reinforcement learning approach (using deep Q-learning), but the replay buffer actually has a detrimental effect in most (but not all) combinations of data generation approaches we considered here. The benefit of curriculum learning does depend on the existence of a well-defined difficulty metric with which the various training scenarios can be ordered. In the lane-keeping task, we can define it as a function of the curvature of the road: the steeper and more frequent the curves, the more difficult the task. Defining such a difficulty metric in other scenarios is not always trivial. In general, the results of this paper emphasize both the importance of considering data characterization, such as curriculum learning, and the importance of defining an appropriate metric for the task.
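The difficulty-metric idea is concrete enough to sketch. Below is a minimal, illustrative curriculum for the lane-keeping case: scenarios are ordered by a curvature-based difficulty score before training. The scenario format, the weighting inside the score, and the training stub are assumptions for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    curvatures: list  # per-segment road curvature (1/m), signed

def difficulty(s: Scenario) -> float:
    """Higher for sharper and more frequent curves (assumed weighting)."""
    sharpness = max(abs(c) for c in s.curvatures)
    frequency = sum(1 for c in s.curvatures if abs(c) > 0.01)
    return 100.0 * sharpness + frequency

scenarios = [
    Scenario("straight", [0.0] * 20),
    Scenario("gentle_bends", [0.005, 0.0, -0.004, 0.0] * 5),
    Scenario("mountain_pass", [0.03, -0.025, 0.04, -0.03] * 5),
]

# Curriculum: present scenarios in order of increasing difficulty.
for scenario in sorted(scenarios, key=difficulty):
    print(f"training stage: {scenario.name} (difficulty {difficulty(scenario):.1f})")
    # train_dqn(agent, scenario)  # hypothetical placeholder for the RL training loop
```

As the abstract notes, this ordering only works because road curvature gives a natural scalar to sort by; in a task like pedestrian avoidance no equally obvious metric exists.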

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
data generation, curriculum learning, cognitive-inspired learning, reinforcement learning, replay buffer, self-driving cars
National Category
Computer Sciences; Computer graphics and computer vision; Robotics and automation
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-22215 (URN); 10.3389/frai.2023.1098982 (DOI); 000928959000001 (); 36762255 (PubMedID); 2-s2.0-85147654896 (Scopus ID)
Funder
EU, Horizon 2020, 731593
Note

CC BY 4.0

Received 15 November 2022, Accepted 05 January 2023, Published 25 January 2023

This article was submitted to Machine Learning and Artificial Intelligence, a section of the journal Frontiers in Artificial Intelligence

This article is part of the Research Topic Artificial Intelligence and Autonomous Systems

Correspondence: Sara Mahmoud, sara.mahmoud@his.se

Part of this work was funded under the Horizon 2020 project DREAMS4CARS, Grant No. 731593.

Available from: 2023-01-31 Created: 2023-01-31 Last updated: 2025-02-05. Bibliographically approved
Lagerstedt, E. & Thill, S. (2023). Multiple Roles of Multimodality Among Interacting Agents. ACM Transactions on Human-Robot Interaction, 12(2), Article ID 17.
2023 (English) In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no 2, article id 17. Article in journal (Refereed), Published
Abstract [en]

The term ‘multimodality’ has come to take on several somewhat different meanings depending on the underlying theoretical paradigms and traditions, and the purpose and context of use. The term is closely related to embodiment, which in turn is also used in several different ways. In this paper, we elaborate on this connection and propose that a pragmatic and pluralistic stance is appropriate for multimodality. We further propose a distinction between first and second order effects of multimodality; what is achieved by multiple modalities in isolation and the opportunities that emerge when several modalities are entangled. This highlights questions regarding ways to cluster or interchange different modalities, for example through redundancy or degeneracy. Apart from discussing multimodality with respect to an individual agent, we further look to more distributed agents and situations where social aspects become relevant.

In robotics, understanding the various uses and interpretations of these terms can prevent miscommunication when designing robots, as well as increase awareness of the underlying theoretical concepts. Given the complexity of the different ways in which multimodality is relevant in social robotics, this can provide the basis for negotiating appropriate meanings of the term on a case-by-case basis.

Place, publisher, year, edition, pages
ACM Digital Library, 2023
Keywords
Multimodality, Embodiment, Robotics, Sensors
National Category
Philosophy; Human Aspects of ICT; Robotics and automation
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-21986 (URN); 10.1145/3549955 (DOI); 001020329300004 (); 2-s2.0-85164237637 (Scopus ID)
Note

CC BY 4.0

Available from: 2022-10-27 Created: 2022-10-27 Last updated: 2025-02-05. Bibliographically approved
Mahmoud, S., Billing, E., Svensson, H. & Thill, S. (2022). Where to from here?: On the future development of autonomous vehicles from a cognitive systems perspective. Cognitive Systems Research, 76, 63-77.
2022 (English) In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 76, p. 63-77. Article in journal (Refereed), Published
Abstract [en]

Self-driving cars not only solve the problem of navigating safely from location A to location B; they also have to deal with an abundance of (sometimes unpredictable) factors, such as traffic rules, weather conditions, and interactions with humans. Over the last decades, different approaches have been proposed to design intelligent driving systems for self-driving cars that can deal with an uncontrolled environment. Some of them are derived from computationalist paradigms, formulating mathematical models that define the driving agent, while other approaches take inspiration from biological cognition. However, despite the extensive work in the field of self-driving cars, many open questions remain. Here, we discuss the different approaches for implementing driving systems for self-driving cars, as well as the computational paradigms from which they originate. In doing so, we highlight two key messages: First, further progress in the field might depend on adopting new paradigms as opposed to pushing technical innovations in those currently used. Specifically, we discuss how paradigms from cognitive systems research can be a source of inspiration for further development in modeling driving systems, highlighting emergent approaches as a possible starting point. Second, self-driving cars can themselves be considered cognitive systems in a meaningful sense, and are therefore a relevant, yet underutilised resource in the study of cognitive mechanisms. Overall, we argue for a stronger synergy between the fields of cognitive systems and self-driving vehicles.

Place, publisher, year, edition, pages
Elsevier, 2022
National Category
Robotics and automation; Computer Systems; Other Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-21894 (URN); 10.1016/j.cogsys.2022.09.005 (DOI); 000883846400001 (); 2-s2.0-85140001741 (Scopus ID)
Note

CC BY 4.0

Available online 1 October 2022

Corresponding author: E-mail address: sara.mahmoud@his.se (S. Mahmoud).

Available from: 2022-10-03 Created: 2022-10-03 Last updated: 2025-02-05. Bibliographically approved
Hemeren, P., Veto, P., Thill, S., Cai, L. & Sun, J. (2021). Kinematic-based classification of social gestures and grasping by humans and machine learning techniques. Frontiers in Robotics and AI, 8(308), 1-17, Article ID 699505.
2021 (English) In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, no 308, p. 1-17, article id 699505. Article in journal (Refereed), Published
Abstract [en]

The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. This ability to classify human movement into meaningful gestures or segments also plays a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest, and Support Vector Machine) is assessed by using human classification data as a reference for evaluating the classification performance of the machine learning techniques for thirty hand/arm gestures. The gestures are rated according to the extent of grasping motion in one task, and according to the extent to which the same gestures are perceived as social in another task. The results indicate that humans clearly rate differently according to the two different tasks. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as communicative point-light actions.
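For readers who want to reproduce the comparison setup, the sketch below evaluates three of the four classifier families on synthetic stand-ins for kinematic feature vectors. The feature dimensions, labels, and data are placeholders, not the study's features, and the Locality-Sensitive Hashing Forest is omitted because current scikit-learn no longer ships an implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))    # stand-in kinematic features (e.g. wrist velocity, grip aperture)
y = rng.integers(0, 2, size=300)  # placeholder labels: 1 = grasping-like, 0 = social gesture

for clf in (KNeighborsClassifier(n_neighbors=5),
            RandomForestClassifier(n_estimators=100, random_state=0),
            SVC(kernel="rbf")):
    score = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold cross-validated accuracy
    print(f"{type(clf).__name__}: {score:.2f}")
```

On real kinematic data, the per-classifier scores would then be compared against the human classification reference described in the abstract.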

Place, publisher, year, edition, pages
Frontiers Media S.A., 2021
Keywords
gesture recognition, social gestures, machine learning, Biological motion, kinematics, social signal processing
National Category
Human Computer Interaction; Robotics and automation
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-20560 (URN); 10.3389/frobt.2021.699505 (DOI); 000716638700001 (); 34746242 (PubMedID); 2-s2.0-85118674941 (Scopus ID)
Note

CC BY 4.0

Correspondence: Dr. Paul Hemeren, University of Skövde, Skövde, Sweden, paul.hemeren@his.se

This article is part of the Research Topic Affective Shared Perception

published: 15 October 2021

Available from: 2021-09-13 Created: 2021-09-13 Last updated: 2025-02-05
Windridge, D., Svensson, H. & Thill, S. (2021). On the utility of dreaming: A general model for how learning in artificial agents can benefit from data hallucination. Adaptive Behavior, 29(3), 267-280, Article ID UNSP 1059712319896489.
2021 (English) In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 29, no 3, p. 267-280, article id UNSP 1059712319896489. Article in journal (Refereed), Published
Abstract [en]

We consider the benefits of dream mechanisms - that is, the ability to simulate new experiences based on past ones - in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize "dreaming" as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data. We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism. We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error even when inference is incomplete.
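To make the operationalisation concrete, here is a minimal sketch of such a data-hallucination loop under strong simplifying assumptions: a one-dimensional environment, a linear one-step dynamics model, and an arbitrary mixing ratio between real and dreamed transitions. None of these choices come from the paper; they only illustrate the general mechanism of an agent training on data generated by its own model of the environment.

```python
import numpy as np

rng = np.random.default_rng(1)

def real_step(s, a):
    """Stand-in for the true environment dynamics (unknown to the agent)."""
    return 0.9 * s + 0.5 * a + rng.normal(scale=0.05)

# 1. Collect a small batch of real experience: (state, action, next_state).
real = [(s, a, real_step(s, a))
        for s, a in zip(rng.normal(size=50), rng.choice([-1.0, 1.0], 50))]

# 2. Fit a one-step dynamics model (least squares on [s, a] -> s').
X = np.array([[s, a] for s, a, _ in real])
y = np.array([s2 for _, _, s2 in real])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# 3. "Dream": roll the learned model forward to hallucinate new transitions,
#    without ever calling the real environment.
dreamed, s = [], 0.0
for _ in range(50):
    a = rng.choice([-1.0, 1.0])
    s_next = w @ np.array([s, a])
    dreamed.append((s, a, s_next))
    s = s_next

# 4. Train on a mixture. The paper's caution applies here: if dreamed data
#    dominate, model error is reinforced rather than corrected.
training_set = real + dreamed[: len(real) // 2]
print(f"{len(real)} real + {len(training_set) - len(real)} dreamed transitions")
```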

Place, publisher, year, edition, pages
Sage Publications, 2021
Keywords
Artificial dream mechanisms, data simulation, machine learning, reinforcement learning
National Category
Computer Sciences
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-18160 (URN); 10.1177/1059712319896489 (DOI); 000506780000001 (); 2-s2.0-85077601986 (Scopus ID)
Projects
EC H2020 research project Dreams4Cars (no. 731593)
Note

First Published January 8, 2020

CC BY-NC 4.0

Available from: 2020-01-23 Created: 2020-01-23 Last updated: 2022-12-16. Bibliographically approved
Lagerstedt, E. & Thill, S. (2020). Benchmarks for evaluating human-robot interaction: lessons learned from human-animal interactions. In: Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Robots with Heart, Mind, and Soul. Paper presented at 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 31 - September 4, 2020 Virtual Conference (pp. 137-143). IEEE
2020 (English) In: Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Robots with Heart, Mind, and Soul, IEEE, 2020, p. 137-143. Conference paper, Published paper (Refereed)
Abstract [en]

Human-robot interaction (HRI) is fundamentally concerned with studying the interaction between humans and robots. While it is still a relatively young field, it can draw inspiration from other disciplines studying human interaction with other types of agents. Often, such inspiration is sought from the study of human-computer interaction (HCI) and the social sciences studying human-human interaction (HHI). More rarely, the field also turns to human-animal interaction (HAI).

In this paper, we identify two distinct underlying motivations for making such comparisons: to form a target to recreate, or to obtain a benchmark (or baseline) for evaluation. We further highlight relevant (existing) overlap between HRI and HAI, and identify specific themes that are of particular interest for further trans-disciplinary exploration. At the same time, since robots and animals are clearly not the same, we also discuss important differences between HRI and HAI, their complementarity notwithstanding. The overall purpose of this discussion is thus to create an awareness of the potential mutual benefit between the two disciplines and to describe opportunities that exist for future work, both in terms of new domains to explore and existing results to learn from.

Place, publisher, year, edition, pages
IEEE, 2020
Series
IEEE International Symposium on Robot and Human Interactive Communication proceedings, ISSN 1944-9445, E-ISSN 1944-9437 ; 29
National Category
Interaction Technologies; Robotics and automation
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-19188 (URN); 10.1109/RO-MAN47096.2020.9223347 (DOI); 000598571700021 (); 2-s2.0-85095758350 (Scopus ID); 978-1-7281-6076-4 (ISBN); 978-1-7281-6075-7 (ISBN)
Conference
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 31 - September 4, 2020 Virtual Conference
Note

IEEE International Workshop on Robot and Human Communication (ROMAN)

Available from: 2020-10-16 Created: 2020-10-16 Last updated: 2025-02-05. Bibliographically approved
Bartlett, M. E., Costescu, C., Baxter, P. & Thill, S. (2020). Requirements for Robotic Interpretation of Social Signals “in the Wild”: Insights from Diagnostic Criteria of Autism Spectrum Disorder. Information, 11(2), Article ID 81.
2020 (English) In: Information, E-ISSN 2078-2489, Vol. 11, no 2, article id 81. Article in journal (Refereed), Published
Abstract [en]

The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move “into the wild”. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour and show that the current state of the art is unable to characterise such behaviour.

Place, publisher, year, edition, pages
MDPI, 2020
Keywords
autism spectrum disorder, diagnosis, technology, behaviour
National Category
Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-18184 (URN); 10.3390/info11020081 (DOI); 000519542400049 (); 2-s2.0-85081118497 (Scopus ID)
Note

This article belongs to the Special Issue Advances in Social Robots

Available from: 2020-02-03 Created: 2020-02-03 Last updated: 2020-04-22. Bibliographically approved
Billing, E., Belpaeme, T., Cai, H., Cao, H.-L., Ciocan, A., Costescu, C., . . . Ziemke, T. (2020). The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy. PLOS ONE, 15(8), Article ID e0236939.
2020 (English) In: PLOS ONE, E-ISSN 1932-6203, Vol. 15, no 8, article id e0236939. Article in journal (Refereed), Published
Abstract [en]

We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information about children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data with the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.
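For readers who want to work with such a release, the sketch below shows one way a per-session recording of this kind could be loaded and summarised. The file path, layout, and field names ("skeleton", "eye_gaze", "ados") are hypothetical assumptions for illustration; consult the dataset's published documentation for the actual schema.

```python
import json
from pathlib import Path

def load_session(path: Path) -> dict:
    """Load one session file and summarise the signals it contains."""
    with path.open() as f:
        session = json.load(f)
    return {
        "frames": len(session.get("skeleton", {}).get("head", [])),  # 3D head track length
        "has_gaze": "eye_gaze" in session,                           # eye-gaze variables present?
        "ados": session.get("ados", {}),                             # diagnosis metadata
    }

# Example call (hypothetical path):
# print(load_session(Path("DREAM/data/session_0001.json")))
```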

Place, publisher, year, edition, pages
Public Library of Science, 2020
National Category
Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-18958 (URN); 10.1371/journal.pone.0236939 (DOI); 000564080300032 (); 32823270 (PubMedID); 2-s2.0-85089811349 (Scopus ID)
Projects
DREAM - Development of robot-enhanced therapy for children with autism spectrum disorders.
Funder
EU, FP7, Seventh Framework Programme, 611391
Note

CC BY 4.0

Available from: 2020-08-27 Created: 2020-08-27 Last updated: 2023-03-03. Bibliographically approved
Mahmoud, S., Svensson, H. & Thill, S. (2019). Cognitively-inspired episodic imagination for self-driving vehicles. In: Towards Cognitive Vehicles: perception, learning and decision making under real-world constraints. Is bio-inspiration helpful?: Proceedings. Paper presented at TCV2019: Towards Cognitive Vehicles: perception, learning and decision making under real-world constraints. Is bio-inspiration helpful? Workshop held as part of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019). Macau, China, November 8, 2019. (pp. 28-31).
2019 (English) In: Towards Cognitive Vehicles: perception, learning and decision making under real-world constraints. Is bio-inspiration helpful?: Proceedings, 2019, p. 28-31. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

The controller of an autonomous vehicle needs the ability to learn how to act in different driving scenarios that it may face. A significant challenge is that it is difficult, dangerous, or even impossible to experience and explore various actions in situations that might be encountered in the real world. Autonomous vehicle control would therefore benefit from a mechanism that allows the safe exploration of action possibilities and their consequences, as well as the ability to learn from experience thus gained to improve driving skills. In this paper we demonstrate a methodology that allows a learning agent to create simulations of possible situations. These simulations can be chained together in a sequence that allows the progressive improvement of the agent’s performance such that the agent is able to appropriately deal with novel situations at the end of training. This methodology takes inspiration from the human ability to imagine hypothetical situations using episodic simulation; we therefore refer to this methodology as episodic imagination.

An interesting question in this respect is what effect the structuring of such a sequence of episodic imaginations has on performance. Here, we compare a random process to a structured one, and initial results indicate that a structured sequence outperforms a random one.
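The structured-versus-random comparison is easy to sketch: the same set of imagined episodes is presented either in a fixed easy-to-hard progression or in shuffled order. The episode names and the training stub below are illustrative assumptions, not the authors' setup.

```python
import random

# Imagined episodes, listed in an assumed easy-to-hard progression.
episodes = ["empty_road", "parked_cars", "crossing_pedestrian",
            "group_of_pedestrians", "occluded_pedestrian"]

def train_on(sequence):
    for episode in sequence:
        print(f"  imagining: {episode}")
        # agent.train(simulate(episode))  # hypothetical training step

print("structured sequence (easy -> hard):")
train_on(episodes)

print("random sequence:")
train_on(random.sample(episodes, k=len(episodes)))
```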

National Category
Robotics and automation
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-18175 (URN)
Conference
TCV2019: Towards Cognitive Vehicles: perception, learning and decision making under real-world constraints. Is bio-inspiration helpful? Workshop held as part of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019). Macau, China, November 8, 2019.
Funder
EU, Horizon 2020, 41365
Available from: 2020-01-28 Created: 2020-01-28 Last updated: 2025-02-09. Bibliographically approved
Projects
Vocal interaction in-and-between humans, animals, and robots (VIHAR) [2017-00542_VR]; University of Skövde
Identifiers
ORCID iD: orcid.org/0000-0003-1177-4119
