Högskolan i Skövde

his.se Publikationer (Publications)
1–50 of 69
  • 1.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Department of Information Technology, Uppsala University, Sweden.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Previous Experience Matters: An in-Person Investigation of Expectations in Human–Robot Interaction (2024). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805. Journal article (Refereed)
    Abstract [en]

    The human–robot interaction (HRI) field goes beyond the mere technical aspects of developing robots, often investigating how humans perceive robots. Human perceptions and behavior are determined, in part, by expectations. Given the impact of expectations on behavior, it is important to understand what expectations individuals bring into HRI settings and how those expectations may affect their interactions with the robot over time. For many people, social robots are not a common part of their experiences, so any expectations they have of social robots are likely shaped by other sources. As a result, individual expectations coming into HRI settings may be highly variable. Although there has been some recent interest in expectations within the field, there is an overall lack of empirical investigation into their impact on HRI, especially in-person robot interactions. To this end, a within-subject in-person study () was performed where participants were instructed to engage in open conversation with the social robot Pepper during two 2.5 min sessions. The robot was equipped with a custom dialogue system based on the GPT-3 large language model, allowing autonomous responses to verbal input. Participants' affective changes towards the robot were assessed using three questionnaires commonly used in HRI studies: NARS, RAS, and Closeness, the latter based on the IOS scale. In addition to the three standard questionnaires, a custom question was administered to capture participants' views on robot capabilities. All measures were collected three times: before the interaction with the robot, after the first interaction, and after the second interaction. Results revealed that participants largely retained the expectations they had coming into the study and, in contrast to our hypothesis, none of the measured scales moved towards a common mean. Moreover, previous experience with robots was revealed to be a major factor in how participants experienced the robot in the study. These results could be interpreted as implying that expectations of robots are largely formed before interactions with the robot, and that these expectations do not necessarily change as a result of the interaction. The results also reveal a strong connection to how expectations are studied in social psychology and human-human interaction, underscoring their relevance for HRI research.

    Download full text (pdf)
  • 2.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Rosén, Julia
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Language Models for Human-Robot Interaction (2023). In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM Digital Library, 2023, pp. 905-906. Conference paper (Refereed)
    Abstract [en]

    Recent advances in large-scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference, and the source code of this integration is shared with the hope that it will serve the community in designing and evaluating new dialogue systems for robots.

    Download full text (pdf)
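    The released integration itself is not reproduced here, but its core idea, turning a completion-style text API into a spoken dialogue loop, can be sketched as follows. All names are hypothetical, and the model call is injected as a plain callable rather than tied to any specific SDK:

```python
def build_prompt(history, persona="You are Pepper, a friendly social robot."):
    """Flatten the running dialogue into one text prompt for a
    completion-style language model."""
    lines = [persona]
    for speaker, utterance in history:
        lines.append(f"{speaker}: {utterance}")
    lines.append("Robot:")  # cue the model to continue as the robot
    return "\n".join(lines)

def dialogue_turn(history, user_utterance, complete):
    """One verbal exchange: record the user's words, query the model,
    and append the robot's reply. `complete` is any callable mapping a
    prompt string to the model's text continuation."""
    history = history + [("Human", user_utterance)]
    reply = complete(build_prompt(history)).strip()
    return history + [("Robot", reply)], reply
```

    In a robot deployment, `complete` would wrap the language-model API call and the returned reply would be passed to the robot's text-to-speech, closing the loop from heard speech to spoken answer.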
  • 3.
    Mahmoud, Sara
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Svensson, Henrik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Thill, Serge
    Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands.
    How to train a self-driving vehicle: On the added value (or lack thereof) of curriculum learning and replay buffers (2023). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1098982. Journal article (Refereed)
    Abstract [en]

    Learning from only real-world collected data can be unrealistic and time consuming in many scenarios. One alternative is to use synthetic data as learning environments to learn rare situations, and replay buffers to speed up learning. In this work, we examine how the construction of the environment affects the training of a reinforcement learning agent through auto-generated environment mechanisms, taking the autonomous vehicle as an application. We compare the effects of two approaches to generating training data for artificial cognitive agents. We consider the added value of curriculum learning, just as in human learning, as a way to structure novel training data that the agent has not seen before, as well as that of using a replay buffer to train further on data the agent has seen before. In other words, the focus of this paper is on characteristics of the training data rather than on learning algorithms. We therefore use two tasks that are commonly trained early on in autonomous vehicle research: lane keeping and pedestrian avoidance. Our main results show that curriculum learning indeed offers an additional benefit over a vanilla reinforcement learning approach (using Deep-Q Learning), but that the replay buffer actually has a detrimental effect in most (but not all) combinations of data generation approaches considered here. The benefit of curriculum learning does depend on the existence of a well-defined difficulty metric with which the various training scenarios can be ordered. In the lane-keeping task, we define it as a function of the curvature of the road: the steeper and more frequent the curves, the more difficult the scenario. Defining such a difficulty metric in other scenarios is not always trivial. In general, the results of this paper emphasize both the importance of considering data characteristics, such as curriculum learning, and the importance of defining an appropriate difficulty metric for the task.

    Download full text (pdf)
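    The paper's central requirement, a difficulty metric that orders training scenarios from easy to hard, can be illustrated with a minimal sketch. The curvature-based metric and the stage split below are assumptions for illustration, not the authors' implementation:

```python
def road_difficulty(curvatures):
    """Hypothetical lane-keeping difficulty: steeper and more frequent
    curves yield a higher score."""
    return sum(abs(c) for c in curvatures)

def build_curriculum(scenarios, n_stages=3):
    """Order scenarios easy-to-hard by the difficulty metric and split
    them into consecutive training stages for curriculum learning."""
    ordered = sorted(scenarios, key=lambda s: road_difficulty(s["curvatures"]))
    stage_size = -(-len(ordered) // n_stages)  # ceiling division
    return [ordered[i:i + stage_size] for i in range(0, len(ordered), stage_size)]
```

    An agent would then be trained on stage 1 until some competence criterion is met before stage 2 is introduced, which is the structuring of novel data the abstract contrasts with replaying already-seen data.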
  • 4.
    Nair, Vipul
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Hemeren, Paul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Vignolo, Alessia
    CONTACT Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Noceti, Nicoletta
    MaLGa Center - DIBRIS, Università di Genova, Genova, Italy.
    Nicora, Elena
    MaLGa Center - DIBRIS, Università di Genova, Genova, Italy.
    Sciutti, Alessandra
    CONTACT Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Rea, Francesco
    RBCS Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Bhatt, Mehul
    School of Science and Technology, Örebro University, Sweden.
    Odone, Francesca
    MaLGa Center - DIBRIS, Università di Genova, Genova, Italy.
    Sandini, Giulio
    RBCS Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Kinematic primitives in action similarity judgments: A human-centered computational model (2023). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 15, no. 4, pp. 1981-1992. Journal article (Refereed)
    Abstract [en]

    This paper investigates the role that kinematic features play in human action similarity judgments. The results of three experiments with human participants are compared with a computational model that solves the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. Specifically, most of the given actions could be judged for similarity based on very limited information from a single feature domain (velocity or spatial). However, both velocity and spatial features were necessary to reach human-level performance on the evaluated actions. Results from an action identification task further indicated that participants clearly relied on kinematic information rather than on action semantics. Overall, both the model and human participants are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.

    Download full text (pdf)
  • 5.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Department of Information Technology, Uppsala University, Sweden.
    Applying the Social Robot Expectation Gap Evaluation Framework (2023). In: Human-Computer Interaction: Thematic Area, HCI 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, July 23–28, 2023, Proceedings, Part III / [ed] Masaaki Kurosu; Ayako Hashizume, Cham: Springer, 2023, pp. 169-188. Conference paper (Refereed)
    Abstract [en]

    Expectations shape our experience with the world, including our interaction with technology. There is a mismatch between what humans expect of social robots and what they are actually capable of. Expectations are dynamic and can change over time. We have previously developed a framework for studying these expectations over time in human-robot interaction (HRI). In this work, we applied the social robot expectation gap evaluation framework in an HRI scenario from a UX evaluation perspective, by analyzing a subset of data collected from a larger experiment. The framework is based on three factors of expectation: affect, cognitive processing, and behavior and performance. Four UX goals related to a human-robot interaction scenario were evaluated. Results show that expectations change over time, with an overall improved UX in the second interaction. Moreover, even though some UX goals were partly fulfilled, there are severe issues with the conversation between the user and the robot, ranging from the quality of the interaction to the users' utterances not being recognized by the robot. This work takes the initial steps towards disentangling how expectations work and change over time in HRI. Future work includes expanding the metrics to study expectations and further validating the framework.

  • 6.
    Schreiter, Tim
    et al.
    Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Morillo-Mendez, Lucas
    Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Chadalavada, Ravi T.
    Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Magnusson, Martin
    Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Arras, Kai O.
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Lilienthal, Achim J.
    TU Munich, Germany ; Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver (2023). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2023, pp. 293-300. Conference paper (Refereed)
    Abstract [en]

    Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an “Anthropomorphic Robotic Mock Driver” (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.

  • 7.
    Hanson, Lars
    et al.
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling. Scania CV AB, Global Industrial Development, Södertälje, Sweden.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Brolin, Erik
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Iriondo Pascual, Aitor
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Current Trends in Research and Application of Digital Human Modeling (2022). In: Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021): Volume V: Methods & Approaches / [ed] Nancy L. Black; W. Patrick Neumann; Ian Noy, Cham: Springer, 2022, pp. 358-366. Conference paper (Refereed)
    Abstract [en]

    The paper reports an investigation conducted during the DHM2020 Symposium regarding current trends in research and application of DHM in academia, software development, and industry. The results show that virtual reality (VR), augmented reality (AR), and digital twin are major current trends. Furthermore, results show that human diversity is considered in DHM using established methods. Results also show a shift from the assessment of static postures to assessment of sequences of actions, combined with a focus mainly on human well-being and only partly on system performance. Motion capture and motion algorithms are alternative technologies introduced to facilitate and improve DHM simulations. Results from the DHM simulations are mainly presented through pictures or animations.

  • 8.
    Lamb, Maurice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Brundin, Malin
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Perez Luque, Estela
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Eye-Tracking Beyond Peripersonal Space in Virtual Reality: Validation and Best Practices (2022). In: Frontiers in Virtual Reality, E-ISSN 2673-4192, Vol. 3, article id 864653. Journal article (Refereed)
    Abstract [en]

    Recent developments in commercial virtual reality (VR) hardware with embedded eye-tracking create tremendous opportunities for human subjects researchers. Accessible eye-tracking in VR opens new opportunities for highly controlled experimental setups in which participants can engage with novel 3D digital environments. However, because VR-embedded eye-tracking differs from the majority of historical eye-tracking research, both in allowing relatively unconstrained movement and in stimulus presentation distances, there is a need for greater discussion around methods for implementation and validation of VR-based eye-tracking tools. The aim of this paper is to provide a practical introduction to the challenges of, and methods for, 3D gaze-tracking in VR, with a focus on best practices for results validation and reporting. Specifically, we first identify and define challenges and methods for collecting and analyzing 3D eye-tracking data in VR. We then introduce a validation pilot study with a focus on factors related to 3D gaze tracking. The pilot study provides both a reference data point for a common commercial hardware/software platform (HTC Vive Pro Eye) and illustrates the proposed methods. One outcome of this study was the observation that the accuracy and precision of collected data may depend on stimulus distance, which has consequences for studies where stimuli are presented at varying distances. We also conclude that vergence is a potentially problematic basis for estimating gaze depth in VR and should be used with caution as the field moves towards a more established method for 3D eye-tracking.

    Download full text (pdf)
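    The caution about vergence-based gaze depth can be made concrete with the standard geometric estimate, which recovers fixation depth from the angle between the two eyes' gaze rays. This is a minimal sketch under the assumption of symmetric fixation straight ahead; the function names are illustrative, not from the paper:

```python
import math

def vergence_depth(ipd_m, vergence_deg):
    """Estimate fixation depth (metres) from the vergence angle (degrees)
    between the left and right gaze rays, assuming symmetric fixation."""
    half_angle = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half_angle)

def vergence_angle(ipd_m, depth_m):
    """Inverse relation: vergence angle (degrees) at a given fixation depth."""
    return 2.0 * math.degrees(math.atan((ipd_m / 2.0) / depth_m))
```

    For a typical 63 mm interpupillary distance, the vergence angle at 2 m is already under 2 degrees, so small angular tracking errors translate into large depth errors at distance, which is consistent with the paper's conclusion that vergence-based depth estimates beyond peripersonal space should be treated with caution.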
  • 9.
    Lamb, Maurice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Pérez Luque, Estela
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Understanding Eye-Tracking in Virtual Reality (2022). In: AIC 2022 Artificial Intelligence and Cognition 2022: Proceedings of the 8th International Workshop on Artificial Intelligence and Cognition, Örebro, Sweden, 15-17 June, 2022 / [ed] Hadi Banaee; Amy Loutfi; Alessandro Saffiotti; Antonio Lieto, CEUR-WS.org, 2022, pp. 180-181. Conference paper (Refereed)
    Download full text (pdf)
  • 10.
    Lamb, Maurice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Seunghun, Lee
    Texas Tech University, United States.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Yang, James
    Texas Tech University, United States.
    Forward and Backward Reaching Inverse Kinematics (FABRIK) solver for DHM: A pilot study (2022). In: Proceedings of the 7th International Digital Human Modeling Symposium (DHM 2022), August 29–30, 2022, Iowa City, Iowa, USA, University of Iowa Press, 2022, Vol. 7, pp. 1-11, article id 26. Conference paper (Refereed)
    Abstract [en]

    Posture/motion prediction is the basis of the human motion simulations that make up the core of many digital human modeling (DHM) tools and methods. With the goal of producing realistic postures and motions, a common element of posture/motion prediction methods involves applying some set of constraints to biomechanical models of humans on the positions and orientations of specified body parts. While many formulations of biomechanical constraints may produce valid predictions, they must overcome the challenges posed by the highly redundant nature of human biomechanical systems. DHM researchers and developers typically focus on optimization formulations to facilitate the identification and selection of valid solutions. While these approaches produce optimal behavior according to some, e.g., ergonomic, optimization criteria, such solutions require considerable computational power and appear vastly different from how humans produce motion. In this paper, we take a different approach and consider the Forward and Backward Reaching Inverse Kinematics (FABRIK) solver developed in the context of computer graphics for rigged character animation. This approach identifies postures quickly and efficiently, often requiring a fraction of the computation time involved in optimization-based methods. Critically, the FABRIK solver identifies posture predictions based on a lightweight heuristic approach. Specifically, the solver works in joint position space and identifies solutions according to a minimal joint displacement principle. We apply the FABRIK solver to a seven-degree-of-freedom human arm model during a reaching task from an initial to an end target location, fixing the shoulder position and providing the end effector (index fingertip) position and orientation from each frame of the motion capture data. In this preliminary study, predicted postures are compared to experimental data from a single human subject. Overall, the predicted postures were very close to the recorded data, with an average RMSE of 1.67°. Although more validation is necessary, we believe that the FABRIK solver has great potential for producing realistic human posture/motion in real time, with applications in the area of DHM.
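    The FABRIK iteration described in the abstract (work in joint position space, reach backward from the target, then forward from the fixed base, preserving segment lengths) can be sketched in 2D as follows. This is an illustrative re-implementation of the published algorithm, not the authors' DHM code:

```python
import math

def _place(anchor, toward, length):
    """Return the point at `length` from `anchor` along the direction to `toward`."""
    dist = math.dist(anchor, toward)
    r = length / dist
    return ((1 - r) * anchor[0] + r * toward[0],
            (1 - r) * anchor[1] + r * toward[1])

def fabrik(joints, target, tol=1e-4, max_iter=100):
    """Minimal 2D FABRIK solver: alternate backward passes from the target
    and forward passes from the fixed base until the end effector is within
    `tol` of the target, keeping every segment length constant."""
    joints = [tuple(p) for p in joints]
    lengths = [math.dist(joints[i], joints[i + 1]) for i in range(len(joints) - 1)]
    base = joints[0]
    if math.dist(base, target) > sum(lengths):
        # Target unreachable: stretch the chain straight toward it.
        for i in range(len(lengths)):
            joints[i + 1] = _place(joints[i], target, lengths[i])
        return joints
    for _ in range(max_iter):
        if math.dist(joints[-1], target) < tol:
            break
        joints[-1] = target  # backward pass: pin the end effector to the target
        for i in range(len(joints) - 2, -1, -1):
            joints[i] = _place(joints[i + 1], joints[i], lengths[i])
        joints[0] = base  # forward pass: re-pin the base and walk outward
        for i in range(len(joints) - 1):
            joints[i + 1] = _place(joints[i], joints[i + 1], lengths[i])
    return joints
```

    Because each pass only slides joints along existing directions, the solver avoids the matrix inversions of Jacobian-based IK, which is the source of the speed advantage the abstract highlights.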

  • 11.
    Mahmoud, Sara
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Svensson, Henrik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands.
    Where to from here?: On the future development of autonomous vehicles from a cognitive systems perspective (2022). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 76, pp. 63-77. Journal article (Refereed)
    Abstract [en]

    Self-driving cars not only solve the problem of navigating safely from location A to location B; they also have to deal with an abundance of (sometimes unpredictable) factors, such as traffic rules, weather conditions, and interactions with humans. Over the last decades, different approaches have been proposed to design intelligent driving systems for self-driving cars that can deal with an uncontrolled environment. Some of them are derived from computationalist paradigms, formulating mathematical models that define the driving agent, while other approaches take inspiration from biological cognition. However, despite the extensive work in the field of self-driving cars, many open questions remain. Here, we discuss the different approaches for implementing driving systems for self-driving cars, as well as the computational paradigms from which they originate. In doing so, we highlight two key messages: First, further progress in the field might depend on adapting new paradigms as opposed to pushing technical innovations in those currently used. Specifically, we discuss how paradigms from cognitive systems research can be a source of inspiration for further development in modeling driving systems, highlighting emergent approaches as a possible starting point. Second, self-driving cars can themselves be considered cognitive systems in a meaningful sense, and are therefore a relevant, yet underutilised resource in the study of cognitive mechanisms. Overall, we argue for a stronger synergy between the fields of cognitive systems and self-driving vehicles.

    Download full text (pdf)
  • 12.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Department of Information Technology, Uppsala University, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    The Social Robot Expectation Gap Evaluation Framework (2022). In: Human-Computer Interaction: Technological Innovation: Thematic Area, HCI 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part II / [ed] Masaaki Kurosu, Cham: Springer Nature Switzerland AG, 2022, pp. 590-610. Conference paper (Refereed)
    Abstract [en]

    Social robots are designed in ways that encourage users to interact and communicate with them in socially appropriate manners, which implies that these robots should copy many social human behaviors to succeed in social settings. However, this approach has implications for what humans subsequently expect from these robots: there is a mismatch between the expected capabilities and the actual capabilities of social robots. Expectations of social robots are thus of high relevance for the field of Human-Robot Interaction (HRI). While there is recent interest in expectations in the HRI field, there is no widely adopted or well-formulated evaluation framework that offers a deeper understanding of how these expectations affect the success of the interaction. With a basis in social psychology, user experience, and HRI, we have developed an evaluation framework for studying users' expectations of social robots. We have identified three main factors of expectations for assessing HRI: affect, cognitive processing, and behavior and performance. In our framework, we propose several data collection techniques and specific metrics for assessing these factors. The framework and its procedure enable analysis of the collected data via triangulation to identify problems and insights, which can grant us a richer understanding of the complex facets of expectations, including whether the expectations were confirmed or disconfirmed in the interaction. Ultimately, by gaining a richer understanding of how expectations affect HRI, we can narrow the social robot expectation gap and create more successful interactions between humans and social robots in society.

  • 13.
    Sandhu, Gurmit
    et al.
    FHNW University of Applied Sciences and Arts Northwestern Switzerland, Muttenz, Switzerland.
    Kilburg, Anne
    Kilburg Dialogue, Allschwil, Switzerland.
    Martin, Andreas
    FHNW University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland.
    Pande, Charuta
    FHNW University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland.
    Witschel, Hans Friedrich
    FHNW University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland.
    Laurenzi, Emanuele
    FHNW University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    A Learning Tracker using Digital Biomarkers for Autistic Preschoolers (2022). In: Proceedings of the Society 5.0 Conference 2022 - Integrating Digital World and Real World to Resolve Challenges in Business and Society / [ed] Knut Hinkelmann; Aurona Gerber, EasyChair, 2022, pp. 219-230. Conference paper (Refereed)
    Abstract [en]

    Preschool children, when diagnosed with Autism Spectrum Disorder (ASD), often experience a long and painful journey on their way to self-advocacy. Access to standard of care is poor, with long waiting times and the feeling of stigmatization in many social settings. Early interventions in ASD have been found to deliver promising results, but have a high cost for all stakeholders. Some recent studies have suggested that digital biomarkers (e.g., eye gaze), tracked using affordable wearable devices such as smartphones or tablets, could play a role in identifying children with special needs. In this paper, we discuss the possibility of supporting neurodiverse children with technologies based on digital biomarkers which can help to a) monitor the performance of children diagnosed with ASD and b) predict those who would benefit most from early interventions. We describe an ongoing feasibility study that uses the “DREAM dataset”, stemming from a clinical study with 61 pre-school children diagnosed with ASD, to identify digital biomarkers informative for the child’s progression on tasks such as imitation of gestures. We describe our vision of a tool that will use these prediction models and that ASD pre-schoolers could use to train certain social skills at home. Our discussion includes the settings in which this usage could be embedded.

    Ladda ner fulltext (pdf)
    fulltext
  • 14.
    Schreiter, Tim
    et al.
    Mobile Robotics and Olfaction Lab, Örebro University, Sweden.
    Morillo-Mendez, Lucas
    Machine Perception and Interaction Lab, Örebro University, Sweden.
    Chadalavada, Ravi T.
    Mobile Robotics and Olfaction Lab, Örebro University, Sweden.
    Rudenko, Andrey
    Robert Bosch GmbH, Corporate Research, Stuttgart, Germany.
    Billing, Erik Alexander
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lilienthal, Achim J.
    Mobile Robotics and Olfaction Lab, Örebro University, Sweden.
    The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction2022Ingår i: Trust, Acceptance and Social Cues in Human-Robot Interaction - SCRITA Workshop: IEEE International Conference on Robot & Human Interactive Communication 29 August 2022, Naples (Italy) / [ed] Alessandra Rossi; Patrick Holthaus; Silvia Moros; Gabriella Lakatos, IEEE, 2022, artikel-id arXiv:2208.11090Konferensbidrag (Refereegranskat)
    Abstract [en]

    Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, the interaction between humans and robots (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are usually not anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic mock driver (ARMoD) robot simulates driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on the top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision with it. There was a significant increase in the reported trust scores in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one we present here.

    Ladda ner fulltext (pdf)
    fulltext
  • 15.
    Olofsson, Jonas
    et al.
    Psykologiska institutionen, Stockholms Universitet.
    Gulz, Agneta
    Filosofiska institutionen, Lunds Universitet ; Linköpings universitet.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Falck, Andreas
    Institutionen för psykologi, Lunds Universitet.
    Holm, Linus
    Institutionen för psykologi, Umeå Universitet.
    Låt kognitionsvetenskap stärka lärarutbildningen!2021Ingår i: Curie, ISSN 2001-3426, nr 2021-02-09Artikel i tidskrift (Övrig (populärvetenskap, debatt, mm))
  • 16.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Reporting of Ethical Conduct in Human-Robot Interaction Research2021Ingår i: Advances in Human Factors in Robots, Unmanned Systems and Cybersecurity: Proceedings of the AHFE 2021 Virtual Conferences on Human Factors in Robots, Drones and Unmanned Systems, and Human Factors in Cybersecurity, July 25-29, 2021, USA / [ed] Matteo Zallio; Carlos Raymundo Ibañez; Jesus Hechavarria Hernandez, Cham: Springer, 2021, s. 87-94Konferensbidrag (Refereegranskat)
    Abstract [en]

    The field of Human-Robot Interaction (HRI) is progressively maturing into a distinct discipline with its own research practices and traditions. Aiming to support this development, we analyzed how ethical conduct was reported and discussed in HRI research involving human participants. A literature study of 73 papers from three major HRI publication outlets was performed. The analysis considered how often the following five principles of ethical conduct were reported: ethical board approval, informed consent, data protection and privacy, deception, and debriefing. These five principles were selected as they belong to all major and relevant ethical guidelines for the HRI field. The results show that overall, ethical conduct is rarely reported, with four out of five principles mentioned in less than one third of all papers. The most frequently mentioned aspect was informed consent, which was reported in 49% of the articles. In this work, we aim to stimulate increased acknowledgment and discussion of ethical conduct reporting within the HRI field.

  • 17.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Ethical Challenges in the Human-Robot Interaction Field2021Ingår i: ACM/IEEE International Conference on Human-Robot Interaction: The Road to a successful HRI: AI, Trust and ethicS - TRAITS Workshop / [ed] Alessandra Rossi ; Anouk van Maris ; Antonio Andriella ; Silvia Rossi, ACM Digital Library, 2021Konferensbidrag (Refereegranskat)
    Ladda ner fulltext (pdf)
    fulltext
  • 18.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    The DREAM Dataset: Behavioural data from robot enhanced therapies for children with autism spectrum disorder2020Dataset
    Abstract [en]

    This database comprises behavioral data from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale study of robot-supported autism therapy. The database covers over 3000 sessions from more than 300 hours of therapy. Half of the children interacted with the social robot NAO, supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the same standard protocol for cognitive behavioral therapy, Applied Behavior Analysis (ABA). Each session was recorded with three RGB cameras and two RGBD cameras (Kinect), which were analyzed with image-processing techniques to identify the child’s behavior during therapy. This public version of the database contains no recorded video material or other personal data; instead, it comprises anonymized data describing the child’s body motion, head position and orientation, and eye gaze, all specified in a joint coordinate system. Metadata comprising the child’s age, gender, and autism diagnosis (ADOS) are also included.

  • 19.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Bampouni, Elpida
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Automatic Selection of Viewpoint for Digital Human Modelling2020Ingår i: DHM2020: Proceedings of the 6th International Digital Human Modeling Symposium, August 31 – September 2, 2020 / [ed] Lars Hanson, Dan Högberg, Erik Brolin, Amsterdam: IOS Press, 2020, s. 61-70Konferensbidrag (Refereegranskat)
    Abstract [en]

    During concept design of new vehicles, work places, and other complex artifacts, it is critical to assess positioning of instruments and regulators from the perspective of the end user. One common way to do these kinds of assessments during early product development is by the use of Digital Human Modelling (DHM). DHM tools are able to produce detailed simulations, including vision. Many of these tools comprise evaluations of direct vision and some tools are also able to assess other perceptual features. However, to our knowledge, all DHM tools available today require manual selection of manikin viewpoint. This can be both cumbersome and difficult, and requires that the DHM user possesses detailed knowledge about the visual behavior of the workers in the task being modelled. In the present study, we take the first steps towards automatic selection of viewpoint through a computational model of eye-hand coordination. We here report descriptive statistics on visual behavior in a pick-and-place task executed in virtual reality. During reaching actions, results reveal a very high degree of eye-gaze towards the target object. Participants look at the target object at least once during virtually every trial, even during a repetitive action. The object remains in focus during large proportions of the reaching action, even when participants are forced to move in order to reach the object. These results are in line with previous research on eye-hand coordination and suggest that DHM tools should, by default, set the viewpoint to match the manikin’s grasping location.

    Ladda ner fulltext (pdf)
    fulltext
  • 20.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Belpaeme, Tony
    University of Plymouth, United Kingdom / IDLab - imec, Ghent University, Belgium.
    Cai, Haibin
    University of Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Ciocan, Anamaria
    Universitatea Babeş-Bolyai, Romania.
    Costescu, Cristina
    Universitatea Babeş-Bolyai, Romania.
    David, Daniel
    Universitatea Babeş-Bolyai, Romania.
    Homewood, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Hernandez Garcia, Daniel
    University of Plymouth, United Kingdom.
    Gomez Esteban, Pablo
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Liu, Honghai
    University of Portsmouth, United Kingdom.
    Nair, Vipul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Matu, Silviu
    Universitatea Babeş-Bolyai, Romania.
    Mazel, Alexandre
    SoftBank Robotics, Paris, France.
    Selescu, Mihaela
    Universitatea Babeş-Bolyai, Romania.
    Senft, Emmanuel
    University of Plymouth, United Kingdom.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, The Netherlands.
    Vanderborght, Bram
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Vernon, David
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Linköping University, Sweden.
    The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy2020Ingår i: PLOS ONE, E-ISSN 1932-6203, Vol. 15, nr 8, artikel-id e0236939Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information about children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. Metadata comprising participant age, gender, and autism diagnosis (ADOS) variables are also included. We release this data with the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.

    Ladda ner fulltext (pdf)
    fulltext
  • 21.
    Lindblom, Jessica
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Evaluating the User Experience of Human-Robot Interaction2020Ingår i: Human-Robot Interaction: Evaluation Methods and Their Standardization / [ed] Céline Jost, Brigitte Le Pévédic, Tony Belpaeme, Cindy Bethel, Dimitrios Chrysostomou, Nigel Crook, Marine Grandgeorge, Nicole Mirnig, Cham: Springer, 2020, s. 231-256Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    For social robots, like in all other digitally interactive systems, products, services, and devices, positive user experience (UX) is necessary in order to achieve the intended benefits and societal relevance of human–robot interaction (HRI). The experiences that humans have when interacting with robots have the power to enable, or disable, the robots’ acceptance rate and utilization in society. For a commercial robot product, it is the achieved UX in the natural context when fulfilling its intended purpose that will determine its success. The increased number of socially interactive robots in human environments and their level of participation in everyday activities highlight the importance of systematically evaluating the quality of the interaction from a human-centered perspective. There is also a need for robot developers to acquire knowledge about proper UX evaluation, both in theory and in practice. In this chapter we are asking: What is UX evaluation? Why should UX evaluation be performed? When is it appropriate to conduct a UX evaluation? How could a UX evaluation be carried out? Where could UX evaluation take place? Who should perform the UX evaluation and for whom? The aim is to briefly answer these questions in the context of doing UX evaluation in HRI, highlighting evaluation processes and methods that have methodological validity and reliability as well as practical applicability. We argue that each specific HRI project needs to take the UX perspective into account during the whole development process. We suggest that a more diverse use of methods in HRI will benefit the field, and the future users of social robots will benefit even more.

  • 22.
    Nair, Vipul
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Hemeren, Paul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Vignolo, Alessia
    CONTACT Unit, Istituto Italiano di Tecnologia, Italy.
    Noceti, Nicoletta
    MaLGa Center - DIBRIS, Universita di Genova, Italy.
    Nicora, Elena
    MaLGa Center - DIBRIS, Universita di Genova, Italy.
    Sciutti, Alessandra
    CONTACT Unit, Istituto Italiano di Tecnologia, Italy.
    Rea, Francesco
    RBCS Unit, Istituto Italiano di Tecnologia, Italy.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Odone, Francesca
    MaLGa Center - DIBRIS, Universita di Genova, Italy.
    Sandini, Giulio
    RBCS Unit, Istituto Italiano di Tecnologia, Italy.
    Action similarity judgment based on kinematic primitives2020Ingår i: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), IEEE, 2020Konferensbidrag (Refereegranskat)
    Abstract [en]

    Understanding which features humans rely on in visually recognizing action similarity is a crucial step towards a clearer picture of human action perception from a learning and developmental perspective. In the present work, we investigate to which extent a computational model based on kinematics can determine action similarity and how its performance relates to human similarity judgments of the same actions. To this aim, twelve participants perform an action similarity task, and their performance is compared to that of a computational model solving the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experiment results show that both the model and human participants can reliably identify whether two actions are the same or not. However, the model produces more false hits and has a greater selection bias than human participants. A possible reason for this is the particular sensitivity of the model towards kinematic primitives of the presented actions. In a second experiment, human participants’ performance on an action identification task indicated that they relied solely on kinematic information rather than on action semantics. The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.

  • 23.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Digital Human Modeling Technology in Virtual Reality: Studying Aspects of Users’ Experiences2020Ingår i: DHM2020: Proceedings of the 6th International Digital Human Modeling Symposium, August 31 – September 2, 2020 / [ed] Lars Hanson, Dan Högberg, Erik Brolin, Amsterdam: IOS Press, 2020, s. 330-341Konferensbidrag (Refereegranskat)
    Abstract [en]

    Virtual Reality (VR) could be used to develop more representative Digital Human Modeling (DHM) simulations of work tasks for future Operators 4.0. Although VR allows users to experience the manikin as rather realistic in itself, there are still several aspects that need to be considered when shifting from tasks performed in the real world into a virtual one, adding cognitive and user experience (UX) aspects. Currently, there is limited research on UX in VR. The overall aim was to gain deeper insights into how users’ experiences can ultimately help us to improve how VR can aid in DHM. A pilot study examined how users perceived and experienced actions performed by a humanoid hand (manikin) in VR. Users’ perceived presence indicates how well they are immersed in the virtual environment, and Proactive eye gaze (PEG) was used to measure the realism of the virtual hand. The obtained findings indicate some potentially surprising outcomes, and some tentative explanations for these are discussed. The lessons learned from this pilot will be used as input to a future larger study that continues to highlight how UX aspects can be useful in a DHM context.

    Ladda ner fulltext (pdf)
    fulltext
  • 24.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Hanson, Lars
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Digital Human Modelling in Action2019Ingår i: Proceedings of the 15th SweCog Conference / [ed] Linus Holm; Erik Billing, Skövde: University of Skövde , 2019, s. 25-28Konferensbidrag (Refereegranskat)
    Ladda ner fulltext (pdf)
    fulltext
  • 25.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Rosén, Julia
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Expectations of robot technology in welfare2019Konferensbidrag (Refereegranskat)
    Abstract [en]

    We report findings from a survey on expectations of robot technology in welfare within the coming 20 years. 34 assistant nurses answered a questionnaire on which tasks from their daily work they believe robots can perform, already today or in the near future. Additionally, the Negative attitudes toward robots scale (NARS) was used to estimate participants' attitudes towards robots in general. Results reveal high expectations of robots, where at least half of the participants answered Already today or Within 10 years to 9 out of 10 investigated tasks. Participants were also fairly positive towards robots, reporting low scores on NARS. The obtained results can be interpreted as a serious over-estimation of what robots will be able to do in the near future, but also reveal large variation in participants' interpretations of what robots are. We identify challenges in communicating both excitement towards a technology in rapid development and realistic limitations of this technology.

    Ladda ner fulltext (pdf)
    fulltext
  • 26.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Sciutti, Alessandra
    Italian Institute of Technology, Genova, Italy.
    Sandini, Giulio
    Italian Institute of Technology, Genova, Italy.
    Proactive eye-gaze in human-robot interaction2019Konferensbidrag (Refereegranskat)
    Ladda ner fulltext (pdf)
    fulltext
  • 27.
    Cai, Haibin
    et al.
    School of Computing, University of Portsmouth, United Kingdom.
    Fang, Yinfeng
    School of Computing, University of Portsmouth, United Kingdom.
    Ju, Zhaojie
    School of Computing, University of Portsmouth, United Kingdom.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linköping University, Sweden.
    Thill, Serge
    University of Plymouth, United Kingdom.
    Belpaeme, Tony
    University of Plymouth, United Kingdom.
    Vanderborght, Bram
    Vrije Universiteit Brussel and Flanders Make, Belgium.
    Vernon, David
    Carnegie Mellon University Africa, Rwanda.
    Richardson, Kathleen
    De Montfort University, United Kingdom.
    Liu, Honghai
    School of Computing, University of Portsmouth, United Kingdom.
    Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study2019Ingår i: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 19, nr 4, s. 1508-1518Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    It is evident that recently reported robot-assisted therapy systems for assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, further assessing the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system has a capability of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and enhance the quality of the digital description of children with ASD. The research outcomes pave the way to a feasible machine-assisted system for their behaviour assessment.

  • 28.
    Cao, Hoang-Long
    et al.
    Vrije Universiteit Brussel, Belgium.
    Esteban, Pablo G.
    Mechanical Engineering, Vrije Universiteit Brusel, Brussels, Belgium.
    Bartlett, Madeleine
    Plymouth University, United Kingdom.
    Baxter, Paul Edward
    School of Computer Science, University of Lincoln, United Kingdom.
    Belpaeme, Tony
    Faculty of Science and Environment, Plymouth University, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Cai, Haibin
    School of computing, University of Portsmouth, Southampton, United Kingdom.
    Coeckelbergh, Mark
    University of Twente, The Netherlands.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Universitatea Babes-Bolyai, Cluj Napoca, Romania.
    David, Daniel
    Babes-Bolyai University, Romania.
    De Beir, Albert
    Robotics & Multibody Mechanics Research Group, Vrije Universiteit Brussel (VUB), Bruxelles, Belgium.
    Hernandez Garcia, Daniel
    School of Computing, Electronics and Mathematics, University of Plymouth, United Kingdom.
    Kennedy, James
    Disney Research Los Angeles, Disney Research, Glendale, California United States of America.
    Liu, Honghai
    Institute of Industrial Research, University of Portsmouth, Portsmouth, United Kingdom.
    Matu, Silviu
    Babes-Bolyai University, Romania.
    Mazel, Alexandre
    Research, Aldebaran-Robotics, Le Kremlin Bicetre, France.
    Pandey, Amit Kumar
    Innovation Department, SoftBank Robotics, Paris, France.
    Richardson, Kathleen
    Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural System, Plymouth University, United Kingdom.
    Thill, Serge
    Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands.
    Van de Perre, Greet
    Applied Mechanics, Vrije Universiteit Brussel, Elsene, Belgium.
    Vanderborght, Bram
    Department of Mechanical Engineering, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    Electrical and Computer Engineering, Carnegie Mellon University Africa, Kigali, Rwanda.
    Wakanuma, Kutoma
    De Montfort University, United Kingdom.
    Yu, Hui
    Creative Technologies, University of Portsmouth, Portsmouth, United Kingdom.
    Zhou, Xiaolong
    Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Robot-Enhanced Therapy: Development and Validation of a Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy2019Ingår i: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 26, nr 2, s. 49-58Artikel i tidskrift (Refereegranskat)
  • 29.
    Hernández García, Daniel
    et al.
    University of Plymouth, United Kingdom.
    Esteban, Pablo G.
    Vrije Universiteit Brussel.
    Lee, Hee Rin
    UC San Diego, United States.
    Romeo, Marta
    University of Manchester, United Kingdom.
    Senft, Emmanuel
    University of Plymouth, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Social Robots in Therapy and Care2019Ingår i: Proceedings of the 14th ACM/IEEE International Conference on Human Robot Interaction, Daegu: IEEE conference proceedings, 2019, s. 669-670Konferensbidrag (Refereegranskat)
    Abstract [en]

    The Social Robots in Therapy workshop series aims at advancing research topics related to the use of robots in the contexts of Social Care and Robot-Assisted Therapy (RAT). Robots in social care and therapy have been a long-time promise in HRI, as they have the opportunity to improve patients' lives significantly. Multiple challenges have to be addressed for this, such as building platforms that work in proximity with patients, therapists and health-care professionals; understanding user needs; developing adaptive and autonomous robot interactions; and addressing ethical questions regarding the use of robots with a vulnerable population. The full-day workshop follows last year's edition, which centered on how social robots can improve health-care interventions, how increasing the degree of autonomy of the robots might affect therapies, and how to overcome the ethical challenges inherent to the use of robot-assisted technologies. This 2nd edition of the workshop will be focused on the importance of equipping social robots with socio-emotional intelligence and the ability to perform meaningful and personalized interactions. This workshop aims to bring together researchers and industry experts in the fields of Human-Robot Interaction, Machine Learning and Robots in Health and Social Care. It will be an opportunity for all to share and discuss ideas, strategies and findings to guide the design and development of robot-assisted systems for therapy and social care implementations that can provide personalized, natural, engaging and autonomous interactions with patients (and health-care providers).

  • 30.
    Alenljung, Beatrice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Conveying Emotions by Touch to the Nao Robot: A User Experience Perspective (2018). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, no. 4, article id 82. Journal article (Refereed)
    Abstract [en]

    Social robots are expected gradually to be used by more and more people in a wider range of settings, domestic as well as professional. As a consequence, the features and quality requirements on human–robot interaction will increase, comprising possibilities to communicate emotions, establishing a positive user experience, e.g., using touch. In this paper, the focus is on depicting how humans, as the users of robots, experience tactile emotional communication with the Nao robot, as well as identifying aspects affecting the experience and touch behavior. A qualitative investigation was conducted as part of a larger experiment. The major findings consist of 15 different aspects that vary along one or more dimensions and how those influence the four dimensions of user experience that are present in the study, as well as the different parts of touch behavior of conveying emotions.

    Download full text (pdf)
  • 31.
    Andreasson, Rebecca
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot (2018). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 10, no. 4, pp. 473-491. Journal article (Refereed)
    Abstract [en]

    Affective touch has a fundamental role in human development, social bonding, and for providing emotional support in interpersonal relationships. We present what is, to our knowledge, the first HRI study of tactile conveyance of both positive and negative emotions (affective touch) on the Nao robot, based on an experimental set-up from a study of human–human tactile communication. In the present work, participants conveyed eight emotions to a small humanoid robot via touch. We found that female participants conveyed emotions for a longer time, using more varied interaction and touching more regions on the robot’s body, compared to male participants. Several differences between emotions were found, such that emotions could be classified by the valence of the emotion conveyed by combining touch amount and duration. Overall, these results show high agreement with those reported for human–human affective tactile communication and could also have impact on the design and placement of tactile sensors on humanoid robots.

    Download full text (pdf)
  • 32.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer & Information Science, Linköping University.
    Robot-Enhanced Therapy for Children with Autism (2018). In: Proceedings of the 14th SweCog Conference / [ed] Tom Ziemke, Mattias Arvola, Nils Dahlbäck, Erik Billing, Skövde: University of Skövde, 2018, pp. 19-22. Conference paper (Refereed)
    Download full text (pdf)
  • 33.
    Fast-Berglund, Åsa
    et al.
    Chalmers University of Technology, Gothenburg, Sweden.
    Thorvald, Peter
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningscentrum för Virtuella system.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Palmquist, Adam
    Insert Coin, Gothenburg, Sweden.
    Romero, David
    Tecnologico de Monterrey, Mexico.
    Weichhart, Georg
    Profactor, Studgart, Austria.
    Conceptualizing Embodied Automation to Increase Transfer of Tacit Knowledge in the Learning Factory (2018). In: "Theory, Research and Innovation in Applications": 9th International Conference on Intelligent Systems 2018 (IS’18) / [ed] Ricardo Jardim-Gonçalves, João Pedro Mendonça, Vladimir Jotsov, Maria Marques, João Martins, Robert Bierwolf, IEEE, 2018, pp. 358-364, article id 8710482. Conference paper (Refereed)
    Abstract [en]

    This paper will discuss how cooperative agent-based systems, deployed with social skills and embodied automation features, can be used to interact with operators in order to facilitate sharing of tacit knowledge and its later conversion into explicit knowledge. The proposal is to combine social software robots (softbots) with industrial collaborative robots (co-bots) to create a digital apprentice for experienced operators in human-robot collaboration workstations. This is to address the problem within industry that experienced operators have difficulties in explaining how they perform their tasks and, later, how to turn this procedural knowledge (know-how) into instructions to be shared among other operators. By using social softbots and co-bots, as cooperative agents with embodied automation features, we think we can facilitate the ‘externalization’ of procedural knowledge in human-robot interaction(s). This is enabled by the capabilities of social cooperative agents with embodied automation features of continuously learning by looking over the shoulder of the operators, and documenting and collaborating with them in a non-intrusive way as they perform their daily tasks.

    Download full text (pdf)
  • 34.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lund, Anja
    Department of Chemistry and Chemical Engineering, Chalmers University of Technology, Gothenburg, Sweden / The Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Designing for a Wearable Affective Interface for the NAO Robot: A Study of Emotion Conveyance by Touch (2018). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, no. 1. Journal article (Refereed)
    Abstract [en]

    We here present results and analysis from a study of affective tactile communication between human and humanoid robot (the NAO robot). In the present work, participants conveyed eight emotions to the NAO via touch. In this study, we sought to understand the potential for using a wearable affective (tactile) interface, or WAffI. The aims of our study were to address the following: (i) how emotions and affective states can be conveyed (encoded) to such a humanoid robot, (ii) what are the effects of dressing the NAO in the WAffI on emotion conveyance, and (iii) what is the potential for decoding emotion and affective states. We found that subjects conveyed touch for longer duration and over more locations on the robot when the NAO was dressed with the WAffI than when it was not. Our analysis illuminates ways by which affective valence, and separate emotions, might be decoded by a humanoid robot according to the different features of touch: intensity, duration, location, and type. Finally, we discuss the types of sensors and their distribution as they may be embedded within the WAffI, which would likely benefit Human-NAO (and Human-Humanoid) interaction along the affective tactile dimension.

    Download full text (pdf)
  • 35.
    Messina Dahlberg, Giulia
    et al.
    University of Gothenburg, Sweden.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Montebelli, Alberto
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Negotiating epistemic spaces for dialogue across disciplines in higher education: The case of the Pepper experiment (2018). In: EARLI, Joint SIG10-21 Conference, Luxembourg, 2018. Conference paper (Refereed)
  • 36.
    Richardson, Kathleen
    et al.
    De Montfort University, Leicester, United Kingdom.
    Coeckelbergh, Mark
    De Montfort University, Leicester, United Kingdom.
    Wakunuma, Kutoma
    De Montfort University, Leicester, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Gómez, Pablo
    Vrije Universiteit, Brussel, Belgium.
    Vanderborght, Bram
    Vrije Universiteit, Brussel, Belgium.
    Belpaeme, Tony
    University of Plymouth, Plymouth, United Kingdom.
    Robot Enhanced Therapy for Children with Autism (DREAM): A Social Model of Autism (2018). In: IEEE technology & society magazine, ISSN 0278-0097, E-ISSN 1937-416X, Vol. 37, no. 1, pp. 30-39. Journal article (Refereed)
    Abstract [en]

    The development of social robots for children with autism has been a growth field for the past 15 years. This article reviews studies in robots and autism as a neurodevelopmental disorder that impacts social communication development, and the ways social robots could help children with autism develop social skills. Drawing on ethics research from the EU-funded Development of Robot-Enhanced Therapy for Children with Autism (DREAM) project (framework 7), this paper explores how ethics evolved and developed in this European project.

  • 37.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Richardson, Kathleen
    De Montfort University, Leicester, United Kingdom.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    The Robot Illusion: Facts and Fiction (2018). In: Proceedings of Workshop in Explainable Robotics System (HRI), 2018. Conference paper (Refereed)
    Abstract [en]

    "To researchers and technicians working with robots on a daily basis, it is most often obvious what is part of the staging and what is not, and thus it may be easy to forget that illusions like these are not explicit and that the general public may actually be deceived. Should the disclosure of the illusion be the responsibility of roboticists? Or should the assumption be that human beings, on the basis of their experiences as an audience in film, theatre, music or video gaming, are able to enjoy the experience without needing to know everything in advance about how the illusion is created? Therefore, we believe that a discussion of whether or not researchers should be more transparent in what kinds of machines they are presenting is necessary. How can researchers present interactive robots in an engaging way, without misleading the audience?"

    Download full text (pdf)
  • 38.
    Alenljung, Beatrice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Andreasson, Rebecca
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Information Technology, Visual Information & Interaction. Uppsala University, Uppsala, Sweden.
    Billing, Erik A.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    User Experience of Conveying Emotions by Touch (2017). In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, pp. 1240-1247. Conference paper (Refereed)
    Abstract [en]

    In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users’ replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and design of social and affective robotics in particular.

    Download full text (pdf)
  • 39.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    A New Look at Habits using Simulation Theory (2017). In: Proceedings of the Digitalisation for a Sustainable Society: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition, Göteborg, Sweden, 2017. Conference paper (Refereed)
    Abstract [en]

    Habits, as a form of behavior re-execution without explicit deliberation, are discussed in terms of implicit anticipation, to be contrasted with explicit anticipation and mental simulation. Two hypotheses, addressing how habits and mental simulation may be implemented in the brain and to what degree they represent two modes of brain function, are formulated. Arguments for and against the two hypotheses are discussed briefly, specifically addressing whether habits and mental simulation represent two distinct functions, or to what degree there may be intermediate forms of habit execution involving partial deliberation. A potential role of habits in memory consolidation is also hypothesized.

    Download full text (pdf)
  • 40.
    Esteban, Pablo G.
    et al.
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Baxter, Paul
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Belpaeme, Tony
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Cai, Haibin
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Coeckelbergh, Mark
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    De Beir, Albert
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Fang, Yinfeng
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ju, Zhaojie
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Kennedy, James
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Liu, Honghai
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Mazel, Alexandre
    Softbank Robotics Europe, Paris, France.
    Pandey, Amit
    Softbank Robotics Europe, Paris, France.
    Richardson, Kathleen
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Van de Perre, Greet
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vanderborght, Bram
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Yu, Hui
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder (2017). In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 8, no. 1, pp. 18-38. Journal article (Refereed)
    Abstract [en]

    Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.

    Download full text (pdf)
  • 41.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Almér, Alexander
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, Neuroscience Center Zurich, University and ETH Zurich, Zurich, Switzerland.
    Balkenius, Christian
    Cognitive Science, Lund University, Lund, Sweden.
    Affective–associative two-process theory: a neurocomputational account of partial reinforcement extinction effects (2017). In: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 111, no. 5-6, pp. 365-388. Journal article (Refereed)
    Abstract [en]

    The partial reinforcement extinction effect (PREE) is an experimentally established phenomenon: behavioural response to a given stimulus is more persistent when previously inconsistently rewarded than when consistently rewarded. This phenomenon is, however, controversial in animal/human learning theory. Contradictory findings exist regarding when the PREE occurs. One body of research has found a within-subjects PREE, while another has found a within-subjects reversed PREE (RPREE). These opposing findings constitute what is considered the most important problem of PREE for theoreticians to explain. Here, we provide a neurocomputational account of the PREE, which helps to reconcile these seemingly contradictory findings of within-subjects experimental conditions. The performance of our model demonstrates how omission expectancy, learned according to low probability reward, comes to control response choice following discontinuation of reward presentation (extinction). We find that a PREE will occur when multiple responses become controlled by omission expectation in extinction, but not when only one omission-mediated response is available. Our model exploits the affective states of reward acquisition and reward omission expectancy in order to differentially classify stimuli and differentially mediate response choice. We demonstrate that stimulus–response (retrospective) and stimulus–expectation–response (prospective) routes are required to provide a necessary and sufficient explanation of the PREE versus RPREE data and that omission representation is key for explaining the nonlinear nature of extinction data.

    Download full text (pdf)
  • 42.
    Lowe, Robert
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Göteborgs Universitet, Tillämpad IT.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training (2017). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 25, no. 1, pp. 5-23. Journal article (Refereed)
    Abstract [en]

    In this article we present a novel neural network implementation of Associative Two-Process (ATP) theory based on an Actor–Critic-like architecture. Our implementation emphasizes the affective components of differential reward magnitude and reward omission expectation and thus we model Affective-Associative Two-Process theory (Aff-ATP). ATP has been used to explain the findings of differential outcomes training (DOT) procedures, which emphasize learning differentially valuated outcomes for cueing actions previously associated with those outcomes. ATP hypothesizes the existence of a ‘prospective’ memory route through which outcome expectations can be brought to bear on decision making and can even substitute for decision making based on the ‘retrospective’ inputs of standard working memory. While DOT procedures are well recognized in the animal learning literature, they have not previously been computationally modelled. The model presented in this article helps clarify the role of ATP computationally through the capturing of empirical data based on DOT. Our Aff-ATP model illuminates the different roles that prospective and retrospective memory can have in decision making (combining inputs to action selection functions). In specific cases, the model’s prospective route allows for adaptive switching (correct action selection prior to learning) following changes in the stimulus–response–outcome contingencies.

  • 43.
    Montebelli, Alberto
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Messina Dahlberg, Giulia
    Department of Educational Research and Development, University of Borås, Sweden.
    Reframing HRI Education: A Dialogic Reformulation of HRI Education to Promote Diverse Thinking and Scientific Progress (2017). In: Journal of Human-Robot Interaction, E-ISSN 2163-0364, Vol. 6, no. 2, pp. 3-26. Journal article (Refereed)
    Abstract [en]

    Over the last few years, technological developments in semi-autonomous machines have raised awareness about the strategic importance of human-robot interaction (HRI) and its technical and social implications. At the same time, HRI still lacks an established pedagogic tradition in the coordination of its intrinsically interdisciplinary nature. This scenario presents steep and urgent challenges for HRI education. Our contribution presents a normative interdisciplinary dialogic framework for HRI education, denoted InDia wheel, aimed toward seamless and coherent integration of the variety of disciplines that contribute to HRI. Our framework deemphasizes technical mastery, reducing it to a necessary yet not sufficient condition for HRI design, thus modifying the stereotypical narration of HRI-relevant disciplines and creating favorable conditions for a more diverse participation of students. Prospectively, we argue, the design of an educational ‘space of interaction’ that focuses on a variety of voices, without giving supremacy to one over the other, will be key to successful HRI education and practice.

    Download full text (pdf)
  • 44.
    Redyuk, Sergey
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik A.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Challenges in face expression recognition from video (2017). In: SweDS 2017: The 5th Swedish Workshop on Data Science / [ed] Alexander Schliep, 2017. Conference paper (Refereed)
    Abstract [en]

    Identification of emotion from face expressions is a relatively well understood problem where state-of-the-art solutions perform almost as well as humans. However, in many practical applications, disrupting factors still make identification of face expression a very challenging problem. Within the project DREAM - Development of Robot Enhanced Therapy for Children with Autism Spectrum Disorder (ASD), we are identifying face expressions from children with ASD, during therapy. Identified face expressions are used both in the online system, to guide the behavior of the robot, and off-line, to automatically annotate video for measurements of clinical outcomes.

    This setup puts several new challenges on the face expression technology. First of all, in contrast to most open databases of face expressions comprising adult faces, we are recognizing emotions from children between the age of 4 to 7 years. Secondly, children with ASD may show emotions differently, compared to typically developed children. Thirdly, the children move freely during the intervention and, despite the use of several cameras tracking the face of the child from different angles, we rarely have a full frontal view of the face. Fourthly, and finally, the amount of native data is very limited.

    Although we have access to extensive video recorded material from therapy sessions with ASD children, potentially constituting a very valuable dataset for both training and testing of face expression implementations, this data proved to be difficult to use. A session of 10 minutes of video may comprise only a few instances of expressions, e.g. smiling. As such, although we have many hours of video in total, the data is very sparse and the number of clear face expressions is still rather small for it to be used as training data in most machine learning (ML) techniques.

    We therefore focused on the use of synthetic datasets for transfer learning, trying to overcome the challenges mentioned above. Three techniques were evaluated: (1) convolutional neural networks for image classification by analyzing separate video frames, (2) recurrent neural networks for sequence classification to capture facial dynamics, and (3) ML algorithms classifying pre-extracted facial landmarks.

    The performance of all three models was unsatisfactory. Although the proposed models were of high accuracy, approximately 98%, while classifying a test set, they performed poorly on the real-world data. This was due to the usage of a synthetic dataset which had mostly a frontal view of faces. The models, which had not seen similar examples before, failed to classify them correctly. The accuracy decreased drastically when the child rotated her head or covered a part of her face. Even if the frame clearly captured a facial expression, ML algorithms were not able to provide a stable positive classification rate. Thus, elaboration on training datasets and designing robust ML models are required. Another option is to incorporate voice and gestures of the child into the model to classify emotional state as a complex concept.
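    To make approach (3) concrete, a minimal sketch of landmark-based expression classification is given below. This is an illustrative reconstruction, not the authors' code: the landmark vectors, expression labels and the k-nearest-neighbour classifier are all hypothetical stand-ins for whatever pipeline the project actually used.

```python
import random
import math

random.seed(0)

def synth_face(expression):
    """Hypothetical stand-in for a pre-extracted landmark vector
    (e.g. 68 facial points -> 136 coordinates). A 'smile' face gets
    a crude systematic offset so the two classes are separable."""
    base = [random.gauss(0.0, 0.05) for _ in range(136)]
    if expression == "smile":
        base = [x + 0.3 for x in base]
    return base

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Classify a landmark vector by majority vote among its k nearest
    training neighbours."""
    neighbours = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Synthetic training set: 20 examples per expression class.
train = [(synth_face("smile"), "smile") for _ in range(20)] + \
        [(synth_face("neutral"), "neutral") for _ in range(20)]

print(knn_predict(train, synth_face("smile")))
print(knn_predict(train, synth_face("neutral")))
```

    The abstract's point about poor real-world transfer shows up naturally in a sketch like this: a classifier trained only on clean frontal-view vectors has no basis for handling rotated or partially occluded faces.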

    Download full text (pdf)
  • 45.
    Sun, Jiong
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Seoane, Fernando
    Swedish School of Textiles, University of Borås, Borås, Sweden / Inst. for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Dept. Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden.
    Zhou, Bo
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningscentrum för Virtuella system.
    Hemeren, Paul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Categories of touch: Classifying human touch using a soft tactile sensor (2017). Conference paper (Refereed)
    Abstract [en]

    Social touch plays an important role not only in human communication but also in human-robot interaction. We here report results from an ongoing study on affective human-robot interaction. In our previous research, touch type was shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot, and a method based on PCA and kNN is applied to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 71% for the classified touch types, with large variability between different types of touch. The results are discussed in relation to affective HRI and social robotics.
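The PCA + kNN pipeline named in the abstract can be sketched with scikit-learn. The sensor dimensions, touch-type labels, and component/neighbor counts below are illustrative assumptions, not values from the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Hypothetical stand-in for flattened pressure frames from the matrix
# sensor: 300 samples of a 16 x 16 taxel grid, three touch types.
X = rng.normal(size=(300, 16 * 16))
y = rng.integers(0, 3, size=300)
X += y[:, None] * 1.5          # give each touch type a distinct offset

# Reduce dimensionality with PCA, then classify with kNN, as in the
# pipeline described above (component count is an assumption).
Z = PCA(n_components=10).fit_transform(X)
knn = KNeighborsClassifier(n_neighbors=5).fit(Z[:250], y[:250])
acc = knn.score(Z[250:], y[250:])
```

PCA compresses the high-dimensional taxel readings into a few components before the distance-based kNN vote, which keeps nearest-neighbor distances meaningful.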

  • 46.
    Sun, Jiong
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Redyuk, Sergey
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningscentrum för Virtuella system.
    Hemeren, Paul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Tactile Interaction and Social Touch: Classifying Human Touch using a Soft Tactile Sensor (2017). In: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction, New York: Association for Computing Machinery (ACM), 2017, pp. 523-526. Conference paper (Refereed)
    Abstract [en]

    This paper presents an ongoing study on affective human-robot interaction. In our previous research, touch type was shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot, and six machine learning methods, including CNN, RNN, and C3D, are implemented to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 95% by C3D for the classified touch types, which provides stable classification results for developing social touch technology.
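What distinguishes C3D from the frame-wise methods is its 3D convolution, which slides a kernel over time as well as both spatial axes, so learned features couple spatial pressure patterns with their dynamics. A minimal NumPy sketch of that core operation (recording size and kernel size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical touch recording: 20 frames of a 16 x 16 pressure map.
touch = rng.normal(size=(20, 16, 16))
# One random 3 x 3 x 3 kernel (in a trained C3D model these are learned).
kernel = rng.normal(size=(3, 3, 3))

def conv3d_valid(volume, k):
    """'Valid' 3D convolution of a (time, height, width) volume with kernel k."""
    t, h, w = volume.shape
    kt, kh, kw = k.shape
    out = np.empty((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for m in range(out.shape[2]):
                out[i, j, m] = np.sum(volume[i:i + kt, j:j + kh, m:m + kw] * k)
    return out

features = conv3d_valid(touch, kernel)   # shape (18, 14, 14)
```

Stacking such layers with nonlinearities and pooling yields the spatiotemporal feature hierarchy that lets C3D outperform purely frame-based classifiers on touch sequences.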

  • 47.
    Zhou, Bo
    et al.
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    Cruz, Heber Zurian
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    Atefi, Seyed Reza
    Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Seoane, Fernando
    Inst. for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Dept. Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden / Swedish School of Textiles, University of Borås, Borås, Sweden.
    Lukowicz, Paul
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    TouchMe: Full-textile Touch Sensitive Skin for Encouraging Human-Robot Interaction (2017). Conference paper (Refereed)
  • 48.
    Zhou, Bo
    et al.
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Velez Altamirano, Carlos Andres
    Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Cruz Zurian, Heber
    Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Atefi, Seyed Reza
    Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Seoane Martinez, Fernando
    Swedish School of Textiles, University of Borås, Borås, Sweden / Institute for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Department Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden.
    Lukowicz, Paul
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction (2017). In: Sensors, E-ISSN 1424-8220, Vol. 17, no. 11, article id 2585. Article in journal (Refereed)
    Abstract [en]

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing, serving as a robot skin to detect human-robot interactions. The sensor covers a 20 cm × 20 cm area with 400 sensitive points sampled at 50 Hz per point. We defined seven gestures, inspired by the social and emotional interactions of typical person-to-person or person-to-pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors, and temporal features are then calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination recognizes the gestures with 93.3% accuracy for a known group of participants, and 89.1% for strangers.
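The two-stage feature pipeline described above can be sketched as follows. The descriptor choices are illustrative, and a single-level Haar transform (averages and differences of neighbouring time steps) stands in for the paper's wavelet analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical recording: 64 frames from a 20 x 20 textile pressure matrix.
frames = rng.random(size=(64, 20, 20))

# Stage 1: reduce each frame to a small descriptor (spatial reduction).
def frame_descriptor(f):
    # Mean, spread, peak pressure, and count of strongly pressed taxels.
    return np.array([f.mean(), f.std(), f.max(), (f > 0.8).sum()])

desc = np.array([frame_descriptor(f) for f in frames])   # shape (64, 4)

# Stage 2: temporal features per descriptor channel: basic statistics
# plus one level of a Haar wavelet decomposition over time.
stats = np.concatenate([desc.mean(0), desc.std(0), desc.min(0), desc.max(0)])
approx = (desc[0::2] + desc[1::2]) / 2.0     # low-frequency trend
detail = (desc[0::2] - desc[1::2]) / 2.0     # high-frequency fluctuation
wavelet = np.concatenate([np.abs(detail).mean(0), approx.std(0)])

feature_vector = np.concatenate([stats, wavelet])   # one vector per gesture
```

Collapsing each frame to a handful of descriptors before the temporal stage keeps the final feature vector small enough for the conventional classifiers the paper evaluates.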

  • 49.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Svensson, Henrik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Interaction, Cognition and Emotion Lab, Department of Applied IT, University of Gothenburg, Sweden.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Cognition and Interaction Lab, Department of Computer and Information Science, Linköping University, Sweden.
    Finding Your Way from the Bed to the Kitchen: Re-enacting and Re-combining Sensorimotor Episodes Learned from Human Demonstration (2016). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no. March, article id 9. Article in journal (Refereed)
    Abstract [en]

    Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations, and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, towards a goal specified by top-down contextual information. 

    The robot model is also used in a covert mode, where actions are inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly.

    We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior towards specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.
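The overt/covert distinction above can be sketched in a toy setting. Everything here is a hypothetical stand-in: a 1-D world replaces the simulated apartment, and the controller and forward model are hand-written rather than learned from demonstration as in the article:

```python
import numpy as np

rng = np.random.default_rng(4)

def controller(perceived_pos, goal):
    # Inverse model: choose an action from the (real or predicted) perception.
    return float(np.sign(goal - perceived_pos))

def forward_model(pos, action):
    # Predicts the next perception from the current one and the chosen action.
    return pos + 0.5 * action

def run(start, goal, covert, steps=20):
    pos, trace = start, [start]
    for _ in range(steps):
        action = controller(pos, goal)
        if covert:
            pos = forward_model(pos, action)                 # perception is predicted
        else:
            pos = pos + 0.5 * action + rng.normal(0, 0.05)   # noisy real world
        trace.append(pos)
        if abs(pos - goal) < 0.25:                           # goal reached
            break
    return trace

overt_trace = run(0.0, 4.0, covert=False)
covert_trace = run(0.0, 4.0, covert=True)
```

In covert mode the loop is closed through the forward model alone, so actions are "tried out" without being executed; the noise-free predictions also illustrate why covert runs can show fewer prediction errors than their overt counterparts.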

  • 50.
    Lowe, Robert
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. University of Gothenburg, Sweden.
    Barakova, Emilia
    Eindhoven University of Technology, The Netherlands.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Broekens, Joost
    Delft University of Technology, The Netherlands.
    Grounding emotions in robots: An introduction to the special issue (2016). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 24, no. 5, pp. 263-266. Article in journal (Refereed)
    Abstract [en]

    Robots inhabiting human environments need to act in relation to their own experience and embodiment as well as to social and emotional aspects. Robots that learn, act upon and incorporate their own experience and perception of others’ emotions into their responses make not only more productive artificial agents but also agents with whom humans can appropriately interact. This special issue seeks to address the significance of grounding emotions in robots in relation to aspects of physical and homeostatic interaction in the world at an individual and social level. Specific questions concern: How can emotion and social interaction be grounded in the behavioral activity of the robotic system? Is a robot able to have intrinsic emotions? How can emotions, grounded in the embodiment of the robot, facilitate individually and socially adaptive behavior in the robot? This opening article provides an introduction to the articles that comprise this special issue and briefly discusses their relationship to grounding emotions in robots.
