What Can You See?: Identifying Cues on Internal States From the Movements of Natural Social Interactions
2019 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 6, no. 49. Article in journal (Refereed). Published.
Abstract [en]
In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
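As a rough illustration of the kind of agreement analysis described above (this is a hypothetical sketch, not the paper's actual procedure; the helper function, toy ratings, and 3-point scale below are all assumptions for demonstration), one can compare the mean pairwise agreement between raters against a chance baseline:

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Mean fraction of items on which two raters assign the same label.

    `ratings` is a list of rater vectors, one label per rated clip.
    """
    pairs = list(combinations(ratings, 2))
    total = 0.0
    for a, b in pairs:
        # Fraction of clips where this pair of raters agrees exactly.
        total += sum(x == y for x, y in zip(a, b)) / len(a)
    return total / len(pairs)

# Toy example: three raters labelling five clips on a 3-point scale.
raters = [
    [1, 2, 2, 3, 1],
    [1, 2, 3, 3, 1],
    [1, 1, 2, 3, 1],
]
observed = pairwise_agreement(raters)
chance = 1 / 3  # uniform guessing over 3 labels
print(observed > chance)  # agreement above chance, as in the study's finding
```

A production analysis would instead use a chance-corrected statistic (e.g., Cohen's or Fleiss' kappa) and a proper permutation baseline, but the above conveys the core comparison.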
Place, publisher, year, edition, pages
Frontiers Research Foundation, 2019. Vol. 6, no. 49
Keywords [en]
social psychology, human-robot interaction, machine learning, social interaction, recognition
National Category
Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
URN: urn:nbn:se:his:diva-17301
DOI: 10.3389/frobt.2019.00049
ISI: 000473169300001
Scopus ID: 2-s2.0-85068522657
OAI: oai:DiVA.org:his-17301
DiVA, id: diva2:1330785
Available from: 2019-06-26. Created: 2019-06-26. Last updated: 2019-12-02. Bibliographically approved.