What Can You See?: Identifying Cues on Internal States From the Movements of Natural Social Interactions
Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom.
Warwick Business School, University of Warwick, Coventry, United Kingdom.
Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom / ID Lab—imec, University of Ghent, Belgium.
University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, Netherlands (Interaction Lab (ILAB)). ORCID iD: 0000-0003-1177-4119
2019 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 6, no. 49. Article in journal (Refereed). Published.
Abstract [en]

In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence, and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
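The abstract mentions computing inter-rater agreement between participants. As an illustrative sketch only (the ratings below are hypothetical, and the paper's own agreement measure may differ), chance-corrected agreement between two raters of categorical or Likert-style judgments can be computed with Cohen's kappa:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal label counts.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 1-5 ratings of "engagement" for ten clips (not the study's data).
a = [5, 4, 4, 2, 1, 3, 4, 5, 2, 3]
b = [5, 4, 3, 2, 1, 3, 4, 4, 2, 3]
print(round(cohens_kappa(a, b), 3))  # → 0.744
```

A kappa of 0 means agreement no better than chance, 1 means perfect agreement; the abstract's "above chance" finding corresponds to kappa reliably greater than 0.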

Place, publisher, year, edition, pages
Frontiers Media S.A., 2019. Vol. 6, no. 49
Keywords [en]
social psychology, human-robot interaction, machine learning, social interaction, recognition
National Category
Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
URN: urn:nbn:se:his:diva-17301
DOI: 10.3389/frobt.2019.00049
ISI: 000473169300001
Scopus ID: 2-s2.0-85068522657
OAI: oai:DiVA.org:his-17301
DiVA, id: diva2:1330785
Available from: 2019-06-26. Created: 2019-06-26. Last updated: 2019-08-20. Bibliographically approved.

Open Access in DiVA

fulltext (586 kB), 17 downloads
File information
File name: FULLTEXT01.pdf
File size: 586 kB
Checksum (SHA-512): 814ebeabe0c4873402d9a4a51b173b802cb8ed43179e9470b7556368d66e96597fdd0ce9c31ee6e397f787a1367b23644d40483158b9de70e9723da9d44864a3
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Thill, Serge

Total: 17 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.
