What Can You See?: Identifying Cues on Internal States From the Movements of Natural Social Interactions
Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom.
Warwick Business School, University of Warwick, Coventry, United Kingdom.
Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom / ID Lab—imec, University of Ghent, Belgium.
University of Skövde, Department of Information Technology. University of Skövde, Research Centre for Information Technology. Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, Netherlands. (Interaction Lab (ILAB)) ORCID iD: 0000-0003-1177-4119
2019 (English) In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 6, no. 49. Journal article (Refereed) Published
Abstract [en]

In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
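The inter-rater agreement analysis mentioned in the abstract can be illustrated with a toy computation. The abstract does not specify which agreement statistic was used, so the following sketch assumes Cohen's kappa for two raters on the same set of clips; the ratings shown are invented for illustration only.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items.

    Returns 1.0 for perfect agreement and ~0.0 for agreement no better
    than expected by chance given each rater's category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items the two raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Likert-style ratings (1-5) of the same eight clips by two raters.
a = [1, 2, 3, 3, 4, 5, 2, 2]
b = [1, 2, 3, 4, 4, 5, 2, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.686
```

A kappa clearly above zero, as in this example, corresponds to the above-chance agreement the study reports for the 2D positional-data condition.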

Place, publisher, year, edition, pages
Frontiers Media S.A., 2019. Vol. 6, no. 49
Keywords [en]
social psychology, human-robot interaction, machine learning, social interaction, recognition
National subject category
Human-Computer Interaction (Interaction Design)
Research subject
Interaction Lab (ILAB)
Identifiers
URN: urn:nbn:se:his:diva-17301
DOI: 10.3389/frobt.2019.00049
ISI: 000473169300001
Scopus ID: 2-s2.0-85068522657
OAI: oai:DiVA.org:his-17301
DiVA, id: diva2:1330785
Available from: 2019-06-26 Created: 2019-06-26 Last updated: 2019-08-20 Bibliographically approved

Open Access in DiVA

fulltext (586 kB), 19 downloads
File information
File name: FULLTEXT01.pdf
File size: 586 kB
Checksum (SHA-512): 814ebeabe0c4873402d9a4a51b173b802cb8ed43179e9470b7556368d66e96597fdd0ce9c31ee6e397f787a1367b23644d40483158b9de70e9723da9d44864a3
Type: fulltext
MIME type: application/pdf

Other links

Publisher's full text
Scopus

Person records BETA

Thill, Serge

Total: 19 downloads
The number of downloads is the sum of downloads for all full texts. It may include, for example, earlier versions that are no longer available.