Högskolan i Skövde

his.se Publications
Publications (10 of 40)
Nair, V., Hemeren, P., Vignolo, A., Noceti, N., Nicora, E., Sciutti, A., . . . Sandini, G. (2023). Kinematic primitives in action similarity judgments: A human-centered computational model. IEEE Transactions on Cognitive and Developmental Systems, 15(4), 1981-1992
Kinematic primitives in action similarity judgments: A human-centered computational model
2023 (English). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 15, no 4, p. 1981-1992. Article in journal (Refereed). Published.
Abstract [en]

This paper investigates the role that kinematic features play in human action similarity judgments. The results of three experiments with human participants are compared with a computational model that solves the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and the human participants can reliably identify whether two actions are the same or not. Specifically, the similarity of most of the given actions could be judged from very limited information in a single feature domain (velocity or spatial). Both velocity and spatial features were, however, necessary to reach human-level performance on the evaluated actions. The experimental results also show that human performance on an action identification task relied clearly on kinematic information rather than on action semantics. Overall, both the model and the human participants are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.
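
The published model learns kinematic primitives from optical flow; the sketch below is only a hypothetical Python illustration of the narrower point that a same/different judgment can be made from a single feature domain (here velocity). The data layout, function names and correlation threshold are all assumptions, not the paper's implementation.

```python
import numpy as np

def speed_profile(traj: np.ndarray) -> np.ndarray:
    """Mean frame-to-frame marker speed for a point-light recording.
    traj: array of shape (frames, markers, 2) holding x/y positions."""
    step = np.diff(traj, axis=0)            # displacement per frame
    speeds = np.linalg.norm(step, axis=2)   # per-marker speed
    return speeds.mean(axis=1)              # one speed value per frame

def resample(signal: np.ndarray, n: int = 100) -> np.ndarray:
    """Linearly resample to n points so actions of different
    durations can be compared on a common time base."""
    return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(signal)), signal)

def same_action(a: np.ndarray, b: np.ndarray, threshold: float = 0.8) -> bool:
    """Velocity-domain-only judgment: correlate the two speed profiles."""
    r = np.corrcoef(resample(speed_profile(a)), resample(speed_profile(b)))[0, 1]
    return r >= threshold

# Synthetic demo: 60 frames, 13 markers; act2 is a noisy copy of act1.
rng = np.random.default_rng(0)
act1 = np.cumsum(rng.normal(size=(60, 13, 2)), axis=0)
act2 = act1 + rng.normal(scale=0.05, size=act1.shape)
print(same_action(act1, act2))  # True: near-identical kinematics
```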

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
Biological systems, Computation theory, Computational methods, Job analysis, Kinematics, Semantics, Action matching, Action similarity, Biological motion, Biological system modeling, Comparative studies, Computational modelling, Kinematic primitive, Light display, Point light display, Task analysis, Optical flows, Biology, comparative study, computational model, Computational modeling, Data models, Dictionaries, kinematic primitives, Optical flow
National Category
Computer Sciences; Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-22308 (URN); 10.1109/TCDS.2023.3240302 (DOI); 001126639000035; 2-s2.0-85148457281 (Scopus ID)
Note

CC BY 4.0

Corresponding author: Vipul Nair.

This work has been partially carried out at the Machine Learning Genoa (MaLGa) center, Università di Genova (IT). It has been partially supported by AFOSR, grant no. FA8655-20-1-7035, and by a research collaboration between the University of Skövde and the Istituto Italiano di Tecnologia, Genoa.

Available from: 2023-03-02. Created: 2023-03-02. Last updated: 2024-03-12. Bibliographically approved.
Nair, V., Suchan, J., Bhatt, M. & Hemeren, P. (2022). Anticipatory instances in films: What do they tell us about event understanding? Paper presented at Society for the Cognitive Studies of the Moving Image Conference (SCSMI 2022), Gandia, Spain, June 1-4, 2022.
Anticipatory instances in films: What do they tell us about event understanding?
2022 (English). Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

Event perception research highlights the significance of visuospatial attributes that influence event segmentation and prediction. The present study investigates how the visuospatial attributes of film events correlate with viewers’ ongoing event processes such as anticipatory gaze, prediction and segmentation. From an in-depth multimodal analysis (covering speech, hand-action, gaze, etc.) of event features in 25 movie scenes, correlated with visual attention analysis (eye-tracking of 32 participants per scene), we derive film instances (such as occlusion, enter/exit, and turning towards) that show trends of high anticipatory viewing behaviour. The first results provide a solid basis for using these derived instances to examine further how the different visuospatial attributes relate to event changes (where anticipation and segmentation occur). With these results we aim to argue that investigating film instances of an anticipatory nature can explicate how humans perform high-level characterization of visuospatial attributes and understand events.
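
As a hypothetical illustration of how anticipatory viewing behaviour can be quantified from eye-tracking data, the Python sketch below computes the fraction of viewers whose gaze enters a target region before the event itself occurs. The function name, data layout and all numbers are invented for illustration; this is not the study's actual pipeline.

```python
import numpy as np

def anticipatory_rate(entry_times: np.ndarray, event_time: float) -> float:
    """Fraction of viewers whose gaze first enters the target region
    before the event happens (anticipatory looking).
    entry_times: first entry time (s) per participant, NaN if never."""
    valid = entry_times[~np.isnan(entry_times)]
    return float((valid < event_time).mean()) if valid.size else 0.0

# Hypothetical scene: 32 viewers, an 'exit' event at t = 12.4 s.
rng = np.random.default_rng(1)
entries = rng.normal(loc=12.0, scale=0.6, size=32)
print(f"{anticipatory_rate(entries, 12.4):.0%} of viewers anticipated the exit")
```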

Keywords
event perception, event segmentation, anticipatory gaze
National Category
Information Systems; Other Computer and Information Science
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-21097 (URN)
Conference
Society for the Cognitive Studies of the Moving Image Conference (SCSMI 2022), Gandia, Spain, June 1-4, 2022
Note

The study proposal was accepted to SCSMI 2022. The ISBN and other publication information will be published after the conference, June 1-4.

Available from: 2022-04-29. Created: 2022-04-29. Last updated: 2023-03-28.
Nair, V., Suchan, J., Bhatt, M. & Hemeren, P. (2022). Attentional synchrony in films: A window to visuospatial characterization of events. In: Stephen N. Spencer (Ed.), Proceedings SAP 2022: ACM Symposium on Applied Perception September 22 – 23, 2022. Paper presented at SAP 2022, ACM Symposium on Applied Perception September 22 – 23, 2022, TBC, USA. Association for Computing Machinery (ACM), Article ID 8.
Attentional synchrony in films: A window to visuospatial characterization of events
2022 (English). In: Proceedings SAP 2022: ACM Symposium on Applied Perception, September 22–23, 2022 / [ed] Stephen N. Spencer, Association for Computing Machinery (ACM), 2022, article id 8. Conference paper, Published paper (Refereed).
Abstract [en]

The study of event perception emphasizes the importance of visuospatial attributes in everyday human activities and how they influence event segmentation, prediction and retrieval. Attending to these visuospatial attributes is the first step toward event understanding, and correlating attentional measures with such attributes would therefore help to further our understanding of event comprehension. In this study, we focus on attentional synchrony amongst other attentional measures and analyze select film scenes through the lens of a visuospatial event model. Here we present the first results of an in-depth multimodal visuospatial analysis (covering head-turn, hand-action, etc.) of 10 movie scenes, correlated with visual attention (eye-tracking of 32 participants per scene). With the results, we tease apart event segments of high and low attentional synchrony and describe the distribution of attention in relation to the visuospatial features. This analysis gives us an indirect measure of attentional saliency for a scene of a particular visuospatial complexity, which ultimately directs the attentional selection of observers in a given context.
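
Attentional synchrony is commonly operationalized as the per-frame dispersion of gaze positions across viewers (low dispersion = high synchrony). The Python sketch below is a minimal illustration under that assumption; the data shapes, screen size and quartile cut-off are hypothetical rather than the study's actual analysis.

```python
import numpy as np

def gaze_dispersion(gaze: np.ndarray) -> np.ndarray:
    """Mean distance of each viewer's gaze from the per-frame centroid.
    gaze: array of shape (frames, participants, 2) in screen pixels."""
    center = gaze.mean(axis=1, keepdims=True)       # centroid per frame
    dists = np.linalg.norm(gaze - center, axis=2)   # viewer-to-centroid
    return dists.mean(axis=1)                       # dispersion per frame

# Synthetic demo: 500 frames, 32 viewers, a 1920x1080 screen.
rng = np.random.default_rng(2)
gaze = rng.uniform([0, 0], [1920, 1080], size=(500, 32, 2))
disp = gaze_dispersion(gaze)

# Flag the lowest-dispersion quartile as high-synchrony segments.
high_sync = disp < np.quantile(disp, 0.25)
print(high_sync.sum(), "of", len(disp), "frames flagged as high synchrony")
```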

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2022
Keywords
Visuoauditory cues, Human-interaction, Eye-tracking, Attention
National Category
Human Computer Interaction; Media Studies
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-22205 (URN); 10.1145/3548814.3551466 (DOI); 2-s2.0-85139425610 (Scopus ID); 978-1-4503-9455-0 (ISBN)
Conference
SAP 2022, ACM Symposium on Applied Perception September 22 – 23, 2022, TBC, USA
Note

CC BY 4.0

Available from: 2023-01-25. Created: 2023-01-25. Last updated: 2023-05-04. Bibliographically approved.
Hemeren, P., Johannesson, M., Lebram, M. & Eriksson, F. (2021). Biological Motion Indicators for the Detection of Cyclists at Night. In: Erik Billing; Andreas Kalckert (Ed.), Proceedings of the 16th SweCog Conference. Paper presented at SweCog 2021, the 16th SweCog conference, virtual from Skövde, Sweden, November 10-12, 2021 (pp. 29-31). Skövde: University of Skövde.
Biological Motion Indicators for the Detection of Cyclists at Night
2021 (English). In: Proceedings of the 16th SweCog Conference / [ed] Erik Billing; Andreas Kalckert, Skövde: University of Skövde, 2021, p. 29-31. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
Skövde: University of Skövde, 2021
Series
SUSI, ISSN 1653-2325 ; 2021:2
Keywords
drivers, cyclists, reflectors, detection, biological motion, eye movements
National Category
Interaction Technologies; Psychology (excluding Applied Psychology)
Research subject
Interaction Lab (ILAB); Media, Technology and Culture (MTEC)
Identifiers
urn:nbn:se:his:diva-20938 (URN); 978-91-983667-8-5 (ISBN)
Conference
SweCog 2021, the 16th SweCog conference, virtual from Skövde, Sweden, November 10-12, 2021
Note

paul.hemeren@his.se

Available from: 2022-02-24. Created: 2022-02-24. Last updated: 2022-05-12. Bibliographically approved.
Nair, V., Suchan, J., Bhatt, M. & Hemeren, P. (2021). Event segmentation through the lens of multimodal interaction. In: Thomas Hünefeldt; Marta Olivetti Belardinelli (Ed.), Proceedings of the 8th International Conference on Spatial Cognition: Cognition and Action in a Plurality of Spaces (ICSC 2021). Paper presented at 8th International Conference on Spatial Cognition: Cognition and Action in a Plurality of Spaces (ICSC 2021), (Virtual Conference), September 13-17, 2021 (pp. 61-62). Springer
Event segmentation through the lens of multimodal interaction
2021 (English). In: Proceedings of the 8th International Conference on Spatial Cognition: Cognition and Action in a Plurality of Spaces (ICSC 2021) / [ed] Thomas Hünefeldt; Marta Olivetti Belardinelli, Springer, 2021, p. 61-62. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

Research in naturalistic event perception highlights the significance of visuospatial attributes pertaining to everyday embodied human interaction. This research focuses on developing a conceptual cognitive model to characterise the role of multimodality in human interaction, its influence on visuospatial representation, event segmentation, and high-level event prediction.

Our research aims to characterise the influence of modalities such as visual attention, speech, hand-action, body-pose, head-movement, spatial-position, motion, and gaze on judging event segments. Our particular focus is on visuoauditory narrative media. We select 25 movie scenes from a larger project concerning cognitive film/media studies and perform a detailed multimodal analysis against the backdrop of an elaborate (formally specified) event analysis ontology. Corresponding to the semantic event analysis of each scene, we also perform a high-level visual attention analysis (eye-tracking based) with 32 participants per scene. Correlating the features of each scene with visual attention constitutes the key method that we utilise in our analysis.

We hypothesise that the attentional performance on event segments reflects the influence exerted by multimodal cues on event segmentation and prediction, thereby enabling us to explicate the representational basis of events. The first results show trends of multiple viewing behaviours such as attentional synchrony, gaze pursuit and attentional saliency towards human faces.

Work is presently in progress to further investigate the role of visuospatial/auditory cues in high-level event perception, e.g., involving anticipatory gaze vis-à-vis event prediction. Applications of this conceptual cognitive model and its behavioural outcomes abound in domains such as (digital) narrative media design and social robotics.
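
As a hypothetical illustration of the key method (correlating scene features with visual attention), the short Python sketch below correlates a binary per-frame feature track (e.g., hand-action present) with a gaze-dispersion track like the one sketched above. The data are synthetic and the effect size is invented.

```python
import numpy as np

# Synthetic per-frame tracks for one scene: a 0/1 feature annotation
# and a gaze-dispersion signal that drops during hand-actions.
rng = np.random.default_rng(3)
hand_action = rng.integers(0, 2, size=500)
dispersion = 200 - 60 * hand_action + rng.normal(scale=30, size=500)

# Point-biserial correlation between the feature and attention measure.
r = np.corrcoef(hand_action, dispersion)[0, 1]
print(f"hand-action vs gaze dispersion: r = {r:.2f}")
# A negative r means dispersion drops (synchrony rises) during hand-actions.
```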

Place, publisher, year, edition, pages
Springer, 2021
Series
Cognitive Processing, ISSN 1612-4782, E-ISSN 1612-4790 ; 22: Suppl. 1
Keywords
visuospatial cues, human-interaction, event segmentation, multimodality, eye-tracking
National Category
Human Computer Interaction; Computer Sciences; Other Computer and Information Science
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-20709 (URN); 000693578400203
Conference
8th International Conference on Spatial Cognition: Cognition and Action in a Plurality of Spaces (ICSC 2021), (Virtual Conference), September 13-17, 2021
Available from: 2021-11-20. Created: 2021-11-20. Last updated: 2023-03-06. Bibliographically approved.
Hemeren, P., Veto, P., Thill, S., Cai, L. & Sun, J. (2021). Kinematic-based classification of social gestures and grasping by humans and machine learning techniques. Frontiers in Robotics and AI, 8(308), 1-17, Article ID 699505.
Kinematic-based classification of social gestures and grasping by humans and machine learning techniques
2021 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, no 308, p. 1-17, article id 699505. Article in journal (Refereed). Published.
Abstract [en]

The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. The ability to classify human movement into meaningful gestures or segments also plays a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and by four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest and Support Vector Machine) is assessed, using human classification data as a reference for evaluating the classification performance of the machine learning techniques on thirty hand/arm gestures. The gestures are rated in one task according to the extent of grasping motion, and in another task according to the extent to which the same gestures are perceived as social. The results indicate that humans clearly rate the gestures differently in the two tasks. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of both the grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as in communicative point-light actions.
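
A minimal sketch of this kind of evaluation, assuming one kinematic feature vector per gesture clip and human ratings binarized into labels. Three of the four techniques map directly onto current scikit-learn estimators; scikit-learn's LSHForest has been removed from recent versions of the library, so it is omitted here. All data below are synthetic placeholders, not the study's gestures.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data: 300 gesture clips, 24 kinematic features each;
# labels stand in for binarized human ratings (e.g. grasping or not).
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 24))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "k-Nearest Neighbor": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Support Vector Machine": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```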

Place, publisher, year, edition, pages
Frontiers Media S.A., 2021
Keywords
gesture recognition, social gestures, machine learning, Biological motion, kinematics, social signal processing
National Category
Human Computer Interaction; Robotics
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-20560 (URN); 10.3389/frobt.2021.699505 (DOI); 000716638700001; 34746242 (PubMedID); 2-s2.0-85118674941 (Scopus ID)
Note

CC BY 4.0

Correspondence: Dr. Paul Hemeren, University of Skövde, Skövde, Sweden, paul.hemeren@his.se

This article is part of the Research Topic Affective Shared Perception

Published: 15 October 2021.

Available from: 2021-09-13. Created: 2021-09-13. Last updated: 2022-09-02.
Nair, V., Hemeren, P., Vignolo, A., Noceti, N., Nicora, E., Sciutti, A., . . . Sandini, G. (2020). Action similarity judgment based on kinematic primitives. In: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). Paper presented at the 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 26 October to 27 November 2020, online. IEEE.
Action similarity judgment based on kinematic primitives
2020 (English). In: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), IEEE, 2020. Conference paper, Published paper (Refereed).
Abstract [en]

Understanding which features humans rely on in visually recognizing action similarity is a crucial step towards a clearer picture of human action perception from a learning and developmental perspective. In the present work, we investigate to what extent a computational model based on kinematics can determine action similarity and how its performance relates to human similarity judgments of the same actions. To this aim, twelve participants perform an action similarity task, and their performance is compared to that of a computational model solving the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and the human participants can reliably identify whether two actions are the same or not. However, the model produces more false hits and has a greater selection bias than the human participants. A possible reason for this is the particular sensitivity of the model towards kinematic primitives of the presented actions. In a second experiment, human participants' performance on an action identification task indicated that they relied solely on kinematic information rather than on action semantics. The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.
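
The "false hits" and "selection bias" findings map naturally onto the false-alarm rate and decision criterion of signal detection theory. As a worked illustration with invented counts (not the experiment's data), the Python sketch below computes sensitivity (d') and criterion (c) for a same/different task:

```python
from statistics import NormalDist

def sdt_measures(hits: int, misses: int, fas: int, crs: int) -> tuple:
    """Sensitivity d' and criterion c for a same/different task, using
    a log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented counts: the 'model' shows more false hits (higher false-alarm
# rate) and a more liberal criterion than the 'humans'.
print("human: d'=%.2f, c=%.2f" % sdt_measures(hits=45, misses=5, fas=5, crs=45))
print("model: d'=%.2f, c=%.2f" % sdt_measures(hits=47, misses=3, fas=15, crs=35))
```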

Place, publisher, year, edition, pages
IEEE, 2020
Series
IEEE International Conference on Development and Learning, ISSN 2161-9484, E-ISSN 2161-9484
Keywords
Kinematics, Computational modeling, Task analysis, Biological system modeling, Dictionaries, Visualization, Semantics
National Category
Interaction Technologies
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-19425 (URN); 10.1109/ICDL-EpiRob48136.2020.9278047 (DOI); 000692524300007; 2-s2.0-85097550238 (Scopus ID); 978-1-7281-7306-1 (ISBN); 978-1-7281-7320-7 (ISBN)
Conference
2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 26 October to 27 November 2020, online
Funder
Knowledge Foundation; EU, European Research Council, 20140220; Swedish Research Council, 804388
Note

Funding Agency: 10.13039/100003077 - Knowledge Foundation; 10.13039/100010663 - European Research Council

Available from: 2021-01-23. Created: 2021-01-23. Last updated: 2023-03-03. Bibliographically approved.
Hemeren, P. & Rybarczyk, Y. (2020). The Visual Perception of Biological Motion in Adults. In: Nicoletta Noceti, Alessandra Sciutti, Francesco Rea (Ed.), Modelling Human Motion: From Human Perception to Robot Design (pp. 53-71). Cham: Springer
The Visual Perception of Biological Motion in Adults
2020 (English). In: Modelling Human Motion: From Human Perception to Robot Design / [ed] Nicoletta Noceti, Alessandra Sciutti, Francesco Rea, Cham: Springer, 2020, p. 53-71. Chapter in book (Refereed).
Abstract [en]

This chapter presents research about the roles of different levels of visual processing and motor control in our ability to perceive biological motion produced by humans and by robots. The levels of visual processing addressed include high-level semantic processing of action prototypes based on global features as well as lower-level local processing based on kinematic features. A further important aspect concerns the interaction between these two levels of processing and the interaction between our own movement patterns and their impact on our visual perception of biological motion. The authors’ results describe the conditions under which semantic and kinematic features influence one another in our understanding of human actions. In addition, results are presented to illustrate the claim that motor control and different levels of the visual perception of biological motion have clear consequences for human–robot interaction. Understanding the movement of robots is greatly facilitated when that movement is consistent with the psychophysical constraints of Fitts’ law, minimum jerk and the two-thirds power law.
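
As a brief worked sketch of the last two constraints (all values are arbitrary): the minimum-jerk model prescribes the smooth, bell-shaped velocity profile of point-to-point reaches, and the two-thirds power law ties tangential velocity to path curvature. The Python below illustrates both formulas.

```python
import numpy as np

def minimum_jerk(x0: float, x1: float, n: int = 200) -> np.ndarray:
    """Minimum-jerk position profile over normalized time tau in [0, 1]:
    x = x0 + (x1 - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5)."""
    tau = np.linspace(0.0, 1.0, n)
    return x0 + (x1 - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def two_thirds_velocity(curvature: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Two-thirds power law: tangential velocity v = gain * curvature**(-1/3),
    so movement slows where the path curves sharply."""
    return gain * curvature ** (-1.0 / 3.0)

# A 30 cm reach lasting 1 s: velocity peaks at mid-movement (bell shape).
T = 1.0                                # duration in seconds
x = minimum_jerk(0.0, 0.3)             # positions in meters
v = np.gradient(x, T / (len(x) - 1))   # numerical velocity in m/s
print(f"peak velocity {v.max():.3f} m/s at t = {v.argmax() * T / (len(x) - 1):.2f} s")

# Higher curvature implies lower tangential velocity:
print(two_thirds_velocity(np.array([0.5, 2.0, 8.0])))
```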

Place, publisher, year, edition, pages
Cham: Springer, 2020
Keywords
biological motion, human vision, robotic vision, action recognition
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-18835 (URN); 10.1007/978-3-030-46732-6_4 (DOI); 978-3-030-46731-9 (ISBN); 978-3-030-46732-6 (ISBN)
Available from: 2020-07-16. Created: 2020-07-16. Last updated: 2020-10-28. Bibliographically approved.
Hemeren, P. (2019). Översikt: AI: nuläget och vart är vi på väg? [Overview: AI: where are we now and where are we heading?]. NOD: forum för tro, kultur och samhälle (3).
Översikt: AI: nuläget och vart är vi på väg? [Overview: AI: where are we now and where are we heading?]
2019 (Swedish). In: NOD: forum för tro, kultur och samhälle, ISSN 1652-6066, no 3. Article, review/survey (Other (popular science, discussion, etc.)). Published.
Keywords
AI, artificial intelligence, language, consciousness, ethics, thinking
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-18348 (URN)
Available from: 2020-03-29. Created: 2020-03-29. Last updated: 2020-04-16. Bibliographically approved.
Hemeren, P. (2019). Reverse Hierarchy Theory and the Role of Kinematic Information in Semantic Level Processing and Intention Perception. Paper presented at Anticipation and Anticipatory Systems: Humans Meet Artificial Intelligence, Örebro, Sweden, June 10-13, 2019.
Reverse Hierarchy Theory and the Role of Kinematic Information in Semantic Level Processing and Intention Perception
2019 (English). Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

In many ways, human cognition is importantly predictive (e.g., Clark, 2015). A critical source of information that humans use to anticipate the future actions of other humans and to perceive intentions is bodily movement (e.g., Ansuini et al., 2014; Becchio et al., 2018; Koul et al., 2019; Sciutti et al., 2015). This ability extends to perceiving the intentions of other humans based on past and current actions. The purpose of this abstract is to address the issue of anticipation in relation to levels of processing in visual perception, and to present experimental results that demonstrate high-level semantic processing in the visual perception of various biological motion displays. These research results (Hemeren & Thill, 2011; Hemeren et al., 2018; Hemeren et al., 2016) show that social aspects and future movement patterns can be predicted from fairly simple kinematic patterns in biological motion sequences, demonstrating the different environmental (gravity and perspective) and bodily constraints that contribute to understanding our social and movement-based interactions with others. Understanding how humans perceive anticipation and intention in one another should help us create artificial systems that can also perceive human anticipation and intention.

Keywords
anticipation, intention perception, biological motion, cognitive systems
National Category
Human Computer Interaction
Research subject
Interaction Lab (ILAB)
Identifiers
urn:nbn:se:his:diva-17826 (URN)
Conference
Anticipation and Anticipatory Systems: Humans Meet Artificial Intelligence, Örebro, Sweden, June 10-13, 2019
Funder
Knowledge Foundation
Available from: 2019-10-28. Created: 2019-10-28. Last updated: 2019-10-30. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-1227-6843
