Högskolan i Skövde

his.se Publications
1 - 40 of 40
  • 1.
    Drejing, Karl
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Engagement: A traceable motivational concept in human-robot interaction, 2015. In: Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on, IEEE Computer Society, 2015, p. 956-961. Conference paper (Refereed)
    Abstract [en]

    Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect engagement of other humans can help us understand how we can build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition, based on motivation theories, and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done by the use of data from multiple sources such as observer ratings, kinematic data, audio and outcomes of interactions. We use the domain of human-robot interaction in order to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework consequently making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with an ability to reengage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.

  • 2.
    Hemeren, Paul
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Mind in Action: Action Representation and the Perception of Biological Motion, 2008. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The ability to understand and communicate about the actions of others is a fundamental aspect of our daily activity. How can we talk about what others are doing? What qualities do different actions have such that they cause us to see them as being different or similar? What is the connection between what we see and the development of concepts and words or expressions for the things that we see? To what extent can two different people see and talk about the same things? Is there a common basis for our perception, and is there then a common basis for the concepts we form and the way in which the concepts become lexicalized in language? The broad purpose of this thesis is to relate aspects of perception, categorization and language to action recognition and conceptualization. This is achieved by empirically demonstrating a prototype structure for action categories and by revealing the effect this structure has on language via the semantic organization of verbs for natural actions. The results also show that implicit access to categorical information can affect the perceptual processing of basic actions. These findings indicate that our understanding of human actions is guided by the activation of high level information in the form of dynamic action templates or prototypes. More specifically, the first two empirical studies investigate the relation between perception and the hierarchical structure of action categories, i.e., subordinate, basic, and superordinate level action categories. Subjects generated lists of verbs based on perceptual criteria. Analyses based on multidimensional scaling showed a significant correlation for the semantic organization of a subset of the verbs for English and Swedish speaking subjects. Two additional experiments were performed in order to further determine the extent to which action categories exhibit graded structure, which would indicate the existence of prototypes for action categories. 
The results from typicality ratings and category verification showed that typicality judgments reliably predict category verification times for instances of different actions. Finally, the results from a repetition (short-term) priming paradigm suggest that high level information about the categorical differences between actions can be implicitly activated and facilitates the later visual processing of displays of biological motion. This facilitation occurs for upright displays, but appears to be lacking for displays that are shown upside down. These results show that the implicit activation of information about action categories can play a critical role in the perception of human actions.

  • 3.
    Hemeren, Paul
    University of Skövde, School of Humanities and Informatics.
    Orientation specific effects of automatic access to categorical information in biological motion perception, 2005. In: Proceedings of the 27th Annual Conference of the Cognitive Science Society: CogSci05 / [ed] Bruno G. Bara, Lawrence Barsalou, Monica Bucciarelli, Lawrence Erlbaum Associates, 2005, p. 935-940. Conference paper (Refereed)
    Abstract [en]

    Previous findings from studies of biological motion perception suggest that access to stored high-level knowledge about action categories contributes to the fast identification of actions depicted in point-light displays of biological motion.

    Three priming experiments were conducted to investigate the automatic access to stored categorical level information in the visual processing of biological motion and the extent to which this access varies as a function of action orientation. The results show that activation of categorical level information occurs even when participants are given a task that does not require access to the categorical nature of the actions depicted in point-light displays. The results suggest that the visual processing of upright actions is indicative of Hochstein and Ahissar’s notion of vision at a glance, whereas inverted actions indicate vision with scrutiny.

  • 4.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Översikt: AI: nuläget och vart är vi på väg? [Overview: AI: where are we now and where are we headed?], 2019. In: NOD: forum för tro, kultur och samhälle, ISSN 1652-6066, no 3. Article, review/survey (Other (popular science, discussion, etc.))
  • 5.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Reverse Hierarchy Theory and the Role of Kinematic Information in Semantic Level Processing and Intention Perception, 2019. Conference paper (Refereed)
    Abstract [en]

    In many ways, human cognition is importantly predictive (e.g., Clark, 2015). A critical source of information that humans use to anticipate the future actions of other humans and to perceive intentions is bodily movement (e.g., Ansuini et al., 2014; Becchio et al., 2018; Koul et al., 2019; Sciutti et al., 2015). This ability extends to perceiving the intentions of other humans based on past and current actions. The purpose of this abstract is to address the issue of anticipation according to levels of processing in visual perception and experimental results that demonstrate high-level semantic processing in the visual perception of various biological motion displays. These research results (Hemeren & Thill, 2011; Hemeren et al., 2018; Hemeren et al., 2016) show that social aspects and future movement patterns can be predicted from fairly simple kinematic patterns in biological motion sequences, which demonstrates the different environmental (gravity and perspective) and bodily constraints that contribute to understanding our social and movement-based interactions with others. Understanding how humans perceive anticipation and intention amongst one another should help us create artificial systems that also can perceive human anticipation and intention.

  • 6.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Signals for Active Safety Systems to Detect Cyclists and Their Intentions in Traffic, 2018. Conference paper (Refereed)
    Abstract [en]

    Objectives: Human cognition is importantly predictive. This predictive ability can also be applied to predict the future actions of cyclists in traffic. Active safety systems in (semi-)autonomous vehicles will likely need to detect and predict human actions occurring in different traffic situations.

    Results from two experiments demonstrate the effect of different patterns of human movement on predicting the behavior of cyclists and on the distance at which drivers detect cyclists in a city environment. This research was carried out by observing recorded sequences on a computer as well as in a driving simulator, in order to include more naturalistic conditions while retaining a high level of experimental control. As a complement to our previous research (Hemeren et al., 2014), we aimed to determine the distance at which drivers would detect and predict cyclists' behavior.

    Methods: Participants in both experiments (90 participants in experiment 1 and 24 in experiment 2) observed video-recorded cyclists wearing three different patterns of reflective clothing (Fig. 1): biomotion, vest and the legal minimum requirement (legal), in which no reflector material was worn by the cyclists. In experiment 1, participants were instructed to predict if an approaching cyclist would make a left-turn or continue straight on when approaching a crossing. This task was also performed during daylight, dusk and at night. In the second experiment, participants in a driving simulator indicated (as a secondary task) when they saw a cyclist riding along the side of the road.

    Results: The biomotion reflective clothing led to a prediction accuracy of 88% for cyclists' intentions at 9 meters before a crossing in the nighttime condition. For the legal minimum the result was 59%, and for the vest 67%. Detection distance in the driving simulator was also significantly greater for the biomotion condition than for the legal and vest conditions: visual detection occurred at almost twice the distance for biomotion compared to the other two reflective clothing conditions.

    Conclusions: The results point to the critical role that biological motion can play in predicting the intentions of cyclists and in detecting them in traffic. This information can be used to inform (semi-)autonomous systems of human intentions in traffic.

  • 7.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    To AIR is Human, or is it?: The Role of High-Level Representations and Conscious Awareness in Biological Motion Perception, 2019. Conference paper (Refereed)
    Abstract [en]

    The purpose of this research is to address the nature of high-level processing within visual perception. In particular, results from the visual processing of biological motion will be used to discuss the role of attention in high-level vision and visual consciousness. Original results from 3 priming experiments indicate “automatic” high-level semantic activation in biological motion perception. The view presented here is discussed in the context of Prinz’s (2000, 2003) AIR-theory. AIR stands for Attended Intermediate-level Representations and claims that visual consciousness resides at the level of intermediate-level representations. In contrast, the view presented here is that results from behavioral and neuroscientific studies of biological motion suggest that visual consciousness occurs at high cortical levels. Moreover, the Reverse Hierarchy Theory of Hochstein and Ahissar (2002) asserts that spread attention in high cortical areas is indicative of what they term “vision at a glance.” The gist of their theory is that explicit high-level visual processing involves initial feedforward mechanisms that implicitly follow a bottom-up hierarchical pathway. The end product of the processing, and the beginning of explicit visual perception, is conscious access to perceptual content in high-level cortical areas. Finally, I discuss the specific claims in AIR and present objections to Prinz’s arguments for why high-level visual processors are not good candidates for the locale of consciousness. In conclusion, the central claim of AIR with an emphasis on the connection between intermediate level representations and perceptual awareness seems to be too strong, and the arguments against high-level perceptual awareness are not convincing.

  • 8.
    Hemeren, Paul
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Alklind Taylor, Anna-Sofia
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Vad gör en kognitionsvetare? [What does a cognitive scientist do?], 2012. In: Kognitionsvetenskap / [ed] Jens Allwood; Mikael Jensen, Lund, 2012, 1, p. 57-66. Chapter in book (Refereed)
  • 9.
    Hemeren, Paul E.
    Cognitive Science, Lund University, Sweden.
    Frequency, ordinal position and semantic distance as measures of cross-cultural stability and hierarchies for action verbs, 1996. In: Acta Psychologica, ISSN 0001-6918, E-ISSN 1873-6297, Vol. 91, no 1, p. 39-66. Article in journal (Refereed)
    Abstract [en]

    Swedish and English (American) speaking subjects were given a superordinate description for a general class of actions that depict bodily movement. Based on a listing task similar to the one used in Battig and Montague (1969), the subjects were instructed to list all the actions that conformed to the superordinate. The results of the task indicate graded structure for the superordinate category as well as hierarchical relations between a basic and subordinate level as shown by measures of response frequencies and mean ordinal positions. These measures also correlated highly between the Swedish and American samples for the most frequently listed verbs, indicating a strong degree of cross-cultural stability. In an additional test of this stability, the ordinal positions of the verbs were used as proximity data in multidimensional scaling analyses in order to obtain a measure of the semantic distance between the different verbs. A correlation between the Swedish and American samples, using the derived distances for all possible pairs of the verbs, revealed a significant degree of stability. Furthermore, groupings of locomotory and vocal actions in the 3-dimensional multidimensional scaling solutions showed a tendency towards a much stronger stability. A speculative account of these results is proposed in terms of the physical constraints in human motion and the frequency of performing or seeing others perform actions around us.

  • 10.
    Hemeren, Paul E.
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Johannesson, Mikael
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lebram, Mikael
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Eriksson, Fredrik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Detecting Cyclists at Night: visibility effects of reflector placement and different lighting conditions, 2017. In: Proceedings of the 6th Annual International Cycling Safety Conference, 2017. Conference paper (Refereed)
  • 11.
    Hemeren, Paul E.
    et al.
    University of Skövde, School of Humanities and Informatics.
    Kasviki, Sofia
    University of Skövde, School of Humanities and Informatics.
    Gawronska, Barbara
    Department of Foreign Languages and Translation, University of Agder, Norway.
    Lexicalization of natural actions and cross-linguistic stability, 2008. In: Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics ExLing 2008: 25-27 August 2008, Athens, Greece / [ed] Antonis Botinis, University of Athens, 2008, p. 105-108. Conference paper (Refereed)
    Abstract [en]

    To what extent do Modern Greek, Polish, Swedish and American English similarly lexicalize action concepts, and how similar are the semantic associations between verbs denoting natural actions? Previous results indicate cross-linguistic stability between American English, Swedish, and Polish in verbs denoting basic human body movement, mouth movements, and sound production. The research reported here extends the cross-linguistic comparison to include Greek, which, unlike Polish, American English and Swedish, is a path-language. We used action imagery criteria to obtain lists of verbs from native Greek speakers. The data were analyzed by using multidimensional scaling, and the results were compared to those previously obtained.

  • 12.
    Hemeren, Paul E.
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Thill, Serge
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Deriving motor primitives through action segmentation, 2011. In: Frontiers in Psychology, ISSN 1664-1078, Vol. 1, p. 1-11, article id 243. Article in journal (Refereed)
    Abstract [en]

    The purpose of the present experiment is to further understand the effect of levels of processing (top-down vs. bottom-up) on the perception of movement kinematics and primitives for grasping actions in order to gain insight into possible primitives used by the mirror system. In the present study, we investigated the potential of identifying such primitives using an action segmentation task. Specifically, we investigated whether or not segmentation was driven primarily by the kinematics of the action, as opposed to high-level top-down information about the action and the object used in the action. Participants in the experiment were shown 12 point-light movies of object-centered hand/arm actions that were either presented in their canonical orientation together with the object in question (top-down condition) or upside down (inverted) without information about the object (bottom-up condition). The results show that (1) despite impaired high-level action recognition for the inverted actions participants were able to reliably segment the actions according to lower-level kinematic variables, (2) segmentation behavior in both groups was significantly related to the kinematic variables of change in direction, velocity, and acceleration of the wrist (thumb and finger tips) for most of the included actions. This indicates that top-down activation of an action representation leads to similar segmentation behavior for hand/arm actions compared to bottom-up, or local, visual processing when performing a fairly unconstrained segmentation task. Motor primitives as parts of more complex actions may therefore be reliably derived through visual segmentation based on movement kinematics.

  • 13.
    Hemeren, Paul
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Gawronska, Barbara
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lexicalization of natural actions and cross-linguistic stability, 2007. In: Communication - Action - Meaning: A Festschrift to Jens Allwood / [ed] Elisabeth Ahlsén, Peter Juel Henrichsen, Richard Hirsch, Joakim Nivre, Åsa Abelin, Sven Strömqvist, Shirley Nicholson, Göteborg: Department of Linguistics, Göteborg University, 2007, p. 57-74. Chapter in book (Other academic)
  • 14.
    Hemeren, Paul
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Gärdenfors, Peter
    Lund University Cognitive Science, Sweden.
    A Framework for Representing Action Meaning in Artificial Systems via Force Dimensions, 2012. In: Artificial General Intelligence: 5th International Conference, AGI 2012, Oxford, UK, December 8-11, 2012. Proceedings / [ed] Joscha Bach; Ben Goertzel; Matthew Iklé, Heidelberg, Dordrecht, London, New York: Springer Berlin/Heidelberg, 2012, p. 99-106. Conference paper (Refereed)
    Abstract [en]

    General (human) intelligence critically includes understanding human action, both action production and action recognition. Human actions also convey social signals that allow us to predict the actions of others (intent) as well as the physical and social consequences of our actions. What's more, we are able to talk about what we (and others) are doing. We present a framework for action recognition and communication that is based on access to the force dimensions that constrain human actions. The central idea here is that forces and force patterns constitute vectors in conceptual spaces that can represent actions and events. We conclude by pointing to the consequences of this view for how artificial systems could be made to understand and communicate about actions.

  • 15.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hanley, Elizabeth
    University of Michigan.
    Veto, Peter
    University of Cambridge.
    The walker congruency effect and incidental processing of configural and local features in point-light walkers, 2018. Conference paper (Other academic)
    Abstract [en]

    Two visual flanker experiments investigated the roles of configural and local opponent motion cues on the incidental processing of a point-light walker with diagonally configured limbs. Different flankers were used to determine the extent of interference on the visual processing of a central walker. Flankers (walkers) with diagonally configured limbs lacked the local opponent motion of the feet and hands, but contained configural information. Partially scrambled displays with intact opponent motion of the feet at the bottom of the display lacked configural information. These two conditions resulted in different effects of incidental processing. Configural information, without opponent motion, leads to changes in reaction time across flanker conditions, with no measurable congruency effect, while feet-based opponent motion causes a congruency effect without changes in reaction time across different flanker conditions. Life detection is a function of both sources of information.

  • 16.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Johannesson, Mikael
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Lebram, Mikael
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Eriksson, Fredrik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Biological Motion Indicators for the Detection of Cyclists at Night, 2021. In: Proceedings of the 16th SweCog Conference / [ed] Erik Billing; Andreas Kalckert, Skövde: University of Skövde, 2021, p. 29-31. Conference paper (Refereed)
  • 17.
    Hemeren, Paul
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Johannesson, Mikael
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lebram, Mikael
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Eriksson, Fredrik
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Ekman, Kristoffer
    University of Skövde, School of Humanities and Informatics.
    Veto, Peter
    University of Skövde, School of Humanities and Informatics.
    The Use of Perceptual Cues to Determine the Intent of Cyclists in Traffic, 2013. Conference paper (Refereed)
  • 18.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Johannesson, Mikael
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lebram, Mikael
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Eriksson, Fredrik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ekman, Kristoffer
    University of Skövde, School of Bioscience. University of Skövde, The Systems Biology Research Centre.
    Veto, Peter
    University of Skövde, The Informatics Research Centre.
    The Use of Visual Cues to Determine the Intent of Cyclists in Traffic, 2014. In: 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), IEEE Press, 2014, p. 47-51. Conference paper (Refereed)
    Abstract [en]

    The purpose of this research was to answer the following central questions: 1) How accurate are human observers at predicting the behavior of cyclists as the cyclists approached a crossing? 2) If the accuracy is reliably better than chance, what cues were used to make the predictions? 3) At what distance from the crossing did the most critical cues occur? 4) Can the cues be used in a model that reliably predicts cyclist intent? We present results that show a number of indicators that can be used to predict the intention of a cyclist, i.e., the future actions of a cyclist, e.g., “left turn” or “continue forward”.

    Results of empirical studies show that humans are reasonably good at this type of prediction for a majority of the situations studied. However, some situations seem to contain conflicting information. The results also suggested that human prediction of intention relies to a large extent on a single “strong” indicator, e.g., that the cyclist makes a clear “head movement”. Several “weaker” indicators that together could form a strong “combined indicator”, or equivalently strong evidence, are likely to be missed or too complex to be handled by humans in real time. We suggest this line of research can be used to create decision support systems that predict the behavior of cyclists in traffic.

  • 19.
    Hemeren, Paul
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Johannesson, Mikael
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Lebram, Mikael
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Eriksson, Fredrik
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Ekman, Kristoffer
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Systems Biology Research Centre.
    Veto, Peter
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    URBANIST: Signaler som används för att avläsa cyklisters intentioner i trafiken [URBANIST: Signals used to read cyclists' intentions in traffic], 2013. Report (Other academic)
    Abstract [sv, translated to English]

    By observing a small number of specific signals, cyclists' behavior can be predicted with good accuracy, which suggests that the identified signals are meaningful. Knowledge of these signals can be put to practical use, for example to develop simple aids such as deliberate placement of fluorescent or reflective material on joints and/or helmets with differently colored sides; such aids can be expected to strengthen the communication of important signals. The knowledge can also be used to train inexperienced drivers. Both applications can ultimately yield a safer traffic environment for vulnerable road users.

  • 20.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Nair, Vipul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Actions, intentions and environmental constraints in biological motion perception, 2018. In: Spatial Cognition in a Multimedia and Intercultural World: Proceedings of the 7th International Conference on Spatial Cognition (ICSC 2018) / [ed] Thomas Hünefeldt; Marta Olivetti Belardinelli, Springer, 2018, Vol. 19 (Suppl 1), p. S8-S8. Conference paper (Refereed)
    Abstract [en]

    In many ways, human cognition is importantly predictive. We predict the sensory consequences of our own actions, but we also predict, and react to, the sensory consequences of how others experience their own actions. This ability extends to perceiving the intentions of other humans based on past and current actions. We present research results that show that social aspects and future movement patterns can be predicted from fairly simple kinematic patterns in biological motion sequences. The purpose of this presentation is to demonstrate and discuss the different environmental (gravity and perspective) and bodily constraints on understanding our social and movement-based interactions with others. In a series of experiments, we have used psychophysical methods and recordings from interactions with objects in natural settings. This includes experiments on the incidental processing of biological motion as well as driving simulator studies that examine the role of kinematic patterns of cyclists and drivers’ accuracy in predicting cyclists’ intentions in traffic. The results we present show both clear effects of “low-level” biological motion factors, such as opponent motion, on the incidental triggering of attention in basic perceptual tasks and “higher-level” top-down guided perception in the intention prediction of cyclist behavior. We propose to use our results to stimulate discussion about the interplay between expectation-mediated and stimulus-driven effects of visual processing in spatial cognition in the context of human interaction. Such discussion will include the role of context in gesture recognition and the extent to which our visual system can handle visually complex environments.

    Download full text (pdf)
    fulltext
  • 21.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Nair, Vipul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Nicora, Elena
    DIBRIS, University of Genoa, Italy.
    Vignolo, Alessia
    RBCS Department, Istituto Italiano di Tecnologia, Genoa, Italy.
    Noceti, Nicoletta
    DIBRIS, University of Genoa, Italy.
    Odone, Francesca
    DIBRIS, University of Genoa, Italy.
    Rea, Francesco
    RBCS Department, Istituto Italiano di Tecnologia, Genoa, Italy.
    Sandini, Giulio
    RBCS Department, Istituto Italiano di Tecnologia, Genoa, Italy.
    Sciutti, Alessandra
    Contact Unit, Italian Institute of Technology, Genoa, Italy.
    Similarity Judgments of Hand-Based Actions: From Human Perception to a Computational Model2019In: 42nd European Conference on Visual Perception (ECVP) 2019 Leuven, Sage Publications, 2019, Vol. 48, p. 79-79Conference paper (Refereed)
    Abstract [en]

    How do humans perceive actions in relation to other similar actions? How can we develop artificial systems that can mirror this ability? This research uses human similarity judgments of point-light actions to evaluate the output from different visual computing algorithms for motion understanding, based on movement, spatial features, motion velocity, and curvature. The aim of the research is twofold: (a) to devise algorithms for motion segmentation into action primitives, which can then be used to build hierarchical representations for estimating action similarity and (b) to develop a better understanding of human action categorization in relation to judging action similarity. The long-term goal of the work is to allow an artificial system to recognize similar classes of actions, also across different viewpoints. To this purpose, computational methods for visual action classification are used and then compared with human classification via similarity judgments. Confusion matrices for similarity judgments from these comparisons are assessed for all possible pairs of actions. The preliminary results show some overlap between the outcomes of the two analyses. We discuss the extent of the consistency of the different algorithms with human action categorization as a way to model action perception.
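As a hedged illustration only (not the authors' code or data), pairwise "same/different" similarity judgments of the kind described can be collected into a matrix and summarized; the action names and values below are invented:

```python
# Hypothetical sketch: summarize pairwise similarity judgments in a matrix.
# Action names and judgment values are invented stand-ins.
import numpy as np

actions = ["wave", "point", "grasp", "push"]
n = len(actions)

# judgments[i, j] = proportion of trials in which actions i and j were
# judged to be the same action (random stand-in data).
rng = np.random.default_rng(0)
judgments = rng.random((n, n))
np.fill_diagonal(judgments, 1.0)           # identical actions always match
judgments = (judgments + judgments.T) / 2  # similarity is symmetric

def confusion_rate(matrix):
    """Mean off-diagonal similarity: how often distinct actions are confused."""
    off_diag = matrix[~np.eye(len(matrix), dtype=bool)]
    return float(off_diag.mean())

print(f"mean confusion rate: {confusion_rate(judgments):.2f}")
```

The off-diagonal mean is only one coarse summary; the study itself assessed full confusion matrices for all action pairs.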

  • 22.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Rybarczyk, Yves
    Dalarna University, Falun, Sweden.
    The Visual Perception of Biological Motion in Adults2020In: Modelling Human Motion: From Human Perception to Robot Design / [ed] Nicoletta Noceti, Alessandra Sciutti, Francesco Rea, Cham: Springer, 2020, p. 53-71Chapter in book (Refereed)
    Abstract [en]

    This chapter presents research about the roles of different levels of visual processing and motor control on our ability to perceive biological motion produced by humans and by robots. The levels of visual processing addressed include high-level semantic processing of action prototypes based on global features as well as lower-level local processing based on kinematic features. A further important aspect concerns the interaction between these two levels of processing and the interaction between our own movement patterns and their impact on our visual perception of biological motion. The authors' research results describe the conditions under which semantic and kinematic features influence one another in our understanding of human actions. In addition, results are presented to illustrate the claim that motor control and different levels of the visual perception of biological motion have clear consequences for human–robot interaction. Understanding the movement of robots is greatly facilitated when that movement is consistent with the psychophysical constraints of Fitts' law, minimum jerk and the two-thirds power law.
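The two-thirds power law mentioned above relates movement speed to path curvature: tangential velocity is proportional to curvature raised to the power -1/3 (equivalently, angular velocity scales with curvature to the 2/3). A minimal numerical sketch, with illustrative values only:

```python
# Minimal sketch of the two-thirds power law: tangential velocity v is
# proportional to curvature kappa**(-1/3). Gain and curvature values are
# illustrative, not measured data.
import numpy as np

def two_thirds_velocity(kappa, gain=1.0):
    """Predicted tangential velocity for curvature values kappa."""
    return gain * np.power(kappa, -1.0 / 3.0)

kappa = np.linspace(0.1, 5.0, 50)        # curvature along a movement path
v = two_thirds_velocity(kappa, gain=2.0)

# Recover the exponent with a log-log fit: the slope should be -1/3.
slope, _ = np.polyfit(np.log(kappa), np.log(v), 1)
print(round(slope, 3))  # → -0.333
```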

  • 23.
    Hemeren, Paul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Veto, Peter
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Netherlands.
    Cai, Li
    Pin An Technology Co. Ltd., Shenzhen, China.
    Sun, Jiong
    Volvo Cars, Göteborg, Sweden.
    Kinematic-based classification of social gestures and grasping by humans and machine learning techniques2021In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, no 308, p. 1-17, article id 699505Article in journal (Refereed)
    Abstract [en]

    The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. This ability to classify human movement into meaningful gestures or segments also plays a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest and Support Vector Machine) is assessed by using human classification data as a reference for evaluating the classification performance of machine learning techniques for thirty hand/arm gestures. The gestures are rated according to the extent of grasping motion on one task and the extent to which the same gestures are perceived as social according to another task. The results indicate that humans clearly rate the gestures differently according to the two tasks. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as communicative point-light actions.
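As an illustrative sketch only (not the authors' pipeline, which compared four techniques including Locality-Sensitive Hashing Forest, Random Forest and SVM), the core evaluation idea of scoring a machine-learning classifier against human labels can be shown with a minimal k-Nearest Neighbor classifier on synthetic features:

```python
# Illustrative sketch (not the authors' code): score a minimal k-Nearest
# Neighbor classifier against human classification labels used as the
# reference. Features and labels are synthetic stand-ins for gesture data.
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Majority vote among the k nearest training samples (Euclidean)."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        preds.append(int(np.bincount(nearest).argmax()))
    return np.array(preds)

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))            # 4 kinematic features per gesture
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # invented "human" reference labels
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

agreement = (knn_predict(X_tr, y_tr, X_te) == y_te).mean()
print(f"agreement with human reference labels: {agreement:.2f}")
```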

    Download full text (pdf)
    fulltext
  • 24.
    Li, Cai
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Bredies, Katharina
    Department of Design, Faculty of Textiles, Engineering and Business, University of Borås, Sweden.
    Lund, Anja
    Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Sweden.
    Nierstrasz, Vincent
    Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Sweden.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    k-Nearest-Neighbour based Numerical Hand Posture Recognition using a Smart Textile Glove2015In: AMBIENT 2015: The Fifth International Conference on Ambient Computing, Applications, Services and Technologies / [ed] MaartenWeyn, International Academy, Research and Industry Association (IARIA), 2015, p. 36-41Conference paper (Refereed)
    Abstract [en]

    In this article, the authors present an interdisciplinary project that illustrates the potential and challenges in dealing with electronic textiles as sensing devices. An interactive system consisting of a knitted sensor glove, an electronic circuit, and a numeric hand posture recognition algorithm based on k-nearest-neighbour (kNN) is introduced. The design of the sensor glove itself is described, considering two sensitive fiber materials – piezoresistive and piezoelectric fibers – and the construction on an industrial knitting machine, as well as the electronic setup, is sketched out. Based on the characteristics of the textile sensors, a kNN technique based on a condensed dataset has been chosen to recognize hand postures indicating numbers from one to five from the sensor data. The authors describe two types of data condensation techniques (Reduced Nearest Neighbours and Fast Condensed Nearest Neighbours), used to improve the data quality for kNN, which are compared in terms of run time, condensation rate and recognition accuracy. Finally, the article gives an outlook on potential application scenarios for sensor gloves in pervasive computing.
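The condensation idea can be illustrated with Hart's classic Condensed Nearest Neighbour rule, a simpler relative of the Reduced and Fast Condensed variants named in the abstract; the glove readings below are synthetic stand-ins, not the article's data:

```python
# Sketch of dataset condensation for kNN in the spirit of the abstract,
# using Hart's classic Condensed Nearest Neighbour rule. The "glove"
# readings are synthetic stand-ins for two hand postures.
import numpy as np

def condense(X, y):
    """Keep only the samples needed for 1-NN to classify the training set."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            # classify X[i] with 1-NN over the current condensed set
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][int(np.argmin(d))] != y[i]:
                keep.append(i)   # misclassified -> add to the condensed set
                changed = True
    return np.array(sorted(set(keep)))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 3)),   # readings for posture "one"
               rng.normal(2.0, 0.3, (50, 3))])  # readings for posture "two"
y = np.array([0] * 50 + [1] * 50)

idx = condense(X, y)
print(f"condensed {len(X)} samples down to {len(idx)}")
```

At convergence, 1-NN over the condensed subset classifies every original training sample correctly, which is what makes condensation attractive on resource-limited hardware.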

  • 25.
    Malmgren, Helge
    et al.
    Göteborgs universitet.
    Hemeren, Paul
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Svensson, Henrik
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Haglund, Björn
    Begrepp och mentala representationer2012In: Kognitionsvetenskap / [ed] Jens Allwood; Mikael Jensen, 2012, p. 175-190Chapter in book (Refereed)
  • 26.
    Nair, Vipul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Drejing, Karl
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Incidental processing of biological motion: Effects of orientation, local-motion and global-form features2018Conference paper (Refereed)
    Abstract [en]

    Previous studies on biological motion perception indicate that the processing of biological motion is fast and automatic. A subset of these studies has shown that task-irrelevant and to-be-ignored biological figures are incidentally processed, since they interfere with the main task. However, more evidence is needed to understand the role of local-motion and global-form processing mechanisms in incidentally processed biological figures. This study investigates the effects of local-motion and global-form features on incidental processing. Point-light walkers (PLW) were used in a flanker paradigm in a direction discrimination task to assess the influence of the flankers. Our results show that upright-oriented PLW flankers with global-form features have more influence on visual processing of the central PLW than inverted or scrambled PLW flankers with only local-motion features.

    Download full text (pdf)
    Nair ECVP 2018
  • 27.
    Nair, Vipul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Vignolo, Alessia
    CONTACT Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Noceti, Nicoletta
    MaLGa Center -DIBRIS, Universita di Genova, Genova, Italy.
    Nicora, Elena
    MaLGa Center -DIBRIS, Universita di Genova, Genova, Italy.
    Sciutti, Alessandra
    CONTACT Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Rea, Francesco
    RBCS Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Bhatt, Mehul
    School of Science and Technology, Örebro University, Sweden.
    Odone, Francesca
    MaLGa Center -DIBRIS, Universita di Genova, Genova, Italy.
    Sandini, Giulio
    RBCS Unit, Istituto Italiano di Tecnologia, Genoa, Italy.
    Kinematic primitives in action similarity judgments: A human-centered computational model2023In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 15, no 4, p. 1981-1992Article in journal (Refereed)
    Abstract [en]

    This paper investigates the role that kinematic features play in human action similarity judgments. The results of three experiments with human participants are compared with a computational model that solves the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. Specifically, the similarity of most of the given actions could be judged based on very limited information from a single feature domain (velocity or spatial). However, both velocity and spatial features were necessary to reach human-level performance on the evaluated actions. The experimental results from an action identification task also indicate that human participants clearly relied on kinematic information rather than on action semantics. The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.

    Download full text (pdf)
    fulltext
  • 28.
    Nair, Vipul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Vignolo, Alessia
    CONTACT Unit, Istituto Italiano di Tecnologia, Italy.
    Noceti, Nicoletta
    MaLGa Center - DIBRIS, Universita di Genova, Italy.
    Nicora, Elena
    MaLGa Center - DIBRIS, Universita di Genova, Italy.
    Sciutti, Alessandra
    CONTACT Unit, Istituto Italiano di Tecnologia, Italy.
    Rea, Francesco
    RBCS Unit, Istituto Italiano di Tecnologia, Italy.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Odone, Francesca
    MaLGa Center - DIBRIS, Universita di Genova, Italy.
    Sandini, Giulio
    RBCS Unit, Istituto Italiano di Tecnologia, Italy.
    Action similarity judgment based on kinematic primitives2020In: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), IEEE, 2020Conference paper (Refereed)
    Abstract [en]

    Understanding which features humans rely on in visually recognizing action similarity is a crucial step towards a clearer picture of human action perception from a learning and developmental perspective. In the present work, we investigate to what extent a computational model based on kinematics can determine action similarity and how its performance relates to human similarity judgments of the same actions. To this aim, twelve participants perform an action similarity task, and their performance is compared to that of a computational model solving the same task. The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives. The comparative experimental results show that both the model and human participants can reliably identify whether two actions are the same or not. However, the model produces more false hits and has a greater selection bias than the human participants. A possible reason for this is the particular sensitivity of the model towards kinematic primitives of the presented actions. In a second experiment, human participants' performance on an action identification task indicated that they relied solely on kinematic information rather than on action semantics. The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features, which can provide an essential basis for classifying human actions.

  • 29.
    Nair, Vipul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Suchan, Jakob
    German Aerospace Center (DLR), University of Bremen, Germany.
    Bhatt, Mehul
    Örebro University, Sweden.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Anticipatory instances in films: What do they tell us about event understanding?2022Conference paper (Refereed)
    Abstract [en]

    Event perception research highlights the significance of visuospatial attributes that influence event segmentation and prediction. The present study investigates how the visuospatial attributes of film events correlate with viewers' ongoing event processes such as anticipatory gaze, prediction and segmentation. We derive film instances (such as occlusion, enter/exit, and turning towards) that show trends of high anticipatory viewing behaviour from an in-depth multimodal (speech, hand-action, gaze, etc.) event-feature analysis of 25 movie scenes, correlated with visual attention analysis (eye-tracking, 32 participants per scene). The first results provide a solid basis for using these derived instances to further examine the nature of the different visuospatial attributes in relation to event changes (where anticipation and segmentation occur). With the results, we aim to argue that by investigating film instances of an anticipatory nature, one could explicate how humans perform high-level characterization of visuospatial attributes and understand events.

    Download full text (pdf)
    fulltext
  • 30.
    Nair, Vipul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Suchan, Jakob
    German Aerospace Center (DLR), Germany.
    Bhatt, Mehul
    Örebro University, Sweden.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Attentional synchrony in films: A window to visuospatial characterization of events2022In: Proceedings SAP 2022: ACM Symposium on Applied Perception September 22 – 23, 2022 / [ed] Stephen N. Spencer, Association for Computing Machinery (ACM), 2022, article id 8Conference paper (Refereed)
    Abstract [en]

    The study of event perception emphasizes the importance of visuospatial attributes in everyday human activities and how they influence event segmentation, prediction and retrieval. Attending to these visuospatial attributes is the first step toward event understanding, and therefore correlating attentional measures with such attributes would help to further our understanding of event comprehension. In this study, we focus on attentional synchrony among other attentional measures and analyze selected film scenes through the lens of a visuospatial event model. Here we present the first results of an in-depth multimodal (head-turn, hand-action, etc.) visuospatial analysis of 10 movie scenes correlated with visual attention (eye-tracking, 32 participants per scene). With the results, we tease apart event segments of high and low attentional synchrony and describe the distribution of attention in relation to the visuospatial features. This analysis gives us an indirect measure of attentional saliency for a scene with a particular visuospatial complexity, ultimately directing the attentional selection of the observers in a given context.

    Download full text (pdf)
    fulltext
  • 31.
    Nair, Vipul
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Suchan, Jakob
    University of Bremen, Germany.
    Bhatt, Mehul
    School of Science and Technology, Örebro University, Sweden ; CoDesign Lab.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Event segmentation through the lens of multimodal interaction2021In: Proceedings of the 8th International Conference on Spatial Cognition: Cognition and Action in a Plurality of Spaces (ICSC 2021) / [ed] Thomas Hünefeldt; Marta Olivetti Belardinelli, Springer, 2021, p. 61-62Conference paper (Refereed)
    Abstract [en]

    Research in naturalistic event perception highlights the significance of visuospatial attributes pertaining to everyday embodied human interaction. This research focuses on developing a conceptual cognitive model to characterise the role of multimodality in human interaction, its influence on visuospatial representation, event segmentation, and high-level event prediction.

    Our research aims to characterise the influence of modalities such as visual attention, speech, hand-action, body-pose, head-movement, spatial-position, motion, and gaze on judging event segments. Our particular focus is on visuoauditory narrative media. We select 25 movie scenes from a larger project concerning cognitive film/media studies and perform detailed multimodal analysis against the backdrop of an elaborate (formally specified) event analysis ontology. Corresponding to the semantic event analysis of each scene, we also perform high-level visual attention analysis (eye-tracking based) with 32 participants per scene. Correlating the features of each scene with visual attention constitutes the key method that we utilise in our analysis.

    We hypothesise that the attentional performance on event segments reflects the influence exerted by multimodal cues on event segmentation and prediction, thereby enabling us to explicate the representational basis of events. The first results show trends of multiple viewing behaviours such as attentional synchrony, gaze pursuit and attentional saliency towards human faces.

    Work is presently in progress, further investigating the role of visuospatial/auditory cues in high-level event perception, e.g., involving anticipatory gaze vis-a-vis event prediction. This conceptual cognitive model and its behavioural outcomes have many potential applications in domains such as (digital) narrative media design and social robotics.

    Download full text (pdf)
    ICSC-2021_Nair
    Download (pdf)
    ICSC 2021-Poster
  • 32.
    Sun, Jiong
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Seoane, Fernando
    Swedish School of Textiles, University of Borås, Borås, Sweden / Inst. for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Dept. Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden.
    Zhou, Bo
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Categories of touch: Classifying human touch using a soft tactile sensor2017Conference paper (Refereed)
    Abstract [en]

    Social touch plays an important role not only in human communication but also in human-robot interaction. Here we report results from an ongoing study on affective human-robot interaction. In our previous research, touch type was shown to be informative for communicated emotion. A soft matrix array sensor is used to capture the tactile interaction between human and robot, and a method based on PCA and kNN is applied in the experiment to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate for classified touch types of 71%, with large variability between different types of touch. Results are discussed in relation to affective HRI and social robotics.
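A minimal sketch of the PCA-plus-kNN idea described above (not the authors' implementation; the sensor size, touch types and data are invented for illustration):

```python
# Illustrative sketch (not the authors' pipeline): reduce flattened tactile
# frames with PCA, then classify touch type with k-nearest neighbour using
# leave-one-out evaluation. The 8x8 frames and touch types are invented.
import numpy as np

rng = np.random.default_rng(7)
# 100 flattened 8x8 pressure frames: "pat" (localized) vs "stroke" (spread)
pat = rng.normal(0.0, 1.0, (50, 64))
pat[:, :8] += 3.0
stroke = rng.normal(0.0, 1.0, (50, 64))
stroke[:, 32:] += 3.0
X = np.vstack([pat, stroke])
y = np.array([0] * 50 + [1] * 50)

# PCA via SVD: project the centred data onto the top principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                     # 5-dimensional PCA space

def knn(train_Z, train_y, z, k=3):
    d = np.linalg.norm(train_Z - z, axis=1)
    return int(np.bincount(train_y[np.argsort(d)[:k]]).argmax())

acc = np.mean([knn(np.delete(Z, i, 0), np.delete(y, i), Z[i]) == y[i]
               for i in range(len(Z))])  # leave-one-out accuracy
print(f"leave-one-out accuracy: {acc:.2f}")
```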

    Download full text (pdf)
    fulltext
  • 33.
    Sun, Jiong
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Redyuk, Sergey
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Tactile Interaction and Social Touch: Classifying Human Touch using a Soft Tactile Sensor2017In: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction, New York: Association for Computing Machinery (ACM), 2017, p. 523-526Conference paper (Refereed)
    Abstract [en]

    This paper presents an ongoing study on affective human-robot interaction. In our previous research, touch type was shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot, and six machine learning methods, including CNN, RNN and C3D, are implemented to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 95% by C3D for the classified touch types, providing stable classification results for developing social touch technology.

    Download full text (pdf)
    fulltext
  • 34.
    Thill, Serge
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Duran, Boris
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Social signals in action recognition and intention understanding2013Collection (editor) (Refereed)
  • 35.
    Thill, Serge
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul E.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Durán, Boris
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Prediction of human action segmentation based on end-effector kinematics using linear models2011In: European Perspectives on Cognitive Science: Proceedings of the European Conference on Cognitive Science / [ed] B. Kokinov, A. Karmiloff-Smith, N. J. Nersessian, Sofia: New Bulgarian University Press , 2011Conference paper (Refereed)
    Abstract [en]

    The work presented in this paper builds on previous research that analysed human action segmentation in the case of simple object manipulations with the hand (rather than larger-scale actions). When designing algorithms to segment observed actions, for instance to train robots by imitation, the typical approach involves non-linear models, but it is less clear whether human action segmentation is also based on such analyses. In the present paper, we therefore explore (1) whether linear models built from observed kinematic variables of a human hand can accurately predict human action segmentation and (2) which kinematic variables are the most important in such a task. In previous work, we recorded speed, acceleration and change in direction for the wrist and the tip of each of the five fingers during the execution of actions, as well as the segmentation of these actions into individual components by humans. Here, we use this data to train a large number of models based on every possible training set available and find that, among other variables, the speed of the wrist and the change in direction of the index finger were preferred in models with good performance. Overall, the best models achieved R2 values over 0.5 on novel test data, but the average performance of trained models was modest. We suggest that this is due to a suboptimal training set (which was not specifically designed for the present task) and that further work be carried out to identify better training sets, since our initial results indicate that linear models may indeed be a viable approach to predicting human action segmentation.
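The modelling approach, fitting linear models from kinematic variables and evaluating them by R2 on held-out data, can be sketched as follows (all values are synthetic stand-ins, not the study's recordings):

```python
# Hedged sketch (not the study's data or code): fit an ordinary-least-squares
# linear model from kinematic variables (e.g. wrist speed, index-finger
# direction change) to a segmentation signal and report R^2 on held-out data.
import numpy as np

rng = np.random.default_rng(3)
n = 200
wrist_speed = rng.normal(size=n)
index_dir_change = rng.normal(size=n)
noise = rng.normal(scale=0.7, size=n)
segmentation = 1.5 * wrist_speed + 0.8 * index_dir_change + noise

X = np.column_stack([np.ones(n), wrist_speed, index_dir_change])
X_tr, y_tr = X[:150], segmentation[:150]
X_te, y_te = X[150:], segmentation[150:]

beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)   # ordinary least squares
pred = X_te @ beta
r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"test R^2: {r2:.2f}")
```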

  • 36.
    Thill, Serge
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Nilsson, Maria
    Cooperative Systems Group, Viktoria Swedish ICT, Göteborg, Sweden.
    The apparent intelligence of a system as a factor in situation awareness2014In: 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), IEEE Communications Society, 2014, p. 52-58Conference paper (Refereed)
    Abstract [en]

    In the context of interactive and automated vehicles, driver situation awareness becomes an increasingly important consideration for future traffic systems, whether it concerns the current status of the vehicle or the surrounding environment. Here, we present a simulator study investigating whether the apparent intelligence - i.e. intelligence as perceived by the driver, which is distinct from how intelligent a designer might think the system is - of a vehicle is a factor in the expectations and behaviour of the driver. We are specifically interested in perceived intelligence as a factor in situation awareness. To this end, the study modulates both traffic conditions and the type of navigational assistance given in a goal-navigation task to influence participants' perception of the system. Our results show two distinct effects relevant to situation awareness: 1) participants who think the vehicle is highly intelligent spend more time glancing at the surrounding environment through the left door window than those who rank intelligence low, and 2) participants prefer an awareness of why the navigation aid decided on specific directions but are sensitive to the manner in which it is presented. Our results have broader implications for the design of future automated systems in vehicles.

  • 37.
    Thill, Serge
    et al.
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Nilsson, Maria
    Cooperative Systems Group, Viktoria Swedish ICT, Sweden.
    Hemeren, Paul
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    On the influence of a vehicle's apparent intelligence on driving behaviour and consequences for car UI design2013In: Adjunct Proceedings of Automotive User Interfaces and Interactive Vehicular Applications 2013 Oct 27 -- 30, Eindhoven, The Netherlands, 2013, p. 91-92Conference paper (Refereed)
  • 38.
    Thill, Serge
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Riveiro, Maria
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lagerstedt, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lebram, Mikael
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Habibovic, Azra
    Research Institutes of Sweden, RISE Viktoria, Lindholmen Science Park, Göteborg, Sweden.
    Klingegård, Maria
    Research Institutes of Sweden, RISE Viktoria, Lindholmen Science Park, Göteborg, Sweden.
    Driver adherence to recommendations from support systems improves if the systems explain why they are given: A simulator study2018In: Transportation Research Part F: Traffic Psychology and Behaviour, ISSN 1369-8478, E-ISSN 1873-5517, Vol. 56, p. 420-435Article in journal (Refereed)
    Abstract [en]

    This paper presents a large-scale simulator study on driver adherence to recommendations given by driver support systems, specifically eco-driving support and navigation support. 123 participants took part in this study, and drove a vehicle simulator through a pre-defined environment for a duration of approximately 10 min. Depending on the experimental condition, participants were either given no eco-driving recommendations, or a system whose provided support was either basic (recommendations were given in the form of an icon displayed in a manner that simulates a heads-up display) or informative (the system additionally displayed a line of text justifying its recommendations). A navigation system that likewise provided either basic or informative support, depending on the condition, was also provided.

    Effects are measured in terms of estimated simulated fuel savings as well as engine braking/coasting behaviour and gear change efficiency. Results indicate improvements in all variables. In particular, participants who had the support of an eco-driving system spent a significantly higher proportion of the time coasting. Participants also changed gears at lower engine RPM when using an eco-driving support system, and significantly more so when the system provided justifications. Overall, the results support the notion that providing reasons why a support system puts forward a certain recommendation improves adherence to it over mere presentation of the recommendation.

    Finally, results indicate that participants' driving style was less eco-friendly if the navigation system provided justifications but the eco-system did not. This may be due to participants considering the two systems as one whole rather than separate entities with individual merits. This has implications for how to design and evaluate a given driver support system, since its effectiveness may depend on the performance of other systems in the vehicle.

  • 39.
    Vernon, David
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Computer and Information Science, Linköping University, Sweden.
    An Architecture-oriented Approach to System Integration in Collaborative Robotics Research Projects: An Experience Report2015In: Journal of Software Engineering for Robotics, E-ISSN 2035-3928, Vol. 6, no 1, p. 15-32Article in journal (Refereed)
    Abstract [en]

    Effective system integration requires strict adherence to strong software engineering standards, a practice not much favoured in many collaborative research projects. We argue that component-based software engineering (CBSE) provides a way to overcome this problem because it provides flexibility for developers while requiring the adoption of only a modest number of software engineering practices. This focus on integration complements software re-use, the more usual motivation for adopting CBSE. We illustrate our argument by showing how a large-scale system architecture for an application in the domain of robot-enhanced therapy for children with autism spectrum disorder (ASD) has been implemented. We highlight the manner in which the integration process is facilitated by the architecture implementation of a set of placeholder components that comprise stubs for all functional primitives, as well as the complete implementation of all inter-component communications. We focus on the component-port-connector meta-model and show that the YARP robot platform is a well-matched middleware framework for the implementation of this model. To facilitate the validation of port-connector communication, we configure the initial placeholder implementation of the system architecture as a discrete event simulation and control the invocation of each component’s stub primitives probabilistically. This allows the system integrator to adjust the rate of inter-component communication while respecting its asynchronous and concurrent character. Also, individual ports and connectors can be periodically selected as the simulator cycles through each primitive in each sub-system component. This ability to control the rate of connector communication considerably eases the task of validating component-port-connector behaviour in a large system. 
Ultimately, over and above its well-accepted benefits for software re-use in robotics, CBSE strikes a good balance between software engineering best practice and the socio-technical problem of managing effective integration in collaborative robotics research projects. 
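    The validation technique the abstract describes (cycling a discrete-event simulation through placeholder components and invoking their stub primitives probabilistically, so that connector traffic can be exercised at an adjustable rate) can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual architecture or YARP code: the component names, the `StubComponent` class, and the `simulate` function are all illustrative assumptions.

    ```python
    import random

    class StubComponent:
        """Placeholder component: stub primitives only, no real functionality.

        `primitives` maps each stub primitive's name to the name of the
        component its output port connects to (hypothetical names)."""

        def __init__(self, name, primitives):
            self.name = name
            self.primitives = primitives
            self.sent = []  # log of (primitive, target) messages emitted

        def invoke(self, primitive):
            # A real implementation would write a dummy message to the
            # primitive's output port; here we just record the traffic.
            target = self.primitives[primitive]
            self.sent.append((primitive, target))
            return target

    def simulate(components, cycles, invoke_prob, seed=0):
        """Cycle through every stub primitive of every component, firing each
        with probability `invoke_prob`. Raising or lowering `invoke_prob`
        adjusts the rate of inter-component communication, as the abstract
        describes. Returns a log of (component, primitive, target) events."""
        rng = random.Random(seed)
        log = []
        for _ in range(cycles):
            for comp in components:
                for primitive in comp.primitives:
                    if rng.random() < invoke_prob:
                        log.append((comp.name, primitive, comp.invoke(primitive)))
        return log

    if __name__ == "__main__":
        # Illustrative two-component architecture with one connector each.
        sensing = StubComponent("sensing", {"emitPercept": "deliberation"})
        deliberation = StubComponent("deliberation", {"selectAction": "actuation"})
        traffic = simulate([sensing, deliberation], cycles=100, invoke_prob=0.3)
        print(f"{len(traffic)} connector events generated")
    ```

    The point of the probabilistic trigger is that the system integrator can thin out or intensify connector traffic without changing any component code, while the cycling order still guarantees that every port-connector pair is eventually exercised.
    
    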

  • 40.
    Veto, Peter
    et al.
    Department of Neurological, Neuropsychological, Morphological and Movement Sciences, Section of Physiology and Psychology, University of Verona, Strada Le Grazie, 8, 37143 Verona – Italy.
    Thill, Serge
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul
    University of Skövde, School of Humanities and Informatics. University of Skövde, The Informatics Research Centre.
    Incidental and non-incidental processing of biological motion: Orientation, attention and life detection2013In: Cooperative Minds: Social Interaction and Group Dynamics: Proceedings of the 35th Annual Meeting of the Cognitive Science Society Berlin, Germany, July 31-August 3, 2013 / [ed] Markus Knauff, Michael Pauen, Natalie Sebanz & Ipke Wachsmuth, Cognitive Science Society, Inc., 2013, p. 1528-1533Conference paper (Refereed)
    Abstract [en]

    Based on the unique traits of biological motion perception, the existence of a “life detector”, a special sensitivity to perceiving motion patterns typical of animals, seems plausible (Johnson, 2006). Showing motion displays upside-down or with changes in global structure is known to disturb processing in different ways, but little is known yet about how inversion affects attention and incidental processing. To examine the perception of upright and inverted point-light walkers with regard to incidental processing, we used a flanker paradigm (Eriksen & Eriksen, 1974) adapted for biological motion (Thornton & Vuong, 2004), and extended it to include inverted and scrambled figures. Results show that inverted walkers do not evoke incidental processing, and that they allow high accuracy in performance only when attentional capacities are not diminished. An asymmetrical interaction between upright and inverted figures was found, which suggests qualitatively different processing pathways.
