his.se Publications
1 - 3 of 3
  • 1.
    Bartlett, Madeleine
    Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom.
    Edmunds, Charlotte E.R.
    Warwick Business School, University of Warwick, Coventry, United Kingdom.
    Belpaeme, Tony
    Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, United Kingdom / ID Lab—imec, University of Ghent, Belgium.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, Netherlands.
    Lemaignan, Séverin
    Bristol Robotics Lab, University of the West of England, Bristol, United Kingdom.
    What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions (2019). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 6, No. 49. Article in journal (Refereed)
    Abstract [en]

    In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.

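The abstract above outlines a two-step analysis: an inter-rater agreement check on participants' questionnaire ratings, followed by machine learning classifiers trained on those ratings. As a rough illustration of that pipeline, here is a minimal Python sketch; the agreement statistic (pairwise Cohen's kappa), the classifier (a random forest), and the data layout are all assumptions chosen for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the two-step analysis described in the abstract:
# (1) inter-rater agreement on ratings, (2) classification from ratings.
# The statistic, classifier, and data shapes are illustrative assumptions.
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# ratings[r, c] = rater r's ordinal rating (1-5) of clip c (placeholder data).
n_raters, n_clips = 12, 40
ratings = rng.integers(1, 6, size=(n_raters, n_clips))

# Mean pairwise Cohen's kappa as a simple inter-rater agreement summary.
kappas = [cohen_kappa_score(ratings[a], ratings[b])
          for a, b in combinations(range(n_raters), 2)]
print(f"mean pairwise kappa: {np.mean(kappas):.3f}")

# Predict a per-clip internal-state label from the averaged ratings,
# with cross-validation to estimate classifier performance.
X = ratings.mean(axis=0).reshape(-1, 1)   # one feature per clip
y = rng.integers(0, 3, size=n_clips)      # placeholder state labels
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"classifier accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```
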
  • 2.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Svensson, Henrik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Göteborgs Universitet, Tillämpad IT.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linköping University.
    Finding Your Way from the Bed to the Kitchen: Re-enacting and Re-combining Sensorimotor Episodes Learned from Human Demonstration (2016). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, No. 9. Article in journal (Refereed)
    Abstract [en]

    Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations, and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, towards a goal specified by top-down contextual information. 

    The robot model is also used in a covert mode, where actions are inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly.

    We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior towards specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.

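The covert mode described above (actions inhibited, perceptions supplied by a forward model) can be summarized as a single sensorimotor loop in which the world is swapped out for a learned predictor. The sketch below is a hypothetical Python rendering of that idea; the class and function names are illustrative, and the forward model here is deliberately trivial, unlike the model learned from demonstration in the paper.

```python
# Hypothetical sketch of an overt/covert sensorimotor loop: in covert mode
# the action is inhibited and the next percept comes from a forward model
# rather than from the world. Names are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Callable

State = tuple[float, float]   # e.g. an (x, y) position percept
Action = tuple[float, float]  # e.g. a displacement command

@dataclass
class SimulationAgent:
    policy: Callable[[State], Action]               # percept -> action
    forward_model: Callable[[State, Action], State]  # predicted next percept

    def run(self, percept: State, steps: int, covert: bool,
            world: Callable[[State, Action], State] | None = None) -> list[State]:
        """Roll the loop forward, either overtly or as an inner simulation."""
        trajectory = [percept]
        for _ in range(steps):
            action = self.policy(percept)
            if covert:
                percept = self.forward_model(percept, action)  # imagined percept
            else:
                percept = world(percept, action)               # real interaction
            trajectory.append(percept)
        return trajectory

# Toy demonstration: the policy steps toward a goal at (1, 1); the forward
# model is perfect here, which a model learned from demonstration would not be.
goal = (1.0, 1.0)
step = lambda p: (0.1 * (goal[0] - p[0]), 0.1 * (goal[1] - p[1]))
move = lambda p, a: (p[0] + a[0], p[1] + a[1])
agent = SimulationAgent(policy=step, forward_model=move)
imagined = agent.run((0.0, 0.0), steps=20, covert=True)
print(f"imagined end point: ({imagined[-1][0]:.2f}, {imagined[-1][1]:.2f})")
```
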
  • 3.
    Moore, Roger K.
    University of Sheffield, United Kingdom.
    Marxer, Ricard
    University of Sheffield, United Kingdom.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Vocal interactivity in-and-between humans, animals and robots (2016). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, article id 61. Article, research review (Refereed)
    Abstract [en]

    Almost all animals exploit vocal signals for a range of ecologically-motivated purposes: detecting predators/prey and marking territory, expressing emotions, establishing social relations and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a business-person accessing stock prices using Siri, vocalisation provides a valuable communication channel through which behaviour may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human-machine interaction. Opportunities for cross-fertilisation between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyse different habitats or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within-and-between humans, animals and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an inter-disciplinary analysis.
