Högskolan i Skövde

his.se Publications
1 - 50 of 84
  • 1.
    Alenljung, Beatrice
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Andreasson, Rebecca
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Information Technology, Visual Information & Interaction. Uppsala University, Uppsala, Sweden.
    Billing, Erik A.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    User Experience of Conveying Emotions by Touch (2017). In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 1240-1247. Conference paper (Refereed)
    Abstract [en]

    In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than conveying negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users’ replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and design of social and affective robotics in particular.

  • 2.
    Alenljung, Beatrice
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Conveying Emotions by Touch to the Nao Robot: A User Experience Perspective (2018). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, no 4, article id 82. Article in journal (Refereed)
    Abstract [en]

    Social robots are expected gradually to be used by more and more people in a wider range of settings, domestic as well as professional. As a consequence, the features and quality requirements on human–robot interaction will increase, comprising possibilities to communicate emotions, establishing a positive user experience, e.g., using touch. In this paper, the focus is on depicting how humans, as the users of robots, experience tactile emotional communication with the Nao robot, as well as identifying aspects affecting the experience and touch behavior. A qualitative investigation was conducted as part of a larger experiment. The major findings consist of 15 different aspects that vary along one or more dimensions and how those influence the four dimensions of user experience that are present in the study, as well as the different parts of touch behavior of conveying emotions.

  • 3.
    Almér, Alexander
    et al.
    Göteborgs Universitet, Institutionen för tillämpad informationsteknologi.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Göteborgs universitet.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of the 2016 Swecog conference (2016). Conference proceedings (editor) (Refereed)
  • 4.
    Andreasson, Rebecca
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot (2018). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 10, no 4, p. 473-491. Article in journal (Refereed)
    Abstract [en]

    Affective touch has a fundamental role in human development, social bonding, and for providing emotional support in interpersonal relationships. We present what is, to our knowledge, the first HRI study of tactile conveyance of both positive and negative emotions (affective touch) on the Nao robot, based on an experimental set-up from a study of human–human tactile communication. In the present work, participants conveyed eight emotions to a small humanoid robot via touch. We found that female participants conveyed emotions for a longer time, using more varied interaction and touching more regions on the robot’s body, compared to male participants. Several differences between emotions were found, such that emotions could be classified by the valence of the emotion conveyed, by combining touch amount and duration. Overall, these results show high agreement with those reported for human–human affective tactile communication and could also have impact on the design and placement of tactile sensors on humanoid robots.

  • 5.
    Arweström Jansson, Anders
    et al.
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Axelsson, Anton
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Andreasson, Rebecca
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of the 13th Swecog conference (2017). Conference proceedings (editor) (Refereed)
  • 6.
    Banaee, Hadi
    et al.
    School of Science and Technology, Örebro University, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Proceedings of the 17th SweCog Conference: Örebro 2022, 16-17 June (2022). Conference proceedings (editor) (Refereed)
  • 7.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    A New Look at Habits using Simulation Theory (2017). In: Proceedings of the Digitalisation for a Sustainable Society: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition, Göteborg, Sweden, 2017. Conference paper (Refereed)
    Abstract [en]

    Habits, as a form of behavior re-execution without explicit deliberation, are discussed in terms of implicit anticipation, to be contrasted with explicit anticipation and mental simulation. Two hypotheses, addressing how habits and mental simulation may be implemented in the brain and to what degree they represent two modes of brain function, are formulated. Arguments for and against the two hypotheses are discussed briefly, specifically addressing whether habits and mental simulation represent two distinct functions, or to what degree there may be intermediate forms of habit execution involving partial deliberation. A potential role of habits in memory consolidation is also hypothesized.

  • 8.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior (2012). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

    The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

    One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce also more complex behaviors.
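    The thesis describes PSL as a variable-order Markov model that learns to predict the next sensory-motor event from a history of past events. As a rough illustration of that idea only (a toy sketch, not the actual PSL algorithm; class and parameter names are hypothetical), such a longest-context predictor can look like this:

```python
from collections import Counter, defaultdict

class VariableOrderPredictor:
    """Toy variable-order predictor: for each context (a suffix of the
    history, up to max_order events long), count which event follows it."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> next-event counts

    def train(self, sequence):
        # Register every (context, next event) pair for all context lengths.
        for i in range(1, len(sequence)):
            for order in range(1, self.max_order + 1):
                if i - order < 0:
                    break
                context = tuple(sequence[i - order:i])
                self.counts[context][sequence[i]] += 1

    def predict(self, history):
        # Prefer the longest context seen during training (longest match).
        for order in range(self.max_order, 0, -1):
            context = tuple(history[-order:])
            if len(context) == order and context in self.counts:
                return self.counts[context].most_common(1)[0][0]
        return None

p = VariableOrderPredictor(max_order=3)
p.train(list("abcabcabd"))
print(p.predict(list("ab")))  # -> 'c' (seen twice, vs. 'd' once)
```

    The same table supports both roles mentioned in the abstract: feeding back predictions gives control, while comparing predictions against actual observations gives a recognition signal.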

  • 9.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Reversed: Robot Learning from Demonstration (2009). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this thesis investigates techniques for learning from demonstration (LFD). LFD is a well established approach to robot learning, where a teacher demonstrates a behavior to a robot pupil. This thesis focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. The robot should after demonstration be able to execute the demonstrated behavior under varying conditions.

    Several views on representation, recognition and learning of robot behavior are presented and discussed from a cognitive and computational perspective. LFD-related concepts such as behavior, goal, demonstration, and repetition are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    A total of five algorithms for behavior recognition are proposed and evaluated, including the dynamic temporal difference algorithm Predictive Sequence Learning (PSL). PSL is model-free in the sense that it makes few assumptions about what is to be learned. One strength of PSL is that it can be used for both robot control and recognition of behavior. While many methods for behavior recognition are concerned with identifying invariants within a set of demonstrations, PSL takes a different approach by using purely predictive measures. This may be one way to reduce the need for bias in learning. PSL is, in its current form, subject to combinatorial explosion as the input space grows, which makes it necessary to introduce some higher level coordination for learning of complex behaviors in real-world robots.

    The thesis also gives a broad introduction to computational models of the human brain, where a tight coupling between perception and action plays a central role. With the focus on generation of bias, typical features of existing attempts to explain humans' and other animals' ability to learn are presented and analyzed, from both a neurological and an information theoretic perspective. Based on this analysis, four requirements for implementing general learning ability in robots are proposed. These requirements provide guidance to how a coordinating structure around PSL and similar algorithms should be implemented in a model-free way.

  • 10.
    Billing, Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Cognitive Perspectives on Robot Behavior (2010). In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, p. 373-382. Conference paper (Refereed)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research promote the view that intelligence is an emergent phenomenon. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond those of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition, including the extended mind, distributed cognition and situated action. These theories reveal a view of intelligence where common notions of objects, goals, language and reasoning have to be rethought. A view where behavior, as well as the agent as such, is defined by the observer rather than given by their nature. Structures in the environment emerge through interaction rather than being recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.

  • 11.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Representing behavior: Distributed theories in a context of robotics (2007). Report (Other academic)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research see emergence as the springing source of intelligence. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond those of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition. These theories reveal a view of intelligence where common notions of objects, goals and reasoning have to be rethought. A view where behavior, as well as the agent as such, is in the eye of the observer rather than given. Structures in the environment are achieved through interaction rather than recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.

  • 12.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    The DREAM Dataset: Behavioural data from robot enhanced therapies for children with autism spectrum disorder (2020). Data set
    Abstract [sv]

    This database comprises behavioral data from 61 children diagnosed with autism spectrum disorder (ASD). The data were collected in a large-scale study of robot-assisted autism therapy. The database covers more than 3000 sessions from over 300 hours of therapy. Half of the children interacted with the social robot NAO, supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the same standard protocol for cognitive behavioral therapy, Applied Behavior Analysis (ABA). Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, which were analyzed with image-processing techniques to identify the child's behavior during therapy. This public version of the database contains no recorded video material or other personal data; instead, it comprises anonymized data describing the child's movements, head position and orientation, and eye gaze, all given in a common coordinate system. In addition, metadata in the form of the child's age, gender, and autism diagnosis (ADOS) are included.

  • 13.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    A formalism for learning from demonstration (2010). In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 1, no 1, p. 1-13. Article in journal (Refereed)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as mappings between some of these spaces. Finally, behavior primitives are introduced as one example of good bias in learning, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination. The formalism is exemplified through a sequence learning task where a robot equipped with a gripper arm is to move objects to specific areas. The introduced concepts are illustrated with special focus on how bias of various kinds can be used to enable learning from a single demonstration, and how ambiguities in demonstrations can be identified and handled.

  • 14.
    Billing, Erik A.
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Behavior recognition for segmentation of demonstrated tasks (2008). In: IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS), 2008, p. 228-234. Conference paper (Refereed)
    Abstract [en]

    One common approach to the robot learning technique Learning from Demonstration is to use a set of pre-programmed skills as building blocks for more complex tasks. One important part of this approach is recognition of these skills in a demonstration comprising a stream of sensor and actuator data. In this paper, three novel techniques for behavior recognition are presented and compared. The first technique is function-oriented and compares actions for similar inputs. The second technique is based on auto-associative neural networks and compares reconstruction errors in sensory-motor space. The third technique is based on S-Learning and compares sequences of patterns in sensory-motor space. All three techniques compute an activity level, which can be seen as an alternative to a pure classification approach. The tests performed show how the activity-level approach allows a more informative interpretation of a demonstration, by not determining "correct" behaviors but rather offering a number of alternative interpretations.

  • 15.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Model-free learning from demonstration (2010). In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, p. 62-71. Conference paper (Refereed)
    Abstract [en]

    A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data which is used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

  • 16.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Behavior recognition for learning from demonstration (2010). In: 2010 IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et al., 2010, p. 866-872. Conference paper (Refereed)
    Abstract [en]

    Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.
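    The key idea above, one model serving as both controller (inverse model) and recognizer (forward model), can be illustrated with a toy sketch in which each known behavior is scored by its prediction error on an observed demonstration, and the best-predicting model wins. This is an illustration of the general principle only, not the PSLE-/PSLH-Comparison algorithms; all names are hypothetical:

```python
from collections import Counter, defaultdict

def train_model(sequence, order=2):
    """Build a context -> next-event count table from one behavior's data."""
    model = defaultdict(Counter)
    for i in range(order, len(sequence)):
        model[tuple(sequence[i - order:i])][sequence[i]] += 1
    return model

def prediction_error(model, sequence, order=2):
    """Fraction of events the model fails to predict (forward-model use)."""
    errors, total = 0, 0
    for i in range(order, len(sequence)):
        context = tuple(sequence[i - order:i])
        predicted = model[context].most_common(1)[0][0] if model[context] else None
        errors += predicted != sequence[i]
        total += 1
    return errors / max(total, 1)

def recognize(models, demonstration):
    """Return the known behavior whose model best predicts the demonstration."""
    return min(models, key=lambda name: prediction_error(models[name], demonstration))

# Two known skills, each trained on its own sensory-motor event sequence.
models = {
    "circle": train_model(list("abcabcabc")),
    "zigzag": train_model(list("ababababa")),
}
print(recognize(models, list("abcabc")))  # -> circle
```

    The same `train_model` table could drive control by emitting its predicted next event, which is what makes the controller and recognizer two aspects of one model.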

  • 17.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Alenljung, Beatrice
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Gillsjö, Catharina
    University of Skövde, School of Health Sciences. University of Skövde, Digital Health Research (DHEAR).
    What can Socially Assistive Robots bring to quality of life for older adults? (2024). In: Proceedings of the 19th SweCog Conference / [ed] Jonas Olofsson; Teodor Jernsäther-Ohlsson; Sofia Thunberg; Linus Holm; Erik Billing, Skövde: University of Skövde, 2024, p. 55-55, article id P5. Conference paper (Refereed)
    Abstract [en]

    Socially Assistive Robots (SAR) have been suggested as an important technology in the shift of care from institutions to home environments, and have been shown to be effective in addressing loneliness and social isolation among older adults (Lee et al., 2023; Lorenz et al., 2016; Shishehgar et al., 2018). In a newly started research project, RO-LIV, we employ a user experience design approach, involving older adults as co-designers and engaged actors, in order to identify needs, solutions, and obstacles for integrating socially assistive robots into older adults' homes. The research is organized into three work packages: Needs Analysis, Current Situation Analysis, and Conditions and Obstacles for Integration into the Home Environment. The expected results include a road map for the integration of socially assistive robots into older adults' homes, informed by a nuanced understanding of user needs and preferences. Overall, we emphasize the importance of adopting a user-centered approach in human-robot interaction research, particularly when designing solutions for older adults. By involving older adults in the design process and addressing their diverse needs, researchers can develop robotic systems that address real user needs, are socially acceptable, and have an increased potential for adoption and impact on quality of life.

  • 18.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Balkenius, Christian
    Lund University Cognitive Science, Lund, Sweden.
    Modeling the Interplay between Conditioning and Attention in a Humanoid Robot: Habituation and Attentional Blocking (2014). In: IEEE ICDL-EPIROB 2014: The Fourth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, October 13-16, 2014, Palazzo Ducale, Genoa, Italy, IEEE conference proceedings, 2014, p. 41-47. Conference paper (Refereed)
    Abstract [en]

    A novel model of the role of conditioning in attention is presented and evaluated on a Nao humanoid robot. The model implements conditioning and habituation in interaction with a dynamic neural field where different stimuli compete for activation. The model can be seen as a demonstration of how stimulus-selection and action-selection can be combined and illustrates how positive or negative reinforcement has different effects on attention and action. Attention is directed toward both rewarding and punishing stimuli, but appetitive actions are only directed toward positive stimuli. We present experiments where the model is used to control a Nao robot in a task where it can select between two objects. The model demonstrates some emergent effects also observed in similar experiments with humans and animals, including attentional blocking and latent inhibition.
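    The stimulus competition in a dynamic neural field can be sketched with a discretized Amari-style field: local excitation and global inhibition let the stronger of two stimuli dominate. This is a generic toy illustration with hypothetical parameters, not the authors' model:

```python
import math

def dnf_step(u, stimulus, dt=0.1, tau=1.0, h=-2.0):
    """One Euler step of a 1-D Amari-style dynamic neural field.
    The lateral kernel (local excitation, global inhibition) makes
    stimulated locations compete for activation."""
    n = len(u)
    f = [1.0 / (1.0 + math.exp(-4.0 * x)) for x in u]  # firing rates
    new_u = []
    for i in range(n):
        # Kernel: excitatory near (Gaussian), mildly inhibitory everywhere.
        lateral = sum((2.0 * math.exp(-((i - j) ** 2) / 8.0) - 0.5) * f[j]
                      for j in range(n))
        du = (-u[i] + h + stimulus[i] + lateral) / tau
        new_u.append(u[i] + dt * du)
    return new_u

n = 40
u = [0.0] * n
stim = [0.0] * n
stim[10] = 6.0   # stronger stimulus
stim[30] = 4.0   # weaker competitor
for _ in range(200):
    u = dnf_step(u, stim)
# After settling, the field's highest activation sits at the stronger stimulus.
print(max(range(n), key=lambda i: u[i]))  # -> 10
```

    Conditioning can then be layered on top by letting learned stimulus values modulate the `stim` input, which is the kind of coupling the abstract describes.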

  • 19.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Bampouni, Elpida
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Lamb, Maurice
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Automatic Selection of Viewpoint for Digital Human Modelling (2020). In: DHM2020: Proceedings of the 6th International Digital Human Modeling Symposium, August 31 – September 2, 2020 / [ed] Lars Hanson, Dan Högberg, Erik Brolin, Amsterdam: IOS Press, 2020, p. 61-70. Conference paper (Refereed)
    Abstract [en]

    During concept design of new vehicles, work places, and other complex artifacts, it is critical to assess positioning of instruments and regulators from the perspective of the end user. One common way to do these kinds of assessments during early product development is by the use of Digital Human Modelling (DHM). DHM tools are able to produce detailed simulations, including vision. Many of these tools comprise evaluations of direct vision and some tools are also able to assess other perceptual features. However, to our knowledge, all DHM tools available today require manual selection of manikin viewpoint. This can be both cumbersome and difficult, and requires that the DHM user possesses detailed knowledge about visual behavior of the workers in the task being modelled. In the present study, we take the first steps towards an automatic selection of viewpoint through a computational model of eye-hand coordination. We here report descriptive statistics on visual behavior in a pick-and-place task executed in virtual reality. During reaching actions, results reveal a very high degree of eye-gaze towards the target object. Participants look at the target object at least once during basically every trial, even during a repetitive action. The object remains focused during large proportions of the reaching action, even when participants are forced to move in order to reach the object. These results are in line with previous research on eye-hand coordination and suggest that DHM tools should, by default, set the viewpoint to match the manikin’s grasping location.

    Download full text (pdf)
    fulltext
  • 20.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Belpaeme, Tony
    University of Plymouth, United Kingdom / IDLab - imec, Ghent University, Belgium.
    Cai, Haibin
    University of Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Ciocan, Anamaria
    Universitatea Babeş-Bolyai, Romania.
    Costescu, Cristina
    Universitatea Babeş-Bolyai, Romania.
    David, Daniel
    Universitatea Babeş-Bolyai, Romania.
    Homewood, Robert
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Hernandez Garcia, Daniel
    University of Plymouth, United Kingdom.
    Gomez Esteban, Pablo
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Liu, Honghai
    University of Portsmouth, United Kingdom.
    Nair, Vipul
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Matu, Silviu
    Universitatea Babeş-Bolyai, Romania.
    Mazel, Alexandre
    SoftBank Robotics, Paris, France.
    Selescu, Mihaela
    Universitatea Babeş-Bolyai, Romania.
    Senft, Emmanuel
    University of Plymouth, United Kingdom.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, The Netherlands.
    Vanderborght, Bram
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Vernon, David
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. Linköping University, Sweden.
    The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy2020In: PLOS ONE, E-ISSN 1932-6203, Vol. 15, no 8, article id e0236939Article in journal (Refereed)
    Abstract [en]

    We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information of children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data with the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.

    Download full text (pdf)
    fulltext
  • 21.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Brolin, Anna
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Quesada Díaz, Raquel
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Eklund, Malin
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Lämkull, Dan
    Department of Manufacturing Technology, Volvo Cars.
    Predicting repetitive worker behaviour using eye-gaze2024In: Studies in Perception and Action XVII: 22nd International Conference on Perception and Action / [ed] Silje-Adelen Nenseth; Ruud van der Weel; Audrey van der Meer, Trondheim, 2024, p. 4-4Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 22.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Hanson, Lars
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Lamb, Maurice
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Digital Human Modelling in Action2019In: Proceedings of the 15th SweCog Conference / [ed] Linus Holm; Erik Billing, Skövde: University of Skövde , 2019, p. 25-28Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 23.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Formalising learning from demonstration2008Report (Other academic)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. Inspired by the work on planning and actuation by LaValle, common LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as the mappings between some of these spaces. Finally, behavior primitives are introduced as one example of useful bias in the learning process, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination.

    Download full text (pdf)
    FULLTEXT01
  • 24.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars Erik
    Department of Computing Science, Umeå University, Sweden.
    Predictive learning from demonstration2011In: Agents and Artificial Intelligence: Second International Conference, ICAART 2010, Valencia, Spain, January 22-24, 2010. Revised Selected Papers / [ed] Joaquim Filipe; Ana Fred; Bernadette Sharp, Berlin: Springer Berlin/Heidelberg, 2011, 1, p. 186-200Chapter in book (Refereed)
    Abstract [en]

    A model-free learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL is inspired by several functional models of the brain. It constructs sequences of predictable sensory-motor patterns, without relying on predefined higher-level concepts. The algorithm is demonstrated on a Khepera II robot in four different tasks. During training, PSL generates a hypothesis library from demonstrated data. The library is then used to control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. In this way, the robot reproduces the demonstrated behavior. PSL is able to successfully learn and repeat three elementary tasks, but is unable to repeat a fourth, composed behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

    Download full text (pdf)
    fulltext
  • 25.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Institutionen för datavetenskap.
    Simultaneous control and recognition of demonstrated behavior2011Report (Other academic)
    Abstract [en]

    A method for Learning from Demonstration (LFD) is presented and evaluated on a simulated Robosoft Kompai robot. The presented algorithm, called Predictive Sequence Learning (PSL), builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. The generated rule base can be used to control the robot and to predict expected sensor events in response to executed actions. The rule base can be trained under different contexts, represented as fuzzy sets. In the present work, contexts are used to represent different behaviors. Several behaviors can in this way be stored in the same rule base and partly share information. The context that best matches present circumstances can be identified using the predictive model, and the robot can in this way automatically identify the most suitable behavior for present circumstances. The performance of PSL as a method for LFD is evaluated with, and without, contextual information. The results indicate that PSL without contexts can learn and reproduce simple behaviors. The system also successfully identifies the most suitable context in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contexts. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

    Download full text (pdf)
    FULLTEXT01
  • 26.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Sweden.
    Robot learning from demonstration using predictive sequence learning2012In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH , 2012, p. 235-250Chapter in book (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 27.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hellström, Thomas
    Institutionen för Datavetenskap, Umeå Universitet.
    Janlert, Lars-Erik
    Institutionen för Datavetenskap, Umeå Universitet.
    Simultaneous recognition and reproduction of demonstrated behavior2015In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, p. 43-53, article id BICA114Article in journal (Refereed)
    Abstract [en]

    Prediction of sensory-motor interactions with the world is often referred to as a key component in cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: 1) to identify which behavior best matches the current context and 2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 28.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Kalckert, Andreas
    University of Skövde, School of Bioscience. University of Skövde, Systems Biology Research Environment.
    Proceedings of the 16th SweCog Conference2021Conference proceedings (editor) (Refereed)
    Abstract [en]

    We welcome you to the 16th SweCog conference! After the 2020 meeting had to be cancelled, due to the unusual circumstances of facing a worldwide pandemic, we look forward to finally meeting again, although the pandemic makes us meet virtually and not in person.

    Fittingly, an emerging theme of this year’s meeting is virtual reality, a technology which creates new ways of interacting with each other and with the world. It is not only a subject of active research, but increasingly also a medium for new creative experiments and applications, as evidenced by one of our keynote speakers this year. VR has now become a more widely available tool in different areas of research, and has probably not yet made its full and final impact.

    SweCog 2021 also features a nod to World Usability Day. As technology becomes increasingly present in our daily lives, not least emphasized through the pandemic, we believe that cognitive science has an important role as a field of research informing the design of usable digital artifacts. As the University of Skövde stands as one example of the close relation between cognitive science and user experience design, we take the opportunity to celebrate the topic of Cognition and UX.

    This meeting has been organized jointly by the Interaction lab and the Cognitive Neuroscience lab of the University of Skövde. We are glad to see this interaction happening between the two labs and the two fields. We hope this is not perceived as an “invasion” of the brain scientists documenting the failure of cognitive science as a field (see Nunez et al., 2019), but rather as a collaborative move of finding synergies in our research. In this spirit, we hope our meetings continue to bring people together from different parts of Sweden, from different departments, and maybe also from more disciplines, to discuss our latest research. And despite our enthusiasm for virtual reality, we sincerely hope the next meeting will allow us to meet again in person.

    Download full text (pdf)
    fulltext
  • 29.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of the 2015 SWECOG conference2015Conference proceedings (editor) (Refereed)
    Download full text (pdf)
    Proceedings of the 2015 SweCog Conference
  • 30.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Applied IT, University of Gothenburg, Sweden.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland.
    Simultaneous Planning and Action: Neural-dynamic Sequencing of Elementary Behaviors in Robot Navigation2015In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 23, no 5, p. 243-264Article in journal (Refereed)
    Abstract [en]

    A technique for Simultaneous Planning and Action (SPA) based on Dynamic Field Theory (DFT) is presented. The model builds on previous work on representation of sequential behavior as attractors in dynamic neural fields. Here, we demonstrate how chains of competing attractors can be used to represent dynamic plans towards a goal state. The present work can be seen as an addition to a growing body of work that demonstrates the role of DFT as a bridge between low-level reactive approaches and high-level symbol processing mechanisms. The architecture is evaluated on a set of planning problems using a simulated e-puck robot, including analysis of the system's behavior in response to noise and temporary blockages of the planned route. The system makes no explicit distinction between planning and execution phases, allowing continuous adaptation of the planned path. The proposed architecture exploits the DFT property of stability in relation to noise and changes in the environment. The neural dynamics are also exploited such that stay-or-switch action selection emerges where blockage of a planned path occurs: stay until the transient blockage is removed versus switch to an alternative route to the goal.

    Download full text (pdf)
    Billing-etal-2015-SPA
  • 31.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Quesada Díaz, Raquel
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Eklund, Malin
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Brolin, Anna
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Proactive eye-gaze for predicting repetitive worker behavior2024In: Proceedings of the 19th SweCog Conference / [ed] Jonas Olofsson; Teodor Jernsäther-Ohlsson; Sofia Thunberg; Linus Holm; Erik Billing, Skövde: University of Skövde , 2024, p. 151-154, article id P57Conference paper (Refereed)
    Abstract [en]

    Proactive eye-gaze (PEG) is a behavioural pattern where eye fixations precede actions, such as reaching. With the proliferation of eye-tracking technology, PEG shows promise for predicting human actions, which has many applications, for example, within industrial human-robot collaboration (HRC). This study investigates PEG in repetitive assembly tasks. Eye-tracking data from four experienced workers were recorded and analysed. The study recorded 57 assembly sessions, identifying 3793 fixations, of which 35% were proactive gazes. The mean PEG interval was 795 ms. Contrary to the hypothesis, PEG was found to be as strong, if not stronger, in repetitive tasks compared to previous studies investigating PEG in other contexts. These findings suggest PEG could be a reliable predictor of worker actions in repetitive tasks, enhancing coordination in HRC.

    Download full text (pdf)
    fulltext
  • 32.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Rosén, Julia
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Lamb, Maurice
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Language Models for Human-Robot Interaction2023In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM Digital Library, 2023, p. 905-906Conference paper (Refereed)
    Abstract [en]

    Recent advances in large scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have a great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference and the source code of this integration is shared with the hope that it will serve the community in designing and evaluating new dialogue systems for robots.

    Download full text (pdf)
    fulltext
  • 33.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Rosén, Julia
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Expectations of robot technology in welfare2019Conference paper (Refereed)
    Abstract [en]

    We report findings from a survey on expectations of robot technology in welfare within the coming 20 years. 34 assistant nurses answered a questionnaire on which tasks from their daily work they believe robots can perform, already today or in the near future. Additionally, the Negative Attitudes toward Robots Scale (NARS) was used to estimate participants' attitudes towards robots in general. Results reveal high expectations of robots, where at least half of the participants answered Already today or Within 10 years to 9 out of 10 investigated tasks. Participants were also fairly positive towards robots, reporting low scores on NARS. The obtained results can be interpreted as a serious over-estimation of what robots will be able to do in the near future, but also reveal large variation in participants' interpretation of what robots are. We identify challenges in communicating both excitement towards a technology in rapid development and realistic limitations of this technology.

    Download full text (pdf)
    fulltext
  • 34.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Sciutti, Alessandra
    Italian Institute of Technology, Genova, Italy.
    Sandini, Giulio
    Italian Institute of Technology, Genova, Italy.
    Proactive eye-gaze in human-robot interaction2019Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 35.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Servin, Martin
    Institutionen för fysik, Umeå universitet.
    Composer: A prototype multilingual model composition tool2013In: MODPROD2013: 7th MODPROD Workshop on Model-Based Product Development / [ed] Peter Fritzson, Umeå: Umeå universitet , 2013Conference paper (Other academic)
    Abstract [en]

    Facing the task to design, simulate or optimize a complex system, it is common to find models and data for the system expressed in different formats, implemented in different simulation software tools. When a new model is developed, a target platform is chosen and existing components implemented with different tools have to be converted. This results in unnecessary work duplication and lead times. The Modelica language initiative [2] partially solves this by allowing developers to move models between different tools following the Modelica standard. Another possibility is to exchange models using the Functional Mockup Interface (FMI) standard that allows computer models to be used as components in other simulations, possibly implemented using other programming languages [1]. With the Modelica and FMI standards entering development, there is need for an easy-to-use tool that supports design, editing and simulation of such multilingual systems, as well as for retracting system information for formulating and solving optimization problems. A prototype solution for a graphical block diagram tool for design, editing, simulation and optimization of multilingual systems has been created and evaluated for a specific system. The tool is named Composer [3]. The block diagram representation should be generic, independent of model implementations, have a standardized format and yet support efficient handling of complex data. It is natural to look for solutions among modern web technologies, specifically HTML5. The format for representing two-dimensional vector graphics in HTML5 is Scalable Vector Graphics (SVG). We combine the SVG format with the FMI standard. In a first stage, we take the XML-based model description of FMI as a form for describing the interface for each component, in a language-independent way. Simulation parameters can also be expressed in this form, and integrated as metadata into the SVG image.

    The prototype, using SVG in conjunction with FMI, is implemented in JavaScript and allows creation and modification of block diagrams directly in the web browser. Generated SVG images are sent to the server, where they are translated to program code, allowing the simulation of the dynamical system to be executed using selected implementations. An alternative mode is to generate an optimization problem from the system definition and model parameters. The simulation/optimization result is returned to the web browser, where it is plotted or processed using other standard libraries.

    The fiber production process at SCA Packaging Obbola [4] is used as an example system and modeled using Composer. The system consists of two fiber production lines that produce fiber going to a storage tank [5]. The paper machine takes fiber from the tank as needed for production. A lot of power is required during fiber production, and the purpose of the model was to investigate whether electricity costs could be reduced by rescheduling fiber production over the day, in accordance with the electricity spot price. Components are implemented for dynamical simulation using OpenModelica and for discrete event simulation using Python. The Python implementation supports constraint propagation between components and optimization over specified variables. Each component is interfaced as a Functional Mock-up Unit (FMU), allowing components to be connected and properties specified in a language-independent way. From the SVG containing the high-level system information, both Modelica and Python code is generated and executed on the web server, potentially hosted in a high-performance data center. More implementations could be added without modifying the SVG system description. We have shown that it is possible to separate system descriptions on the block diagram level from implementations and interface between the two levels using FMI. In a continuation of this project, we aim to integrate the FMI standard also for co-simulation, such that components implemented in different languages could be used together. One open question is to what extent FMUs of the same component, but implemented with different tools, will have the same model description. For the SVG-based system description to be useful, the FMI model description must remain the same, or at least contain a large overlap, for a single component implemented in different languages. This will be further investigated in future work.

  • 36.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Svensson, Henrik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Interaction, Cognition and Emotion Lab, Department of Applied IT, University of Gothenburg, Sweden.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Cognition and Interaction Lab, Department of Computer and Information Science, Linköping University, Sweden.
    Finding Your Way from the Bed to the Kitchen: Re-enacting and Re-combining Sensorimotor Episodes Learned from Human Demonstration2016In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, no March, article id 9Article in journal (Refereed)
    Abstract [en]

    Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations, and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, towards a goal specified by top-down contextual information. 

    The robot model is also used in a covert mode, where actions are inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly.

    We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior towards specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.

    Download full text (pdf)
    fulltext
  • 37.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Computer & Information Science, Linköping University.
    Robot-Enhanced Therapy for Children with Autism2018In: Proceedings of the 14th SweCog Conference / [ed] Tom Ziemke, Mattias Arvola, Nils Dahlbäck, Erik Billing, Skövde: University of Skövde , 2018, p. 19-22Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 38.
    Cai, Haibin
    et al.
    School of Computing, University of Portsmouth, U.K.
    Fang, Yinfeng
    School of Computing, University of Portsmouth, U.K.
    Ju, Zhaojie
    School of Computing, University of Portsmouth, U.K.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Computer and Information Science, Linköping University, Sweden.
    Thill, Serge
    University of Plymouth, U.K.
    Belpaeme, Tony
    University of Plymouth, U.K.
    Vanderborght, Bram
    Vrije Universiteit Brussel and Flanders Make, Belgium.
    Vernon, David
    Carnegie Mellon University Africa, Rwanda.
    Richardson, Kathleen
    De Montfort University, U.K.
    Liu, Honghai
    School of Computing, University of Portsmouth, U.K.
    Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study2019In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 19, no 4, p. 1508-1518Article in journal (Refereed)
    Abstract [en]

    It is evident that recently reported robot-assisted therapy systems for the assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, further assessing the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and enhance the quality of the digital description of children with ASD. The research outcomes pave the way to a feasible machine-assisted system for their behaviour assessment.

  • 39.
    Cao, Hoang-Long
    et al.
    Vrije Universiteit Brussel, Belgium.
    Esteban, Pablo G.
    Mechanical Engineering, Vrije Universiteit Brussel, Brussels, Belgium.
    Bartlett, Madeleine
    Plymouth University, United Kingdom.
    Baxter, Paul Edward
    School of Computer Science, University of Lincoln, United Kingdom.
    Belpaeme, Tony
    Faculty of Science and Environment, Plymouth University, United Kingdom.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Cai, Haibin
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Coeckelbergh, Mark
    University of Twente, The Netherlands.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Universitatea Babes-Bolyai, Cluj Napoca, Romania.
    David, Daniel
    Babes-Bolyai University, Romania.
    De Beir, Albert
    Robotics & Multibody Mechanics Research Group, Vrije Universiteit Brussel (VUB), Bruxelles, Belgium.
    Hernandez Garcia, Daniel
    School of Computing, Electronics and Mathematics, University of Plymouth, United Kingdom.
    Kennedy, James
    Disney Research Los Angeles, Disney Research, Glendale, California United States of America.
    Liu, Honghai
    Institute of Industrial Research, University of Portsmouth, Portsmouth, United Kingdom.
    Matu, Silviu
    Babes-Bolyai University, Romania.
    Mazel, Alexandre
    Research, Aldebaran-Robotics, Le Kremlin Bicetre, France.
    Pandey, Amit Kumar
    Innovation Department, SoftBank Robotics, Paris, France.
    Richardson, Kathleen
    Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural System, Plymouth University, United Kingdom.
    Thill, Serge
    Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands.
    Van de Perre, Greet
    Applied Mechanics, Vrije Universiteit Brussel, Elsene, Belgium.
    Vanderborght, Bram
    Department of Mechanical Engineering, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    Electrical and Computer Engineering, Carnegie Mellon University Africa, Kigali, Rwanda.
    Wakanuma, Kutoma
    De Montfort University, United Kingdom.
    Yu, Hui
    Creative Technologies, University of Portsmouth, Portsmouth, United Kingdom.
    Zhou, Xiaolong
    Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Robot-Enhanced Therapy: Development and Validation of a Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy2019In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 26, no 2, p. 49-58Article in journal (Refereed)
  • 40.
    Eklund, Malin
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment. Department of Applied IT, University of Gothenburg, Sweden.
    Forslund, Julia
    Department of Applied IT, University of Gothenburg, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Effects of body language during conversation with socially assistive robots2024In: Proceedings of the 19th SweCog Conference / [ed] Jonas Olofsson; Teodor Jernsäther-Ohlsson; Sofia Thunberg; Linus Holm; Erik Billing, Skövde: University of Skövde , 2024, p. 106-106, article id P60Conference paper (Refereed)
    Abstract [en]

    It has been shown that interaction methods such as body language and gestures in socially assistive robots (SAR) contribute to engagement, attention, and entertainment value. Studies in social cognition emphasize the significance of body language for facilitating interaction in social exchanges. Inspired by these results, an independent-groups experiment (N=45) was designed to investigate how body language, as an interaction method in SAR, affects perceived social presence. Participants engaged in semi-structured conversations with the social robot Pepper, equipped with a ChatGPT-based dialogue system with, or without, body language. Perceived social presence was measured with the Almere questionnaire. Contrary to our hypothesis, the results did not show any significant differences in perceived social presence. Detailed analysis did, however, show that the interactive condition enhanced the feeling of being seen and tended to make the robot more entertaining. The lack of support for the hypothesis suggests that the robot's body language might be less significant than previously thought, possibly due to method and design factors, as well as the robot's advanced dialogue system. This study highlights the potential of large language models for SAR and could indicate that some aspects of the robot's design might overshadow other aspects.

    Download full text (pdf)
    fulltext
  • 41.
    Esteban, Pablo G.
    et al.
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Baxter, Paul
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Belpaeme, Tony
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Cai, Haibin
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Coeckelbergh, Mark
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    De Beir, Albert
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Fang, Yinfeng
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ju, Zhaojie
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Kennedy, James
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Liu, Honghai
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Mazel, Alexandre
    Softbank Robotics Europe, Paris, France.
    Pandey, Amit
    Softbank Robotics Europe, Paris, France.
    Richardson, Kathleen
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Van de Perre, Greet
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vanderborght, Bram
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Yu, Hui
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder2017In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 8, no 1, p. 18-38Article in journal (Refereed)
    Abstract [en]

    Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot, both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.

    Download full text (pdf)
    fulltext
  • 42.
    Fast-Berglund, Åsa
    et al.
    Chalmers University of Technology, Gothenburg, Sweden.
    Thorvald, Peter
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Palmquist, Adam
    Insert Coin, Gothenburg, Sweden.
    Romero, David
    Tecnologico de Monterrey, Mexico.
    Weichhart, Georg
    Profactor, Steyr, Austria.
    Conceptualizing Embodied Automation to Increase Transfer of Tacit knowledge in the Learning Factory2018In: "Theory, Research and Innovation in Applications": 9th International Conference on Intelligent Systems 2018 (IS’18) / [ed] Ricardo Jardim-Gonçalves, João Pedro Mendonça, Vladimir Jotsov, Maria Marques, João Martins, Robert Bierwolf, IEEE, 2018, p. 358-364, article id 8710482Conference paper (Refereed)
    Abstract [en]

    This paper discusses how cooperative agent-based systems, deployed with social skills and embodied automation features, can be used to interact with operators in order to facilitate the sharing of tacit knowledge and its later conversion into explicit knowledge. The proposal is to combine social software robots (softbots) with industrial collaborative robots (co-bots) to create a digital apprentice for experienced operators in human-robot collaboration workstations. This addresses the problem within industry that experienced operators have difficulties explaining how they perform their tasks and, later, how to turn this procedural knowledge (know-how) into instructions to be shared among other operators. By using social softbots and co-bots as cooperative agents with embodied automation features, we think we can facilitate the 'externalization' of procedural knowledge in human-robot interaction(s). This is enabled by the capability of such agents to continuously learn by looking over the shoulder of the operators, and to document and collaborate with them in a non-intrusive way as they perform their daily tasks.

    Download full text (pdf)
    fulltext
  • 43.
    Gander, Pierre
    et al.
    Department of Applied Information Technology, University of Gothenburg.
    Holm, Linus
    Department of Psychology, Umeå University.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Proceedings of the 18th SweCog Conference2023Conference proceedings (editor) (Refereed)
    Download full text (pdf)
    fulltext
  • 44.
    Hanson, Lars
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment. Scania CV AB, Global Industrial Development, Södertälje, Sweden.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Brolin, Erik
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Iriondo Pascual, Aitor
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Lamb, Maurice
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Current Trends in Research and Application of Digital Human Modeling2022In: Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021): Volume V: Methods & Approaches / [ed] Nancy L. Black; W. Patrick Neumann; Ian Noy, Cham: Springer, 2022, p. 358-366Conference paper (Refereed)
    Abstract [en]

    The paper reports an investigation conducted during the DHM2020 Symposium regarding current trends in research and application of DHM in academia, software development, and industry. The results show that virtual reality (VR), augmented reality (AR), and digital twin are major current trends. Furthermore, results show that human diversity is considered in DHM using established methods. Results also show a shift from the assessment of static postures to assessment of sequences of actions, combined with a focus mainly on human well-being and only partly on system performance. Motion capture and motion algorithms are alternative technologies introduced to facilitate and improve DHM simulations. Results from the DHM simulations are mainly presented through pictures or animations.

  • 45.
    Hernández García, Daniel
    et al.
    University of Plymouth, United Kingdom.
    Esteban, Pablo G.
    Vrije Universiteit Brussel.
    Lee, Hee Rin
    UC San Diego, United States.
    Romeo, Marta
    University of Manchester, United Kingdom.
    Senft, Emmanuel
    University of Plymouth, United Kingdom.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Social Robots in Therapy and Care2019In: Proceedings of the 14th ACM/IEEE International Conference on Human Robot Interaction, Daegu: IEEE conference proceedings, 2019, p. 669-670Conference paper (Refereed)
    Abstract [en]

    The Social Robots in Therapy workshop series aims at advancing research topics related to the use of robots in the contexts of social care and Robot-Assisted Therapy (RAT). Robots in social care and therapy have been a long-standing promise in HRI, as they have the opportunity to improve patients' lives significantly. Multiple challenges have to be addressed for this, such as building platforms that work in proximity with patients, therapists and health-care professionals; understanding user needs; developing adaptive and autonomous robot interactions; and addressing ethical questions regarding the use of robots with a vulnerable population. The full-day workshop follows last year's edition, which centered on how social robots can improve health-care interventions, how increasing the degree of autonomy of the robots might affect therapies, and how to overcome the ethical challenges inherent to the use of robot-assisted technologies. This 2nd edition of the workshop focuses on the importance of equipping social robots with socio-emotional intelligence and the ability to perform meaningful and personalized interactions. The workshop aims to bring together researchers and industry experts in the fields of Human-Robot Interaction, machine learning, and robots in health and social care. It will be an opportunity for all to share and discuss ideas, strategies and findings to guide the design and development of robot-assisted systems for therapy and social care that can provide personalized, natural, engaging and autonomous interactions with patients (and health-care providers).

  • 46.
    Hjälm, Emma
    et al.
    University of Skövde, School of Informatics.
    Quach, Martina
    University of Skövde, School of Informatics.
    Lagerstedt, Erik
    University of Gothenburg, Department of Philosophy, Linguistics and Theory of Science, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Nalin, Kajsa
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Expectation Priming Through Linguistic Framings When Introducing Social Robots: An Empirical Study of Students’ UX in an Educational Context2024In: Proceedings of the 19th SweCog Conference / [ed] Jonas Olofsson; Teodor Jernsäther-Ohlsson; Sofia Thunberg; Linus Holm; Erik Billing, Skövde: University of Skövde , 2024, p. 66-66, article id P20Conference paper (Refereed)
    Abstract [en]

    The field of Human-Robot Interaction (HRI) involves new forms of social interaction that depend on humans' many and varied prior expectations. In this study, the impact of linguistic framing on students' expectations and user experiences when being introduced to social robots in an educational setting was investigated. An empirical case study involved the social robot Pepper and 10 students aged 16–19. The introduction to Pepper utilized two forms of linguistic framing: positive and negative terms. Pre- and post-interaction interviews were conducted to assess the students' expectations and experiences. Negative attitudes toward robots and user experiences were measured using the NARS and Godspeed questionnaires. Furthermore, filmed observations of the students' interactions with Pepper provided additional insights. Results showed that students' expectations and experiences varied depending on the type of introduction and linguistic framing utilized. While none of the differences among the questionnaire responses were statistically significant, the trends were in line with the clear results from the interviews and observations.

    Download full text (pdf)
    fulltext
  • 47.
    Holm, Linus
    et al.
    Department of Psychology, Umeå University.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of the 15th SweCog Conference2019Conference proceedings (editor) (Refereed)
    Abstract [en]

    In an article published in Nature Human Behaviour, Núñez et al. (2019) ask: what happened to cognitive science? The authors review bibliometric and socio-institutional aspects of the field and argue that the transition from a multi-disciplinary program to a mature, coherent inter-disciplinary field has failed. Looking at the Swedish environment, we can do nothing but agree. Many of us identifying ourselves as researchers in cognitive science are working at departments primarily focused on other disciplines, teaching within other subjects and publishing in journals and conferences adjacent to the field. The diversity of cognitive science is also present in the number of directions that have evolved over the years. The embodied approaches that many of us align with are not evolving towards a coherent view, but are today found under numerous labels such as situated cognition, distributed cognition, extended cognition, and enactive cognition. The so-called 4E perspectives on the field have now ventured beyond the four, and are today more often referred to as the multi-E framework.

    While we agree with Núñez et al. that we remain a multi-disciplinary, multi-perspective, and multi-method group of researchers who may share an interest in the science of the mind, rather than a coherent approach or perspective, we disagree that this entails a failure for the enterprise of cognitive science. We dare to say that the Swedish Cognitive Science Society has embraced the multi-perspective idea by adopting an inclusive approach in the selection of research and methods presented at our conferences. We hope that SweCog will remain a forum for inclusive discussions, working against discipline conformism and isolation, in a time when both public and scientific debate is increasingly fragmented.

    Download full text (pdf)
    fulltext
  • 48.
    Lamb, Maurice
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Brundin, Malin
    University of Skövde, School of Informatics.
    Perez Luque, Estela
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Eye-Tracking Beyond Peripersonal Space in Virtual Reality: Validation and Best Practices2022In: Frontiers in Virtual Reality, E-ISSN 2673-4192, Vol. 3, article id 864653Article in journal (Refereed)
    Abstract [en]

    Recent developments in commercial virtual reality (VR) hardware with embedded eye-tracking create tremendous opportunities for human subjects researchers. Accessible eye-tracking in VR opens new opportunities for highly controlled experimental setups in which participants can engage with novel 3D digital environments. However, because VR-embedded eye-tracking differs from the majority of historical eye-tracking research, in allowing both relatively unconstrained movement and varied stimulus presentation distances, there is a need for greater discussion around methods for the implementation and validation of VR-based eye-tracking tools. The aim of this paper is to provide a practical introduction to the challenges of, and methods for, 3D gaze-tracking in VR, with a focus on best practices for results validation and reporting. Specifically, we first identify and define challenges and methods for collecting and analyzing 3D eye-tracking data in VR. We then introduce a validation pilot study with a focus on factors related to 3D gaze tracking. The pilot study both provides a reference data point for a common commercial hardware/software platform (HTC Vive Pro Eye) and illustrates the proposed methods. One outcome of this study was the observation that the accuracy and precision of collected data may depend on stimulus distance, which has consequences for studies where stimuli are presented at varying distances. We also conclude that vergence is a potentially problematic basis for estimating gaze depth in VR and should be used with caution as the field moves towards a more established method for 3D eye-tracking.

    Download full text (pdf)
    fulltext
  • 49.
    Lamb, Maurice
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Pérez Luque, Estela
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Understanding Eye-Tracking in Virtual Reality2022In: AIC 2022 Artificial Intelligence and Cognition 2022: Proceedings of the 8th International Workshop on Artificial Intelligence and Cognition, Örebro, Sweden, 15-17 June, 2022 / [ed] Hadi Banaee; Amy Loutfi; Alessandro Saffiotti; Antonio Lieto, CEUR-WS.org , 2022, p. 180-181Conference paper (Refereed)
    Download full text (pdf)
    fulltext
  • 50.
    Lamb, Maurice
    et al.
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Seunghun, Lee
    Texas Tech University, United States.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, Virtual Engineering Research Environment.
    Yang, James
    Texas Tech University, United States.
    Forward and Backward Reaching Inverse Kinematics (FABRIK) solver for DHM: A pilot study2022In: Proceedings of the 7th International Digital Human Modeling Symposium (DHM 2022), August 29–30, 2022, Iowa City, Iowa, USA, University of Iowa Press, 2022, Vol. 7, p. 1-11, article id 26Conference paper (Refereed)
    Abstract [en]

    Posture/motion prediction is the basis of the human motion simulations that make up the core of many digital human modeling (DHM) tools and methods. With the goal of producing realistic postures and motions, a common element of posture/motion prediction methods involves applying some set of constraints to biomechanical models of humans on the positions and orientations of specified body parts. While many formulations of biomechanical constraints may produce valid predictions, they must overcome the challenges posed by the highly redundant nature of human biomechanical systems. DHM researchers and developers typically focus on optimization formulations to facilitate the identification and selection of valid solutions. While these approaches produce optimal behavior according to some, e.g., ergonomic, optimization criteria, these solutions require considerable computational power and appear vastly different from how humans produce motion. In this paper, we take a different approach and consider the Forward and Backward Reaching Inverse Kinematics (FABRIK) solver, developed in the context of computer graphics for rigged character animation. This approach identifies postures quickly and efficiently, often requiring a fraction of the computation time involved in optimization-based methods. Critically, the FABRIK solver identifies posture predictions based on a lightweight heuristic approach. Specifically, the solver works in joint position space and identifies solutions according to a minimal joint displacement principle. We apply the FABRIK solver to a seven-degree-of-freedom human arm model during a reaching task from an initial to an end target location, fixing the shoulder position and providing the end effector (index fingertip) position and orientation from each frame of the motion capture data. In this preliminary study, predicted postures are compared to experimental data from a single human subject. Overall, the predicted postures were very near the recorded data, with an average RMSE of 1.67°. Although more validation is necessary, we believe that the FABRIK solver has great potential for producing realistic human posture/motion in real time, with applications in the area of DHM.
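    The forward and backward reaching passes that give FABRIK its name can be sketched for a single kinematic chain. This is a generic illustration of the published FABRIK algorithm, not the paper's DHM implementation; the function name and signature are hypothetical. Each pass pins one end of the chain (target or base) and re-places the remaining joints along the lines connecting them, preserving bone lengths.

```python
import numpy as np

def fabrik(joints, target, tol=1e-4, max_iter=100):
    """Solve inverse kinematics for a chain of joint positions.

    joints: (n, 3) array-like of initial joint positions, base first.
    target: (3,) desired end-effector position.
    Returns updated joint positions; the base stays fixed and bone
    lengths are preserved by construction.
    """
    p = np.array(joints, dtype=float)
    lengths = np.linalg.norm(np.diff(p, axis=0), axis=1)  # bone lengths
    base = p[0].copy()
    target = np.asarray(target, dtype=float)

    if np.linalg.norm(target - base) > lengths.sum():
        # Target unreachable: stretch the chain straight toward it.
        for i in range(len(p) - 1):
            lam = lengths[i] / np.linalg.norm(target - p[i])
            p[i + 1] = (1 - lam) * p[i] + lam * target
        return p

    for _ in range(max_iter):
        # Backward pass: pin the end effector to the target, then
        # re-place each joint at bone-length distance toward the base.
        p[-1] = target
        for i in range(len(p) - 2, -1, -1):
            lam = lengths[i] / np.linalg.norm(p[i + 1] - p[i])
            p[i] = (1 - lam) * p[i + 1] + lam * p[i]
        # Forward pass: pin the base, re-place joints toward the tip.
        p[0] = base
        for i in range(len(p) - 1):
            lam = lengths[i] / np.linalg.norm(p[i + 1] - p[i])
            p[i + 1] = (1 - lam) * p[i] + lam * p[i + 1]
        if np.linalg.norm(p[-1] - target) < tol:
            break
    return p
```

    Because both passes only interpolate along inter-joint lines, no optimization solver is needed, which is the source of the speed advantage the abstract mentions.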
