Högskolan i Skövde

his.se Publications
1 - 50 of 77
  • 1.
    Alenljung, Beatrice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Andreasson, Rebecca
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Information Technology, Visual Information & Interaction. Uppsala University, Uppsala, Sweden.
    Billing, Erik A.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    User Experience of Conveying Emotions by Touch (2017). In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 1240-1247. Conference paper (Refereed)
    Abstract [en]

    In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users' replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and design of social and affective robotics in particular.

    Full text (pdf)
  • 2.
    Alenljung, Beatrice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Conveying Emotions by Touch to the Nao Robot: A User Experience Perspective (2018). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, no. 4, article id 82. Article in journal (Refereed)
    Abstract [en]

    Social robots are expected gradually to be used by more and more people in a wider range of settings, domestic as well as professional. As a consequence, the features and quality requirements on human–robot interaction will increase, comprising possibilities to communicate emotions, establishing a positive user experience, e.g., using touch. In this paper, the focus is on depicting how humans, as the users of robots, experience tactile emotional communication with the Nao Robot, as well as identifying aspects affecting the experience and touch behavior. A qualitative investigation was conducted as part of a larger experiment. The major findings consist of 15 different aspects that vary along one or more dimensions and how those influence the four dimensions of user experience that are present in the study, as well as the different parts of touch behavior of conveying emotions.

    Full text (pdf)
  • 3.
    Almér, Alexander
    et al.
    Göteborgs Universitet, Institutionen för tillämpad informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Göteborgs universitet.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Proceedings of the 2016 Swecog conference (2016). Conference proceedings (Refereed)
    Full text (pdf)
  • 4.
    Andreasson, Rebecca
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot (2018). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 10, no. 4, p. 473-491. Article in journal (Refereed)
    Abstract [en]

    Affective touch has a fundamental role in human development, social bonding, and for providing emotional support in interpersonal relationships. We present what is, to our knowledge, the first HRI study of tactile conveyance of both positive and negative emotions (affective touch) on the Nao robot, based on an experimental set-up from a study of human–human tactile communication. In the present work, participants conveyed eight emotions to a small humanoid robot via touch. We found that female participants conveyed emotions for a longer time, using more varied interaction and touching more regions on the robot's body, compared to male participants. Several differences between emotions were found such that emotions could be classified by the valence of the emotion conveyed, by combining touch amount and duration. Overall, these results show high agreement with those reported for human–human affective tactile communication and could also have impact on the design and placement of tactile sensors on humanoid robots.

    Full text (pdf)
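
    As an illustrative aside on the finding above that conveyed emotions could be separated by valence when touch amount and duration are combined: the toy rule below sketches what such a classification could look like. The features, weights, and threshold are invented for illustration and are not values reported in the article.

    # Illustrative sketch only: classify the valence of a touch interaction from
    # two hypothetical features (total touch duration, number of touched regions).
    # Weights and threshold are invented; they are NOT values from the article.
    def classify_valence(duration_s: float, regions_touched: int) -> str:
        """Return 'positive' or 'negative' from a toy linear rule."""
        score = 0.6 * duration_s + 0.4 * regions_touched   # arbitrary weights
        return "positive" if score > 5.0 else "negative"    # arbitrary threshold

    if __name__ == "__main__":
        print(classify_valence(duration_s=8.2, regions_touched=4))  # -> positive
        print(classify_valence(duration_s=1.5, regions_touched=1))  # -> negative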
  • 5.
    Arweström Jansson, Anders
    et al.
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Axelsson, Anton
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Andreasson, Rebecca
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Proceedings of the 13th Swecog conference (2017). Conference proceedings (Refereed)
    Full text (pdf)
  • 6.
    Banaee, Hadi
    et al.
    School of Science and Technology, Örebro University, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Proceedings of the 17th SweCog Conference: Örebro 2022, 16-17 June (2022). Conference proceedings (Refereed)
    Full text (pdf)
  • 7.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    A New Look at Habits using Simulation Theory (2017). In: Proceedings of the Digitalisation for a Sustainable Society: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition, Göteborg, Sweden, 2017. Conference paper (Refereed)
    Abstract [en]

    Habits as a form of behavior re-execution without explicit deliberation are discussed in terms of implicit anticipation, to be contrasted with explicit anticipation and mental simulation. Two hypotheses, addressing how habits and mental simulation may be implemented in the brain and to what degree they represent two modes of brain function, are formulated. Arguments for and against the two hypotheses are discussed briefly, specifically addressing whether habits and mental simulation represent two distinct functions, or to what degree there may be intermediate forms of habit execution involving partial deliberation. A potential role of habits in memory consolidation is also hypothesized.

    Full text (pdf)
  • 8.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior (2012). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

    The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

    One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce also more complex behaviors.

    Full text (pdf)
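
    As an illustrative aside on the dissertation's description of PSL as a variable-order Markov model that predicts future sensory-motor events from a history of past events: the sketch below shows one minimal way such a predictor can be structured (longest-matching-suffix lookup over discrete events). It is a toy reconstruction under simplifying assumptions, not the published PSL algorithm.

    from collections import defaultdict, Counter

    class SuffixPredictor:
        """Toy variable-order predictor: counts which event follows each observed
        suffix (up to max_order) and predicts from the longest matching suffix."""

        def __init__(self, max_order: int = 4):
            self.max_order = max_order
            self.counts = defaultdict(Counter)   # suffix tuple -> Counter of next events

        def train(self, sequence):
            for i in range(1, len(sequence)):
                for order in range(1, min(self.max_order, i) + 1):
                    suffix = tuple(sequence[i - order:i])
                    self.counts[suffix][sequence[i]] += 1

        def predict(self, history):
            for order in range(min(self.max_order, len(history)), 0, -1):
                suffix = tuple(history[-order:])
                if suffix in self.counts:
                    return self.counts[suffix].most_common(1)[0][0]
            return None

    # Learn a demonstrated sensor-motor event sequence, then predict the next event.
    demo = ["s0", "forward", "s1", "forward", "s2", "turn", "s0", "forward"]
    psl = SuffixPredictor(max_order=3)
    psl.train(demo)
    print(psl.predict(["s2", "turn", "s0"]))  # -> "forward"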
  • 9.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Reversed: Robot Learning from Demonstration (2009). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this thesis investigates techniques for learning from demonstration (LFD). LFD is a well established approach to robot learning, where a teacher demonstrates a behavior to a robot pupil. This thesis focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. The robot should after demonstration be able to execute the demonstrated behavior under varying conditions.

    Several views on representation, recognition and learning of robot behavior are presented and discussed from a cognitive and computational perspective. LFD-related concepts such as behavior, goal, demonstration, and repetition are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    A total of five algorithms for behavior recognition are proposed and evaluated, including the dynamic temporal difference algorithm Predictive Sequence Learning (PSL). PSL is model-free in the sense that it makes few assumptions about what is to be learned. One strength of PSL is that it can be used for both robot control and recognition of behavior. While many methods for behavior recognition are concerned with identifying invariants within a set of demonstrations, PSL takes a different approach by using purely predictive measures. This may be one way to reduce the need for bias in learning. PSL is, in its current form, subject to combinatorial explosion as the input space grows, which makes it necessary to introduce some higher level coordination for learning of complex behaviors in real-world robots.

    The thesis also gives a broad introduction to computational models of the human brain, where a tight coupling between perception and action plays a central role. With the focus on generation of bias, typical features of existing attempts to explain humans' and other animals' ability to learn are presented and analyzed, from both a neurological and an information theoretic perspective. Based on this analysis, four requirements for implementing general learning ability in robots are proposed. These requirements provide guidance to how a coordinating structure around PSL and similar algorithms should be implemented in a model-free way.

    Full text (pdf)
  • 10.
    Billing, Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Cognitive Perspectives on Robot Behavior (2010). In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, p. 373-382. Conference paper (Refereed)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research promote the view that intelligence is an emergent phenomenon. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond that of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition, including the extended mind, distributed cognition and situated action. These theories reveal a view of intelligence where common notions of objects, goals, language and reasoning have to be rethought. A view where behavior, as well as the agent as such, is defined by the observer rather than given by their nature. Structures in the environment emerge through interaction rather than being recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.

    Full text (pdf)
  • 11.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Representing behavior: Distributed theories in a context of robotics (2007). Report (Other academic)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research see emergence as the springing source of intelligence. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond that of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition. These theories reveal a view of intelligence where common notions of objects, goals and reasoning have to be rethought. A view where behavior, as well as the agent as such, is in the eye of the observer rather than given. Structures in the environment are achieved through interaction rather than recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.

  • 12.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    The DREAM Dataset: Behavioural data from robot enhanced therapies for children with autism spectrum disorder (2020). Dataset
    Abstract [sv]

    This database comprises behavioral data from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale study of robot-supported autism therapy. The database covers over 3000 sessions from more than 300 hours of therapy. Half of the children interacted with the social robot NAO, supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the same standard protocol for cognitive behavioral therapy, Applied Behavior Analysis (ABA). Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, which were analyzed with image processing techniques to identify the child's behavior during therapy. This public version of the database contains no recorded video material or other personal data; instead it comprises anonymized data describing the child's body motion, head position and orientation, and eye gaze, all given in a common frame of reference. In addition, metadata in the form of the child's age, gender, and autism diagnosis (ADOS) are included.

  • 13.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    A formalism for learning from demonstration (2010). In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 1, no. 1, p. 1-13. Article in journal (Refereed)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as mappings between some of these spaces. Finally, behavior primitives are introduced as one example of good bias in learning, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination. The formalism is exemplified through a sequence learning task where a robot equipped with a gripper arm is to move objects to specific areas. The introduced concepts are illustrated with special focus on how bias of various kinds can be used to enable learning from a single demonstration, and how ambiguities in demonstrations can be identified and handled.

    Full text (pdf)
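
    As an illustrative aside on the three-stage structure described in the entry above (behavior segmentation, behavior recognition, and behavior coordination): the sketch below is a toy pipeline with invented types and trivial stage implementations. The paper's formalism is stated in terms of trajectories through information spaces, not in code.

    # Hypothetical types and a toy pipeline illustrating segmentation ->
    # recognition -> coordination. All names below are invented for illustration.
    from typing import Callable, Dict, List, Tuple

    Event = Tuple[str, str]           # (sensor reading, motor command)
    Demonstration = List[Event]       # a recorded tele-operation trace
    Primitive = Callable[[str], str]  # a pre-programmed skill: sensor -> action

    def segment(demo: Demonstration) -> List[Demonstration]:
        """Naively split a demonstration into chunks at 'idle' actions."""
        chunks: List[Demonstration] = []
        current: Demonstration = []
        for sensor, action in demo:
            if action == "idle" and current:
                chunks.append(current)
                current = []
            else:
                current.append((sensor, action))
        return chunks + ([current] if current else [])

    def recognize(chunk: Demonstration, skills: Dict[str, Primitive]) -> str:
        """Pick the skill whose output best matches the demonstrated actions."""
        return max(skills, key=lambda name: sum(skills[name](s) == a for s, a in chunk))

    def coordinate(labels: List[str]) -> List[str]:
        """Trivially coordinate: replay recognized skills in demonstrated order."""
        return labels

    skills = {"follow_wall": lambda s: "forward", "turn_at_corner": lambda s: "turn"}
    demo = [("wall", "forward"), ("wall", "forward"), ("corner", "idle"), ("corner", "turn")]
    print(coordinate([recognize(c, skills) for c in segment(demo)]))
    # -> ['follow_wall', 'turn_at_corner']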
  • 14.
    Billing, Erik A.
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Behavior recognition for segmentation of demonstrated tasks (2008). In: IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS), 2008, p. 228-234. Conference paper (Refereed)
    Abstract [en]

    One common approach to the robot learning technique Learning From Demonstration is to use a set of pre-programmed skills as building blocks for more complex tasks. One important part of this approach is recognition of these skills in a demonstration comprising a stream of sensor and actuator data. In this paper, three novel techniques for behavior recognition are presented and compared. The first technique is function-oriented and compares actions for similar inputs. The second technique is based on auto-associative neural networks and compares reconstruction errors in sensory-motor space. The third technique is based on S-Learning and compares sequences of patterns in sensory-motor space. All three techniques compute an activity level which can be seen as an alternative to a pure classification approach. Performed tests show how the former approach allows a more informative interpretation of a demonstration, by not determining "correct" behaviors but rather a number of alternative interpretations.

  • 15.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Model-free learning from demonstration (2010). In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, p. 62-71. Conference paper (Refereed)
    Abstract [en]

    A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data which is used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

    Full text (pdf)
  • 16.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Behavior recognition for learning from demonstration (2010). In: 2010 IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et al., 2010, p. 866-872. Conference paper (Refereed)
    Abstract [en]

    Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.
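
    As an illustrative aside on the idea above that the same model can act as controller (inverse model) and recognizer (forward model): the sketch below recognizes a behavior by comparing each candidate's one-step sensor predictions against an observed trace. The lookup-table forward models are invented for illustration and are not the PSLE/PSLH algorithms evaluated in the paper.

    # Toy sketch: recognize which known behavior is being demonstrated by comparing
    # each behavior's predicted next sensor state against the observed trace.

    def prediction_error(forward_model: dict, trace: list) -> float:
        """Mean 0/1 error of one-step sensor predictions along an observed trace."""
        errors = 0
        for (sensor, _action), (next_sensor, _next_action) in zip(trace, trace[1:]):
            errors += forward_model.get(sensor) != next_sensor
        return errors / max(len(trace) - 1, 1)

    def recognize_behavior(forward_models: dict, trace: list) -> str:
        """Return the behavior whose forward model explains the trace best."""
        return min(forward_models,
                   key=lambda name: prediction_error(forward_models[name], trace))

    forward_models = {
        "follow_wall": {"wall_left": "wall_left", "corner": "wall_left"},
        "dock":        {"wall_left": "corner", "corner": "docked"},
    }
    trace = [("wall_left", "fwd"), ("wall_left", "fwd"), ("wall_left", "fwd")]
    print(recognize_behavior(forward_models, trace))  # -> "follow_wall"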

  • 17.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Balkenius, Christian
    Lund University Cognitive Science, Lund, Sweden.
    Modeling the Interplay between Conditioning and Attention in a Humanoid Robot: Habituation and Attentional Blocking (2014). In: Proceedings of the 4th International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB 2014), IEEE conference proceedings, 2014, p. 41-47. Conference paper (Refereed)
    Abstract [en]

    A novel model of the role of conditioning in attention is presented and evaluated on a Nao humanoid robot. The model implements conditioning and habituation in interaction with a dynamic neural field where different stimuli compete for activation. The model can be seen as a demonstration of how stimulus-selection and action-selection can be combined and illustrates how positive or negative reinforcement has different effects on attention and action. Attention is directed toward both rewarding and punishing stimuli, but appetitive actions are only directed toward positive stimuli. We present experiments where the model is used to control a Nao robot in a task where it can select between two objects. The model demonstrates some emergent effects also observed in similar experiments with humans and animals, including attentional blocking and latent inhibition.

  • 18.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Bampouni, Elpida
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Automatic Selection of Viewpoint for Digital Human Modelling (2020). In: DHM2020: Proceedings of the 6th International Digital Human Modeling Symposium, August 31 – September 2, 2020 / [ed] Lars Hanson, Dan Högberg, Erik Brolin, Amsterdam: IOS Press, 2020, p. 61-70. Conference paper (Refereed)
    Abstract [en]

    During concept design of new vehicles, work places, and other complex artifacts, it is critical to assess positioning of instruments and regulators from the perspective of the end user. One common way to do these kinds of assessments during early product development is by the use of Digital Human Modelling (DHM). DHM tools are able to produce detailed simulations, including vision. Many of these tools comprise evaluations of direct vision and some tools are also able to assess other perceptual features. However, to our knowledge, all DHM tools available today require manual selection of manikin viewpoint. This can be both cumbersome and difficult, and requires that the DHM user possesses detailed knowledge about visual behavior of the workers in the task being modelled. In the present study, we take the first steps towards an automatic selection of viewpoint through a computational model of eye-hand coordination. We here report descriptive statistics on visual behavior in a pick-and-place task executed in virtual reality. During reaching actions, results reveal a very high degree of eye-gaze towards the target object. Participants look at the target object at least once during basically every trial, even during a repetitive action. The object remains focused during large proportions of the reaching action, even when participants are forced to move in order to reach the object. These results are in line with previous research on eye-hand coordination and suggest that DHM tools should, by default, set the viewpoint to match the manikin’s grasping location.

    Full text (pdf)
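
    As an illustrative aside on the recommendation above that DHM tools should, by default, set the manikin's viewpoint to match the grasping location: geometrically this amounts to aiming the view vector from the eye point toward the grasp target, as in the sketch below. Coordinates and function names are invented; this is not part of any DHM tool's API.

    import math

    def view_direction(eye: tuple, grasp_target: tuple) -> tuple:
        """Unit vector from the manikin's eye point toward the grasp target."""
        dx, dy, dz = (t - e for t, e in zip(grasp_target, eye))
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        return (dx / norm, dy / norm, dz / norm)

    # Example: eye at head height, object on a table in front of the manikin.
    print(view_direction(eye=(0.0, 0.0, 1.6), grasp_target=(0.4, 0.2, 0.9)))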
  • 19.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Belpaeme, Tony
    University of Plymouth, United Kingdom / IDLab - imec, Ghent University, Belgium.
    Cai, Haibin
    University of Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Ciocan, Anamaria
    Universitatea Babeş-Bolyai, Romania.
    Costescu, Cristina
    Universitatea Babeş-Bolyai, Romania.
    David, Daniel
    Universitatea Babeş-Bolyai, Romania.
    Homewood, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Hernandez Garcia, Daniel
    University of Plymouth, United Kingdom.
    Gomez Esteban, Pablo
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Liu, Honghai
    University of Portsmouth, United Kingdom.
    Nair, Vipul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Matu, Silviu
    Universitatea Babeş-Bolyai, Romania.
    Mazel, Alexandre
    SoftBank Robotics, Paris, France.
    Selescu, Mihaela
    Universitatea Babeş-Bolyai, Romania.
    Senft, Emmanuel
    University of Plymouth, United Kingdom.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, The Netherlands.
    Vanderborght, Bram
    Vrije Universiteit Brussel, Belgium / Flanders Make, Lommel, Belgium.
    Vernon, David
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Linköping University, Sweden.
    The DREAM Dataset: Supporting a data-driven study of autism spectrum disorder and robot enhanced therapy (2020). In: PLOS ONE, E-ISSN 1932-6203, Vol. 15, no. 8, article id e0236939. Article in journal (Refereed)
    Abstract [en]

    We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information of children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data with the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.

    Full text (pdf)
  • 20.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Hanson, Lars
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Digital Human Modelling in Action (2019). In: Proceedings of the 15th SweCog Conference / [ed] Linus Holm; Erik Billing, Skövde: University of Skövde, 2019, p. 25-28. Conference paper (Refereed)
    Full text (pdf)
  • 21.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Formalising learning from demonstration (2008). Report (Other academic)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. Inspired by the work on planning and actuation by LaValle, common LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as the mappings between some of these spaces. Finally, behavior primitives are introduced as one example of useful bias in the learning process, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination.

    Full text (pdf)
  • 22.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars Erik
    Department of Computing Science, Umeå University, Sweden.
    Predictive learning from demonstration (2011). In: Agents and Artificial Intelligence: Second International Conference, ICAART 2010, Valencia, Spain, January 22-24, 2010, Revised Selected Papers / [ed] Joaquim Filipe; Ana Fred; Bernadette Sharp, Berlin: Springer Berlin/Heidelberg, 2011, 1, p. 186-200. Chapter in book (Refereed)
    Abstract [en]

    A model-free learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL is inspired by several functional models of the brain. It constructs sequences of predictable sensory-motor patterns, without relying on predefined higher-level concepts. The algorithm is demonstrated on a Khepera II robot in four different tasks. During training, PSL generates a hypothesis library from demonstrated data. The library is then used to control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. In this way, the robot reproduces the demonstrated behavior. PSL is able to successfully learn and repeat three elementary tasks, but is unable to repeat a fourth, composed behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

    Full text (pdf)
  • 23.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Institutionen för datavetenskap.
    Simultaneous control and recognition of demonstrated behavior (2011). Report (Other academic)
    Abstract [en]

    A method for Learning from Demonstration (LFD) is presented and evaluated on a simulated Robosoft Kompai robot. The presented algorithm, called Predictive Sequence Learning (PSL), builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. The generated rule base can be used to control the robot and to predict expected sensor events in response to executed actions. The rule base can be trained under different contexts, represented as fuzzy sets. In the present work, contexts are used to represent different behaviors. Several behaviors can in this way be stored in the same rule base and partly share information. The context that best matches present circumstances can be identified using the predictive model and the robot can in this way automatically identify the most suitable behavior for present circumstances. The performance of PSL as a method for LFD is evaluated with, and without, contextual information. The results indicate that PSL without contexts can learn and reproduce simple behaviors. The system also successfully identifies the most suitable context in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contexts. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

    Full text (pdf)
  • 24.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Sweden.
    Robot learning from demonstration using predictive sequence learning (2012). In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2012, p. 235-250. Chapter in book (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 25.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Hellström, Thomas
    Institutionen för Datavetenskap, Umeå Universitet.
    Janlert, Lars-Erik
    Institutionen för Datavetenskap, Umeå Universitet.
    Simultaneous recognition and reproduction of demonstrated behavior (2015). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, p. 43-53, article id BICA114. Article in journal (Refereed)
    Abstract [en]

    Predictions of sensory-motor interactions with the world are often referred to as a key component in cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: 1) to identify which behavior best matches the current context and 2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.
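
    As an illustrative aside on the two uses of prediction described above (selecting the context that best matches current circumstances, and deciding when to update the confidence of sensory-motor associations): the sketch below is a toy version with invented data structures, not the fuzzy-rule implementation evaluated in the article.

    # Toy context arbitration by prediction error. Each "context" holds simple
    # sensor -> next-sensor associations with a confidence weight.

    class Context:
        def __init__(self, name):
            self.name = name
            self.assoc = {}   # sensor state -> (predicted next state, confidence)

        def predict(self, state):
            return self.assoc.get(state, (None, 0.0))[0]

        def update(self, state, observed_next, lr=0.2):
            predicted, conf = self.assoc.get(state, (observed_next, 0.0))
            # Strengthen confidence when the prediction was correct, weaken otherwise.
            conf += lr * (1.0 if predicted == observed_next else -1.0)
            self.assoc[state] = (observed_next, max(0.0, min(1.0, conf)))

    def best_context(contexts, recent_transitions):
        """Pick the context whose predictions match recent experience best."""
        return max(contexts, key=lambda c: sum(c.predict(s) == s_next
                                               for s, s_next in recent_transitions))

    kitchen, hallway = Context("to_kitchen"), Context("to_hallway")
    kitchen.update("door", "corridor")
    kitchen.update("corridor", "kitchen")
    hallway.update("door", "stairs")
    print(best_context([kitchen, hallway], [("door", "corridor")]).name)  # to_kitchen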

  • 26.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Kalckert, Andreas
    Högskolan i Skövde, Institutionen för biovetenskap. Högskolan i Skövde, Forskningsmiljön Systembiologi.
    Proceedings of the 16th SweCog Conference (2021). Conference proceedings (Refereed)
    Abstract [en]

    We welcome you to the 16th SweCog conference! After the 2020 meeting had to be cancelled, due to the unusual circumstances of facing a worldwide pandemic, we look forward to finally meeting again, although the pandemic makes us meet virtually and not in person.

    Fittingly, an emerging theme of this year's meeting is virtual reality, a technology which creates new ways of interacting with each other and with the world. It is not only a subject of active research, but increasingly also a medium for new creative experiments and applications, as evidenced by one of our keynote speakers this year. VR has now become a widely available tool in different areas of research, and has probably not yet made its full and final impact.

    SweCog 2021 also features a nod to World Usability Day. As technology becomes increasingly present in our daily lives, not least emphasized through the pandemic, we believe that cognitive science has an important role as a field of research informing the design of usable digital artifacts. As the University of Skövde stands as one example of the close relation between cognitive science and user experience design, we take the opportunity to celebrate the topic of Cognition and UX.

    This meeting has been organized jointly by the Interaction Lab and the Cognitive Neuroscience Lab of the University of Skövde. We are glad to see this interaction happening between the two labs and the two fields. We hope this is not perceived as an "invasion" by brain scientists documenting the failure of cognitive science as a field (see Nunez et al., 2019), but rather as a collaborative move of finding synergies in our research. In this spirit, we hope our meetings continue to bring people together from different parts of Sweden, from different departments, and maybe also from more disciplines, to discuss our latest research. And despite our enthusiasm for virtual reality, we sincerely hope the next meeting will allow us to meet again in person.

    Full text (pdf)
  • 27.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Proceedings of the 2015 SWECOG conference (2015). Conference proceedings (Refereed)
    Full text (pdf)
  • 28.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Applied IT, University of Gothenburg, Sweden.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland.
    Simultaneous Planning and Action: Neural-dynamic Sequencing of Elementary Behaviors in Robot Navigation (2015). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 23, no. 5, p. 243-264. Article in journal (Refereed)
    Abstract [en]

    A technique for Simultaneous Planning and Action (SPA) based on Dynamic Field Theory (DFT) is presented. The model builds on previous work on representation of sequential behavior as attractors in dynamic neural fields. Here, we demonstrate how chains of competing attractors can be used to represent dynamic plans towards a goal state. The present work can be seen as an addition to a growing body of work that demonstrates the role of DFT as a bridge between low-level reactive approaches and high-level symbol processing mechanisms. The architecture is evaluated on a set of planning problems using a simulated e-puck robot, including analysis of the system's behavior in response to noise and temporary blockages of the planned route. The system makes no explicit distinction between planning and execution phases, allowing continuous adaptation of the planned path. The proposed architecture exploits the DFT property of stability in relation to noise and changes in the environment. The neural dynamics are also exploited such that stay-or-switch action selection emerges where blockage of a planned path occurs: stay until the transient blockage is removed versus switch to an alternative route to the goal.

    Full text (pdf)
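
    As an illustrative aside on the dynamic-neural-field basis of the architecture above: the sketch below takes Euler steps of a textbook-style one-dimensional Amari field with local excitation and global inhibition, so that the stronger of two competing inputs ends up with the highest activation. It assumes NumPy and generic parameter values; it is not the authors' implementation.

    import numpy as np

    def dnf_step(u, stimulus, dt=0.05, tau=1.0, h=-2.0,
                 w_excite=1.5, w_inhibit=0.6, sigma=3.0):
        """One Euler step of tau*du/dt = -u + h + stimulus + excitation - inhibition,
        with a step output nonlinearity, local Gaussian excitation and global inhibition."""
        f = (u > 0).astype(float)                       # field output
        x = np.arange(u.size)
        kernel = w_excite * np.exp(-0.5 * ((x - x.mean()) / sigma) ** 2)
        excitation = np.convolve(f, kernel, mode="same")
        inhibition = w_inhibit * f.sum()                # global inhibition -> competition
        return u + dt / tau * (-u + h + stimulus + excitation - inhibition)

    u = np.zeros(101)
    stim = np.zeros(101)
    stim[30], stim[70] = 4.0, 3.0                       # two competing inputs
    for _ in range(400):
        u = dnf_step(u, stim)
    print(int(np.argmax(u)))                            # 30: the stronger input holds the peak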
  • 29.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Rosén, Julia
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Language Models for Human-Robot Interaction (2023). In: HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, ACM Digital Library, 2023, p. 905-906. Conference paper (Refereed)
    Abstract [en]

    Recent advances in large scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have a great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference and the source code of this integration is shared with the hope that it will serve the community in designing and evaluating new dialogue systems for robots.

    Full text (pdf)
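
    As an illustrative aside on turning a text-completion API into an open verbal dialogue with a robot, as described above: the sketch below assumes the legacy openai Python package (pre-1.0 Completion endpoint), the NAOqi qi Python bindings, a placeholder robot address, and typed user input standing in for on-robot speech recognition. It is not the source code released with the paper.

    import openai
    import qi

    openai.api_key = "YOUR_KEY"                    # placeholder
    session = qi.Session()
    session.connect("tcp://192.168.1.2:9559")      # placeholder robot address
    tts = session.service("ALTextToSpeech")

    history = "The following is a dialogue with a friendly NAO robot.\n"
    while True:
        user = input("You: ")                      # stand-in for speech recognition
        history += f"Human: {user}\nRobot:"
        reply = openai.Completion.create(          # legacy GPT-3 completion endpoint
            engine="text-davinci-003",
            prompt=history,
            max_tokens=60,
            stop=["Human:"],
        ).choices[0].text.strip()
        history += f" {reply}\n"
        tts.say(reply)                             # the robot speaks the model's reply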
  • 30.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Rosén, Julia
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Expectations of robot technology in welfare (2019). Conference paper (Refereed)
    Abstract [en]

    We report findings from a survey on expectations of robot technology in welfare within the coming 20 years. 34 assistant nurses answered a questionnaire on which tasks from their daily work they believe robots can perform, already today or in the near future. Additionally, the Negative Attitudes toward Robots Scale (NARS) was used to estimate participants' attitudes towards robots in general. Results reveal high expectations of robots, where at least half of the participants answered Already today or Within 10 years to 9 out of 10 investigated tasks. Participants were also fairly positive towards robots, reporting low scores on NARS. The obtained results can be interpreted as a serious over-estimation of what robots will be able to do in the near future, but also reflect large variation in participants' interpretations of what robots are. We identify challenges in communicating both excitement towards a technology in rapid development and realistic limitations of this technology.

    Full text (pdf)
  • 31.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Sciutti, Alessandra
    Italian Institute of Technology, Genova, Italy.
    Sandini, Giulio
    Italian Institute of Technology, Genova, Italy.
    Proactive eye-gaze in human-robot interaction (2019). Conference paper (Refereed)
    Full text (pdf)
  • 32.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Servin, Martin
    Institutionen för fysik, Umeå universitet.
    Composer: A prototype multilingual model composition tool (2013). In: MODPROD2013: 7th MODPROD Workshop on Model-Based Product Development / [ed] Peter Fritzson, Umeå: Umeå universitet, 2013. Conference paper (Other academic)
    Abstract [en]

    Facing the task to design, simulate or optimize a complex system, it is common to find models and data for the system expressed in different formats, implemented in different simulation software tools. When a new model is developed, a target platform is chosen and existing components implemented with different tools have to be converted. This results in unnecessary work duplication and lead times. The Modelica language initiative [2] partially solves this by allowing developers to move models between different tools following the Modelica standard. Another possibility is to exchange models using the Functional Mockup Interface (FMI) standard that allows computer models to be used as components in other simulations, possibly implemented using other programming languages [1]. With the Modelica and FMI standards entering development, there is need for an easy-to-use tool that supports design, editing and simulation of such multilingual systems, as well as for extracting system information for formulating and solving optimization problems.

    A prototype solution for a graphical block diagram tool for design, editing, simulation and optimization of multilingual systems has been created and evaluated for a specific system. The tool is named Composer [3]. The block diagram representation should be generic, independent of model implementations, have a standardized format and yet support efficient handling of complex data. It is natural to look for solutions among modern web technologies, specifically HTML5. The format for representing two dimensional vector graphics in HTML5 is Scalable Vector Graphics (SVG). We combine the SVG format with the FMI standard. In a first stage, we take the XML-based model description of FMI as a form for describing the interface for each component, in a language independent way. Simulation parameters can also be expressed on this form, and integrated as metadata into the SVG image.

    The prototype, using SVG in conjunction with FMI, is implemented in JavaScript and allows creation and modification of block diagrams directly in the web browser. Generated SVG images are sent to the server where they are translated to program code, allowing the simulation of the dynamical system to be executed using selected implementations. An alternative mode is to generate an optimization problem from the system definition and model parameters. The simulation/optimization result is returned to the web browser where it is plotted or processed using other standard libraries.

    The fiber production process at SCA Packaging Obbola [4] is used as an example system and modeled using Composer. The system consists of two fiber production lines that produce fiber going to a storage tank [5]. The paper machine is taking fiber from the tank as needed for production. A lot of power is required during fiber production and the purpose of the model was to investigate whether electricity costs could be reduced by rescheduling fiber production over the day, in accordance with the electricity spot price. Components are implemented for dynamical simulation using OpenModelica and for discrete event simulation using Python. The Python implementation supports constraint propagation between components and optimization over specified variables. Each component is interfaced as a Functional Mock-up Unit (FMU), allowing components to be connected and properties specified in a language independent way. From the SVG containing the high-level system information, both Modelica and Python code is generated and executed on the web server, potentially hosted in a high performance data center. More implementations could be added without modifying the SVG system description.

    We have shown that it is possible to separate system descriptions on the block diagram level from implementations and interface between the two levels using FMI. In a continuation of this project, we aim to integrate the FMI standard also for co-simulation, such that components implemented in different languages could be used together. One open question is to what extent FMUs of the same component, but implemented with different tools, will have the same model description. For the SVG-based system description to be useful, the FMI model description must remain the same, or at least contain a large overlap, for a single component implemented in different languages. This will be further investigated in future work.
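
    As an illustrative aside on the central idea above of embedding FMI-style component descriptions as metadata inside an SVG block diagram and extracting them server-side: the sketch below uses only the Python standard library and an invented metadata layout; the real Composer tool and FMI's modelDescription.xml are richer than this.

    # Sketch: read component metadata embedded in an SVG block diagram.
    # The <metadata> layout below is invented for illustration.
    import xml.etree.ElementTree as ET

    SVG = """<svg xmlns="http://www.w3.org/2000/svg">
      <g id="fiber_line_1">
        <metadata>
          <component tool="OpenModelica" fmu="FiberLine.fmu">
            <parameter name="power_kW" value="1200"/>
          </component>
        </metadata>
        <rect x="10" y="10" width="80" height="40"/>
      </g>
    </svg>"""

    root = ET.fromstring(SVG)
    ns = {"svg": "http://www.w3.org/2000/svg"}
    for group in root.findall("svg:g", ns):
        comp = group.find("svg:metadata/svg:component", ns)
        if comp is not None:
            params = {p.get("name"): p.get("value")
                      for p in comp.findall("svg:parameter", ns)}
            print(group.get("id"), comp.get("tool"), comp.get("fmu"), params)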

  • 33.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Svensson, Henrik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Interaction, Cognition and Emotion Lab, Department of Applied IT, University of Gothenburg, Sweden.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Cognition and Interaction Lab, Department of Computer and Information Science, Linköping University, Sweden.
    Finding Your Way from the Bed to the Kitchen: Re-enacting and Re-combining Sensorimotor Episodes Learned from Human Demonstration2016Inngår i: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, nr March, artikkel-id 9Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations, and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, towards a goal specified by top-down contextual information. 

    The robot model is also used in a covert mode, where actions are inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly.

    We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior towards specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.
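
    As a purely illustrative sketch of the overt/covert distinction described above (not the authors' model; the policy and forward model below are toy stand-ins), the covert mode can be thought of as a loop in which predicted percepts, rather than real ones, feed the next action choice:

    def internal_simulation(policy, forward_model, s0, steps, goal=None):
        """Chain predicted percepts into an internal simulation; no overt action is executed."""
        trajectory = [s0]
        s = s0
        for _ in range(steps):
            a = policy(s, goal)        # percept (+ top-down goal) -> action
            s = forward_model(s, a)    # predicted next percept replaces real sensing
            trajectory.append(s)
        return trajectory

    # Toy 1-D example: move towards position 3 without acting in the world.
    policy = lambda s, goal: 1 if s < goal else -1
    forward_model = lambda s, a: s + a
    print(internal_simulation(policy, forward_model, s0=0, steps=5, goal=3))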

    Fulltekst (pdf)
    fulltext
  • 34.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer & Information Science, Linköping University.
    Robot-Enhanced Therapy for Children with Autism2018Inngår i: Proceedings of the 14th SweCog Conference / [ed] Tom Ziemke, Mattias Arvola, Nils Dahlbäck, Erik Billing, Skövde: University of Skövde , 2018, s. 19-22Konferansepaper (Fagfellevurdert)
    Fulltekst (pdf)
    fulltext
  • 35.
    Cai, Haibin
    et al.
    School of Computing, University of Portsmouth, U.K..
    Fang, Yinfeng
    School of Computing, University of Portsmouth, U.K..
    Ju, Zhaojie
    School of Computing, University of Portsmouth, U.K..
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linkoping University, Sweden.
    Thill, Serge
    University of Plymouth, U.K..
    Belpaeme, Tony
    University of Plymouth, U.K..
    Vanderborght, Bram
    Vrije Universiteit Brussel and Flanders Make, Belgium.
    Vernon, David
    Carnegie Mellon University Africa, Rwanda.
    Richardson, Kathleen
    De Montfort University, U.K..
    Liu, Honghai
    School of Computing, University of Portsmouth, U.K..
    Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study2019Inngår i: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 19, nr 4, s. 1508-1518Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    It is evident that recently reported robot-assisted therapy systems for the assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, further assessing the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD and thus has the potential to increase the autonomy of robots under the supervision of a therapist and to enhance the quality of the digital description of children with ASD. The research outcomes pave the way to a feasible machine-assisted system for their behaviour assessment.
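
    A toy sketch of the fusion-and-mapping idea described above (not the system reported in the paper; feature values, class names and the nearest-centroid rule are illustrative assumptions only):

    import numpy as np

    def fuse(body_motion, facial_expression, gaze):
        """Concatenate per-modality feature vectors into one fused vector."""
        return np.concatenate([body_motion, facial_expression, gaze])

    def classify(fused, class_centroids):
        """Map the fused vector to the closest therapist-specified behavioural class."""
        return min(class_centroids, key=lambda c: np.linalg.norm(fused - class_centroids[c]))

    class_centroids = {
        "engaged":    np.array([0.8, 0.7, 0.9, 0.8]),
        "distracted": np.array([0.2, 0.3, 0.1, 0.2]),
    }
    sample = fuse(np.array([0.7, 0.6]), np.array([0.8]), np.array([0.9]))
    print(classify(sample, class_centroids))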

  • 36.
    Cao, Hoang-Long
    et al.
    Vrije Universiteit Brussel, Belgium.
    Esteban, Pablo G.
    Mechanical Engineering, Vrije Universiteit Brussel, Brussels, Belgium.
    Bartlett, Madeleine
    Plymouth University, United Kingdom.
    Baxter, Paul Edward
    School of Computer Science, University of Lincoln, United Kingdom.
    Belpaeme, Tony
    Faculty of Science and Environment, Plymouth University, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Cai, Haibin
    School of computing, University of Portsmouth, Southampton, United Kingdom.
    Coeckelbergh, Mark
    University of Twente, The Netherlands.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Universitatea Babes-Bolyai, Cluj Napoca, Romania.
    David, Daniel
    Babes-Bolyai University, Romania.
    De Beir, Albert
    Robotics & Multibody Mechanics Research Group, Vrije Universiteit Brussel (VUB), Bruxelles, Belgium.
    Hernandez Garcia, Daniel
    School of Computing, Electronics and Mathematics, University of Plymouth, United Kingdom.
    Kennedy, James
    Disney Research Los Angeles, Disney Research, Glendale, California United States of America.
    Liu, Honghai
    Institute of Industrial Research, University of Portsmouth, Portsmouth, United Kingdom.
    Matu, Silviu
    Babes-Bolyai University, Romania.
    Mazel, Alexandre
    Research, Aldebaran-Robotics, Le Kremlin Bicetre, France.
    Pandey, Amit Kumar
    Innovation Department, SoftBank Robotics, Paris, France.
    Richardson, Kathleen
    Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural System, Plymouth University, United Kingdom.
    Thill, Serge
    Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands.
    Van de Perre, Greet
    Applied Mechanics, Vrije Universiteit Brussel, Elsene, Belgium.
    Vanderborght, Bram
    Department of Mechanical Engineering, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    Electrical and Computer Engineering, Carnegie Mellon University Africa, Kigali, Rwanda.
    Wakanuma, Kutoma
    De Montfort University, United Kingdom.
    Yu, Hui
    Creative Technologies, University of Portsmouth, Portsmouth, United Kingdom.
    Zhou, Xiaolong
    Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Robot-Enhanced Therapy: Development and Validation of a Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy2019Inngår i: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 26, nr 2, s. 49-58Artikkel i tidsskrift (Fagfellevurdert)
  • 37.
    Esteban, Pablo G.
    et al.
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Baxter, Paul
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Belpaeme, Tony
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Cai, Haibin
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Coeckelbergh, Mark
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    De Beir, Albert
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Fang, Yinfeng
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ju, Zhaojie
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Kennedy, James
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Liu, Honghai
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Mazel, Alexandre
    Softbank Robotics Europe, Paris, France.
    Pandey, Amit
    Softbank Robotics Europe, Paris, France.
    Richardson, Kathleen
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Van de Perre, Greet
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vanderborght, Bram
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Yu, Hui
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder2017Inngår i: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 8, nr 1, s. 18-38Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
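
    A conceptual sketch of the supervised-autonomy idea discussed above (not the architecture described in the paper; all function names and the toy behaviours are assumptions): the robot proposes its next action autonomously, and the therapist reviews it before execution, rather than tele-operating every step as in WoZ.

    def supervised_step(perceive, propose, therapist_review, execute):
        """One control cycle: autonomous proposal, human review, then execution."""
        state = perceive()
        proposal = propose(state)              # autonomous suggestion from the robot
        decision = therapist_review(proposal)  # approve, override, or veto (None)
        if decision is not None:
            execute(decision)
        return proposal, decision

    # Toy stand-ins showing the control flow only.
    supervised_step(
        perceive=lambda: {"child_engaged": False},
        propose=lambda s: "wave" if not s["child_engaged"] else "praise",
        therapist_review=lambda action: action,      # therapist approves as proposed
        execute=lambda action: print("robot action:", action),
    )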

    Fulltekst (pdf)
    fulltext
  • 38.
    Fast-Berglund, Åsa
    et al.
    Chalmers University of Technology, Gothenburg, Sweden.
    Thorvald, Peter
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningscentrum för Virtuella system.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Palmquist, Adam
    Insert Coin, Gothenburg, Sweden.
    Romero, David
    Tecnologico de Monterrey, Mexico.
    Weichhart, Georg
    Profactor, Studgart, Austria.
    Conceptualizing Embodied Automation to Increase Transfer of Tacit knowledge in the Learning Factory2018Inngår i: "Theory, Research and Innovation in Applications": 9th International Conference on Intelligent Systems 2018 (IS’18) / [ed] Ricardo Jardim-Gonçalves, João Pedro Mendonça, Vladimir Jotsov, Maria Marques, João Martins, Robert Bierwolf, IEEE, 2018, s. 358-364, artikkel-id 8710482Konferansepaper (Fagfellevurdert)
    Abstract [en]

    This paper discusses how cooperative agent-based systems, deployed with social skills and embodied automation features, can be used to interact with operators in order to facilitate the sharing of tacit knowledge and its later conversion into explicit knowledge. The proposal is to combine social software robots (softbots) with industrial collaborative robots (co-bots) to create a digital apprentice for experienced operators in human-robot collaboration workstations. This addresses the problem within industry that experienced operators have difficulties explaining how they perform their tasks and, later, how to turn this procedural knowledge (know-how) into instructions to be shared with other operators. By using social softbots and co-bots as cooperative agents with embodied automation features, we think we can facilitate the 'externalization' of procedural knowledge in human-robot interaction(s). This is enabled by the capability of social cooperative agents with embodied automation features to continuously learn by looking over the shoulder of the operators, documenting and collaborating with them in a non-intrusive way as they perform their daily tasks.

    Fulltekst (pdf)
    fulltext
  • 39.
    Gander, Pierre
    et al.
    Department of Applied Information Technology, University of Gothenburg.
    Holm, Linus
    Department of Psychology, Umeå University.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Proceedings of the 18th SweCog Conference2023Konferanseproceedings (Fagfellevurdert)
    Fulltekst (pdf)
    fulltext
  • 40.
    Hanson, Lars
    et al.
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling. Scania CV AB, Global Industrial Development, Södertälje, Sweden.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Brolin, Erik
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Iriondo Pascual, Aitor
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Lamb, Maurice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Current Trends in Research and Application of Digital Human Modeling2022Inngår i: Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021): Volume V: Methods & Approaches / [ed] Nancy L. Black; W. Patrick Neumann; Ian Noy, Cham: Springer, 2022, s. 358-366Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The paper reports an investigation conducted during the DHM2020 Symposium regarding current trends in research and application of DHM in academia, software development, and industry. The results show that virtual reality (VR), augmented reality (AR), and digital twin are major current trends. Furthermore, results show that human diversity is considered in DHM using established methods. Results also show a shift from the assessment of static postures to assessment of sequences of actions, combined with a focus mainly on human well-being and only partly on system performance. Motion capture and motion algorithms are alternative technologies introduced to facilitate and improve DHM simulations. Results from the DHM simulations are mainly presented through pictures or animations.

  • 41.
    Hernández García, Daniel
    et al.
    University of Plymouth, United Kingdom.
    Esteban, Pablo G.
    Vrije Universiteit Brussel.
    Lee, Hee Rin
    UC San Diego, United States.
    Romeo, Marta
    University of Manchester, United Kingdom.
    Senft, Emmanuel
    University of Plymouth, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Social Robots in Therapy and Care2019Inngår i: Proceedings of the 14th ACM/IEEE International Conference on Human Robot Interaction, Daegu: IEEE conference proceedings, 2019, s. 669-670Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The Social Robots in Therapy workshop series aims at advancing research topics related to the use of robots in the contexts of Social Care and Robot-Assisted Therapy (RAT). Robots in social care and therapy have been a long-standing promise in HRI, as they have the opportunity to improve patients' lives significantly. Multiple challenges have to be addressed for this, such as building platforms that work in proximity with patients, therapists and health-care professionals; understanding user needs; developing adaptive and autonomous robot interactions; and addressing ethical questions regarding the use of robots with a vulnerable population. The full-day workshop follows last year's edition, which centered on how social robots can improve health-care interventions, how increasing the degree of autonomy of the robots might affect therapies, and how to overcome the ethical challenges inherent to the use of robot-assisted technologies. This 2nd edition of the workshop focuses on the importance of equipping social robots with socio-emotional intelligence and the ability to perform meaningful and personalized interactions. The workshop aims to bring together researchers and industry experts in the fields of Human-Robot Interaction, Machine Learning and Robots in Health and Social Care. It will be an opportunity for all to share and discuss ideas, strategies and findings to guide the design and development of robot-assisted systems for therapy and social care that can provide personalized, natural, engaging and autonomous interactions with patients (and health-care providers).

  • 42.
    Holm, Linus
    et al.
    Department of Psychology, Umeå University .
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Proceedings of the 15th SweCog Conference2019Konferanseproceedings (Fagfellevurdert)
    Abstract [en]

    In an article published in Nature Human Behaviour, Nunez et al. (2019) ask: What happened to cognitive science? The authors review bibliometric and socio-institutional aspects of the field and argue that the transition from a multi-disciplinary program to a mature, coherent inter-disciplinary field has failed. Looking at the Swedish environment, we cannot but agree. Many of us who identify as researchers in cognitive science are working at departments primarily focused on other disciplines, teaching within other subjects and publishing in journals and conferences adjacent to the field. The diversity of cognitive science is also present in the number of directions that have evolved over the years. The embodied approaches that many of us align with are not evolving towards a coherent view, but are today found under numerous labels such as situated cognition, distributed cognition, extended cognition, and enactive cognition. The so-called 4E perspectives on the field have now ventured beyond the four, and are today more often referred to as the multi-E framework.

    While we agree with Nunez et al. that we remain a multi-disciplinary, multi-perspective, and multi-method group of researchers who may share an interest in the science of the mind, rather than a coherent approach or perspective, we disagree that this entails a failure for the enterprise of cognitive science. We dare to say that the Swedish Cognitive Science Society has embraced the multi-perspectives idea by adopting an inclusive approach in the selection of research and methods presented at our conferences. We hope that SweCog will remain a forum for inclusive discussions, working against discipline conformism and isolation, at a time when both public and scientific debate is increasingly shattered.

    Fulltekst (pdf)
    fulltext
  • 43.
    Lamb, Maurice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Brundin, Malin
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Perez Luque, Estela
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Eye-Tracking Beyond Peripersonal Space in Virtual Reality: Validation and Best Practices2022Inngår i: Frontiers in Virtual Reality, E-ISSN 2673-4192, Vol. 3, artikkel-id 864653Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Recent developments in commercial virtual reality (VR) hardware with embedded eye-tracking create tremendous opportunities for human subjects researchers. Accessible eye-tracking in VR opens new opportunities for highly controlled experimental setups in which participants can engage with novel 3D digital environments. However, because VR-embedded eye-tracking differs from the majority of historical eye-tracking research, both in allowing relatively unconstrained movement and in its stimulus presentation distances, there is a need for greater discussion around methods for the implementation and validation of VR-based eye-tracking tools. The aim of this paper is to provide a practical introduction to the challenges of, and methods for, 3D gaze-tracking in VR with a focus on best practices for results validation and reporting. Specifically, first, we identify and define challenges and methods for collecting and analyzing 3D eye-tracking data in VR. Then, we introduce a validation pilot study with a focus on factors related to 3D gaze tracking. The pilot study provides both a reference data point for a common commercial hardware/software platform (HTC Vive Pro Eye) and illustrates the proposed methods. One outcome of this study was the observation that accuracy and precision of collected data may depend on stimulus distance, which has consequences for studies where stimuli are presented at varying distances. We also conclude that vergence is a potentially problematic basis for estimating gaze depth in VR and should be used with caution as the field moves towards a more established method for 3D eye-tracking.
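
    To make the vergence issue above concrete, a minimal sketch (illustrative only, not the paper's validation pipeline; eye positions and the target are made-up values) estimates the 3D gaze point as the midpoint of closest approach between the two eye rays; the estimate becomes ill-conditioned as the rays approach parallel, which is one reason vergence-based depth is fragile for distant stimuli:

    import numpy as np

    def vergence_gaze_point(p_left, d_left, p_right, d_right, eps=1e-9):
        """Midpoint of closest approach between the two gaze rays (origin p, direction d)."""
        u = d_left / np.linalg.norm(d_left)
        v = d_right / np.linalg.norm(d_right)
        w0 = p_left - p_right
        a, b, c = u @ u, u @ v, v @ v
        d, e = u @ w0, v @ w0
        denom = a * c - b * b
        if denom < eps:                      # near-parallel rays: depth is ill-defined
            return None
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return 0.5 * ((p_left + s * u) + (p_right + t * v))

    # Eyes 6.3 cm apart, both fixating a point 2 m straight ahead.
    left = np.array([-0.0315, 0.0, 0.0])
    right = np.array([0.0315, 0.0, 0.0])
    target = np.array([0.0, 0.0, 2.0])
    print(vergence_gaze_point(left, target - left, right, target - right))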

    Fulltekst (pdf)
    fulltext
  • 44.
    Lamb, Maurice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Pérez Luque, Estela
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Understanding Eye-Tracking in Virtual Reality2022Inngår i: AIC 2022 Artificial Intelligence and Cognition 2022: Proceedings of the 8th International Workshop on Artificial Intelligence and Cognition, Örebro, Sweden, 15-17 June, 2022 / [ed] Hadi Banaee; Amy Loutfi; Alessandro Saffiotti; Antonio Lieto, CEUR-WS.org , 2022, s. 180-181Konferansepaper (Fagfellevurdert)
    Fulltekst (pdf)
    fulltext
  • 45.
    Lamb, Maurice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi. Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Seunghun, Lee
    Texas Tech University, United States.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningsmiljön Virtuell produkt- och produktionsutveckling.
    Yang, James
    Texas Tech University, United States.
    Forward and Backward Reaching Inverse Kinematics (FABRIK) solver for DHM: A pilot study2022Inngår i: Proceedings of the 7th International Digital Human Modeling Symposium (DHM 2022), August 29–30, 2022, Iowa City, Iowa, USA, University of Iowa Press, 2022, Vol. 7, s. 1-11, artikkel-id 26Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Posture/motion prediction is the basis of the human motion simulations that make up the core of many digital human modeling (DHM) tools and methods. With the goal of producing realistic postures and motions, a common element of posture/motion prediction methods involves applying some set of constraints on the positions and orientations of specified body parts to biomechanical models of humans. While many formulations of biomechanical constraints may produce valid predictions, they must overcome the challenges posed by the highly redundant nature of human biomechanical systems. DHM researchers and developers typically focus on optimization formulations to facilitate the identification and selection of valid solutions. While these approaches produce optimal behavior according to some, e.g., ergonomic, optimization criteria, these solutions require considerable computational power and appear vastly different from how humans produce motion. In this paper, we take a different approach and consider the Forward and Backward Reaching Inverse Kinematics (FABRIK) solver developed in the context of computer graphics for rigged character animation. This approach identifies postures quickly and efficiently, often requiring a fraction of the computation time involved in optimization-based methods. Critically, the FABRIK solver identifies posture predictions based on a lightweight heuristic approach. Specifically, the solver works in joint position space and identifies solutions according to a minimal joint displacement principle. We apply the FABRIK solver to a seven-degree-of-freedom human arm model during a reaching task from an initial to an end target location, fixing the shoulder position and providing the end-effector (index fingertip) position and orientation from each frame of the motion capture data. In this preliminary study, predicted postures are compared to experimental data from a single human subject. Overall, the predicted postures were very close to the recorded data, with an average RMSE of 1.67°. Although more validation is necessary, we believe that the FABRIK solver has great potential for producing realistic human posture/motion in real-time, with applications in the area of DHM.
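
    The following minimal Python sketch illustrates the basic FABRIK idea for an unconstrained chain (a simplified illustration, not the paper's seven-degree-of-freedom arm model): joints are repositioned in a backward pass from the target and a forward pass from the fixed root, preserving segment lengths each time.

    import numpy as np

    def fabrik(joints, target, tol=1e-4, max_iter=100):
        """Basic FABRIK for an unconstrained kinematic chain with a fixed root."""
        p = [np.asarray(j, dtype=float) for j in joints]
        d = [np.linalg.norm(p[i + 1] - p[i]) for i in range(len(p) - 1)]
        root, target = p[0].copy(), np.asarray(target, dtype=float)
        if np.linalg.norm(target - root) > sum(d):     # unreachable: stretch towards it
            for i in range(len(p) - 1):
                u = (target - p[i]) / np.linalg.norm(target - p[i])
                p[i + 1] = p[i] + d[i] * u
            return p
        for _ in range(max_iter):
            if np.linalg.norm(p[-1] - target) < tol:
                break
            p[-1] = target                             # backward reaching pass
            for i in range(len(p) - 2, -1, -1):
                u = (p[i] - p[i + 1]) / np.linalg.norm(p[i] - p[i + 1])
                p[i] = p[i + 1] + d[i] * u
            p[0] = root                                # forward reaching pass
            for i in range(len(p) - 1):
                u = (p[i + 1] - p[i]) / np.linalg.norm(p[i + 1] - p[i])
                p[i + 1] = p[i] + d[i] * u
        return p

    # Three-segment planar chain reaching for a point within arm's length.
    print(fabrik([[0, 0], [1, 0], [2, 0], [3, 0]], [1.5, 1.5]))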

  • 46.
    Lindblom, Jessica
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningsmiljön Informationsteknologi.
    Evaluating the User Experience of Human-Robot Interaction2020Inngår i: Human-Robot Interaction: Evaluation Methods and Their Standardization / [ed] Céline Jost, Brigitte Le Pévédic, Tony Belpaeme, Cindy Bethel, Dimitrios Chrysostomou, Nigel Crook, Marine Grandgeorge, Nicole Mirnig, Cham: Springer, 2020, s. 231-256Kapittel i bok, del av antologi (Fagfellevurdert)
    Abstract [en]

    For social robots, like in all other digitally interactive systems, products, services, and devices, positive user experience (UX) is necessary in order to achieve the intended benefits and societal relevance of human–robot interaction (HRI). The experiences that humans have when interacting with robots have the power to enable, or disable, the robots’ acceptance rate and utilization in society. For a commercial robot product, it is the achieved UX in the natural context when fulfilling its intended purpose that will determine its success. The increased number of socially interactive robots in human environments and their level of participation in everyday activities obviously highlights the importance of systematically evaluating the quality of the interaction from a human-centered perspective. There is also a need for robot developers to acquire knowledge about proper UX evaluation, both in theory and in practice. In this chapter we are asking: What is UX evaluation? Why should UX evaluation be performed? When is it appropriate to conduct a UX evaluation? How could a UX evaluation be carried out? Where could UX evaluation take place? Who should perform the UX evaluation and for whom? The aim is to briefly answer these questions in the context of doing UX evaluation in HRI, highlighting evaluation processes and methods that have methodological validity and reliability as well as practical applicability. We argue that each specific HRI project needs to take the UX perspective into account during the whole development process. We suggest that a more diverse use of methods in HRI will benefit the field, and the future users of social robots will benefit even more.

  • 47.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Almér, Alexander
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, Neuroscience Center Zurich, University and ETH Zurich, Zurich, Switzerland.
    Balkenius, Christian
    Cognitive Science, Lund University, Lund, Sweden.
    Affective–associative two-process theory: a neurocomputational account of partial reinforcement extinction effects2017Inngår i: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 111, nr 5-6, s. 365-388Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    The partial reinforcement extinction effect (PREE) is an experimentally established phenomenon: behavioural response to a given stimulus is more persistent when previously inconsistently rewarded than when consistently rewarded. This phenomenon is, however, controversial in animal/human learning theory. Contradictory findings exist regarding when the PREE occurs. One body of research has found a within-subjects PREE, while another has found a within-subjects reversed PREE (RPREE). These opposing findings constitute what is considered the most important problem of PREE for theoreticians to explain. Here, we provide a neurocomputational account of the PREE, which helps to reconcile these seemingly contradictory findings of within-subjects experimental conditions. The performance of our model demonstrates how omission expectancy, learned according to low-probability reward, comes to control response choice following discontinuation of reward presentation (extinction). We find that a PREE will occur when multiple responses become controlled by omission expectation in extinction, but not when only one omission-mediated response is available. Our model exploits the affective states of reward acquisition and reward omission expectancy in order to differentially classify stimuli and differentially mediate response choice. We demonstrate that stimulus–response (retrospective) and stimulus–expectation–response (prospective) routes are required to provide a necessary and sufficient explanation of the PREE versus RPREE data and that omission representation is key for explaining the nonlinear nature of extinction data.

    Fulltekst (pdf)
    fulltext
  • 48.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lund, Anja
    Department of Chemistry and Chemical Engineering, Chalmers University of Technology, Gothenburg, Sweden / The Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Designing for a Wearable Affective Interface for the NAO Robot: A Study of Emotion Conveyance by Touch2018Inngår i: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, nr 1Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    We here present results and analysis from a study of affective tactile communication between a human and a humanoid robot (the NAO robot). In the present work, participants conveyed eight emotions to the NAO via touch. In this study, we sought to understand the potential for using a wearable affective (tactile) interface, or WAffI. The aims of our study were to address the following: (i) how emotions and affective states can be conveyed (encoded) to such a humanoid robot, (ii) what the effects are of dressing the NAO in the WAffI on emotion conveyance and (iii) what the potential is for decoding emotion and affective states. We found that subjects conveyed touch for a longer duration and over more locations on the robot when the NAO was dressed with the WAffI than when it was not. Our analysis illuminates ways by which affective valence, and separate emotions, might be decoded by a humanoid robot according to the different features of touch: intensity, duration, location, type. Finally, we discuss the types of sensors, and their distribution, that could be embedded within the WAffI and that would likely benefit Human-NAO (and Human-Humanoid) interaction along the affective tactile dimension.

    Fulltekst (pdf)
    fulltext
  • 49.
    Lowe, Robert
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. University of Gothenburg, Sweden.
    Barakova, Emilia
    Eindhoven University of Technology, The Netherlands.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Broekens, Joost
    Delft University of Technology, The Netherlands.
    Grounding emotions in robots: An introduction to the special issue2016Inngår i: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 24, nr 5, s. 263-266Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Robots inhabiting human environments need to act in relation to their own experience and embodiment as well as to social and emotional aspects. Robots that learn, act upon and incorporate their own experience and perception of others’ emotions into their responses make not only more productive artificial agents but also agents with whom humans can appropriately interact. This special issue seeks to address the significance of grounding of emotions in robots in relation to aspects of physical and homeostatic interaction in the world at an individual and social level. Specific questions concern: How can emotion and social interaction be grounded in the behavioral activity of the robotic system? Is a robot able to have intrinsic emotions? How can emotions, grounded in the embodiment of the robot, facilitate individually and socially adaptive behavior to the robot? This opening chapter provides an introduction to the articles that comprise this special issue and briefly discusses their relationship to grounding emotions in robots.

  • 50.
    Lowe, Robert
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Göteborgs Universitet, Tillämpad IT.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training2017Inngår i: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 25, nr 1, s. 5-23Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    In this article we present a novel neural network implementation of Associative Two-Process (ATP) theory based on an Actor–Critic-like architecture. Our implementation emphasizes the affective components of differential reward magnitude and reward omission expectation, and thus we model Affective-Associative Two-Process theory (Aff-ATP). ATP has been used to explain the findings of differential outcomes training (DOT) procedures, which emphasize learning differentially valuated outcomes for cueing actions previously associated with those outcomes. ATP hypothesizes the existence of a ‘prospective’ memory route through which outcome expectations can be brought to bear on decision making and can even substitute for decision making based on the ‘retrospective’ inputs of standard working memory. While DOT procedures are well recognized in the animal learning literature, they have not previously been computationally modelled. The model presented in this article helps clarify the role of ATP computationally by capturing empirical data based on DOT. Our Aff-ATP model illuminates the different roles that prospective and retrospective memory can have in decision making (combining inputs to action selection functions). In specific cases, the model’s prospective route allows for adaptive switching (correct action selection prior to learning) following changes in the stimulus–response–outcome contingencies.
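
    As a purely illustrative sketch of the general idea above (not the Aff-ATP network itself; all weights and table values are made up), action selection can combine a retrospective stimulus-response route with a prospective outcome-expectation route, so that a change in expected outcome can switch the choice before the stimulus-response mapping is relearned:

    import numpy as np

    ACTIONS = ["left", "right"]
    retro = {"tone": np.array([0.6, 0.4])}            # stimulus -> response tendencies
    pro = {"food": np.array([0.9, 0.1]),              # expected outcome -> response tendencies
           "no_food": np.array([0.1, 0.9])}

    def select_action(stimulus, expected_outcome, w_retro=0.5, w_pro=0.5):
        """Combine retrospective and prospective inputs at the action-selection stage."""
        values = w_retro * retro[stimulus] + w_pro * pro[expected_outcome]
        return ACTIONS[int(np.argmax(values))]

    # Switching the expected outcome switches the choice ("adaptive switching").
    print(select_action("tone", "food"), select_action("tone", "no_food"))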
