his.se Publications
1 - 39 of 39
  • 1.
    Alenljung, Beatrice
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Andreasson, Rebecca
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Information Technology, Visual Information & Interaction. Uppsala University, Uppsala, Sweden.
    Billing, Erik A.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    User Experience of Conveying Emotions by Touch. 2017. In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 1240-1247. Conference paper (Refereed)
    Abstract [en]

    In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users’ replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and design of social and affective robotics in particular.

  • 2.
    Almér, Alexander
    et al.
    Göteborgs Universitet, Institutionen för tillämpad informationsteknologi.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Göteborgs universitet.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of the 2016 Swecog conference. 2016. Conference proceedings (editor) (Refereed)
  • 3.
    Andreasson, Rebecca
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot. 2017. In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805. Article in journal (Refereed)
    Abstract [en]

    Affective touch has a fundamental role in human development, social bonding, and for providing emotional support in interpersonal relationships. We present what is, to our knowledge, the first HRI study of tactile conveyance of both positive and negative emotions (affective touch) on the Nao robot, based on an experimental set-up from a study of human–human tactile communication. In the present work, participants conveyed eight emotions to a small humanoid robot via touch. We found that female participants conveyed emotions for a longer time, using more varied interaction and touching more regions on the robot’s body, compared to male participants. Several differences between emotions were found such that emotions could be classified by the valence of the emotion conveyed, by combining touch amount and duration. Overall, these results show high agreement with those reported for human–human affective tactile communication and could also have impact on the design and placement of tactile sensors on humanoid robots.

  • 4.
    Arweström Jansson, Anders
    et al.
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Axelsson, Anton
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Andreasson, Rebecca
    Department of Information Technology, Visual Information & Interaction, Uppsala University, Uppsala, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of the 13th Swecog conference. 2017. Conference proceedings (editor) (Refereed)
  • 5.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    A New Look at Habits using Simulation Theory. 2017. In: Proceedings of the Digitalisation for a Sustainable Society: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition, Göteborg, Sweden, 2017. Conference paper (Refereed)
    Abstract [en]

    Habits, understood as a form of behavior re-execution without explicit deliberation, are discussed in terms of implicit anticipation, to be contrasted with explicit anticipation and mental simulation. Two hypotheses, addressing how habits and mental simulation may be implemented in the brain and to what degree they represent two modes of brain function, are formulated. Arguments for and against the two hypotheses are discussed briefly, specifically addressing whether habits and mental simulation represent two distinct functions, or to what degree there may be intermediate forms of habit execution involving partial deliberation. A potential role of habits in memory consolidation is also hypothesized.

  • 6.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior. 2012. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

    The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.
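The variable-order predictive core summarized above lends itself to a compact sketch. The following is an illustrative reconstruction, not the thesis's actual implementation; the class and method names are invented:

```python
from collections import defaultdict

class SequencePredictor:
    """Toy variable-order Markov predictor over discrete sensory-motor events.

    Counts which event follows each history suffix (up to max_order) and
    predicts using the longest suffix that was observed during training.
    """

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        for i in range(1, len(sequence)):
            for order in range(1, min(self.max_order, i) + 1):
                history = tuple(sequence[i - order:i])
                self.counts[history][sequence[i]] += 1

    def predict(self, history):
        # Prefer the longest matching suffix; back off to shorter ones.
        for order in range(min(self.max_order, len(history)), 0, -1):
            suffix = tuple(history[-order:])
            if suffix in self.counts:
                followers = self.counts[suffix]
                return max(followers, key=followers.get)
        return None
```

Trained on a repeating event stream, such a predictor replays the pattern by repeatedly emitting the next expected event, which is roughly how a PSL-style knowledge base can serve as both a controller and a recognizer.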

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of another part. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

    One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce also more complex behaviors.

  • 7.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Reversed: Robot Learning from Demonstration. 2009. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this thesis investigates techniques for learning from demonstration (LFD). LFD is a well established approach to robot learning, where a teacher demonstrates a behavior to a robot pupil. This thesis focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to execute the demonstrated behavior under varying conditions.

    Several views on representation, recognition and learning of robot behavior are presented and discussed from a cognitive and computational perspective. LFD-related concepts such as behavior, goal, demonstration, and repetition are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    A total of five algorithms for behavior recognition are proposed and evaluated, including the dynamic temporal difference algorithm Predictive Sequence Learning (PSL). PSL is model-free in the sense that it makes few assumptions about what is to be learned. One strength of PSL is that it can be used for both robot control and recognition of behavior. While many methods for behavior recognition are concerned with identifying invariants within a set of demonstrations, PSL takes a different approach by using purely predictive measures. This may be one way to reduce the need for bias in learning. PSL is, in its current form, subject to combinatorial explosion as the input space grows, which makes it necessary to introduce some higher-level coordination for learning of complex behaviors in real-world robots.

    The thesis also gives a broad introduction to computational models of the human brain, where a tight coupling between perception and action plays a central role. With the focus on generation of bias, typical features of existing attempts to explain humans' and other animals' ability to learn are presented and analyzed, from both a neurological and an information theoretic perspective. Based on this analysis, four requirements for implementing general learning ability in robots are proposed. These requirements provide guidance to how a coordinating structure around PSL and similar algorithms should be implemented in a model-free way.

  • 8.
    Billing, Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Cognitive Perspectives on Robot Behavior. 2010. In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, p. 373-382. Conference paper (Refereed)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research promote the view that intelligence is an emergent phenomenon. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond that of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition, including the extended mind, distributed cognition and situated action. These theories reveal a view of intelligence where common notions of objects, goals, language and reasoning have to be rethought. A view where behavior, as well as the agent as such, is defined by the observer rather than given by its nature. Structures in the environment emerge through interaction rather than being recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.

  • 9.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Representing behavior: Distributed theories in a context of robotics. 2007. Report (Other academic)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research see emergence as the source of intelligence. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond that of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition. These theories reveal a view of intelligence where common notions of objects, goals and reasoning have to be rethought. A view where behavior, as well as the agent as such, is in the eye of the observer rather than given. Structures in the environment are achieved through interaction rather than recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.

  • 10.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    A formalism for learning from demonstration. 2010. In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 1, no 1, p. 1-13. Article in journal (Refereed)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as mappings between some of these spaces. Finally, behavior primitives are introduced as one example of good bias in learning, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination. The formalism is exemplified through a sequence learning task where a robot equipped with a gripper arm is to move objects to specific areas. The introduced concepts are illustrated with special focus on how bias of various kinds can be used to enable learning from a single demonstration, and how ambiguities in demonstrations can be identified and handled.

  • 11.
    Billing, Erik A.
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Behavior recognition for segmentation of demonstrated tasks. 2008. In: IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS), 2008, p. 228-234. Conference paper (Refereed)
    Abstract [en]

    One common approach to the robot learning technique Learning from Demonstration is to use a set of pre-programmed skills as building blocks for more complex tasks. One important part of this approach is recognition of these skills in a demonstration comprising a stream of sensor and actuator data. In this paper, three novel techniques for behavior recognition are presented and compared. The first technique is function-oriented and compares actions for similar inputs. The second technique is based on auto-associative neural networks and compares reconstruction errors in sensory-motor space. The third technique is based on S-Learning and compares sequences of patterns in sensory-motor space. All three techniques compute an activity level which can be seen as an alternative to a pure classification approach. The tests performed show how this approach allows a more informative interpretation of a demonstration, by not determining "correct" behaviors but rather offering a number of alternative interpretations.

  • 12.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Model-free learning from demonstration. 2010. In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, p. 62-71. Conference paper (Refereed)
    Abstract [en]

    A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data which is used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of past sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

  • 13.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Behavior recognition for learning from demonstration. 2010. In: 2010 IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et al., 2010, p. 866-872. Conference paper (Refereed)
    Abstract [en]

    Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.
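The forward-model reading of recognition described in this abstract can be illustrated with a minimal prediction-error comparison. The interface below is hypothetical and does not reproduce the paper's PSLE/PSLH error measures:

```python
def recognize(forward_models, history, observed):
    """Return the behavior whose forward model best predicts the observation.

    forward_models: dict of behavior name -> callable(history) that returns
        a predicted sensor vector (each model plays the forward-model role)
    history: past sensory-motor events
    observed: the actual sensor vector at the current step
    """
    def prediction_error(name):
        predicted = forward_models[name](history)
        # Squared error between predicted and observed sensor values
        return sum((p - o) ** 2 for p, o in zip(predicted, observed))

    return min(forward_models, key=prediction_error)
```

The point the abstract makes is that the same learned model could be queried in the inverse direction for control, so recognition comes almost for free once the controller is trained.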

  • 14.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Balkenius, Christian
    Lund University Cognitive Science, Lund, Sweden.
    Modeling the Interplay between Conditioning and Attention in a Humanoid Robot: Habituation and Attentional Blocking. 2014. In: Proceedings of The 4th International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB 2014), IEEE conference proceedings, 2014, p. 41-47. Conference paper (Refereed)
    Abstract [en]

    A novel model of the role of conditioning in attention is presented and evaluated on a Nao humanoid robot. The model implements conditioning and habituation in interaction with a dynamic neural field where different stimuli compete for activation. The model can be seen as a demonstration of how stimulus-selection and action-selection can be combined and illustrates how positive and negative reinforcement have different effects on attention and action. Attention is directed toward both rewarding and punishing stimuli, but appetitive actions are only directed toward positive stimuli. We present experiments where the model is used to control a Nao robot in a task where it can select between two objects. The model demonstrates some emergent effects also observed in similar experiments with humans and animals, including attentional blocking and latent inhibition.
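The competition-for-activation dynamic mentioned above can be sketched with a one-dimensional Amari-style field. This is a generic textbook formulation, not the model from the paper, and all parameter values are illustrative:

```python
import numpy as np

def dnf_step(u, stimulus, dt=0.05, tau=1.0, h=-1.0,
             k_exc=1.0, k_inh=0.02, sigma=3.0):
    """One Euler step of a 1-D Amari-style dynamic neural field.

    u: current field activation over a discretized feature dimension
    stimulus: external input (e.g. bottom-up salience plus conditioned value)
    Local Gaussian excitation and global inhibition let competing
    inputs suppress each other.
    """
    x = np.arange(len(u))
    f = 1.0 / (1.0 + np.exp(-u))                      # sigmoidal output
    dist = np.abs(x[:, None] - x[None, :])
    kernel = k_exc * np.exp(-dist ** 2 / (2 * sigma ** 2)) - k_inh
    du = (-u + h + stimulus + kernel @ f) / tau
    return u + dt * du
```

Driving such a field with two input bumps of unequal strength lets the stronger one win the competition for activation, which is the stimulus-selection behavior the model builds on.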

  • 15.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Formalising learning from demonstration. 2008. Report (Other academic)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. Inspired by the work on planning and actuation by LaValle, common LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as the mappings between some of these spaces. Finally, behavior primitives are introduced as one example of useful bias in the learning process, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination.

  • 16.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Institutionen för datavetenskap.
    Predictive learning from demonstration. 2011. In: Agents and Artificial Intelligence: Second International Conference, ICAART 2010, Valencia, Spain, January 22-24, 2010. Revised Selected Papers / [ed] Filipe, Joaquim; Fred, Ana; Sharp, Bernadette, Berlin: Springer Verlag, 2011, 1, p. 186-200. Chapter in book (Refereed)
    Abstract [en]

    A model-free learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL is inspired by several functional models of the brain. It constructs sequences of predictable sensory-motor patterns, without relying on predefined higher-level concepts. The algorithm is demonstrated on a Khepera II robot in four different tasks. During training, PSL generates a hypothesis library from demonstrated data. The library is then used to control the robot by continually predicting the next action, based on the sequence of past sensor and motor events. In this way, the robot reproduces the demonstrated behavior. PSL is able to successfully learn and repeat three elementary tasks, but is unable to repeat a fourth, composed behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

  • 17.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Institutionen för datavetenskap.
    Simultaneous control and recognition of demonstrated behavior. 2011. Report (Other academic)
    Abstract [en]

    A method for Learning from Demonstration (LFD) is presented and evaluated on a simulated Robosoft Kompai robot. The presented algorithm, called Predictive Sequence Learning (PSL), builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. The generated rule base can be used to control the robot and to predict expected sensor events in response to executed actions. The rule base can be trained under different contexts, represented as fuzzy sets. In the present work, contexts are used to represent different behaviors. Several behaviors can in this way be stored in the same rule base and partly share information. The context that best matches present circumstances can be identified using the predictive model, and the robot can in this way automatically identify the most suitable behavior for present circumstances. The performance of PSL as a method for LFD is evaluated with, and without, contextual information. The results indicate that PSL without contexts can learn and reproduce simple behaviors. The system also successfully identifies the most suitable context in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contexts. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 18.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Sweden.
    Robot learning from demonstration using predictive sequence learning. 2012. In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2012, p. 235-250. Chapter in book (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.
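The move from a discrete to a continuous state space can be illustrated with Gaussian fuzzy memberships. The rule format below is a hypothetical simplification of FPSL's rule base (which, per the chapter, also encodes temporal relations between events):

```python
import math

def membership(x, center, width=1.0):
    """Gaussian fuzzy membership of a continuous sensor value."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def rule_strength(state, antecedents):
    """Fuzzy AND (minimum) over a rule's antecedent memberships.

    state: dict of sensor name -> current continuous reading
    antecedents: dict of sensor name -> (center, width) of the fuzzy set
    """
    return min(membership(state[s], c, w) for s, (c, w) in antecedents.items())

def select_action(state, rules):
    """Fire the consequent of the most strongly activated rule."""
    best = max(rules, key=lambda r: rule_strength(state, r["if"]))
    return best["then"]
```

Because membership degrades smoothly with distance from the fuzzy set's center, nearby sensor readings activate the same rule, which is what removes the combinatorial blow-up of a discretized state space.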

  • 19.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hellström, Thomas
    Institutionen för Datavetenskap, Umeå Universitet.
    Janlert, Lars-Erik
    Institutionen för Datavetenskap, Umeå Universitet.
    Simultaneous recognition and reproduction of demonstrated behavior. 2015. In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, p. 43-53, article id BICA114. Article in journal (Refereed)
    Abstract [en]

    Predictions of sensory-motor interactions with the world are often referred to as a key component in cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: 1) to identify which behavior best matches the current context and 2) to decide when to learn, i.e., when to update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 20.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Proceedings of the 2015 SWECOG conference, 2015. Conference proceedings (editor) (Refereed)
  • 21.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Applied IT, University of Gothenburg, Sweden.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland.
    Simultaneous Planning and Action: Neural-dynamic Sequencing of Elementary Behaviors in Robot Navigation, 2015. In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 23, no 5, p. 243-264. Article in journal (Refereed)
    Abstract [en]

    A technique for Simultaneous Planning and Action (SPA) based on Dynamic Field Theory (DFT) is presented. The model builds on previous work on representation of sequential behavior as attractors in dynamic neural fields. Here, we demonstrate how chains of competing attractors can be used to represent dynamic plans towards a goal state. The present work can be seen as an addition to a growing body of work that demonstrates the role of DFT as a bridge between low-level reactive approaches and high-level symbol processing mechanisms. The architecture is evaluated on a set of planning problems using a simulated e-puck robot, including analysis of the system's behavior in response to noise and temporary blockages of the planned route. The system makes no explicit distinction between planning and execution phases, allowing continuous adaptation of the planned path. The proposed architecture exploits the DFT property of stability in relation to noise and changes in the environment. The neural dynamics are also exploited such that stay-or-switch action selection emerges where blockage of a planned path occurs: stay until the transient blockage is removed versus switch to an alternative route to the goal.
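    The stay-or-switch behavior described above can be illustrated with a minimal two-node competition in the spirit of neural-dynamic models. All parameters, and the choice to model a blocked route as inhibitory input, are illustrative assumptions, not the paper's architecture:

```python
import math

def f(x):
    """Sigmoid output nonlinearity (gain 4)."""
    return 1.0 / (1.0 + math.exp(-4.0 * x))

def simulate(inputs, u=(0.0, 0.0), steps=400, dt=0.05,
             h=-2.0, w_self=4.0, w_inh=4.0):
    """Two competing neural-dynamic nodes (candidate routes A and B)
    with self-excitation and mutual inhibition. The node receiving
    stronger input wins and forms a self-sustained attractor."""
    u = list(u)
    for _ in range(steps):
        out = [f(x) for x in u]  # outputs from the previous state
        for i in range(2):
            du = (-u[i] + h + inputs[i]
                  + w_self * out[i] - w_inh * out[1 - i])
            u[i] += dt * du
    return u

# Route A slightly preferred: node A becomes active, B is suppressed.
u_plan = simulate([3.0, 2.5])
# Blockage of route A, modeled here as inhibitory input: the
# activation switches to the alternative route B.
u_blocked = simulate([-2.0, 2.5], u=u_plan)
```

    The winning node stays active even as inputs fluctuate (the "stay" case); only a sustained blockage signal destabilizes the attractor and lets the alternative route take over (the "switch" case).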

  • 22.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Servin, Martin
    Department of Physics, Umeå University, Sweden.
    Composer: A prototype multilingual model composition tool, 2013. In: MODPROD2013: 7th MODPROD Workshop on Model-Based Product Development / [ed] Peter Fritzson, Umeå: Umeå universitet, 2013. Conference paper (Other academic)
    Abstract [en]

    Facing the task to design, simulate or optimize a complex system, it is common to find models and data for the system expressed in different formats, implemented in different simulation software tools. When a new model is developed, a target platform is chosen and existing components implemented with different tools have to be converted. This results in unnecessary work duplication and lead times. The Modelica language initiative [2] partially solves this by allowing developers to move models between different tools following the Modelica standard. Another possibility is to exchange models using the Functional Mock-up Interface (FMI) standard, which allows computer models to be used as components in other simulations, possibly implemented using other programming languages [1]. With the Modelica and FMI standards entering development, there is need for an easy-to-use tool that supports design, editing and simulation of such multilingual systems, as well as for retracting system information for formulating and solving optimization problems.

    A prototype solution for a graphical block diagram tool for design, editing, simulation and optimization of multilingual systems has been created and evaluated for a specific system. The tool is named Composer [3].

    The block diagram representation should be generic, independent of model implementations, have a standardized format and yet support efficient handling of complex data. It is natural to look for solutions among modern web technologies, specifically HTML5. The format for representing two-dimensional vector graphics in HTML5 is Scalable Vector Graphics (SVG). We combine the SVG format with the FMI standard. In a first stage, we take the XML-based model description of FMI as a form for describing the interface for each component, in a language-independent way. Simulation parameters can also be expressed on this form, and integrated as metadata into the SVG image.

    The prototype, using SVG in conjunction with FMI, is implemented in JavaScript and allows creation and modification of block diagrams directly in the web browser. Generated SVG images are sent to the server where they are translated to program code, allowing the simulation of the dynamical system to be executed using selected implementations. An alternative mode is to generate an optimization problem from the system definition and model parameters. The simulation/optimization result is returned to the web browser where it is plotted or processed using other standard libraries.

    The fiber production process at SCA Packaging Obbola [4] is used as an example system and modeled using Composer. The system consists of two fiber production lines that produce fiber going to a storage tank [5]. The paper machine is taking fiber from the tank as needed for production. A lot of power is required during fiber production and the purpose of the model was to investigate whether electricity costs could be reduced by rescheduling fiber production over the day, in accordance with the electricity spot price. Components are implemented for dynamical simulation using OpenModelica and for discrete event simulation using Python. The Python implementation supports constraint propagation between components and optimization over specified variables. Each component is interfaced as a Functional Mock-up Unit (FMU), allowing components to be connected and properties specified in a language-independent way. From the SVG containing the high-level system information, both Modelica and Python code is generated and executed on the web server, potentially hosted in a high performance data center. More implementations could be added without modifying the SVG system description.

    We have shown that it is possible to separate system descriptions on the block diagram level from implementations and interface between the two levels using FMI. In a continuation of this project, we aim to integrate the FMI standard also for co-simulation, such that components implemented in different languages can be used together. One open question is to what extent FMUs of the same component, but implemented with different tools, will have the same model description. For the SVG-based system description to be useful, the FMI model description must remain the same, or at least contain a large overlap, for a single component implemented in different languages. This will be further investigated in future work.
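    The idea of embedding an FMI-style, language-independent interface description as metadata inside an SVG block diagram might be sketched as follows. The namespace, element names, and parameters are invented for illustration; the abstract does not specify Composer's actual format:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace for the embedded model description.
FMI_NS = "http://example.org/fmi-metadata"
ET.register_namespace("fmi", FMI_NS)

def add_component(svg_root, name, x, y, parameters):
    """Add a block-diagram component as an SVG group whose <metadata>
    child carries an FMI-style interface description."""
    group = ET.SubElement(svg_root, "g", {"id": name})
    ET.SubElement(group, "rect", {"x": str(x), "y": str(y),
                                  "width": "80", "height": "40"})
    meta = ET.SubElement(group, "metadata")
    model = ET.SubElement(meta, f"{{{FMI_NS}}}modelDescription",
                          {"modelName": name})
    for pname, value in parameters.items():
        ET.SubElement(model, f"{{{FMI_NS}}}parameter",
                      {"name": pname, "value": str(value)})
    return group

svg = ET.Element("svg", {"xmlns": "http://www.w3.org/2000/svg"})
add_component(svg, "fiber_line_1", 10, 10,
              {"capacity": 5.0, "power_kW": 900})
svg_text = ET.tostring(svg, encoding="unicode")
```

    Because the metadata lives in a foreign namespace, SVG renderers ignore it, while a server-side code generator can read the same file and emit, e.g., Modelica or Python for each component.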

  • 23.
    Billing, Erik
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Svensson, Henrik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Applied IT, University of Gothenburg, Sweden.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Computer and Information Science, Linköping University.
    Finding Your Way from the Bed to the Kitchen: Re-enacting and Re-combining Sensorimotor Episodes Learned from Human Demonstration, 2016. In: Vol. 3, no 9. Article in journal (Refereed)
    Abstract [en]

    Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations, and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, towards a goal specified by top-down contextual information. 

    The robot model is also used in a covert mode, where actions are inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly.

    We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior towards specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.

  • 24.
    Esteban, Pablo G.
    et al.
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Baxter, Paul
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Belpaeme, Tony
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Cai, Haibin
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Coeckelbergh, Mark
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    De Beir, Albert
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Fang, Yinfeng
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ju, Zhaojie
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Kennedy, James
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Liu, Honghai
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Mazel, Alexandre
    Softbank Robotics Europe, Paris, France.
    Pandey, Amit
    Softbank Robotics Europe, Paris, France.
    Richardson, Kathleen
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Van de Perre, Greet
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vanderborght, Bram
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Yu, Hui
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder, 2017. In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 8, no 1, p. 18-38. Article in journal (Refereed)
    Abstract [en]

    Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.

  • 25.
    Jiong, Sun
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Seoane, Fernando
    Swedish School of Textiles, University of Borås, Borås, Sweden / Inst. for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Dept. Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden.
    Zhou, Bo
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Categories of touch: Classifying human touch using a soft tactile sensor, 2017. Conference paper (Refereed)
    Abstract [en]

    Social touch plays an important role not only in human communication but also in human-robot interaction. We here report results from an ongoing study on affective human-robot interaction. In our previous research, touch type is shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot and a method based on PCA and kNN is applied in the experiment to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate for classified touch type of 71%, with a large variability between different types of touch. Results are discussed in relation to affective HRI and social robotics.
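    The classification stage described above can be sketched with a minimal k-nearest-neighbour classifier. The feature set below (mean pressure, contact area, duration) is hypothetical, and the PCA step the paper applies before kNN is omitted for brevity:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify a touch sample by majority vote among its k nearest
    training samples (Euclidean distance in feature space)."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Illustrative features per sample: (mean pressure, contact area,
# duration in seconds), with a touch-type label.
train = [
    ((0.9, 0.7, 0.3), "hit"), ((0.8, 0.6, 0.2), "hit"),
    ((0.2, 0.3, 1.5), "stroke"), ((0.3, 0.4, 1.8), "stroke"),
    ((0.5, 0.9, 2.5), "hold"), ((0.6, 0.8, 2.2), "hold"),
]
print(knn_classify(train, (0.25, 0.35, 1.6)))  # prints: stroke
```

    In the reported pipeline, PCA first projects the raw sensor-matrix time series into a low-dimensional space, and kNN then votes in that space rather than on raw features.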

  • 26.
    Jiong, Sun
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Redyuk, Sergey
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Högberg, Dan
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Tactile Interaction and Social Touch: Classifying Human Touch using a Soft Tactile Sensor, 2017. In: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction, New York: Association for Computing Machinery (ACM), 2017, p. 523-526. Conference paper (Refereed)
    Abstract [en]

    This paper presents an ongoing study on affective human-robot interaction. In our previous research, touch type is shown to be informative for communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot, and six machine learning methods, including CNN, RNN and C3D, are implemented to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 95% by C3D for the classified touch types, providing stable classification results for developing social touch technology.

  • 27.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Almér, Alexander
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, Neuroscience Center Zurich, University and ETH Zurich, Zurich, Switzerland.
    Balkenius, Christian
    Cognitive Science, Lund University, Lund, Sweden.
    Affective–associative two-process theory: a neurocomputational account of partial reinforcement extinction effects, 2017. In: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 111, no 5-6, p. 365-388. Article in journal (Refereed)
    Abstract [en]

    The partial reinforcement extinction effect (PREE) is an experimentally established phenomenon: behavioural response to a given stimulus is more persistent when previously inconsistently rewarded than when consistently rewarded. This phenomenon is, however, controversial in animal/human learning theory. Contradictory findings exist regarding when the PREE occurs. One body of research has found a within-subjects PREE, while another has found a within-subjects reversed PREE (RPREE). These opposing findings constitute what is considered the most important problem of PREE for theoreticians to explain. Here, we provide a neurocomputational account of the PREE, which helps to reconcile these seemingly contradictory findings of within-subjects experimental conditions. The performance of our model demonstrates how omission expectancy, learned according to low probability reward, comes to control response choice following discontinuation of reward presentation (extinction). We find that a PREE will occur when multiple responses become controlled by omission expectation in extinction, but not when only one omission-mediated response is available. Our model exploits the affective states of reward acquisition and reward omission expectancy in order to differentially classify stimuli and differentially mediate response choice. We demonstrate that stimulus–response (retrospective) and stimulus–expectation–response (prospective) routes are required to provide a necessary and sufficient explanation of the PREE versus RPREE data and that omission representation is key for explaining the nonlinear nature of extinction data.
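    For readers unfamiliar with why the PREE is considered paradoxical: a standard Rescorla-Wagner learner (shown below; this is the textbook associative baseline, not the authors' model) acquires weaker associative strength under partial reinforcement, which naively predicts faster, not slower, extinction:

```python
def rescorla_wagner(rewards, alpha=0.2):
    """Standard Rescorla-Wagner update: V += alpha * (r - V).
    Returns the associative strength V after each trial."""
    v, history = 0.0, []
    for r in rewards:
        v += alpha * (r - v)
        history.append(v)
    return history

# Continuous reinforcement: reward on every trial; V approaches 1.
continuous = rescorla_wagner([1] * 40)
# Partial (50%) reinforcement: V settles at a lower asymptote.
partial = rescorla_wagner([1, 0] * 20)
```

    Because simple models like this predict the weaker association should extinguish first, accounting for the PREE requires the additional prospective, omission-expectancy machinery the abstract describes.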

  • 28.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lund, Anja
    Department of Chemistry and Chemical Engineering, Chalmers University of Technology, Gothenburg, Sweden / The Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Designing for a Wearable Affective Interface for the NAO Robot: A Study of Emotion Conveyance by Touch, 2018. In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, no 1. Article in journal (Refereed)
    Abstract [en]

    We here present results and analysis from a study of affective tactile communication between human and humanoid robot (the NAO robot). In the present work, participants conveyed eight emotions to the NAO via touch. In this study, we sought to understand the potential for using a wearable affective (tactile) interface, or WAffI. The aims of our study were to address the following: (i) how emotions and affective states can be conveyed (encoded) to such a humanoid robot, (ii) what are the effects of dressing the NAO in the WAffI on emotion conveyance and (iii) what is the potential for decoding emotion and affective states. We found that subjects conveyed touch for longer duration and over more locations on the robot when the NAO was dressed with WAffI than when it was not. Our analysis illuminates ways by which affective valence, and separate emotions, might be decoded by a humanoid robot according to the different features of touch: intensity, duration, location, type. Finally, we discuss the types of sensors and their distribution as they may be embedded within the WAffI and that would likely benefit Human-NAO (and Human-Humanoid) interaction along the affective tactile dimension.

  • 29.
    Lowe, Robert
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. University of Gothenburg, Sweden.
    Barakova, Emilia
    Eindhoven University of Technology, The Netherlands.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Broekens, Joost
    Delft University of Technology, The Netherlands.
    Grounding emotions in robots: An introduction to the special issue, 2016. In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 24, no 5, p. 263-266. Article in journal (Refereed)
    Abstract [en]

    Robots inhabiting human environments need to act in relation to their own experience and embodiment as well as to social and emotional aspects. Robots that learn, act upon and incorporate their own experience and perception of others’ emotions into their responses make not only more productive artificial agents but also agents with whom humans can appropriately interact. This special issue seeks to address the significance of grounding of emotions in robots in relation to aspects of physical and homeostatic interaction in the world at an individual and social level. Specific questions concern: How can emotion and social interaction be grounded in the behavioral activity of the robotic system? Is a robot able to have intrinsic emotions? How can emotions, grounded in the embodiment of the robot, facilitate individually and socially adaptive behavior to the robot? This opening chapter provides an introduction to the articles that comprise this special issue and briefly discusses their relationship to grounding emotions in robots.

  • 30.
    Lowe, Robert
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Applied IT, University of Gothenburg, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training, 2017. In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 25, no 1, p. 5-23. Article in journal (Refereed)
    Abstract [en]

    In this article we present a novel neural network implementation of Associative Two-Process (ATP) theory based on an Actor–Critic-like architecture. Our implementation emphasizes the affective components of differential reward magnitude and reward omission expectation and thus we model Affective-Associative Two-Process theory (Aff-ATP). ATP has been used to explain the findings of differential outcomes training (DOT) procedures, which emphasize learning differentially valuated outcomes for cueing actions previously associated with those outcomes. ATP hypothesizes the existence of a ‘prospective’ memory route through which outcome expectations can be brought to bear on decision making and can even substitute for decision making based on the ‘retrospective’ inputs of standard working memory. While DOT procedures are well recognized in the animal learning literature they have not previously been computationally modelled. The model presented in this article helps clarify the role of ATP computationally through the capturing of empirical data based on DOT. Our Aff-ATP model illuminates the different roles that prospective and retrospective memory can have in decision making (combining inputs to action selection functions). In specific cases, the model’s prospective route allows for adaptive switching (correct action selection prior to learning) following changes in the stimulus–response–outcome contingencies.

  • 31.
    Lowe, Robert
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Sandamirskaya, Yulia
    Theory of Cognitive Systems, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    A Neural Dynamic Model of Associative Two-Process Theory: The Differential Outcomes Effect and Infant Development, 2014. In: IEEE ICDL-EPIROB 2014: The Fourth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, IEEE conference proceedings, 2014, p. 440-447. Conference paper (Refereed)
  • 32.
    Montebelli, Alberto
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik A.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Messina Dahlberg, Giulia
    Department of Educational Research and Development, University of Borås.
    Reframing HRI Education: A Dialogic Reformulation of HRI Education to Promote Diverse Thinking and Scientific Progress, 2017. In: Journal of Human-Robot Interaction, E-ISSN 2163-0364, Vol. 6, no 2, p. 3-26. Article in journal (Refereed)
    Abstract [en]

    Over the last few years, technological developments in semi-autonomous machines have raised awareness about the strategic importance of human-robot interaction (HRI) and its technical and social implications. At the same time, HRI still lacks an established pedagogic tradition in the coordination of its intrinsically interdisciplinary nature. This scenario presents steep and urgent challenges for HRI education. Our contribution presents a normative interdisciplinary dialogic framework for HRI education, denoted InDia wheel, aimed toward seamless and coherent integration of the variety of disciplines that contribute to HRI. Our framework deemphasizes technical mastery, reducing it to a necessary yet not sufficient condition for HRI design, thus modifying the stereotypical narration of HRI-relevant disciplines and creating favorable conditions for a more diverse participation of students. Prospectively, we argue, the design of an educational ‘space of interaction’ that focuses on a variety of voices, without giving supremacy to one over the other, will be key to successful HRI education and practice.

  • 33.
    Redyuk, Sergey
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik A.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Challenges in face expression recognition from video, 2017. In: SweDS 2017: The 5th Swedish Workshop on Data Science / [ed] Alexander Schliep, 2017. Conference paper (Refereed)
    Abstract [en]

    Identification of emotion from face expressions is a relatively well understood problem where state-of-the-art solutions perform almost as well as humans. However, in many practical applications, disrupting factors still make identification of face expression a very challenging problem. Within the project DREAM - Development of Robot Enhanced Therapy for Children with Autism Spectrum Disorder (ASD), we are identifying face expressions from children with ASD, during therapy. Identified face expressions are used both in the online system, to guide the behavior of the robot, and off-line, to automatically annotate video for measurements of clinical outcomes.

    This setup puts several new challenges on the face expression technology. First of all, in contrast to most open databases of face expressions comprising adult faces, we are recognizing emotions from children between the age of 4 to 7 years. Secondly, children with ASD may show emotions differently, compared to typically developed children. Thirdly, the children move freely during the intervention and, despite the use of several cameras tracking the face of the child from different angles, we rarely have a full frontal view of the face. Fourthly, and finally, the amount of native data is very limited.

    Although we have access to extensive video recorded material from therapy sessions with ASD children, potentially constituting a very valuable dataset for both training and testing of face expression implementations, this data proved to be difficult to use. A session of 10 minutes of video may comprise only a few instances of expressions, e.g. smiling. As such, although we have many hours of video in total, the data is very sparse and the number of clear face expressions is still rather small for it to be used as training data in most machine learning (ML) techniques.

    We therefore focused on the use of synthetic datasets for transfer learning, trying to overcome the challenges mentioned above. Three techniques were evaluated: (1) convolutional neural networks for image classification by analyzing separate video frames, (2) recurrent neural networks for sequence classification to capture facial dynamics, and (3) ML algorithms classifying pre-extracted facial landmarks.

    The performance of all three models is unsatisfactory. Although the proposed models were of high accuracy, approximately 98%, while classifying a test set, they performed poorly on the real-world data. This was due to the usage of a synthetic dataset which had mostly a frontal view of faces. The models, which had not seen similar examples before, failed to classify them correctly. The accuracy decreased drastically when the child rotated her head or covered a part of her face. Even if the frame clearly captured a facial expression, ML algorithms were not able to provide a stable positive classification rate. Thus, elaboration on training datasets and designing robust ML models are required. Another option is to incorporate voice and gestures of the child into the model to classify emotional state as a complex concept.

  • 34.
    Richardson, Kathleen
    et al.
    De Montfort University, Leicester, United Kingdom.
    Coeckelbergh, Mark
    De Montfort University, Leicester, United Kingdom.
    Wakunuma, Kutoma
    De Montfort University, Leicester, United Kingdom.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Gómez, Pablo
    Vrije Universiteit, Brussel, Belgium.
    Vanderborght, Bram
    Vrije Universiteit, Brussel, Belgium.
    Belpaeme, Tony
    University of Plymouth, Plymouth, United Kingdom.
    Robot Enhanced Therapy for Children with Autism (DREAM): A Social Model of Autism. 2018. In: IEEE technology & society magazine, ISSN 0278-0097, E-ISSN 1937-416X, Vol. 37, no 1, p. 30-39. Article in journal (Refereed)
    Abstract [en]

    The development of social robots for children with autism has been a growth field for the past 15 years. This article reviews studies of robots and autism as a neurodevelopmental disorder that impacts social communication development, and the ways social robots could help children with autism develop social skills. Drawing on ethics research from the EU-funded Development of Robot-Enhanced Therapy for Children with Autism (DREAM) project (Framework 7), this paper explores how ethics evolved and developed in this European project.

  • 35.
    Rosén, Julia
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Richardson, Kathleen
    De Montfort University, Leicester, United Kingdom.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    The Robot Illusion: Facts and Fiction. 2018. Conference paper (Refereed)
    Abstract [en]

    "To researchers and technicians working with robots on a daily basis, it is most often obvious what is part of the staging and what is not, and thus it may be easy to forget that illusions like these are not explicit and that the general public may actually be deceived. Should the disclosure of the illusion be the responsibility of roboticists? Or should we assume that human beings, on the basis of their experience as audiences in film, theatre, music, or video gaming, are able to enjoy the experience without needing to know everything in advance about how the illusion is created? Therefore, we believe that a discussion of whether or not researchers should be more transparent about what kinds of machines they are presenting is necessary. How can researchers present interactive robots in an engaging way, without misleading the audience?"

  • 36.
    Syrén, Felicia
    et al.
    Textile Materials Technology, Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Borås, Sweden.
    Li, Cai
    University of Skövde, School of Informatics.
    Billing, Erik
    University of Skövde, School of Informatics.
    Lund, Anja
    Textile Materials Technology, Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Borås, Sweden.
    Nierstrasz, Vincent
    Textile Materials Technology, Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Borås, Sweden.
    Characterization of textile resistive strain sensors. 2016. Conference paper (Other academic)
  • 37.
    Vernon, David
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Hemeren, Paul
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Thill, Serge
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Ziemke, Tom
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Computer and Information Science, Linköping University, Sweden.
    An Architecture-oriented Approach to System Integration in Collaborative Robotics Research Projects: An Experience Report. 2015. In: Journal of Software Engineering for Robotics, ISSN 2035-3928, E-ISSN 2035-3928, Vol. 6, no 1, p. 15-32. Article in journal (Refereed)
    Abstract [en]

    Effective system integration requires strict adherence to strong software engineering standards, a practice not much favoured in many collaborative research projects. We argue that component-based software engineering (CBSE) provides a way to overcome this problem because it provides flexibility for developers while requiring the adoption of only a modest number of software engineering practices. This focus on integration complements software re-use, the more usual motivation for adopting CBSE. We illustrate our argument by showing how a large-scale system architecture for an application in the domain of robot-enhanced therapy for children with autism spectrum disorder (ASD) has been implemented. We highlight the manner in which the integration process is facilitated by the architecture implementation of a set of placeholder components that comprise stubs for all functional primitives, as well as the complete implementation of all inter-component communications. We focus on the component-port-connector meta-model and show that the YARP robot platform is a well-matched middleware framework for the implementation of this model. To facilitate the validation of port-connector communication, we configure the initial placeholder implementation of the system architecture as a discrete event simulation and control the invocation of each component’s stub primitives probabilistically. This allows the system integrator to adjust the rate of inter-component communication while respecting its asynchronous and concurrent character. Also, individual ports and connectors can be periodically selected as the simulator cycles through each primitive in each sub-system component. This ability to control the rate of connector communication considerably eases the task of validating component-port-connector behaviour in a large system. 
Ultimately, over and above its well-accepted benefits for software re-use in robotics, CBSE strikes a good balance between software engineering best practice and the socio-technical problem of managing effective integration in collaborative robotics research projects. 
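    The abstract's idea of driving placeholder components as a discrete event simulation, invoking each stub primitive probabilistically, can be sketched as follows. This is an illustrative toy, not the DREAM or YARP implementation: the component names, primitive names, and message format are invented for the example.

```python
import random

class StubComponent:
    """Placeholder component: each functional primitive is a stub that
    merely records the message it would send over its output port."""
    def __init__(self, name, primitives):
        self.name = name
        self.primitives = primitives  # names of stubbed functional primitives
        self.sent = []                # messages "written" to output connectors

    def invoke(self, primitive):
        self.sent.append(f"{self.name}.{primitive}: dummy payload")

def simulate(components, steps, rate, seed=42):
    """Discrete event loop: each step, every stub primitive fires with
    probability `rate`, so the integrator can throttle the overall level
    of inter-component communication while it stays asynchronous."""
    rng = random.Random(seed)
    for _ in range(steps):
        for comp in components:
            for prim in comp.primitives:
                if rng.random() < rate:
                    comp.invoke(prim)

# Hypothetical sub-system components with stubbed primitives.
perception = StubComponent("sensoryInterpretation", ["getFaces", "getGaze"])
control = StubComponent("actuationSubsystem", ["moveHead"])
simulate([perception, control], steps=100, rate=0.1)
print(len(perception.sent), len(control.sent))
```

Raising or lowering `rate` adjusts connector traffic without changing any component code, which mirrors the validation benefit the abstract describes.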

  • 38.
    Zhou, Bo
    et al.
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    Cruz, Heber Zurian
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    Atefi, Seyed Reza
    Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Seoane, Fernando
    Inst. for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Dept. Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden / Swedish School of Textiles, University of Borås, Borås, Sweden.
    Lukowicz, Paul
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    TouchMe: Full-textile Touch Sensitive Skin for Encouraging Human-Robot Interaction. 2017. Conference paper (Refereed)
  • 39.
    Zhou, Bo
    et al.
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Velez Altamirano, Carlos Andres
    Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Cruz Zurian, Heber
    Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Atefi, Seyed Reza
    Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Seoane Martinez, Fernando
    Swedish School of Textiles, University of Borås, Borås, Sweden / Institute for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Department Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden.
    Lukowicz, Paul
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction. 2017. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no 11, article id 2585. Article in journal (Refereed)
    Abstract [en]

    In this paper, we developed a fully textile sensing fabric for tactile touch sensing, serving as a robot skin to detect human-robot interactions. The sensor covers a 20 × 20 cm area with 400 sensitive points, sampled at 50 Hz per point. We defined seven gestures inspired by the social and emotional interactions of typical person-to-person or person-to-pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors; temporal features are then calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best-performing feature-classifier combination recognizes the gestures with 93.3% accuracy for a known group of participants, and 89.1% for strangers.
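    The two-stage pipeline described in the abstract, spatial reduction of each pressure frame to a descriptor, then temporal summarization, can be sketched as below. This is a minimal illustration under assumptions: the descriptor channels and statistics are invented stand-ins, and the wavelet-analysis stage is omitted.

```python
import numpy as np

def frame_descriptor(frame):
    """Reduce one 20x20 pressure frame to a small spatial descriptor:
    mean pressure, peak pressure, and number of activated points."""
    return np.array([frame.mean(), frame.max(), float((frame > 0.5).sum())])

def temporal_features(frames):
    """Stack per-frame descriptors, then summarize each descriptor channel
    over time with basic statistics (mean, std, min, max)."""
    d = np.array([frame_descriptor(f) for f in frames])  # shape (T, 3)
    return np.concatenate([d.mean(0), d.std(0), d.min(0), d.max(0)])

# Synthetic gesture: 50 frames (1 s at 50 Hz) from a 20x20 sensor grid.
rng = np.random.default_rng(1)
gesture = rng.random((50, 20, 20))
features = temporal_features(gesture)
print(features.shape)  # (12,)
```

The resulting fixed-length vector (3 descriptor channels × 4 statistics) is what a downstream classifier would consume, regardless of gesture duration.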
