his.se Publications
1 - 43 of 43
  • 1.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Rosén, Julia
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Expectations of robot technology in welfare (2019). In: The second workshop on social robots in therapy and care, Daegu, 2019. Conference paper (Refereed)
    Abstract [en]

    We report findings from a survey on expectations of robot technology in welfare within the coming 20 years. 34 assistant nurses answered a questionnaire on which tasks from their daily work they believe robots can perform, already today or in the near future. Additionally, the Negative Attitudes toward Robots Scale (NARS) was used to estimate participants' attitudes towards robots in general. Results reveal high expectations of robots, where at least half of the participants answered Already today or Within 10 years to 9 out of 10 investigated tasks. Participants were also fairly positive towards robots, reporting low scores on NARS. The obtained results can be interpreted as a serious over-estimation of what robots will be able to do in the near future, but also reveal large variation in participants' interpretations of what robots are. We identify challenges in communicating both excitement towards a technology in rapid development and realistic limitations of this technology.

  • 2.
    Cai, Haibin
    et al.
    School of Computing, University of Portsmouth, U.K..
    Fang, Yinfeng
    School of Computing, University of Portsmouth, U.K..
    Ju, Zhaojie
    School of Computing, University of Portsmouth, U.K..
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babe-Bolyai University, Cluj-Napoca, Romania.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linkoping University, Sweden.
    Thill, Serge
    University of Plymouth, U.K..
    Belpaeme, Tony
    University of Plymouth, U.K..
    Vanderborght, Bram
    Vrije Universiteit Brussel and Flanders Make, Belgium.
    Vernon, David
    Carnegie Mellon University Africa, Rwanda.
    Richardson, Kathleen
    De Montfort University, U.K..
    Liu, Honghai
    School of Computing, University of Portsmouth, U.K..
    Sensing-enhanced Therapy System for Assessing Children with Autism Spectrum Disorders: A Feasibility Study (2019). In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 19, no. 4, pp. 1508-1518. Journal article (Refereed)
    Abstract [en]

    It is evident that recently reported robot-assisted therapy systems for assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, further assessing the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and to enhance the quality of the digital description of children with ASD. The research outcomes pave the way to a feasible machine-assisted system for their behaviour assessment.
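The abstract above describes fusing multi-modal sensory features and mapping the result to therapist-specified behaviour classes. A minimal sketch of such late fusion with a nearest-centroid mapping follows; the feature names, dimensions, class labels, and classification rule are illustrative assumptions, not the paper's actual pipeline.

```python
import math

# Hypothetical per-modality feature vectors (assumed names and
# dimensions, not the paper's actual features).
def fuse(body_motion, facial, gaze):
    """Late fusion by simple concatenation of modality features."""
    return body_motion + facial + gaze

def nearest_centroid(sample, centroids):
    """Map a fused feature vector to the closest behaviour class."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Therapist-specified behaviour classes with illustrative centroids.
centroids = {
    "engaged":    fuse([0.8, 0.2], [0.9], [0.7]),
    "distracted": fuse([0.3, 0.9], [0.2], [0.1]),
}

sample = fuse([0.7, 0.3], [0.8], [0.6])
print(nearest_centroid(sample, centroids))  # prints "engaged"
```

Any real system would replace the centroid rule with a trained classifier; the sketch only shows where fusion sits relative to the class mapping.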

  • 3.
    Hernández García, Daniel
    et al.
    University of Plymouth.
    Esteban, Pablo G.
    Vrije Universiteit Brussel.
    Lee, Hee Rin
    UC San Diego.
    Romeo, Marta
    University of Manchester.
    Senft, Emmanuel
    University of Plymouth.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Social Robots in Therapy and Care (2019). In: Proceedings of the 14th ACM/IEEE International Conference on Human Robot Interaction, Daegu: IEEE conference proceedings, 2019. Conference paper (Refereed)
  • 4.
    Alenljung, Beatrice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Conveying Emotions by Touch to the Nao Robot: A User Experience Perspective (2018). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, no. 4, article id 82. Journal article (Refereed)
    Abstract [en]

    Social robots are expected gradually to be used by more and more people in a wider range of settings, domestic as well as professional. As a consequence, the features and quality requirements on human–robot interaction will increase, comprising possibilities to communicate emotions, establishing a positive user experience, e.g., using touch. In this paper, the focus is on depicting how humans, as the users of robots, experience tactile emotional communication with the Nao Robot, as well as identifying aspects affecting the experience and touch behavior. A qualitative investigation was conducted as part of a larger experiment. The major findings consist of 15 different aspects that vary along one or more dimensions and how those influence the four dimensions of user experience that are present in the study, as well as the different parts of touch behavior of conveying emotions.

  • 5.
    Andreasson, Rebecca
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Affective Touch in Human–Robot Interaction: Conveying Emotion to the Nao Robot (2018). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 10, no. 4, pp. 473-491. Journal article (Refereed)
    Abstract [en]

    Affective touch has a fundamental role in human development, social bonding, and for providing emotional support in interpersonal relationships. We present what is, to our knowledge, the first HRI study of tactile conveyance of both positive and negative emotions (affective touch) on the Nao robot, based on an experimental set-up from a study of human–human tactile communication. In the present work, participants conveyed eight emotions to a small humanoid robot via touch. We found that female participants conveyed emotions for a longer time, using more varied interaction and touching more regions on the robot’s body, compared to male participants. Several differences between emotions were found, such that emotions could be classified by valence by combining touch amount and duration. Overall, these results show high agreement with those reported for human–human affective tactile communication and could also have impact on the design and placement of tactile sensors on humanoid robots.
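The abstract above notes that conveyed emotions could be classified by valence by combining touch amount and duration. A toy sketch of that kind of rule follows; the thresholds, units, and decision rule are assumptions for illustration, not the study's fitted classifier.

```python
def classify_valence(touch_amount, duration_s,
                     amount_threshold=5, duration_threshold=3.0):
    """Toy rule: guess the valence of a conveyed emotion from two touch
    features. Thresholds and the combination rule are illustrative
    assumptions, not values estimated from the study's data."""
    # Each feature votes; one positive vote is enough in this sketch.
    score = (touch_amount >= amount_threshold) + (duration_s >= duration_threshold)
    return "positive" if score >= 1 else "negative"

print(classify_valence(touch_amount=7, duration_s=4.2))  # prints "positive"
print(classify_valence(touch_amount=2, duration_s=1.0))  # prints "negative"
```

A fitted model would learn the thresholds (or a decision boundary) from labelled trials; the sketch only shows the feature combination the abstract describes.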

  • 6.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer & Information Science, Linköping University.
    Robot-Enhanced Therapy for Children with Autism (2018). In: Proceedings of the 14th SweCog Conference, Skövde: University of Skövde, 2018, pp. 19-22. Conference paper (Refereed)
  • 7.
    Fast-Berglund, Åsa
    et al.
    Chalmers University of Technology, Gothenburg, Sweden .
    Thorvald, Peter
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningscentrum för Virtuella system.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Palmquist, Adam
    Insert Coin, Gothenburg, Sweden.
    Romero, David
    Tecnologico de Monterrey, Mexico.
    Weichhart, Georg
    Profactor, Studgart, Austria.
    Conceptualizing Embodied Automation to Increase Transfer of Tacit Knowledge in the Learning Factory (2018). In: Proceedings of IEEE 2018 International Conference on Intelligent Systems (IS), IEEE, 2018. Book chapter (Refereed)
    Abstract [en]

    This paper will discuss how cooperative agent-based systems, deployed with social skills and embodied automation features, can be used to interact with the operators in order to facilitate sharing of tacit knowledge and its later conversion into explicit knowledge. The proposal is to combine social software robots (softbots) with industrial collaborative robots (co-bots) to create a digital apprentice for experienced operators in human-robot collaboration workstations. This is to address the problem within industry that experienced operators have difficulties in explaining how they perform their tasks and, later, how to turn this procedural knowledge (know-how) into instructions to be shared among other operators. By using social softbots and co-bots, as cooperative agents with embodied automation features, we think we can facilitate the 'externalization' of procedural knowledge in human-robot interaction(s). This is enabled by the capability of social cooperative agents with embodied automation features to continuously learn by looking over the shoulders of the operators, and to document and collaborate with them in a non-intrusive way as they perform their daily tasks.

  • 8.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Andreasson, Rebecca
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Alenljung, Beatrice
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lund, Anja
    Department of Chemistry and Chemical Engineering, Chalmers University of Technology, Gothenburg, Sweden / The Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Designing for a Wearable Affective Interface for the NAO Robot: A Study of Emotion Conveyance by Touch (2018). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 2, no. 1. Journal article (Refereed)
    Abstract [en]

    We here present results and analysis from a study of affective tactile communication between human and humanoid robot (the NAO robot). In the present work, participants conveyed eight emotions to the NAO via touch. In this study, we sought to understand the potential for using a wearable affective (tactile) interface, or WAffI. The aims of our study were to address the following: (i) how emotions and affective states can be conveyed (encoded) to such a humanoid robot, (ii) what are the effects of dressing the NAO in the WAffI on emotion conveyance, and (iii) what is the potential for decoding emotion and affective states. We found that subjects conveyed touch for longer duration and over more locations on the robot when the NAO was dressed with the WAffI than when it was not. Our analysis illuminates ways by which affective valence, and separate emotions, might be decoded by a humanoid robot according to the different features of touch: intensity, duration, location, type. Finally, we discuss the types of sensors and their distribution as they may be embedded within the WAffI, which would likely benefit Human-NAO (and Human-Humanoid) interaction along the affective tactile dimension.

  • 9.
    Messina Dahlberg, Giulia
    et al.
    University of Gothenburg, Sweden.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Montebelli, Alberto
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Negotiating epistemic spaces for dialogue across disciplines in higher education: The case of the Pepper experiment (2018). In: EARLI, Joint SIG10-21 Conference, Luxembourg, 2018. Conference paper (Refereed)
  • 10.
    Richardson, Kathleen
    et al.
    De Montfort University, Leicester, United Kingdom.
    Coeckelbergh, Mark
    De Montfort University, Leicester, United Kingdom.
    Wakunuma, Kutoma
    De Montfort University, Leicester, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Gómez, Pablo
    Vrije Universiteit, Brussel, Belgium.
    Vanderborght, Bram
    Vrije Universiteit, Brussel, Belgium.
    Belpaeme, Tony
    University of Plymouth, Plymouth, United Kingdom.
    Robot Enhanced Therapy for Children with Autism (DREAM): A Social Model of Autism (2018). In: IEEE Technology & Society Magazine, ISSN 0278-0097, E-ISSN 1937-416X, Vol. 37, no. 1, pp. 30-39. Journal article (Refereed)
    Abstract [en]

    The development of social robots for children with autism has been a growth field for the past 15 years. This article reviews studies of robots and autism, a neurodevelopmental disorder that impacts social communication development, and the ways social robots could help children with autism develop social skills. Drawing on ethics research from the EU-funded Development of Robot-Enhanced Therapy for Children with Autism (DREAM) project (Framework 7), this paper explores how ethics evolved and developed in this European project.

  • 11.
    Rosén, Julia
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Richardson, Kathleen
    De Montfort University, Leicester, United Kingdom.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    The Robot Illusion: Facts and Fiction (2018). In: Proceedings of Workshop in Explainable Robotics System (HRI), 2018. Conference paper (Refereed)
    Abstract [en]

    "To researchers and technicians working with robots on a daily basis, it is most often obvious what is part of the staging and what is not, and thus it may be easy to forget that illusions like these are not explicit and that the general public may actually be deceived. Should the disclosure of the illusion be the responsibility of roboticists? Or should the assumption be that human beings, on the basis of their experiences as an audience in film, theatre, music or video gaming, are able to enjoy the experience without needing to know everything in advance about how the illusion is created? Therefore, we believe a discussion is necessary of whether or not researchers should be more transparent about what kinds of machines they are presenting. How can researchers present interactive robots in an engaging way, without misleading the audience?"

  • 12.
    Alenljung, Beatrice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Andreasson, Rebecca
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Information Technology, Visual Information & Interaction. Uppsala University, Uppsala, Sweden.
    Billing, Erik A.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    User Experience of Conveying Emotions by Touch (2017). In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, pp. 1240-1247. Conference paper (Refereed)
    Abstract [en]

    In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying of positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users’ replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and design of social and affective robotics in particular.

  • 13.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    A New Look at Habits using Simulation Theory (2017). In: Proceedings of the Digitalisation for a Sustainable Society: Embodied, Embedded, Networked, Empowered through Information, Computation & Cognition, Göteborg, Sweden, 2017. Conference paper (Refereed)
    Abstract [en]

    Habits, as a form of behavior re-execution without explicit deliberation, are discussed in terms of implicit anticipation, to be contrasted with explicit anticipation and mental simulation. Two hypotheses, addressing how habits and mental simulation may be implemented in the brain and to what degree they represent two modes of brain function, are formulated. Arguments for and against the two hypotheses are discussed briefly, specifically addressing whether habits and mental simulation represent two distinct functions, or to what degree there may be intermediate forms of habit execution involving partial deliberation. A potential role of habits in memory consolidation is also hypothesized.

  • 14.
    Esteban, Pablo G.
    et al.
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Baxter, Paul
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Belpaeme, Tony
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Cai, Haibin
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Cao, Hoang-Long
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Coeckelbergh, Mark
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Costescu, Cristina
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    David, Daniel
    Department of Clinical Psychology and Psychotherapy, Babeş-Bolyai University, Cluj-Napoca, Romania.
    De Beir, Albert
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Fang, Yinfeng
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ju, Zhaojie
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Kennedy, James
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Liu, Honghai
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Mazel, Alexandre
    Softbank Robotics Europe, Paris, France.
    Pandey, Amit
    Softbank Robotics Europe, Paris, France.
    Richardson, Kathleen
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Leicester, United Kingdom.
    Senft, Emmanuel
    Centre for Robotics and Neural Systems, Plymouth University, Plymouth, United Kingdom.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Van de Perre, Greet
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vanderborght, Bram
    Robotics and Multibody Mechanics Research Group, Agile & Human Centered Production and Robotic Systems Research Priority of Flanders Make, Vrije Universiteit Brussel, Brussels, Belgium.
    Vernon, David
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Yu, Hui
    School of Computing, University of Portsmouth, Portsmouth, United Kingdom.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder (2017). In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 8, no. 1, pp. 18-38. Journal article (Refereed)
    Abstract [en]

    Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.

  • 15.
    Lowe, Robert
    et al.
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Almér, Alexander
    Department of Applied IT, University of Gothenburg, Gothenburg, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, Neuroscience Center Zurich, University and ETH Zurich, Zurich, Switzerland.
    Balkenius, Christian
    Cognitive Science, Lund University, Lund, Sweden.
    Affective–associative two-process theory: a neurocomputational account of partial reinforcement extinction effects (2017). In: Biological Cybernetics, ISSN 0340-1200, E-ISSN 1432-0770, Vol. 111, no. 5-6, pp. 365-388. Journal article (Refereed)
    Abstract [en]

    The partial reinforcement extinction effect (PREE) is an experimentally established phenomenon: behavioural response to a given stimulus is more persistent when previously inconsistently rewarded than when consistently rewarded. This phenomenon is, however, controversial in animal/human learning theory. Contradictory findings exist regarding when the PREE occurs. One body of research has found a within-subjects PREE, while another has found a within-subjects reversed PREE (RPREE). These opposing findings constitute what is considered the most important problem of PREE for theoreticians to explain. Here, we provide a neurocomputational account of the PREE, which helps to reconcile these seemingly contradictory findings of within-subjects experimental conditions. The performance of our model demonstrates how omission expectancy, learned according to low probability reward, comes to control response choice following discontinuation of reward presentation (extinction). We find that a PREE will occur when multiple responses become controlled by omission expectation in extinction, but not when only one omission-mediated response is available. Our model exploits the affective states of reward acquisition and reward omission expectancy in order to differentially classify stimuli and differentially mediate response choice. We demonstrate that stimulus–response (retrospective) and stimulus–expectation–response (prospective) routes are required to provide a necessary and sufficient explanation of the PREE versus RPREE data and that omission representation is key for explaining the nonlinear nature of extinction data.

  • 16.
    Lowe, Robert
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Göteborgs Universitet, Tillämpad IT.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training (2017). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 25, no. 1, pp. 5-23. Journal article (Refereed)
    Abstract [en]

    In this article we present a novel neural network implementation of Associative Two-Process (ATP) theory based on an Actor–Critic-like architecture. Our implementation emphasizes the affective components of differential reward magnitude and reward omission expectation, and thus we model Affective-Associative Two-Process theory (Aff-ATP). ATP has been used to explain the findings of differential outcomes training (DOT) procedures, which emphasize learning differentially valuated outcomes for cueing actions previously associated with those outcomes. ATP hypothesizes the existence of a ‘prospective’ memory route through which outcome expectations can be brought to bear on decision making and can even substitute for decision making based on the ‘retrospective’ inputs of standard working memory. While DOT procedures are well recognized in the animal learning literature, they have not previously been computationally modelled. The model presented in this article helps clarify the role of ATP computationally through the capturing of empirical data based on DOT. Our Aff-ATP model illuminates the different roles that prospective and retrospective memory can have in decision making (combining inputs to action selection functions). In specific cases, the model’s prospective route allows for adaptive switching (correct action selection prior to learning) following changes in the stimulus–response–outcome contingencies.

  • 17.
    Montebelli, Alberto
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik A.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Messina Dahlberg, Giulia
    Department of Educational Research and Development, University of Borås.
    Reframing HRI Education: A Dialogic Reformulation of HRI Education to Promote Diverse Thinking and Scientific Progress (2017). In: Journal of Human-Robot Interaction, E-ISSN 2163-0364, Vol. 6, no. 2, pp. 3-26. Journal article (Refereed)
    Abstract [en]

    Over the last few years, technological developments in semi-autonomous machines have raised awareness about the strategic importance of human-robot interaction (HRI) and its technical and social implications. At the same time, HRI still lacks an established pedagogic tradition in the coordination of its intrinsically interdisciplinary nature. This scenario presents steep and urgent challenges for HRI education. Our contribution presents a normative interdisciplinary dialogic framework for HRI education, denoted InDia wheel, aimed toward seamless and coherent integration of the variety of disciplines that contribute to HRI. Our framework deemphasizes technical mastery, reducing it to a necessary yet not sufficient condition for HRI design, thus modifying the stereotypical narration of HRI-relevant disciplines and creating favorable conditions for a more diverse participation of students. Prospectively, we argue, the design of an educational 'space of interaction' that focuses on a variety of voices, without giving supremacy to one over the other, will be key to successful HRI education and practice.

  • 18.
    Redyuk, Sergey
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik A.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Challenges in face expression recognition from video2017Ingår i: SweDS 2017: The 5th Swedish Workshop on Data Science / [ed] Alexander Schliep, 2017Konferensbidrag (Refereegranskat)
    Abstract [en]

Identification of emotion from face expressions is a relatively well understood problem where state-of-the-art solutions perform almost as well as humans. However, in many practical applications, disrupting factors still make identification of face expressions a very challenging problem. Within the project DREAM (Development of Robot Enhanced Therapy for Children with Autism Spectrum Disorder, ASD), we are identifying face expressions from children with ASD, during therapy. Identified face expressions are used both in the online system, to guide the behavior of the robot, and off-line, to automatically annotate video for measurements of clinical outcomes.

This setup puts several new challenges on the face expression technology. First of all, in contrast to most open databases of face expressions comprising adult faces, we are recognizing emotions from children between the ages of 4 and 7 years. Secondly, children with ASD may show emotions differently, compared to typically developed children. Thirdly, the children move freely during the intervention and, despite the use of several cameras tracking the face of the child from different angles, we rarely have a full frontal view of the face. Fourthly, and finally, the amount of native data is very limited.

Although we have access to extensive video recorded material from therapy sessions with ASD children, potentially constituting a very valuable dataset for both training and testing of face expression implementations, this data proved to be difficult to use. A session of 10 minutes of video may comprise only a few instances of expressions, e.g., smiling. As such, although we have many hours of video in total, the data is very sparse and the number of clear face expressions is still rather small for it to be used as training data in most machine learning (ML) techniques.

We therefore focused on the use of synthetic datasets for transfer learning, trying to overcome the challenges mentioned above. Three techniques were evaluated: (1) convolutional neural networks for image classification, analyzing separate video frames, (2) recurrent neural networks for sequence classification, to capture facial dynamics, and (3) ML algorithms classifying pre-extracted facial landmarks.

The performance of all three models was unsatisfactory. Although the proposed models achieved high accuracy, approximately 98%, when classifying a test set, they performed poorly on the real-world data. This was due to the use of a synthetic dataset which mostly contained frontal views of faces. The models, which had not seen similar examples before, failed to classify them correctly. The accuracy decreased drastically when the child rotated her head or covered a part of her face. Even if the frame clearly captured a facial expression, the ML algorithms were not able to provide a stable positive classification rate. Thus, elaboration on training datasets and design of robust ML models are required. Another option is to incorporate the voice and gestures of the child into the model, to classify emotional state as a complex concept.

  • 19.
    Sun, Jiong
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Seoane, Fernando
    Swedish School of Textiles, University of Borås, Borås, Sweden / Inst. for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Dept. Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden.
    Zhou, Bo
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningscentrum för Virtuella system.
    Hemeren, Paul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Categories of touch: Classifying human touch using a soft tactile sensor2017Konferensbidrag (Refereegranskat)
    Abstract [en]

Social touch plays an important role not only in human communication but also in human-robot interaction. We here report results from an ongoing study on affective human-robot interaction. In our previous research, touch type was shown to be informative of the communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot, and a method based on PCA and kNN is applied in the experiment to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate for classified touch type of 71%, with a large variability between different types of touch. Results are discussed in relation to affective HRI and social robotics.
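As a rough illustration of the PCA + kNN approach named in the abstract, the sketch below projects flattened pressure-map samples onto principal components and classifies them with leave-one-out nearest neighbours. The synthetic touch types, their spatial signatures and all parameters are hypothetical stand-ins, not the study's sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for touch samples: flattened pressure-map frames,
# three hypothetical touch types with different spatial signatures.
def make_touch(kind, n=30, size=64):
    base = np.zeros(size)
    if kind == 0:
        base[:8] = 5.0    # "poke": small, intense region (invented label)
    elif kind == 1:
        base[:32] = 1.5   # "stroke": broad, light region (invented label)
    else:
        base[:] = 0.8     # "pat": diffuse contact (invented label)
    return base + rng.normal(0, 0.3, (n, size))

X = np.vstack([make_touch(k) for k in range(3)])
y = np.repeat([0, 1, 2], 30)

# PCA via SVD on mean-centered data: keep the top 5 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# kNN (k=3) classification in PCA space, leave-one-out.
def knn_predict(Z, y, i, k=3):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf                      # exclude the query itself
    nearest = np.argsort(d)[:k]
    return np.bincount(y[nearest]).argmax()

preds = np.array([knn_predict(Z, y, i) for i in range(len(y))])
accuracy = (preds == y).mean()
print(round(accuracy, 2))
```

On such cleanly separated synthetic classes the accuracy is near 1.0; the 71% reported in the paper reflects the much harder variability of real tactile interaction.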

  • 20.
    Sun, Jiong
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Redyuk, Sergey
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Högberg, Dan
    Högskolan i Skövde, Institutionen för ingenjörsvetenskap. Högskolan i Skövde, Forskningscentrum för Virtuella system.
    Hemeren, Paul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Tactile Interaction and Social Touch: Classifying Human Touch using a Soft Tactile Sensor2017Ingår i: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction, New York: Association for Computing Machinery (ACM), 2017, s. 523-526Konferensbidrag (Refereegranskat)
    Abstract [en]

This paper presents an ongoing study on affective human-robot interaction. In our previous research, touch type was shown to be informative of the communicated emotion. Here, a soft matrix array sensor is used to capture the tactile interaction between human and robot, and 6 machine learning methods, including CNN, RNN and C3D, are implemented to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 95% by C3D for the classified touch types, providing stable classification results for developing social touch technology.

  • 21.
    Zhou, Bo
    et al.
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    Cruz, Heber Zurian
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    Atefi, Seyed Reza
    Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Seoane, Fernando
    Inst. for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Dept. Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden / Swedish School of Textiles, University of Borås, Borås, Sweden.
    Lukowicz, Paul
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / University of Kaiserslautern, Kaiserslautern, Germany.
    TouchMe: Full-textile Touch Sensitive Skin for Encouraging Human-Robot Interaction2017Konferensbidrag (Refereegranskat)
  • 22.
    Zhou, Bo
    et al.
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Velez Altamirano, Carlos Andres
    Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Cruz Zurian, Heber
    Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Atefi, Seyed Reza
    Swedish School of Textiles, University of Borås, Borås, Sweden.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Seoane Martinez, Fernando
    Swedish School of Textiles, University of Borås, Borås, Sweden / Institute for Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden / Department Biomedical Engineering, Karolinska University Hospital, Stockholm, Sweden.
    Lukowicz, Paul
    German Research Center for Artificial Intelligence, Kaiserslautern, Germany / Department Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Textile Pressure Mapping Sensor for Emotional Touch Detection in Human-Robot Interaction2017Ingår i: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, nr 11, artikel-id 2585Artikel i tidskrift (Refereegranskat)
    Abstract [en]

In this paper, we developed a fully textile sensing fabric for tactile touch sensing, used as a robot skin to detect human-robot interactions. The sensor covers a 20 cm × 20 cm area with 400 sensitive points sampled at 50 Hz per point. We defined seven gestures, inspired by the social and emotional interactions of typical person-to-person or person-to-pet scenarios. We conducted two groups of mutually blinded experiments, involving 29 participants in total. The data processing algorithm first reduces the spatial complexity to frame descriptors; temporal features are then calculated through basic statistical representations and wavelet analysis. Various classifiers are evaluated, and the feature calculation algorithms are analyzed in detail to determine the contribution of each stage and segment. The best performing feature-classifier combination can recognize the gestures with a 93.3% accuracy from a known group of participants, and 89.1% from strangers.
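The two-stage processing the abstract describes (spatial reduction of each frame to a descriptor, then temporal statistics over the recording) can be sketched roughly as below. The data is synthetic, the descriptor channels and threshold are illustrative choices, and the paper's wavelet analysis is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a textile pressure-map recording:
# 50 frames of a 20x20 sensor grid (the sensor samples at 50 Hz).
frames = rng.random((50, 20, 20)) * np.linspace(0.2, 1.0, 50)[:, None, None]

# Stage 1: reduce each frame to a small descriptor vector
# (total pressure, peak pressure, active area, center of pressure).
def frame_descriptor(f, threshold=0.5):   # threshold is an arbitrary choice
    total = f.sum()
    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    cy, cx = (f * ys).sum() / total, (f * xs).sum() / total
    return np.array([total, f.max(), (f > threshold).mean(), cy, cx])

desc = np.array([frame_descriptor(f) for f in frames])   # shape (50, 5)

# Stage 2: summarize each descriptor channel over time with basic
# statistics; a fixed-length vector per recording, ready for a classifier.
features = np.concatenate([
    desc.mean(axis=0), desc.std(axis=0),
    desc.min(axis=0), desc.max(axis=0),
])
print(features.shape)
```

The point of the spatial-then-temporal split is that recordings of any length map to one fixed-length feature vector, which any standard classifier can consume.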

  • 23.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Svensson, Henrik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Göteborgs Universitet, Tillämpad IT.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linköping University.
    Finding Your Way from the Bed to the Kitchen: Re-enacting and Re-combining Sensorimotor Episodes Learned from Human Demonstration2016Ingår i: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 3, nr 9Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations, and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, towards a goal specified by top-down contextual information. 

    The robot model is also used in a covert mode, where actions are inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly.

    We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior towards specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.
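The overt/covert distinction described above can be illustrated with a minimal sketch: a policy maps states to actions, and in covert mode actions are inhibited while a forward model generates the predicted next state, so the agent rolls out a path internally. The 1-D corridor world, the policy and the forward model here are hypothetical hand-coded stand-ins for the models learned from demonstration.

```python
# Policy: at every corridor position, "move right toward the goal".
policy = {pos: +1 for pos in range(10)}
# Forward model: predicts the next position, clamped to the corridor.
forward_model = lambda pos, a: max(0, min(9, pos + a))

def simulate(start, goal, covert, world_step=None, max_steps=20):
    pos, trace = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            return trace
        a = policy[pos]
        # Covert mode: the forward model's prediction substitutes for
        # acting in the world; overt mode queries the world itself.
        pos = forward_model(pos, a) if covert else world_step(pos, a)
        trace.append(pos)
    return trace

overt = simulate(2, 8, covert=False, world_step=lambda p, a: p + a)
covert = simulate(2, 8, covert=True)
print(overt == covert)  # the model matches the world, so traces coincide
```

In this noise-free toy the two traces are identical; the article's finding that covert runs succeed more often reflects the real case, where internal simulation avoids the prediction errors of acting in a noisy world.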

  • 24.
    Lowe, Robert
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. University of Gothenburg, Sweden.
    Barakova, Emilia
    Eindhoven University of Technology, The Netherlands.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Broekens, Joost
    Delft University of Technology, The Netherlands.
    Grounding emotions in robots: An introduction to the special issue2016Ingår i: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 24, nr 5, s. 263-266Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Robots inhabiting human environments need to act in relation to their own experience and embodiment as well as to social and emotional aspects. Robots that learn, act upon and incorporate their own experience and perception of others’ emotions into their responses make not only more productive artificial agents but also agents with whom humans can appropriately interact. This special issue seeks to address the significance of grounding of emotions in robots in relation to aspects of physical and homeostatic interaction in the world at an individual and social level. Specific questions concern: How can emotion and social interaction be grounded in the behavioral activity of the robotic system? Is a robot able to have intrinsic emotions? How can emotions, grounded in the embodiment of the robot, facilitate individually and socially adaptive behavior to the robot? This opening chapter provides an introduction to the articles that comprise this special issue and briefly discusses their relationship to grounding emotions in robots.

  • 25.
    Syrén, Felicia
    et al.
    Textile Materials Technology, Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Borås, Sweden.
    Li, Cai
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Lund, Anja
    Textile Materials Technology, Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Borås, Sweden.
    Nierstrasz, Vincent
    Textile Materials Technology, Department of Textile Technology, Faculty of Textiles, Engineering and Business, University of Borås, Borås, Sweden.
    Characterization of textile resistive strain sensors2016Konferensbidrag (Övrigt vetenskapligt)
  • 26.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Hellström, Thomas
    Institutionen för Datavetenskap, Umeå Universitet.
    Janlert, Lars-Erik
    Institutionen för Datavetenskap, Umeå Universitet.
    Simultaneous recognition and reproduction of demonstrated behavior2015Ingår i: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, s. 43-53, artikel-id BICA114Artikel i tidskrift (Refereegranskat)
    Abstract [en]

Prediction of sensory-motor interactions with the world is often referred to as a key component in cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: 1) to identify which behavior best matches the current context and 2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternative behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.
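A much-simplified sketch of the predictive idea behind PSL is given below: store counts of "context suffix → next event" and predict by backing off from the longest previously seen context, i.e. a variable-order Markov predictor. This uses plain event counts rather than the fuzzy rules and confidence estimates of the actual algorithm, and the sensory-motor trace is invented.

```python
from collections import defaultdict

class SequencePredictor:
    """Predict the next event from the longest matching history suffix."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, events):
        # Record every (context suffix, next event) pair up to max_order.
        for i in range(1, len(events)):
            for order in range(1, self.max_order + 1):
                if i - order < 0:
                    break
                ctx = tuple(events[i - order:i])
                self.counts[ctx][events[i]] += 1

    def predict(self, history):
        # Back off from the longest matching suffix to shorter ones.
        for order in range(min(self.max_order, len(history)), 0, -1):
            ctx = tuple(history[-order:])
            if ctx in self.counts:
                nxt = self.counts[ctx]
                return max(nxt, key=nxt.get)
        return None

# Toy sensory-motor trace: a 'wall' percept should trigger a 'turn' action.
trace = ['fwd', 'fwd', 'wall', 'turn', 'fwd', 'fwd', 'wall', 'turn', 'fwd']
p = SequencePredictor()
p.train(trace)
print(p.predict(['fwd', 'fwd', 'wall']))   # -> 'turn'
```

The same structure supports both uses named in the abstract: driving control (emit the predicted action) and recognition (compare predicted sensor events against observations).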

  • 27.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Applied IT, University of Gothenburg, Sweden.
    Sandamirskaya, Yulia
    Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland.
    Simultaneous Planning and Action: Neural-dynamic Sequencing of Elementary Behaviors in Robot Navigation2015Ingår i: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 23, nr 5, s. 243-264Artikel i tidskrift (Refereegranskat)
    Abstract [en]

A technique for Simultaneous Planning and Action (SPA) based on Dynamic Field Theory (DFT) is presented. The model builds on previous work on representation of sequential behavior as attractors in dynamic neural fields. Here, we demonstrate how chains of competing attractors can be used to represent dynamic plans towards a goal state. The present work can be seen as an addition to a growing body of work that demonstrates the role of DFT as a bridge between low-level reactive approaches and high-level symbol processing mechanisms. The architecture is evaluated on a set of planning problems using a simulated e-puck robot, including analysis of the system's behavior in response to noise and temporary blockages of the planned route. The system makes no explicit distinction between planning and execution phases, allowing continuous adaptation of the planned path. The proposed architecture exploits the DFT property of stability in relation to noise and changes in the environment. The neural dynamics are also exploited such that stay-or-switch action selection emerges where blockage of a planned path occurs: stay until the transient blockage is removed versus switch to an alternative route to the goal.
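To give a feel for the attractor dynamics the abstract builds on, here is a minimal 1-D dynamic neural field (Amari-type) sketch: two localized inputs compete, and lateral excitation plus global inhibition lets the stronger one win and persist as a self-stabilized activation peak. All parameters are arbitrary illustrations, not values from the paper's architecture.

```python
import numpy as np

n, steps, dt, tau, h = 100, 400, 0.1, 1.0, -2.0
x = np.arange(n)

# Interaction kernel: narrow Gaussian excitation, constant global inhibition.
d = np.abs(x[:, None] - x[None, :])
W = 1.5 * np.exp(-d**2 / 8.0) - 0.8

def f(u):  # sigmoid output nonlinearity
    return 1.0 / (1.0 + np.exp(-4.0 * u))

# Two competing localized inputs; the one at site 70 is slightly stronger.
inp = 2.2 * np.exp(-(x - 30)**2 / 18.0) + 2.6 * np.exp(-(x - 70)**2 / 18.0)

# Euler integration of the field equation:
#   tau * du/dt = -u + h + input + W @ f(u)
u = np.full(n, h)
for _ in range(steps):
    u += dt / tau * (-u + h + inp + W @ f(u))

peak = int(np.argmax(u))
print(peak)  # the surviving activation peak sits near the stronger input
```

The stability of such peaks against noise, and the possibility of the activation switching to the alternative site when the winner's input is blocked, are the field-level mechanisms behind the stay-or-switch behavior described above.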

  • 28.
    Vernon, David
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Hemeren, Paul
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Thill, Serge
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Ziemke, Tom
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi. Department of Computer and Information Science, Linköping University, Sweden.
    An Architecture-oriented Approach to System Integration in Collaborative Robotics Research Projects: An Experience Report2015Ingår i: Journal of Software Engineering for Robotics, ISSN 2035-3928, E-ISSN 2035-3928, Vol. 6, nr 1, s. 15-32Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Effective system integration requires strict adherence to strong software engineering standards, a practice not much favoured in many collaborative research projects. We argue that component-based software engineering (CBSE) provides a way to overcome this problem because it provides flexibility for developers while requiring the adoption of only a modest number of software engineering practices. This focus on integration complements software re-use, the more usual motivation for adopting CBSE. We illustrate our argument by showing how a large-scale system architecture for an application in the domain of robot-enhanced therapy for children with autism spectrum disorder (ASD) has been implemented. We highlight the manner in which the integration process is facilitated by the architecture implementation of a set of placeholder components that comprise stubs for all functional primitives, as well as the complete implementation of all inter-component communications. We focus on the component-port-connector meta-model and show that the YARP robot platform is a well-matched middleware framework for the implementation of this model. To facilitate the validation of port-connector communication, we configure the initial placeholder implementation of the system architecture as a discrete event simulation and control the invocation of each component’s stub primitives probabilistically. This allows the system integrator to adjust the rate of inter-component communication while respecting its asynchronous and concurrent character. Also, individual ports and connectors can be periodically selected as the simulator cycles through each primitive in each sub-system component. This ability to control the rate of connector communication considerably eases the task of validating component-port-connector behaviour in a large system. 
Ultimately, over and above its well-accepted benefits for software re-use in robotics, CBSE strikes a good balance between software engineering best practice and the socio-technical problem of managing effective integration in collaborative robotics research projects. 
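The placeholder-integration idea described above can be sketched in miniature: components expose stub primitives on named ports, and a simple discrete-event loop invokes each stub probabilistically, exercising the connectors before any real functionality exists. This is a plain-Python stand-in, not YARP, and the component and port names are hypothetical.

```python
import random

random.seed(0)

class StubComponent:
    """A placeholder component whose primitives are probabilistic stubs."""

    def __init__(self, name, primitives):
        self.name = name
        self.primitives = primitives     # port name -> invocation probability
        self.log = []

    def step(self):
        for port, p in self.primitives.items():
            if random.random() < p:      # probabilistic stub invocation
                self.log.append(port)    # stand-in for sending on the port

# Hypothetical sub-systems with hypothetical ports.
components = [
    StubComponent("sensoryInterpretation", {"getFaces": 0.8, "getBody": 0.5}),
    StubComponent("actuation", {"moveHead": 0.3}),
]

for _ in range(100):                     # discrete event simulation cycles
    for c in components:
        c.step()

print([len(c.log) for c in components])
```

Adjusting the per-port probabilities corresponds to the paper's control over the rate of inter-component communication, letting the integrator stress individual connectors while the traffic stays asynchronous and concurrent in the real system.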

  • 29.
    Billing, Erik
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Balkenius, Christian
    Lund University Cognitive Science, Lund, Sweden.
    Modeling the Interplay between Conditioning and Attention in a Humanoid Robot: Habituation and Attentional Blocking2014Ingår i: Proceeding of The 4th International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB 2014), IEEE conference proceedings, 2014, s. 41-47Konferensbidrag (Refereegranskat)
    Abstract [en]

A novel model of the role of conditioning in attention is presented and evaluated on a Nao humanoid robot. The model implements conditioning and habituation in interaction with a dynamic neural field where different stimuli compete for activation. The model can be seen as a demonstration of how stimulus-selection and action-selection can be combined and illustrates how positive and negative reinforcement have different effects on attention and action. Attention is directed toward both rewarding and punishing stimuli, but appetitive actions are only directed toward positive stimuli. We present experiments where the model is used to control a Nao robot in a task where it can select between two objects. The model demonstrates some emergent effects also observed in similar experiments with humans and animals, including attentional blocking and latent inhibition.

  • 30.
    Lowe, Robert
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    Sandarmirskaya, Yulia
    Theory of Cognitive Systems, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany.
    Billing, Erik
    Högskolan i Skövde, Institutionen för informationsteknologi. Högskolan i Skövde, Forskningscentrum för Informationsteknologi.
    A Neural Dynamic Model of Associative Two-Process Theory: The Differential Outcomes Effect and Infant Development2014Ingår i: IEEE ICDL-EPIROB 2014: The Fourth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, IEEE conference proceedings, 2014, s. 440-447Konferensbidrag (Refereegranskat)
  • 31.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Servin, Martin
    Institutionen för fysik, Umeå universitet.
    Composer: A prototype multilingual model composition tool2013Ingår i: MODPROD2013: 7th MODPROD Workshop on Model-Based Product Development / [ed] Peter Fritzson, Umeå: Umeå universitet , 2013Konferensbidrag (Övrigt vetenskapligt)
    Abstract [en]

Facing the task to design, simulate or optimize a complex system, it is common to find models and data for the system expressed in different formats, implemented in different simulation software tools. When a new model is developed, a target platform is chosen and existing components implemented with different tools have to be converted. This results in unnecessary work duplication and lead times. The Modelica language initiative [2] partially solves this by allowing developers to move models between different tools following the Modelica standard. Another possibility is to exchange models using the Functional Mockup Interface (FMI) standard, which allows computer models to be used as components in other simulations, possibly implemented using other programming languages [1]. With the Modelica and FMI standards entering development, there is need for an easy-to-use tool that supports design, editing and simulation of such multilingual systems, as well as for extracting system information for formulating and solving optimization problems.

A prototype solution for a graphical block diagram tool for design, editing, simulation and optimization of multilingual systems has been created and evaluated for a specific system. The tool is named Composer [3].

The block diagram representation should be generic, independent of model implementations, have a standardized format and yet support efficient handling of complex data. It is natural to look for solutions among modern web technologies, specifically HTML5. The format for representing two-dimensional vector graphics in HTML5 is Scalable Vector Graphics (SVG). We combine the SVG format with the FMI standard. In a first stage, we take the XML-based model description of FMI as a form for describing the interface for each component, in a language-independent way. Simulation parameters can also be expressed in this form and integrated as metadata into the SVG image.

The prototype, using SVG in conjunction with FMI, is implemented in JavaScript and allows creation and modification of block diagrams directly in the web browser. Generated SVG images are sent to the server, where they are translated to program code, allowing the simulation of the dynamical system to be executed using selected implementations. An alternative mode is to generate an optimization problem from the system definition and model parameters. The simulation/optimization result is returned to the web browser, where it is plotted or processed using other standard libraries.

The fiber production process at SCA Packaging Obbola [4] is used as an example system and modeled using Composer. The system consists of two fiber production lines that produce fiber going to a storage tank [5]. The paper machine takes fiber from the tank as needed for production. A lot of power is required during fiber production, and the purpose of the model was to investigate whether electricity costs could be reduced by rescheduling fiber production over the day, in accordance with the electricity spot price. Components are implemented for dynamical simulation using OpenModelica and for discrete-event simulation using Python. The Python implementation supports constraint propagation between components and optimization over specified variables. Each component is interfaced as a Functional Mock-up Unit (FMU), allowing components to be connected and properties specified in a language-independent way. From the SVG containing the high-level system information, both Modelica and Python code is generated and executed on the web server, potentially hosted in a high performance data center. More implementations could be added without modifying the SVG system description.

We have shown that it is possible to separate system descriptions on the block diagram level from implementations and to interface between the two levels using FMI. In a continuation of this project, we aim to integrate the FMI standard also for co-simulation, such that components implemented in different languages can be used together. One open question is to what extent FMUs of the same component, but implemented with different tools, will have the same model description. For the SVG-based system description to be useful, the FMI model description must remain the same, or at least contain a large overlap, for a single component implemented in different languages. This will be further investigated in future work.

  • 32.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior2012Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many properties that make it interesting as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

    One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce also more complex behaviors.

  • 33.
    Billing, Erik
    et al.
    Department of Computing Science, Umeå University, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Sweden.
    Robot learning from demonstration using predictive sequence learning2012Ingår i: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH , 2012, s. 235-250Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.
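    The move from a discrete to a continuous state space can be illustrated with a minimal fuzzy-rule sketch. The rule shape (a Gaussian membership over prototype sensor states, defuzzified by a weighted average of the rules' actions) is an assumption for illustration, not the FPSL rule base itself:

```python
import math

def membership(state, prototype, width=1.0):
    """Gaussian membership of a continuous sensor state in a fuzzy rule."""
    d2 = sum((s - p) ** 2 for s, p in zip(state, prototype))
    return math.exp(-d2 / (2 * width ** 2))

def fuzzy_predict(rules, state):
    """Membership-weighted average of the rules' actions (defuzzification).
    Each rule is a (prototype_state, action) pair."""
    weights = [membership(state, proto) for proto, _ in rules]
    total = sum(weights)
    if total == 0:
        return None
    return sum(w * a for w, (_, a) in zip(weights, rules)) / total

# Two hypothetical rules: near (0, 0) steer one way, near (1, 1) the other.
rules = [((0.0, 0.0), -1.0), ((1.0, 1.0), 1.0)]
print(fuzzy_predict(rules, (0.1, 0.2)))  # close to the first prototype: negative
```

    Because every rule contributes in proportion to its membership, nearby sensor states produce similar actions, which is what removes the combinatorial blow-up of enumerating discrete states.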

  • 34.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Institutionen för datavetenskap.
    Predictive learning from demonstration2011Ingår i: Agents and Artificial Intelligence: Second International Conference, ICAART 2010, Valencia, Spain, January 22-24, 2010. Revised Selected Papers / [ed] Filipe, Joaquim; Fred, Ana; Sharp, Bernadette, Berlin: Springer Verlag , 2011, 1, s. 186-200Kapitel i bok, del av antologi (Refereegranskat)
    Abstract [en]

    A model-free learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL is inspired by several functional models of the brain. It constructs sequences of predictable sensory-motor patterns, without relying on predefined higher-level concepts. The algorithm is demonstrated on a Khepera II robot in four different tasks. During training, PSL generates a hypothesis library from demonstrated data. The library is then used to control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. In this way, the robot reproduces the demonstrated behavior. PSL is able to successfully learn and repeat three elementary tasks, but is unable to repeat a fourth, composed behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

  • 35.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Institutionen för datavetenskap.
    Simultaneous control and recognition of demonstrated behavior2011Rapport (Övrigt vetenskapligt)
    Abstract [en]

    A method for Learning from Demonstration (LFD) is presented and evaluated on a simulated Robosoft Kompai robot. The presented algorithm, called Predictive Sequence Learning (PSL), builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. The generated rule base can be used to control the robot and to predict expected sensor events in response to executed actions. The rule base can be trained under different contexts, represented as fuzzy sets. In the present work, contexts are used to represent different behaviors. Several behaviors can in this way be stored in the same rule base and partly share information. The context that best matches present circumstances can be identified using the predictive model, and the robot can in this way automatically identify the most suitable behavior for present circumstances. The performance of PSL as a method for LFD is evaluated with, and without, contextual information. The results indicate that PSL without contexts can learn and reproduce simple behaviors. The system also successfully identifies the most suitable context in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contexts. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 36.
    Billing, Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Cognitive Perspectives on Robot Behavior2010Ingår i: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, s. 373-382Konferensbidrag (Refereegranskat)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research promote the view that intelligence is an emergent phenomenon. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond that of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition, including the extended mind, distributed cognition and situated action. These theories reveal a view of intelligence where common notions of objects, goals, language and reasoning have to be rethought. A view where behavior, as well as the agent as such, is defined by the observer rather than given by its nature. Structures in the environment emerge through interaction rather than being recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.

  • 37.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    A formalism for learning from demonstration2010Ingår i: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 1, nr 1, s. 1-13Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as mappings between some of these spaces. Finally, behavior primitives are introduced as one example of good bias in learning, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination. The formalism is exemplified through a sequence learning task where a robot equipped with a gripper arm is to move objects to specific areas. The introduced concepts are illustrated with special focus on how bias of various kinds can be used to enable learning from a single demonstration, and how ambiguities in demonstrations can be identified and handled.

  • 38.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Model-free learning from demonstration2010Ingår i: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, s. 62-71Konferensbidrag (Refereegranskat)
    Abstract [en]

    A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data which is used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

  • 39.
    Billing, Erik A.
    et al.
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Hellström, Thomas
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Janlert, Lars-Erik
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Behavior recognition for learning from demonstration2010Ingår i: 2010 IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et. al, 2010, s. 866-872Konferensbidrag (Refereegranskat)
    Abstract [en]

    Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.
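    The core idea of recognizing a behavior through its forward model can be sketched as follows: each known skill predicts the next sensor state, and the skill whose predictions track the observed stream with the smallest error is the recognized one. The two forward models below are hypothetical stand-ins, not the PSLE/PSLH comparison measures from the paper:

```python
def recognition_error(predict, observed):
    """Mean squared error between a forward model's predictions and
    the observed sensor stream."""
    errors = [(predict(observed[:t]) - observed[t]) ** 2
              for t in range(1, len(observed))]
    return sum(errors) / len(errors)

def recognize(models, observed):
    """Return the behavior whose forward model best predicts the stream."""
    return min(models, key=lambda name: recognition_error(models[name], observed))

# Hypothetical forward models for two known skills:
models = {
    "hold": lambda hist: hist[-1],        # expects a constant signal
    "ramp": lambda hist: hist[-1] + 1.0,  # expects a rising signal
}
stream = [0.0, 1.0, 2.0, 3.0]
print(recognize(models, stream))  # the "ramp" model fits this stream best
```

    Since the same model both predicts (for recognition) and can be inverted to act (for control), recognition comes essentially for free once the controller is learned, which is the symmetry the abstract points to.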

  • 40.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Cognition Reversed: Robot Learning from Demonstration2009Licentiatavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    The work presented in this thesis investigates techniques for robot Learning from Demonstration (LFD). LFD is a well-established technique where a teacher shows the robot what to do. This thesis focuses on LFD where a human tele-operates the robot, which in turn interprets the demonstration such that it can repeat the behavior at a later time, even when the environment has changed. Several perspectives on representation, recognition, and learning of behavior are presented and discussed from cognitive science and computer science viewpoints. LFD-related concepts such as behavior, goal, demonstration, and repetition are defined and analyzed, with focus on how prior knowledge can be implemented through behavior primitives. The analysis results in a formalism where LFD is described as transitions between information spaces. In terms of this formalism, ways to handle ambiguities after a demonstration has been interpreted through recognition of behavior primitives are also proposed.

    Five algorithms for behavior recognition are presented and evaluated, among them the algorithm Predictive Sequence Learning (PSL). PSL is model-free in the sense that it introduces few assumptions about the learning situation. PSL can serve as an algorithm for both control and recognition of behavior. In contrast to most techniques for behavior recognition, PSL does not rely on similarities in behavior between demonstrations. Instead, PSL exploits predictive measures that can reduce the need for domain knowledge during learning. One problem with the algorithm, however, is that it suffers from combinatorial explosion as the input space grows, so some form of higher-level coordination is needed for learning complex behaviors.

    The thesis also gives an introduction to computational models of the brain in which a strong coupling between perception and action plays a central role. Typical properties of these models are presented and analyzed from neurological and information-theoretic perspectives. This analysis results in four requirements for implementing general learning ability in robots. These requirements provide guidance for how a coordinating structure for PSL and similar algorithms could be implemented in a model-free way.

  • 41.
    Billing, Erik A.
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Behavior recognition for segmentation of demonstrated tasks2008Ingår i: IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS), 2008, s. 228-234Konferensbidrag (Refereegranskat)
    Abstract [en]

    One common approach to the robot learning technique Learning From Demonstration, is to use a set of pre-programmed skills as building blocks for more complex tasks. One important part of this approach is recognition of these skills in a demonstration comprising a stream of sensor and actuator data. In this paper, three novel techniques for behavior recognition are presented and compared. The first technique is function-oriented and compares actions for similar inputs. The second technique is based on auto-associative neural networks and compares reconstruction errors in sensory-motor space. The third technique is based on S-Learning and compares sequences of patterns in sensory-motor space. All three techniques compute an activity level which can be seen as an alternative to a pure classification approach. Performed tests show how the former approach allows a more informative interpretation of a demonstration, by not determining "correct" behaviors but rather a number of alternative interpretations.

  • 42.
    Billing, Erik
    et al.
    Umeå universitet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Institutionen för datavetenskap.
    Formalising learning from demonstration2008Rapport (Övrigt vetenskapligt)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. Inspired by the work on planning and actuation by LaValle, common LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as the mappings between some of these spaces. Finally, behavior primitives are introduced as one example of useful bias in the learning process, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination.

  • 43.
    Billing, Erik
    Umeå universitet, Institutionen för datavetenskap.
    Representing behavior: Distributed theories in a context of robotics2007Rapport (Övrigt vetenskapligt)
    Abstract [en]

    A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research see emergence as the wellspring of intelligence. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond that of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compare it with a selection of theories developed for analysis of human cognition. These theories reveal a view of intelligence where common notions of objects, goals and reasoning have to be rethought. A view where behavior, as well as the agent as such, is in the eye of the observer rather than given. Structures in the environment are achieved through interaction rather than recognized. In such a view, the fundamental question is how emergent systems appear and develop, and how they may be controlled.
