Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior
Umeå universitet, Institutionen för datavetenskap.
2012 (English). Doctoral thesis, comprehensive summary (Other academic)
Resource type
Text
Alternative title
Robotövningar: Igenkänning och återgivande av demonstrerat beteende (Swedish)
Abstract [en]

The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well-established approach in which the robot learns from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties when applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an ongoing behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.
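The variable-order idea described above can be illustrated with a small sketch (hypothetical code, not the thesis's actual PSL implementation): train a table that maps recent event histories of varying length to next-event counts, then predict using the longest context seen during training.

```python
from collections import Counter, defaultdict


class VariableOrderPredictor:
    """Toy variable-order Markov predictor over discrete events.

    Illustrative only: inspired by the idea of predicting the next
    sensory-motor event from a history of past events.
    """

    def __init__(self, max_order=3):
        self.max_order = max_order
        # Maps a context tuple (recent event history) to counts of the next event.
        self.model = defaultdict(Counter)

    def train(self, sequence):
        # For each position, record the next event under every context
        # length up to max_order (this is what makes the order variable).
        for i in range(1, len(sequence)):
            for order in range(1, min(self.max_order, i) + 1):
                context = tuple(sequence[i - order:i])
                self.model[context][sequence[i]] += 1

    def predict(self, history):
        # Prefer the longest matching context; fall back to shorter ones.
        for order in range(min(self.max_order, len(history)), 0, -1):
            context = tuple(history[-order:])
            if context in self.model:
                return self.model[context].most_common(1)[0][0]
        return None
```

Trained on a repeating event sequence such as a, b, c, a, b, c, the predictor returns c after seeing a, b, matching the intuition of simulating the next sensory-motor event from history.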

In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce more complex behaviors as well.
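The highlight/inhibit mechanism can be pictured with a minimal sketch (all names hypothetical; the thesis's context layer works on the PSL knowledge base itself): keep one predictor per context, score each by how well it has recently predicted the observed events, and let the best-scoring context bias the next prediction.

```python
class ContextGate:
    """Toy context layer: the context whose predictions best match
    recent observations is "highlighted"; the others are inhibited.

    Illustrative sketch only; `predictors` maps a context name to any
    object exposing a predict(history) method.
    """

    def __init__(self, predictors):
        self.predictors = predictors
        self.scores = {name: 0.0 for name in predictors}

    def observe(self, history, actual, decay=0.9):
        # Continually re-evaluate each context against what actually happened,
        # with an exponential decay so old evidence fades.
        for name, p in self.predictors.items():
            hit = 1.0 if p.predict(history) == actual else 0.0
            self.scores[name] = decay * self.scores[name] + (1 - decay) * hit

    def predict(self, history):
        # Only the currently best-fitting context drives the prediction.
        best = max(self.scores, key=self.scores.get)
        return self.predictors[best].predict(history)
```

In this toy form, recognition and control are interleaved exactly as the paragraph above describes: every observation updates the context scores, and every prediction is drawn from the context currently judged most suitable.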

Place, publisher, year, edition, pages
Umeå: Department of Computing Science, Umeå University, 2012. 30 p.
Series
Report / UMINF, ISSN 0348-0542 ; 11.16
Keyword [en]
Behavior Recognition, Learning and Adaptive Systems, Learning from Demonstration, Neurocomputational Modeling, Robot Learning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:his:diva-12139
ISBN: 978-91-7459-349-5 (print)
OAI: oai:DiVA.org:his-12139
DiVA: diva2:1076501
Public defence
2012-01-26, S1031, Norra Beteendevetarhuset, Umeå Universitet, 13:15 (English)
Opponent
Supervisors
Available from: 2017-04-18. Created: 2017-02-22. Last updated: 2017-04-18. Bibliographically approved.
List of papers
1. Cognitive Perspectives on Robot Behavior
2010 (English). In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, 373-382 p. Conference paper (Refereed)
Abstract [en]

A growing body of research within the field of intelligent robotics argues for a view of intelligence drastically different from that of classical artificial intelligence and cognitive science. The holistic and embodied ideas expressed by this research promote the view that intelligence is an emergent phenomenon. Similar perspectives, where numerous interactions within the system lead to emergent properties and cognitive abilities beyond those of the individual parts, can be found within many scientific fields. With the goal of understanding how behavior may be represented in robots, the present review tries to grasp what this notion of emergence really means and compares it with a selection of theories developed for analysis of human cognition, including the extended mind, distributed cognition, and situated action. These theories reveal a view of intelligence where common notions of objects, goals, language, and reasoning have to be rethought. In this view, behavior, as well as the agent as such, is defined by the observer rather than given by its nature, and structures in the environment emerge through interaction rather than being recognized. The fundamental question then becomes how emergent systems appear and develop, and how they may be controlled.

Place, publisher, year, edition, pages
SciTePress, 2010
Keyword
Behavior based control, Cognitive artificial intelligence, Distributed cognition, Ontology, Reactive robotics, Sensory-motor coordination, Situated action
National Category
Computer Science
Research subject
Computer and Information Science
Identifiers
urn:nbn:se:his:diva-12141 (URN)
10.5220/0002782103730382 (DOI)
978-989-674-022-1 (ISBN)
Conference
2nd International Conference on Agents and Artificial Intelligence (ICAART 2010), Valencia, Spain, January 22-24, 2010
Available from: 2017-02-22. Created: 2017-02-22. Last updated: 2017-04-18. Bibliographically approved.
2. Behavior recognition for segmentation of demonstrated tasks
2008 (English). In: IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS), 2008, 228-234 p. Conference paper (Refereed)
Abstract [en]

One common approach to the robot learning technique Learning from Demonstration is to use a set of pre-programmed skills as building blocks for more complex tasks. An important part of this approach is recognition of these skills in a demonstration comprising a stream of sensor and actuator data. In this paper, three novel techniques for behavior recognition are presented and compared. The first technique is function-oriented and compares actions for similar inputs. The second technique is based on auto-associative neural networks and compares reconstruction errors in sensory-motor space. The third technique is based on S-Learning and compares sequences of patterns in sensory-motor space. All three techniques compute an activity level, which can be seen as an alternative to a pure classification approach. The performed tests show how the activity-level approach allows a more informative interpretation of a demonstration, by not determining "correct" behaviors but rather presenting a number of alternative interpretations.

Keyword
Learning from demonstration, Segmentation, Generalization, Sequence Learning, Auto-associative neural networks, S-Learning
National Category
Computer Science
Identifiers
urn:nbn:se:his:diva-12145 (URN)
978-80-01-04027-0 (ISBN)
Conference
IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS)
Available from: 2008-03-19. Created: 2017-02-22. Last updated: 2017-04-18. Bibliographically approved.
3. A formalism for learning from demonstration
2010 (English). In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 1, no 1, 1-13 p. Article in journal (Refereed). Published.
Abstract [en]

The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as mappings between some of these spaces. Finally, behavior primitives are introduced as one example of good bias in learning, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination. The formalism is exemplified through a sequence learning task where a robot equipped with a gripper arm is to move objects to specific areas. The introduced concepts are illustrated with special focus on how bias of various kinds can be used to enable learning from a single demonstration, and how ambiguities in demonstrations can be identified and handled.

Place, publisher, year, edition, pages
De Gruyter Open, 2010
Keyword
Learning from demonstration, ambiguities, behavior, bias, generalization, robot learning
National Category
Human Computer Interaction Computer Science
Identifiers
urn:nbn:se:his:diva-12143 (URN)
10.2478/s13230-010-0001-5 (DOI)
Available from: 2017-02-22. Created: 2017-02-22. Last updated: 2017-07-11. Bibliographically approved.
4. Model-free learning from demonstration
2010 (English). In: Proceedings of the 2nd International Conference on Agents and Artificial Intelligence: Volume 2 / [ed] Joaquim Filipe, Ana Fred and Bernadette Sharp, SciTePress, 2010, 62-71 p. Conference paper (Refereed)
Abstract [en]

A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data, which is then used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of past sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher-level coordination is required for learning more complex behaviors.

Place, publisher, year, edition, pages
SciTePress, 2010
Keyword
Learning from Demonstration, Prediction, Robot Imitation, Motor Control, Model-free Learning
National Category
Computer Science
Research subject
Computer and Information Science
Identifiers
urn:nbn:se:his:diva-12150 (URN)
10.5220/0002729500620071 (DOI)
978-989-674-022-1 (ISBN)
Conference
2nd International Conference on Agents and Artificial Intelligence (ICAART 2010), Valencia, Spain, January 22-24, 2010
Available from: 2017-02-22. Created: 2017-02-22. Last updated: 2017-04-18.
5. Behavior recognition for learning from demonstration
2010 (English). In: 2010 IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et al., 2010, 866-872 p. Conference paper (Refereed)
Abstract [en]

Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.
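The idea that the same model serves as both inverse model (control) and forward model (recognition) can be pictured with a brief sketch (illustrative only; the function name and scoring are hypothetical and do not reproduce PSLE-Comparison or PSLH-Comparison): run a learned predictor forward over a demonstration and score a known skill by how often its predictions disagree with what was observed.

```python
def recognition_error(predictor, demonstration, order=3):
    """Fraction of events in a demonstration that the learned
    predictor fails to predict; lower means better recognition.

    Illustrative forward-model recognition sketch. `predictor` is any
    object exposing predict(history) -> next event or None.
    """
    if len(demonstration) < 2:
        return 0.0
    errors = 0
    for i in range(1, len(demonstration)):
        # Predict the next event from a bounded window of past events.
        history = demonstration[max(0, i - order):i]
        if predictor.predict(history) != demonstration[i]:
            errors += 1
    return errors / (len(demonstration) - 1)
```

Comparing this error across the predictors learned for each known skill then amounts to recognition: the skill whose model explains the demonstration best (smallest error) is the recognized one.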

Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
Keyword
learning and adaptive systems, neurorobotics, autonomous agents
National Category
Computer Science
Research subject
Computer and Information Science
Identifiers
urn:nbn:se:his:diva-12149 (URN)
10.1109/ROBOT.2010.5509912 (DOI)
2-s2.0-77955785914 (Scopus ID)
978-1-4244-5040-4 (ISBN)
978-1-4244-5038-1 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska, USA, May 3-7, 2010
Available from: 2017-02-22. Created: 2017-02-22. Last updated: 2017-04-18. Bibliographically approved.

Open Access in DiVA

fulltext (494 kB)
File information
File name: FULLTEXT01.pdf
File size: 494 kB
Checksum (SHA-512):
074287820f9a6ac6daef5635aff823222768afeb8bed0c33dd8b4365915ae21920ad60fe1b1e7b52efcdc398a8c17dfe6ebba892c5e89eb99ab496819bb62a22
Type: fulltext
Mimetype: application/pdf

Other links

http://www.cognitionreversed.com

By author/editor
Billing, Erik
Computer Vision and Robotics (Autonomous Systems)