Högskolan i Skövde

his.sePublications
Simultaneous recognition and reproduction of demonstrated behavior
University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre (Interaction Lab). ORCID iD: 0000-0002-6568-9342
Department of Computing Science, Umeå University.
Department of Computing Science, Umeå University.
2015 (English). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, p. 43-53, article id BICA114. Article in journal (Refereed). Published.
Abstract [en]

Prediction of sensory-motor interactions with the world is often referred to as a key component of cognition. Here we demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient for a robot to learn navigation skills in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator tele-operates the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), the learned associations can be used both to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: 1) to identify which behavior best matches the current context, and 2) to decide when to learn, i.e., when to update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternative behavior. The performance of PSL as a method for learning from demonstration is evaluated with and without contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, increases significantly with the use of contextual information. The results support further development of PSL as a component of a dynamic hierarchical system performing control and prediction on several levels of abstraction.
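The core idea of the abstract — learning temporal associations between sensory-motor events from demonstration, then using prediction accuracy both to select a behavior and to gate learning — can be illustrated with a minimal sketch. This is not the paper's PSL algorithm: it uses crisp counts instead of fuzzy rules, a fixed history length, and hypothetical names (`SequencePredictor`, `match_score`), and it is intended only to convey the prediction-based matching idea.

```python
from collections import defaultdict

class SequencePredictor:
    """Illustrative simplification of prediction-based behavior matching:
    crisp frequency counts stand in for fuzzy rule confidences."""

    def __init__(self, history_len=2):
        self.history_len = history_len
        # (recent event history) -> {next event: association confidence}
        self.rules = defaultdict(lambda: defaultdict(int))

    def train(self, events):
        """Learn temporal associations from one demonstrated event sequence."""
        for i in range(self.history_len, len(events)):
            context = tuple(events[i - self.history_len:i])
            self.rules[context][events[i]] += 1

    def predict(self, history):
        """Return the most confident next event for the current context."""
        context = tuple(history[-self.history_len:])
        candidates = self.rules.get(context)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

    def match_score(self, events):
        """Fraction of events correctly predicted: how well this learned
        behavior explains an observed sequence (used for behavior selection)."""
        hits = sum(
            self.predict(events[:i]) == events[i]
            for i in range(self.history_len, len(events))
        )
        return hits / max(1, len(events) - self.history_len)
```

Behavior selection then amounts to training one predictor per demonstrated behavior and, at run time, executing the behavior whose predictor scores highest on the recent event stream; a drop in that score signals when to switch behaviors or update associations.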

Place, publisher, year, edition, pages
Elsevier, 2015. Vol. 12, p. 43-53, article id BICA114
Keywords [en]
Behavior recognition, Context dependent, Learning from demonstration
National Category
Robotics
Research subject
Technology; Interaction Lab (ILAB)
Identifiers
URN: urn:nbn:se:his:diva-11010
DOI: 10.1016/j.bica.2015.03.002
ISI: 000357235100005
Scopus ID: 2-s2.0-84960806556
OAI: oai:DiVA.org:his-11010
DiVA, id: diva2:818283
Available from: 2015-06-08. Created: 2015-06-08. Last updated: 2018-08-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text: http://www.sciencedirect.com/science/article/pii/S2212683X15000092
Scopus

Authority records

Billing, Erik
