Behavior-based artificial systems, e.g. mobile robots, are frequently designed using (various degrees and levels of) biology as inspiration, but rarely modeled based on actual quantitative empirical data. This paper presents a data-driven behavior-based model of a simple biological organism, the hydra. Four constituent behaviors were implemented in a simulated animal, and the overall behavior organization was accomplished using a colony-style architecture (CSA). The results indicate that the CSA, using a priority-based behavioral hierarchy suggested in the literature, can be used to model behavioral properties like latency, activation threshold, habituation, and duration of the individual behaviors of the hydra. Limitations of this behavior-based approach are also discussed.
Socially interactive robots are expected to have an increasing importance in human society. For social robots to provide long-term added value to people’s lives, it is of major importance to stress the need for a positive user experience (UX) of such robots. The human-centered view emphasizes various aspects that emerge in the interaction between humans and robots. However, a positive UX does not appear by itself but has to be designed for and evaluated systematically. This paper focuses on the role and relevance of UX in human-robot interaction (HRI): four trends concerning the role and relevance of UX for socially interactive robots are identified, and three challenges related to its evaluation are presented. It is argued that current research efforts and directions in HRI are not sufficient, and that future research needs to further address interdisciplinary research in order to achieve long-term success of socially interactive robots.
This paper provides an overview of a collaborative research program in information fusion from databases, sensors and simulations. Information fusion entails the combination of data from multiple sources to generate information that cannot be derived from the individual sources. This area is of strategic importance for industry and defense, as well as for public administration areas such as health care, and needs to be pursued as an academic subject. A large number of industrial partners are supporting and participating in the development of the area. The paper describes the program’s general approach and main research areas, with a particular focus on the role of information fusion in systems development.
We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO, supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information about children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data in the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.
Several simulation theories have been proposed as an explanation for how humans and other agents internalize an "inner world" that allows them to simulate interactions with the external real world - prospectively and retrospectively. Such internal simulation of interaction with the environment has been argued to be a key mechanism behind mentalizing and planning. In the present work, we study internal simulations in a robot acting in a simulated human environment. A model of sensory-motor interactions with the environment is generated from human demonstrations, and tested on a Robosoft Kompai robot. The model is used as a controller for the robot, reproducing the demonstrated behavior. Information from several different demonstrations is mixed, allowing the robot to produce novel paths through the environment, towards a goal specified by top-down contextual information.
The robot model is also used in a covert mode, where actions are inhibited and perceptions are generated by a forward model. As a result, the robot generates an internal simulation of the sensory-motor interactions with the environment. Similar to the overt mode, the model is able to reproduce the demonstrated behavior as internal simulations. When experiences from several demonstrations are combined with a top-down goal signal, the system produces internal simulations of novel paths through the environment. These results can be understood as the robot imagining an "inner world" generated from previous experience, allowing it to try out different possible futures without executing actions overtly.
We found that the success rate in terms of reaching the specified goal was higher during internal simulation, compared to overt action. These results are linked to a reduction in prediction errors generated during covert action. Despite the fact that the model is quite successful in terms of generating covert behavior towards specified goals, internal simulations display different temporal distributions compared to their overt counterparts. Links to human cognition and specifically mental imagery are discussed.
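The covert/overt distinction described above can be sketched minimally as follows. Everything here (the one-dimensional environment, the exact forward model, the simple policy) is an invented toy for illustration, not the Kompai model learned from demonstrations:

```python
# Illustrative sketch (not the paper's model): overt vs. covert execution
# of a sensorimotor loop. In covert mode, actions are inhibited and the
# next perception comes from a forward model instead of the environment.

def environment_step(position, action):
    """The real world: noise-free movement, for simplicity."""
    return position + action

def forward_model(position, action):
    """Learned predictor of the next perception given the current one.
    Here it happens to match the environment exactly."""
    return position + action

def run(start, goal, covert, max_steps=20):
    """Run the loop until the goal is perceived or steps run out."""
    position = start
    for step in range(max_steps):
        if position == goal:
            return step  # goal reached after `step` actions
        action = 1 if goal > position else -1  # simple goal-seeking policy
        if covert:
            position = forward_model(position, action)  # simulate internally
        else:
            position = environment_step(position, action)  # act overtly
    return None  # goal never reached
```

Because the forward model is exact in this toy, covert and overt runs take the same number of steps; in a learned model, accumulated prediction errors would make the two modes diverge, as discussed above.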
Analysis of internal structures of embodied and situated agents may provide insights into the mechanisms underlying adaptive behaviour. This paper is concerned with the evolution and analysis of visually-guided approach behaviour in a simulated robotic agent controlled by a recurrent artificial neural network, whose connection weights have been evolved using evolutionary algorithms. Analysis of the evolved behaviours and their network-internal mechanisms reveals a behavioural structure and organization resembling a Brooksian subsumption architecture. The task decomposition, as well as the resulting individual behaviours and their integration, however, are realized as network-internal state space dynamics, evolved in the course of agent-environment interaction, i.e. with a minimum of designer intervention.
A more precise definition of the field of information fusion can be of benefit to researchers within the field, who may use such a definition when motivating their own work and evaluating the contributions of others. Moreover, it can enable researchers and practitioners outside the field to more easily relate their own work to the field and more easily understand the scope of the techniques and methods developed in the field. Previous definitions of information fusion are reviewed from that perspective, including definitions of data and sensor fusion, and their appropriateness as definitions for the entire research field is discussed. Based on strengths and weaknesses of existing definitions, a novel definition is proposed, which is argued to effectively fulfill the requirements that can be put on a definition of information fusion as a field of research.
This paper presents a series of simulation experiments that incrementally extend previous work on neural robot controllers in a predator-prey scenario, in particular the work of Floreano and Nolfi, and integrates it with ideas from work on the ‘co-evolution’ of robot morphologies and control systems. The aim of these experiments has been to further systematically investigate the tradeoffs and interdependencies between morphological parameters and behavioral strategies through a series of predator-prey experiments in which increasingly many aspects are subject to self-organization through competitive co-evolution. Motivated by the fact that, despite the emphasis on the interdependence of brain, body and environment in much recent research, the environment has actually received relatively little attention, the last set of experiments lets robots/species actively adapt their environments to their own needs, rather than just adapting themselves to a given environment.
It is evident that recently reported robot-assisted therapy systems for the assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, and further assesses children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and to enhance the quality of the digital description of children with ASD. The research outcomes pave the way toward a feasible machine-assisted system for behaviour assessment.
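The fusion-and-mapping idea described above can be sketched in a deliberately simplified form. This is not the paper's system; the early-fusion scheme, nearest-centroid rule, class names and all numbers are invented for illustration:

```python
# Hedged sketch: per-modality feature vectors (body motion, face, gaze)
# are concatenated ("early fusion") and mapped to a therapist-specified
# behaviour class by a nearest-centroid rule. All values are invented.

def fuse(body, face, gaze):
    """Early fusion: concatenate the per-modality feature vectors."""
    return body + face + gaze

def classify(sample, centroids):
    """Pick the behaviour class whose centroid is closest (squared L2)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Hypothetical class centroids in the fused feature space.
centroids = {
    "engaged":    [0.8, 0.7, 0.9],
    "distracted": [0.2, 0.3, 0.1],
}

sample = fuse([0.7], [0.6], [0.8])  # one feature per modality, for brevity
label = classify(sample, centroids)
```

In practice each modality would contribute many features and the classifier would be trained on annotated sessions, but the structure of the pipeline (extract, fuse, map to behavioural classes) is the same.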
A growing body of evidence in cognitive science and neuroscience points towards the existence of a deep interconnection between cognition, perception and action. According to this embodied perspective, language is grounded in the sensorimotor system and language understanding is based on a mental simulation process (Jeannerod, 2007; Gallese, 2008; Barsalou, 2009). This means that during the comprehension of action words and sentences, the same perception, action, and emotion mechanisms engaged during interaction with objects are recruited. Among the neural underpinnings of this simulation process, an important role is played by a sensorimotor matching system known as the mirror neuron system (Rizzolatti and Craighero, 2004). Despite a growing number of studies, the precise dynamics underlying the relation between language and action are not yet well understood. In fact, experimental studies are not always coherent, as some report that language processing interferes with action execution while others find facilitation. In this work we present a detailed neural network model capable of reproducing experimentally observed influences of the processing of action-related sentences on the execution of motor sequences. The proposed model is based on three main points. The first is that the processing of action-related sentences causes the resonance of motor and mirror neurons encoding the corresponding actions. The second is that there exists a varying degree of crosstalk between neuronal populations, depending on whether they encode the same motor act, the same effector or the same action-goal. The third is that the internal dynamics of neuronal populations, which result from the combination of multiple processes taking place at different time scales, can facilitate or interfere with successive activations of the same or of partially overlapping pools.
The question motivating the work presented here, starting from a view of music as embodied and situated activity, is how we can account for the complexity of interactive music performance situations. These are situations in which human performers interact with responsive technologies, such as sensor-driven technology or sound synthesis affected by analysis of the performed sound signal. This requires investigating the underlying mechanisms in detail, but also providing a more holistic approach that does not lose track of the complex whole constituted by the interactions and relationships of composers, performers, audience, technologies, etc. The concept of affordances has frequently been invoked in musical research, which has seen a "bodily turn" in recent years, similar to the development of the embodied cognition approach in the cognitive sciences. We therefore begin by broadly delineating its usage in the cognitive sciences in general, and in music research in particular. We argue that what is still missing in the discourse on musical affordances is an encompassing theoretical framework incorporating the sociocultural dimensions that are fundamental to the situatedness and embodiment of interactive music performance and composition. We further argue that the cultural affordances framework, proposed by Rietveld and Kiverstein (2014) and recently articulated further by Ramstead et al. (2016) in this journal, although not previously applied to music, constitutes a promising starting point. It captures and elucidates this complex web of relationships in terms of shared landscapes and individual fields of affordances. We illustrate this with examples foremost from the first author's artistic work as composer and performer of interactive music. This sheds new light on musical composition as a process of construction, and embodied mental simulation, of situations, guiding the performers' and audience's attention in shifting fields of affordances.
More generally, we believe that the theoretical perspectives and concrete examples discussed in this paper help to elucidate how situations-and with them affordances-are dynamically constructed through the interactions of various mechanisms as people engage in embodied and situated activity.
Computer games are increasingly used for purposes beyond mere entertainment, and current hi-tech simulators can provide quite naturalistic contexts for purposes such as traffic education. One of the critical concerns in this area is the validity or transferability of acquired skills from a simulator to the real-world context. In this paper, we present our work in which we compared driving in the real world with driving in the simulator at two levels, that is, by using performance measures alone, and by combining psychophysiological measures with performance measures. For our study, we gathered data using questionnaires as well as by logging vehicle dynamics, environmental conditions, video data, and users' psychophysiological measurements. For the analysis, we used several novel approaches, such as scatter plots to visualize driving tasks in different contexts, and derived vigilance estimators from electroencephalographic (EEG) data, yielding important results about the differences between driving in the two contexts. We believe that both the experimental procedures and the findings of our experiment are of importance to the field of serious games, concerning how to evaluate the fitness of driving simulators and how to measure driving performance.
This paper presents a study comparing scenarios based on real actors with scenarios based on animated characters, with respect to their similarity in evoking psychophysiological activity for certain events, as measured by galvanic skin response (GSR). In the experiment, one group (n=11) watched the real actors’ film whereas another group (n=7) watched the animated film, which had the same story and dialogue as the real actors’ film. The results show that there is no significant difference in the skin conductance response (SCR) scores between the two groups; however, the responses differ significantly when SCR amplitudes are taken into account. Moreover, Pearson’s correlation analysis showed correlations above 80% between the two groups’ SCRs for certain time intervals. The authors believe that this finding is of general importance for the domain of simulation-based tutoring systems, in the development of, and decisions regarding the use of, scenarios based on animated characters.
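The correlation analysis mentioned above can be illustrated with a minimal sketch. The SCR values below are invented, and the pure-Python Pearson implementation stands in for whatever statistics package was actually used:

```python
# Hedged illustration: comparing two groups' mean skin-conductance
# response (SCR) traces over a shared time window with Pearson's
# correlation coefficient. All data values are invented.

def pearson(x, y):
    """Pearson's r for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented mean SCR traces (arbitrary units) over the same time interval;
# the animated-film trace is a shifted copy, so r should be close to 1.
real_actors = [0.1, 0.3, 0.7, 0.6, 0.4, 0.2]
animated    = [0.2, 0.4, 0.8, 0.7, 0.5, 0.3]

r = pearson(real_actors, animated)
```

In a real analysis, the traces would be per-interval SCR measurements for each group, and r would be computed separately for each time window of interest.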
In-process assessment of trainee learners in game-based simulators is a challenging activity. It typically involves human instructor time and cost, and does not scale to the one-tutor-per-learner vision of computer-based learning. Moreover, evaluation by a human instructor is often subjective, and comparisons between learners are not accurate. Therefore, in this paper, we propose an automated, formula-driven quantitative evaluation method for assessing performance competence in serious training games. Our proposed method has been empirically validated in a game-based driving simulator with 7 subjects and 13 sessions, achieving an accuracy of up to 90.25% when compared to an existing qualitative method. We believe that by incorporating quantitative evaluation methods like these, future training games could be enriched with more meaningful feedback and adaptive game-play, so as to better monitor and support player motivation, engagement and learning performance.
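A formula-driven score of the kind described above could, in its simplest form, subtract weighted penalties for logged errors from a full score. The abstract does not give the actual formula, so the error types, weights and numbers below are entirely hypothetical:

```python
# Hypothetical sketch of a formula-driven competence score (not the
# paper's actual formula): start from a full score and subtract a
# weighted penalty for each logged driving error. Weights are invented.

WEIGHTS = {"speeding": 2.0, "lane_departure": 3.0, "collision": 10.0}

def competence_score(error_counts, max_score=100.0):
    """Quantitative score in [0, max_score] from logged error counts."""
    penalty = sum(WEIGHTS.get(err, 1.0) * n for err, n in error_counts.items())
    return max(0.0, max_score - penalty)

# One simulated session: 3 speeding events, 2 lane departures, 1 collision.
session = {"speeding": 3, "lane_departure": 2, "collision": 1}
score = competence_score(session)  # 100 - (6 + 6 + 10) = 78.0
```

The point of such a formula is that it is deterministic and comparable across learners, in contrast to subjective instructor ratings.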
There is an increasing interest in using computer games for non-traditional education, such as for training purposes. For training, simulators are considered to offer more realistic learning environments for experiencing situations similar to the real world. This type of learning is particularly beneficial for practicing critical situations that are difficult or impossible to arrange in real-world training, for instance experiencing the consequences of unsafe driving. However, the effectiveness of simulation-based learning of this nature depends on the learner’s engagement and explorative behaviour. Most current learner evaluation systems are unable to capture this type of learning. Therefore, in this paper we introduce the concept of game interaction state graphs (GISGs) to capture engagement in explorative and experience-based training tasks. These graphs are constructed based on rules which capture psychologically significant learner behaviours and situations. Simple variables reflecting the game state and the learner’s controller actions provide the ingredients for the rules. This approach eliminates the complexity involved in other, similar approaches, such as constructing a full-fledged cognitive model of the learner. GISGs can, at a minimum, be used to evaluate the explorative behaviour, the training performance and the personal preferences of a learner.
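The construction described above (rules over simple game variables, then a graph of state transitions) can be sketched as follows. The rule names, thresholds and log data are invented examples, not the paper's actual rule set:

```python
# Hedged sketch of a game interaction state graph (GISG): hand-written
# rules map raw game variables to named learner states, and transitions
# between successive states become weighted edges. Rules are invented.

def classify(sample):
    """Map a raw (speed, off_road) sample to a named learner state."""
    if sample["off_road"]:
        return "exploring_off_road"
    if sample["speed"] > 90:
        return "risk_taking"
    return "safe_driving"

def build_gisg(samples):
    """Count state-to-state transitions as weighted directed edges."""
    edges = {}
    states = [classify(s) for s in samples]
    for a, b in zip(states, states[1:]):
        if a != b:  # only record actual state changes
            edges[(a, b)] = edges.get((a, b), 0) + 1
    return edges

# A tiny invented interaction log.
log = [{"speed": 50, "off_road": False},
       {"speed": 95, "off_road": False},
       {"speed": 95, "off_road": True},
       {"speed": 40, "off_road": False}]
graph = build_gisg(log)
```

An evaluator could then read explorative behaviour off the graph, for example from how many distinct states a learner visits and how often they return to them.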
This paper presents a study comparing the driving behavior of expert and novice drivers in a mid-range driving simulator, with the intention of evaluating the validity of driving simulators for driver training. For the investigation, measurements of performance, psychophysiological measurements, and self-reported user experience under different conditions of driving tracks and driving sessions were analyzed. We calculated correlations between quantitative and qualitative measures to enhance the reliability of the findings. The experiment involved 14 experienced drivers and 17 novice drivers. The results indicate that the driving behaviors of expert and novice drivers differ from each other in several ways, but that this heavily depends on the characteristics of the task. Moreover, we believe that the analytical framework proposed in this paper can be used as a tool for selecting appropriate driving tasks as well as for evaluating driving performance in driving simulators.
Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
It has been suggested that evolving developmental programs instead of direct genotype-phenotype mappings may increase the scalability of genetic algorithms. Many such Artificial Embryogeny (AE) models have been proposed, and their evolutionary properties are being investigated. One of these properties concerns the fault tolerance of at least a particular class of AE, which models the development of artificial multicellular organisms. It has been shown that such AE evolves designs capable of recovering from phenotypic faults during development, even if fault tolerance is not selected for during evolution. This type of adaptivity is clearly very interesting, both for theoretical reasons and for possible robotic applications. In this paper we provide empirical evidence collected from a multicellular AE model showing a subtle relationship between evolution and development. These results explain why developmental fault tolerance necessarily emerges during evolution.
The embodied and situated approach to artificial intelligence (AI) has matured and become a viable alternative to traditional computationalist approaches with respect to the practical goal of building artificial agents, which can behave in a robust and flexible manner under changing real-world conditions. Nevertheless, some concerns have recently been raised with regard to the sufficiency of current embodied AI for advancing our scientific understanding of intentional agency. While from an engineering or computer science perspective this limitation might not be relevant, it is of course highly relevant for AI researchers striving to build accurate models of natural cognition. We argue that the biological foundations of enactive cognitive science can provide the conceptual tools that are needed to diagnose more clearly the shortcomings of current embodied AI. In particular, taking an enactive perspective points to the need for AI to take seriously the organismic roots of autonomous agency and sense-making. We identify two necessary systemic requirements, namely constitutive autonomy and adaptivity, which lead us to introduce two design principles of enactive AI. It is argued that the development of such enactive AI poses a significant challenge to current methodologies. However, it also provides a promising way of eventually overcoming the current limitations of embodied AI, especially in terms of providing fuller models of natural embodied cognition. Finally, some practical implications and examples of the two design principles of enactive AI are also discussed.
In this paper we suggest a biologically inspired approach to flexible behavior through emotion modeling. We consider emotion to emerge from relational interaction of body, nervous system and world, through sensory-motor attunement of internal parameters to concern-relevant relationships. We interpret such relationships with the notions of collective variable and control parameters. We introduce a simple robotic implementation of this model of appraisal, following the techniques of evolutionary neuro-robotics.
In order to explain and model emotion we need to attend to the role internal states play in the generation of behavior. We argue that motivational and perceptual roles emerge from the dynamical interaction between physiological processes, sensory-motor processes and the environment. We investigate two aspects inherent to emotion appraisal and response which rely on physiological process: the ability to categorize relations with the environment and to modulate response generating different action tendencies.
This paper presents a quantitative investigation of the differences between rule extraction through breadth-first search and through sampling the states of the RNN in interaction with its domain. We show that for an RNN trained to predict symbol sequences in formal grammar domains, breadth-first search is especially inefficient for languages sharing properties with realistic real-world domains. We also identify some important research issues that need to be resolved to ensure further development in the field of rule extraction from RNNs.
In this paper, it is shown that it is feasible to extract finite state machines in a domain of complexity previously unencountered in rule extraction. The algorithm used is called the Crystallizing Substochastic Sequential Machine Extractor, or CrySSMEx. It extracts the machine from sequence data generated by the RNN in interaction with its domain. CrySSMEx is parameter-free and deterministic, and generates a sequence of increasingly deterministic extracted stochastic models until a fully deterministic machine is found.
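As a rough, hypothetical illustration of the general idea behind such extraction (this is not CrySSMEx, which refines its state quantization adaptively rather than using a fixed grid): the RNN's continuous hidden states can be collapsed into discrete macro-states, and a transition table read off from observed data, then checked for determinism:

```python
# Toy illustration of machine extraction from an RNN (not CrySSMEx):
# quantize hidden states onto a coarse grid, read a transition table
# off observed (state, input) -> next-state data, and check whether
# the resulting machine is deterministic.

def quantize(hidden, grid=0.5):
    """Collapse a continuous hidden-state vector onto a coarse grid."""
    return tuple(round(h / grid) for h in hidden)

def extract_machine(trace):
    """trace: list of (hidden_state, input_symbol, next_hidden_state)."""
    table = {}
    deterministic = True
    for h, sym, h_next in trace:
        key = (quantize(h), sym)
        target = quantize(h_next)
        if key in table and table[key] != target:
            deterministic = False  # conflicting transitions observed
        table[key] = target
    return table, deterministic

# Invented trace from a toy two-state RNN that flips state on input "a";
# the third entry revisits the first macro-state, confirming consistency.
trace = [((0.1,), "a", (0.9,)),
         ((0.9,), "a", (0.1,)),
         ((0.12,), "a", (0.88,))]
machine, deterministic = extract_machine(trace)
```

A fixed grid like this is exactly what breaks down in complex domains; the point of an adaptive, parameter-free extractor is to split macro-states only where the observed transitions demand it.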
A definition of information fusion (IF) as a field of research can benefit researchers within the field, who may use such a definition when motivating their own work and evaluating the contributions of others. Moreover, it can enable researchers and practitioners outside the field to more easily relate their own work to the field and more easily understand the scope of IF techniques and methods. Based on strengths and weaknesses of existing definitions, a definition is proposed that is argued to effectively fulfill the requirements that can be put on a definition of IF as a field of research. Although the proposed definition aims to be precise, it does not fully capture the richness and versatility of the IF field. To address that limitation, we highlight some topics to explore the scope of IF, covering the systems perspective of IF and its relation to machine learning, optimization, robot behavior, opinion aggregation, and databases.
A classical appraisal model of emotions extended with artificial metabolic mechanisms is presented. The new architecture is based on two existing models: WASABI and a model of Microbial Fuel Cell technology. WASABI is a top-down cognitive model which has been implemented in several virtual-world applications, such as a museum guide. Microbial fuel cells provide energy for the robot by digesting food. The presented work is a first step towards imbuing a physical robot with emotions of human-like complexity; classically, such integration has only been attempted in the virtual domain. The research aim is to study embodied appraisal theory and to show the role of the body in emotion mechanisms. Some initial tests of the architecture with the humanoid NAO robot in a minimalistic scenario are presented.
In this article, a simple CPG network is shown to model early infant walking, in particular the onset of independent walking. The difference between early infant walking and early adult walking is addressed with respect to the underlying neurophysiology and evaluated according to gait attributes. Based on this, we successfully model the early infant walking gait on the NAO robot and compare its motion dynamics and performance to those of infants. Our model is able to capture the core properties of early infant walking. We identify differences in the morphologies between the robot and infant and the effect of this on their respective performance. In conclusion, early infant walking can be seen to develop as a function of the CPG network and morphological characteristics.
In this article, a generic CPG architecture is used to model infant crawling gaits and is implemented on the NAO robot platform. The CPG architecture is chosen via a systematic approach to designing CPG networks on the basis of group theory and dynamical systems theory. The NAO robot’s performance is compared to that of the iCub robot, which has a different anatomical structure. Finally, performance and NAO whole-body stability are assessed to show the adaptive property of the CPG architecture and the extent of its ability to transfer to different robot morphologies.
The identification of learning mechanisms for locomotion has been the subject of much research for some time but many challenges remain. Dynamic systems theory (DST) offers a novel approach to humanoid learning through environmental interaction. Reinforcement learning (RL) has offered a promising method to adaptively link the dynamic system to the environment it interacts with via a reward-based value system. In this paper, we propose a model that integrates the above perspectives and applies it to the case of a humanoid (NAO) robot learning to walk, an ability that emerges from its value-based interaction with the environment. In the model, a simplified central pattern generator (CPG) architecture inspired by neuroscientific research and DST is integrated with an actor-critic approach to RL (cpg-actor-critic). In the cpg-actor-critic architecture, least-squares temporal-difference-based learning converges to the optimal solution quickly by using natural gradient learning and balancing exploration and exploitation. Furthermore, rather than using a traditional (designer-specified) reward, it uses a dynamic value function as a stability indicator that adapts to the environment. The results obtained are analyzed using a novel DST-based embodied cognition approach. Learning to walk, from this perspective, is a process of integrating levels of sensorimotor activity and value.
In this article, we use a recurrent neural network with a four-cell core architecture to model the walking gait and implement it on the simulated and physical NAO robot. Furthermore, inspired by biological CPG models, we propose a simplified CPG model which comprises motor neurons, interneurons, sensor neurons and a simplified spinal cord. Within this model, the CPGs do not directly output trajectories to the servo motors. Instead, they only work to maintain the phase relations among ipsilateral and contralateral limbs. The final output depends on the integration of CPG signals with the outputs of interneurons, motor neurons and sensor neurons (sensory feedback).
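The phase-maintaining role of the CPG described above can be illustrated with a deliberately simplified sketch. These are Kuramoto-style phase oscillators, not the article's neuron model, and the coupling constant, step size and antiphase target are invented for illustration:

```python
# Simplified illustration (not the article's model): two coupled phase
# oscillators whose coupling term pulls the left and right hip towards
# antiphase, the phase relation of a walking gait.

import math

def step(phases, dt=0.01, omega=2 * math.pi, k=5.0):
    """Advance both phases one Euler step; the sin(... - pi) coupling
    attracts the phase difference to pi (antiphase)."""
    left, right = phases
    d_left = omega + k * math.sin(right - left - math.pi)
    d_right = omega + k * math.sin(left - right - math.pi)
    return (left + dt * d_left, right + dt * d_right)

phases = (0.0, 1.0)  # start with an arbitrary phase difference
for _ in range(2000):
    phases = step(phases)

# After settling, the phase difference should be close to pi (mod 2*pi).
diff = (phases[1] - phases[0]) % (2 * math.pi)
```

In a CPG of this kind, the oscillators would not drive the joints directly; as in the model above, their settled phases would coordinate the timing of the limbs while the actual motor output comes from other neurons integrating them with sensory feedback.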
Embodiment has become an important concept in many areas of cognitive science during the past two decades, yet there is no common understanding of what actually constitutes embodied cognition. Much focus has been on what kind of ‘bodily realization’ is necessary for embodied cognition, but crucial factors such as the role of social interaction and the body-in-motion have still not received much attention. Based on empirical evidence from child development, we emphasize the experience of self-produced locomotion as a crucial driving force behind the emergence of the so-called “nine-month revolution” in human infants. We argue that the intertwining of social scaffolding and self-produced locomotion is fundamental to the development of joint attention activities and a ‘self’ in the human child.
This chapter contrasts traditional, disembodied information-processing approaches to intersubjectivity in socio-cognitive research with more recent, embodied approaches. Based on an analysis of the shortcomings of the former, it focuses on the latter, but also clarifies different notions of embodiment and its role in cognition and social interaction. Integrating a broad range of theoretical perspectives and empirical evidence from mainly social psychology, social neuroscience, embodied linguistics and gesture studies, four fundamental functions of the body in social interaction are identified: (1) the body as a social resonance mechanism, (2) the body as a means and end in communication and social interaction, (3) embodied action and gesture as a ‘helping hand’ in shaping, expressing and sharing thoughts, and (4) the body as a representational device. The theoretical discussions are illustrated with an example from a case study of in situ embodied social interaction, with a focus on the importance of crossmodal interaction in the process of scaffolding. It is concluded that the body is of crucial importance in understanding social interaction and cognition in general, and in particular the relational nature of mind and intersubjectivity.
During the past two decades, embodiment has become an important concept in many areas of cognitive science, but so far there is no common understanding of what constitutes embodied cognition and what kind of body an artificial humanlike cognizer would require. Work in embodied artificial intelligence and robotics has addressed, to some degree, what kind of bodily implementation is necessary for embodied cognition, but crucial factors such as the role of social interaction and the 'body-in-motion' have still not received much attention. We argue that, in the human child, the interplay of social scaffolding and self-induced locomotion is fundamental to the development of joint attention and a 'self'. Furthermore, we discuss the implications of the social dynamics of bodily experience for android science. We argue that keeping scientific and engineering perspectives apart, but also understanding their relation, is important for clarifying the objectives of android science.
Choice behaviour where outcome contingencies vary or are probabilistic has been the focus of many benchmark tasks of infant-to-adult development in the psychology literature. Dynamic field theoretic (DFT) investigations of cognitive and behavioural competencies have been used to identify parameters critical to infant development. In this paper we report the findings of a DFT model that replicates the performance of normally functioning adults on the Iowa gambling task (IGT). The model offers a simple proof-of-concept demonstration of the parsimonious reversal-learning alternative to Damasio’s somatic marker explanation of IGT performance. Our simple model demonstrates a potentially important role for reinforcement/reward learning in generating behaviour that allows for advantageous performance. We compare our DFT modelling approach to one used on the A-not-B infant paradigm and suggest that a critical aspect of development lies in the ability to flexibly trade off perseverative versus exploratory behaviour in order to capture statistical choice-outcome contingencies. Finally, we discuss the importance of an investigation of the IGT in an embodied setting, where reward prediction learning may provide critical means by which adaptive behavioural reversals can be enacted.
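The reversal-learning account above can be illustrated with a minimal, hypothetical sketch: a simple delta-rule reward learner choosing among four IGT-style decks. The payoff values, learning rate and exploration rate below are illustrative assumptions, not the parameters of the DFT model itself; the point is only that plain reinforcement learning suffices for advantageous choice.

```python
import random

random.seed(0)

# Hypothetical per-trial payoffs loosely modelled on the IGT: decks A and B
# pay large rewards but risk large losses (net expectation -25); decks C
# and D pay small rewards with small losses (net expectation +25). These
# values are illustrative, not the task's exact payoff schedule.
def draw(deck):
    if deck in ("A", "B"):
        return 100 - (250 if random.random() < 0.5 else 0)
    return 50 - (50 if random.random() < 0.5 else 0)

decks = ["A", "B", "C", "D"]
q = {d: 0.0 for d in decks}      # running value estimate per deck
alpha, epsilon = 0.1, 0.1        # learning rate, exploration rate

choices = []
for trial in range(500):
    if random.random() < epsilon:            # occasionally explore
        deck = random.choice(decks)
    else:                                    # otherwise exploit estimates
        deck = max(decks, key=q.get)
    r = draw(deck)
    q[deck] += alpha * (r - q[deck])         # simple delta-rule update
    choices.append(deck)

late = choices[-100:]
advantageous = sum(c in ("C", "D") for c in late) / len(late)
print(f"late-trial advantageous choice rate: {advantageous:.2f}")
```

After a few hundred trials the value estimates for the advantageous decks dominate, and the learner's late-trial choices concentrate on C and D, with no somatic marker machinery involved.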
Emotions can be considered inextricably linked to embodied appraisals - perceptions of bodily states that inform agents of how they are faring in the world relative to their own well-being. Emotion-appraisals are thus relational phenomena, the relevance of which can be learned or evolutionarily selected for, given a reliable coupling between agent-internal and environmental states. An emotion-appraisal attentional disposition permits agents to produce behaviour that exploits such couplings, allowing for adaptive agent performance across agent-environment interactions. This chapter discusses emotions in terms of dynamical processes whereby attentional dispositions are considered central to an understanding of behaviour. The need to reconcile a dynamical systems perspective with an approach that views emotions as attentional dispositions representative of embodied relational phenomena (embodied appraisals) is argued for. Attention and emotion are considered to be features of adaptive agent behaviour that are interdependent in their temporal, structural and organizational relations.
Research on the neural bases of emotion raises much controversy, and few quantitative models exist that can help address the issues raised. Here we replicate and dissect one of those models, Armony and colleagues’ neurocomputational model of fear conditioning, which is based on LeDoux’s dual-route hypothesis regarding the rat fear circuitry. The importance of the model’s modular abstraction of the neuroanatomy, its use of population coding, and in particular the interplay between the thalamo-amygdala and thalamo-cortical pathways are tested. We show that a trivially minimal version of the model can produce conditioning to a reinforced stimulus without recourse to the dual-pathway structure, but a modification of the original model, which nevertheless preserves the thalamo-amygdala and (reduced) thalamo-cortical pathways, enables stronger conditioning to a conditioned stimulus. Implications for neurocomputational modelling approaches are discussed.
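The dual-route architecture can be sketched in a few lines: a fast, coarsely tuned thalamo-amygdala route and a more sharply tuned cortical route, both strengthened when the conditioned stimulus is paired with reinforcement. The unit counts, Gaussian tuning widths and the US-gated Hebbian-style update below are illustrative assumptions, not the equations of Armony and colleagues' model.

```python
import math

# Minimal rate-based sketch of a dual-route conditioning circuit: a direct
# thalamic route (broad tuning) and a cortical route (sharp tuning) both
# project to an amygdala output. All parameters here are illustrative.

N = 8  # population-coded input units, each tuned to a stimulus "frequency"

def tuning(center, stim, width):
    # Gaussian tuning curve: activity falls off with stimulus distance
    return math.exp(-((stim - center) ** 2) / (2 * width ** 2))

def response(stim, weights, width):
    # summed, weighted population response of one route to a stimulus
    return sum(w * tuning(i, stim, width) for i, w in enumerate(weights))

w_thal = [0.1] * N          # thalamo-amygdala route, broad tuning
w_cort = [0.1] * N          # (thalamo-)cortical route, sharp tuning

cs, eta = 3, 0.05           # conditioned stimulus index, learning rate
for _ in range(100):        # repeated CS-US pairings
    for w, width in ((w_thal, 2.0), (w_cort, 0.7)):
        for i in range(N):
            # US-gated Hebbian-style update: strengthen the inputs that
            # are active while the reinforced stimulus is present
            w[i] += eta * tuning(i, cs, width)

amyg_cs = response(cs, w_thal, 2.0) + response(cs, w_cort, 0.7)
amyg_other = response(7, w_thal, 2.0) + response(7, w_cort, 0.7)
print(amyg_cs > amyg_other)
```

After training, the combined amygdala response is larger for the conditioned stimulus than for a distant one, with the coarse thalamic route generalizing more broadly than the sharply tuned cortical route.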
We present an evolutionary robotics investigation into the metabolism-constrained homeostatic dynamics of a simulated robot. Unlike existing research, which has focused on either energy autonomy or motivational autonomy, the robot described here is considered in terms of energy-motivation autonomy. This stipulation is made according to a requirement of autonomous systems to spatiotemporally integrate environmental and physiological sensory information. In our experiment, the latter is generated by a simulated artificial metabolism (a microbial fuel cell batch) and its integration with the former is determined by an E-GasNet-active vision interface. The investigation centres on robot performance in a three-dimensional simulator on a stereotyped two-resource problem. Motivation-like states emerge according to periodic dynamics identifiable for two viable sensorimotor strategies. Robot adaptivity is found to be sensitive to experimenter-manipulated deviations from evolved metabolic constraints. Deviations detrimentally affect the viability of cognitive (anticipatory) capacities even where constraints are significantly relaxed. These results support the hypothesis that metabolically grounding motivationally autonomous robots is critical to adaptivity and cognition.
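The two-resource problem underlying the experiment above can be illustrated with a hypothetical toy loop: an agent must keep two internal variables within a viability zone by periodically servicing whichever is lower, which yields the kind of periodic switching dynamics the abstract mentions. The variables, decay and replenishment rates below are invented for illustration and are not those of the paper's simulation.

```python
# Toy two-resource problem: two internal variables (call them "energy" and
# "water") decay each step; visiting a resource replenishes one of them.
# The agent stays viable only by periodically switching between resources.
# All rates and thresholds here are illustrative assumptions.

energy, water = 1.0, 1.0
decay = 0.02            # per-step metabolic cost on each variable
history = []

for step in range(500):
    # greedy motivation-like rule: service whichever variable is lower
    target = "energy" if energy < water else "water"
    if target == "energy":
        energy = min(1.0, energy + 0.05)   # replenish at energy resource
    else:
        water = min(1.0, water + 0.05)     # replenish at water resource
    energy -= decay
    water -= decay
    history.append(target)
    if energy <= 0 or water <= 0:
        break                              # left the viability zone

alive = energy > 0 and water > 0
switches = sum(a != b for a, b in zip(history, history[1:]))
print(alive, switches)
```

With these rates the greedy rule settles into regular alternation between the two resources and keeps both variables viable for the whole run; breaking the balance between decay and replenishment (analogous to the experimenter-manipulated metabolic deviations) makes the agent non-viable.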