This paper presents experiments, based on a neuroscientific hypothesis, exploring the possibility of an 'inner world' based on internal simulation of perception rather than an explicit representational world model. First, a series of initial experiments is discussed, in which recurrent neural networks were evolved to (a) control collision-free corridor-following behavior in a simulated Khepera robot, and (b) predict the next time step's sensory input as accurately as possible. Attempts to let the robot act 'blindly', repeatedly using its own prediction instead of the real sensory input, were not particularly successful. This motivated the second series of experiments, on which this paper focuses. A feed-forward network was used that, as above, both controlled behavior and predicted sensory input. However, weight evolution was now guided by a single fitness criterion: successful 'blind' corridor-following behavior, including timely turns, using, as above, only the network's own predictions rather than real sensory input. In some cases the trained robot is actually able to move 'blindly' in a simple environment for hundreds of time steps, successfully handling several multi-step turns. Somewhat surprisingly, however, it does so based on self-generated input that differs markedly from the actual sensory values.
HS-IDA-TR-02-001
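
The closed loop described above, in which a single network both drives the robot and predicts its next sensory input, with the prediction fed back in place of real input during 'blind' operation, can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, activation functions, and random (rather than evolved) weights are assumptions, and the names `step`, `W_in`, `W_motor`, and `W_pred` are hypothetical.

```python
import numpy as np

# Minimal sketch of the prediction-feedback loop. In the experiments the
# weights were evolved; here they are random placeholders.
rng = np.random.default_rng(0)

N_SENSORS, N_HIDDEN, N_MOTORS = 8, 6, 2  # e.g. 8 Khepera IR sensors, 2 wheels

W_in = rng.normal(scale=0.5, size=(N_HIDDEN, N_SENSORS))
W_motor = rng.normal(scale=0.5, size=(N_MOTORS, N_HIDDEN))
W_pred = rng.normal(scale=0.5, size=(N_SENSORS, N_HIDDEN))

def step(sensors):
    """One control step: map sensor values to motor commands and to a
    prediction of the next time step's sensor values."""
    hidden = np.tanh(W_in @ sensors)
    motors = np.tanh(W_motor @ hidden)                  # wheel speeds in [-1, 1]
    prediction = 1.0 / (1.0 + np.exp(-(W_pred @ hidden)))  # sensor range [0, 1]
    return motors, prediction

# 'Blind' operation: after one real reading, the network repeatedly
# consumes its own prediction instead of fresh sensory input.
sensors = rng.uniform(size=N_SENSORS)  # stand-in for a single real reading
for t in range(10):
    motors, sensors = step(sensors)    # prediction becomes the next input
```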