This paper explores the possibility of providing robots with an ‘inner world’ based on internal simulation of perception rather than an explicit representational world model. First, a series of initial experiments is discussed, in which recurrent neural networks were evolved to control collision-free corridor following behavior in a simulated Khepera robot and to predict the next time step's sensory input as accurately as possible. Attempts to let the robot act blindly, i.e. to repeatedly use its own predictions instead of the real sensory input, were not particularly successful. This motivated the second series of experiments, on which this paper focuses. A feed-forward network was used which, as in the initial experiments, both controlled behavior and predicted sensory input. However, weight evolution was now guided by a single fitness criterion: successful ‘blindfolded’ corridor following, including timely turns, with the network, as before, receiving only its own sensory predictions as input rather than the actual sensory input. The trained robot is in some cases able to move blindly in a simple environment for hundreds of time steps, successfully handling several multi-step turns. Somewhat surprisingly, however, it does so on the basis of self-generated input that bears little resemblance to the actual sensory values.
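
For concreteness, the following is a minimal illustrative sketch in Python/NumPy, not the authors' implementation, of the closed-loop ‘blindfolded’ setup just described: a single network maps the current sensory input to both motor commands and a prediction of the next time step's sensory input, and in blind mode that prediction is fed back in place of the real sensors. The 8 infrared proximity sensors and 2 motors correspond to the Khepera robot; the hidden-layer size, activation functions, and the random placeholder weights (which in the paper would instead be set by evolutionary search against the blindfolded corridor-following fitness criterion) are assumptions made purely for illustration.

```python
import numpy as np

N_SENSORS, N_HIDDEN, N_MOTORS = 8, 5, 2  # Khepera: 8 IR sensors, 2 motors; hidden size assumed


class PredictiveController:
    """Feed-forward network producing motor commands and a sensory prediction."""

    def __init__(self, rng):
        # Placeholder random weights; in the experiments these would be
        # evolved against the 'blindfolded' corridor-following fitness.
        self.w_hidden = rng.normal(0.0, 0.5, (N_HIDDEN, N_SENSORS + 1))
        self.w_motor = rng.normal(0.0, 0.5, (N_MOTORS, N_HIDDEN + 1))
        self.w_pred = rng.normal(0.0, 0.5, (N_SENSORS, N_HIDDEN + 1))

    def step(self, sensors):
        """Return (motor_commands, predicted_next_sensors) for one time step."""
        h = np.tanh(self.w_hidden @ np.append(sensors, 1.0))
        motors = np.tanh(self.w_motor @ np.append(h, 1.0))
        # Sigmoid keeps predicted sensor values in [0, 1], matching IR readings.
        pred = 1.0 / (1.0 + np.exp(-(self.w_pred @ np.append(h, 1.0))))
        return motors, pred


def run_blindfolded(controller, initial_sensors, n_steps=200):
    """Closed-loop ('blind') rollout: after the first step, the network's own
    sensory prediction replaces the real sensory input."""
    sensors = np.asarray(initial_sensors, dtype=float)
    trajectory = []
    for _ in range(n_steps):
        motors, predicted = controller.step(sensors)
        trajectory.append((motors, predicted))
        sensors = predicted  # feed the prediction back in place of real sensors
    return trajectory


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    controller = PredictiveController(rng)
    # A single real sensor reading seeds the loop; afterwards the robot is 'blind'.
    rollout = run_blindfolded(controller, initial_sensors=np.full(N_SENSORS, 0.1))
    print(len(rollout), "blind time steps simulated")
```

The key point the sketch captures is that during blind operation the motor commands are driven entirely by self-generated input, so successful corridor following does not require the predictions to match the real sensory values, only to drive appropriate behavior, which is consistent with the finding reported above.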