We live in a complex world in which we proactively plan and execute behaviors by forming expectations in real time. Expectations are beliefs about the future state of affairs, and they play an integral role in our perception, attention, and behavior. Over time, our expectations become more accurate as we interact with the world and those around us. People interact socially with other people by inferring others' purposes, intentions, preferences, beliefs, emotions, thoughts, and goals. Similar inferences may occur when we interact with social robots. Through anthropomorphic design, these robots mimic people physically and behaviorally. As a result, users predominantly infer agency in social robots, which often leads to mismatched expectations of the robots' capabilities and ultimately influences the user experience.
In this thesis, the role and relevance of users' expectations in first-hand social human-robot interaction (sHRI) were investigated. There are two major findings. First, to enable the study of expectations in sHRI, the social robot expectation gap evaluation framework was developed. The framework supports the systematic study and evaluation of expectations over time, taking into account the unique context in which the interaction unfolds. Use of the framework can inform sHRI researchers and designers on how to manage users' expectations, not only in the design of social robots but also during their evaluation and presentation. Expectations can be managed by identifying what kinds of expectations users hold and aligning them through design and dissemination, which ultimately creates more transparent and successful interactions and collaborations. The framework is a tool for achieving this goal. Second, the results show that previous experience has a strong impact on users' expectations. People have different expectations of social robots and view them as both human-like and machine-like. Expectations of social robots can also vary according to their source: users with previous direct experience of robots hold different expectations than those who rely on indirect experience to form their expectations.
One consequence of these results is that expectations can be a confounding variable in sHRI research. Previous experience with social robots can prime users for future interactions with them. These findings highlight the unique experiences users have, even when faced with the same robot. Users' expectations, and how they change over time, shape users' individual needs and preferences and should therefore be considered when interpreting sHRI studies. Doing so can reduce the social robot expectation gap.