The aim of this extended abstract is to discuss how speech and voice in robots can shape user expectations, and how we, within the human-robot interaction (HRI) research community, ought to handle human-like speech both in research and in the development of robots. Human-like speech here refers both to emotions expressed through speech and to the robot's synthetic voice profile. The latter is especially important as artificial human-like speech is becoming indistinguishable from actual human speech. Together, these characteristics may create expectations about what the robot is and what it is capable of, which may affect both the immediate interaction between a user and a robot and the user's future interactions with robots. While there are many ethical considerations around robot design, we focus specifically on the ethical implications of speech design choices because these choices shape user expectations. We believe this dimension is important because it affects not only the individual user but also the field of HRI, both as an area of research and as a design practice. When a human-like voice leads users to expect capabilities or qualities the robot does not have, such design choices raise questions of deception. The stance on deception may vary across the different domains in which robots are used; for example, deception is more widely acknowledged in scientific research than in the commercial use of robots. Some of this variation may turn on technical definitions of deception for specific areas or cases. In this paper, we adopt a more general understanding of deception as an attempt to distort or withhold facts with the aim of misleading.