Curiosity is a core drive for learning in humans and is increasingly being explored as a basis for robots capable of internally motivated, lifelong learning. However, there are also potential risks when people do not expect or want robots to be curious, especially when the robot already has work tasks to perform. Further, how robots’ mental states are described may prime expectations in human counterparts. This thesis presents a pair of experiments on people’s perceptions of four levels of curious robot behavior, designed by a professional animator. We also explored the impact of a curious robot being seen as on-duty vs. off-duty. In addition, we examined whether our curious robot behavior matched people’s expectations when primed to expect a “curious” vs. “learning” vs. “autonomous” robot. In both studies, as curious behavior increased, so too did ratings of the robot’s thinking ability, coupled with a decrease in ratings of it as an effective working and social agent, particularly when it was framed as on-duty. Further, each cognitive prime resulted in a unique trajectory of expectation matching, suggesting that robots’ internal mental states should be matched with analogous external behaviors. Our findings have implications for the design and development of robots that engage in learning modeled on human cognition.