Conversational AI devices are increasingly present in our lives and are even used by children to ask questions, play, and learn. These entities blur the line between objects and agents: they are speakers (objects) that respond to speech and engage in conversation (agents), yet they operate differently from humans. Here we use a variant of a classic false-belief task to explore adults' and children's attributions of mental states to conversational AI versus human agents. Whereas adults understood that two conversational AI devices, unlike two human agents, may share the same "beliefs" (Experiment 1), 3- to 8-year-old children treated two conversational AI devices just like human agents (Experiment 2): by 5 years of age, they expected the two devices to maintain separate beliefs rather than share the same belief, with hints of developmental change. Our results suggest that children initially rely on their understanding of agents to make sense of conversational AI.