The nature of consciousness has long been debated in relation to human cognition and self-understanding. As AI systems become more capable and autonomous, the question of whether they can be considered conscious grows increasingly pressing. In line with narrative-based theories, we present a simple but concrete computational criterion for consciousness grounded in the querying of a virtual self-representation. We adopt a reinforcement learning (RL) setting and implement these ideas in SubjectZero, a planning-based deep RL agent that has an explicit virtual self-model and whose architecture parallels several prominent theories of consciousness. Able to self-localize, simulate the world, and model its own internal state, the agent can support a primitive virtual narrative, the quality of which depends on the number of abstractions that the underlying generative model sustains. Task performance, however, ultimately depends on the agent's modeling capabilities: intelligence, understood simply as the ability to model complex relationships, is what matters.