This paper argues that the headline-grabbing nature of existential risk (X-Risk) diverts attention away from immediate artificial intelligence (AI) threats, including the fair distribution of AI's risks and benefits and a just transition towards AI-centred societies. Section I introduces a working definition of X-Risk, considers its likelihood and explores possible subtexts. It highlights conflicts of interest that arise when tech luminaries lead ethics debates in the public square. Section II flags AI ethics concerns brushed aside by a focus on X-Risk, including AI's existential benefits (X-Benefits), non-AI X-Risk and AI harms occurring now. Taking the entire landscape of X-Risk into account requires considering how big risks compare, combine and rank relative to one another. As societies transition towards being more AI-centred, we urge embedding fairness in the transition process itself, especially with respect to groups that have historically been disadvantaged and marginalised. Section III concludes by proposing a wide-angle lens that takes X-Risk seriously alongside other urgent ethics concerns.