Since the weather is chaotic, forecasts must be issued as an ensemble of future states. Recently, multiple AI weather models have emerged claiming breakthroughs in deterministic skill. Unfortunately, it is hard to compare ensembles of AI forecasts fairly, because differences in ensembling methodology confound the comparison and operational ensemble baselines involve immense data volumes. We address this by scoring lagged initial-condition ensembles, in which an ensemble is constructed from a library of deterministic hindcasts initialized at successive earlier times. This allows the first parameter-free intercomparison of the probabilistic skill of leading AI weather models against an operational baseline. Lagged ensembles of the two leading AI weather models, GraphCast and Pangu, perform similarly even though the former outperforms the latter in deterministic scoring. We elaborate on these results with sensitivity tests showing that commonly used multiple-time-step loss functions damage ensemble calibration.
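To make the lagged initial-condition construction concrete, the sketch below shows one way an ensemble valid at a fixed time could be assembled from deterministic forecasts launched at successive earlier initializations. The 6-hourly initialization spacing, the number of lags, the dictionary-style forecast library, and all names are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of lagged initial-condition ensembling (illustrative only).
# Assumes a library of deterministic hindcasts keyed by (init time, lead hours).
from datetime import datetime, timedelta

import numpy as np

STEP_HOURS = 6   # assumed spacing between successive initializations
N_LAGS = 4       # ensemble size = number of lagged initializations

def lagged_ensemble(library, valid_time, nominal_lead_hours):
    """Stack forecasts from successive earlier initializations that all
    verify at the same valid time into one ensemble (member axis first)."""
    members = []
    for lag in range(N_LAGS):
        lead = nominal_lead_hours + lag * STEP_HOURS
        init = valid_time - timedelta(hours=lead)
        members.append(library[(init, lead)])
    return np.stack(members, axis=0)

# Toy library: random "forecast fields" on a small grid for each (init, lead).
rng = np.random.default_rng(0)
valid = datetime(2020, 1, 10, 0)
library = {}
for lag in range(N_LAGS):
    lead = 48 + lag * STEP_HOURS
    library[(valid - timedelta(hours=lead), lead)] = rng.normal(size=(32, 64))

ens = lagged_ensemble(library, valid, nominal_lead_hours=48)
print(ens.shape)             # (N_LAGS, 32, 64): member x lat x lon
ens_mean = ens.mean(axis=0)  # ensemble mean, usable in probabilistic scores
```

Because every member comes from the same deterministic hindcast library, this construction introduces no tunable ensembling parameters, which is what makes the intercomparison parameter-free.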