Recent work has connected adversarial attack methods and algorithmic recourse methods: both seek minimal changes to an input instance that alter a model's classification decision. It has been shown that traditional adversarial training, which seeks to minimize a classifier's susceptibility to malicious perturbations, increases the cost of generated recourse, with larger attack tolerances (known as attack radii) during adversarial training correlating with higher recourse costs. From the perspective of algorithmic recourse, however, the appropriate adversarial training radius has remained unclear. Another recent line of work has motivated adversarial training with adaptive attack radii to address instance-wise variation in adversarial vulnerability, showing success in domains where the attack radius is unknown. This work studies the effect of adaptive adversarial training on algorithmic recourse costs, establishing that the improvements in model robustness induced by adaptive adversarial training have minimal effect on algorithmic recourse costs. This provides a potential avenue for affordable robustness in domains where recoursability is critical.
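For concreteness, the objectives at play can be sketched as follows; the symbols (classifier f_\theta, loss \mathcal{L}, radius \epsilon, cost metric d) are introduced here for illustration and the per-instance radius rule is a schematic assumption, not the specific formulation of the adaptive training work summarized above. Standard adversarial training with a fixed attack radius \epsilon solves
\min_{\theta} \ \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\| \le \epsilon} \mathcal{L}\big(f_\theta(x+\delta), y\big) \Big],
while a generic adaptive variant replaces the fixed radius with an instance-dependent one, \|\delta\| \le \epsilon(x). The recourse cost for a negatively classified instance x is the minimal change needed to flip the decision,
c(x) = \min_{x'} \ d(x, x') \quad \text{s.t.} \quad f_\theta(x') \ne f_\theta(x),
which is the quantity whose growth under fixed-radius adversarial training, and stability under adaptive-radius training, is studied here.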