We empirically show that an adversarial perturbation found for one image can be used to accelerate attacks on another image. Specifically, we show how to improve the initialization of the hard-label black-box attack Sign-OPT, which operates in the most challenging attack setting, by reusing previously discovered adversarial perturbations. Whereas Sign-OPT initializes its attack by searching along random directions for the nearest boundary point, we instead search for the nearest boundary point along the directions of previously known perturbations. This initialization strategy leads to a significant drop in initial distortion on both the MNIST and CIFAR-10 datasets. Identifying shared vulnerabilities across images is a promising direction for future research.
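The initialization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict` stands in for an arbitrary hard-label classifier, and `boundary_distance` mirrors the standard coarse-then-binary search that Sign-OPT uses to measure the distance to the decision boundary along a candidate direction. The only difference between the baseline and the proposed variant is which candidate directions are passed to `initialize`: random draws versus previously found perturbation directions.

```python
import numpy as np

def boundary_distance(predict, x, y, theta, t_max=10.0, tol=1e-3):
    """Distance from x to the decision boundary of the hard-label
    classifier `predict` along unit direction theta (Sign-OPT's g(theta)).
    Coarse doubling search to bracket the boundary, then binary search.
    Returns np.inf if no label flip occurs within t_max."""
    theta = theta / np.linalg.norm(theta)
    t = tol
    while predict(x + t * theta) == y:  # step out until the label flips
        t *= 2.0
        if t > t_max:
            return np.inf
    lo, hi = t / 2.0, t
    while hi - lo > tol:                # binary search inside the bracket
        mid = (lo + hi) / 2.0
        if predict(x + mid * theta) == y:
            lo = mid
        else:
            hi = mid
    return hi

def initialize(predict, x, y, directions):
    """Pick the candidate direction with the smallest boundary distance.
    Sign-OPT draws `directions` at random; the variant described here
    supplies perturbation directions found for other images instead."""
    best_t, best_dir = np.inf, None
    for theta in directions:
        t = boundary_distance(predict, x, y, theta)
        if t < best_t:
            best_t, best_dir = t, theta / np.linalg.norm(theta)
    return best_t, best_dir
```

As a toy check, for a linear classifier `predict = lambda z: int(z @ w > 0)` with `w = [1, 0]` and the point `x = [-2, 0]`, the direction `[1, 0]` hits the boundary at distance 2, while `[0, 1]` never crosses it, so `initialize` selects the former.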