As the field of adversarial attack countermeasures continues to expand, accurately evaluating these defenses remains a challenge. Adversarial attacks pose significant threats
to the security and robustness of deep learning models. Traditional attack methods typically depend
on predetermined parameters, such as ensembles of specific methods and manually designed
rules, which may not be optimal for generating effective attacks. In this research, we propose
a parameter-free adversarial attack by leveraging a learning-to-learn (L2L) framework.
We train a recurrent neural network (RNN)-based optimizer that adaptively determines both
the update direction and the step size, enabling more efficient and flexible adversarial attacks. We conduct extensive
experiments on robust models trained on the MNIST and CIFAR-10 datasets.
Our findings show that the learned optimizer outperforms traditional methods such as
PGD at generating adversarial examples for small networks and smaller datasets like MNIST.
For larger networks, our method demonstrates improved performance only for smaller attack
steps. These results highlight the potential of parameter-free attacks for evaluating and
understanding the robustness of deep learning models.
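The core idea, a recurrent optimizer that replaces PGD's fixed step rule with a learned, stateful update, can be sketched as follows. This is a hypothetical NumPy illustration, not the paper's trained model: the recurrent cell here is randomly initialized rather than trained, the `RNNAttackOptimizer` and `l2l_attack` names are invented for this sketch, and a quadratic toy loss stands in for a model's classification loss.

```python
import numpy as np

class RNNAttackOptimizer:
    """Hypothetical sketch of an L2L optimizer: a minimal recurrent cell that
    maps the current input gradient (plus hidden state) to an update, so
    direction and effective step size can vary across iterations."""

    def __init__(self, dim, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        # Randomly initialized here for illustration; in the L2L framework
        # these weights would be trained to produce strong attacks.
        self.W_g = rng.normal(0.0, 0.1, (hidden, dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden, hidden))
        self.W_o = rng.normal(0.0, 0.1, (dim, hidden))
        self.h = np.zeros(hidden)

    def step(self, grad):
        # Hidden state carries information across attack iterations.
        self.h = np.tanh(self.W_g @ grad + self.W_h @ self.h)
        return self.W_o @ self.h

def l2l_attack(x, grad_fn, epsilon=0.3, iters=10):
    """Iterative attack using the recurrent optimizer, with each iterate
    projected back onto the L-infinity ball of radius epsilon around x."""
    opt = RNNAttackOptimizer(dim=x.size)
    x_adv = x.copy()
    for _ in range(iters):
        g = grad_fn(x_adv)
        # Learned update, in place of PGD's fixed alpha * sign(g).
        x_adv = x_adv + opt.step(g)
        # Keep the perturbation within the attack budget.
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

# Toy example: the gradient of ||x||^2 stands in for a model's loss gradient.
x0 = np.full(4, 0.5)
x_adv = l2l_attack(x0, grad_fn=lambda x: 2.0 * x, epsilon=0.3)
```

The contrast with PGD is in the `step` call: PGD applies a hand-set step size to the gradient sign, while the recurrent cell's output, conditioned on its hidden state, plays the role of both direction and step, which is what makes the attack parameter-free at deployment time.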