Adversarial examples for neural network image classifiers are known to be
transferable: examples optimized to be misclassified by a source classifier are
often also misclassified by classifiers with different architectures.
However, targeted adversarial examples — optimized to be classified as a
chosen target class — tend to be less transferable between architectures.
While prior research on constructing transferable targeted attacks has focused
on improving the optimization procedure, in this work we examine the role of
the source classifier. We show that training the source classifier to be
“slightly robust” — that is, robust to small-magnitude adversarial examples —
substantially improves the transferability of targeted attacks, even between
architectures as different as convolutional neural networks and transformers.
We argue that this result supports a non-intuitive hypothesis: on the spectrum
from non-robust (standard) to highly robust classifiers, those that are only
slightly robust exhibit the most universal features — ones that tend to
overlap with the features learned by other classifiers trained on the same
dataset. The results we present provide insight into the nature of adversarial
examples as well as the mechanisms underlying so-called “robust” classifiers.
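To make the notion of a targeted adversarial example concrete, here is a minimal sketch of a targeted projected-gradient-descent (PGD) attack against a toy linear classifier. Everything in it — the `targeted_pgd` helper, the linear model, and the step-size/budget values — is illustrative and assumed for this sketch; the abstract does not specify the paper's actual attack or models.

```python
import numpy as np

def targeted_pgd(W, x, target, eps=2.0, alpha=0.2, steps=50):
    """Targeted PGD sketch on a linear classifier with logits = W @ x.

    Descends the cross-entropy loss of the chosen *target* class,
    projecting each iterate back into an L-infinity ball of radius eps
    around the original input. Toy values throughout.
    """
    x_adv = x.copy()
    onehot = np.eye(W.shape[0])[target]
    for _ in range(steps):
        logits = W @ x_adv
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # Gradient of cross-entropy w.r.t. the input for the target label.
        grad = W.T @ (p - onehot)
        # Signed-gradient step *down* the target loss, then project.
        x_adv = x_adv - alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))            # toy 3-class linear classifier
x = rng.normal(size=5)
target = (np.argmax(W @ x) + 1) % 3    # a class other than the current prediction
x_adv = targeted_pgd(W, x, target)
```

In the transfer setting studied here, `x_adv` would be crafted against one source classifier and then evaluated on a differently-architected classifier; the abstract's claim is that a slightly robust source makes the target label more likely to carry over.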