It is well known that adversarial attacks can fool deep neural networks with
imperceptible perturbations. Although adversarial training significantly
improves model robustness, failure cases of the defense remain widespread. In
this work, we find that adversarial attacks can themselves be vulnerable to small
perturbations. Namely, on adversarially-trained models, perturbing adversarial
examples with small random noise may invalidate their misled predictions.
After carefully examining state-of-the-art attacks of various kinds, we find
that all of them exhibit this deficiency, to different extents. Motivated
by this finding, we propose to counter attacks by crafting more effective
defensive perturbations. Our defensive perturbations leverage the advantage
that adversarial training endows the ground-truth class with smaller local
Lipschitzness. By simultaneously attacking all the classes, we can flip misled
predictions, which exhibit larger local Lipschitzness, back into correct ones. We
verify our defensive perturbation with both empirical experiments and
theoretical analyses on a linear model. On CIFAR10, it boosts the robust
accuracy of the state-of-the-art model from 66.16% to 72.66% against the four
attacks of AutoAttack, including from 71.76% to 83.30% against the Square
attack. On ImageNet,
the top-1 robust accuracy of FastAT is improved from 33.18% to 38.54% under the
100-step PGD attack.
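
The defensive step described above, simultaneously attacking all classes within a
small perturbation budget and then classifying the perturbed input, can be sketched
as follows. This is a minimal PyTorch-style illustration under our own assumptions,
not the paper's reference implementation; the function name defensive_perturbation,
the classifier handle model, and the epsilon, step-size, and iteration values are
placeholders chosen for exposition.

    import torch
    import torch.nn.functional as F

    def defensive_perturbation(model, x, eps=8/255, step=2/255, n_steps=20):
        """Illustrative sketch: craft a defensive perturbation by maximizing
        the summed cross-entropy loss over *all* candidate labels, i.e.
        attacking every class simultaneously. On adversarially-trained
        models the ground-truth class tends to be locally flatter (smaller
        local Lipschitzness), so this perturbation destabilizes the wrongly
        predicted classes more and can flip a misled prediction back."""
        model.eval()
        n_classes = model(x).shape[1]
        delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(n_steps):
            logits = model(torch.clamp(x + delta, 0.0, 1.0))
            # Sum the loss over every candidate label: "attack all classes".
            loss = sum(
                F.cross_entropy(
                    logits,
                    torch.full((x.shape[0],), c, dtype=torch.long, device=x.device),
                )
                for c in range(n_classes)
            )
            grad, = torch.autograd.grad(loss, delta)
            # L-infinity PGD-style ascent step, projected back into the eps-ball.
            delta = (delta + step * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        return torch.clamp(x + delta.detach(), 0.0, 1.0)

    # Usage: classify the defensively perturbed input instead of the raw one.
    # x_def = defensive_perturbation(model, x_maybe_adversarial)
    # pred = model(x_def).argmax(dim=1)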
