Universal Adversarial Perturbations (UAPs) are imperceptible, image-agnostic
vectors that cause deep neural networks (DNNs) to misclassify inputs from a
data distribution with high probability. Existing methods do not create UAPs
robust to transformations, thereby limiting their applicability as real-world
attacks. In this work, we introduce a new concept and formulation of robust
universal adversarial perturbations. Based on our formulation, we build a
novel, iterative algorithm that leverages probabilistic robustness bounds for
generating UAPs robust against transformations generated by composing arbitrary
sub-differentiable transformation functions. We perform an extensive evaluation
on the popular CIFAR-10 and ILSVRC 2012 datasets measuring robustness under
human-interpretable semantic transformations, such as rotations and contrast
changes, which are common in the real world. Our results show that our
generated UAPs are significantly more robust than those from baselines.
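
To make the setting concrete, below is a minimal sketch of the general idea: iteratively optimizing a single image-agnostic perturbation while averaging the adversarial loss over randomly sampled, composed differentiable transformations (an expectation-over-transformations style loop). This is an illustrative assumption, not the paper's exact algorithm; in particular, the model `f`, the data `loader`, and the epsilon/step settings are placeholders, and the sketch does not reproduce the probabilistic robustness bounds described above. Note that the transformations are applied after the perturbation is added, mirroring a physical attacker whose perturbed image is later rotated or re-contrasted by the capture process.

```python
# Hedged sketch: EOT-style crafting of a transformation-robust UAP.
# Assumptions: `f` is a classifier returning logits, `loader` yields
# (image, label) batches with pixels in [0, 1]; eps/steps/lr are arbitrary.
import math
import torch
import torch.nn.functional as F

def random_transform(x):
    """Compose random sub-differentiable transforms: rotation + contrast."""
    b = x.size(0)
    angle = (torch.rand(b, device=x.device) - 0.5) * (math.pi / 6)  # +/- 15 deg
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.zeros(b, 2, 3, device=x.device)
    theta[:, 0, 0], theta[:, 0, 1] = cos, -sin
    theta[:, 1, 0], theta[:, 1, 1] = sin, cos
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    x = F.grid_sample(x, grid, align_corners=False)   # differentiable rotation
    mean = x.mean(dim=(2, 3), keepdim=True)
    c = 0.8 + 0.4 * torch.rand(b, 1, 1, 1, device=x.device)  # contrast factor
    return mean + c * (x - mean)

def craft_robust_uap(f, loader, eps=8 / 255, steps=100, lr=0.01, samples=4):
    x0, _ = next(iter(loader))
    delta = torch.zeros(1, *x0.shape[1:], requires_grad=True)  # one UAP for all inputs
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, y in loader:
            loss = 0.0
            for _ in range(samples):  # average loss over random transforms
                logits = f(random_transform(torch.clamp(x + delta, 0, 1)))
                loss = loss - F.cross_entropy(logits, y)  # maximize misclassification
            opt.zero_grad()
            (loss / samples).backward()
            opt.step()
            with torch.no_grad():  # project back into the L_inf ball
                delta.clamp_(-eps, eps)
    return delta.detach()
```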