Production machine learning systems are consistently under attack by
adversarial actors. Deployed deep learning models must therefore detect fake or
adversarial inputs accurately while maintaining low latency. In this work, we
propose one piece of a production protection system: detecting an incoming
adversarial attack and characterizing it. Detecting the type of an adversarial
attack has two primary benefits: the underlying model can be trained in a
structured manner to be robust to those attacks, and the attacks can
potentially be filtered out in real time before causing any downstream damage.
We explore the adversarial image classification space for models commonly used
in transfer learning.
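As a minimal sketch of the real-time filtering idea described above (not the authors' actual method), the example below uses a simple statistical detector on synthetic data: adversarial perturbations often add high-frequency noise, so inputs whose neighboring-pixel differences exceed a threshold calibrated on clean data are flagged. All data, function names, and the perturbation model here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def hf_energy(imgs):
    # Mean absolute difference between horizontally adjacent pixels;
    # a crude proxy for high-frequency content per image.
    return np.abs(np.diff(imgs, axis=-1)).mean(axis=(-2, -1))

# Synthetic stand-ins for real data: smooth "clean" 16x16 images
# (constant rows plus faint sensor noise) vs. the same images with
# an FGSM-like bounded perturbation added.
base = np.repeat(rng.normal(size=(200, 16, 1)), 16, axis=-1)
clean = base + rng.normal(scale=0.005, size=base.shape)
adv = clean + rng.uniform(-0.1, 0.1, size=clean.shape)

# Calibrate a threshold on clean inputs only, then flag outliers
# at inference time before they reach the downstream model.
scores_clean = hf_energy(clean)
threshold = scores_clean.mean() + 3 * scores_clean.std()
detection_rate = (hf_energy(adv) > threshold).mean()
false_positive_rate = (scores_clean > threshold).mean()
```

A production detector would replace this hand-crafted statistic with a learned classifier over attack types, but the calibrate-then-filter structure is the same.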
