UnMask: Adversarial Detection and Defense Through Robust Feature Alignment
Scott Freitas, Shang-Tse Chen, Zijie J. Wang, Duen Horng (Polo) Chau
Abstract
Recent research has demonstrated that deep learning architectures are vulnerable to adversarial attacks, highlighting the vital need for defensive techniques to detect and mitigate these attacks before they cause harm. We present UnMask, an adversarial detection and defense framework based on robust feature alignment. UnMask combats adversarial attacks by extracting robust features (e.g., beak, wings, eyes) from an image (e.g., “bird”) and comparing them to the expected features of the classification. For example, if the extracted features for a “bird” image are wheel, saddle, and frame, the model may be under attack. UnMask detects such attacks and defends the model by rectifying the misclassification, re-classifying the image based on its robust features. Our extensive evaluation shows that UnMask detects up to 96.75% of attacks, and defends the model by correctly classifying up to 93% of adversarial images produced by the current strongest attack, Projected Gradient Descent, in the gray-box setting. UnMask provides significantly better protection than adversarial training across 8 attack vectors, averaging 31.18% higher accuracy. We open source the code repository and data with this paper: https://github.com/safreita1/unmask.
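The abstract describes UnMask's core loop: extract robust features from an image, measure how well they align with the features expected for the predicted class, and re-classify when alignment fails. The following is a minimal Python sketch of that alignment logic, not the paper's implementation: the `EXPECTED_FEATURES` map, the `0.5` threshold, and the use of Jaccard similarity as the set-overlap measure are illustrative assumptions, and the extracted feature set is passed in directly, whereas the actual system obtains it from a trained part-level feature extractor.

```python
from typing import Dict, Set, Tuple

# Hypothetical expected-feature map: class name -> robust features
# that images of that class should contain.
EXPECTED_FEATURES: Dict[str, Set[str]] = {
    "bird":    {"beak", "wings", "eyes", "tail", "legs"},
    "bicycle": {"wheel", "saddle", "frame", "handlebar", "pedal"},
}

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity |a ∩ b| / |a ∪ b| between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def unmask(extracted: Set[str], predicted_class: str,
           threshold: float = 0.5) -> Tuple[bool, str]:
    """Return (is_adversarial, rectified_class).

    `extracted` stands in for the robust features a part-level
    extractor would pull from the input image.
    """
    # Detection: compare the extracted features against those
    # expected for the model's predicted class.
    score = jaccard(extracted, EXPECTED_FEATURES[predicted_class])
    is_adversarial = score < threshold

    # Defense: re-classify to the class whose expected features
    # best align with what was actually extracted.
    best_class = max(EXPECTED_FEATURES,
                     key=lambda c: jaccard(extracted, EXPECTED_FEATURES[c]))
    return is_adversarial, best_class if is_adversarial else predicted_class

# The abstract's example: a "bird" prediction whose image yields
# bicycle parts is flagged and rectified.
print(unmask({"wheel", "saddle", "frame"}, "bird"))  # (True, 'bicycle')
```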
Citation
Scott Freitas, Shang-Tse Chen, Zijie J. Wang, and Duen Horng (Polo) Chau. "UnMask: Adversarial Detection and Defense Through Robust Feature Alignment." IEEE International Conference on Big Data (Big Data), Atlanta, GA, 2020.