Unconventional Approaches to Hacking Neural Networks
Spring 2018 - Present
Much work analyzing the security of neural network-based applications has been limited in creativity, focusing mostly on training data poisoning and adversarial inputs. In this work, we are looking at new strategies of attack. As an initial foray into the unconventional, we present a new type of backdoor attack that exploits a previously unstudied vulnerability of convolutional neural networks (CNNs). In particular, we examine the application of facial recognition. Deep learning techniques are the state of the art for facial recognition, and they have now been implemented in many production-level systems. Alarmingly, unlike other commercial technologies such as operating systems and network devices, deep learning-based facial recognition algorithms are not presently designed with security requirements or audited for security vulnerabilities before deployment. Given how young the technology is and how abstract many of the internal workings of these algorithms are, neural network-based facial recognition systems are prime targets for security breaches. As more and more of our personal information begins to be guarded by facial recognition (e.g., the iPhone X), exploring the security vulnerabilities of these systems from a penetration testing standpoint is crucial. Along these lines, we describe a general methodology for backdooring CNNs via targeted weight perturbations. Using a five-layer CNN and ResNet-50 as case studies, we show that an attacker is able to significantly increase the chance that inputs they supply will be falsely accepted by a CNN while simultaneously preserving the error rates for legitimate enrolled classes.
- "Backdooring Convolutional Neural Networks via Targeted Weight Perturbations," Proceedings of the IAPR/IEEE International Joint Conference on Biometrics (IJCB), September 2020.
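The core idea of a targeted weight perturbation attack can be sketched in a few lines. The toy example below is an assumption-laden illustration, not the paper's actual method: it uses a single linear layer as a stand-in for a trained CNN (a real attack would perturb convolutional filters), and a simple greedy search that nudges one weight at a time, keeping a change only if it raises the attacker's matching score for a target identity while legitimate classification accuracy stays within a small tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "model": a single linear layer scoring embeddings against
# 4 enrolled identities. This is a toy substitute for a trained CNN.
W = rng.normal(size=(4, 8))           # weights for 4 enrolled identities
legit = rng.normal(size=(20, 8))      # embeddings of legitimate probes
labels = rng.integers(0, 4, size=20)  # their true identities
attacker = rng.normal(size=8)         # the attacker's embedding
target = 2                            # identity the attacker impersonates

def accuracy(W):
    """Fraction of legitimate probes matched to the correct identity."""
    return np.mean(np.argmax(legit @ W.T, axis=1) == labels)

def attacker_score(W):
    """Model's matching score for the attacker against the target class."""
    return (W @ attacker)[target]

base_acc = accuracy(W)
init_score = attacker_score(W)

# Greedy targeted perturbation: try small random changes to individual
# weights; accept a change only if the attacker's score strictly rises
# AND legitimate accuracy stays within 5% of its original value.
for _ in range(500):
    i, j = rng.integers(0, 4), rng.integers(0, 8)
    W_try = W.copy()
    W_try[i, j] += rng.normal(scale=0.05)
    if (attacker_score(W_try) > attacker_score(W)
            and accuracy(W_try) >= base_acc - 0.05):
        W = W_try
```

After the loop, the attacker's score against the target identity has grown while accuracy on legitimate probes is preserved by construction, mirroring the property the abstract describes: higher false-acceptance for attacker-supplied inputs with error rates for enrolled classes left intact.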