ABSTRACT
Deep learning models for graphs have achieved strong performance for the task of node classification. Despite their proliferation, currently there is no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain we propose an efficient algorithm, Nettack, which exploits incremental computations. Our experimental study shows that the accuracy of node classification significantly drops even when performing only a few perturbations. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and likewise are successful even when only limited knowledge about the graph is given.
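To make the attack setting concrete, the following is a minimal sketch of a greedy structure-perturbation (edge-flip) attack against a linearized two-layer GCN surrogate. It is an illustration of the general idea described in the abstract, not a reproduction of Nettack: the function names, the random weight matrix `W`, and the brute-force re-scoring of every candidate flip are all assumptions for this sketch, whereas the actual algorithm uses incremental score updates and additionally enforces unnoticeability constraints (e.g. preserving the degree distribution).

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency with self-loops:
    # A_hat = D^{-1/2} (A + I) D^{-1/2}, as used by GCNs.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def surrogate_logits(A, X, W):
    # Linearized two-layer GCN surrogate: Z = A_hat A_hat X W
    # (nonlinearities dropped so flips can be scored cheaply).
    A_norm = normalize_adj(A)
    return A_norm @ A_norm @ X @ W

def greedy_edge_attack(A, X, W, target, true_class, budget):
    # Greedily flip, for `budget` rounds, the single edge whose flip most
    # reduces the target node's classification margin under the surrogate.
    A = A.copy()
    n = A.shape[0]
    flips = []
    for _ in range(budget):
        best_margin, best_edge = None, None
        for u in range(n):
            for v in range(u + 1, n):
                A[u, v] = A[v, u] = 1 - A[u, v]   # tentatively flip (u, v)
                z = surrogate_logits(A, X, W)[target]
                margin = z[true_class] - max(
                    z[c] for c in range(len(z)) if c != true_class)
                A[u, v] = A[v, u] = 1 - A[u, v]   # undo the tentative flip
                if best_margin is None or margin < best_margin:
                    best_margin, best_edge = margin, (u, v)
        u, v = best_edge
        A[u, v] = A[v, u] = 1 - A[u, v]           # commit the best flip
        flips.append(best_edge)
    return A, flips
```

The exhaustive rescan above costs O(n^2) surrogate evaluations per flip; the incremental computations mentioned in the abstract exist precisely to avoid this full re-evaluation in the discrete search space.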
Index Terms: Adversarial Attacks on Neural Networks for Graph Data