Abstract
While machine learning has proven promising in several application domains, our understanding of its behavior and limitations is still in its nascent stages. One such domain is cybersecurity, where machine learning models are replacing traditional rule-based systems, owing to their ability to generalize and to handle large-scale, previously unseen attacks. However, the naive transfer of machine learning principles to the domain of security must be approached with caution. Machine learning was not designed with security in mind and, as such, is prone to adversarial manipulation and reverse engineering. While most data-driven learning models rely on the assumption of a static world, the security landscape is especially dynamic, with a never-ending arms race between the system designer and the attackers. Any solution designed for such a domain needs to account for an active adversary and must evolve over time in the face of emerging threats. We term this the "Dynamic Adversarial Mining" problem, and this paper provides the motivation and foundation for this new interdisciplinary area of research, at the crossroads of machine learning, cybersecurity, and streaming data mining.
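The static-world assumption can be illustrated with a minimal sketch (hypothetical two-dimensional Gaussian traffic data and scikit-learn's LogisticRegression; the distributions and shift are illustrative assumptions, not from the paper): a detector trained on yesterday's attack distribution degrades once an adaptive adversary shifts attack traffic toward benign statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training time: benign traffic centered at (0,0), attacks at (3,3).
X_benign = rng.normal(0.0, 1.0, size=(500, 2))
X_attack = rng.normal(3.0, 1.0, size=(500, 2))
X_train = np.vstack([X_benign, X_attack])
y_train = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X_train, y_train)

# Deployment time: the adversary adapts, crafting attacks that mimic
# benign statistics (distribution drifts toward (0.5, 0.5)). The static
# model is never retrained, so its detection rate collapses.
X_drifted = rng.normal(0.5, 1.0, size=(500, 2))

acc_initial = clf.score(X_attack, np.ones(500))   # detection rate, old attacks
acc_drifted = clf.score(X_drifted, np.ones(500))  # detection rate, adapted attacks
print(f"detection rate before drift: {acc_initial:.2f}")
print(f"detection rate after drift:  {acc_drifted:.2f}")
```

The drop in detection rate is the gap a dynamic adversarial mining approach must close, e.g., by detecting the drift from unlabeled deployment data and triggering retraining.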
Index Terms
- When Good Machine Learning Leads to Bad Security: Big Data (Ubiquity symposium)