- Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. International Conference on Learning Representations, 2014. arXiv:1312.6199 (12/2013).
- Carlini, N. and Wagner, D. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. 1st IEEE Deep Learning and Security Workshop, 2018. arXiv:1801.01944 (3/2018).
- Jonas, M.A. and Evans, D. Enhancing Adversarial Example Defenses Using Internal Layers. Poster, IEEE Symposium on Security and Privacy, 2018. https://www.ieee-security.org/TC/SP2018/poster-abstracts/oakland2018-paper29-poster-abstract.pdf
- Papernot, N. and McDaniel, P. Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. arXiv:1803.04765 (3/2018).