MACHINE LEARNING

Adversarial Patch

Resource type
Authors/contributors
Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer
Title
Adversarial Patch
Abstract
We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class. To reproduce the results from the paper, our code is available at https://github.com/tensorflow/cleverhans/tree/master/examples/adversarial_patch
Publication
arXiv:1712.09665 [cs]
Date
2018-05-16
Accessed
2019-11-23T14:10:12Z
Library Catalog
arXiv.org
Extra
arXiv: 1712.09665
Citation
Brown, T. B., Mané, D., Roy, A., Abadi, M., & Gilmer, J. (2018). Adversarial Patch. arXiv:1712.09665 [cs]. Retrieved from http://arxiv.org/abs/1712.09665
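The procedure behind the abstract above is to optimize the patch pixels directly: sample training images, paste the patch at random positions (the paper also randomizes scale and rotation), and maximize the classifier's probability of the chosen target class, so the finished patch fools the model in any scene. The sketch below is only a minimal illustration of that loop, not the authors' released code (that lives in the cleverhans repository linked in the abstract); the pretrained ResNet-50, the 64x64 patch size, the "toaster" target class (the paper's demo target), and the translation-only placement are all simplifying assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision

# Minimal, hypothetical sketch of adversarial-patch training.
# Input normalization, random scale/rotation, and expectation over many
# transformations are omitted for brevity.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet50(pretrained=True).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the patch is optimized

target_class = 859                   # ImageNet index for "toaster"
patch = torch.rand(3, 64, 64, device=device, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image (translation only)."""
    patched = images.clone()
    _, _, h, w = images.shape
    ph, pw = patch.shape[1], patch.shape[2]
    for i in range(images.size(0)):
        top = torch.randint(0, h - ph + 1, (1,)).item()
        left = torch.randint(0, w - pw + 1, (1,)).item()
        patched[i, :, top:top + ph, left:left + pw] = patch.clamp(0, 1)
    return patched

def train_step(images):
    """One optimization step: push the classifier toward the target class."""
    images = images.to(device)
    labels = torch.full((images.size(0),), target_class,
                        dtype=torch.long, device=device)
    opt.zero_grad()
    logits = model(apply_patch(images, patch))
    loss = F.cross_entropy(logits, labels)   # minimize NLL of the target class
    loss.backward()                          # gradients flow only into the patch
    opt.step()
    return loss.item()
```

Running train_step over batches drawn from a large, varied image set is what gives the patch its universality: it must raise the target-class score regardless of the scene it is pasted into.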

Graph of references

(from Zotero to Gephi via Zotnet with this script)
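The linked script does the actual export; purely for orientation, the sketch below shows the general shape of such a pipeline under assumed inputs: a list of (citing item, cited item) pairs is built into a directed graph with networkx and written out as GEXF, a format Gephi opens directly. The citation keys here are placeholders, not data from the real library, and this is not the Zotnet script itself.

```python
import networkx as nx

# Hypothetical edge list of (citing item, cited item) pairs;
# in practice these would come from the Zotero library.
edges = [
    ("brown2018adversarialpatch", "szegedy2014intriguing"),
    ("brown2018adversarialpatch", "athalye2018eot"),
]

graph = nx.DiGraph()
graph.add_edges_from(edges)

# Gephi reads GEXF natively (File > Open), so one call finishes the export.
nx.write_gexf(graph, "references.gexf")
```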