Invisibility cloak
Overview
This paper studies the art and science of creating adversarial attacks on object detectors. Most work on real-world adversarial attacks has focused on classifiers, which assign a single holistic label to an entire image, rather than on detectors, which localize objects within an image. Detectors work by considering thousands of “priors” (potential bounding boxes) at different locations, sizes, and aspect ratios within the image. To fool an object detector, an adversarial example must fool every prior in the image, which is much more difficult than fooling the single output of a classifier.
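To make the “fool every prior” point concrete, the sketch below shows one gradient step of a patch attack that tries to suppress the objectness score of every prior at once. It is a minimal illustration, not the paper's training procedure: the `detector` callable, its per-prior output shape, and the objectness-suppression loss are assumptions introduced here for clarity.

import torch

def patch_attack_step(detector, image, patch, mask, lr=0.01):
    """One gradient step that pushes every prior's objectness toward zero.

    Assumes `detector(image)` returns objectness scores for all priors
    (shape [num_priors]); a classifier attack would only see one score.
    """
    patch = patch.clone().detach().requires_grad_(True)
    # Paste the adversarial patch into the image over the masked region.
    patched = image * (1 - mask) + patch * mask
    # Scores for every prior (anchor box) in the image.
    objectness = detector(patched)            # shape: [num_priors]
    # The attack must drive all of these down, not just a single output.
    loss = objectness.clamp(min=0).sum()
    loss.backward()
    with torch.no_grad():
        patch = (patch - lr * patch.grad.sign()).clamp(0, 1)
    return patch.detach()

In practice such a step would be repeated over many images and detector variants so the patch transfers to the real world, but the key difference from a classifier attack is already visible here: the loss aggregates over thousands of priors rather than one prediction.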