Why do adversarial perturbations for object detection models look different from those for image classifiers?

Follow the full discussion on Reddit.
Hello people. I was messing around to see how adversarial attacks behave on image classifiers, and decided to try the same thing with an object detector. I noticed that an untargeted adversarial attack on these models yielded some interesting masks. The image classifier produced the usual noise-like perturbation mask you see everywhere, but under the same conditions the object detector produced a mask that closely resembles the objects in the image. What is the reasoning behind this? Thank you for your help!
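The post doesn't include the setup, but a minimal sketch of the kind of untargeted attack described (here a single FGSM step) might look like the following. The model choice (torchvision resnet18), epsilon value, and function name are illustrative assumptions, not details from the post; input normalization is also skipped for brevity.

```python
import torch
import torchvision.models as models

# Illustrative classifier; the post does not say which model was used.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
loss_fn = torch.nn.CrossEntropyLoss()

def untargeted_fgsm(image, label, epsilon=0.03):
    """Return the adversarial image and the perturbation (the 'mask').

    image: float tensor of shape [3, H, W] scaled to [0, 1]
    label: long tensor scalar holding the true class index
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))          # add batch dimension
    loss = loss_fn(logits, label.unsqueeze(0))  # loss w.r.t. the true label
    loss.backward()
    # Untargeted attack: step in the direction that increases the loss.
    perturbation = epsilon * image.grad.sign()
    adv_image = (image + perturbation).clamp(0.0, 1.0).detach()
    return adv_image, perturbation.detach()
```

For the object detector, the analogous attack would presumably back-propagate through the detector's combined training losses instead (e.g. torchvision detection models such as fasterrcnn_resnet50_fpn return a dict of losses in training mode, which can be summed before calling backward), and the resulting perturbation is what is visualized as the mask.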

