Deep neural networks are among the highest-performing methods for object detection and recognition. However, these systems have been shown to be brittle to slight variations in their inputs. A substantial body of research, including work on generative adversarial networks, has focused on reducing the sensitivity of neural networks to adversarial attackers. However, many image variations are not due to attackers, but instead arise from natural causes such as image blur, changes in saturation, and lighting effects. I refer to these kinds of image variation as 'natural noise'. Prior to this work, the sensitivity of deep neural network components to these kinds of natural image variation had not been systematically explored.
Solution:
I developed a benchmark suite that injects a variety of natural noise into images, and then performed a systematic analysis of the effects of this noise on object recognition using the MS-COCO object detection dataset, covering all object categories and a range of deep neural network architectures.
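As a rough illustration of the kind of noise injection involved, the sketch below applies blur, saturation, and lighting perturbations to an image using Pillow. The function names, strength parameters, and file paths are hypothetical stand-ins for illustration; they are not the benchmark suite's actual interface.

```python
# A minimal sketch of natural-noise injection, assuming the Pillow library.
# The perturbation functions and their default strengths are illustrative only.
from PIL import Image, ImageEnhance, ImageFilter


def inject_blur(img: Image.Image, radius: float = 2.0) -> Image.Image:
    """Simulate defocus softness with a Gaussian blur of the given radius."""
    return img.filter(ImageFilter.GaussianBlur(radius=radius))


def inject_saturation(img: Image.Image, factor: float = 0.5) -> Image.Image:
    """Scale color saturation; factor < 1 desaturates, > 1 oversaturates."""
    return ImageEnhance.Color(img).enhance(factor)


def inject_lighting(img: Image.Image, factor: float = 1.5) -> Image.Image:
    """Scale brightness to mimic over- or under-exposure."""
    return ImageEnhance.Brightness(img).enhance(factor)


if __name__ == "__main__":
    # Hypothetical input image; in the benchmark, each MS-COCO image would
    # be run through every perturbation before being fed to the detector.
    img = Image.open("example.jpg").convert("RGB")
    for name, perturb in [("blur", inject_blur),
                          ("saturation", inject_saturation),
                          ("lighting", inject_lighting)]:
        perturb(img).save(f"example_{name}.jpg")
```

A systematic analysis would then sweep each perturbation's strength parameter over a range of values and measure how detection accuracy degrades for each architecture and object category.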