This paper documents an initial investigation into the effect of image degradation on the performance of transfer learning (TL) as the number of retrained layers is varied, using a well-documented, commonly used, and well-performing deep learning classifier (VGG16). Degradations were applied to a publicly available data set to simulate the noise and varying optical resolution characteristic of electro-optical/infrared (EO/IR) imaging sensors. TL performance was measured on the base image set and on the degraded image sets for different numbers of retrained layers, with and without data augmentation. It is shown that TL mitigates the effect of corrupted data, and that classifier performance improves as the number of retrained layers increases. Data augmentation also improves performance. At the same time, even the strong performance of TL cannot overcome the lack of feature information in severely degraded images. This experiment provides a qualitative sense of when transfer learning cannot be expected to improve classification results.
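The exact degradation procedure is not specified in the abstract beyond additive noise and reduced optical resolution. The sketch below is one plausible way to simulate both effects in NumPy; the function name, parameter values, and block-averaging resolution model are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def degrade(image, noise_sigma=0.1, downsample=2, rng=None):
    """Simulate EO/IR-style degradation of a single-channel image in [0, 1].

    Resolution loss is modeled by block-averaging the image over
    downsample x downsample blocks and then upsampling back by pixel
    replication; sensor noise is modeled as additive Gaussian noise.
    All parameter values here are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape
    # Crop so dimensions divide evenly, then average over blocks.
    low = image[:h - h % downsample, :w - w % downsample]
    low = low.reshape(low.shape[0] // downsample, downsample,
                      low.shape[1] // downsample, downsample).mean(axis=(1, 3))
    # Upsample back to the cropped size by replicating each pixel.
    restored = np.repeat(np.repeat(low, downsample, axis=0),
                         downsample, axis=1)
    # Additive Gaussian noise, clipped back to the valid intensity range.
    noisy = restored + rng.normal(0.0, noise_sigma, restored.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Degraded copies of the training set generated this way could then be fed to a VGG16-based TL pipeline alongside the clean originals, with the severity controlled by `noise_sigma` and `downsample`.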