Object detection AIs enable robust solutions for fast, automated detection of anomalies in operating environments such as airfields. Implementing such AI solutions requires training models on a large, diverse corpus of representative data. To reliably detect craters and other damage on airfields, a model must be trained on a large, varied, and realistic set of images of that damage. The current method for obtaining this training data is to set explosives in the concrete surface of a test airfield to create actual damage and then photograph the results. This approach is extremely expensive and time consuming, yields relatively little data covering only a few damage cases, and does not adequately represent unexploded ordnance (UXO) and other artifacts that must also be detected. To address this paucity of training data, we have begun developing a training data generation and labeling pipeline that leverages Unreal Engine 4 to create realistic synthetic environments populated with realistic damage and artifacts. We have also developed a system for automatic labeling of the detection segments in synthetic training images, relieving the tedious, time-consuming process of manually labeling segments and eliminating the human errors that manual labeling incurs. We compare the performance of two object detection AIs trained on real and synthetic data and discuss the cost and schedule savings enabled by the automated labeling system.
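The abstract does not describe the automatic labeling mechanism in detail; the following is a minimal illustrative sketch, assuming the synthetic renderer can export a per-pixel instance-ID mask alongside each RGB frame, from which detection labels can be derived without human annotation. The class names, instance IDs, and COCO-style output format are assumptions for illustration only, not the authors' actual pipeline.

```python
# Sketch: derive bounding-box labels from a synthetic instance-ID mask.
# Assumes each damage instance or artifact is rendered with a unique
# nonzero integer ID; 0 denotes background. All names are hypothetical.
import json
import numpy as np

CLASS_OF_INSTANCE = {1: "crater", 2: "crater", 3: "uxo", 4: "spall"}  # assumed mapping
CATEGORY_IDS = {"crater": 1, "uxo": 2, "spall": 3}

def boxes_from_instance_mask(mask: np.ndarray) -> list[dict]:
    """Return COCO-style annotations for every labeled instance in the mask."""
    annotations = []
    for inst_id in np.unique(mask):
        if inst_id == 0:  # skip background
            continue
        ys, xs = np.nonzero(mask == inst_id)
        x0, y0 = int(xs.min()), int(ys.min())
        w, h = int(xs.max() - x0 + 1), int(ys.max() - y0 + 1)
        annotations.append({
            "category_id": CATEGORY_IDS[CLASS_OF_INSTANCE[int(inst_id)]],
            "bbox": [x0, y0, w, h],  # COCO convention: [x, y, width, height]
            "area": int((mask == inst_id).sum()),
            "iscrowd": 0,
        })
    return annotations

if __name__ == "__main__":
    # Toy 8x8 mask standing in for an exported instance-ID render pass.
    mask = np.zeros((8, 8), dtype=np.int32)
    mask[1:4, 2:6] = 1   # a "crater" instance
    mask[5:7, 0:3] = 3   # a "uxo" instance
    print(json.dumps(boxes_from_instance_mask(mask), indent=2))
```

Because every instance is known exactly at render time, such labels are pixel-accurate by construction, which is the source of the labeling cost and error savings claimed above.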