State-of-the-art style-based generative adversarial networks (StyleGANs) synthesize high-quality images by learning a mapping from a disentangled latent space onto the image manifold. The learned representations can thus be analyzed by interpreting the latent space and subsequently used to control the properties of synthesized industrial machine vision data. Combined with an embedding into the latent space, StyleGANs enable the properties of embedded images to be assessed via their latent space representations; however, a trade-off must be found between the dimensionality of the StyleGAN's latent space and the quality of the generated images. While a smaller latent space is easier to interpret, it might not capture all quality characteristics if lossless compression cannot be achieved. This work presents an evaluation scheme that uses statistical hypothesis testing to identify an advantageous latent space dimensionality for industrial machine vision applications. The Fréchet Inception distance (FID), based on features learned from the ImageNet dataset, is commonly used as a quality measure for images synthesized by GANs. However, the features of the underlying Inception network are opaque and might not be representative of application-specific quality characteristics. Herein, synthetic data is instead evaluated by means of a Fréchet distance based on selected, application-specific features extracted from the industrial machine vision dataset at hand. Using these application-specific features, the image quality of multiple StyleGANs trained with different latent space dimensionalities is compared using statistical tests to select an advantageous latent space dimension.
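The evaluation scheme above can be sketched in code. The following is a minimal, hedged illustration (not the paper's exact pipeline): it computes the squared Fréchet distance between two feature sets modeled as Gaussians, as in FID, and then applies a Welch t-test to bootstrap samples of that distance for two hypothetical latent-space dimensionalities. The feature arrays, the bootstrap scheme, and the choice of a t-test are stand-in assumptions for illustration; the actual feature extraction and hypothesis tests are application specific.

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.stats import ttest_ind

def frechet_distance(feats_a, feats_b):
    """Squared Fréchet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    covmean = sqrtm(cov_a @ cov_b).real  # drop tiny imaginary numerical residue
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
# Stand-in data: rows are application-specific feature vectors per image.
real = rng.normal(size=(400, 8))                 # features of real images
gen_small = rng.normal(loc=0.3, size=(400, 8))   # hypothetical small latent space
gen_large = rng.normal(loc=0.05, size=(400, 8))  # hypothetical larger latent space

def bootstrap_distances(real_feats, fake_feats, reps=20, n=200):
    """Resample both feature sets to obtain a sample of Fréchet distances."""
    return [frechet_distance(real_feats[rng.choice(len(real_feats), n)],
                             fake_feats[rng.choice(len(fake_feats), n)])
            for _ in range(reps)]

d_small = bootstrap_distances(real, gen_small)
d_large = bootstrap_distances(real, gen_large)
# Welch t-test: do the two dimensionalities yield different mean distances?
stat, p = ttest_ind(d_small, d_large, equal_var=False)
```

A small p-value would indicate that the two latent space dimensionalities differ significantly in the quality of their synthesized images under the chosen features, which is the decision criterion the evaluation scheme relies on.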