KEYWORDS: Eye, Education and training, Picosecond phenomena, Radio over Fiber, Optical sensing, Affine motion model, Amplitude modulation, Data modeling, Machine learning, RGB color model
A visual geometry group (VGG)-based convolutional neural network (CNN) with low computational cost and high accuracy is designed to monitor the performance of pulse-amplitude-modulation-based intensity-modulation/direct-detection channels from eye-diagram measurements. Experimental results show that the proposed technique achieves high accuracy in jointly monitoring modulation format, probabilistic shaping, roll-off factor, baud rate, optical signal-to-noise ratio, and chromatic dispersion, and the designed VGG-based CNN outperforms four traditional machine-learning methods across different scenarios. Furthermore, a multitask learning (MTL) model built on a MobileNet CNN is designed to improve the flexibility of the network. Compared with the VGG-based CNN, the MobileNet-based MTL does not need to be trained on all class combinations, and it can monitor a single parameter or multiple parameters simultaneously without sacrificing accuracy, indicating great potential for diverse monitoring scenarios.
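The multitask arrangement described above can be sketched as a shared feature extractor feeding one classifier head per monitored parameter, so a single forward pass yields estimates for all parameters at once. The sketch below illustrates only this head structure in NumPy; the feature dimension, the task list, and all class counts are illustrative assumptions, not values from the paper, and the shared features stand in for the output of a MobileNet-style trunk applied to an eye-diagram image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monitored parameters with illustrative class counts (assumed, not from the paper).
TASKS = {
    "modulation_format": 3,      # e.g. PAM2 / PAM4 / PAM8
    "baud_rate": 4,
    "osnr": 5,
    "chromatic_dispersion": 6,
}

FEATURE_DIM = 128  # assumed size of the shared embedding from the CNN trunk

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# One linear head per task; all heads read the same shared feature vector.
heads = {
    name: rng.standard_normal((FEATURE_DIM, n_classes)) * 0.01
    for name, n_classes in TASKS.items()
}

def monitor(features):
    """Return per-task class probabilities from one shared feature vector."""
    return {name: softmax(features @ w) for name, w in heads.items()}

# Usage: a single feature vector produces all parameter estimates simultaneously.
features = rng.standard_normal(FEATURE_DIM)
probs = monitor(features)
for name, p in probs.items():
    print(name, p.shape)
```

Training such heads jointly lets the monitor report any subset of parameters without retraining on every combination of classes, which is the flexibility advantage attributed to the MTL design.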