KEYWORDS: Skin, Image segmentation, 3D modeling, 3D image processing, In vivo imaging, Optical coherence tomography, Artificial intelligence, Confocal microscopy, 3D acquisition
Line-field confocal optical coherence tomography (LC-OCT) is an imaging technique that combines reflectance confocal microscopy and optical coherence tomography, allowing three-dimensional (3D) imaging of skin in vivo with an isotropic spatial resolution of about 1.3 µm down to a depth of about 400 µm. Cellular-resolution 3D images obtained with LC-OCT offer a considerable amount of information for describing and quantifying the upper layers of in vivo skin using morphological metrics, which can be critical for better understanding the skin changes that lead to aging or to certain pathologies. This study introduces metrics for quantifying the epidermis and uses them to describe the variability of healthy epidermis across body sites. These metrics include the stratum corneum thickness, the undulation of the dermal-epidermal junction (DEJ), and the quantification of the keratinocyte network. To generate relevant metrics over entire 3D images, an artificial intelligence approach was applied to automate their calculation. We quantified the epidermis of eight volunteers at seven body sites on the head, upper limbs, and trunk. Variations in epidermal thickness and DEJ undulation were observed between body sites: the cheek presented the thinnest stratum corneum and the least undulated DEJ, while the back of the hand presented the thickest stratum corneum and the back the most undulated DEJ. The process of keratinocyte maturation was evidenced in vivo. These 3D in vivo quantifications open the door, in clinical practice, to diagnosing and monitoring pathologies in which the epidermis is impaired.
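The abstract does not spell out how its morphological metrics are computed, but two of them can be illustrated with simple, hypothetical formulations: a mean layer thickness derived from two segmented depth profiles, and an undulation index for the DEJ defined as the ratio of its arc length to its projected length. A minimal pure-Python sketch, with all function names and metric definitions assumed for illustration rather than taken from the paper:

```python
import math

def dej_undulation_index(profile, dx=1.0):
    """Undulation index of a DEJ depth profile: ratio of the
    developed (arc) length to the projected (flat) length.
    Returns 1.0 for a perfectly flat junction; larger values
    indicate a more undulated DEJ."""
    arc = sum(math.hypot(dx, profile[i + 1] - profile[i])
              for i in range(len(profile) - 1))
    return arc / (dx * (len(profile) - 1))

def mean_thickness(top, bottom):
    """Mean layer thickness from two depth profiles, e.g. the skin
    surface and the lower boundary of the stratum corneum, sampled
    at the same lateral positions."""
    return sum(b - t for t, b in zip(top, bottom)) / len(top)
```

In a full 3D pipeline these profiles would come from AI-based segmentation of the LC-OCT volume, with the undulation index extended from a 1D profile to a surface.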
Daniel Gareau, James Browning, Joel Correa Da Rosa, Mayte Suarez-Farinas, Samantha Lish, Amanda Zong, Benjamin Firester, Charles Vrattos, Yael Renert-Yuval, Mauricio Gamboa, María Vallone, Zamira Barragán-Estudillo, Alejandra Tamez-Peña, Javier Montoya, Miriam Jesús-Silva, Cristina Carrera, Josep Malvehy, Susana Puig, Ashfaq Marghoob, John Carucci, James Krueger
Significance: Melanoma is a deadly cancer that physicians struggle to diagnose early because differentiating benign from malignant lesions is difficult. Deep machine learning approaches to image analysis offer promise but lack the transparency to be widely adopted as stand-alone diagnostics.
Aim: We aimed to create a transparent machine learning technology (i.e., not deep learning) to discriminate melanomas from nevi in dermoscopy images and an interface for sensory cue integration.
Approach: Imaging biomarker cues (IBCs) fed the training of an ensemble machine learning classifier (Eclass), while raw images fed the training of a deep learning classifier. We compared the areas under the diagnostic receiver operating characteristic (ROC) curves.
Results: Our interpretable machine learning algorithm outperformed the leading deep-learning approach 75% of the time. The user interface displayed only the diagnostic imaging biomarkers as IBCs.
Conclusions: From a translational perspective, Eclass has an advantage over convolutional machine learning diagnosis in that physicians can embrace its transparent outputs faster than black-box outputs. Imaging biomarker cues may be used during sensory cue integration in clinical screening. Our method may be applied to other image-based diagnostic analyses, including pathology and radiology.
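The comparison between Eclass and the deep learning classifier rests on the standard area under the ROC curve. As a self-contained illustration (not the authors' code), the AUC can be computed directly from classifier scores and labels via the Mann-Whitney formulation, i.e., the probability that a randomly chosen positive case scores higher than a randomly chosen negative one:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    (e.g., melanoma) outscores the negative (e.g., nevus), counting
    ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Computing this statistic for both classifiers over repeated train/test splits is one way the "outperformed 75% of the time" comparison could be framed.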
The effective and non-invasive diagnosis of skin cancer is a hot topic in biophotonics, since the current gold standard, biopsy followed by histological examination, is a slow and costly procedure for the healthcare system. Therefore, authors have put effort into characterizing skin cancer quantitatively through optical and photonic techniques such as 3D topography and multispectral imaging. Skin relief is an important biophysical feature that can be difficult to appreciate by touch but can be precisely characterized with 3D imaging techniques, such as fringe projection. Color and spectral features given by skin chromophores, which are routinely analyzed by the naked eye and through dermoscopy, can also be quantified by means of multispectral imaging systems. In this study, the outcomes of these two imaging modalities were combined in a machine learning process to enhance the classification of melanomas and nevi beyond what either system achieves in isolation. The results suggest that the combination of 3D and multispectral data is relevant for the medical diagnosis of skin cancer.
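Combining the two modalities as described is an instance of feature-level (early) fusion: descriptors from the 3D topography system and the multispectral system are concatenated into a single vector before classification. A hypothetical sketch with a toy nearest-centroid classifier, assuming made-up feature vectors since the study's actual features and classifier are not listed here:

```python
import math

def fuse(relief_feats, spectral_feats):
    """Early fusion: concatenate 3D-relief and multispectral
    feature vectors into one descriptor per lesion."""
    return list(relief_feats) + list(spectral_feats)

def nearest_centroid_fit(X, y):
    """Compute the per-class mean of the fused feature vectors."""
    centroids = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign the class whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda c: math.dist(centroids[c], x))
```

An alternative design is late fusion, where each modality's classifier scores the lesion separately and the scores are combined afterward; the abstract's wording ("outcomes ... combined in a machine learning process") is compatible with either.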