Bladder cancer (BC) in US men is costly and common; its high cost stems largely from its high recurrence rate (>50%), which necessitates frequent surveillance. We aim to change the paradigm of BC surveillance by validating new tools with high sensitivity and specificity for carcinoma in situ. In this talk, I discuss our innovative solutions for mapping the bladder to track suspicious lesions longitudinally and for creating miniature optical-detection tools based on machine learning, computer vision, and optical coherence tomography.
Blue light cystoscopy (BLC) and white light cystoscopy (WLC) are standard-of-care tools for imaging the bladder for suspicious areas of tumor development. Clear, high-quality frames in cystoscopy videos are crucial for sensitive, efficient detection of bladder tumors. Vessel features carry rich information but are often lost or poorly visualized in frames containing illumination artifacts or degraded by impurities in the bladder. In our study, we introduce an automatic WLC/BLC classification method for cystoscopy video analysis and propose an image enhancement pipeline that addresses the loss of features in cystoscopy videos containing both WLC and BLC frames.
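As a rough illustration of the frame-classification step, the sketch below separates WLC from BLC frames by blue-channel dominance. The heuristic and the 0.45 threshold are illustrative assumptions, not the classifier described in the study.

```python
# Minimal sketch: label cystoscopy frames as WLC or BLC by color statistics.
# The blue-dominance heuristic and threshold are assumptions for illustration.
import cv2
import numpy as np

def classify_frame(frame_bgr: np.ndarray) -> str:
    """Label a frame 'BLC' or 'WLC' based on blue-channel dominance."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    blue_ratio = float(np.mean(b / (b + g + r + 1e-6)))
    # BLC frames are dominated by the blue excitation light; in practice
    # this threshold would be tuned on labeled frames.
    return "BLC" if blue_ratio > 0.45 else "WLC"

def split_video(path: str) -> dict:
    """Partition a mixed WLC/BLC video into per-modality frame lists."""
    frames = {"WLC": [], "BLC": []}
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    while ok:
        frames[classify_frame(frame)].append(frame)
        ok, frame = cap.read()
    cap.release()
    return frames
```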
Blue light cystoscopy (BLC) has been demonstrated to detect bladder tumors with better sensitivity than white light cystoscopy (WLC); however, the use of BLC is limited to the operating room. In this study, we aim to bring BLC to the clinic by transforming WLC frames into digitally stained BLC-like frames. We collected region-matched WLC and BLC videos from TURBT procedures and generated BLC-like frames using WLC frames as input and the matched BLC frames as targets. We will discuss staining performance on perfectly registered WLC-BLC datasets, as well as on WLC and BLC video clips collected with commercial clinical systems.
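The abstract does not specify the staining model. As a minimal sketch, paired WLC-to-BLC translation can be framed as supervised image-to-image regression; the tiny network (TinyStainer) and L1 objective below are illustrative assumptions, not the architecture from the study.

```python
# Minimal sketch: paired WLC -> BLC digital staining as supervised
# image-to-image regression. The network and L1 loss are illustrative
# assumptions; the study's actual model is not specified in the abstract.
import torch
import torch.nn as nn

class TinyStainer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # BLC-like RGB in [0, 1]
        )

    def forward(self, wlc):
        return self.net(wlc)

model = TinyStainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(wlc_batch: torch.Tensor, blc_batch: torch.Tensor) -> float:
    """One update on region-matched (WLC input, BLC target) frame pairs."""
    opt.zero_grad()
    loss = loss_fn(model(wlc_batch), blc_batch)
    loss.backward()
    opt.step()
    return loss.item()
```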
Synthetic Aperture Radar (SAR) is a commonly used modality in mission-critical remote-sensing applications, including battlefield intelligence, surveillance, and reconnaissance (ISR). Processing SAR sensory inputs with deep learning is challenging because deep learning methods generally require large training datasets and high-quality labels, which are expensive to obtain for SAR. In this paper, we introduce a new approach for learning from SAR images in the absence of abundant labeled SAR data. We demonstrate that our geometrically inspired neural architecture, together with our proposed self-supervision scheme, enables us to leverage unlabeled SAR data and learn compelling image features with few labels. Finally, we present test results of our proposed algorithm on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset.
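The abstract leaves the self-supervision scheme unspecified. As a hedged stand-in, the sketch below pretrains a small backbone on unlabeled SAR chips with a rotation-prediction pretext task before fine-tuning on the few labeled MSTAR targets; both the task and the architecture are illustrative assumptions.

```python
# Minimal sketch: self-supervised pretraining on unlabeled SAR chips via a
# rotation-prediction pretext task. This stands in for the paper's
# self-supervision scheme, which the abstract does not spell out.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rot_head = nn.Linear(32, 4)  # predict 0 / 90 / 180 / 270 degree rotation
opt = torch.optim.Adam(list(backbone.parameters()) + list(rot_head.parameters()))

def pretrain_step(chips: torch.Tensor) -> float:
    """chips: (N, 1, H, W) unlabeled SAR magnitude images."""
    k = torch.randint(0, 4, (chips.size(0),))
    rotated = torch.stack([torch.rot90(c, int(r), dims=(1, 2))
                           for c, r in zip(chips, k)])
    loss = nn.functional.cross_entropy(rot_head(backbone(rotated)), k)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# After pretraining, the backbone is fine-tuned with a classification head
# on the small labeled subset of MSTAR targets.
```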
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information in the captured images is not a trivial task, and quantifying it usually relies on predetermined numerical features. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images that allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image dataset, in which linear combinations of images lead to visually meaningful images. This enables us to apply linear geometric data analysis techniques, such as principal component analysis and linear discriminant analysis, in the linearly embedded space and to visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images with each image at full resolution, which existing methods cannot do. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
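The following is a minimal sketch of a linearized OT (LOT) embedding followed by PCA, assuming grayscale images normalized to probability distributions and a uniform reference measure on the pixel grid. It uses the POT library and scikit-learn; the discretization and reference choices are illustrative, not the paper's exact construction.

```python
# Minimal sketch: linearized optimal transport (LOT) embedding + PCA.
# Assumes small grayscale images normalized to probability distributions
# and a uniform reference measure; both are illustrative choices.
import numpy as np
import ot  # POT: Python Optimal Transport
from sklearn.decomposition import PCA

def lot_embed(images: np.ndarray) -> np.ndarray:
    """images: (N, H, W) nonnegative arrays; returns (N, H*W*2) LOT coordinates."""
    n, h, w = images.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                    axis=-1).reshape(-1, 2).astype(float)
    cost = ot.dist(grid, grid)            # squared Euclidean ground cost
    ref = np.full(h * w, 1.0 / (h * w))   # uniform reference measure
    embeddings = []
    for img in images:
        mu = (img / img.sum()).ravel()
        plan = ot.emd(ref, mu, cost)      # OT plan: reference -> image
        # Barycentric projection: average destination of each reference pixel.
        T = (plan @ grid) / ref[:, None]
        embeddings.append((T - grid).ravel())  # displacement field = LOT coords
    return np.asarray(embeddings)

# PCA in the embedded space exposes the dominant modes of variation; each
# mode can be visualized by transporting the reference measure along it.
# X = lot_embed(nuclei_images); pca = PCA(n_components=5).fit(X)
```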
The diagnostic standard is a pleural biopsy with subsequent histologic examination of the tissue demonstrating invasion by the tumor. The diagnostic tissue is obtained through thoracoscopy or open thoracotomy, both highly invasive procedures. Thoracocentesis, the removal of effusion fluid from the pleural space, is a far less invasive procedure that can provide material for cytological examination. However, it is insufficient to definitively confirm or exclude the diagnosis of malignant mesothelioma, since tissue invasion cannot be determined. In this study, we present a computerized method to detect and classify malignant mesothelioma based on the nuclear chromatin distribution in digital images of mesothelial cells from effusion cytology specimens. Our method aims to determine whether a set of nuclei belonging to a patient, obtained from effusion fluid images using image segmentation, is benign or malignant, and has the potential to eliminate the need for tissue biopsy. The method quantifies the chromatin morphology of cells using the optimal transportation (Kantorovich–Wasserstein) metric in combination with modified Fisher discriminant analysis, k-nearest neighbor classification, and a simple voting strategy. Our results show that we can classify the data of 10 different human cases with 100% accuracy under blind cross-validation. We conclude that nuclear structure alone contains enough information to classify malignant mesothelioma, and that the distribution of chromatin appears to be a discriminating feature between nuclei of benign and malignant mesothelioma cells.
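A minimal sketch of the final classification stage follows, assuming each nucleus is already represented by an OT-derived feature vector after the discriminant projection. The function names and the choice of k are illustrative, not the study's exact parameters.

```python
# Minimal sketch: nucleus-level kNN classification with a per-patient
# majority vote. Assumes nuclei are already encoded as OT-based feature
# vectors (e.g., after the Fisher discriminant projection).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_patient(train_feats, train_labels, patient_nuclei, k=5):
    """Label each of a patient's nuclei, then vote: 0 = benign, 1 = malignant."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    nucleus_labels = knn.predict(patient_nuclei)
    return int(nucleus_labels.mean() > 0.5)  # simple majority vote

# Blind evaluation would hold out one patient's nuclei entirely
# (leave-one-patient-out) so no nucleus from the test case is seen in training.
```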