The task of detecting and tracking mitosis is important in many biomedical areas such as cancer and stem cell research. The task becomes complex in a high-density cell array, largely due to extremely imbalanced data, with very few proliferating cells in each image. Exploiting the fact that cells become rounder and brighter before proliferating, our group extracted bright blobs in each image and considered the patch around each blob a candidate for mitosis. These candidates were labeled, divided into training, validation, and test sets, and used to train a Convolutional Neural Network (CNN). In the current work, to overcome the small number of mitosis samples in the training set, we generated synthetic mitosis patches using Generative Adversarial Networks (GANs). Predicting the labels of the test-set candidates with a CNN trained on both real and synthetically generated images increased both sensitivity and specificity, in comparison to a CNN trained only on real examples.
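The candidate-extraction step described above can be illustrated with a minimal sketch: find bright local maxima (blob centres) and crop a patch around each as a mitosis candidate. The threshold, patch size, and local-maximum test here are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def extract_candidates(img, thresh, patch=5):
    """Crop a patch around each bright local maximum as a mitosis candidate.

    `thresh` and `patch` are illustrative parameters, not the values
    used in the paper. A pixel qualifies as a blob centre when it is
    above the brightness threshold and is a local maximum in its
    3x3 neighbourhood.
    """
    h, w = img.shape
    half = patch // 2
    candidates = []
    for y in range(half, h - half):
        for x in range(half, w - half):
            v = img[y, x]
            if v < thresh:
                continue
            window = img[y - 1:y + 2, x - 1:x + 2]
            if v >= window.max():  # local maximum -> candidate blob centre
                candidates.append(
                    img[y - half:y + half + 1, x - half:x + half + 1].copy()
                )
    return candidates

# Toy image with two bright "round" blobs.
img = np.zeros((20, 20))
img[5, 5] = 1.0
img[12, 14] = 0.8
cands = extract_candidates(img, thresh=0.5)
assert len(cands) == 2 and cands[0].shape == (5, 5)
```

In the described method, patches like these would then be labeled and, together with GAN-generated synthetic mitosis patches, used to train the CNN classifier.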
With the advances in deep learning, a question arises: whether, for a robust network learning cycle, the shift between raw data and image-based data is important. In the current work, we begin exploring this topic by focusing on the following case study: categorizing cancer cells from holographic images. The bottleneck in the categorization process is the time consumed by the transformation from off-axis image holograms to OPD maps. While there have been attempts at fast transformation, they still take non-negligible time. We propose a novel approach for fast classification that skips the pre-processing step of creating OPD maps and directly uses the raw data of holographic images. Our dataset contains two separate image acquisitions of primary cancer cells (SW480) and metastatic cancer cells (SW620) of colorectal adenocarcinoma imaged during flow. We extracted the OPD maps of these cells and used them to train and evaluate a ResNet model to create baseline results. Our convolutional neural network (CNN) model is based on the Y-Net approach: during training, synthetic OPD images are created from the input holograms, supervised by the real OPD maps, while the cells are simultaneously classified. At inference time, only the classification branch operates, further reducing running time. This approach saves over 90% of the computational time.
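The two-branch structure described above can be sketched minimally: a shared encoder feeds both an OPD-reconstruction branch (used only during training) and a classification branch (used at both training and inference). The linear layers and shapes below are hypothetical stand-ins for the convolutional trunk, meant only to show why dropping the reconstruction branch at inference leaves the classification output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights standing in for the Y-Net's convolutional layers;
# the dimensions are illustrative, not those of the actual model.
W_enc = rng.standard_normal((64, 32)) * 0.1   # shared encoder
W_rec = rng.standard_normal((32, 64)) * 0.1   # OPD-reconstruction branch
W_cls = rng.standard_normal((32, 2)) * 0.1    # SW480-vs-SW620 branch

def encode(holo):
    # Shared feature extraction from the raw hologram (ReLU features).
    return np.maximum(holo @ W_enc, 0.0)

def forward_train(holo):
    """Training-time pass: both branches run; the synthetic OPD output
    would be compared against the real OPD map as extra supervision."""
    z = encode(holo)
    return z @ W_rec, z @ W_cls   # (synthetic OPD, class logits)

def forward_infer(holo):
    """Inference-time pass: the reconstruction branch is skipped."""
    return encode(holo) @ W_cls

holo = rng.standard_normal((1, 64))           # toy "hologram" vector
opd_hat, logits_train = forward_train(holo)
logits_infer = forward_infer(holo)

# Skipping the OPD branch does not change the classification output.
assert np.allclose(logits_train, logits_infer)
```

The computational saving reported in the abstract comes from exactly this asymmetry: the expensive OPD-related computation exists only on the training path.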