Joseph Ross Mitchell, Konstantinos Kamnitsas, Kyle Singleton, Scott Whitmire, Kamala Clark-Swanson, Sara Ranjbar, Cassandra Rickertsen, Sandra Johnston, Kathleen Egan, Dana Rollison, John Arrington, Karl Krecke, Theodore Passe, Jared Verdoorn, Alex Nagelschneider, Carrie Carr, John Port, Alice Patton, Norbert Campeau, Greta Liebo, Laurence Eckel, Christopher Wood, Christopher Hunt, Prasanna Vibhute, Kent Nelson, Joseph Hoxworth, Ameet Patel, Brian Chong, Jeffrey Ross, Jerrold Boxerman, Michael Vogelbaum, Leland Hu, Ben Glocker, Kristin Swanson
Purpose: Deep learning (DL) algorithms have shown promising results for brain tumor segmentation in MRI. However, validation is required prior to routine clinical use. We report the first randomized and blinded comparison of DL and trained technician segmentations.
Approach: We compiled a multi-institutional database of 741 pretreatment MRI exams. Each contained a postcontrast T1-weighted exam, a T2-weighted fluid-attenuated inversion recovery exam, and at least one technician-derived tumor segmentation. The database included 729 unique patients (470 males and 259 females). Of these exams, 641 were used for training the DL system, and 100 were reserved for testing. We developed a platform to enable qualitative, blinded, controlled assessment of lesion segmentations made by technicians and the DL method. On this platform, 20 neuroradiologists performed 400 side-by-side comparisons of segmentations on 100 test cases. They scored each segmentation between 0 (poor) and 10 (perfect). Agreement between segmentations from technicians and the DL method was also evaluated quantitatively using the Dice coefficient, which produces values between 0 (no overlap) and 1 (perfect overlap).
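For reference, the Dice coefficient can be computed directly from a pair of binary masks; the sketch below (NumPy arrays of equal shape, not the authors' actual evaluation code) illustrates the calculation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary segmentation masks: 0 = no overlap, 1 = perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))
```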
Results: The neuroradiologists gave technician and DL segmentations mean scores of 6.97 and 7.31, respectively (p < 0.00007). The DL method achieved a mean Dice coefficient of 0.87 on the test cases.
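The abstract does not state which statistical test produced the reported p-value; one plausible paired, nonparametric comparison of matched reader scores is sketched below (illustrative data only, using SciPy's Wilcoxon signed-rank test).

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired scores (0-10) for the same cases rated against both
# segmentation sources; these are placeholders, not the study's data.
technician_scores = np.array([7, 6, 8, 5, 7, 9, 6, 7, 8, 6])
dl_scores = np.array([8, 7, 8, 6, 8, 9, 8, 7, 9, 7])

stat, p_value = wilcoxon(dl_scores, technician_scores)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")
```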
Conclusions: This was the first objective comparison of automated and human segmentation using a blinded controlled assessment study. Our DL system learned to outperform its “human teachers” and produced output that was better, on average, than its training data.
Glioblastoma (GBM), the most aggressive primary brain tumor, is primarily diagnosed and monitored using gadolinium-enhanced T1-weighted and T2-weighted (T2W) magnetic resonance imaging (MRI). Hyperintensity on T2W images is understood to correspond to vasogenic edema and infiltrating tumor cells. GBM’s inherent heterogeneity, and the nonspecific MRI features that result, complicate assessment of treatment response. To better characterize treatment response, we propose creating a patient-specific untreated virtual imaging control (UVIC), which represents an individual tumor’s growth had it not been treated, for comparison with actual post-treatment images. We generated a T2W MRI UVIC by combining a patient-specific mathematical model of tumor growth with a multi-compartmental MRI signal equation. GBM growth was modeled using the previously developed Proliferation-Invasion-Hypoxia-Necrosis-Angiogenesis-Edema (PIHNA-E) model, which represents the tumor as three cellular phenotypes (normoxic, hypoxic, and necrotic cells) interacting with a vasculature species, angiogenic factors, and extracellular fluid. Within the PIHNA-E model, both hypoxic and normoxic cells emit angiogenic factors, which recruit additional vessels and cause those vessels to leak, allowing fluid (edema) to escape into the extracellular space. The model’s output was a spatial volume-fraction map for each glioma cell type and for the edema/extracellular space. These volume-fraction maps and corresponding T2 values were then incorporated into a multi-compartmental Bloch signal equation to create simulated T2W images. T2 values for the individual compartments were estimated from the literature and from a normal volunteer. T2 maps calculated from the simulated images yielded normal white matter, normal gray matter, and tumor tissue T2 values within the range of literature values.
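The multi-compartmental signal equation referenced above weights each compartment's T2 decay by its volume fraction; a minimal sketch of a T2-weighted spin-echo signal of that general form, S(TE) = sum_i v_i * exp(-TE / T2_i), is shown below. The compartment names, T2 values, and echo time are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def multicompartment_t2w_signal(volume_fractions, t2_values_ms, te_ms=100.0):
    """T2-weighted signal of a voxel modeled as a sum of compartments:
    S(TE) = sum_i v_i * exp(-TE / T2_i), with volume fractions v_i summing to <= 1."""
    v = np.asarray(volume_fractions, dtype=float)
    t2 = np.asarray(t2_values_ms, dtype=float)
    return float(np.sum(v * np.exp(-te_ms / t2)))

# Illustrative voxel: normoxic tumor, hypoxic tumor, necrosis, edema/extracellular fluid.
# T2 values (ms) and volume fractions are placeholders, not fitted or literature values.
fractions = [0.3, 0.2, 0.1, 0.4]
t2_ms = [90.0, 100.0, 300.0, 1500.0]
print(multicompartment_t2w_signal(fractions, t2_ms, te_ms=100.0))
```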