Proceedings Article | 2 April 2024
KEYWORDS: Kidney, Education and training, Machine learning, Image segmentation, Data modeling, Magnetic resonance imaging, Performance modeling, Computed tomography, Liver, Medical imaging
Medical image segmentation benefits from advances in machine learning, which offer the potential for automation. Accuracy, however, depends on substantial annotated data and significant computing resources. Transfer learning addresses these challenges by reusing a model's knowledge from one task on another with only minor adjustments, adapting learned features to new tasks whose datasets differ but share underlying characteristics. Prior studies have explored the impact of transferring from large source datasets to limited target datasets; this investigation instead focuses on transferring knowledge from a limited source to enhance model versatility across a variety of tasks. Our goal was to transfer knowledge from advanced models trained on T2-weighted MR images of Autosomal Dominant Polycystic Kidney Disease (ADPKD) for kidney and cyst segmentation (the source models) to five distinct target datasets: CT liver, CT kidneys, CT spleen, MRI kidneys, and CT multimodal data (target datasets 1 through 5). The primary objective was to achieve accurate segmentation on these target datasets while saving time and computational resources, an approach that is especially valuable when a substantial labeled target dataset, such as mouse PKD MRI, is difficult to obtain and the source dataset itself is resource-intensive. Transfer learning from source 1 onto target sets 1 to 5 resulted in mean Dice Similarity Coefficients (DSCs) of 0.94±0.04, 0.97±0.02, 0.95±0.03, 0.96±0.01, and 0.93±0.02, respectively; transfer from source 2 yielded mean DSCs of 0.95±0.04, 0.96±0.02, 0.95±0.02, 0.96±0.02, and 0.93±0.02 for the same target sets. Despite variations in pathological conditions, image characteristics, and imaging modalities, the transfer learning approach produced DSC values comparable to the originally published results, with reduced training requirements, faster convergence, and lower computational cost.
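To make the transfer-learning workflow concrete, below is a minimal sketch of the two steps the abstract describes: initializing a target-task model from a source checkpoint and evaluating with the Dice Similarity Coefficient. This is an illustrative assumption of how such a pipeline might look in PyTorch; the abstract does not specify the framework, model architecture, checkpoint format, or thresholding, so the function names and parameters here are hypothetical rather than the authors' actual implementation.

```python
# Hedged sketch only: assumes a PyTorch model whose layer names largely match
# between the source (ADPKD kidney/cyst) and target (liver/kidney/spleen) tasks.
import torch
import torch.nn as nn


def load_pretrained_weights(model: nn.Module, checkpoint_path: str) -> nn.Module:
    """Initialize a target-task model with weights learned on the source task."""
    source_state = torch.load(checkpoint_path, map_location="cpu")
    target_state = model.state_dict()
    # Transfer only parameters whose names and shapes match; task-specific
    # layers (e.g., the final output layer) keep their fresh initialization.
    transferable = {
        name: weight
        for name, weight in source_state.items()
        if name in target_state and weight.shape == target_state[name].shape
    }
    target_state.update(transferable)
    model.load_state_dict(target_state)
    return model


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Dice Similarity Coefficient (DSC) between a predicted probability map
    (thresholded at 0.5, an assumed choice) and a binary ground-truth mask."""
    pred = (pred > 0.5).float()
    target = target.float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

In a setup like this, the weight-transferred model would then be fine-tuned on each target dataset (for example, with a reduced learning rate or with early layers frozen), which is consistent with the abstract's reported savings in training time and computation, though the specific fine-tuning schedule is not stated in the source.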