The parasternal long axis (PLAX) is a routine imaging plane used by clinicians to assess the overall function of the left ventricle during an echocardiogram. Measurements from the PLAX view, in particular the left ventricular internal dimension at end-diastole (LVIDd) and end-systole (LVIDs), are significant markers used to identify cardiovascular disease and can provide an estimate of ejection fraction (EF). However, due to the user-dependent nature of echocardiograms, these measurements suffer from a large amount of inter-observer variability, which greatly affects the sensitive formula used to calculate PLAX EF. While a few previous works have attempted to reduce this variability by automating LVID measurements, their models not only lack reliable accuracy and precision, but are also generally unsuited for adaptation to point-of-care ultrasound (POCUS), which has limited computing resources. In this paper, we propose a fully automatic, light-weight landmark detection network for detecting LVID and rapidly estimating PLAX EF. Our model is built upon recent advances in deep video landmark tracking with extremely sparse annotations.1 The model is trained on only two frames in the cardiac cine that contain either the LVIDd or LVIDs measurements labeled by clinicians. Using data from 34,305 patients for our experiments, the proposed model accurately tracks the contraction of the left ventricular walls. Our model achieves a mean absolute error and standard deviation of 2.65 ± 2.36 mm, 2.77 ± 2.58 mm, and 8.45 ± 7.43% for predicting LVIDd length, LVIDs length, and PLAX EF, respectively. As a light-weight network with fewer than 125,000 parameters, our model is extremely accessible for POCUS applications.
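For context, PLAX EF is conventionally derived from the two internal dimensions via an LV volume formula such as the Teichholz method. The abstract does not name the exact formula it uses, so the sketch below is an assumption; it mainly illustrates why small errors in LVIDd/LVIDs propagate strongly into the estimated EF.

```python
def teichholz_volume(lvid_cm: float) -> float:
    """Teichholz LV volume (mL) from an internal dimension given in cm."""
    return 7.0 / (2.4 + lvid_cm) * lvid_cm ** 3

def plax_ef(lvidd_cm: float, lvids_cm: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic dimensions."""
    edv = teichholz_volume(lvidd_cm)  # end-diastolic volume
    esv = teichholz_volume(lvids_cm)  # end-systolic volume
    return 100.0 * (edv - esv) / edv

# Example: an LVIDs error of ~2.5 mm noticeably shifts the estimated EF.
print(plax_ef(4.8, 3.2))    # ~62%
print(plax_ef(4.8, 3.45))   # ~54% when LVIDs is overestimated by 2.5 mm
```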
Echocardiography (echo) is one of the most widely used imaging techniques for evaluating cardiac function. Left ventricular ejection fraction (EF) is a commonly assessed echocardiographic measurement to study systolic function and is a primary index of cardiac contractility. EF indicates the percentage of blood volume ejected from the left ventricle in a cardiac cycle. Several deep learning (DL) works have contributed to the automatic measurement of EF in echo via LV segmentation and visual assessment,1-8 but the design of a lightweight, robust video-based model for EF estimation in portable mobile environments remains a challenge. To overcome this limitation, we propose a modified Tiny Video Network (TVN) with sampling-free uncertainty estimation for video-based EF measurement in echo. Our key contribution is to achieve accuracy comparable to the contemporary state-of-the-art video-based model, the Echonet-Dynamic approach,1 while having a small model size. Moreover, we model the aleatoric uncertainty in our network to capture the inherent noise and ambiguity of EF labels in echo data and improve prediction robustness. The proposed network is suitable for real-time video-based EF estimation compatible with portable mobile devices. For experiments, we use the publicly available Echonet-Dynamic dataset1 with 10,030 four-chamber echo videos and their corresponding EF labels. The experiments show the advantages of the proposed method in performance and robustness.
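The abstract only states that aleatoric uncertainty is modeled in a sampling-free way. A common formulation for this (heteroscedastic regression in the style of Kendall and Gal) has the network predict both an EF estimate and a log-variance and trains with a Gaussian negative log-likelihood; the sketch below illustrates that loss under this assumption, not the paper's actual implementation, and the tensor names are hypothetical.

```python
import torch

def gaussian_nll(ef_pred: torch.Tensor,
                 log_var: torch.Tensor,
                 ef_true: torch.Tensor) -> torch.Tensor:
    """Heteroscedastic Gaussian NLL: a large predicted variance down-weights
    noisy EF labels, giving a sampling-free aleatoric uncertainty estimate."""
    precision = torch.exp(-log_var)
    return (0.5 * precision * (ef_true - ef_pred) ** 2 + 0.5 * log_var).mean()

# Toy usage with hypothetical network outputs for a batch of four echo clips.
ef_pred = torch.tensor([55.0, 62.0, 40.0, 70.0])
log_var = torch.tensor([1.0, 0.5, 2.0, 0.3])    # per-clip predicted log-variance
ef_true = torch.tensor([57.0, 60.0, 35.0, 71.0])
loss = gaussian_nll(ef_pred, log_var, ef_true)
```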
Transthoracic echocardiography (echo) is the most common imaging modality for the diagnosis of cardiac conditions. Echo is acquired from a multitude of views, each of which distinctly highlights specific regions of the heart anatomy. In this paper, we present an approach based on knowledge distillation to obtain a highly accurate, lightweight deep learning model for classification of 12 standard echocardiography views. The knowledge of several deep learning architectures based on three common state-of-the-art families, VGG-16, DenseNet, and ResNet, is distilled to train a set of lightweight models. Networks were developed and evaluated using a dataset of 16,612 echo cines obtained from 3,151 unique patients across several ultrasound imaging machines. The best accuracy of 89.0% is achieved by an ensemble of the three very deep models, while an ensemble of lightweight models achieves a comparable accuracy of 88.1%. The lightweight models have approximately 1% of the very deep models' parameters and are six times faster at run-time. Such lightweight view classification models could be used to build fast mobile applications for real-time point-of-care ultrasound diagnosis.
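The abstract does not spell out the distillation objective. The standard formulation (Hinton et al.) blends a temperature-softened KL term against the teacher's logits with the usual cross-entropy on the view labels; the sketch below is that generic loss, not the authors' exact training code, and the temperature and weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.7) -> torch.Tensor:
    """Combine soft-target KL (teacher knowledge) with hard-label cross-entropy."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)  # e.g. 12 echo view classes
    return alpha * kd + (1.0 - alpha) * ce
```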
Echocardiography (echo) is the most common test for diagnosis and management of patients with cardiac conditions. While most medical imaging modalities benefit from a relatively automated procedure, this is not the case for echo, and the quality of the final echo view depends on the competency and experience of the sonographer. It is not uncommon that the sonographer does not have adequate experience to adjust the transducer and acquire a high-quality echo, which may further affect the clinical diagnosis. In this work, we aim to aid the operator during image acquisition by automatically assessing the quality of the echo and generating an Automatic Echo Score (AES). This quality assessment method is based on a deep convolutional neural network, trained in an end-to-end fashion on a large dataset of apical four-chamber (A4C) echo images. For this project, an expert cardiologist went through 2,904 A4C images obtained from independent studies and assessed their condition based on a 6-scale grading system. The scores assigned by the expert ranged from 0 to 5, and the distribution of scores among the 6 levels was almost uniform. The network was then trained on 80% of the data (2,345 samples). The average absolute error of the trained model in calculating the AES was 0.8 ± 0.72. The computation time of the GPU implementation of the neural network was estimated at 5 ms per frame, which is sufficient for real-time deployment.
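The abstract describes an end-to-end CNN regressing a 0-5 quality score and evaluates it with mean absolute error. Below is a minimal sketch of such a training step, assuming a small generic convolutional backbone and an L1 objective; the actual AES architecture and loss are not specified here, so every layer and name is a placeholder.

```python
import torch
import torch.nn as nn

# Hypothetical lightweight regressor for A4C quality scoring; the real AES
# network's architecture is not given in this abstract.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, scores: torch.Tensor) -> float:
    """One L1-loss step: predicted AES vs. the expert's 0-5 grade."""
    optimizer.zero_grad()
    pred = backbone(images).squeeze(1)
    loss = nn.functional.l1_loss(pred, scores)  # matches the reported MAE metric
    loss.backward()
    optimizer.step()
    return loss.item()
```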