KEYWORDS: Electrocardiography, Education and training, Data modeling, Deep learning, Feature extraction, Ablation, Signal processing, Performance modeling, Statistical modeling, Machine learning
Targeting the challenge that the high cost of labeling ECG data has left labeled ECG datasets scarce, and that contemporary models achieve only modest segmentation precision, this paper proposes NGA-Net, an ECG segmentation model built on RRU-Net with the addition of an ASPNL module and an improved Ghost module. The improved Ghost module generates more feature maps from fewer parameters, improving computational efficiency, while the ASPNL module captures ECG signal features at multiple scales to strengthen feature extraction. Experiments on the publicly available LUDB dataset show that NGA-Net outperforms other methods, demonstrating its effectiveness. We also adopt a semi-supervised learning strategy to train NGA-Net in small-sample scenarios, leveraging data augmentation and consistency training; the experimental findings confirm that semi-supervised learning improves the performance of deep learning models.
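The abstract does not give the exact semi-supervised objective, but consistency training is commonly formulated as a supervised loss on labeled data plus an agreement penalty between predictions on two augmented views of unlabeled data. The following is a minimal NumPy sketch of that idea; the toy linear "model", the noise/scaling augmentation, and the weight `lam` are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def augment(x, rng):
    # Weak augmentation for 1-D signals (assumed for illustration):
    # small random amplitude scaling plus Gaussian noise.
    return x * rng.uniform(0.9, 1.1) + rng.normal(0, 0.01, size=x.shape)

def consistency_loss(model, x_labeled, y_labeled, x_unlabeled, rng, lam=1.0):
    # Supervised term: cross-entropy on the labeled batch.
    p = softmax(model(x_labeled))
    sup = -np.mean(np.log(p[np.arange(len(y_labeled)), y_labeled] + 1e-12))
    # Consistency term: predictions on two augmented views of the same
    # unlabeled signals should agree (mean squared error between them).
    p1 = softmax(model(augment(x_unlabeled, rng)))
    p2 = softmax(model(augment(x_unlabeled, rng)))
    cons = np.mean((p1 - p2) ** 2)
    return sup + lam * cons

# Toy stand-in for the segmentation network: a fixed linear map
# from 8 signal samples to 3 classes.
W = rng.normal(size=(8, 3))
model = lambda x: x @ W

x_l = rng.normal(size=(4, 8))
y_l = rng.integers(0, 3, size=4)
x_u = rng.normal(size=(16, 8))

loss = consistency_loss(model, x_l, y_l, x_u, rng)
print(loss > 0.0)
```

In practice the model would be the NGA-Net itself and the augmentations would be chosen to preserve ECG wave boundaries, but the loss structure stays the same.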
KEYWORDS: Video, Detection and tracking algorithms, Video surveillance, Feature extraction, Cameras, Video compression, Data conversion, Neural networks
A keyframe is a crucial image frame that characterizes a shot, and keyframe technology can significantly reduce the amount of data handled in video retrieval, for example in video-on-demand, face recognition from camera feeds, and retrieval of key shots in medical imaging. To address the low extraction accuracy and poor real-time performance of current video keyframe extraction methods, this paper proposes CTM-NN, a real-time video keyframe extraction algorithm that combines the inter-frame difference method with clustering and a neural network. The algorithm applies a thresholded inter-frame difference, extracts HOG and HSV first-order-moment features, clusters them with K-means++, and finally trains its own ResNet-50 model, aiming to extract real-time video keyframes accurately and efficiently. To validate the proposed algorithm, experiments were carried out on finished news video, landscape video, and real-time concrete-mixing video. The experimental results show that the method meets both the accuracy and speed requirements of real-time keyframe extraction, saving the keyframes and labeling them automatically while preserving temporal order. Overall, the proposed CTM-NN algorithm achieves good results in the extraction and storage of real-time video keyframes.
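The first stage of the pipeline described above, thresholded inter-frame differencing, can be sketched as follows. This is a minimal NumPy illustration of candidate selection only; the function name, the mean-absolute-difference metric, and the toy threshold are assumptions for the sketch, and the later HOG/HSV, K-means++, and ResNet-50 stages are not implemented here.

```python
import numpy as np

def keyframe_candidates(frames, threshold):
    # Mean absolute inter-frame difference between consecutive frames.
    # A frame whose difference from its predecessor exceeds the set
    # threshold is kept as a candidate keyframe. (In the full CTM-NN
    # pipeline, candidates would then be described with HOG + HSV
    # first-order-moment features, clustered with K-means++, and
    # classified with a trained ResNet-50.)
    diffs = np.mean(np.abs(np.diff(frames.astype(np.float64), axis=0)),
                    axis=(1, 2))
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Toy video: six 4x4 grayscale frames with a scene change at frame 3.
frames = np.zeros((6, 4, 4))
frames[3:] = 100.0
candidates = keyframe_candidates(frames, threshold=10.0)
print(candidates)  # [3]
```

Only the frame at the scene cut survives the threshold, which is the data-reduction effect the abstract attributes to keyframe technology.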