Object detection in urban traffic scenarios is a major challenge in computer vision due to factors such as occlusion, illumination changes, small target size, and complex backgrounds. To enhance detection accuracy in urban traffic scenarios, several improvements are applied to the original YOLOv8. First, a novel C2f module embedded with a deformable convolutional network (DCN) is proposed to enlarge the model's receptive field and thereby strengthen its robustness. Then, a channel priority convolution attention (CPCA) mechanism is introduced to extract salient features and thus enhance the model's regression and localization capability. Furthermore, the CIoU loss function is replaced by ECIoU to improve bounding-box positioning accuracy and convergence speed. Experimental results on the VisDrone2019 dataset show that the improved YOLOv8 model outperforms the original YOLO models. This study provides a theoretical basis for more accurate object detection in complex urban traffic scenarios.
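The abstract does not define ECIoU, but the CIoU term it replaces is a standard bounding-box regression metric: IoU penalized by the normalized center distance and an aspect-ratio consistency term. A minimal NumPy sketch is below; the box format `(x1, y1, x2, y2)` and the small epsilon are implementation assumptions, not details taken from the paper.

```python
import numpy as np

def ciou(box1, box2):
    """Complete IoU (CIoU) between two boxes given as (x1, y1, x2, y2).

    CIoU = IoU - rho^2 / c^2 - alpha * v, where rho is the distance
    between box centers, c is the diagonal of the smallest enclosing
    box, and v measures aspect-ratio inconsistency.
    """
    # Intersection area
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union and plain IoU
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    iou = inter / (a1 + a2 - inter)

    # Squared distance between box centers
    cx1, cy1 = (box1[0] + box1[2]) / 2, (box1[1] + box1[3]) / 2
    cx2, cy2 = (box2[0] + box2[2]) / 2, (box2[1] + box2[3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2

    # Squared diagonal of the smallest enclosing box
    ex1, ey1 = min(box1[0], box2[0]), min(box1[1], box2[1])
    ex2, ey2 = max(box1[2], box2[2]), max(box1[3], box2[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    # Aspect-ratio consistency term
    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    v = (4 / np.pi ** 2) * (np.arctan(w2 / h2) - np.arctan(w1 / h1)) ** 2
    alpha = v / (1 - iou + v + 1e-9)  # epsilon is an assumption

    return iou - rho2 / c2 - alpha * v
```

The corresponding loss is `1 - ciou(pred, target)`; for perfectly overlapping boxes CIoU is 1, and it decreases as the boxes drift apart or their aspect ratios diverge.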
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image and extract the ROI region using a bidirectional gray projection method. Inspired by the fact that features in vein areas have a valley-like appearance, a novel method is proposed to extract the center and width of the palm vein based on multi-directional gradients; it is computationally simple, fast, and stable. On this basis, an encoding method is designed to determine the gray-value distribution of the texture image. This algorithm effectively overcomes texture-extraction errors at the edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method achieves an EER of 3.21% and extracts features at 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture-extraction efficiency, matching accuracy, and overall algorithm efficiency.
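Bidirectional gray projection, used above for ROI extraction, sums image intensities along rows and along columns and keeps the span where each projection is strong. A minimal sketch follows, assuming a bright-foreground image; the function name and the threshold fraction are illustrative choices, not the authors' implementation (which also performs centroid-based angle correction first).

```python
import numpy as np

def roi_by_gray_projection(img, frac=0.5):
    """Crop an ROI via bidirectional gray projection (simplified sketch).

    Sums intensities per column (vertical projection) and per row
    (horizontal projection), then keeps the span where each projection
    reaches at least `frac` of its maximum. `frac` is an assumed
    parameter for illustration.
    """
    img = np.asarray(img, dtype=float)
    col_proj = img.sum(axis=0)  # one value per column
    row_proj = img.sum(axis=1)  # one value per row

    def span(proj):
        # First and last index where the projection clears the threshold
        idx = np.flatnonzero(proj >= frac * proj.max())
        return idx[0], idx[-1] + 1

    x0, x1 = span(col_proj)
    y0, y1 = span(row_proj)
    return img[y0:y1, x0:x1], (y0, y1, x0, x1)
```

On a synthetic image with a bright rectangle, the function returns exactly the bounding span of that rectangle; on real vein images, smoothing the projections before thresholding makes the boundaries more stable.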