Human action recognition is used in many applications, such as human-computer interaction, video surveillance, assisted living, and gaming. Deploying human action recognition demands that processing be carried out in real-time or in a computationally efficient manner, a requirement addressed by only a subset of the methods developed in the literature. This paper reviews computationally efficient human action recognition methods that use a vision sensor. The reviewed papers are categorized in terms of conventional versus deep learning approaches, as well as single versus multiple vision sensing modalities.
Plant diseases are a substantial problem for agricultural yield management. Compounded by insufficient information to diagnose crops correctly, they can lead to significant economic loss and yield inefficiencies. Motivated by the success of deep learning in many image processing applications, the first part of this paper designs a deep neural network for detecting disease in maize, chosen for its economic significance. A convolutional neural network is designed and trained on a public-domain dataset of labeled images of maize leaves with and without disease present. In the second part of the paper, the trained convolutional neural network is turned into a smartphone app that performs maize crop disease detection in the field in real-time, in an on-the-fly manner. The smartphone app offers a cost-effective, portable, and widely accessible way to detect disease in maize. The approach developed in this paper enables recognizing early signs of plant pathogens from maize crop images in real-time in the field, leading to preemptive corrective actions before significant yield loss occurs.
Keywords: Artificial Intelligence in agriculture, real-time detection of crop disease, smartphone
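The inference step the abstract describes (a trained convolutional network classifying a leaf image) can be sketched as a minimal forward pass. This is an illustrative sketch only, assuming a single convolutional layer, random weights standing in for trained parameters, and hypothetical class labels; the abstract does not specify the network architecture or the dataset's label set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class labels; the actual dataset's label set may differ.
CLASSES = ["healthy", "common_rust", "gray_leaf_spot", "northern_leaf_blight"]

def conv2d_valid(image, kernels):
    """Naive valid-mode 2-D convolution: image (H, W), kernels (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_leaf(gray_image, conv_kernels, fc_weights):
    """One conv layer -> ReLU -> global average pool -> linear -> softmax."""
    feat = np.maximum(conv2d_valid(gray_image, conv_kernels), 0.0)  # ReLU
    pooled = feat.mean(axis=(1, 2))       # global average pooling per channel
    return softmax(fc_weights @ pooled)   # class probabilities

# Random weights stand in for trained parameters; a random array stands in
# for a grayscale leaf photo captured by the smartphone camera.
kernels = rng.standard_normal((8, 3, 3))
fc = rng.standard_normal((len(CLASSES), 8))
image = rng.random((32, 32))

probs = classify_leaf(image, kernels, fc)
print(CLASSES[int(np.argmax(probs))])
```

In a deployed app, the convolution and pooling would be executed by an on-device inference runtime rather than by explicit Python loops; the sketch only shows the shape of the computation.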
This paper presents a real-time implementation of deep neural networks on smartphone platforms to detect and classify diabetic retinopathy from eye fundus images. The implementation extends a previously reported one by considering all five stages of diabetic retinopathy. Two deep neural networks are first trained via transfer learning on fundus images from the EyePACS and APTOS datasets: one detects four stages, and the other further classifies the last stage into two more stages. It is then shown how these trained networks are turned into a smartphone app, in both Android and iOS versions, that processes images captured by smartphone cameras in real-time. The app is designed so that fundus images can be captured and processed in real-time by smartphones fitted with commercially available lens attachments. The developed real-time smartphone app provides a cost-effective and widely accessible approach for conducting first-pass diabetic retinopathy eye exams in remote clinics or areas with limited access to fundus cameras and ophthalmologists.
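The two-network cascade described in the abstract (a coarse four-stage classifier whose last stage is refined by a second classifier into two finer stages, giving five stages in total) can be sketched as follows. The stub predictors and the stage names are assumptions for illustration: the stubs stand in for the trained networks, and the labels follow the common diabetic retinopathy grading scale, which the abstract does not spell out.

```python
# Hypothetical stage names following the common DR grading scale; the
# paper's own label set is not given in the abstract.
STAGE_NAMES = ["no_dr", "mild", "moderate", "severe", "proliferative"]

def first_net(image):
    """Stub for network 1: returns one of four coarse labels (0-3).
    A real implementation would run the first trained CNN on the image."""
    return image.get("coarse", 0)

def second_net(image):
    """Stub for network 2: refines coarse label 3 into stage 3 or 4.
    A real implementation would run the second trained CNN."""
    return image.get("fine", 3)

def grade_fundus(image):
    """Cascade: coarse 4-way grade, refined to 5 stages when needed."""
    coarse = first_net(image)
    if coarse < 3:
        return STAGE_NAMES[coarse]          # stages 0-2: no refinement
    return STAGE_NAMES[second_net(image)]   # last stage: run network 2

print(grade_fundus({"coarse": 1}))              # prints "mild"
print(grade_fundus({"coarse": 3, "fine": 4}))   # prints "proliferative"
```

The design point illustrated is that the second, more specialized network only runs when the first network's prediction requires refinement, which keeps the average per-image cost low on a smartphone.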