Over 20,000 children experience a cardiac arrest annually in the U.S., yet only 17-50% survive. There is massive variability in the quality of lifesaving cardiopulmonary resuscitation (CPR) that children receive due to the limited availability of specialized pediatric emergency resources, suppressing the survival rate. High-quality CPR replaces the function of the beating heart during a cardiac arrest, preventing asphyxiation of the vital organs while the inciting process is investigated. A collaboration between the Johns Hopkins University Applied Physics Laboratory and School of Medicine was formed to address this critical healthcare gap, with the goal of saving the lives of dying children. We developed AR-CPR, a novel, effective, portable, usable, affordable, and equitable augmented reality system that provides real-time CPR feedback. AR-CPR improves adherence to Pediatric Advanced Life Support guidelines from 17% (SD 26%) to 73% (SD 18%) (p<0.001). We engineered a custom array of inertial measurement units (IMUs) and microprocessors to sense and analyze the quality of CPR. This information is wirelessly transmitted to an AR head-mounted display, where the medical practitioner receives immediate, actionable feedback on their performance. AR-CPR shows promise as the preeminent CPR feedback tool and is the first such device being developed for international clinical use. With continued development, AR-CPR could be used anywhere a child may experience a cardiac arrest, including emergency departments, ambulances, malls, households, airports, and schools, in conjunction with automated external defibrillators (AEDs). Thousands of children's lives could be saved.
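The abstract does not specify how the IMU signals are turned into feedback. As a hypothetical illustration of the general technique (not the authors' implementation), the sketch below double-integrates a chest-mounted accelerometer's vertical axis to estimate compression depth and counts velocity zero crossings to estimate compression rate; the function name, sampling setup, and drift correction are all assumptions.

```python
import numpy as np

def compression_metrics(accel_z, fs):
    """Estimate chest compression depth (cm) and rate (compressions/min)
    from the vertical axis of a chest-mounted IMU accelerometer.

    accel_z : 1-D array of vertical acceleration (m/s^2), gravity removed
    fs      : sampling frequency (Hz)
    """
    # Double-integrate acceleration to displacement. Integration drift is
    # the dominant error source; a real device would band-pass filter first.
    velocity = np.cumsum(accel_z) / fs
    velocity -= np.mean(velocity)            # crude drift correction
    displacement = np.cumsum(velocity) / fs
    displacement -= np.mean(displacement)

    # Peak-to-trough excursion approximates compression depth.
    depth_cm = (np.max(displacement) - np.min(displacement)) * 100.0

    # Each compression cycle produces two zero crossings of the velocity.
    crossings = np.sum(np.diff(np.sign(velocity)) != 0)
    duration_min = len(accel_z) / fs / 60.0
    rate_cpm = (crossings / 2) / duration_min

    return depth_cm, rate_cpm

# Synthetic check: ~110 compressions/min at 5 cm peak-to-peak for 10 s.
fs = 100
t = np.arange(0, 10, 1 / fs)
f = 110 / 60.0                               # compression frequency (Hz)
A = 0.025                                    # amplitude (m), 5 cm peak-to-peak
accel = -A * (2 * np.pi * f) ** 2 * np.sin(2 * np.pi * f * t)
depth_cm, rate_cpm = compression_metrics(accel, fs)
print(f"depth ~ {depth_cm:.1f} cm, rate ~ {rate_cpm:.0f}/min")
```

Feedback logic would then compare these estimates against the guideline targets (roughly 100-120 compressions/min and a depth of about one-third of the chest's anterior-posterior diameter) before rendering guidance on the head-mounted display.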
Ear infections are exceedingly common, yet challenging to diagnose correctly. The diagnosis requires a clinician (such as a physician, nurse practitioner, or physician assistant) to use an otoscope to inspect the eardrum (i.e., tympanic membrane). Once it is visualized, the clinician must rely on clinical judgment to determine the presence of changes typically associated with an ear infection, such as eardrum color and/or position. However, research has consistently demonstrated systemic failure among clinicians to correctly diagnose and manage ear infections. With recent advances in pattern recognition techniques, including deep learning, there has been increasing interest in automating the diagnosis of ear infections. While some previous studies have successfully applied machine learning to classify eardrum photos, these methods were developed and evaluated in non-real-world settings and used single, crisp, still-shot photos of the eardrum that would be labor-intensive to acquire from uncooperative pediatric patients. In contrast to previous work, we present a deep anomaly detection based method that flags otoscopy video sequences as normal or abnormal, achieving a promising first step toward automated analysis of otoscopy video for in-clinic or at-home screening.
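The abstract does not describe the model architecture. As a minimal sketch of the deep anomaly detection idea (an assumption, not the authors' method), the PyTorch example below uses a small convolutional autoencoder trained only on normal eardrum frames; frames that reconstruct poorly score as anomalous, and a robust aggregate over the video sequence yields the normal/abnormal flag. `FrameAutoencoder`, `score_video`, the 64x64 input size, and the threshold are all hypothetical.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Convolutional autoencoder trained on normal eardrum frames only.
    High reconstruction error on a new frame suggests abnormal appearance."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def score_video(model, frames, threshold):
    """frames: tensor (T, 3, 64, 64) of video frames scaled to [0, 1].
    Returns per-frame anomaly scores and a sequence-level abnormal flag."""
    model.eval()
    with torch.no_grad():
        recon = model(frames)
        scores = ((frames - recon) ** 2).mean(dim=(1, 2, 3))  # per-frame MSE
    # Aggregating with the median is robust to the blurred or occluded
    # frames common in exams of uncooperative pediatric patients.
    return scores, scores.median().item() > threshold

model = FrameAutoencoder()
# The model would be trained here on normal-only frames (MSE loss);
# this call just demonstrates the scoring interface on random data.
clip = torch.rand(30, 3, 64, 64)              # 30-frame video clip
scores, abnormal = score_video(model, clip, threshold=0.05)
```

Scoring whole sequences rather than single frames is what distinguishes this setup from the still-photo classifiers: no single crisp frame is required, only enough usable frames for a stable aggregate.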