
Phone Video Algorithm Screens for Stroke


Kathryn Atkinson, a patient at Houston Methodist Hospital, participates in a smartphone screening test to analyze stroke-like symptoms. (Houston Methodist)

23 Oct. 2020. A medical informatics team designed a computer model from patient videos to help emergency room physicians quickly diagnose stroke. Researchers from Pennsylvania State University in University Park and Houston Methodist Hospital in Texas describe the system in a paper at the International Conference on Medical Image Computing and Computer-Assisted Intervention, a virtual event held earlier this month.

The team led by Penn State bioinformatics professor James Wang is seeking a quick, reliable, and easy-to-use solution for physicians to identify stroke symptoms in emergency room patients. “When a patient experiences symptoms of a stroke, every minute counts,” says Wang in a university statement. “But when it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist, a specialist who may not be immediately available, to perform clinical diagnostic tests.”

Wang’s lab studies technologies for gaining knowledge more systematically from images, particularly artificial intelligence tools such as machine learning and statistical modeling. For this task, the researchers aim to emulate the standard clinical checklists used by neurologists to diagnose stroke, the Cincinnati Pre-hospital Stroke Scale and the Face Arm Speech Test, but to put these analytical tools in the hands of emergency room physicians, who may not be able to call in a neurologist to conduct the tests.
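As an illustration only, the short Python sketch below captures the logic these checklists embody: facial droop, arm drift, and abnormal speech are each assessed, and any positive finding flags the patient for further evaluation. The function and field names are hypothetical and are not taken from the paper.

```python
# Minimal, illustrative sketch of a checklist-style screen in the spirit of
# the Cincinnati Pre-hospital Stroke Scale: three findings are checked, and
# any one positive finding flags the patient for further stroke work-up.
# Names here are hypothetical, not from the study.
def cpss_screen(facial_droop: bool, arm_drift: bool, abnormal_speech: bool) -> bool:
    """Return True if any of the three checklist findings is present."""
    return facial_droop or arm_drift or abnormal_speech

# Example: slurred speech alone is enough to flag the patient.
print(cpss_screen(facial_droop=False, arm_drift=False, abnormal_speech=True))
```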

The Penn State researchers enlisted colleagues at Houston Methodist Hospital, who asked patients with stroke symptoms to participate. The team built a collection of videos from 80 Houston Methodist patients with suspected stroke symptoms, captured on iPhones as participants performed a speech test. The researchers then used the videos to train machine-learning algorithms, combined with image analysis and natural language processing, to inspect visual and audio evidence for signs of stroke, such as facial motion weakness, slurred speech, and other speech disorders.
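The article does not describe the model itself, so the sketch below is only a hypothetical illustration of the general approach: per-video facial-motion features and speech features are fused and fed to a binary classifier. The feature dimensions, placeholder data, and choice of classifier are all assumptions.

```python
# Hypothetical sketch: fuse per-video facial-motion and speech features,
# then train a binary stroke / non-stroke classifier. Feature extraction
# (face landmarks, audio or transcript embeddings) is assumed to happen
# upstream; random placeholders stand in for real features here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_videos = 80                                    # cohort size in the article
face_feats = rng.normal(size=(n_videos, 64))     # placeholder facial-motion descriptors
speech_feats = rng.normal(size=(n_videos, 32))   # placeholder speech descriptors
labels = rng.integers(0, 2, size=n_videos)       # 1 = stroke confirmed, 0 = not

# Simple late fusion: concatenate the two modalities for each video.
X = np.concatenate([face_feats, speech_feats], axis=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Concatenating modality features before a single classifier is just one simple fusion choice; the study's actual architecture may differ.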

“The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use,” says Penn State information sciences professor and team member Sharon Huang, “and ultimately empowers our method for remote diagnosis of stroke and self-assessment.”

The researchers tested the algorithm on smartphone videos of emergency room patients performing actual speech tests for stroke. The results show the computer model matches the outcomes of those tests, confirmed by CT scans, 79 percent of the time. In addition, the algorithm shows a true-positive sensitivity of 93 percent, and the computer model returns results in about four minutes.
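For readers unfamiliar with these metrics, the brief sketch below shows how accuracy (agreement with CT-confirmed labels) and sensitivity (the fraction of confirmed strokes the model flags) are computed. The example counts are placeholders, not the study's data.

```python
# Illustrative calculation of accuracy and true-positive sensitivity from
# predicted vs. CT-confirmed labels. The arrays below are placeholders.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])   # CT-confirmed stroke = 1
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])   # model output

accuracy = np.mean(y_true == y_pred)                  # overall agreement
tp = np.sum((y_true == 1) & (y_pred == 1))            # true positives
fn = np.sum((y_true == 1) & (y_pred == 0))            # missed strokes
sensitivity = tp / (tp + fn)                          # share of strokes caught
print(f"accuracy={accuracy:.2f}  sensitivity={sensitivity:.2f}")
```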

John Volpi, a vascular neurologist at Houston Methodist and co-author of the paper, says emergency room physicians often use a binary method for diagnosing stroke, focusing more on patients with obvious symptoms, which may overlook stroke patients with less severe symptoms. “If we can improve diagnostics at the front end, then we can better expose the right patients to the right risks and not miss patients who would potentially benefit,” notes Volpi, who adds, “We have great therapeutics, medicines, and procedures for strokes, but we have very primitive and, frankly, inaccurate diagnostics.”

Penn State and Houston Methodist filed a provisional patent for the algorithm.


*     *     *
