Parkinson’s Voice Initiative
The first project is the Parkinson’s Voice Initiative, launched by applied mathematician Max Little of the MIT Media Lab with a bold mandate – to crowdsource 10,000 three-minute recordings from Parkinson’s patients, which Little and his team hope to use to validate the algorithms they are developing for voice-based diagnosis of Parkinson’s Disease.
According to MayoClinic.com, there are currently no biomarker tests available for diagnosing Parkinson’s. In fact, the site indicates that the preferred way for a clinician to confirm a diagnosis is to begin treatment and observe whether the patient’s symptoms improve. Current objective symptom tests are expensive, time-consuming, and logistically difficult, so they are not performed outside clinical trials.
Max Little and his research team claim that voice-based tests are as accurate as clinical tests for the objective diagnosis of Parkinson’s Disease and, importantly, can be conducted remotely. Parkinson’s affects the voice as much as it affects limb movement, so the software can assess and diagnose patients from voice recordings alone.
There are four core motivations driving the development of the team’s voice-based test:
- Reduce logistical difficulties in routine practice – no need to visit the clinic for checkups.
- High-frequency monitoring for individualized treatment decisions. With this data, we can optimize drug timing and dosage for maximum effect.
- Cost-effective mass recruitment for treatment trials. Recruiting very large numbers into trials for new treatments will speed up the search for a cure.
- Population-scale screening programs. Searching for early ‘biomarkers’ could find the signs of the disease before the damage done is irreparable.
Eulerian Video Magnification for Revealing Subtle Changes in the World
The second mind-blowing project features work done at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) by Michael Rubinstein and William Freeman. The technology, which they call Eulerian Video Magnification (EVM), can detect patterns of color and movement that reveal an individual’s heart rate, breathing patterns, and how blood circulates through the body.
The software works by amplifying variations in successive frames of video that are imperceptible to the naked eye. More specifically, EVM takes a standard video sequence as input and applies spatial decomposition followed by temporal filtering to the frames.
The resulting signal is then amplified to reveal hidden information, enabling the visualization of flowing blood as it fills the face. One potential application of the technology could be “contactless monitoring” of hospital patients’ vital signs.
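To make the pipeline concrete, here is a minimal sketch of the three steps just described – spatial decomposition, temporal band-pass filtering, and amplification – written in Python with NumPy. It is an illustration of the idea, not the CSAIL implementation: the block-averaging stands in for the Gaussian pyramid used in the real system, and the function names, the pass band (0.8–1.2 Hz, roughly a resting pulse), and the gain `alpha` are all assumptions chosen for this example.

```python
import numpy as np

def spatial_lowpass(frames, k=4):
    # Crude spatial decomposition: average over k x k pixel blocks (a
    # stand-in for one Gaussian-pyramid level), then upsample back to
    # full resolution so the result can be added to the original frames.
    t, h, w = frames.shape
    coarse = frames.reshape(t, h // k, k, w // k, k).mean(axis=(2, 4))
    return np.repeat(np.repeat(coarse, k, axis=1), k, axis=2)

def temporal_bandpass(signal, fps, lo, hi):
    # Temporal filtering: keep only per-pixel intensity variations whose
    # frequency over time lies in [lo, hi] Hz, by masking the FFT.
    freqs = np.fft.rfftfreq(signal.shape[0], d=1.0 / fps)
    spectrum = np.fft.rfft(signal, axis=0)
    spectrum[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spectrum, n=signal.shape[0], axis=0)

def magnify(frames, fps=30.0, lo=0.8, hi=1.2, alpha=50.0):
    # EVM core idea: spatially smooth each frame, band-pass the result
    # over time, amplify that hidden signal, and add it back.
    smooth = spatial_lowpass(frames)
    band = temporal_bandpass(smooth, fps, lo, hi)
    return frames + alpha * band

# A synthetic "pulse": 90 frames at 30 fps whose brightness oscillates
# at 1 Hz with an imperceptible amplitude of 0.001.
frames = 0.5 + 0.001 * np.sin(
    2 * np.pi * np.arange(90) / 30.0
)[:, None, None] * np.ones((90, 8, 8))

magnified = magnify(frames)
```

On this synthetic clip, the 1 Hz brightness oscillation falls inside the pass band, so the output's frame-to-frame variation is tens of times larger than the input's – the same effect that makes the pulse visible on a face in the CSAIL videos.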
An example of using the Eulerian Video Magnification framework to visualize the human pulse. (a) Four frames from the original video sequence. (b) The same four frames with the subject’s pulse signal amplified. (c) A vertical scan line from the input (top) and output (bottom) videos plotted over time shows how the method amplifies the periodic color variation. In the input sequence the signal is imperceptible, but in the magnified sequence the variation is clear.