In the healthcare world, the ability to see through walls could help monitor diseases and allow elderly people to live longer in their own homes. Using a neural network to analyse radio signals bouncing off people's bodies, researchers have managed to create moving stick figures that mimic a person's movements in real time.
MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed the AI tool, RF-Pose, in a move that could help monitor diseases like Parkinson's, multiple sclerosis (MS) and muscular dystrophy. RF-Pose would allow doctors to better understand disease progression and adjust how patients are medicated.
CSAIL researchers also think RF-Pose could monitor falls, injuries and changes in activity patterns to help elderly people live independently.
“We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives health care providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Professor Dina Katabi, who led the research team and co-wrote a new paper about the project. Patients wouldn’t have to wear sensors or charge devices, offering “a key advantage” over other approaches.
The paper's authors also believe it could be used in video games or in search-and-rescue operations to locate survivors.
Researchers say they had to overcome the fact that most neural networks are trained using data labelled by hand. A neural network trained to identify cats, for example, requires people to look at a dataset of images and label each one as "cat" or "not cat". Radio signals, by contrast, can't easily be labelled by humans. Instead, the researchers taught RF-Pose to associate radio signals with stick figures extracted from camera images of people doing a range of activities.
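To make that training idea concrete, here is a minimal PyTorch sketch of this kind of cross-modal supervision: a frozen, camera-based pose model supplies labels for synchronized radio data, and a radio-only network learns to match them. The network names, layer shapes, keypoint count and mean-squared-error loss below are illustrative assumptions, not details taken from the RF-Pose paper, and the stand-in teacher would in practice be a real pretrained image-based pose estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RFStudent(nn.Module):
    """Hypothetical student network: RF heatmaps in, per-joint
    confidence maps out. Architecture is purely illustrative."""
    def __init__(self, rf_channels=2, num_keypoints=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(rf_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_keypoints, 1),  # one confidence map per joint
        )

    def forward(self, rf):
        return self.net(rf)

def train_step(student, teacher, rf_frames, camera_frames, optimizer):
    """One cross-modal training step. The teacher (a pretrained
    image-based pose estimator) is never updated; it only supplies
    pseudo-labels for the synchronized camera frames."""
    with torch.no_grad():
        target = teacher(camera_frames)   # stick-figure labels from images
    pred = student(rf_frames)             # prediction from radio signal only
    loss = F.mse_loss(pred, target)       # match the teacher's confidence maps
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Stand-in teacher with random weights, purely so the sketch runs;
    # in practice this would be a real pretrained pose estimator.
    teacher = nn.Conv2d(3, 14, 1)
    student = RFStudent()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    rf = torch.randn(4, 2, 64, 64)        # fake RF heatmaps (batch of 4)
    cam = torch.randn(4, 3, 64, 64)       # synchronized camera frames
    print(train_step(student, teacher, rf, cam, opt))
```

Once trained this way, the student needs only the radio input, which is why the camera can be dropped entirely at deployment time.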
"Post training, RF-Pose was able to estimate a person's posture and movements without cameras, using only the wireless reflections that bounce off people's bodies," researchers wrote in an article on the MIT website.
The network was never explicitly trained on data from the other side of a wall, yet because radio signals pass through walls where light cannot, it generalised its knowledge to handle through-wall movement. Surprised by this ability, MIT Professor Antonio Torralba said: "If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher."
It's not just movement that can be detected. The authors showed that RF-Pose could also identify people. Using wireless signals alone, the system achieved 83% accuracy when identifying a person out of a line-up of 100 individuals. It is this ability that suggests the system could be used in search-and-rescue operations.
Mingmin Zhao, PhD student and lead author of the paper, outlined his hopes for the technology. "By using this combination of visual data and AI to see through walls, we can enable better scene understanding and smarter environments to live safer, more productive lives," he said.
Researchers are also working to create 3D representations that would show actions more accurately. In the case of patient monitoring, you might even be able to see if a person’s hands were shaking regularly.
Katabi co-wrote the new paper with PhD student and lead author Mingmin Zhao, MIT Professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li, and PhD students Yonglong Tian and Hang Zhao. They will present it later this month at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.
The research team anonymised and encrypted all the data it collected to protect users' privacy, and said it plans to develop a mechanism that would ensure consent.
Image credits: MIT CSAIL