Sonar on stock smartwatches leads to hand-tracking breakthrough
By Louis DiPietro
Imagine tapping your thumb and index finger together twice to skip to the next song or clicking around your laptop or desktop computer without a mouse, using discreet finger motions.
New first-of-its-kind wearable technology from researchers at Cornell and KAIST, in South Korea, brings that vision closer to reality. The system, called WatchHand, equips off-the-shelf smartwatches with AI-powered micro sonar capable of tracking hand movements.
The technology has the potential to revolutionize how we interact with our devices by continuously tracking hand poses in real time using smartwatches and their built-in speaker and microphone, the researchers said. It is the first time AI-powered acoustic sensing for hand-pose tracking has been implemented on off-the-shelf smartwatches without the need for additional hardware.
“In the future, with this kind of hand-tracking technology, we might be able to track our typing with just our smartwatch,” said Chi-Jung Lee, a doctoral student in the field of information science in the Cornell Ann S. Bowers College of Computing and Information Science, and the co-lead author of “WatchHand: Enabling Continuous Hand Pose Tracking On Off-the-Shelf Smartwatches,” which will be presented at the Association for Computing Machinery (ACM) CHI conference on Human Factors in Computing Systems beginning April 13 in Barcelona.
“Our hands can act as an input device with computers,” she said.
Along with gesture-based device interaction, WatchHand’s direct application could support assistive technologies for users with limited mobility or speech and be used as a controller in augmented reality and virtual reality environments, researchers said.
The device represents a significant breakthrough for hand-based, human-computer interaction, said co-lead author Jiwan Kim, a doctoral student in the field of electrical engineering at KAIST and a visiting researcher in the SciFi Lab last year.
“WatchHand substantially lowers the barriers to hand-pose tracking,” Kim said. “If any device has a single speaker and microphone, our approach is applicable.”
Existing wearable hand-tracking prototypes require bulky hardware, rendering them impractical for everyday use, the researchers said, but WatchHand uses the existing microphone and speaker within standard smartwatches. Equipped with WatchHand, the smartwatch’s speaker emits inaudible sound waves that bounce off the hand and back into the watch’s microphone, creating an echo profile image. WatchHand’s machine learning algorithm, which runs on the smartwatch, reads this echo profile and estimates the hand pose in 3D and in real time.
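The core idea of active acoustic sensing can be illustrated with a minimal sketch. The paper's actual signal design and model are not given here, so the parameters below (sample rate, chirp band and length) are assumptions for illustration: the speaker plays an inaudible frequency-modulated chirp, and cross-correlating each microphone frame with that chirp yields a row of an "echo profile" image, where the peak's position corresponds to the echo's round-trip delay.

```python
import numpy as np

# Hypothetical signal parameters -- illustrative only, not the paper's design.
FS = 48_000              # sample rate (Hz), typical of smartwatch audio hardware
F0, F1 = 18_000, 21_000  # near-ultrasonic chirp band (Hz), inaudible to most adults
CHIRP_LEN = 600          # samples per transmitted chirp (~12.5 ms)

def make_chirp(f0=F0, f1=F1, n=CHIRP_LEN, fs=FS):
    """Linear frequency-modulated chirp the speaker would emit."""
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)  # sweep rate (Hz/s)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def echo_profile(mic_frames, chirp):
    """Cross-correlate each received frame with the transmitted chirp.

    Each row of the result covers one chirp period; each column is a
    round-trip delay (i.e. a distance from the watch), so the 2D array
    can be read as an image of how echoes evolve over time -- the input
    a pose-estimation model would consume.
    """
    rows = [np.abs(np.correlate(frame, chirp, mode="full"))
            for frame in mic_frames]
    return np.stack(rows)

# Simulate a single echo arriving 100 samples after transmission:
chirp = make_chirp()
frame = np.zeros(2 * CHIRP_LEN)
frame[100:100 + CHIRP_LEN] = 0.5 * chirp      # attenuated, delayed copy
profile = echo_profile([frame], chirp)
delay = int(np.argmax(profile[0])) - (CHIRP_LEN - 1)  # undo 'full'-mode offset
```

In a real system, many such rows per second would be stacked and fed to the on-device model; the sharp autocorrelation of a chirp is what makes the delay peak easy to localize.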
All hand-pose data and processing would take place locally on the watch, meaning that personal data wouldn’t be shared, the researchers said.
WatchHand was tested with 40 participants across four studies, totaling around 36 hours of gesture data. It was evaluated across different smartwatch models, on right and left hands, and in noisy conditions, and was found to reliably track finger movements and wrist rotations.
Its performance isn’t perfect, researchers noted. For starters, it works only on Android smartwatches, not Apple devices. And while it performed well in noisy spaces, WatchHand had trouble registering hand poses when the user was walking, for instance.
“WatchHand reflects my lab’s broader vision of transforming everyday wearables into intelligent behavior-sensing platforms,” said Cheng Zhang, associate professor of information science in Cornell Bowers and director of Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab, which leverages machine learning and AI to develop technologies that sense – and make sense of – the data we produce through our movements.
Previously, scholars in the SciFi Lab have designed, developed, and tweaked sensors that track the wrist, hand, tongue, face, body and even teeth. They’ve put their technology on rings, glasses, necklaces and headphones, and they’ve sewn it into the seams of clothing. In recent years, the lab shifted to using acoustic sensing on wearables because of its accuracy and low energy use.
“With just a software update, we can potentially unlock entirely new capabilities on millions of existing devices,” said Zhang.
Along with Zhang, Lee, and Kim, the paper’s co-authors are Ian Oakley and Hohurn Jung of KAIST and doctoral students Tianhong Catherine Yu and Ruidong Zhang.
This research received support from the National Science Foundation and South Korea’s Ministry of Science and ICT.
Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.