UMD Team Receives Best Demo Award for Mobile Sound Localization System
Researchers at the University of Maryland were recently recognized for their prototype of a miniature spatial acoustic sensing system that can be used for insect-sized robots and on-body sensing devices.
The UMD team of Nakul Garg, a second-year computer science doctoral student; Nirupam Roy, an assistant professor of computer science; and Yang Bai, a first-year doctoral student, received the Best Demo Award at the 19th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2021).
The annual conference, hosted by the Association for Computing Machinery, is considered a leading forum for presenting innovative research on the design, implementation, usage and evaluation of mobile computing and wireless systems.
The UMD team was honored for their efforts in creating Owlet, a miniaturized, low-power acoustic spatial sensing system that is nine times smaller and requires 100 times less power to operate than traditional systems. The work is described in detail in “Owlet: Enabling Spatial Information in Ubiquitous Acoustic Devices.” Garg and Bai were the paper’s lead authors.
The researchers state that smaller and more efficient sound localization systems are opening new dimensions in ubiquitous computing within areas like health monitoring, robot navigation and ambient sensing.
Traditional localization techniques require the devices in these applications to carry multiple microphones. But such elaborate microphone arrays no longer fit as the devices evolve to become smaller and more efficient.
Aiming to overcome these fundamental limitations while achieving the same accuracy, the UMD researchers used advanced computational techniques to engineer Owlet.
First, they developed a 3D-printed metamaterial cover that surrounds a microphone. This cylindrical structure is carefully designed with particular hole patterns that imprint a direction-dependent signature on incoming sound. The researchers then applied a deep learning-based method to extract the signature of an incoming sound and compare it against a pre-collected database of signatures.
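As a rough, hypothetical illustration of that matching step, the sketch below computes a normalized spectral signature from a short clip and compares it against a small database of pre-collected, per-direction signatures. The sampling rate, FFT size, and the simple nearest-neighbor matcher are illustrative assumptions; the team’s actual system uses a trained deep learning model in place of the matcher.

```python
# Hypothetical sketch of signature-based direction finding.
# Names, constants, and the nearest-neighbor match are illustrative
# assumptions, not the Owlet team's actual implementation.
import numpy as np

FS = 44_100     # assumed sampling rate (Hz)
N_FFT = 1024    # FFT size used to compute the spectral signature


def spectral_signature(samples: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of a short clip, normalized so overall
    loudness does not affect the directional signature."""
    spectrum = np.abs(np.fft.rfft(samples, n=N_FFT))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)


def estimate_direction(clip: np.ndarray,
                       database: dict[float, np.ndarray]) -> float:
    """Return the direction (in degrees) whose pre-collected signature
    best matches the incoming clip's signature."""
    sig = spectral_signature(clip)
    # Higher dot product means a closer match, since every signature
    # in the database is unit-norm.
    return max(database, key=lambda angle: float(database[angle] @ sig))

# Hypothetical usage: `database` maps angles to signatures recorded once
# during calibration, e.g. {0.0: sig_0, 10.0: sig_10, ...}.
```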
The researchers added a second microphone as a reference channel and developed techniques that cancel out environmental variation, making the design robust to noise and multipath in arbitrary locations. The system requires only a one-time calibration during manufacturing and can then perform localization wherever it is deployed.
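To give a sense of how a reference channel could strip away source and environment dependence, here is a hypothetical sketch: dividing the covered microphone’s spectrum by the reference microphone’s spectrum cancels the unknown source spectrum, leaving mostly the direction-dependent coloring added by the metamaterial cover. The function name and parameters are assumptions for illustration, not Owlet’s exact formulation.

```python
# Hypothetical reference-channel normalization: the per-frequency ratio of
# the covered mic to the uncovered reference mic divides out the source
# spectrum and shared environmental coloring. An assumption about the
# general approach, not the team's exact math.
import numpy as np


def relative_signature(covered: np.ndarray,
                       reference: np.ndarray,
                       n_fft: int = 1024,
                       eps: float = 1e-6) -> np.ndarray:
    """Unit-norm ratio of the covered mic's spectrum to the reference
    mic's spectrum, computed over the same time window."""
    cov = np.abs(np.fft.rfft(covered, n=n_fft))
    ref = np.abs(np.fft.rfft(reference, n=n_fft))
    ratio = cov / (ref + eps)                        # source spectrum divides out
    return ratio / (np.linalg.norm(ratio) + eps)     # loudness-independent signature
```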
Owlet can pinpoint sound as accurately as a traditional nine-microphone array, but it does so by analyzing the acoustic signatures shaped by its porous, intricate structure.
“Owlet combines the principles of communication, acoustic metamaterial, microstructural design, and statistical data-driven methods to enable precise spatial sensing,” explains Roy, who also holds an appointment in the University of Maryland Institute for Advanced Computer Studies.
Roy describes Owlet as practical, because it overcomes the challenges of environmental and source-signal dependencies, and as innovative, because it opens up new possibilities for low-power sensing through passive microstructures.
Go here to watch the team’s demonstration of Owlet.
—Story by Maria Herd
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.