Faculty Receive Meta Research Awards to Advance AI
Two faculty members in the Department of Computer Science have each received an award from Meta Research, the science and innovation arm of the social media giant with more than 2.85 billion users worldwide.
Jordan Boyd-Graber, an associate professor of computer science, and Nirupam Roy, an assistant professor of computer science, will each use their Meta funding to further projects that are based in artificial intelligence (AI).
Boyd-Graber’s project, “A Leaderboard and Competition for Human-Computer Adversarial Q&A,” seeks to create challenging human-in-the-loop (HITL) examples for question-answering tasks assigned to computers. HITL is a branch of AI that leverages both human and machine intelligence to create better machine-learning models or predictions.
The goal of the project is to create better “adversarial” examples: questions that humans can easily answer but a computer cannot. To help authors craft these examples, the team is defining a metric that encourages human participation in a question-writing task and developing tools and visualizations that encourage the authoring of diverse questions. The result is new datasets that are more effective for the human-versus-computer competitions that are gaining in popularity.
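As a rough illustration of the underlying idea (a generic sketch, not the project's actual tooling), a question can be treated as adversarial when human players answer it correctly but an off-the-shelf question-answering model does not. In the Python sketch below, the model name, the exact-match rule and the function name are placeholders chosen for illustration.

    # Illustrative sketch only -- not the project's pipeline.
    # A question counts as "adversarial" if humans got it right but a QA model did not.
    from transformers import pipeline  # assumes the Hugging Face transformers package

    # Placeholder model choice; any extractive QA model would do for this sketch.
    qa_model = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

    def is_adversarial(question, context, gold_answer, human_correct):
        """Return True when humans answered correctly but the model's prediction is wrong."""
        prediction = qa_model(question=question, context=context)["answer"]
        model_correct = prediction.strip().lower() == gold_answer.strip().lower()
        return human_correct and not model_correct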
Ultimately, Boyd-Graber says, these types of competitions can help people gain more trust in AI systems and interact with them more comfortably in the real world. The questions will be used in a live human-versus-computer trivia event next spring.
Yoo Yeon Sung, a third-year information science doctoral student, is collaborating with Boyd-Graber on the project, which received $52K in funding from Meta’s Dynabench Data Collection and Benchmarking Platform program.
Roy’s project, “Physical Context-Aware Voice Assistant for Smart Homes,” is focused on providing better security protocols for voice-enabled devices such as smart TVs and thermostats.
Voice activation continues to be a growing trend, Roy says, with the latest smart TVs and even washing machines, ovens and refrigerators now shipping with voice-control features.
While these voice-activated devices make life easier and more convenient, they also present new security and privacy challenges.
To address these challenges, it is critical for voice-enabled devices to sense their physical surroundings and better understand the context in which they should interact.
To accomplish this, Roy and his research team aim to develop a device-free, non-obtrusive acoustic sensing technique to not only detect humans in the surrounding environment, but also infer the direction of their voice and thereby associate addressability with voice commands.
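To give a sense of what such acoustic sensing can involve (a generic sketch, not the team's technique), one standard building block is estimating the direction a voice arrives from by cross-correlating the signals captured by two microphones. The microphone spacing, sample rate and speed-of-sound constant below are assumptions made for illustration.

    # Illustrative sketch only -- not the research team's method.
    # Estimates the angle of arrival of a sound from the time delay between two microphones.
    import numpy as np

    SPEED_OF_SOUND = 343.0  # meters per second in room-temperature air

    def direction_of_arrival(mic_left, mic_right, sample_rate, mic_spacing):
        """Return the estimated arrival angle in degrees from two synchronized mic signals."""
        # Cross-correlate the channels to find the sample lag where they align best.
        correlation = np.correlate(mic_left, mic_right, mode="full")
        lag = np.argmax(correlation) - (len(mic_right) - 1)
        time_delay = lag / sample_rate
        # Clip to the physically possible range before taking the arcsine.
        ratio = np.clip(time_delay * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
        return np.degrees(np.arcsin(ratio))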
Collaborating with Roy on the project—which received $75K in funding from Meta’s Toward Trustworthy Products in AR, VR, and Smart Devices and Security Research program—are Anupam Das, co-principal investigator and an assistant professor of computer science at North Carolina State University; Harshvardhan Takawale, a first-year doctoral student in computer science; and Yang Bai, a second-year doctoral student in computer science.
—Story by Melissa Brachfeld, UMIACS communications group
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.