My research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. I work at the boundary between theory and practice, leveraging mathematical foundations, complex models, and efficient hardware to build practical, high-performance systems. I design optimization methods for platforms ranging from powerful cluster/cloud computing environments to resource-limited integrated circuits and FPGAs. Before joining the faculty at Maryland, I completed my PhD in Mathematics at UCLA, and was a research scientist at Rice University and Stanford University. I have been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, and a Sloan Fellowship.
Here are some of my most recent projects. I believe in reproducible research, and I try to release open-source tools to accompany my work whenever possible. For a full list of software and projects, see my complete research page.
Deep Thinking
October 22, 2021
Human-inspired thinking systems can solve complex logical reasoning problems.
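A minimal sketch of the "thinking longer" idea as I would summarize it: a weight-tied recurrent block can be iterated more times at test time than during training, letting the network spend extra computation on harder inputs. Everything below (the `ThinkingNet` name, layer sizes, iteration counts) is illustrative, not the project's actual architecture.

```python
# Illustrative sketch: a network with one weight-tied residual block whose
# iteration count can be raised at test time to "think" longer.
import torch
import torch.nn as nn

class ThinkingNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Conv2d(3, channels, 3, padding=1)
        # One weight-tied block, applied repeatedly.
        self.think = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.decode = nn.Conv2d(channels, 2, 3, padding=1)  # per-pixel output

    def forward(self, x, iters=10):
        h = torch.relu(self.encode(x))
        for _ in range(iters):           # more iterations = more "thinking"
            h = torch.relu(h + self.think(h))
        return self.decode(h)

net = ThinkingNet()
x = torch.randn(1, 3, 16, 16)
easy = net(x, iters=5)     # train-time compute budget
hard = net(x, iters=50)    # extrapolate by thinking longer at test time
```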
October 20, 2021
We sonify (rather than visualize) what neurons respond to in a speech recognition model.
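A toy sketch of what sonification can look like in code: run gradient ascent on an input waveform to excite a chosen hidden unit, then listen to the optimized audio instead of plotting it. The tiny 1-D conv net below is a stand-in for the real speech recognizer, and the neuron index and optimizer settings are arbitrary.

```python
# Toy feature sonification: optimize a waveform to drive one hidden unit,
# then write the result out as audio. The model is a placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, 64, stride=4), nn.ReLU(),
    nn.Conv1d(16, 32, 32, stride=4), nn.ReLU(),
)

wave = torch.zeros(1, 1, 16000, requires_grad=True)  # 1 second at 16 kHz
opt = torch.optim.Adam([wave], lr=0.01)

for step in range(200):
    opt.zero_grad()
    acts = model(wave)
    loss = -acts[0, 7].mean()   # excite channel 7 as strongly as possible
    loss.backward()
    opt.step()

# The optimized waveform can now be saved and played, e.g.:
# import torchaudio
# torchaudio.save("neuron7.wav", wave.detach().squeeze(0), 16000)
```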
November 2, 2019
We construct clothing that makes the wearer invisible to common object detectors.
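For intuition, here is a bare-bones sketch of how adversarial patches of this kind are typically trained: paste a learnable patch onto training images and minimize the detector's confidence. The `detector_person_score` function is a hypothetical stand-in for a real detector's differentiable "person" confidence, and the paste location and sizes are placeholders.

```python
# Bare-bones adversarial-patch training loop (all names/sizes illustrative).
import torch

def detector_person_score(images):
    # Placeholder: a real version would run an object detector and return
    # the maximum person-class confidence per image.
    return images.mean(dim=(1, 2, 3))

patch = torch.rand(3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(100):
    imgs = torch.rand(8, 3, 256, 256)                 # stand-in photos
    pasted = imgs.clone()
    pasted[:, :, 96:160, 96:160] = patch.clamp(0, 1)  # paste at a fixed spot
    loss = detector_person_score(pasted).mean()       # push confidence down
    opt.zero_grad()
    loss.backward()
    opt.step()
```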
June 1, 2019
The origins of generalization in neural nets have long eluded understanding. We gain an intuitive grasp of generalization through carefully crafted experiments.
May 28, 2019
We show that content control systems are vulnerable to adversarial attacks. Using small perturbations, we can fool important industrial systems like YouTube’s Content ID.
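A rough sketch, under heavy assumptions, of a norm-bounded evasion attack of this flavor: the `fingerprint` network below is an invented differentiable surrogate (not Content ID or any real system), and the attack simply pushes the perturbed clip's fingerprint away from the original's while keeping the perturbation small.

```python
# Sketch of a norm-bounded evasion attack against a surrogate fingerprinter.
import torch
import torch.nn.functional as F

fingerprint = torch.nn.Sequential(   # invented surrogate model
    torch.nn.Conv1d(1, 32, 128, stride=8), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(64), torch.nn.Flatten(),
)

audio = torch.randn(1, 1, 16000)          # placeholder for the source clip
target = fingerprint(audio).detach()
delta = torch.zeros_like(audio, requires_grad=True)
eps = 0.01                                # L-infinity perturbation budget

for step in range(100):
    loss = -F.mse_loss(fingerprint(audio + delta), target)  # maximize distance
    loss.backward()
    with torch.no_grad():
        delta -= 0.002 * delta.grad.sign()   # PGD step
        delta.clamp_(-eps, eps)              # stay within the budget
        delta.grad.zero_()
```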
March 8, 2019
Adversarial training hardens neural nets against attacks, but it costs 10-100X more than regular training. We show how to do adversarial training with no added cost, and train a robust ImageNet model on a desktop computer in just a day.
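A minimal sketch of one way to amortize the cost, in the spirit of this project: reuse each backward pass twice, so the same gradients update both the model weights and the adversarial perturbation while each minibatch is replayed a few times. The model, data, and hyperparameters below are placeholders, not the paper's exact recipe.

```python
# Minibatch-replay sketch: one backward pass per step serves double duty,
# updating the weights and refreshing the perturbation (no extra cost).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps, replays = 0.03, 4

def train_batch(x, y, delta):
    for _ in range(replays):               # replay the same minibatch
        x_adv = (x + delta).requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        loss.backward()                    # one backward pass...
        opt.step()                         # ...updates the weights
        with torch.no_grad():              # ...and refreshes the perturbation
            delta += eps * x_adv.grad.sign()
            delta.clamp_(-eps, eps)
    return delta

delta = torch.zeros(8, 1, 28, 28)          # perturbation carries across batches
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
delta = train_batch(x, y, delta)
```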
September 5, 2018
A pattern has emerged in which the majority of adversarial defenses are quickly broken by new attacks. Given the lack of success at generating robust defenses, we are led to ask a fundamental question: Are adversarial attacks inevitable?
June 1, 2018
Stacked U-Nets are a simple, easy-to-train neural architecture for image segmentation and other image-to-image regression tasks. SUNets attain state-of-the-art performance and fast inference with very few parameters.
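To make the stacking idea concrete, here is an illustrative sketch of several small U-Net blocks chained in sequence, each refining the previous block's features. The block design, widths, and depths are placeholders rather than the SUNet configuration from the paper.

```python
# Illustrative stacked U-Net: small encoder/decoder blocks chained in series.
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)
        self.mix = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        mid = torch.relu(self.down(x))
        up = torch.relu(self.up(mid))
        return torch.relu(self.mix(torch.cat([x, up], dim=1)))  # skip connection

class StackedUNets(nn.Module):
    def __init__(self, ch=32, n_stacks=4, n_classes=21):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 3, padding=1)
        self.stack = nn.Sequential(*[MiniUNet(ch) for _ in range(n_stacks)])
        self.head = nn.Conv2d(ch, n_classes, 1)   # per-pixel class scores

    def forward(self, x):
        return self.head(self.stack(torch.relu(self.stem(x))))

seg = StackedUNets()(torch.randn(1, 3, 64, 64))   # -> (1, 21, 64, 64)
```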