Active Projects

Securing the web's public key infrastructure

The importance of the web's public key infrastructure (PKI) cannot be overstated: it provides users with the ability to verify with whom they are communicating online, and enables encryption of those communications. While the online use of the PKI is mostly automated, there is a surprising amount of human intervention in management tasks that are crucial to its proper operation.

This project investigates the roles played by all of the PKI's principals: website administrators, browsers, certificate authorities, and content delivery networks (CDNs). Only by understanding the humans in the loop can we hope to truly secure this critical infrastructure.

Key findings:

  • Website admins don't revoke or reissue their certificates: After the Heartbleed vulnerability, 93% of compromised servers were patched, but only 13% revoked and 27% reissued compromised certificates. IMC'14
  • No modern web browser fully checks for revocations: No modern mobile browser checks for revocations at all. IMC'15
  • Key sharing is widespread: The majority of all websites give their private keys to third-party hosting providers like CDNs, cloud providers, or web-hosting services. Amazon alone holds the private keys of 60% of the 1000 most popular websites. CCS'16
  • Pushing all revocations to all clients is possible: We have developed a system, CRLite, that drastically reduces the amount of data necessary to represent revocations (less than one byte per revocation). S&P'17
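CRLite's key idea is that, relative to the enumerable universe of known certificates, the revoked set can be encoded exactly by a cascade of Bloom filters: each level stores the previous level's false positives until none remain. Below is a simplified, hypothetical Python sketch of that idea; the real system's encoding, parameters, and update mechanism differ.

```python
import hashlib

class Bloom:
    """A minimal Bloom filter; `salt` keeps hash functions independent per level."""
    def __init__(self, items, salt, bits_per_item=10, num_hashes=3):
        self.salt, self.k = salt, num_hashes
        self.m = max(8, bits_per_item * max(1, len(items)))
        self.bits = bytearray((self.m + 7) // 8)
        for item in items:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{self.salt}:{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def build_cascade(revoked, valid):
    """Level 1 holds the revoked set; each later level holds the previous
    level's false positives, until no false positives remain."""
    filters, include, exclude = [], set(revoked), set(valid)
    while include:
        f = Bloom(include, salt=len(filters))
        filters.append(f)
        include, exclude = {x for x in exclude if x in f}, include
    return filters

def is_revoked(filters, cert):
    for level, f in enumerate(filters, start=1):
        if cert not in f:
            return level % 2 == 0  # a miss at level i means cert is in level i-1's set
    return len(filters) % 2 == 1
```

Because the cascade is exact over the enumerated universe of certificates, a query on any known certificate returns the correct answer locally, with no network traffic; only certificates outside the universe would need a fallback check.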


Project Page   Papers

Provably avoiding nation-state censorship

Traditional Internet communication leaves communicating parties vulnerable to persecution and disruption from online censorship. Although there are many systems in active use today that seek to resist censorship through anonymous, confidential communication (most notably Tor), they are currently rather brittle in the presence of a large censoring regime. In particular, some countries censor not just their own citizens' traffic, but any traffic that happens to pass through their borders.

In this project, we are developing new systems that empower users with greater control over where their packets don't go. Rather than rely on inaccurate maps of the Internet, we use novel measurement techniques and universal constraints like the fact that information cannot travel faster than the speed of light to provide provable guarantees.

Key contributions:

  • Routing around adversaries with Alibi Routing: We have developed Alibi Routing, a system that allows users to specify regions of the world they wish their traffic to avoid, and then finds peer-to-peer relays that provably avoid those regions. SIGCOMM'15
  • Provably avoiding geographic regions in Tor: We have developed DeTor, a system that extends the ideas in Alibi Routing to apply to Tor. DeTor protects against attacks from particularly powerful nation-state adversaries. USENIX'17
  • Measuring Tor latencies with Ting: We have developed a measurement tool called Ting that measures the latency between any two Tor routers, and have applied it both to weaken and to strengthen Tor's anonymity guarantees. IMC'15
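The speed-of-light argument can be made concrete: if the measured round-trip time through a relay is smaller than the fastest physically possible round trip that also dips into the forbidden region, the traffic provably avoided that region. The sketch below is a simplified, hypothetical version of this test (Alibi Routing's actual condition uses an extra safety factor and tighter geometry; all names here are ours).

```python
import math

FIBER_KM_PER_MS = 200.0  # ~2/3 the speed of light; an optimistic (hence safe) bound

def great_circle_km(a, b):
    """Haversine distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def min_rtt_ms(*points):
    """Lower bound on the round-trip time along a sequence of points."""
    one_way = sum(great_circle_km(points[i], points[i + 1])
                  for i in range(len(points) - 1))
    return 2 * one_way / FIBER_KM_PER_MS

def is_alibi(src, dst, relay, forbidden, measured_rtt_ms):
    """True if the measured RTT via `relay` is provably too small for the
    packets to have also visited any point in `forbidden`."""
    fastest_via_forbidden = min(
        min(min_rtt_ms(src, f, relay) + min_rtt_ms(relay, dst),   # detour on src leg
            min_rtt_ms(src, relay) + min_rtt_ms(relay, f, dst))   # detour on dst leg
        for f in forbidden)
    return measured_rtt_ms < fastest_via_forbidden
```

For example, a 90 ms round trip from Maryland to London via a New York relay is well below the minimum this model allows for any path that also touches Beijing, so that relay serves as an alibi for avoiding China.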


Alibi Routing   DeTor   Ting   Papers

Measuring DNS from the root

The Domain Name System (DNS) is responsible for converting human-readable domain names (like www.cs.umd.edu) into Internet-routable IP addresses (like 128.8.127.30). Without it, the Internet would be largely unusable. DNS comprises a hierarchical namespace, and its 13 roots (A-root through M-root) are run by separate entities. These root servers are some of the most critical pieces of network infrastructure.
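Resolution walks this hierarchy top-down: a resolver starts at a root server, which delegates to the edu servers, which delegate to umd.edu, and so on. A tiny illustrative sketch (the function name is ours, and in practice not every suffix is a separate zone):

```python
def zones_consulted(name):
    """Return the delegation points a resolver may query, most general
    first: the root, then each successively longer suffix of the name."""
    labels = name.rstrip(".").split(".")
    return ["."] + [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

# zones_consulted("www.cs.umd.edu")
# → [".", "edu", "umd.edu", "cs.umd.edu", "www.cs.umd.edu"]
```

Every lookup thus begins (conceptually, and often literally, absent caching) at a root server, which is why root traffic is such a revealing vantage point.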

The University of Maryland has always run D-root. Working with UMD's Division of IT, we have been collecting and analyzing data about how root servers are used, misused, and abused.

Key contributions:

  • Changing worldwide constants can reveal bugs: Root server addresses are hardcoded in every DNS resolver. In early 2013, D-root's address changed. We studied resolver behavior before and after the change, and found that a bug in PowerDNS ultimately resulted in a 50% increase in traffic to D-root. Perhaps periodic change-overs should take place as a form of "garbage collection". IMC'13
  • New tools for analyzing big networking data: D-root services tens of thousands of queries every second, and we have been receiving this data as a live stream for years. Making sense of this much data, particularly over historical datasets, has required us to develop new tools for storing it, querying it, and detecting anomalies in it. DINR'16
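One classic building block for this kind of streaming analysis is a one-pass heavy-hitters summary, which can flag resolvers sending anomalously many queries without storing the whole stream. Below is a hypothetical sketch using the standard Misra-Gries algorithm; it illustrates the general technique, not necessarily the tools we built.

```python
def heavy_hitters(stream, k):
    """Misra-Gries: track at most k-1 candidate frequent items in one pass.
    Any item occurring more than len(stream)/k times is guaranteed to
    survive in the returned dict (with an undercounted estimate)."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # no room: decrement every counter, evicting those that hit zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

The summary uses O(k) memory regardless of stream length, which is what makes it viable at tens of thousands of queries per second.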


Project Page   Papers

Finer-grained cloud computing

Today's cloud computing platforms have pricing models that work very well for popular services or computationally intensive tasks like protein folding. However, they are ill-suited to tasks that are long-lived but mostly idle, such as personalized servers.

In this project, we are developing a new approach to cloud computing that only consumes (and charges for) precisely the resources needed to run a process. This results in lower costs to users, more efficient pricing models for cloud providers, and avenues to more secure cloud computing.

Key contributions:

  • Insanely fast process migration: We have developed a cloud computing infrastructure that swaps out users' processes to cold storage and swaps in exactly the memory pages the process needs to service incoming client requests. This allows us to swap processes from cold storage to running in less time than a user would notice. EuroSys'16


Project Page   Papers

Secure programming competitions

Despite the increased attention to security as a first-order design principle, many developers continue to produce insecure software, which remains vulnerable despite billions spent annually on security appliances and other defenses. We believe this situation arises, at least in part, from a lack of evidence and education. For example, code reviews, penetration testing, and static code analysis are all known to improve security by finding vulnerabilities, but the relative costs and benefits of these techniques are largely unknown.

As a remedy to this state of affairs, we have developed and continue to run a secure coding contest called Build-it Break-it Fix-it that seeks to (1) give student contestants a competitive setting to learn about secure software creation and (2) experimentally measure the outcomes of the contest to add to the evidence of what works and what does not.

Key contributions:

  • Build-it Break-it Fix-it, a secure programming competition: We have designed and run BIBIFI, a three-phase secure programming competition. Central to BIBIFI's design is a point system that encourages behavior representative of what happens in the real world. For example, bug-finders have incentive to seek out uncommon, hard-to-find vulnerabilities. CSET'15
  • Empirical understanding of what does (and doesn't) work: We have analyzed several runs of BIBIFI to better understand what factors influence more successful building and breaking. Smaller groups tend to build better code while larger groups tend to be better breakers; memory errors are the primary reason that C leads to more bugs; see the paper for more. CCS'16


Project Page   Papers

Past Projects

Peer-to-peer incentives

The Internet is no longer the collaborative playground it once was; it now comprises millions of corporate entities and billions of users who often have conflicting interests. In addition to acts of malice, there are rampant acts of selfishness, wherein users seek to gain from the network regardless of the impact on others. Unfortunately, the Internet's protocols were not designed to withstand such self-interest.

This project seeks to analyze existing protocols' susceptibility to selfish manipulation, and to develop new systems that perform correctly and efficiently, even if all participants act selfishly. At the core of this work is the practical application of game theory and mechanism design to large-scale, decentralized systems.

Key contributions:

  • BitTorrent is an auction: The popular BitTorrent protocol allows users to download files quickly by cooperating with one another, trading pieces of the file so that all downloaders are, in effect, uploaders as well. It was long believed that BitTorrent employs a "tit-for-tat" strategy, which is known not to be manipulable. However, we showed that it is not tit-for-tat, but is more accurately modeled as an auction, and a bad auction at that. We showed how to manipulate BitTorrent's trading strategies, and proposed a new one, PropShare, that is more resilient and results in faster downloads. SIGCOMM'08
  • iOwe, a digital currency without wasted work: Digital currencies, including Bitcoin, rely on proofs of work: computationally expensive tasks that limit the speed at which anyone can generate new currency. We have developed an alternative, iOwe, that makes creating money trivial by instead making it difficult to spend money. NetEcon'11
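The auction view suggests the fix behind PropShare: instead of BitTorrent's winner-take-most unchoking, divide upload bandwidth among peers in proportion to what each contributed in the previous round. A simplified sketch of that allocation rule (the function name and the even-split bootstrap are our assumptions):

```python
def propshare_allocate(received, upload_capacity):
    """Split upload bandwidth among peers in proportion to the bytes each
    sent us last round (PropShare-style proportional allocation)."""
    total = sum(received.values())
    if total == 0:
        # no contribution history yet: split evenly as a bootstrap
        n = len(received)
        return {peer: upload_capacity / n for peer in received} if n else {}
    return {peer: upload_capacity * r / total for peer, r in received.items()}
```

Proportional allocation blunts the manipulation we found: a peer cannot win a large share of someone's bandwidth with a tiny, strategically-timed bid, because its share scales with its actual contribution.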


Project Page   Papers