Project

Stabilizing GANs with Prediction

Why are GANs hard to train?

Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates.
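
Roughly speaking, the prediction idea applies to a generic saddle-point problem $\min_u \max_v L(u, v)$: after one player takes its gradient step, its next position is predicted by simple extrapolation, and the opposing player is then updated against that prediction rather than the current iterate. A schematic iteration is sketched below; which player is extrapolated and how the stepsizes $\alpha, \beta$ are chosen follow the paper, so treat this only as an illustration:

$$
\begin{aligned}
u_{k+1} &= u_k - \alpha\, \nabla_u L(u_k, v_k), \\
\bar{u}_{k+1} &= u_{k+1} + \left(u_{k+1} - u_k\right), \\
v_{k+1} &= v_k + \beta\, \nabla_v L(\bar{u}_{k+1}, v_k).
\end{aligned}
$$

Giving the maximizing player a preview of where its opponent is headed damps the oscillations that plain alternating gradient updates tend to exhibit around saddle points.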

Continue reading

PhasePack

A phase retrieval library



Phase retrieval is the recovery of a signal from only the magnitudes, and not the phases, of complex-valued linear measurements. Phase retrieval problems arise in many different applications, particularly in crystallography and microscopy. Mathematically, phase retrieval recovers a complex-valued signal $x \in \mathbb{C}^n$ from $m$ measurements of the form $b_i = |\langle a_i, x \rangle|$, $i = 1, 2, \ldots, m$.
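
As a minimal illustration of this measurement model (plain NumPy, not PhasePack's own interface; the dimensions and Gaussian measurement vectors are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64, 256                                    # signal length and number of measurements
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # unknown complex signal
    A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))  # rows are the measurement vectors a_i
    b = np.abs(A @ x)                                 # phaseless measurements b_i = |<a_i, x>|

A phase retrieval solver is handed only A and b, and must reconstruct x up to an unavoidable global phase ambiguity.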

Continue reading

Training Quantized Nets: A Deeper Understanding

Why quantized nets?

Deep neural networks are an integral part of state-of-the-art computer vision and natural language processing systems. Because of their high memory requirements and computational complexity, networks are usually trained using powerful hardware. There is increasing interest in training and deploying neural networks directly on battery-powered devices, such as cell phones and other embedded platforms. Such low-power embedded systems are memory and power limited, and in some cases lack basic support for floating-point arithmetic.
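
One standard recipe studied in this line of work keeps a full-precision copy of the weights, quantizes it for the forward and backward passes, and applies the resulting gradient back to the full-precision copy. The sketch below (plain NumPy, with a BinaryConnect-style binary quantizer chosen purely for illustration, not the paper's code) shows the idea:

    import numpy as np

    def quantize(w):
        """Binary quantizer: map each weight to -1 or +1."""
        return np.where(w >= 0, 1.0, -1.0)

    def train_step(w_fp, grad_fn, lr=0.01):
        """One quantized training step.

        w_fp    : full-precision weights carried across iterations
        grad_fn : returns the loss gradient evaluated at the quantized weights
        """
        w_q = quantize(w_fp)                        # low-precision weights used for forward/backward
        g = grad_fn(w_q)                            # gradient computed at the quantized point
        return np.clip(w_fp - lr * g, -1.0, 1.0)    # update (and clip) the full-precision copy

Keeping the high-precision copy lets many small gradient updates accumulate, which is exactly what becomes difficult when the hardware itself cannot store floating-point weights.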

Continue reading

PhaseMax

Convex relaxation without lifting




Phase retrieval deals with the recovery of an $n$-dimensional signal $x_0 \in \mathbb{C}^n$ from $m \ge n$ magnitude measurements of the form $b_i = |\langle a_i, x_0 \rangle|$, $i = 1, 2, \ldots, m$, where $a_i \in \mathbb{C}^n$ are the measurement vectors. While the recovery of $x_0$ from these non-linear measurements is non-convex, it can be convexified via “lifting” methods that convert the phase retrieval problem to a semidefinite program (SDP). But convexity comes at a high cost: lifting methods square the dimensionality of the problem.
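
PhaseMax takes a different route: it relaxes the problem into a convex program in the original $n$-dimensional space. Given an approximation $\hat{x}$ of the true signal (an “anchor” vector, obtained for example from a spectral initializer), the relaxation has roughly the following form (see the paper for the precise statement and recovery guarantees):

$$
\begin{aligned}
\underset{x \in \mathbb{C}^n}{\text{maximize}} \quad & \operatorname{Re}\langle \hat{x}, x \rangle \\
\text{subject to} \quad & |\langle a_i, x \rangle| \le b_i, \qquad i = 1, 2, \ldots, m.
\end{aligned}
$$

Each constraint confines the scalar $\langle a_i, x \rangle$ to a disk in the complex plane, so the feasible set is convex, and the linear objective pushes the solution toward the boundary where the magnitude measurements are (nearly) attained with equality.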

Continue reading

Distributed Machine Learning


Classical machine learning methods, including stochastic gradient descent (used together with backpropagation), work great on one machine, but don’t scale well to the cloud or cluster setting. We propose a variety of algorithmic frameworks for scaling machine learning across many workers. Many of our distributed ML experiments are done using USNA’s Grace Supercomputer, which is currently hosted at the University of Maryland.

Continue reading

FASTA

Fast adaptive shrinkage/thresholding algorithm

FASTA (Fast Adaptive Shrinkage/Thresholding Algorithm) is an efficient, easy-to-use implementation of the Forward-Backward Splitting (FBS) method (also known as the proximal gradient method) for regularized optimization problems. Many variations on FBS are available in FASTA, including the popular accelerated variant FISTA (Beck and Teboulle ’09), the adaptive stepsize rule SpaRSA (Wright, Nowak, Figueiredo ’09), and other variants described in the review A Field Guide to Forward-Backward Splitting with a FASTA Implementation. Whether the problem you are solving is simple or complex, FASTA makes things easy by handling issues like stepsize selection, acceleration, and stopping conditions for you.
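
To make the iteration concrete, here is a minimal forward-backward splitting loop for the lasso problem $\min_x \tfrac{1}{2}\|Ax - b\|^2 + \mu\|x\|_1$, written in plain NumPy. It is only a sketch of the basic FBS step (fixed stepsize, no acceleration, adaptivity, or stopping tests), not the FASTA package itself:

    import numpy as np

    def fbs_lasso(A, b, mu, iters=500):
        """Forward-backward splitting for 0.5*||Ax - b||^2 + mu*||x||_1."""
        tau = 1.0 / np.linalg.norm(A, 2) ** 2        # stepsize 1/L, with L = ||A||^2 the gradient's Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - tau * (A.T @ (A @ x - b))        # forward (gradient) step on the smooth term
            x = np.sign(z) * np.maximum(np.abs(z) - tau * mu, 0.0)   # backward (proximal) step: soft thresholding
        return x

FASTA layers adaptive stepsize selection (e.g., SpaRSA), FISTA-style acceleration, and automated stopping conditions on top of this basic loop, which is why it is usually much faster in practice.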

Continue reading

Primal-dual hybrid gradient method

The Primal-Dual Hybrid Gradient (PDHG) method, also known as the Chambolle-Pock method, is a powerful splitting method that can solve a wide range of constrained and non-differentiable optimization problems. Unlike the popular ADMM method, the PDHG approach usually does not require expensive minimization sub-steps. We provide adaptive stepsize selection rules that automate the solver, and increase its speed and robustness. The test problems and adaptive stepsize strategies presented here were proposed in our papers Adaptive Primal-Dual Hybrid Gradient Methods for Saddle-Point Problems and Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing.
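
For reference, one common form of the PDHG iteration for the saddle-point problem $\min_x \max_y\; \langle Ax, y \rangle + f(x) - g^*(y)$, with stepsizes $\tau, \sigma > 0$ satisfying $\tau\sigma\|A\|^2 < 1$, is sketched below; the adaptive rules in our papers adjust $\tau$ and $\sigma$ as the iteration proceeds:

$$
\begin{aligned}
y^{k+1} &= \mathrm{prox}_{\sigma g^*}\!\left(y^k + \sigma A \bar{x}^k\right), \\
x^{k+1} &= \mathrm{prox}_{\tau f}\!\left(x^k - \tau A^{\top} y^{k+1}\right), \\
\bar{x}^{k+1} &= x^{k+1} + \left(x^{k+1} - x^k\right).
\end{aligned}
$$

Each update requires only a proximal operator and a matrix multiply, which is why PDHG avoids the expensive inner minimization sub-steps that ADMM often needs.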

Continue reading

The STOne transform: flexible compressed sensing

Overview

Compressive sensing enables the reconstruction of high-resolution signals from under-sampled data. While compressive methods simplify data acquisition, they require the solution of difficult recovery problems to make use of the resulting measurements. The STOne transform is a new sensing framework that combines the advantages of both conventional and compressive sensing. Using the proposed STOne transform, measurements can be reconstructed instantly at Nyquist rates at any power-of-two resolution. The same data can then be “enhanced” to higher resolutions using compressive methods that leverage sparsity to “beat” the Nyquist limit. The availability of a fast direct reconstruction enables compressive measurements to be processed on small embedded devices. We demonstrate this by constructing a real-time compressive video camera.

Continue reading

PIT

The Perfusion Imaging Toolkit (PIT) is a comprehensive set of tools for MR-based perfusion imaging. In addition to several different perfusion calculation tools, the software smoothly integrates file formatting, image denoising, registration, segmentation, mean curve extraction, and many other image processing tasks. PIT can generate perfusion measurements from regions of interest, as well as pixel-by-pixel perfusion maps.

Continue reading

Split Bregman

The Split Bregman method is a technique for solving a variety of L1-regularized optimization problems, and is particularly effective for problems involving total-variation regularization. Split Bregman is one of the fastest solvers for Total-Variation denoising, image reconstruction from Fourier coefficients, convex image segmentation, and many other problems.
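
As a rough sketch of the mechanism, consider a generic $\ell_1$-regularized problem $\min_u \|\Phi(u)\|_1 + H(u)$ with $H$ convex. Split Bregman introduces an auxiliary variable $d \approx \Phi(u)$ and a Bregman variable $b$, and alternates between a penalized joint minimization and a simple Bregman update:

$$
\begin{aligned}
(u^{k+1}, d^{k+1}) &= \operatorname*{arg\,min}_{u,\,d}\; \|d\|_1 + H(u) + \frac{\lambda}{2}\,\bigl\|d - \Phi(u) - b^k\bigr\|_2^2, \\
b^{k+1} &= b^k + \Phi(u^{k+1}) - d^{k+1}.
\end{aligned}
$$

The joint minimization is itself handled by alternating between $u$ and $d$: the $d$ update is a closed-form shrinkage (soft-thresholding) step, and for total-variation problems, where $\Phi(u) = \nabla u$, the $u$ update is a well-conditioned linear system that can be solved with a few Gauss-Seidel sweeps or an FFT.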

Continue reading