Non-Convex

Visualizing the Loss Landscape of Neural Nets

Neural network training relies on our ability to find “good” minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that are easier to train, and that well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood.
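One common way to probe a loss landscape is to plot the loss along a slice through parameter space. Below is a minimal sketch of a 1-D slice around a parameter vector, using a toy quadratic loss as a stand-in for a real network loss (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def loss_along_direction(loss_fn, theta, direction, alphas):
    """Evaluate loss_fn at theta + alpha * direction for each alpha.

    A 1-D slice like this is the simplest way to probe the loss
    landscape around a trained parameter vector theta.
    """
    return np.array([loss_fn(theta + a * direction) for a in alphas])

# Toy stand-in for a network loss; a real study would evaluate the
# training loss of an actual network at these perturbed parameters.
def toy_loss(theta):
    return float(np.sum(theta ** 2))

rng = np.random.default_rng(0)
theta = rng.normal(size=10)

# Normalize the random direction so slice scales are comparable
# across parameter vectors (proper per-filter normalization, as in
# the paper, refines this further).
d = rng.normal(size=10)
d /= np.linalg.norm(d)

alphas = np.linspace(-1.0, 1.0, 5)
curve = loss_along_direction(toy_loss, theta, d, alphas)
```

A 2-D contour plot of the kind shown in the paper works the same way, with two random directions instead of one.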

Continue reading

Stabilizing GANs with Prediction

Why are GANs hard to train?

Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen, it is often highly sensitive to learning rates.
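A toy illustration (not GAN code) of why gradient methods struggle at saddle points: run simultaneous gradient descent/ascent on the bilinear game \(f(x,y)=xy\), whose only stationary point \((0,0)\) is a saddle. Each step multiplies the distance from the saddle by \(\sqrt{1+\mathrm{lr}^2}\), so the iterates spiral outward instead of converging, a miniature version of the instability described above:

```python
def descent_ascent(x, y, lr, n):
    """Simultaneous gradient steps on f(x, y) = x * y."""
    for _ in range(n):
        gx, gy = y, x                    # grad_x f = y, grad_y f = x
        x, y = x - lr * gx, y + lr * gy  # x minimizes f, y maximizes f
    return x, y

# Start near the saddle; the iterates drift away from (0, 0).
x, y = descent_ascent(1.0, 1.0, lr=0.1, n=200)
```

Fully alternating updates (using the fresh \(x\) in the \(y\) step) instead orbit the saddle without converging; stabilization schemes such as the prediction step discussed in the post aim to pull these dynamics back toward the saddle.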

Continue reading

PhasePack



A phase retrieval library



Phase retrieval is the recovery of a signal from only the magnitudes, and not the phases, of complex-valued linear measurements. Phase retrieval problems arise in many different applications, particularly in crystallography and microscopy. Mathematically, phase retrieval recovers a complex-valued signal \(x\in \mathbb{C}^n\) from \(m\) measurements of the form

Continue reading

Training Quantized Nets: A Deeper Understanding

Why quantized nets?

Deep neural networks are an integral part of state-of-the-art computer vision and natural language processing systems. Because of their high memory requirements and computational complexity, networks are usually trained using powerful hardware. There is an increasing interest in training and deploying neural networks directly on battery-powered devices, such as cell phones and other embedded platforms. Such low-power embedded systems are memory- and power-limited, and in some cases lack basic support for floating-point arithmetic.
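As a minimal sketch (not the paper's analysis verbatim) of what training a quantized net involves, the BinaryConnect-style scheme commonly studied in this setting keeps a full-precision "shadow" copy of the weights, quantizes it for each gradient evaluation, and accumulates updates in full precision; the toy loss and names below are illustrative:

```python
import numpy as np

def quantize(w):
    # Binary quantization to {-1, +1} (sign(0) = 0 at initialization).
    return np.sign(w)

def sgd_step(w_shadow, grad_fn, lr):
    w_q = quantize(w_shadow)   # low-precision weights used by the model
    g = grad_fn(w_q)           # gradient evaluated at the quantized weights
    return w_shadow - lr * g   # update accumulates in full precision

# Toy example: gradient of the hypothetical loss ||w - t||^2.
t = np.array([0.7, -0.3, 1.2])
grad = lambda w: 2.0 * (w - t)

w = np.zeros(3)
for _ in range(100):
    w = sgd_step(w, grad, lr=0.05)
```

Keeping the high-precision shadow copy matters: rounding the weights after every update instead can stall training, since small gradient steps are erased by the quantizer.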

Continue reading

PhaseMax



convex relaxation without lifting




Phase retrieval deals with the recovery of an \(n\)-dimensional signal \(x^0\in\mathbb{C}^n\) from \(m\geq n\) magnitude measurements of the form \begin{align} \label{eq:original} b_i = |\langle a_i, x^0\rangle|, \quad i = 1,2,\ldots,m, \end{align} where \(a_i\in\mathbb{C}^n\), \(i=1,2,\ldots,m\), are measurement vectors. While the problem of recovering \(x^0\) from these non-linear measurements is non-convex, it can be convexified via “lifting” methods that convert the phase retrieval problem to a semidefinite program (SDP). But convexity comes at a high cost: lifting methods square the dimensionality of the problem.
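A minimal numerical sketch of this measurement model, with complex Gaussian measurement vectors (the sizes and random seed are illustrative): stacking the \(a_i\) as rows of a matrix \(A\), the observations are \(b = |Ax^0|\) taken entrywise, and all phase information is discarded.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 32   # m >= n magnitude measurements

# Complex Gaussian signal and measurement vectors (rows of A play the
# role of the a_i, up to conjugation, which does not affect magnitudes).
x0 = rng.normal(size=n) + 1j * rng.normal(size=n)
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))

b = np.abs(A @ x0)   # phases of <a_i, x0> are discarded

# The global phase of x0 is unidentifiable: e^{i*phi} * x0 produces
# exactly the same measurements, so recovery is only up to phase.
phi = 0.7
b_rotated = np.abs(A @ (np.exp(1j * phi) * x0))
```

The global phase ambiguity shown at the end is why recovery guarantees for phase retrieval are always stated up to a unimodular constant.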

Continue reading