Available Technical Reports
Arxiv manuscript
Roozbeh Yousefzadeh and Dianne P. O'Leary,
Auditing and Debugging Deep Learning Models via Decision Boundaries: Individual-level and Group-level Analysis,
January 2020.
Deep learning models have been criticized for their lack of easy interpretation, which undermines confidence in their use for important applications. Nevertheless, they are consistently utilized in many applications consequential to human lives, mostly because of their better performance. Therefore, there is a great need for computational methods that can explain, audit, and debug such models. Here, we use flip points to accomplish these goals for deep learning models with continuous output scores (e.g., computed by softmax), used in social applications. A flip point is any point that lies on the boundary between two output classes: e.g., for a model with a binary yes/no output, a flip point is any input that generates equal scores for "yes" and "no". The flip point closest to a given input is of particular importance because it reveals the smallest changes in the input that would change the model's classification, and we show that it is the solution to a well-posed optimization problem. Flip points also enable us to systematically study the decision boundaries of a deep learning classifier. The resulting insight into the decision boundaries of a deep model can clearly explain the model's output at the individual level, via an explanation report that is understandable by non-experts. We also develop a procedure to understand and audit model behavior towards groups of people. Flip points can also be used to alter the decision boundaries in order to improve undesirable behaviors. We demonstrate our methods by investigating several models trained on standard datasets used in social applications of machine learning. We also identify the features that are most responsible for particular classifications and misclassifications.
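As an illustration of the closest-flip-point computation described above, the sketch below (a toy rendering in Python, not the authors' code; the model function is a hypothetical stand-in for a trained network) poses it as a constrained minimization: minimize the distance to the given input subject to the two class scores being equal.

    import numpy as np
    from scipy.optimize import minimize

    def closest_flip_point(model, x0):
        """Find the point nearest x0 at which the two class scores are equal.

        `model` is assumed to map an input vector to a pair of softmax
        scores (s_yes, s_no); it is a placeholder for a trained network.
        """
        objective = lambda x: np.sum((x - x0) ** 2)            # squared distance to x0
        boundary = {"type": "eq",                              # s_yes(x) - s_no(x) = 0
                    "fun": lambda x: model(x)[0] - model(x)[1]}
        result = minimize(objective, x0, constraints=[boundary], method="SLSQP")
        return result.x

    # Toy "model" with a linear decision boundary x[0] + x[1] = 1.
    def model(x):
        z = x[0] + x[1] - 1.0
        return np.array([1 / (1 + np.exp(-z)), 1 / (1 + np.exp(z))])

    print(closest_flip_point(model, np.array([2.0, 2.0])))     # approx. [0.5, 0.5]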
Arxiv manuscript
Roozbeh Yousefzadeh and Dianne P. O'Leary,
A Probabilistic Framework and a Homotopy Method for Real-Time Hierarchical Freight Dispatch Decisions,
December 2019.
We propose a real-time decision framework for multimodal freight dispatch through a system of hierarchical hubs, using a probabilistic model for transit times. Instead of assigning a fixed time to each transit, we advocate using historical records to identify characteristics of the probability density function for each transit time. We formulate a nonlinear optimization problem that defines dispatch decisions that minimize expected cost, using this probabilistic information. Finally, we propose an effective homotopy algorithm that (empirically) outperforms standard optimization algorithms on this problem by taking advantage of its structure, and we demonstrate its effectiveness on numerical examples.
Arxiv manuscript
Roozbeh Yousefzadeh and Dianne P. O'Leary,
Investigating Decision Boundaries of Trained Neural Networks,
August 2019.
Deep learning models have been the subject of study from various perspectives, for example, their training process, interpretation, generalization error, robustness to adversarial attacks, etc. A trained model is defined by its decision boundaries, and therefore, many of the studies about deep learning models speculate about the decision boundaries, and sometimes make simplifying assumptions about them. So far, finding exact points on the decision boundaries of trained deep models has been considered an intractable problem. Here, we compute exact points on the decision boundaries of these models and provide mathematical tools to investigate the surfaces that define the decision boundaries. Through numerical results, we confirm that some of the speculations about the decision boundaries are accurate, some of the computational methods can be improved, and some of the simplifying assumptions may be unreliable, for models with nonlinear activation functions. We advocate for verification of simplifying assumptions and approximation methods, wherever they are used. Finally, we demonstrate that the computational practices used for finding adversarial examples can be improved, and that computing the closest point on the decision boundary reveals a model's weakest vulnerability to adversarial attack.
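For intuition, one elementary way to locate an exact point on a decision boundary is bisection along the segment joining two inputs the model classifies differently: the continuous score difference changes sign on the segment, so it must vanish somewhere. This is only a sketch of the idea, not necessarily the paper's method; `score_diff` below is a hypothetical stand-in for a trained model.

    import numpy as np

    def boundary_point(score_diff, x_a, x_b, tol=1e-12):
        """Bisection on the segment from x_a to x_b.

        `score_diff(x)` is assumed to return s_class1(x) - s_class2(x),
        with opposite signs at x_a and x_b (classified differently).
        """
        lo, hi = 0.0, 1.0
        f_lo = score_diff(x_a)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            x_mid = (1 - mid) * x_a + mid * x_b
            if np.sign(score_diff(x_mid)) == np.sign(f_lo):
                lo = mid
            else:
                hi = mid
        t = 0.5 * (lo + hi)
        return (1 - t) * x_a + t * x_b

    # Toy score difference with boundary at x[0]**2 + x[1]**2 = 1.
    diff = lambda x: x[0]**2 + x[1]**2 - 1.0
    print(boundary_point(diff, np.array([0.0, 0.0]), np.array([2.0, 0.0])))  # approx. [1, 0]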
Arxiv manuscript
Roozbeh Yousefzadeh and Dianne P. O'Leary,
Refining the Structure of Neural Networks Using Matrix Conditioning,
August 2019.
Deep learning models have proven to be exceptionally useful in performing many machine learning tasks. However, for each new dataset, choosing an effective size and structure of the model can be a time-consuming process of trial and error. While a small network with few neurons might not be able to capture the intricacies of a given task, having too many neurons can lead to overfitting and poor generalization. Here, we propose a practical method that employs matrix conditioning to automatically design the structure of layers of a feed-forward network, by first adjusting the proportion of neurons among the layers of a network and then scaling the size of network up or down. Results on sample image and non-image datasets demonstrate that our method results in small networks with high accuracies. Finally, guided by matrix conditioning, we provide a method to effectively squeeze models that are already trained. Our techniques reduce the human cost of designing deep learning models and can also reduce training time and the expense of using neural networks for applications.
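A minimal sketch of the kind of diagnostic this approach builds on, computing the condition number of each layer's weight matrix with NumPy; the paper's full procedure for redistributing neurons among layers and scaling the network is not reproduced here.

    import numpy as np

    def layer_condition_numbers(weights):
        """Report the condition number of each layer's weight matrix.

        `weights` is a list of 2-D arrays, one per feed-forward layer.
        A very large condition number suggests redundant neurons in that
        layer; moderate, uniform values suggest the capacity is being used.
        """
        return [np.linalg.cond(W) for W in weights]

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
    for i, kappa in enumerate(layer_condition_numbers(weights)):
        print(f"layer {i}: condition number {kappa:.2e}")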
Arxiv manuscript
Roozbeh Yousefzadeh and Dianne P. O'Leary,
Interpreting Neural Networks Using Flip Points,
March 2019.
Neural networks have been criticized for their lack of easy interpretation, which undermines confidence in their use for important applications. Here, we introduce a novel technique, interpreting a trained neural network by investigating its flip points. A flip point is any point that lies on the boundary between two output classes: e.g. for a neural network with a binary yes/no output, a flip point is any input that generates equal scores for "yes" and "no". The flip point closest to a given input is of particular importance, and this point is the solution to a well-posed optimization problem. This paper gives an overview of the uses of flip points and how they are computed. Through results on standard datasets, we demonstrate how flip points can be used to provide detailed interpretation of the output produced by a neural network. Moreover, for a given input, flip points enable us to measure confidence in the correctness of outputs much more effectively than softmax score. They also identify influential features of the inputs, identify bias, and find changes in the input that change the output of the model. We show that distance between an input and the closest flip point identifies the most influential points in the training data. Using principal component analysis (PCA) and rank-revealing QR factorization (RR-QR), the set of directions from each training input to its closest flip point provides explanations of how a trained neural network processes an entire dataset: what features are most important for classification into a given class, which features are most responsible for particular misclassifications, how an adversary might fool the network, etc. Although we investigate flip points for neural networks, their usefulness is actually model-agnostic.
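A small sketch of the dataset-level analysis, assuming the closest flip points have already been computed: stack the directions from each input to its flip point and read off their principal components via the SVD. The data here are synthetic, purely for illustration.

    import numpy as np

    def principal_directions(X, F, k=3):
        """Principal components of directions from inputs to closest flip points.

        X: rows are inputs; F: rows are their (precomputed) closest flip
        points.  The top right singular vectors of the direction matrix
        indicate which feature combinations most often flip the classification.
        """
        D = F - X                              # one direction per row
        D = D - D.mean(axis=0)                 # center before PCA
        _, s, Vt = np.linalg.svd(D, full_matrices=False)
        return Vt[:k], s[:k]                   # top-k directions, singular values

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 5))
    F = X + np.outer(rng.standard_normal(100), [1, 0, 0, 0, 0])  # flips move feature 0
    V, s = principal_directions(X, F)
    print(V[0])                                # approx. +/-[1, 0, 0, 0, 0]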
Unpublished manuscript and varpro.m software
Variable Projection for Nonlinear Least Squares Problems,
Dianne P. O'Leary and Bert W. Rust,
2011.
The variable projection algorithm of Golub and Pereyra (1973) has proven to be quite valuable in the solution of nonlinear least squares problems in which a substantial number of the parameters are linear. Its advantages are efficiency and, more importantly, a better likelihood of finding a global minimizer rather than a local one. The purpose of our work is to provide a more robust implementation of this algorithm, include constraints on the parameters, more clearly identify key ingredients so that improvements can be made, compute the Jacobian matrix more accurately, and make future implementations in other languages easy.
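The core of variable projection is easy to sketch: the linear coefficients are eliminated by an inner linear least squares solve, leaving a smaller nonlinear problem in the remaining parameters. The Python toy below illustrates the idea on a two-exponential fit; varpro.m is the robust implementation, with the analytic Jacobians and constraints that this sketch omits.

    import numpy as np
    from scipy.optimize import minimize

    # Model: y ~ c1 * exp(-alpha1 * t) + c2 * exp(-alpha2 * t).
    # The c's are linear parameters; variable projection eliminates them.
    def residual_norm(alpha, t, y):
        Phi = np.exp(-np.outer(t, alpha))            # columns exp(-alpha_j * t)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # best linear coefficients
        return np.linalg.norm(y - Phi @ c)

    rng = np.random.default_rng(2)
    t = np.linspace(0, 4, 50)
    y = 2 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t) + 1e-3 * rng.standard_normal(50)
    fit = minimize(residual_norm, x0=[0.5, 2.0], args=(t, y), method="Nelder-Mead")
    print(fit.x)                                     # close to [1.0, 3.0]

The derivative-free outer solver above sidesteps the Jacobian computation that the manuscript treats carefully, at some cost in speed and reliability.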
CS-TR-4807
Modified Cholesky Algorithms: A Catalog with New Approaches,
Haw-ren Fang and Dianne P. O'Leary,
August 2006.
Given an $n \times n$ symmetric possibly indefinite matrix $A$, a modified Cholesky algorithm computes a factorization of the positive definite matrix $A+E$, where $E$ is a correction matrix. Since the factorization is often used to compute a Newton-like downhill search direction for an optimization problem, the goals are to compute the modification without much additional cost and to keep $A+E$ well-conditioned and close to $A$. Gill, Murray and Wright introduced a stable algorithm, with a bound of $\|E\|_2=O(n^2)$. An algorithm of Schnabel and Eskow further guarantees $\|E\|_2=O(n)$. We present variants that also ensure $\|E\|_2=O(n)$. Mor\'{e} and Sorensen, and Cheng and Higham, used the block $LBL^T$ factorization with blocks of order $1$ or $2$. Algorithms in this class have a worst-case cost $O(n^3)$ higher than the standard Cholesky factorization. We present a new approach using an $LTL^T$ factorization, with $T$ tridiagonal, that guarantees a modification cost of at most $O(n^2)$.
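For orientation, the sketch below shows the simplest possible modified Cholesky strategy, repeatedly shifting the diagonal until the factorization succeeds. It is not one of the catalogued algorithms, which are far more refined, controlling both $\|E\|$ and the conditioning of $A+E$.

    import numpy as np

    def modified_cholesky(A, beta=1e-8):
        """Return L and a shift tau such that A + tau*I = L @ L.T.

        Simplest diagonal-shift strategy: double tau until Cholesky
        succeeds.  The report's algorithms choose E far more carefully.
        """
        tau = 0.0 if np.all(np.diag(A) > 0) else beta
        while True:
            try:
                L = np.linalg.cholesky(A + tau * np.eye(A.shape[0]))
                return L, tau
            except np.linalg.LinAlgError:
                tau = max(2 * tau, beta)

    A = np.array([[1.0, 2.0], [2.0, 1.0]])   # indefinite: eigenvalues 3 and -1
    L, tau = modified_cholesky(A)
    print(tau)                               # a shift slightly above 1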
SANDIA Report
QCS: A System for Querying, Clustering, and Summarizing Documents,
Daniel M. Dunlavy, Dianne P. O'Leary, John M. Conroy, and
Judith D. Schlesinger,
July 2006.
Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system---the Query, Cluster, Summarize (QCS) system---which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of component methods in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, {\tt ROUGE}. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the task of evaluating each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence ``trimming,'' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
Parallelism for Quantum Computation with Qudits
Dianne P. O'Leary, Gavin K. Brennen, and Stephen S. Bullock,
Robust quantum computation with $d$-level quantum systems (qudits) poses two requirements: fast, parallel quantum gates and high-fidelity two-qudit gates. We first describe how to implement parallel single-qudit operations. It is by now well known that any single-qudit unitary can be decomposed into a sequence of Givens rotations on two-dimensional subspaces of the qudit state space. Using a coupling graph to represent physically allowed couplings between pairs of qudit states, we then show that the logical depth (time) of the parallel gate sequence is equal to the height of an associated tree. The implementation of a given unitary can then optimize the tradeoff between gate time and resources used. These ideas are illustrated for qudits encoded in the ground hyperfine states of the atomic alkalies $^{87}$Rb and $^{133}$Cs. Second, we provide a protocol for implementing parallelized non-local two-qudit gates using the assistance of entangled qubit pairs. Using known protocols for qubit entanglement purification, this offers the possibility of high-fidelity two-qudit gates.
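The Givens-rotation decomposition quoted above is easy to verify numerically. The sketch below is our own illustration, ignoring the coupling-graph scheduling that is the paper's contribution: it reduces a random 4-level unitary to a diagonal of phases by two-level rotations.

    import numpy as np

    def givens_factor(U):
        """Reduce a unitary U to a diagonal of phases by two-level rotations.

        Returns a list of (i, j, G) where G is a 2x2 unitary acting on
        levels i and j; applying them in order zeroes the strict lower
        triangle, so U = (product of their inverses) @ diag(phases).
        """
        U = U.astype(complex)
        d = U.shape[0]
        rotations = []
        for col in range(d - 1):
            for row in range(d - 1, col, -1):          # eliminate U[row, col]
                a, b = U[row - 1, col], U[row, col]
                r = np.hypot(abs(a), abs(b))
                if r < 1e-15:
                    continue
                G = np.array([[np.conj(a), np.conj(b)], [-b, a]]) / r  # 2x2 unitary
                U[[row - 1, row], :] = G @ U[[row - 1, row], :]
                rotations.append((row - 1, row, G))
        return rotations, np.diag(U)                   # remaining diagonal phases

    # Check on a random 4-level (d = 4) unitary.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    Q, _ = np.linalg.qr(X)
    rots, phases = givens_factor(Q)
    print(len(rots), np.allclose(np.abs(phases), 1))   # 6 rotations, True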
Homotopy Optimization Methods for Global Optimization,
Daniel M. Dunlavy and Dianne P. O'Leary,
December 2005.
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating the performance of HOM and HOPE.
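A minimal sketch of HOM under these definitions (our toy rendering, not the authors' code): blend an easy template function g into the target f, and warm-start each minimization at the previous minimizer. HOPE's perturbed ensemble of points is omitted.

    import numpy as np
    from scipy.optimize import minimize

    def hom(f, g, x0, steps=10):
        """Homotopy Optimization Method sketch.

        g is an easy template function, f the target.  Minimize
        h(x, t) = (1 - t) g(x) + t f(x) for t = 0, 1/steps, ..., 1,
        warm-starting each minimization at the previous minimizer.
        """
        x = np.asarray(x0, dtype=float)
        for t in np.linspace(0.0, 1.0, steps + 1):
            h = lambda z: (1 - t) * g(z) + t * f(z)
            x = minimize(h, x, method="BFGS").x
        return x

    # Target with many local minima; convex template centered near the basin.
    f = lambda z: (z[0] ** 2 - 1) ** 2 + 0.1 * np.sin(25 * z[0])
    g = lambda z: (z[0] - 0.8) ** 2
    print(hom(f, g, [0.8]))    # typically ends at a good minimizer near z = 1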
http://xxx.lanl.gov/abs/quant-ph/0509161
Efficient Circuits for Exact-Universal Computation with Qudits,
Gavin K. Brennen, Stephen S. Bullock, and Dianne P. O'Leary,
September 2005.
This paper concerns the efficient implementation of quantum circuits for qudits. We show that controlled two-qudit gates can be implemented without ancillas and prove that the gate library containing arbitrary local unitaries and one two-qudit gate, CINC, is exact-universal. A recent paper [S. Bullock, D. O'Leary, and G. K. Brennen, Phys. Rev. Lett. 94, 230502 (2005)] describes quantum circuits for qudits which require $O(d^n)$ two-qudit gates for state synthesis and $O(d^{2n})$ two-qudit gates for unitary synthesis, matching the respective lower bound complexities. In this work, we present the state synthesis circuit in much greater detail and prove that it is correct. Also, the $\lceil (n-2)/(d-2) \rceil$ ancillas required in the original algorithm may be removed without changing the asymptotics. Further, we present a new algorithm for unitary synthesis, inspired by the QR matrix decomposition, which is also asymptotically optimal.
CS-TR-4733
Stable Factorizations of Symmetric Tridiagonal and Triadic Matrices
Haw-ren Fang and Dianne P. O'Leary,
July 2005.
We call a matrix triadic if it has no more than two nonzero off-diagonal elements in any column. A symmetric tridiagonal matrix is a special case. In this paper we consider $LXL^T$ factorizations of symmetric triadic matrices, where $L$ is unit lower triangular and $X$ is diagonal; block diagonal with $1\times 1$ and $2\times 2$ blocks; or the identity, in which case $L$ is taken to be lower triangular rather than unit lower triangular. We prove that with diagonal pivoting, the $LXL^T$ factorization of a symmetric triadic matrix is sparse, study some pivoting algorithms, discuss their growth factor and performance, analyze their stability, and develop perturbation bounds. These factorizations are useful in computing inertia, in solving linear systems of equations, and in determining modified Newton search directions.
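As a point of reference, the sketch below computes the unpivoted $LDL^T$ factorization of a symmetric tridiagonal matrix using only $1\times 1$ pivots; the paper's $LXL^T$ algorithms add $2\times 2$ pivots and diagonal pivoting to make the factorization stable while preserving sparsity.

    import numpy as np

    def ldlt_tridiagonal(d, e):
        """Unpivoted LDL^T of a symmetric tridiagonal matrix.

        d: diagonal (length n); e: subdiagonal (length n-1).
        Returns the subdiagonal l of the unit lower bidiagonal L and the
        diagonal x of X, so that A = L X L^T.  Only 1x1 pivots are used,
        so the loop fails if a leading pivot is zero.
        """
        n = len(d)
        x = np.zeros(n)
        l = np.zeros(n - 1)
        x[0] = d[0]
        for i in range(1, n):
            l[i - 1] = e[i - 1] / x[i - 1]
            x[i] = d[i] - l[i - 1] ** 2 * x[i - 1]
        return l, x

    d = np.array([2.0, 2.0, 2.0])
    e = np.array([-1.0, -1.0])
    l, x = ldlt_tridiagonal(d, e)
    A = np.diag(d) + np.diag(e, -1) + np.diag(e, 1)
    L = np.eye(3) + np.diag(l, -1)
    print(np.allclose(L @ np.diag(x) @ L.T, A))   # True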
http://xxx.lanl.gov/abs/quant-ph/0410116
Asymptotically Optimal Quantum Circuits for d-level Systems, Stephen S. Bullock, Dianne P. O'Leary, and Gavin K. Brennen, October 2004
As a qubit is a two-level quantum system whose state space is spanned by |0>, |1>, so a qudit is a d-level quantum system whose state space is spanned by |0>,...,|d-1>. Quantum computation has stimulated much recent interest in algorithms factoring unitary evolutions of an n-qubit state space into component two-particle unitary evolutions. In the absence of symmetry, Shende, Markov and Bullock use Sard's theorem to prove that at least C 4^n two-qubit unitary evolutions are required, while Vartiainen, Moettoenen, and Salomaa (VMS) use the QR matrix factorization and Gray codes in an optimal order construction involving two-particle evolutions. In this work, we note that Sard's theorem demands C d^{2n} two-qudit unitary evolutions to construct a generic (symmetry-less) n-qudit evolution. However, the VMS result applied to virtual-qubits only recovers optimal order in the case that d is a power of two. We further construct a QR decomposition for d-multi-level quantum logics, proving a sharp asymptotic of Theta(d^{2n}) two-qudit gates and thus closing the complexity question for all d-level systems (d finite). Gray codes are not required, and the optimal Theta(d^{2n}) asymptotic also applies to gate libraries where two-qudit interactions are restricted by a choice of certain architectures.
On Sensitivity of Gauss-Christoffel Quadrature Estimates, Dianne P. O'Leary and Zdenek Strakos, October 2004
In this work, we investigate conditions under which Gauss-Christoffel quadrature estimates are insensitive to perturbation of the distribution function, deriving an error bound that depends on the perturbation and the function to be integrated. We show that for the Riemann-Stieltjes integral even a small perturbation of a discontinuous distribution function can cause a large difference in Gauss-Christoffel quadrature estimates.
http://xxx.lanl.gov/abs/quant-ph/0407223
Criteria for Exact Qudit Universality, Gavin K. Brennen, Dianne P. O'Leary, and Stephen S. Bullock, July 2004
We describe criteria for implementation of quantum computation in qudits. A qudit is a d-dimensional system whose Hilbert space is spanned by states |0>, |1>, ..., |d-1>. An important earlier work of Muthukrishnan and Stroud [1] describes how to exactly simulate an arbitrary unitary on multiple qudits using a 2d-1 parameter family of single qudit and two qudit gates. Their technique is based on the spectral decomposition of unitaries. Here we generalize this argument to show that exact universality follows given a discrete set of single qudit Hamiltonians and one two-qudit Hamiltonian. The technique is related to the QR-matrix decomposition of numerical linear algebra. We consider a generic physical system in which the single qudit Hamiltonians are a small collection of H_{jk}^x=\hbar\Omega (|k>
CS-TR-4594, June 2004; updated version, April 2005.
Universal Duality in Conic Convex Optimization, Simon P. Schurr, Andre' L. Tits, and Dianne P. O'Leary,
Given a primal-dual pair of linear programs, it is known that if their optimal values are viewed as lying on the extended real line, then the duality gap is zero, unless both problems are infeasible, in which case the optimal values are $+\infty$ and $-\infty$. In contrast, for optimization problems over nonpolyhedral convex cones, a nonzero duality gap can exist even in the case where the primal and dual problems are both feasible. For a pair of dual conic convex programs, we provide simple conditions on the ``constraint matrices'' and cone under which the duality gap is zero for \textit{every} choice of linear objective function and ``right-hand-side''. We refer to this property as ``universal duality''. Our conditions possess the following properties: (i) they are necessary and sufficient, in the sense that if (and only if) they do not hold, the duality gap is nonzero for some linear objective function and ``right-hand-side''; (ii) they are metrically and topologically generic; and (iii) they can be verified by solving a single conic convex program. As a side result, we also show that the feasible sets of a primal conic convex program and its dual cannot both be bounded, unless they are both empty, and we relate this to universal duality.
QR Factorizations Using a Restricted Set of Rotations, Dianne P. O'Leary and Stephen S. Bullock, May 2004
Any matrix $A$ of dimension $m \times n$ ($m \ge n$) can be reduced to upper triangular form by multiplying by a sequence of $mn-n(n+1)/2$ appropriately chosen rotation matrices. In this work, we address the question of whether such a factorization exists when the set of allowed rotation planes is restricted. We introduce the rotation graph as a tool to devise elimination orderings in QR factorizations. Properties of this graph characterize sets of rotation planes that are sufficient (or sufficient under permutation) and identify rotation planes to add to complete a deficient set. We also devise a constructive way to determine all feasible rotation sequences for performing the QR factorization using a restricted set of rotation planes. We present applications to quantum circuit design and parallel QR factorization.
http://xxx.lanl.gov/abs/quant-ph/0402051
Time reversal and n-qubit Canonical Decompositions, Stephen S. Bullock, Gavin K. Brennen, and Dianne P. O'Leary, February 2004
The n-qubit concurrence canonical decomposition ({\tt CCD}) is a generalization of the two-qubit canonical decomposition SU(4)=[SU(2) \otimes SU(2)] \Delta [SU(2) \otimes SU(2)], where \Delta is the commutative group which phases the maximally entangled Bell basis. A prequel manuscript creates the {\tt CCD} as a particular example of the G=KAK metadecomposition theorem of Lie theory. We hence denote it by SU(2^n)=KAK. If C_n(\ket{\psi})= |\overline{\bra{\psi}}(-i\sigma^y)^{\otimes n} \ket{\psi}| is the concurrence entanglement monotone, then computations in the K group are symmetries of a related bilinear form and so do not change the concurrence. Hence for a quantum computation v=k_1ak_2, analysis of a\in A allows one to study one aspect of the entanglement dynamics of the evolution v, i.e. the concurrence dynamics. Note that analysis of such an a \in A is simpler than the generic case, since A is a commutative group whose dimension is exponentially less than that of SU(N).
ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/lemmerli/reports/whole5f.zip
Regularized structured total least squares algorithms for blind image deblurring, Nicola Mastronardi, Philip Lemmerling, Anoop Kalsi, Dianne O'Leary, and Sabine Van Huffel, 2003, Internal Report 03-129, ESAT-SISTA, K.U.Leuven (Leuven, Belgium)
The structured total least squares (STLS) problem has been introduced to handle problems involving structured matrices corrupted by noise. Often the problem is ill-posed. Recently, regularization has been proposed in the STLS framework to solve ill-posed blind deconvolution problems encountered in image deblurring when both the image and the blurring function have uncertainty. The kernel of the regularized STLS (RSTLS) problem is a least squares problem involving Block-Toeplitz-Toeplitz-Block matrices. In this paper an algorithm is described to solve this problem, based on a particular implementation of the generalized Schur Algorithm (GSA). It is shown that this new implementation improves the computational efficiency of the straightforward implementation of GSA from $O(N^{2.5})$ to $O(N^2)$, where $N$ is the number of pixels in the image.
Efficient Iterative Algorithms for the Stochastic Finite Element Method with Application to Acoustic Scattering, Howard C. Elman, Oliver G. Ernst, Dianne P. O'Leary, and Michael Stewart, November 2002.
In this study, we describe the algebraic computations required to implement the stochastic finite element method for solving problems in which uncertainty is restricted to right hand side data coming from forcing functions or boundary conditions. We show that the solution can be represented in a compact outer product form which leads to efficiencies in both work and storage, and we demonstrate that block iterative methods for algebraic systems with multiple right hand sides can be used to advantage to compute this solution. We also show how to generate a variety of statistical quantities from the computed solution. Finally, we examine the behavior of these statistical quantities in one setting derived from a model of acoustic scattering.
Algorithms for Structured Total Least Squares Problems with Applications to Blind Image Deblurring. Anoop Kalsi and Dianne P. O'Leary. August 2002.
Mastronardi, Lemmerling, and Van Huffel presented an algorithm for solving a total least squares problem when the matrix and its perturbations are Toeplitz. A Toeplitz matrix is a special kind of matrix with small displacement rank. Here we generalize the fast algorithm to any matrix $M$ with small displacement rank. In particular, we show how to efficiently construct the generators whenever $M$ has small displacement rank and show that in many important cases the Cholesky factorization of the matrix $M^TM$ can also be determined fast. We further extend this problem to Tikhonov regularization of ill-posed problems and illustrate the use of the algorithm on an image deblurring problem. (Also cross-referenced as UMIACS-TR-2002-70) University of Maryland Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland,
Stagnation of GMRES. Ilya Zavorin, Dianne P. O'Leary, and Howard Elman. October 2001.
We study problems for which the iterative method GMRES for solving linear systems of equations makes no progress in its initial iterations. Our tool for analysis is a nonlinear system of equations, the stagnation system, that characterizes this behavior. For problems of dimension 2 we can solve this system explicitly, determining that every choice of eigenvalues leads to a stagnating problem for eigenvector matrices that are sufficiently poorly conditioned. We partially extend this result to higher dimensions for a class of eigenvector matrices called extreme. We give necessary and sufficient conditions for stagnation of systems involving unitary matrices, and show that if a normal matrix stagnates then so does an entire family of nonnormal matrices with the same eigenvalues. Finally, we show that there are real matrices for which stagnation occurs for certain complex right-hand sides but not for real ones. (Also cross-referenced as UMIACS-TR-2001-74) University of Maryland Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland,
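The dimension-2 case is small enough to compute in a few lines. In the example below (a standard stagnation example, not necessarily the one analyzed in the paper), the first Krylov vector b is orthogonal to Ab, so the first GMRES iterate makes no progress; at step 2, GMRES of course solves the 2-by-2 system exactly.

    import numpy as np

    # A dimension-2 example of complete GMRES stagnation at step 1.  The first
    # GMRES iterate is x1 = beta * b with beta minimizing ||b - beta * A @ b||;
    # when A @ b is orthogonal to b, the minimizer is beta = 0: no progress.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation by 90 degrees
    b = np.array([1.0, 0.0])

    Ab = A @ b
    beta = (Ab @ b) / (Ab @ Ab)               # least-squares step along A @ b
    x1 = beta * b                             # first GMRES iterate (x0 = 0)
    print(beta, np.linalg.norm(b - A @ x1))   # 0.0 and 1.0: residual unchanged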
Blind Deconvolution Using a Regularized Structured Total Least Norm Approach. Armin Pruessner and Dianne P. O'Leary. September 2001.
Rosen, Park and Glick proposed the structured total least norm (STLN) algorithm for solving problems in which both the matrix and the right-hand side contain errors. We extend this algorithm for ill-posed problems by adding regularization and use the resulting algorithm to solve blind deconvolution problems as encountered in image deblurring when both the image and the blurring function have uncertainty. The resulting regularized structured total least norm (RSTLN) algorithm preserves any affine structure of the matrix and minimizes the discrete L-p-norm error, where p=1,2, or infinity. We demonstrate the effectiveness of these algorithms for blind deconvolution. (Also cross-referenced as UMIACS-TR-2001-66 University of Maryland Institute for Advanced Computer Studies, University of Maryland)
Scaling Symmetric Positive Definite Matrices to Prescribed Row Sums. Dianne P. O'Leary. September 2001.
We show that any symmetric positive definite matrix can be symmetrically scaled by a positive diagonal matrix, or by a diagonal matrix with arbitrary signs, to have arbitrary positive row sums. The scaling can be constructed by solving an ordinary differential equation. (Also cross-referenced as UMIACS-TR-2001-62 University of Maryland Institute for Advanced Computer Studies, University of Maryland)
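The statement is easy to check numerically. The sketch below solves the nonlinear system d_i (Ad)_i = r_i directly with a black-box solver merely to exhibit a scaling; the paper's construction integrates an ordinary differential equation instead.

    import numpy as np
    from scipy.optimize import fsolve

    def scale_to_row_sums(A, r):
        """Find positive d so that diag(d) @ A @ diag(d) has row sums r.

        Row i of the scaled matrix sums to d[i] * (A @ d)[i], so we solve
        d * (A @ d) = r.  fsolve is used only as a quick numerical check
        of the existence statement, not as the paper's construction.
        """
        return fsolve(lambda d: d * (A @ d) - r, np.ones(len(r)))

    A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
    r = np.array([1.0, 2.0])
    d = scale_to_row_sums(A, r)
    S = np.diag(d) @ A @ np.diag(d)
    print(S.sum(axis=1))                     # approx. [1, 2]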
Text Summarization via Hidden Markov Models and Pivoted QR Matrix Decomposition. John Conroy. Dianne P. O'Leary. February 2001.
A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present two approaches to generating such summaries. The first uses a pivoted QR decomposition of the term-sentence matrix in order to identify sentences that have ideas that are distinct from those in other sentences. The second is based on a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of these methods with summaries generated by humans, showing that we obtain higher agreement than do earlier methods. (Also cross-referenced as UMIACS-TR-2001-11 University of Maryland Institute for Advanced Computer Studies, University of Maryland)
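A sketch of the pivoted QR selection step on a toy term-sentence matrix (the hidden Markov model half of the paper is not shown): column pivoting greedily selects the sentence least well represented by the sentences already chosen.

    import numpy as np
    from scipy.linalg import qr

    def select_sentences(term_sentence, k):
        """Pick k sentences via a pivoted QR of the term-sentence matrix.

        Column pivoting repeatedly picks the column (sentence) with the
        largest residual norm after projecting out the sentences already
        chosen, so the selections carry ideas distinct from one another.
        """
        _, _, piv = qr(term_sentence, pivoting=True)
        return piv[:k]

    # Toy matrix: 4 terms x 5 sentences; sentences 0 and 3 are near-duplicates.
    A = np.array([[1, 0, 0, 1, 0],
                  [1, 0, 1, 1, 0],
                  [0, 1, 0, 0, 0],
                  [0, 0, 1, 0, 1]], dtype=float)
    print(select_sentences(A, 2))   # two sentences with distinct term profiles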
Image Restoration through Subimages and Confidence Images. James G. Nagy. Dianne P. O'Leary. July 2000.
Some very effective image reconstruction algorithms cannot be applied to large images because of their cost. In this work, we first show how to apply such algorithms to subimages, giving improved reconstruction of regions of interest. Our second contribution is to construct confidence intervals for pixel values, by generalizing a theorem of O'Leary and Rust to allow both upper and lower bounds on variables. All current algorithms for image deblurring or deconvolution output an image, providing an estimated value for each pixel. What is lacking is an estimate of the statistical confidence that we can have in those pixel values or in the features they form in the image. There are two obstacles to determining confidence intervals for pixel values: first, the process is computationally quite intensive, and second, there has been no proposal for providing the results in a visually useful way. In this work we overcome the first of those limitations and use a recently developed algorithm called Twinkle to overcome the second. We demonstrate the usefulness of these techniques on astronomical and motion-blurred images. (Also cross-referenced as UMIACS-TR-2000-52 University of Maryland Institute for Advanced Computer Studies, University of Maryland)
Displaying Confidence Images. James G. Nagy. Dianne P. O'Leary. July 2000.
Algorithms for computing images result in an estimate of an image. The image may result from deblurring a measured image, from deconvolving a set of measurements, or from computing an image by modeling physical processes such as the weather. These computations provide an estimated value for each pixel in the image. What is lacking, however, is an estimate of the statistical confidence that we can have in those pixel values or in the features they form. In this work we discuss novel ways to display confidence information, using an algorithm called Twinkle, in order to give the viewer valuable visual insight into uncertainties. The technique is useful whether the confidence information is in the form of a confidence interval or a distribution of possible values. We demonstrate how to display confidence information in a variety of applications: weather forecasts, intensity of a star, and rating a potential tumor in a diagnostic image. (Also cross-referenced as UMIACS-TR-2000-51, University of Maryland Institute for Advanced Computer Studies, University of Maryland)
Animated companion to the above two papers, Nagy and O'Leary, July 2000
A Multigrid Method Enhanced by Krylov Subspace Iteration for Discrete Helmholtz Equations. Howard C. Elman. Oliver G. Ernst. Dianne P. O'Leary. June 1999.
Standard multigrid algorithms have proven ineffective for the solution of discretizations of Helmholtz equations. In this work we modify the standard algorithm by adding GMRES iterations at coarse levels and as an outer iteration. We demonstrate the algorithm's effectiveness through theoretical analysis of a model problem and experimental results. In particular, we show that the combined use of GMRES as a smoother and outer iteration produces an algorithm whose performance depends relatively mildly on wave number and is robust for normalized wave numbers as large as two hundred. For fixed wave numbers, it displays grid-independent convergence rates and has costs proportional to number of unknowns. (Also cross-referenced as UMIACS-TR-99-36 University of Maryland Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland)
Symbiosis between Linear Algebra and Optimization. Dianne P. O'Leary. May 1999.
The efficiency and effectiveness of most optimization algorithms hinges on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay will highlight contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a symbiotic relationship. (Also cross-referenced as UMIACS-TR-99-30 University of Maryland Institute for Advanced Computer Studies, University of Maryland)
Computation and Uses of the Semidiscrete Matrix Decomposition. Tamara G. Kolda. Dianne P. O'Leary. April 1999.
We derive algorithms for computing a semidiscrete approximation to a matrix in the Frobenius and weighted norms. The approximation is formed as a weighted sum of outer products of vectors whose elements are plus or minus $1$ or $0$, so the storage required by the approximation is quite small. We also present a related algorithm for approximation of a tensor. Applications of the algorithms are presented to data compression, filtering, and information retrieval; and software is provided in C and in Matlab. (Also cross-referenced as UMIACS-TR-99-22, University of Maryland Institute for Advanced Computer Studies, University of Maryland)
Supplement to CS-TR-4012, April 1999, Kolda and O'Leary: SDDPACK software
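To convey the flavor of the decomposition, the sketch below builds a semidiscrete-style approximation greedily by rounding leading singular vectors to entries in {-1, 0, 1}; the report's algorithm (and the SDDPACK software above) computes each term by direct optimization and does better.

    import numpy as np

    def sdd_like(A, nterms, thresh=0.5):
        """Greedy semidiscrete-style approximation A ~ sum_k d_k x_k y_k^T.

        Each x_k, y_k has entries in {-1, 0, 1}, so a term costs little
        to store.  Here the vectors are crudely rounded singular vectors;
        the weight d_k is optimal for the chosen sign pattern, so the
        residual never increases.
        """
        R = A.astype(float).copy()
        terms = []
        for _ in range(nterms):
            u, s, vt = np.linalg.svd(R, full_matrices=False)
            x = np.sign(u[:, 0]) * (np.abs(u[:, 0]) > thresh * np.abs(u[:, 0]).max())
            y = np.sign(vt[0]) * (np.abs(vt[0]) > thresh * np.abs(vt[0]).max())
            denom = (x @ x) * (y @ y)
            if denom == 0:
                break
            d = x @ R @ y / denom          # best weight for this sign pattern
            R -= d * np.outer(x, y)        # peel off the new term
            terms.append((d, x, y))
        return terms

    rng = np.random.default_rng(4)
    A = rng.standard_normal((20, 30))
    terms = sdd_like(A, 5)
    approx = sum(d * np.outer(x, y) for d, x, y in terms)
    print(np.linalg.norm(A - approx) / np.linalg.norm(A))   # shrinks with more terms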
Adaptive Use of Iterative Methods in Predictor-Corrector Interior Point Methods for Linear Programming. Weichung Wang. Dianne P. O'Leary. April 1999.
In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the use of updating of preconditioners computed for previous barrier parameters. The second is an adaptive automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given costs in determining earlier directions. We summarize earlier results using a modified version of the OB1-R code of Lustig, Marsten, and Shanno, and we present results from a predictor-corrector code PCx modified to use adaptive iteration. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained. (Also cross-referenced as UMIACS-TR-99-21, University of Maryland Institute for Advanced Computer Studies, University of Maryland)
Near-Optimal Parameters for Tikhonov and Other Regularization Methods. Dianne P. O'Leary. March 1999.
Choosing the regularization parameter for an ill-posed problem is an art based on good heuristics and prior knowledge of the noise in the observations. In this work we propose choosing the parameter, without a priori information, by approximately minimizing the distance between the true solution to the discrete problem and the family of regularized solutions. We demonstrate the usefulness of this approach for Tikhonov regularization and for an alternate family of solutions. Further, we prove convergence of the regularization parameter to zero as the standard deviation of the noise goes to zero. We also prove that the alternate family produces solutions closer to the true solution than the Tikhonov family when the noise is small enough. (Also cross-referenced as UMIACS-TR-99-17, University of Maryland Institute for Advanced Computer Studies, University of Maryland.)
Choosing Regularization Parameters in Iterative Methods for Ill-Posed Problems. Misha E. Kilmer. Dianne P. O'Leary. October 1998.
Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projection onto an even smaller dimensional space, via iterative methods based on Krylov subspaces. In this work we present efficient algorithms that regularize after this second projection rather than before it. We prove some results on the approximate equivalence of this approach to other forms of regularization and we present numerical examples. (Also cross-referenced as UMIACS-TR-98-48) University of Maryland Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland.
Eigenanalysis of Some Preconditioned Helmholtz Problems. Howard C. Elman. Dianne P. O'Leary. March 1998.
In this work we calculate the eigenvalues obtained by preconditioning the discrete Helmholtz operator with Sommerfeld-like boundary conditions on a rectilinear domain, by a related operator with boundary conditions that permit the use of fast solvers. The main innovation is that the eigenvalues for two and three-dimensional domains can be calculated exactly by solving a set of one-dimensional eigenvalue problems. This permits analysis of quite large problems. For grids fine enough to resolve the solution for a given wave number, preconditioning using Neumann boundary conditions yields eigenvalues that are uniformly bounded, located in the first quadrant, and outside the unit circle. In contrast, Dirichlet boundary conditions yield eigenvalues that approach zero as the product of wave number with the mesh size is decreased. These eigenvalue properties yield the first insight into the behavior of iterative methods such as GMRES applied to these preconditioned problems. (Also cross-referenced as UMIACS-TR-98-22) University of Maryland Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland.
On the Convergence of a New Rayleigh Quotient Method with Applications. D. P. O'Leary. G. W. Stewart. October 1997.
In this paper we propose a variant of the Rayleigh quotient method to compute an eigenvalue and corresponding eigenvectors of a matrix. It is based on the observation that eigenvectors of a matrix with eigenvalue zero are also singular vectors corresponding to zero singular values. Instead of computing eigenvector approximations by the inverse power method, we take them to be the singular vectors corresponding to the smallest singular value of the shifted matrix. If these singular vectors are computed exactly the method is quadratically convergent. However, exact singular vectors are not required for convergence, and the resulting method combined with Golub--Kahan--Krylov bidiagonalization looks promising for enhancement/refinement methods for large eigenvalue problems. (Also cross-referenced as UMIACS-TR-97-74) Institute for Advanced Computer Studies, University of Maryland. Department of Computer Science, University of Maryland.
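A compact sketch of the iteration on a small symmetric matrix, using a dense SVD where the paper would use Golub-Kahan-Krylov bidiagonalization for large problems:

    import numpy as np

    def svd_rayleigh_iteration(A, mu, iters=8):
        """Rayleigh-quotient-style iteration using singular vectors.

        An eigenvector of A - mu*I for eigenvalue 0 is also a singular
        vector for singular value 0, so each step takes v to be the right
        singular vector for the smallest singular value of A - mu*I and
        then updates mu by the Rayleigh quotient of v (A symmetric here).
        """
        n = A.shape[0]
        for _ in range(iters):
            _, _, vt = np.linalg.svd(A - mu * np.eye(n))
            v = vt[-1]                     # smallest singular value is last
            mu = v @ A @ v                 # Rayleigh quotient update
        return mu, v

    A = np.diag([1.0, 2.0, 10.0]) + 0.01   # symmetric perturbed diagonal
    mu, v = svd_rayleigh_iteration(A, mu=1.9)
    print(mu)                              # converges to the eigenvalue near 2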
Tikhonov Regularization and Total Least Squares. Gene H. Golub. Per Christian Hansen. Dianne P. O'Leary. August 1997.
Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. We show how Tikhonov's regularization method, which in its original formulation involves a least squares problem, can be recast in a total least squares formulation, suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that in certain cases with large perturbations, the new method is superior to standard regularization methods. (Also cross-referenced as UMIACS-TR-97-65) University of Maryland Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland.
Efficient Iterative Solution of the Three-Dimensional Helmholtz Problem. Howard C. Elman. Dianne P. O'Leary. August 1997.
We examine preconditioners for the discrete indefinite Helmholtz equation on a three-dimensional box-shaped domain with Sommerfeld-like boundary conditions. The preconditioners are of two types. The first is derived by discretization of a related continuous operator that differs from the original only in its boundary conditions. The second is derived by a block Toeplitz approximation to the discretized problem. The resulting preconditioning matrices allow the use of fast transform methods and differ from the discrete Helmholtz operator by an operator of low rank. We present experimental results demonstrating that when these methods are combined with Krylov subspace iteration, convergence rates depend only mildly on both the wave number and discretization mesh size. In addition, the methods display high efficiencies in an implementation on an IBM SP-2 parallel computer. (Also cross-referenced as UMIACS-TR-97-63) University of Maryland Institute for Advanced Computer Studies, Department of Computer Science, University of Maryland.
Fast Iterative Image Restoration with a Spatially-Varying PSF. James G. Nagy. Dianne P. O'Leary. June 1997.
We describe how to efficiently apply a spatially-variant blurring operator using linear interpolation of measured point spread functions. Numerical experiments illustrate that substantially better resolution can be obtained at very little additional cost compared to piecewise constant interpolation. (Also cross-referenced as UMIACS-TR-97-53) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
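A toy version of the interpolation idea with just two measured PSFs (our simplification; the paper interpolates over a grid of measured PSFs): convolve once per PSF and blend the results with weights that vary linearly across the image.

    import numpy as np
    from scipy.ndimage import convolve

    def variant_blur(image, psf_left, psf_right):
        """Spatially-variant blur via linear interpolation of two PSFs.

        The image is convolved once with each measured PSF, and the
        results are blended with weights varying linearly across the
        columns, so the effective PSF changes smoothly left to right.
        """
        w = np.linspace(1.0, 0.0, image.shape[1])   # weight of the left PSF
        left = convolve(image, psf_left, mode="reflect")
        right = convolve(image, psf_right, mode="reflect")
        return w * left + (1 - w) * right           # broadcast over rows

    image = np.zeros((64, 64)); image[32, 16] = image[32, 48] = 1.0
    narrow = np.ones((3, 3)) / 9.0                  # mild blur on the left
    wide = np.ones((7, 7)) / 49.0                   # stronger blur on the right
    blurred = variant_blur(image, narrow, wide)
    print(blurred[32, 14:19].round(3), blurred[32, 46:51].round(3))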
A Semi-Discrete Matrix Decomposition for Latent Semantic Indexing in Information Retrieval. Tamara G. Kolda. Dianne P. O'Leary. December 1996.
The vast amount of textual information available today is useless unless it can be effectively and efficiently searched. In information retrieval, we wish to match queries with relevant documents. Documents can be represented by the terms that appear within them, but literal matching of terms does not necessarily retrieve all relevant documents. Latent Semantic Indexing represents documents by approximations and tends to cluster documents on similar topics even if their term profiles are somewhat different. This approximate representation is usually accomplished using a low-rank singular value decomposition (SVD) approximation. In this paper, we use an alternate decomposition, the semi-discrete decomposition (SDD). In our tests, for equal query times, the SDD does as well as the SVD and uses less than one-tenth the storage. Additionally, we show how to update the SDD for a dynamically changing document collection. (Also cross-referenced as UMIACS-TR-96-92) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
Large Latent Semantic Indexing via a Semi-Discrete Matrix Decomposition. Tamara G. Kolda. Dianne P. O'Leary. November 1996.
With the electronic storage of documents comes the possibility of building search engines that can automatically choose documents relevant to a given set of topics. In information retrieval, we wish to match queries with relevant documents. Documents can be represented by the terms that appear within them, but literal matching of terms does not necessarily retrieve all relevant documents. There are a number of information retrieval systems based on inexact matches. Latent Semantic Indexing represents documents by approximations and tends to cluster documents on similar topics even if their term profiles are somewhat different. This approximate representation is usually accomplished using a low-rank singular value decomposition (SVD) approximation. In this paper, we use an alternate decomposition, the semi-discrete decomposition (SDD). For equal query times, the SDD does as well as the SVD and uses less than one-tenth the storage for the MEDLINE test set. (Also cross-referenced as UMIACS-TR-96-83) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
Regularization Algorithms Based on Total Least Squares. Per Christian Hansen. Dianne P. O'Leary. September 1996.
Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. Classical regularization methods, such as Tikhonov's method or truncated SVD, are not designed for problems in which both the coefficient matrix and the right-hand side are known only approximately. For this reason, we develop TLS-based regularization methods that take this situation into account. Here, we survey two different approaches to incorporation of regularization, or stabilization, into the TLS setting. The two methods are similar in spirit to Tikhonov regularization and truncated SVD, respectively. We analyze the regularizing properties of the methods and demonstrate by numerical examples that in certain cases with large perturbations, these new methods are able to yield more accurate regularized solutions than those produced by the standard methods. (Also cross-referenced as UMIACS-TR-96-65) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland, Department of Mathematical Modelling, Technical Univ. of Denmark,
Pivoted Cauchy-like Preconditioners for Regularized Solution of Ill-Posed Problems. Misha E. Kilmer. Dianne P. O'Leary. September 1996.
Many ill-posed problems are solved using a discretization that results in a least squares problem or a linear system involving a Toeplitz matrix. The exact solution to such problems is often hopelessly contaminated by noise, since the discretized problem is quite ill-conditioned, and noise components in the approximate null-space dominate the solution vector. Therefore we seek an approximate solution that does not have large components in these directions. We use a preconditioned conjugate gradient algorithm to compute such a regularized solution. An orthogonal change of coordinates transforms the Toeplitz matrix to a Cauchy-like matrix, and we choose our preconditioner to be a low rank Cauchy-like matrix determined in the course of Gu's fast modified complete pivoting algorithm. We show that if the kernel of the ill-posed problem is smooth, then this preconditioner has desirable properties: the largest singular values of the preconditioned matrix are clustered around one, the smallest singular values, corresponding to the noise subspace, remain small, and the signal and noise spaces are relatively unmixed. The preconditioned algorithm costs only $O(n \lg n)$ operations per iteration for a problem with $n$ variables. The effectiveness of the preconditioner for filtering noise is demonstrated on three examples. (Also cross-referenced as UMIACS-TR-96-63) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland, Applied Mathematics Program, Univ. of Maryland,
BFGS with Update Skipping and Varying Memory. Tamara Gibson. Dianne P. O'Leary. Larry Nazareth. July 1996.
We give conditions under which limited-memory quasi-Newton methods with exact line searches will terminate in $n$ steps when minimizing $n$-dimensional quadratic functions. We show that although all Broyden family methods terminate in $n$ steps in their full-memory versions, only BFGS does so with limited-memory. Additionally, we show that full-memory Broyden family methods with exact line searches terminate in at most $n+p$ steps when $p$ matrix updates are skipped. We introduce new limited-memory BFGS variants and test them on nonquadratic minimization problems. (Also cross-referenced as UMIACS-TR-96-49) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
Overcoming Instability in Computing the Fundamental Matrix. Daniel P. Heyman. Dianne P. O'Leary. April 1996.
We present an algorithm for solving linear systems involving the probability or rate matrix for a Markov chain. It is based on a UL factorization but works only with a submatrix of the factor U. We demonstrate its utility on Erlang-B models as well as more complicated models of a telephone multiplexing system. (Also cross-referenced as UMIACS-TR-96-24) Dept. of Computer Science, Univ. of Maryland, University of Maryland Institute for Advanced Computer Studies,
Adaptive Use of Iterative Methods in Interior Point Methods for Linear Programming. Weichung Wang. Dianne P. O'Leary. November 1995.
In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the use of updating of preconditioners computed for previous barrier parameters. The second is an adaptive automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given costs in determining earlier directions. These ideas are tested by applying a modified version of the OB1-R code of Lustig, Marsten, and Shanno to a variety of problems from the NETLIB and other collections. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained. (Also cross-referenced as UMIACS-TR-95-111) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
Conjugate Gradients and Related KMP Algorithms: The Beginnings. Dianne P. O'Leary. November 1995.
In the late 1940's and early 1950's, newly available computing machines generated intense interest in solving ``large'' systems of linear equations. Among the algorithms developed were several related methods, all of which generated bases for Krylov subspaces and used the bases to minimize or orthogonally project a measure of error. These methods include the conjugate gradient algorithm and the Lanczos algorithm. We refer to these algorithms as the KMP family and discuss its origins, emphasizing research themes that continue to have central importance. (Also cross-referenced as UMIACS-TR-95-107) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
Restoring Images Degraded by Spatially-Variant Blur. James G. Nagy. Dianne P. O'Leary. February 1995.
Restoration of images that have been blurred by the effects of a Gaussian blurring function is an ill-posed but well-studied problem. Any blur that is spatially invariant can be expressed as a convolution kernel in an integral equation. Fast and effective algorithms then exist for determining the original image by preconditioned iterative methods. If the blurring function is spatially variant, however, then the problem is more difficult. In this work we develop fast algorithms for forming the convolution and for recovering the original image when the convolution functions are spatially variant but have a small domain of support. This assumption leads to a discrete problem involving a banded matrix. We devise an effective preconditioner and prove that the preconditioned matrix differs from the identity by a matrix of small rank plus a matrix of small norm. Numerical examples are given, related to the Hubble Space Telescope Wide-Field / Planetary Camera. The algorithms that we develop are applicable to other ill-posed integral equations as well. (Also cross-referenced as UMIACS-TR-95-26) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
A Parallel Inexact Newton Method Using a Krylov Multisplitting Algorithm. Chiou-Ming Huang. Dianne P. O'Leary. July 1993.
We present a parallel variant of the inexact Newton algorithm that uses the Krylov multisplitting algorithm (KMS) to compute the approximate Newton direction. The algorithm can be used for solving unconstrained optimization problems or systems of nonlinear equations. The KMS algorithm is a more efficient parallel implementation of Krylov subspace methods (GMRES, Arnoldi, etc.) with multisplitting preconditioners. The work of the KMS algorithm is divided into the multisplitting tasks and a direction forming task. There is a great deal of parallelism within each task and the number of synchronization points between the tasks is greatly reduced. We study the local and global convergence properties of the algorithm and present results of numerical examples on a sequential computer. (Also cross-referenced as UMIACS-TR-93-71) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
Why Broyden's Nonsymmetric Method Terminates on Linear Equations. Dianne P. O'Leary. March 1993.
The family of algorithms introduced by Broyden in 1965 for solving systems of nonlinear equations has been used quite effectively on a variety of problems. In 1979, Gay proved the then surprising result that the algorithms terminate in at most 2n steps on linear problems with n variables. His very clever proof gives no insight into properties of the intermediate iterates, however. In this work we show that Broyden's methods are projection methods, forcing the residuals to lie in a nested set of subspaces of decreasing dimension. (Also cross-referenced as UMIACS-TR-93-23) University of Maryland Institute for Advanced Computer Studies, Dept. of Computer Science, Univ. of Maryland,
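Gay's 2n-step termination is easy to observe numerically. The sketch below runs Broyden's "good" method on a random 3-by-3 linear system, starting from the identity as the Jacobian approximation; the residual drops to roundoff within 2n = 6 steps.

    import numpy as np

    def broyden(F, x, B, max_steps, tol=1e-12):
        """Broyden's 'good' method: rank-one secant updates of an
        approximate Jacobian B, with x updated by a quasi-Newton step."""
        history = [np.linalg.norm(F(x))]
        for _ in range(max_steps):
            if history[-1] < tol:
                break
            dx = -np.linalg.solve(B, F(x))
            y = F(x + dx) - F(x)
            x = x + dx
            B = B + np.outer(y - B @ dx, dx) / (dx @ dx)   # secant update
            history.append(np.linalg.norm(F(x)))
        return x, history

    # Linear test problem F(x) = A x - b with n = 3; Gay's theorem says
    # termination in at most 2n = 6 steps.
    rng = np.random.default_rng(5)
    A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
    b = rng.standard_normal(3)
    x, res = broyden(lambda x: A @ x - b, np.zeros(3), np.eye(3), max_steps=6)
    print(np.round(res, 10))    # residual hits (essentially) zero by step 6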