The topics we could have talked about this semester under the title "advanced linear numerical analysis" are quite diverse, and we can only cover a few in class. For your project, I want you to "dig into" one topic in depth -- either something we have talked about or something we won't talk about that you find interesting.
Deadlines and points:
What to submit for the project:
Your project must have these components:
Your project might also include:
How to submit: Submit your project by e-mail. The time stamp on the e-mail will determine whether the project is on time or late.
I'll acknowledge your submission by e-mail after I have successfully extracted the files.
Some questions that will be asked while evaluating a project:
Warning: The only failing grades I have given on term projects have resulted from lateness or plagiarism. If you use someone's ideas, cite the source. If you use a direct quote, use quotation marks and cite the source.
How to get started: Some sample topics are listed below, but I welcome proposals of other topics, especially if they have relevance to your research interests.
The Survival Guide gives journals and other sources of information that might lead you to ideas. If you have a topic that you find interesting but don't know whether it is a good idea, let's talk. And if you need help finding a topic, I'll be glad to talk to you about your interests.
Throughout the semester: Let's talk about your project if you have questions, or if you have trouble finding appropriate references.
A note on software:
In general, your grade will be higher if you use any available Matlab software rather than writing your own, since this will give you more time to be original.
Projects chosen by other students this semester: Your project must be different from all of these, so either pick a different topic or check with me to make sure that your ideas are sufficiently different from what these students chose.
Saddlepoint problems
If we minimize a quadratic function subject to linear equality
constraints, we obtain a linear system with a matrix of
the form H = [A, B; B^T, 0], where A is symmetric and positive
definite. The same structure often arises in solving PDEs.
The matrix H is symmetric but indefinite (it has both positive
and negative eigenvalues), and iterative methods tend to
converge slowly. There
are many open questions about computing preconditioners.
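As a concrete illustration (not part of the assignment), here is a small
Python/SciPy sketch that builds a saddle-point matrix of this form and
applies MINRES, which handles symmetric indefinite systems; the matrix
sizes and random data are arbitrary choices of mine.

    import numpy as np
    from scipy.sparse import bmat, identity, random as sparse_random, csr_matrix
    from scipy.sparse.linalg import minres

    n, m = 200, 50
    rng = np.random.default_rng(0)

    # A: sparse symmetric positive definite (M M^T plus a shift guarantees this)
    M = sparse_random(n, n, density=0.02, random_state=0, format='csr')
    A = M @ M.T + identity(n)

    # B: an n-by-m constraint matrix, assumed to have full column rank
    B = csr_matrix(rng.standard_normal((n, m)))

    # The saddle-point matrix H = [A, B; B^T, 0]: symmetric but indefinite
    H = bmat([[A, B], [B.T, None]], format='csr')

    # MINRES accepts symmetric indefinite systems; without a good
    # preconditioner, convergence is typically slow.
    b = rng.standard_normal(n + m)
    x, info = minres(H, b)
    print("minres exit flag:", info,
          " relative residual:", np.linalg.norm(H @ x - b) / np.linalg.norm(b))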
Implicitly restarted Arnoldi and its relation to linear system solving
The implicitly restarted Arnoldi method is the method of
choice for finding a few eigenvalues of a large sparse
matrix. It is based on restarting the Krylov sequence
using a vector (or set of vectors) derived from the
previous sequence. Can the information it builds upon
be used to restart the Krylov sequence for linear system
solving?
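For experimentation, the implicitly restarted Arnoldi method is available
through ARPACK, which scipy.sparse.linalg.eigs wraps; the sketch below uses
an arbitrary nonsymmetric test matrix of my own choosing.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigs   # ARPACK: implicitly restarted Arnoldi

    n = 2000
    # A nonsymmetric tridiagonal test matrix
    A = diags([-1.2 * np.ones(n - 1), 2.0 * np.ones(n), -0.8 * np.ones(n - 1)],
              offsets=[-1, 0, 1], format='csr')

    # Ask for the 6 eigenvalues of largest magnitude; ncv is the number of
    # Arnoldi vectors kept, i.e., the Krylov subspace size between restarts.
    vals, vecs = eigs(A, k=6, which='LM', ncv=30)
    print(np.sort_complex(vals))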
Field of values and applications
The field of values of a matrix A is the set of values
x^H A x over all vectors x of norm 1. The concept
has surprisingly practical applications; a recent
issue of SIAM News discussed how a computation of the field
of values of a matrix can be used to save fuel in space
flight. Open questions include efficient algorithms
for the computation of the field of values and new
uses for it.
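One standard way to compute boundary points of the field of values (often
attributed to Johnson) rotates the matrix, takes the eigenvector for the
largest eigenvalue of the Hermitian part, and evaluates x^H A x; the sketch
below is my own illustration of that idea on an arbitrary test matrix.

    import numpy as np

    def fov_boundary(A, npoints=180):
        """Boundary points of the field of values (numerical range) of A."""
        pts = []
        for theta in np.linspace(0.0, 2.0 * np.pi, npoints, endpoint=False):
            R = np.exp(1j * theta) * A
            H = 0.5 * (R + R.conj().T)      # Hermitian part of e^{i theta} A
            w, V = np.linalg.eigh(H)        # eigenvalues in ascending order
            x = V[:, -1]                    # unit eigenvector for the largest one
            pts.append(x.conj() @ A @ x)    # a boundary point x^H A x
        return np.array(pts)

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0j, 1.0],
                  [0.0, 0.0, -1.0]])
    print(fov_boundary(A)[:5])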
Multilevel aggregation for Markov chain problems
Multilevel algorithms have been applied to Markov chains
to find the stationary vector (i.e., the eigenvector
corresponding to the eigenvalue 1), but the theory is
much less developed than that for so-called multigrid
algorithms for differential equations. Can progress be
made in the theory or in improving the efficiency of the heuristics?
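The sketch below is not a multilevel method; it only states the underlying
problem in code: for a column-stochastic transition matrix P, the stationary
vector satisfies P x = x with nonnegative entries summing to 1, found here
by simple power iteration on a small random chain of my own construction.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    P = rng.random((n, n))
    P /= P.sum(axis=0, keepdims=True)   # make columns sum to 1 (column stochastic)

    x = np.full(n, 1.0 / n)             # start from the uniform distribution
    for _ in range(2000):               # power iteration: x <- P x, renormalized
        x = P @ x
        x /= x.sum()

    print("stationary vector:", x)
    print("residual ||Px - x|| =", np.linalg.norm(P @ x - x))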
Image processing using linearization of total variation
Image deblurring has typically been done using linear techniques,
but minimization with a total variation penalty is quite
successful. Open questions include whether its cost can be
reduced to be competitive in time with the linear methods.
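As a rough, hedged sketch of the idea (my own simplification, not the
methods in the literature): in one dimension, smoothed total-variation
denoising minimizes 0.5*||x - b||^2 + lam * sum_i sqrt((Dx)_i^2 + beta),
and even plain gradient descent recovers a noisy step edge reasonably well.
Real TV deblurring also includes a blurring operator and far better
solvers; the parameter values here are arbitrary.

    import numpy as np

    def tv_denoise_1d(b, lam=0.5, beta=1e-3, step=0.015, iters=3000):
        """Gradient descent on the smoothed total-variation objective."""
        x = b.copy()
        for _ in range(iters):
            d = np.diff(x)                      # forward differences D x
            w = d / np.sqrt(d * d + beta)       # gradient of sum sqrt(d^2 + beta)
            grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))  # D^T w
            x = x - step * ((x - b) + lam * grad_tv)
        return x

    t = np.linspace(0.0, 1.0, 200)
    clean = (t > 0.5).astype(float)             # a single step edge
    noisy = clean + 0.1 * np.random.default_rng(2).standard_normal(t.size)
    denoised = tv_denoise_1d(noisy)
    print("mean error before:", np.abs(noisy - clean).mean(),
          " after:", np.abs(denoised - clean).mean())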
Cholesky-infinity and alternatives
In applications such as function minimization by Newton's method,
we form a Cholesky factorization of a matrix that should be
positive definite but sometimes fails to be. Chol-inf is one
algorithm for "fixing up" the matrix as the factorization is
computed. Alternatives are also available, for example, those
collected in a paper by Fang and O'Leary.
Open questions include why Chol-inf works well and which
method is best.
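The sketch below is not the Cholesky-infinity algorithm; it is the simplest
member of the same family, included only for contrast: add a multiple of the
identity and refactor until plain Cholesky succeeds. The starting shift and
growth factor are arbitrary choices of mine.

    import numpy as np

    def cholesky_with_added_identity(H, tau0=1e-3, growth=10.0, max_tries=60):
        """Return (L, tau) with L L^T = H + tau*I positive definite."""
        tau = 0.0
        for _ in range(max_tries):
            try:
                L = np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
                return L, tau
            except np.linalg.LinAlgError:       # factorization hit a bad pivot
                tau = tau0 if tau == 0.0 else growth * tau
        raise RuntimeError("could not make the matrix positive definite")

    # An indefinite "Hessian" to exercise the fix-up
    H = np.array([[2.0, 0.0, 0.0],
                  [0.0, -1.0, 0.0],
                  [0.0, 0.0, 4.0]])
    L, tau = cholesky_with_added_identity(H)
    print("shift tau =", tau,
          " factorization error:", np.abs(L @ L.T - (H + tau * np.eye(3))).max())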
CUR factorizations and accuracy guarantees
CUR factorizations approximate a matrix A using C, which contains
a subset of the columns of A; R, which contains a subset of the rows
of A; and U, chosen to minimize ||A - CUR||. When the
rows and columns are chosen by random sampling, there are
some impressive theoretical bounds on how good the approximation
is guaranteed to be (almost all of the time). Unfortunately,
the dimensions of A need to be in the billions before these
bounds apply.
Open questions include how to reliably apply such methods to
matrices with dimensions in the thousands.
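A minimal sketch of the pieces, under assumptions of mine: sample columns
and rows uniformly at random (the schemes with the strong guarantees use
more careful, e.g. leverage-score, sampling) and take U = C^+ A R^+, which
minimizes ||A - CUR|| in the Frobenius norm for the chosen C and R.

    import numpy as np

    def cur_approx(A, c, r, rng):
        cols = rng.choice(A.shape[1], size=c, replace=False)   # sampled column indices
        rows = rng.choice(A.shape[0], size=r, replace=False)   # sampled row indices
        C = A[:, cols]
        R = A[rows, :]
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)          # best U in the Frobenius norm
        return C, U, R

    rng = np.random.default_rng(3)
    # A low-rank-plus-noise test matrix
    A = (rng.standard_normal((300, 10)) @ rng.standard_normal((10, 200))
         + 0.01 * rng.standard_normal((300, 200)))
    C, U, R = cur_approx(A, c=20, r=20, rng=rng)
    print("relative error:", np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))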