10:00-11:00am Adaptive Cross Approximation for Ill-Posed Problems
Integral equations of the first kind with a smooth kernel and a perturbed right-hand side, which represents available contaminated data, arise in many applications. Discretization gives rise to linear systems of equations with a matrix whose singular values cluster at the origin. The solution of these systems of equations requires regularization, which has the effect that components of the computed solution connected to singular vectors associated with small singular values are damped or ignored. To compute a useful approximate solution, one typically needs approximations of only a fairly small number of the largest singular values and associated singular vectors of the matrix. This talk explores the possibility of determining these approximate singular values and vectors by adaptive cross approximation. The approach is particularly useful when a fine discretization of the integral equation is required and the resulting linear system of equations is of large dimension, because adaptive cross approximation makes it possible to compute only fairly few of the matrix entries. This talk presents joint work with T. Mach, M. Van Barel, and R. Vandebril.
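The idea described above can be illustrated with a minimal sketch (an assumption of this note, not code from the talk): adaptive cross approximation builds a low-rank factorization A ≈ UV from a few adaptively chosen rows and columns, touching only O(k(m+n)) matrix entries, and the largest singular values of A are then recovered cheaply from the small factors. The pivoting strategy and `entry` callback below are illustrative choices.

```python
import numpy as np

def aca(entry, m, n, max_rank, tol=1e-12):
    """Adaptive cross approximation with partial pivoting.
    entry(i, j) returns A[i, j]; the full matrix is never formed.
    Returns U (m x k), V (k x n) with A ~ U @ V."""
    U = np.zeros((m, 0))
    V = np.zeros((0, n))
    i = 0  # first row pivot (a simple common choice)
    for _ in range(max_rank):
        # residual of row i under the current approximation
        row = np.array([entry(i, j) for j in range(n)]) - U[i, :] @ V
        j = int(np.argmax(np.abs(row)))          # column pivot
        if abs(row[j]) < tol:
            break
        col = np.array([entry(k, j) for k in range(m)]) - U @ V[:, j]
        U = np.column_stack([U, col / row[j]])
        V = np.vstack([V, row])
        col[i] = 0.0                             # avoid reusing this row
        i = int(np.argmax(np.abs(col)))          # next row pivot
    return U, V

def approx_singular_values(U, V):
    """Singular values of U @ V from QR of the thin factors only."""
    _, Ru = np.linalg.qr(U)
    _, Rv = np.linalg.qr(V.T)
    return np.linalg.svd(Ru @ Rv.T, compute_uv=False)
```

For a discretized smooth kernel, the leading values returned by `approx_singular_values` closely match the largest singular values of the full matrix, at a fraction of the cost of forming it.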
11:00am-12:00pm Convergence Rates for Inverse-Free Rational Approximation of Matrix Functions
Many applications in science and engineering require the evaluation of matrix functions, such as the matrix exponential or matrix logarithm, of a large matrix. We are concerned with the situation when one is interested in computing a matrix function of the form f(A)v, where A is a large square matrix and v is a vector. In situations when it is impractical or impossible to evaluate f(A) explicitly, one often approximates f(A)v by first reducing A by a Krylov subspace method. Standard Krylov methods deliver polynomial approximations of f(A), while rational Krylov subspace methods give rational approximations with predetermined poles. The former methods generally require more Krylov steps and a Krylov subspace of larger dimension than the latter to yield approximations of comparable accuracy. Therefore, rational Krylov methods often yield approximations that are faster to evaluate than standard Krylov methods. It follows that if many evaluations of f(A)v are required (e.g., because f depends on a parameter that is varied), then it may be advantageous to use a rational Krylov method instead of a standard one. However, the solution of linear systems of equations with shifted matrices A, required to construct an orthogonal basis for a rational Krylov subspace, may create numerical difficulties and/or require excessive computing time. It therefore may be attractive to use inverse-free rational Krylov methods, which require less storage space and yield simpler approximations of f(A)v than standard Krylov methods, and avoid the solution of linear systems of equations with shifted matrices A. We derive geometric convergence rates for approximating matrix functions by using inverse-free rational Krylov methods. This talk presents joint work with C. Jagels, T. Mach, M. Van Barel, and R. Vandebril.
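To make the standard (polynomial) Krylov approach concrete, here is a minimal sketch (my illustration, not the speakers' code, and using f = exp as an assumed example): build an orthonormal basis V of the Krylov subspace K_m(A, v) by Arnoldi, project A to the small Hessenberg matrix H = V^T A V, and approximate f(A)v ≈ ||v|| V f(H) e_1. SciPy's `expm` is assumed available for evaluating f on the small projected matrix.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_fAv(A, v, m):
    """Approximate exp(A) @ v from the m-dimensional Krylov subspace
    K_m(A, v), so that only the small m x m matrix H is exponentiated."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # lucky breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # f(A) v ~ beta * V_m * f(H_m) * e_1
    return beta * V[:, :m] @ expm(Hm)[:, 0]
```

A rational or inverse-free rational Krylov variant would replace the subspace built here (powers of A applied to v) by one involving shifted solves, respectively products avoiding those solves, but the projection step has the same shape.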