In this contribution we consider the GMRES method, the most widely known and used representative of the class of nonsymmetric Krylov subspace methods. The method constructs an orthonormal basis of the associated Krylov subspace via the Arnoldi process and then solves a transformed Hessenberg least squares problem at each iteration step. In exact arithmetic the Arnoldi vectors are orthogonal. In finite precision computation, however, the orthogonality is lost, which may potentially affect both the convergence rate and the ultimate attainable accuracy of the computed approximate solution. One may therefore want to keep the computed orthogonality as close to the machine precision as possible by using orthogonal transformations, e.g. Householder reflections. The price is, unfortunately, too high for most applications. The Gram-Schmidt process is a cheaper alternative, and its modified version represents the most frequently used compromise. Although the (classical or modified) Gram-Schmidt orthogonalization may produce a basis that has lost its orthogonality completely, in the GMRES context there is a very important relation between the loss of orthogonality among the Arnoldi vectors and the decrease of the residual of the computed approximation toward its final value. It was proved that, for the modified Gram-Schmidt GMRES, the Arnoldi vectors lose their orthogonality completely only after the residual of the computed approximation has been reduced close to its final level of accuracy, which is proportional to the machine precision multiplied by the condition number of the system matrix. For the classical Gram-Schmidt implementation the corresponding level of limiting accuracy is, of course, significantly different. The modified Gram-Schmidt GMRES, however, performs almost exactly as well as the Householder implementation, and both implementations are backward stable.
This suggests that unless the system matrix is extremely ill-conditioned, the use of the Householder or modified Gram-Schmidt GMRES is theoretically well justified. The presented results lead to important conclusions about the parallel implementation and application of the GMRES method. At the end of our talk we mention results on other minimum residual method implementations that are mathematically equivalent to GMRES (Simpler GMRES, GCR, ORTHOMIN, ORTHODIR) but whose numerical behavior can be significantly different.
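The difference between the classical and modified Gram-Schmidt processes described above can be illustrated with a small NumPy sketch; the test matrix, its dimensions, and its condition number of roughly 10^8 are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """Orthonormalize the columns of A by the classical (modified=False)
    or modified (modified=True) Gram-Schmidt process."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            if modified:
                v -= (Q[:, i] @ v) * Q[:, i]        # project against the updated vector
            else:
                v -= (Q[:, i] @ A[:, j]) * Q[:, i]  # project against the original column
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Ill-conditioned test matrix with singular values 1, ..., 1e-8 (kappa ~ 1e8).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = (U[:, :20] * np.logspace(0, -8, 20)) @ V.T

# Loss of orthogonality ||I - Q^T Q||: for MGS it grows like eps * kappa(A),
# while CGS can lose orthogonality completely.
for modified in (False, True):
    Q = gram_schmidt(A, modified)
    loss = np.linalg.norm(np.eye(20) - Q.T @ Q)
    print("MGS" if modified else "CGS", "loss of orthogonality:", loss)
```

Running this, the MGS loss of orthogonality stays near eps times the condition number, while the CGS loss is many orders of magnitude larger, consistent with the behavior discussed above.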

References

[1] C. C. Paige, M. Rozloznik, Z. Strakos, Modified Gram-Schmidt (MGS), Least Squares, and Backward Stability of MGS-GMRES, SIAM J. Matrix Anal. Appl. 28 (2006), pp. 264-284.
[2] A. Greenbaum, M. Rozloznik, Z. Strakos, Numerical Behaviour of the Modified Gram-Schmidt GMRES Method, BIT 37:3 (1997), pp. 706-719.
[3] J. Drkosova, A. Greenbaum, M. Rozloznik, Z. Strakos, Numerical Stability of the GMRES Method, BIT 35 (1995), pp. 309-330.
[4] P. Jiranek, M. Rozloznik, M. H. Gutknecht, How to Make Simpler GMRES and GCR More Stable, SIAM J. Matrix Anal. Appl. 30 (2008), pp. 1483-1499.
[5] P. Jiranek, M. Rozloznik, Adaptive Version of Simpler GMRES, Numerical Algorithms 53 (2010), pp. 93-112.

Talk Two

Date:

June 3rd (Wednesday), 2015

Time:

3:00-4:00pm

Title:

Numerical behavior of saddle point solvers

Summary:

Symmetric indefinite saddle-point problems arise in many application areas such as computational fluid dynamics, electromagnetism, optimization and nonlinear programming. Particular attention has been paid to their iterative solution. In this contribution we analyze several theoretical issues and practical aspects related to the application of preconditioners in Krylov subspace methods. Several structure-dependent schemes have been proposed and analyzed. Indeed, the nature of these systems makes it possible to consider not only simple preconditioning strategies and scalings, but also preconditioners with a particular block structure. It is well known that the application of a positive definite block-diagonal preconditioner still leads to a preconditioned system with a symmetric structure similar to the original saddle point system. On the other hand, the application of a symmetric indefinite or nonsymmetric block-triangular preconditioner leads to nonsymmetric preconditioned systems, and therefore general nonsymmetric iterative solvers should be considered. The experiments, however, indicate that Krylov subspace methods perform surprisingly well on practical problems, even those methods which should theoretically work only for symmetric systems. We illustrate our theory mainly on the constraint (null-space projection) preconditioner, but several results hold and can be extended to other classes of methods and preconditioners. In the case of the indefinite constraint preconditioner, research has focused on the use of the preconditioned conjugate gradient method (PCG). The convergence of PCG for a typical choice of right-hand side has been analyzed, and it was shown that solving the preconditioned system by means of PCG is mathematically equivalent to applying the CG method to the system projected onto the kernel of the constraint operator.
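The block-diagonal case mentioned above can be made concrete with a small dense sketch; the matrix sizes, the random blocks, and the choice of the exact Schur complement for the (2,2) block of the preconditioner are illustrative assumptions:

```python
import numpy as np

# Small dense saddle-point system K = [[A, B^T], [B, 0]] with SPD leading block.
rng = np.random.default_rng(3)
n, m = 8, 3
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                     # SPD (1,1) block
B = rng.standard_normal((m, n))                 # full row rank constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])

# SPD block-diagonal preconditioner P = diag(A, S) with S = B A^{-1} B^T.
S = B @ np.linalg.solve(A, B.T)
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# With P = L L^T, the symmetrically preconditioned matrix L^{-1} K L^{-T}
# is again symmetric (and indefinite), so symmetric Krylov solvers such as
# MINRES remain applicable.
L = np.linalg.cholesky(P)
Y = np.linalg.solve(L, K)                       # L^{-1} K
M = np.linalg.solve(L, Y.T).T                   # L^{-1} K L^{-T}
print(np.allclose(M, M.T))
```

The point of the sketch is only the preserved symmetric structure; with an indefinite or block-triangular preconditioner the analogous construction loses symmetry, which is why nonsymmetric solvers enter the picture.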
Consequently, the primary variables in the PCG approximate solution always converge to the exact solution, while the dual variables may not converge or can even diverge. The (non)convergence of the dual variables is then reflected in the (non)convergence of the total residual vector. It can often be observed in practical problems that even a simple scaling of the leading diagonal block by its diagonal entries may easily recover the convergence of the dual iterates. An alternative strategy consists in replacing the conjugate gradient direction vector, when computing the dual iterates, with a minimum residual direction vector. The necessity of scaling the system is even more pronounced if the method is applied in finite precision arithmetic. It can be shown that rounding errors may considerably influence the numerical behavior of the scheme. More precisely, bad scaling, and thus nonconvergence of the dual iterates, significantly affects the maximum attainable accuracy of the computed primary iterates. Therefore, applying a safeguard or pre-scaling technique not only ensures the convergence of the method, but also leads to a high maximum attainable accuracy of all computed iterates. For large-scale saddle point problems, the application of exact iterative schemes and preconditioners may be computationally expensive. In practical situations, only approximations to the inverses of the diagonal block or the related cross-product matrices are considered, giving rise to inexact versions of various solvers. Therefore, the approximation effects must be carefully studied. Two main representatives of the segregated solution approach are analyzed: the Schur complement reduction method, based on an (iterative) elimination of the primary variables, and the null-space projection method, which relies on a basis for the null space of the constraints.
In particular, for several mathematically equivalent implementations we study the influence of inexactly solving the inner systems and estimate their maximum attainable accuracies. We can show that some implementations ultimately lead to residuals on the roundoff level, even though the inner systems are solved inexactly, at a level much higher than their own limiting accuracy. Indeed, our results confirm that the generic and cheapest implementations deliver approximate solutions which satisfy either the second or the first block equation to working accuracy. We give a theoretical explanation for behavior that has previously been observed or is tacitly known. The implementations that we point out as optimal turn out to be those which are widely used and often suggested in applications.
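For comparison, the second segregated approach, the null-space projection method, admits a similarly compact sketch; the dense matrices, the SVD-based null-space basis, and the exact inner solves are again illustrative assumptions:

```python
import numpy as np

# Saddle-point system [[A, B^T], [B, 0]] [x; y] = [f; g].
rng = np.random.default_rng(2)
n, m = 30, 10
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                     # SPD (1,1) block
B = rng.standard_normal((m, n))                 # full row rank constraint block
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Null-space projection method: work in the kernel of the constraint B x = g.
x_p, *_ = np.linalg.lstsq(B, g, rcond=None)     # particular solution of B x = g
_, _, Vt = np.linalg.svd(B)
Z = Vt[m:].T                                    # orthonormal basis of null(B)
u = np.linalg.solve(Z.T @ A @ Z, Z.T @ (f - A @ x_p))
x = x_p + Z @ u                                 # primary variables
y, *_ = np.linalg.lstsq(B.T, f - A @ x, rcond=None)  # dual variables

# Check against solving the full saddle-point system directly.
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
z = np.linalg.solve(K, np.concatenate([f, g]))
print(np.allclose(np.concatenate([x, y]), z))
```

Here the reduced operator Z^T A Z plays the role of the projected system, and in the inexact variants the projections and inner solves are only approximated, which is where the accuracy analysis above applies.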

Contact

Ken Hayami E-mail: hayami[at]nii.ac.jp *Please replace [at] with @.