Applications of this general approach are remarkably varied. They range from the well-known uses of KSMs for
solving linear systems and eigenvalue problems to less established ones such as computing approximate
spectral densities, solving systems of ordinary differential equations, and dimension reduction in machine
learning.
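
For orientation, here is a minimal Python/NumPy sketch of the Arnoldi process that underlies most KSMs: it builds an
orthonormal basis of the Krylov subspace K_m(A, v) = span{v, Av, ..., A^(m-1) v} together with a small Hessenberg
matrix whose eigenvalues (Ritz values) approximate those of A. This is an illustrative sketch only, not code from any
of the talks; the test matrix, subspace size, and breakdown tolerance are assumptions.

    import numpy as np

    def arnoldi(A, v, m):
        # Build an orthonormal basis V of K_m(A, v) and the (m+1) x m
        # upper-Hessenberg matrix H satisfying A @ V[:, :m] = V @ H.
        n = v.shape[0]
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):          # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:         # invariant subspace found
                return V[:, :j + 1], H[:j + 1, :j + 1]
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    # Ritz values from the leading square block of H approximate eigenvalues of A;
    # for symmetric A the process reduces to the Lanczos three-term recurrence.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200)); A = A + A.T
    V, H = arnoldi(A, rng.standard_normal(200), 30)
    k = H.shape[1]
    ritz = np.linalg.eigvalsh(H[:k, :k])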
June 4: Tutorials
________________
13:00-13:30 Registration, coffee and tea
13:30-16:30 Krylov subspace methods
Yousef Saad, University of Minnesota, Minneapolis, MN, USA
This two-lecture tutorial will give an overview of Krylov subspace methods and discuss the related algorithms,
theory, and applications.
13:30-14:45 First lecture
14:45-15:15 Coffee and tea
15:15-16:00 Second lecture
16:00-16:30 Discussions and Q&A
June 5: Colloquium
___________________
13:00-13:30 Registration, coffee and tea
13:30-14:15 Divide and conquer algorithms for large Hermitian eigenvalue problems
Yousef Saad, University of Minnesota, MN, USA
Divide-and-conquer paradigms can lead to complex but efficient and flexible algorithms for solving large Hermitian
eigenvalue problems. This talk will discuss how various such strategies can be combined to exploit both
"spectrum slicing", i.e., computing slices of the spectrum independently, and domain decomposition.
These strategies are independent of each other but both are essential if one has to compute very large parts of the
spectrum, as is the case in approaches that deal with excited states in solid state physics. The presentation will
begin with an overview of polynomial filtering techniques. This general approach can be quite efficient in the
situation where the matrix-vector product operation is inexpensive and when a large number of eigenvalues is
sought. We present an algorithm (and package) that combines the Lanczos algorithm with partial
reorthogonalization and polynomial filtering. An alternative to polynomial filtering that is attracting growing
interest is a class of methods that exploit filtering by rational functions. Good representatives of this general
approach are the FEAST eigensolver and the Sakurai-Sugiura algorithm. Here we will argue that the standard way of
selecting these rational filters, which is based on using the Cauchy integral, can be substantially improved
-- especially when iterative solvers are involved. Finally, the talk will discuss our ongoing work on Domain
Decomposition (DD) type methods that rely on spectral Schur complements combined with Newton's iteration.
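
As an illustration of the polynomial filtering idea, the sketch below applies a Chebyshev filter to a block of vectors:
the unwanted interval [a, b] of the spectrum is mapped to [-1, 1], where Chebyshev polynomials remain bounded, so the
three-term recurrence damps those eigencomponents relative to the wanted ones. This is a generic Python/NumPy sketch
under assumed interval endpoints and degree, not the algorithm or package presented in the talk.

    import numpy as np

    def chebyshev_filter(A, X, k, a, b):
        # Apply the degree-k Chebyshev polynomial C_k((A - c I)/e) to the block X,
        # where [a, b] is the part of the spectrum to be damped: it is mapped to
        # [-1, 1], on which C_k stays bounded, while eigencomponents outside [a, b]
        # are amplified.
        e = (b - a) / 2.0                  # half-width of the damped interval
        c = (b + a) / 2.0                  # its center
        Y = (A @ X - c * X) / e            # C_1 term of the recurrence
        for _ in range(2, k + 1):          # C_{j+1} = 2 t C_j - C_{j-1}
            Y_new = 2.0 * (A @ Y - c * Y) / e - X
            X, Y = Y, Y_new
        return Y

    # Example: damp the upper 80% of the spectrum of a symmetric matrix so that a
    # subsequent Lanczos run (not shown) targets the lowest eigenvalues.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((300, 300)); A = (A + A.T) / 2
    lmin, lmax = np.linalg.eigvalsh(A)[[0, -1]]
    X = rng.standard_normal((300, 5))
    Xf = chebyshev_filter(A, X, k=20, a=lmin + 0.2 * (lmax - lmin), b=lmax)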
14:15-15:00 Unite and Conquer Approach for High Scale Numerical Computation, Application to Krylov Methods
Nahid Emad, University of Versailles (Maison de la Simulation, PRiSM)
The ability to exploit emerging exascale computational systems will require a careful review and redesign of core
numerical algorithms and their implementations to fully exploit multiple levels of concurrency, hierarchical memory
structures and heterogeneous processing units available in these computational platforms. In this talk, we highlight
the constraints imposed by these architectures, which inform the conception of smart co-designed algorithms. Following
this insight, we present the "unite and conquer" approach to solve linear systems of equations and eigenvalue
problems for extreme-scale computing. The unite and conquer approach focuses on decreasing the number of restart
cycles in restarted numerical methods by coupling several restarted methods, called co-methods, either synchronously
or asynchronously. At the end of a restart cycle, each co-method locally gathers the available results of all
collaborating co-methods and selects the best one in order to create its restarting information. This can reduce the
overall number of cycles needed for convergence. We discuss the properties of these methods that make
them well adapted to large-scale multi-level parallel architectures and highlight the generality of the approach. We
present some experiments validating the approach for unite and conquer Krylov methods on several parallel
platforms.
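
A minimal synchronous toy of this co-method coupling (not the actual implementation discussed in the talk) might look
as follows: two restarted Arnoldi-type co-methods with different, assumed subspace sizes each run one cycle, and at
every restart both adopt the best available (smallest-residual) Ritz vector as their restarting information.

    import numpy as np

    def ritz_cycle(A, v, m):
        # One Arnoldi cycle of size m from start vector v: return the dominant
        # Ritz value, the corresponding Ritz vector, and its residual norm.
        n = v.shape[0]
        V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                   # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        evals, evecs = np.linalg.eig(H[:m, :m])
        k = np.argmax(np.abs(evals))                 # dominant Ritz pair
        theta = evals[k].real
        x = V[:, :m] @ evecs[:, k].real
        x = x / np.linalg.norm(x)
        return theta, x, np.linalg.norm(A @ x - theta * x)

    # Two co-methods with different subspace sizes; "unite" at each restart.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 500)); A = A + A.T      # symmetric test matrix
    starts, sizes = [rng.standard_normal(500) for _ in range(2)], [8, 16]
    for _ in range(50):
        results = [ritz_cycle(A, v, m) for v, m in zip(starts, sizes)]
        theta, x, res = min(results, key=lambda r: r[2])  # best available result
        starts = [x.copy() for _ in sizes]                # restart every co-method from it
        if res < 1e-8:
            break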
15:00-15:30 Coffee and tea
15:30-16:15 Parameter auto-tuning at runtime for parallel and distributed KSM
Serge Petiton, University of Lille Sciences and Technologies and Maison de la Simulation
The convergence and efficiency of parallel and distributed KSMs, such as GMRES and ERAM, greatly depend on the
supercomputers used and on the choice of several parameters, such as the subspace size or the restarting strategy,
which are difficult to set efficiently before the computation. Communication-minimizing and communication-avoiding
strategies may also be developed to reduce several costs. Thus, these methods have many correlated parameters that
may be optimized using auto-tuning strategies to accelerate convergence and to minimize storage, data movement, and
energy consumption.
In this talk, we will survey some potential auto-tuning strategies for parallel and distributed KSMs. We will
discuss results obtained on clusters of accelerators concerning the auto-tuning of subspace sizes, orthogonalization
schemes, and restarting strategies. In conclusion, we will discuss auto-tuning strategies for KSMs on future exascale
supercomputers, leading to intelligent KSMs capable of making “smart” decisions during the computation.
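
As a rough sketch of one such idea, the loop below drives SciPy's restarted GMRES one cycle at a time and enlarges the
restart (subspace) size whenever the residual reduction stagnates. The test matrix, thresholds, and growth rule are
assumptions made only for this sketch; they are not the runtime auto-tuning strategies evaluated in the talk.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    rng = np.random.default_rng(2)
    n = 2000
    # Nonsymmetric, diagonally dominant tridiagonal test matrix.
    A = diags([-1.0, 2.4, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
    b = rng.standard_normal(n)

    x = np.zeros(n)
    m = 10                                   # initial subspace (restart) size
    res = np.linalg.norm(b)
    for _ in range(200):
        x, _ = gmres(A, b, x0=x, restart=m, maxiter=1)   # one restart cycle
        new_res = np.linalg.norm(b - A @ x)
        if new_res <= 1e-5 * np.linalg.norm(b):          # matches GMRES's default rtol
            break
        if new_res > 0.5 * res:              # poor reduction: enlarge the subspace
            m = min(2 * m, 100)
        res = new_res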