High Performance Computing II

General aims of the course

The aim of the course is to provide computational scientists with knowledge and practical skills that are necessary to actively participate in the development of parallel and high-performance software within their respective research fields. The course is open to researchers from any discipline with basic knowledge of parallel computer architectures, parallel programming models, basic linear algebra, and numerical methods.

Learning outcomes

After completing the course, you should be able to…

  1. …describe and evaluate various data decomposition and load balancing techniques.
  2. …describe common parallel algorithms for various problems.
  3. …use HPC software libraries and show familiarity with some state-of-the-art HPC software libraries and tools.
  4. …write structured parallel programs based on parallel programming patterns.
  5. …analyze the computational patterns and data flows in algorithms used in your research field.

Course contents

The course focuses on three themes:

  1. General HPC techniques,
    includes topics such as weak and strong scalability, data distribution and load balancing techniques, parallel programming patterns, and explicit and recursive blocking.
  2. Fundamental parallel algorithms,
    covers algorithms from a variety of different areas such as dense and sparse linear algebra (systems of linear equations, eigenvalue problems), problems on graphs (partitioning, clustering), N-body simulations (range-limited methods, fast methods), and pseudo-random number generation.
  3. HPC library software and tools,
    gives an introduction to state-of-the-art HPC software libraries. Examples include libraries for dense linear algebra (BLAS, LAPACK, ScaLAPACK, PLASMA), sparse linear algebra (PETSc, SuiteSparse), fast transforms (FFTW), graph problems (Metis), and random number generation (SPRNG). Sample tools are profilers, tracers, and task graph visualization software.
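As a concrete illustration of the scalability notions in theme 1, the sketch below computes strong- and weak-scaling efficiency from timing data and uses Amdahl's law as an upper bound on speedup. The function names and formulas are illustrative choices, not course material:

```python
def strong_scaling(t1, tp, p):
    """Speedup and parallel efficiency for a fixed problem size.

    t1: runtime on 1 process, tp: runtime on p processes.
    """
    speedup = t1 / tp
    return speedup, speedup / p

def weak_scaling(t1, tp):
    """Weak-scaling efficiency: the problem size grows with p,
    so ideally tp equals t1 and the efficiency is 1."""
    return t1 / tp

def amdahl(s, p):
    """Amdahl's law: upper bound on speedup with serial fraction s."""
    return 1.0 / (s + (1.0 - s) / p)

print(strong_scaling(100.0, 25.0, 4))  # (4.0, 1.0): ideal strong scaling
print(amdahl(0.05, 16))                # ~9.14: 5% serial code caps speedup
```

Even a small serial fraction limits strong scaling sharply, which is one motivation for the data distribution and blocking techniques covered in the course.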


Prerequisites

Basic understanding of parallel computer architectures, practical skills in parallel programming using OpenMP and MPI, and introductory linear algebra and numerical methods.

Course schedule 2014

  •   Week 11 (preparatory work at home)
  •   Week 12 (lectures and labs in Umeå)
  •   Week 13 (homework)


Examination

A number of carefully documented programming assignments, including an individually tailored HPC project.


Registration

Send an e-mail to Mikael Rännar (mikael.rannar@hpc2n.umu.se) with the following information:

  • SeSE, High Performance Computing II
  • Name
  • E-mail address (you must use your university address, not Gmail, Yahoo, Hotmail, etc.)
  • Affiliation
  • Supervisor
  • Subject of PhD project

Also indicate whether you are applying for a travel grant.

Deadline for registration: 17 February 2014