057.020 VSC-School I Courses in High Performance Computing

2019W, VU, 2.0h, 1.5 EC, to be held in blocked form

Properties

  • Semester hours: 2.0
  • Credits: 1.5
  • Type: VU (Lecture and Exercise)

Learning outcomes

After successful completion of the course, students are able to

1) Linux and First Steps on the VSC Clusters

  • login to the VSC Systems using Secure Shell (SSH) and individually configure their own Linux environment to speed up work on the cluster,
  • use at least one common text editor to modify text,
  • use the 25 most important Linux shell commands, for example to create, copy, move and delete files,
  • create shell scripts that automate simple sequences of commands.
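
The last outcome above can be sketched as a minimal shell script; the directory and file names are made up for illustration:

```shell
#!/bin/sh
# A minimal shell script automating a simple sequence of commands:
# create a directory and a file, copy the file, rename the copy,
# list the results and clean up again.
set -e                                   # abort on the first failing command

mkdir -p demo                            # create a working directory
echo "hello cluster" > demo/input.txt    # create a file
cp demo/input.txt demo/backup.txt        # copy it
mv demo/backup.txt demo/input.bak        # rename (move) the copy
ls demo                                  # lists: input.bak  input.txt
rm -r demo                               # remove the working directory
```

Saved as e.g. `demo.sh` and made executable with `chmod +x demo.sh`, such a script runs the whole sequence with a single command.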

2) Introduction to Working on the VSC Clusters

  • describe in word and sketch how a typical high-performance computing cluster is structured, 
  • describe how the batch system works at the VSC,
  • develop the workflows on the VSC necessary for their own research work,
  • use the module environment on the VSC,
  • compile programs on the VSC,
  • create batch jobs for the workload manager SLURM deployed at VSC and submit them for execution,
  • check the status of submitted jobs and, once they have finished, verify that each job ran correctly and assess its runtime.
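
A batch job of the kind described above can be sketched as follows; the job, module and program names are placeholders, the real values depend on the VSC system and should be taken from its documentation:

```shell
#!/bin/bash
# Minimal SLURM batch script (job.sh) -- a sketch with placeholder names.
#SBATCH --job-name=my_first_job
#SBATCH --nodes=1
#SBATCH --ntasks=16          # number of MPI ranks to start
#SBATCH --time=00:10:00      # wall-time limit hh:mm:ss

module purge                 # start from a clean module environment
module load openmpi          # hypothetical module name

srun ./my_program            # run the program on the allocated resources
```

The script is submitted with `sbatch job.sh`; `squeue -u $USER` shows the status of one's own jobs, and after completion `sacct -j <jobid>` reports, among other things, the exit state and runtime of the job.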

3) Parallelization with MPI (Message Passing Interface)

  • differentiate between pure shared-memory architectures and high-performance computing clusters (a combination of distributed-memory and shared-memory architectures) and name the consequences for the parallelization and execution of programs,
  • explain the main advantages and disadvantages of the parallelization concepts (distributed-memory, shared-memory and hybrid parallelization),
  • select the most suitable method for parallelization depending on the situation,
  • describe the essential concepts of the Message Passing Interface (MPI),
  • describe the communication between individual MPI processes and the generally implicit synchronization between them,
  • create a parallel program using MPI,
  • parallelize a serial program using MPI,
  • select methods of MPI communication that prevent deadlocks and ensure the correctness of the program,
  • compare these communication methods with regard to the runtime of the parallel program on a specific cluster,
  • determine from these last two points the best method of MPI communication on this particular cluster,
  • identify errors in MPI programs and
  • fix identified errors in MPI programs.
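
A first parallel program using MPI, as named in the outcomes above, can be as small as the following sketch; it requires an MPI installation (compile with `mpicc`, run with e.g. `srun` or `mpirun`):

```c
/* Minimal MPI program: every process reports its rank.
   A sketch; requires an MPI installation to compile and run. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* id of this process         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut the MPI runtime down  */
    return 0;
}
```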

 

Subject of course

1) Linux and First Steps on the VSC Clusters

  • Components of a high-performance computing cluster and the actual structure of the VSC,
  • logging in to VSC and transferring files between a workstation and the VSC,
  • the most important Linux shell commands and working with a text editor,
  • the most important functionalities of a Linux shell and writing of shell scripts,
  • configuration of one's own working environment by setting environment variables and editing configuration files.

2) Introduction to Working on the VSC Clusters

  • Components of a high-performance computing cluster, the difference between login and compute nodes, and the difference between shared-memory and distributed-memory architectures,
  • structure of the VSC and a brief overview of the available special purpose hardware such as graphics cards,
  • the module environment on the VSC and compiling one's own programs on the VSC,
  • the workload manager SLURM, its functioning and main options,
  • the most important possibilities of data storage on the VSC,
  • individual work steps on the systems of the VSC.

3) Parallelization with MPI (Message Passing Interface)

  • The main concepts of parallelizing programs on high-performance computing clusters (distributed-memory vs. shared-memory architectures), their main advantages and disadvantages,
  • selection of the most suitable method for parallelization depending on the situation,
  • overview of all aspects of the current MPI standard and their fields of application,
  • the essential concepts of MPI in detail,
  • the implementation of different possibilities of MPI communication,
  • concretely formulated tasks to create parallel MPI programs.
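
One of the communication possibilities covered here, the avoidance of deadlocks in pairwise exchange, can be sketched as follows (requires an MPI installation and exactly two ranks, e.g. `mpirun -np 2`):

```c
/* Pairwise exchange between two ranks -- a sketch.
   Posting a blocking MPI_Send first on BOTH ranks may deadlock for
   large messages; the combined MPI_Sendrecv lets the MPI library
   pair the send and receive safely. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, partner, sendval, recvval;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    partner = 1 - rank;            /* assumes exactly 2 ranks */
    sendval = rank * 100;          /* some data to exchange   */

    /* combined send and receive: cannot deadlock, unlike two
       blocking MPI_Send calls posted in the same order on both ranks */
    MPI_Sendrecv(&sendval, 1, MPI_INT, partner, 0,
                 &recvval, 1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d\n", rank, recvval);
    MPI_Finalize();
    return 0;
}
```

Comparing such variants (blocking, non-blocking, combined) with regard to runtime on the VSC is exactly the kind of exercise this block poses.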

 

Teaching methods

1) Linux and First Steps on the VSC Clusters

Lecture about:

  • components of a high-performance computing cluster and the actual structure of the VSC.

Lecture and practical exercises about:

  • logging in to VSC and transferring files between a workstation and the VSC,
  • the most important Linux shell commands and working with a text editor,
  • the most important functionalities of a Linux shell and writing of shell scripts,
  • configuration of one's own working environment by setting environment variables and editing configuration files.

2) Introduction to Working on the VSC Clusters

Lecture about:

  • components of a high-performance computing cluster, the difference between login and compute nodes, and the difference between shared-memory and distributed-memory architectures,
  • structure of the VSC and a brief overview of the available special purpose hardware such as graphics cards.

Lecture and practical exercises about:

  • the module environment on the VSC and compiling one's own programs on the VSC,
  • the workload manager SLURM, its functioning and main options,
  • the most important possibilities of data storage on the VSC,
  • individual work steps on the systems of the VSC.

3) Parallelization with MPI (Message Passing Interface)

Lecture about:

  • the main concepts of parallelizing programs on high-performance computing clusters (distributed-memory vs. shared-memory architectures), their main advantages and disadvantages,
  • selection of the most suitable method for parallelization depending on the situation,
  • overview of all aspects of the current MPI standard and their fields of application,
  • the essential concepts of MPI in detail.

Lecture and practical exercises about:

  • the implementation of different possibilities of MPI communication.

Practical exercises about:

  • concretely formulated tasks after each new topic, creating parallel MPI programs, which can be worked on independently or in teams of two students,
  • discussion among the participants as well as one-to-one with the course instructors.

 

Mode of examination

Immanent

Additional information

The course is divided into individual blocks. Individual registration for each block should be done via the homepage of the course.

Course dates

  • Tue, 08.10.2019, 14:00 - 18:00, FH Schulungsraum TU.it (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Linux and First Steps on the VSC Clusters
  • Tue, 15.10.2019, 09:00 - 16:00, FH Schulungsraum TU.it (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Introduction to Working on the VSC Clusters
  • 06.11.2019 - 08.11.2019, 09:00 - 16:30, FH Internet-Raum FH1 (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Parallelization with MPI (Message Passing Interface)
  • Tue, 14.01.2020, 09:00 - 16:00, FH Schulungsraum TU.it (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Introduction to Working on the VSC Clusters
VSC-School I Courses in High Performance Computing - Single appointments

  • Tue, 08.10.2019, 14:00 - 18:00, FH Schulungsraum TU.it (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Linux and First Steps on the VSC Clusters
  • Tue, 15.10.2019, 09:00 - 16:00, FH Schulungsraum TU.it (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Introduction to Working on the VSC Clusters
  • Wed, 06.11.2019, 09:00 - 16:30, FH Internet-Raum FH1 (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Parallelization with MPI (Message Passing Interface)
  • Thu, 07.11.2019, 09:00 - 16:30, FH Internet-Raum FH1 (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Parallelization with MPI (Message Passing Interface)
  • Fri, 08.11.2019, 09:00 - 16:30, FH Internet-Raum FH1 (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Parallelization with MPI (Message Passing Interface)
  • Tue, 14.01.2020, 09:00 - 16:00, FH Schulungsraum TU.it (TU Wien, Wiedner Hauptstraße 8-10, ground floor, red area): Introduction to Working on the VSC Clusters

The course is held in blocked form.

Examination modalities

Assessment is based on participation in the course blocks and on review of the submitted program examples.

Course registration

Registration modalities:

This course will be held in blocks, please see http://vsc.ac.at/training.

Registration for each of the blocks should be done within the corresponding course at http://vsc.ac.at/training.

Contact for this course: vsc-seminar@list.tuwien.ac.at

Curricula

  • Study code ALG: For all Students

Literature

No lecture notes are available.

Previous knowledge

Previous knowledge for the individual blocks: 

1) Linux and First Steps on the VSC Clusters

  • There is no previous knowledge required for this block.

2) Introduction to Working on the VSC Clusters

  • Students are able to independently apply the skills developed as learning outcomes of block 1 (Linux).

3) Parallelization with MPI (Message Passing Interface)

  • Students are able to independently apply the skills developed as learning outcomes of block 1 (Linux), and can create a serial program in at least one of the programming languages C/C++ or Fortran, compile it and execute it.

Language

English