185.A64 Compilers for Parallel Systems

2019S, VU, 2.0h, 3.0EC, to be held in blocked form

Properties

  • Semester hours: 2.0
  • Credits: 3.0
  • Type: VU Lecture and Exercise

Aim of course

Computer programs exhibit a certain degree of parallelism, i.e., some of
their operations can be executed simultaneously on parallel hardware.
This parallelism is either present implicitly in the program (e.g., a
loop whose iterations are independent of each other) or specified
explicitly by means of parallel language constructs.
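
To make this concrete, here is a minimal sketch in C with OpenMP (not part of the official course material; the function names and the constant factor are only illustrative): the first loop carries implicit parallelism only, while the second expresses the same parallelism explicitly through a parallel language construct.

    #include <stddef.h>

    /* Implicit parallelism: all iterations are independent, so a compiler
       may vectorize or parallelize this loop on its own. */
    void scale_implicit(float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = 2.0f * b[i];
    }

    /* Explicit parallelism: the same computation, but the parallelism is
       stated directly with an OpenMP work-sharing directive. */
    void scale_explicit(float *a, const float *b, size_t n)
    {
        #pragma omp parallel for
        for (size_t i = 0; i < n; i++)
            a[i] = 2.0f * b[i];
    }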

How can a compiler uncover this implicit parallelism and map it, together
with the explicitly specified parallelism, from an abstract representation
onto a parallel target platform so that the platform's parallel resources
are utilized as fully as possible?

Upon successful completion of this course, students will know
compilation and optimization techniques that meet these
requirements. They will understand the concept of data dependence and
its relevance for the parallelization of sequential code. They will be
able to: identify program transformations for dependence elimination
and locality optimization; describe and implement basic
vectorization and parallelization schemes; explain the polyhedral
representation of loops and use it for their automatic
parallelization; discuss analytical techniques for parallel programs;
and present compilation strategies for commonly used parallel
languages.
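
The notion of data dependence can be illustrated with a small C sketch (again not taken from the course material; the function names are illustrative): the first loop has no loop-carried dependence and can run as a parallel do-all, while the second carries a flow dependence that forbids naive parallelization.

    /* No loop-carried dependence: every iteration reads and writes disjoint
       elements, so the iterations may execute in any order or in parallel. */
    void independent(double *a, const double *b, const double *c, int n)
    {
        for (int i = 1; i < n; i++)
            a[i] = b[i] + c[i];
    }

    /* Loop-carried flow dependence: iteration i reads a[i-1], which iteration
       i-1 has just written, so the loop cannot be parallelized as written. */
    void dependent(double *a, const double *c, int n)
    {
        for (int i = 1; i < n; i++)
            a[i] = a[i-1] + c[i];
    }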


Subject of course

Overview of parallel systems, data dependence, dependence analysis and
testing, program transformations, loop transformations, control
dependence and if-conversion, vectorization, parallelization for
systems with shared (do-all, do-across loops) and distributed memory,
pipelining, locality optimizations, data reuse, unimodular and affine
transformations, parallelization in the polyhedral model, intermediate
representations, program analysis, compilation of data-parallel
languages (HPF), OpenMP, Cilk, PGAS languages (data distribution,
communication optimization, iteration scheduling, runtime environment)
and accelerators (GPUs); outlook: autotuning, run-time
parallelization.
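
As one concrete instance of the loop transformations listed above (an illustrative sketch only; names and sizes are made up), loop interchange, a unimodular transformation, turns a column-wise traversal of a row-major C array into a stride-1 traversal, improving locality and easing vectorization.

    #define N 1024

    /* Before: the inner loop walks down a column of a row-major array, so
       consecutive accesses are N elements apart (poor spatial locality). */
    void column_walk(double a[N][N], double b[N][N])
    {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                a[i][j] = 2.0 * b[i][j];
    }

    /* After loop interchange: the inner loop accesses consecutive elements
       (stride 1), which improves cache reuse and eases vectorization. The
       interchange is legal here because the loop nest carries no dependences. */
    void row_walk(double a[N][N], double b[N][N])
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 2.0 * b[i][j];
    }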

Additional information

ECTS breakdown (3.0 ECTS <=> 75 hours):

  • Lecture incl. follow-up work: 20 hours
  • Exercise and programming assignments: 30 hours
  • Preparation of the presentation: 15 hours
  • Exam preparation and exam: 10 hours

Lecturers

Institute

Examination modalities

Exercise examples, programming assignments, a presentation on a selected topic, and a written exam.

Course registration

Begin End Deregistration end
18.02.2019 23:59 18.03.2019 23:59

Curricula

Study Code   Obligation   Semester   Precon.   Info
066 931 Logic and Computation Mandatory elective
066 937 Software Engineering & Internet Computing Mandatory elective

Literature

Randy Allen, Ken Kennedy. Optimizing Compilers for Modern Architectures. Morgan Kaufmann, 2002.
Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools (2nd Edition). Pearson Addison Wesley, 2007.
Michael J. Wolfe. High-Performance Compilers for Parallel Computing. Addison-Wesley, 1996.
Hans Zima, Barbara Chapman. Supercompilers for Parallel and Vector Computers. ACM Press, 1990.

Previous knowledge

Basic concepts of compiler construction and parallel programming.

Preceding courses

Accompanying courses

Miscellaneous

Language

In English if required.