Computer programs exhibit a certain degree of parallelism, i.e., some of their operations can run simultaneously on parallel hardware. This parallelism either exists implicitly in the program (e.g., a loop whose iterations are independent of each other) or is specified explicitly by means of parallel language constructs.
How can a compiler uncover the implicit parallelism and map it, as well as the explicitly specified parallelism, from an abstract representation to a parallel target platform such that the platform's parallel resources are utilized as fully as possible?
Upon successful completion of this course, students will know
compilation and optimization techniques that meet these
requirements. They will understand the concept of data dependence and
its relevance for the parallelization of sequential code. They will be
able to: identify program transformations for dependency elimination
and locality optimization; describe and implement basic
vectorization and parallelization schemes; explain the polyhedral
representation of loops and use it for their automatic
parallelization; discuss analytical techniques for parallel programs
and present compilation strategies for commonly used parallel
languages.
Overview of parallel systems, data dependence, dependence analysis and
testing, program transformations, loop transformations, control
dependence and if-conversion, vectorization, parallelization for
systems with shared (do-all, do-across loops) and distributed memory,
pipelining, locality optimizations, data reuse, unimodular and affine
transformations, parallelization in the polyhedral model, intermediate
representations, program analysis, compilation of data-parallel
languages (HPF), OpenMP, Cilk, PGAS languages (data distribution,
communication optimization, iteration scheduling, runtime environment)
and accelerators (GPUs); outlook: autotuning, run-time
parallelization.
Exercises, programming assignments, presentation on a selected topic, written exam.
Randy Allen, Ken Kennedy. Optimizing Compilers for Modern Architectures. Morgan Kaufmann, 2002.
Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools (2nd Edition). Pearson Addison Wesley, 2007.
Michael J. Wolfe. High-Performance Compilers for Parallel Computing. Addison-Wesley, 1996.
Hans Zima, Barbara Chapman. Supercompilers for Parallel and Vector Computers. ACM Press, 1990.