194.055 Security, Privacy and Explainability in Machine Learning
This course is in all assigned curricula part of the STEOP.

2019S, VU, 2.0h, 3.0EC


  • Semester hours: 2.0
  • Credits: 3.0
  • Type: VU (Lecture and Exercise)

Aim of course

Machine Learning, as a discipline of Artificial Intelligence research, has gained increased interest and uptake in recent years, due to great advances in the power and effectiveness of algorithms for various complex tasks, such as in medical applications and computer vision.

Machine learning models are thus deployed in an ever-growing number of systems across various domains, often without the users interacting with those systems being aware of it. Besides the great benefits that machine learning offers, a number of aspects are critical to consider, such as the privacy implications when personal or sensitive data are analysed, the security aspects of machine learning applications, and finally the explainability, and thus acceptance, of machine learning models.

This course therefore deals with a number of advanced topics related to machine learning (and statistics in the broader sense).

Subject of course

  • Privacy-preserving techniques to anonymize sensitive information in the input data, e.g. to facilitate data sharing, with a specific focus on the implications on the utility of the data and the models trained thereon. This includes e.g. k-anonymity and related models such as l-diversity, as well as differential privacy, etc.
  • Privacy-preserving techniques, such as differential privacy, to prevent information leaks from trained models
  • Attack vectors on machine learning models, e.g. membership inference attacks, model stealing, and adversarial input generation, and how to mitigate them
  • Backdoor embedding to manipulate the behaviour of seemingly benign models for malicious purposes
  • Privacy-preserving computation of machine learning models, e.g. with secure multi-party computation, and homomorphic encryption approaches
  • Explainability of machine learning models to facilitate a better understanding of, and trust in, the models, e.g. via visualization, rule extraction, or zero-shot learning
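
The differential privacy topic above can be illustrated with a minimal sketch of the Laplace mechanism, one of the standard techniques covered in such courses. The function name, parameters, and example data below are purely illustrative and not part of the course material: noise drawn from a Laplace distribution with scale sensitivity/ε is added to a query result before release, so that any single individual's presence in the data has only a bounded effect on the output distribution.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately answer a counting query over a toy dataset.
# A count changes by at most 1 when one record is added/removed,
# so its sensitivity is 1.
ages = [23, 35, 41, 29, 52, 38]
true_count = sum(1 for a in ages if a > 30)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

A smaller ε means stronger privacy but noisier answers; repeated queries consume the privacy budget additively, which is why ε must be accounted for across all releases.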


Additional information


Preliminary talk: 07.03.2019


For all other dates, please see TUWEL. Note that the lecture will not take place every week.



Course dates

Thu, 13:00 - 15:00, 07.03.2019 - 27.06.2019, FAV Hörsaal 2, Lecture
Thu, 13:00 - 15:00, 23.05.2019, FAV Hörsaal 3 Zemanek (Seminarraum Zemanek), Lecture (replacement room)
Wed, 16:00 - 18:00, 29.05.2019, Seminarraum FAV EG B (Seminarraum von Neumann), Lecture (replacement date)

Security, Privacy and Explainability in Machine Learning - Single appointments

Thu, 07.03.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 14.03.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 21.03.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 28.03.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 11.04.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 02.05.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 09.05.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 16.05.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 23.05.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 23.05.2019, 13:00 - 15:00, FAV Hörsaal 3 Zemanek (Seminarraum Zemanek), Lecture (replacement room)
Wed, 29.05.2019, 16:00 - 18:00, Seminarraum FAV EG B (Seminarraum von Neumann), Lecture (replacement date)
Thu, 06.06.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 13.06.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture
Thu, 27.06.2019, 13:00 - 15:00, FAV Hörsaal 2, Lecture

Examination modalities

Exercises and written exam


Day: Wed
Time: 14:00 - 16:00
Date: 09.12.2020
Room: GM 1 Audi. Max. - ARCH-INF
Mode of examination: written
Application time: 17.10.2020 00:00 - 06.12.2020 23:59
Application mode: TISS
Exam: 2nd retake

Course registration

Begin: 24.01.2019 00:00
End: 05.05.2019 23:59
Deregistration end: 31.03.2019 23:59



No lecture notes are available.

Previous knowledge

184.702 Machine Learning
