Artificial intelligence (AI) and machine learning (ML) techniques are increasingly deployed in cyber-security settings. Critical applications include network anomaly detection, biometric authentication, spam detection, and analytics-based financial fraud detection. At the same time, advanced ML algorithms also give attackers an advantage, setting up a complex interplay between attackers and defenders. An important example is web privacy: it has been shown that sophisticated attackers can use advanced inference techniques to compromise the identity of web users. In response, web users can intentionally add "noise" to their online behaviors to evade such recognition attacks, borrowing tools from the literature on differential privacy.
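As a toy illustration of this noise-adding idea, the sketch below implements the Laplace mechanism, the basic building block from the differential-privacy literature; the function name and parameter values are illustrative and not taken from any particular course material.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy version of `true_value` satisfying epsilon-differential privacy.

    `sensitivity` is the maximum change in the query output when one
    individual's record changes; `epsilon` is the privacy budget
    (smaller epsilon means more noise and stronger privacy).
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privatize a counting query (sensitivity 1) with epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```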
At the same time, as ML techniques become more sophisticated, they themselves become vulnerable to attack. Examples include stealthy training-data poisoning attacks and so-called "adversarial input perturbations," which have been shown to be particularly pernicious for deep neural networks. For these reasons, there is growing interest in techniques for developing and deploying verifiably safe and secure ML systems, adopting and adapting techniques from the software-security domain. A further vulnerability stems from the fact that modern ML systems, and especially deep learning systems, are often trained and executed in the cloud, raising concerns about the privacy of users' data. New solutions are being developed to address these privacy concerns.
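As a minimal sketch of such an adversarial perturbation, the snippet below implements the fast gradient sign method (FGSM), a standard one-step attack; `model`, `x`, and `y` are assumed placeholders for a differentiable PyTorch classifier, an input batch with pixel values in [0, 1], and integer class labels.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """One-step adversarial perturbation bounded by `epsilon`
    in the L-infinity norm (fast gradient sign method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then
    # clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

Even a small `epsilon` (e.g. 8/255 for images) often suffices to flip the predictions of an undefended deep network, which is what makes these perturbations so pernicious in practice.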
Course schedule: The course will be held from January 7 to 15, 2019.
Details will be presented in the introductory lecture on January 7 at 2 pm in lecture room EI4.
Please register in TISS.