Enigma 2020
Monday, January 27 • 4:30pm - 5:00pm
What Does It Mean for Machine Learning to Be Trustworthy?


The attack surface of machine learning is large: training data can be poisoned, predictions manipulated with adversarial examples, and models exploited to reveal sensitive information contained in their training data. This is in large part due to the absence of security considerations in the design of ML algorithms. Yet adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy.

Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires a solid understanding of what legitimate model behavior should look like.

In this talk, we lay the foundations of a framework that fosters trust in deployed ML algorithms. The approach uncovers the influence of training data on test-time predictions, which helps identify not only poisoned training data but also adversarial examples and queries that could leak private information. Beyond the immediate implications for security and privacy, we demonstrate how this helps interpret the model and sheds light on its internal behavior. We conclude by asking what data representations need to be extracted at training time to enable trustworthy machine learning.
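To make the idea of relating test-time predictions back to training data concrete, the minimal sketch below checks how well a prediction is supported by its nearest training points in a learned representation space (in the spirit of the speaker's prior Deep k-NN work). The toy data, the feature space, and the conformity score are illustrative assumptions, not the framework presented in the talk.

```python
# Illustrative sketch: score each test-time prediction by how many of its nearest
# training neighbors (in representation space) share the predicted label. Low scores
# flag predictions weakly supported by training data, e.g. adversarial queries, and
# the returned neighbor indices point at training examples worth inspecting for poison.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def conformity_scores(train_feats, train_labels, test_feats, test_preds, k=5):
    """Fraction of the k nearest training neighbors whose label agrees with the
    model's prediction, plus the indices of those supporting training points."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    _, idx = nn.kneighbors(test_feats)
    neighbor_labels = train_labels[idx]                # shape: (n_test, k)
    agree = neighbor_labels == test_preds[:, None]     # per-neighbor label agreement
    return agree.mean(axis=1), idx                     # score in [0, 1], neighbor indices

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "representations": two well-separated classes in a 2-D feature space.
    train_feats = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
    train_labels = np.array([0] * 100 + [1] * 100)
    # One in-distribution query and one suspicious query sitting between the classes.
    test_feats = np.array([[0.1, -0.2], [2.5, 2.5]])
    test_preds = np.array([0, 1])  # hypothetical model predictions for these queries
    scores, neighbors = conformity_scores(train_feats, train_labels, test_feats, test_preds)
    for score, nbrs in zip(scores, neighbors):
        print(f"conformity={score:.2f}  supporting training points: {nbrs}")
```

In practice the representations would come from the model's hidden layers rather than raw features, which is what lets the same check surface poisoned training points, adversarial queries, and privacy-sensitive lookups from a single mechanism.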

Speakers

Nicolas Papernot

University of Toronto and Vector Institute
Nicolas Papernot is an Assistant Professor of Electrical and Computer Engineering at the University of Toronto and Canada CIFAR AI Chair at the Vector Institute. His research interests span the security and privacy of machine learning. Nicolas received a best paper award at ICLR 2017.


Monday January 27, 2020 4:30pm - 5:00pm PST
Grand Ballroom