Inference Risks in Machine Learning at ICLR Workshop
Discussing inference risks in ML during the ICLR 2021 DPML Workshop, with a recorded talk on April 19, 2021.

David Evans
830 views • May 7, 2021

About this video
Invited talk at Distributed and Private Machine Learning (DPML)
Workshop at ICLR 2021
7 May 2021 (Talk recorded 19 April 2021)
https://dp-ml.github.io/2021-workshop-ICLR/
When models are trained on private data, such as medical records or personal emails, there is a risk that those models will not only learn the hoped-for patterns, but will also learn and expose sensitive information about their training data. Several different types of inference attacks on machine learning models have been found, and we will characterize inference risks according to whether they expose statistical properties of the distribution used for training or specific information in the training dataset. Differential privacy provides formal guarantees bounding some (but not all) types of inference risk, but providing substantive differential privacy guarantees with state-of-the-art methods requires adding so much noise to the training process for complex models that the resulting models are useless. Experimental evidence, however, suggests that in practice inference attacks have limited power, and in many cases, a very small amount of privacy noise seems to be enough to defuse inference attacks. In this talk, I will give an overview of a variety of different inference risks for machine learning models and report on some experiments to better understand the power of inference attacks in more realistic settings.
https://uvasrg.github.io
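The abstract's point about differential privacy adding noise to the training process refers to mechanisms like DP-SGD (Abadi et al.), where per-example gradients are clipped and Gaussian noise is added before each update. A minimal sketch of that core step, using illustrative parameter names and defaults not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_mean_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each per-example gradient to clip_norm, average, add Gaussian noise.

    Illustrative DP-SGD-style update step; the clip_norm and noise_multiplier
    values here are placeholders, not settings from the talk.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    clipped = np.stack(clipped)
    # Noise scale is proportional to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1:])
    return clipped.mean(axis=0) + noise / len(per_example_grads)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
g = noisy_mean_gradient(grads)
print(g.shape)  # (2,)
```

The abstract's tension is visible in the `noise_multiplier` parameter: large values give stronger formal guarantees but can destroy model utility, while the experiments discussed in the talk suggest small amounts of noise may already defuse practical attacks.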
Video Information
Views: 830
Likes: 19
Duration: 23:20
Published: May 7, 2021