HotPETs Session 1: My Tool is Cool

Talk: ML Privacy Meter: Aiding regulatory compliance by quantifying the privacy risks of ML
Authors: Sasi Kumar Murakonda and Reza Shokri
Abstract: For the safe and secure use of machine learning models, it is important to have a quantitative assessment of their privacy risks and to make sure that they do not reveal sensitive information about their training data. Article 35 of the GDPR requires all organizations to conduct a DPIA (Data Protection Impact Assessment) to systematically analyze, identify, and minimize the data protection risks of a project that uses innovative technologies such as machine learning [1, 2]. In this talk, we will present our tool ML Privacy Meter, based on well-established algorithms for measuring the privacy risks of machine learning models through membership inference attacks [3, 4]. We will also discuss how our tool can help practitioners with a DPIA by providing a quantitative assessment of the privacy risk that learning from sensitive data poses to members of the dataset. The tool is public and is available at: https://github.com/privacytrustlab/ml_privacy_meter
We will specifically present the scenarios in which our tool can help and how it can aid practitioners in risk reduction. ML Privacy Meter implements membership inference attacks, and the privacy risk of a model can be evaluated as the accuracy of such attacks against its training data. Because the tool can immediately measure the privacy risk for the training data, practitioners can take simple actions such as fine-tuning their regularization techniques, sub-sampling, or re-sampling their data to reduce the risk. The tool can also help in selecting the privacy parameter (epsilon) for differential privacy by quantifying the risk posed at each value of epsilon. We will also discuss some requirements of a DPIA (for example, estimating whether the processing could contribute to loss of control over the use of personal data, loss of confidentiality, or reputational damage) and how our tool can be useful for such assessments. The tool can be used to estimate the aggregate privacy risk, for members of the training dataset, of making a machine learning model public or providing query access to it.
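To make the idea concrete, the following is a minimal, self-contained sketch of one simple membership inference attack, a loss-threshold attack, where an example is predicted to be a training-set member if the model's loss on it falls below a threshold. It is an illustration of the general technique only, not the ML Privacy Meter API; the function names and the synthetic loss data are assumptions made for the example. Attack accuracy close to 0.5 (random guessing) indicates low membership leakage, while accuracy well above 0.5 indicates higher risk.

```python
# Illustrative sketch of a loss-threshold membership inference attack.
# NOT the ML Privacy Meter API: names and synthetic data are assumed for this example.
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses):
    """Predict 'member' when the per-example loss is below a threshold.

    Returns the threshold with the best balanced accuracy on the given data,
    along with that accuracy (the attack's success rate, i.e. the risk estimate).
    """
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)), np.zeros(len(nonmember_losses))])
    best_acc, best_threshold = 0.0, None
    # Sweep every observed loss value as a candidate threshold.
    for t in np.unique(losses):
        preds = (losses <= t).astype(int)
        tpr = (preds[labels == 1] == 1).mean()   # members correctly flagged
        tnr = (preds[labels == 0] == 0).mean()   # non-members correctly rejected
        acc = 0.5 * (tpr + tnr)                  # balanced attack accuracy
        if acc > best_acc:
            best_acc, best_threshold = acc, t
    return best_threshold, best_acc

# Synthetic example: members of the training set tend to have lower loss.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.1, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.3, size=1000)

threshold, attack_acc = loss_threshold_attack(member_losses, nonmember_losses)
print(f"threshold={threshold:.3f}, attack accuracy={attack_acc:.2f}")
# An accuracy near 0.5 suggests low membership leakage; higher values suggest higher risk.
```

The same accuracy-based reading carries over to the DPIA setting described above: running such an attack at each candidate value of epsilon gives a concrete, comparable number for the residual risk under each privacy budget.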
[3] R. Shokri, M. Stronati, C. Song, and V. Shmatikov. Membership Inference Attacks against Machine Learning Models (https://www.comp.nus.edu.sg/~reza/files/Shokri-SP2017.pdf) in IEEE Symposium on Security and Privacy, 2017.
[4] M. Nasr, R. Shokri, and A. Houmansadr. Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks (https://www.comp.nus.edu.sg/~reza/files/Shokri-SP2019.pdf) in IEEE Symposium on Security and Privacy, 2019.
Talk: I see a cookie banner -- is it even legal?
Authors: Nataliia Bielova and Christiana Santos
Abstract: To comply with the General Data Protection Regulation (GDPR) and the ePrivacy Directive (ePD), website publishers can collect personal data only after they have obtained a user's valid consent. A common method to obtain consent is a cookie banner that pops up when a user visits a website for the first time. EU websites often rely on the IAB Europe Transparency and Consent Framework (TCF), the standardized framework for collecting consent. IAB Europe is the advertising industry's primary lobbying organization, and many popular EU websites use the IAB TCF, for example the popular news website https://reuters.com or the top cooking website in France, https://www.marmiton.org/. The critical problem is that this framework's consent standard is illegal and widely promotes non-compliant ways of collecting consent.