Track: Privacy Attacks

What: Talk
When: 12:30 PM, Tuesday 14 Jul 2020 EDT (1 hour 40 minutes)


Session chair: Rebekah Overdorf


SPy: Car Steering Reveals Your Trip Route!

Mert D. Pesé (University of Michigan), Xiaoying Pu (University of Michigan), and Kang G. Shin (University of Michigan)

Pre-recorded presentation

Summary:

Vehicular data-collection platforms, offered as part of Original Equipment Manufacturers' (OEMs') connected telematics services, are on the rise, providing diverse connected services to users. They also allow the collected data to be shared with third parties upon users' permission. Under the currently suggested permission model, we find that these platforms leak users' location information without explicitly obtaining their permission. We analyze the accuracy of inferring a vehicle's location from seemingly benign steering wheel angle (SWA) traces and show its impact on drivers' location privacy. By collecting and processing real-life SWA traces, we can infer users' exact traveled routes with up to 71% accuracy, which is much higher than the state of the art.
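
To make the inference pipeline concrete, here is a minimal illustrative sketch (not the authors' SPy system): it collapses a signed SWA trace into a coarse left/right turn sequence and matches it against a small set of candidate routes. The thresholds, the sign convention, and the route database are assumptions for illustration only.

```python
# Minimal sketch (not the SPy pipeline): turn a steering-wheel-angle (SWA) trace
# into a coarse turn sequence and match it against candidate routes.
# Thresholds, sign convention (positive = left), and routes are illustrative.
from typing import List, Tuple

TURN_THRESHOLD_DEG = 45.0   # assumed: sustained SWA beyond this counts as a turn
MIN_TURN_SAMPLES = 10       # assumed: minimum duration (in samples) of a turn

def swa_to_turns(swa_trace: List[float]) -> List[str]:
    """Collapse a raw signed SWA trace (degrees) into 'L'/'R' turn events."""
    turns, run_sign, run_len = [], 0, 0
    for angle in swa_trace:
        sign = 1 if angle > TURN_THRESHOLD_DEG else (-1 if angle < -TURN_THRESHOLD_DEG else 0)
        if sign != 0 and sign == run_sign:
            run_len += 1
        else:
            if run_sign != 0 and run_len >= MIN_TURN_SAMPLES:
                turns.append('L' if run_sign > 0 else 'R')
            run_sign, run_len = sign, (1 if sign != 0 else 0)
    if run_sign != 0 and run_len >= MIN_TURN_SAMPLES:
        turns.append('L' if run_sign > 0 else 'R')
    return turns

def match_route(observed: List[str], candidates: dict) -> Tuple[str, float]:
    """Score candidate routes (name -> expected turn sequence) by simple overlap."""
    def score(a, b):
        hits = sum(1 for x, y in zip(a, b) if x == y)
        return hits / max(len(a), len(b), 1)
    best = max(candidates.items(), key=lambda kv: score(observed, kv[1]))
    return best[0], score(observed, best[1])

# Synthetic example: a long left turn followed by a right turn.
trace = [0.0] * 50 + [60.0] * 20 + [0.0] * 50 + [-70.0] * 20 + [0.0] * 30
routes = {"home->office": ['L', 'R'], "home->gym": ['R', 'R', 'L']}
print(swa_to_turns(trace))                     # ['L', 'R']
print(match_route(swa_to_turns(trace), routes))
```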


Averaging Attacks on Bounded Noise-based Disclosure Control Algorithms
Hassan Jameel Asghar (Macquarie University & Data61, CSIRO) and Mohamed Ali Kaafar (Macquarie University & Data61, CSIRO)

Pre-recorded presentation

Summary:

We describe and evaluate an attack that reconstructs the histogram of any target attribute of a sensitive dataset that can only be queried through a specific class of real-world privacy-preserving algorithms, which we call bounded perturbation algorithms. A defining property of such an algorithm is that it perturbs answers to queries by adding zero-mean noise distributed within a bounded (possibly undisclosed) range. Other key properties include allowing only restricted queries (enforced via an online interface), suppressing answers to queries that are satisfied by only a small group of individuals (e.g., by returning zero as the answer), and adding the same perturbation to two queries that are satisfied by the same set of individuals (to thwart differencing or averaging attacks). A real-world example of such an algorithm is the one deployed in the Australian Bureau of Statistics' (ABS) online tool TableBuilder, which allows users to create tables, graphs, and maps of Australian census data [30]. We assume an attacker (say, a curious analyst) who is given oracle access to the algorithm via an interface. We describe two attacks on the algorithm, both based on carefully constructing (different) queries that evaluate to the same answer. The first attack finds the hidden perturbation parameter r (if it is assumed not to be public knowledge). The second attack removes the noise to obtain the original answer of some (counting) query of choice. We also show how to use this attack to find the number of individuals in the dataset with a target attribute value of any attribute A, and then for all attribute values in A. None of the attacks presented here depend on any background information. Our attacks are a practical illustration of the (informal) fundamental law of information recovery, which states that "overly accurate estimates of too many statistics completely destroys privacy" [9, 15].
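
The averaging idea can be illustrated with a toy simulation (this is not the ABS TableBuilder; the noise model, parameters, and dataset are assumptions). Because the oracle reuses the same bounded zero-mean perturbation for queries satisfied by the same individuals, the attacker instead sums noisy counts over many different partitions of the target group; each partition yields an independently perturbed, unbiased estimate of the true count, and their average converges to it.

```python
# Toy simulation (not TableBuilder) of the averaging idea behind the second attack.
# The oracle adds zero-mean noise drawn uniformly from [-r, r], reuses the same
# perturbation for queries matching the same set of individuals, and suppresses
# small counts. The attacker averages many estimates built from distinct partitions.
import random

class BoundedPerturbationOracle:
    def __init__(self, data, r=2, suppress_below=4):
        self.data, self.r, self.suppress_below = data, r, suppress_below
        self._cache = {}  # same set of matching individuals -> same perturbation

    def count(self, predicate):
        matching = frozenset(i for i, row in enumerate(self.data) if predicate(row))
        if len(matching) < self.suppress_below:
            return 0  # small counts are suppressed
        if matching not in self._cache:
            self._cache[matching] = random.randint(-self.r, self.r)
        return len(matching) + self._cache[matching]

random.seed(1)
rows = [(random.randint(18, 90), random.choice("ABC")) for _ in range(2000)]
oracle = BoundedPerturbationOracle(rows)
true_count = sum(1 for age, bracket in rows if bracket == "A")

# One estimate per partition: split "bracket == 'A'" by age modulo m and sum the
# noisy subtotals. Different m gives different individual sets, hence fresh noise.
estimates = []
for m in range(2, 30):
    subtotals = [oracle.count(lambda r, m=m, j=j: r[1] == "A" and r[0] % m == j)
                 for j in range(m)]
    estimates.append(sum(subtotals))

print("true:", true_count, "averaged estimate:", round(sum(estimates) / len(estimates), 1))
```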


Exposing Private User Behaviors of Collaborative Filtering via Model Inversion Techniques
Seira Hidano (KDDI Research, Inc.), Takao Murakami (National Institute of Advanced Industrial Science and Technology), Shuichi Katsumata (National Institute of Advanced Industrial Science and Technology), Shinsaku Kiyomoto (KDDI Research, Inc.), and Goichiro Hanaoka (National Institute of Advanced Industrial Science and Technology)

Pre-recorded presentation

Summary:

Privacy risks of collaborative filtering (CF) have been widely studied. The current state-of-the-art inference attack on user behaviors (e.g., ratings/purchases of sensitive items) in CF is by Calandrino et al. (S&P, 2011). They showed that if an adversary obtains a moderate amount of a user's public behavior before some time T, she can infer the user's private behavior after time T. However, the existence of an attack that infers a user's private behavior before T has remained open. In this paper, we propose the first inference attack that reveals past private user behaviors. Our attack departs from previous techniques and is based on model inversion (MI). In particular, we propose the first MI attack on factorization-based CF systems by leveraging the data poisoning of Li et al. (NIPS, 2016) in a novel way: we inject malicious users into the CF system so that adversarially chosen "decoy" items become linked with users' private behaviors. We also show how to weaken the assumption made by Li et al. on the information available to the adversary, from the whole rating matrix to only the item profile, and how to create malicious ratings effectively. We validate the effectiveness of our inference algorithm using two real-world datasets.
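
The poisoning intuition can be sketched as follows (a conceptual toy, not the authors' algorithm): fake users who co-rate a chosen decoy item with a sensitive item pull the two item profiles together during factorization, so a user's visible affinity for the decoy hints at a hidden affinity for the sensitive item. The dimensions, ratings, and plain SGD factorizer below are illustrative assumptions.

```python
# Conceptual sketch (not the paper's attack): inject fake users who co-rate a
# "decoy" item with a sensitive item, factorize the poisoned rating matrix, and
# check how similar the two item profiles have become.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 30, 12, 4
SENSITIVE, DECOY = 5, 11

# Sparse synthetic ratings in {1..5}; roughly 30% of entries observed.
R = np.where(rng.random((n_users, n_items)) < 0.3,
             rng.integers(1, 6, (n_users, n_items)), 0).astype(float)

# Poisoning: append fake users who rate only the decoy and the sensitive item,
# always with the same high rating.
n_fake = 15
fake = np.zeros((n_fake, n_items))
fake[:, [SENSITIVE, DECOY]] = 5.0
R_poisoned = np.vstack([R, fake])

def factorize(R, k=4, steps=300, lr=0.01, reg=0.05):
    """Plain SGD matrix factorization over the observed (non-zero) entries."""
    U = rng.normal(0, 0.1, (R.shape[0], k))
    V = rng.normal(0, 0.1, (R.shape[1], k))
    observed = np.argwhere(R > 0)
    for _ in range(steps):
        for u, i in observed:
            err = R[u, i] - U[u] @ V[i]
            du = lr * (err * V[i] - reg * U[u])
            dv = lr * (err * U[u] - reg * V[i])
            U[u] += du
            V[i] += dv
    return U, V

_, V = factorize(R_poisoned, k)
cos = V[SENSITIVE] @ V[DECOY] / (np.linalg.norm(V[SENSITIVE]) * np.linalg.norm(V[DECOY]))
print("cosine similarity of sensitive and decoy item profiles:", round(float(cos), 2))
```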


Illuminating the Dark or how to recover what should not be seen in FE-based classifiers
Sergiu Carpov (Inpher), Caroline Fontaine (LSV, CNRS), Damien Ligier (Wallix), and Renaud Sirdey (CEA LIST)

Pre-recorded presentation

Summary:

Classification algorithms are becoming increasingly powerful and pervasive. Yet, for some use cases, it is necessary to protect data privacy while still benefiting from the functionality they provide. Among the tools that may be used to ensure such privacy, this paper focuses on functional encryption. These relatively new cryptographic primitives enable the evaluation of functions over encrypted inputs, outputting cleartext results. In theory, this property makes them well suited to performing classification over encrypted data. In this work, we study the security and privacy issues of classifiers built on today's practical functional encryption schemes. We analyze the information leaked about input data that is processed in the encrypted domain with state-of-the-art functional encryption schemes. This study, based on experiments run on the MNIST and Census Income datasets, shows that neural networks are able to partially recover information that should have been kept secret. Hence, great care should be taken when using the currently available functional encryption schemes to build privacy-preserving classification services.
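
The leakage concern can be illustrated without implementing any FE scheme: assume an inner-product functional encryption scheme reveals the first linear layer's outputs W @ x in the clear; an attacker who knows W can then partially reconstruct the private input from those few projections. The sketch below uses synthetic data and a least-squares estimate as a simple stand-in for the neural-network reconstruction studied in the paper.

```python
# Minimal sketch of the leakage concern (no actual FE scheme is implemented):
# the FE evaluation reveals W @ x in cleartext; an attacker who knows W can
# partially reconstruct the private input x, here via least squares.
import numpy as np

rng = np.random.default_rng(0)
d, m = 784, 64   # input dimension (e.g., a flattened 28x28 image) and number of leaked projections

# Synthetic stand-in for a private input, values in [0, 1].
x = np.clip(rng.normal(0.5, 0.2, d), 0, 1)

# First-layer weights of the (known) model; the FE evaluation outputs W @ x in clear.
W = rng.normal(0, 1 / np.sqrt(d), (m, d))
leaked = W @ x

# Attacker: reconstruct x from the m leaked projections (minimum-norm least squares).
x_hat, *_ = np.linalg.lstsq(W, leaked, rcond=None)

# Partial recovery: correlation between the true and reconstructed input.
corr = np.corrcoef(x, x_hat)[0, 1]
print(f"correlation between true and reconstructed input: {corr:.2f}")
```

With unstructured random inputs, least squares can only recover the component of x lying in the row space of W; on real data such as MNIST images, a learned reconstruction of the kind studied in the paper can exploit input structure to recover more.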
