
Track: Differential privacy applications

What: Talk
When: 12:30 PM, Monday 13 Jul 2020 EDT (1 hour 40 minutes)
Breaks:
Hallway Track: Meet me at the Beach! 02:10 PM to 03:00 PM (50 minutes)
Hallway Track: Meet me in the Hallway! 02:10 PM to 03:00 PM (50 minutes)

Session chair: Catuscia Palamidessi

Pre-recorded presentation

Summary:

Differential privacy (DP) provides formal guarantees that the output of a database query does not reveal too much information about any individual present in the database. While many differentially private algorithms have been proposed in the scientific literature, there are only a few end-to-end implementations of differentially private query engines. Crucially, existing systems assume that each individual is associated with at most one database record, which is unrealistic in practice. We propose a generic and scalable method to perform differentially private aggregations on databases, even when individuals can each be associated with arbitrarily many rows. We express this method as an operator in relational algebra, and implement it in an SQL engine. To validate this system, we test the utility of typical queries on industry benchmarks, and verify its correctness with a stochastic test framework we developed. We highlight the promises and pitfalls encountered when deploying such a system in practice, and we publish its core components as open-source software.
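
As a minimal Python sketch of the central idea above (bounding each user's contribution before adding noise calibrated to that bound), consider the following; the function and parameter names (dp_sum_per_user, max_rows_per_user, value_cap) are illustrative assumptions and not part of the paper's open-source SQL implementation.

```python
import numpy as np
from collections import defaultdict

def dp_sum_per_user(rows, max_rows_per_user, value_cap, epsilon):
    """Illustrative sketch (hypothetical helper, not the paper's code):
    differentially private SUM when one user may own many rows.
    `rows` is an iterable of (user_id, value) pairs."""
    # 1. Keep at most `max_rows_per_user` rows per user.
    per_user = defaultdict(list)
    for user_id, value in rows:
        if len(per_user[user_id]) < max_rows_per_user:
            per_user[user_id].append(value)

    # 2. Clamp every retained value to [-value_cap, value_cap] and sum.
    total = sum(
        max(-value_cap, min(value_cap, v))
        for values in per_user.values()
        for v in values
    )

    # 3. Removing one user changes the sum by at most this amount,
    #    so Laplace noise calibrated to it gives user-level DP.
    sensitivity = max_rows_per_user * value_cap
    return total + np.random.laplace(0.0, sensitivity / epsilon)
```

The essential design choice is that the noise is calibrated to a per-user, not per-row, sensitivity, which is what keeps the guarantee meaningful when individuals contribute arbitrarily many rows.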

Pre-recorded presentation

Summary:

Massive amounts of videos are ubiquitously generated by personal devices and dedicated video recording facilities. Analyzing such data would be extremely beneficial in the real world (e.g., for urban traffic analysis). However, videos contain considerable sensitive information, such as human faces, identities, and activities. Most existing video sanitization techniques simply obfuscate the video by detecting and blurring the regions of interest (e.g., faces, vehicle plates, locations and timestamps). Unfortunately, privacy leakage in the blurred video cannot be effectively bounded, especially against unknown background knowledge. In this paper, to the best of our knowledge, we propose the first differentially private video analytics platform (VideoDP), which flexibly supports different video analyses with a rigorous privacy guarantee. Given the input video, VideoDP randomly generates a utility-driven private video in which adding or removing any sensitive visual element (e.g., a human or an object) does not significantly affect the output video. Then, different video analyses requested by untrusted video analysts can be flexibly performed over the sanitized video with differential privacy. Finally, we conduct experiments on real videos, and the experimental results demonstrate that VideoDP can generate accurate results for video analytics.
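
As an illustration of the kind of differentially private aggregate such a platform can answer (a sketch of one building block, not the VideoDP pipeline itself), the Python snippet below releases noisy counts of detected visual elements, assuming each sensitive element contributes at most one detection and the set of released labels is fixed in advance.

```python
import numpy as np

def noisy_element_counts(detections, labels, epsilon):
    """Hypothetical helper: differentially private counts of visual
    elements (e.g. persons, cars) detected in a video.
    `detections` is a list of labels, one per detected sensitive element;
    `labels` is the fixed, public set of labels to report."""
    counts = {label: 0 for label in labels}
    for label in detections:
        if label in counts:
            counts[label] += 1

    # Adding or removing one visual element changes exactly one count by 1,
    # so the L1 sensitivity of the whole count vector is 1 and Laplace
    # noise of scale 1/epsilon suffices for epsilon-DP.
    return {
        label: count + np.random.laplace(0.0, 1.0 / epsilon)
        for label, count in counts.items()
    }
```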

Pre-recorded presentation

Summary:

We present a novel method for publishing differentially private synthetic attributed graphs. Our method is the first to allow publishing synthetic graphs that simultaneously preserve structural properties, user attributes and the community structure of the original graph. Our proposal relies on CAGM, a new community-preserving generative model for attributed graphs. We equip CAGM with efficient methods for attributed graph sampling and parameter estimation. For the latter, we introduce differentially private computation methods, which allow us to release community-preserving synthetic attributed social graphs with a strong formal privacy guarantee. Through comprehensive experiments, we show that our new model outperforms its most relevant counterparts in synthesising differentially private attributed social graphs that preserve the community structure of the original graph, as well as degree sequences and clustering coefficients.
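
As a sketch of one differentially private computation that such parameter estimation can build on (illustrative only, not CAGM's actual estimator), the snippet below releases a noisy degree sequence under edge-level differential privacy.

```python
import numpy as np

def dp_degree_sequence(adjacency, epsilon):
    """Hypothetical helper: noisy degree sequence of an undirected graph.
    `adjacency` is a symmetric 0/1 NumPy matrix."""
    degrees = adjacency.sum(axis=1).astype(float)

    # Adding or removing a single edge changes two degrees by 1 each,
    # so the L1 sensitivity of the degree sequence is 2.
    sensitivity = 2.0
    noisy = degrees + np.random.laplace(0.0, sensitivity / epsilon, size=degrees.shape)

    # Rounding and clipping are post-processing and cost no extra privacy budget.
    return np.clip(np.rint(noisy), 0, len(degrees) - 1).astype(int)
```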

Pre-recorded presentation

Summary:

Location privacy has become an emerging topic due to the pervasiveness of Location-Based Services (LBSs). When sharing location, a certain degree of privacy can be achieved through the use of Location Privacy-Preserving Mechanisms (LPPMs), in which an obfuscated version of the exact user location is reported instead. However, even obfuscated location reports disclose information, which poses a risk to privacy. Based on the formal notion of differential privacy, Geo-indistinguishability has been proposed to design LPPMs that limit the amount of information disclosed to a potential adversary observing the reports. While promising, this notion considers reports to be independent of each other, thus discarding the potential threat that arises from exploiting the correlation between reports. This assumption might hold for the sporadic release of data; however, there is still neither a formal nor a quantitative boundary between sporadic and continuous reports, and thus we argue that the validity of the independence assumption depends on the frequency of reports made by the user. This work intends to fill this research gap through a quantitative evaluation of the impact of the report frequency on the privacy level of Geo-indistinguishability. Towards this end, state-of-the-art localization attacks and a tracking attack are implemented against a Geo-indistinguishable LPPM under several values of the privacy budget, and the privacy level is measured across different update frequencies using real mobility data.
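
For concreteness, the Python sketch below shows the standard planar Laplace mechanism from the Geo-indistinguishability literature, which is one common way to build the kind of LPPM evaluated here: a direction is drawn uniformly at random and a radius is drawn by inverting the planar Laplace CDF. Each additional report generated this way consumes further privacy budget, which is exactly the frequency effect this work quantifies.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace_report(x, y, epsilon):
    """Obfuscate a location (x, y) with planar Laplace noise so that the
    report satisfies epsilon-geo-indistinguishability (epsilon is expressed
    per unit of distance, e.g. per metre)."""
    # Direction of the noise is uniform on [0, 2*pi).
    theta = np.random.uniform(0.0, 2.0 * np.pi)

    # Radius is sampled by inverting the CDF of the planar Laplace,
    # C(r) = 1 - (1 + epsilon * r) * exp(-epsilon * r),
    # using the k = -1 branch of the Lambert W function.
    p = np.random.uniform(0.0, 1.0)
    r = -(1.0 / epsilon) * (np.real(lambertw((p - 1.0) / np.e, k=-1)) + 1.0)

    return x + r * np.cos(theta), y + r * np.sin(theta)
```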
