When: Thursday, October 21
Time: 4 to 5:45 p.m.
More sessions may be added in the future.
Read below to learn more about each poster session and presenter.
Dissecting Residual APIs in Custom Android ROMs
Presenter: Zeinab El-Rewini
Table 2
Abstract: Many classic software vulnerabilities (e.g., Heartbleed) are rooted in unused code. In this work, we aim to understand whether unused Android functionality may similarly open unnecessary attack opportunities. Our study focuses on OEM-introduced APIs, which are added and removed erratically across device models and releases, contributing to bloated custom APIs, some of which may not even be used on a particular device. We call such unused APIs Residuals. We conduct the first large-scale investigation of custom Android Residuals to understand whether they may lead to security vulnerabilities. Our investigation is driven by the intuition that it is challenging for vendor developers to ensure proper protection of Residuals: since they are deemed unnecessary, Residuals may be naturally overlooked during integration or maintenance, a problem exacerbated by the complexities of Android's ever-evolving access control. To facilitate the study at scale, we propose a set of analysis techniques that detect and evaluate Residual security. Our techniques feature a synergy between application and framework program analysis to recognize potential Residuals in specially curated ROM samples. The Residual implementations are then statically analyzed to detect potential evolution-induced vulnerabilities. Our study reveals that Residuals are prevalent among OEMs and, more importantly, that they may lead to security-critical vulnerabilities.
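The detection step can be pictured with a small sketch: treat the custom APIs exported by the OEM framework and the APIs actually invoked by a device's preinstalled apps as sets, and flag the difference as Residual candidates. The API names and inputs below are hypothetical placeholders, not the authors' tooling.

# Toy illustration of Residual-candidate detection (hypothetical inputs,
# not the authors' actual analysis pipeline).

# APIs exported by the OEM framework image, e.g. recovered by parsing
# framework classes (names are made up for illustration).
framework_apis = {
    "com.oem.power.FastChargeService.setMode",
    "com.oem.display.EdgeLightingManager.enable",
    "com.oem.legacy.IrisScanService.authenticate",  # no longer called anywhere
}

# APIs actually invoked by the preinstalled apps on one device model,
# e.g. recovered by static call-site analysis of the app APKs.
invoked_apis = {
    "com.oem.power.FastChargeService.setMode",
    "com.oem.display.EdgeLightingManager.enable",
}

# Exported but never called on this device: Residual candidates that
# deserve an access-control audit.
residual_candidates = framework_apis - invoked_apis
for api in sorted(residual_candidates):
    print("potential Residual:", api)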
Is Differential Privacy the Right Defence against Membership Inference Attacks?
Presenter: Thomas Humphries
Table 3
Abstract: Training machine learning models on privacy-sensitive data has become a popular practice, driving innovation in ever-expanding fields. This has opened the door to new attacks that can have serious privacy implications. One such attack, the Membership Inference Attack (MIA), exposes whether or not a particular data point was used to train a model. A growing body of literature shows that Differential Privacy (DP) is an effective defence against such attacks. We observe that these works optimistically assume that training samples are independently and identically distributed. This assumption, however, does not hold for many real-world use cases that have been extensively studied in the ML literature. Motivated by this, we conduct a series of evaluations with off-the-shelf MIAs using real-world datasets with data splits that yield biased training sets. Our results reveal that training set bias can severely increase the performance of MIAs, surpassing the theoretical guarantees of DP for unbiased data. This suggests that membership can be a property of the entire dataset and not just a single member. We conclude that DP alone is insufficient to protect against MIAs on biased data and other defences need to be considered.
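For intuition, the simplest MIAs threshold a model's per-example loss: training members tend to have lower loss than non-members. The sketch below shows only that decision rule; it is not the off-the-shelf attacks evaluated in the poster, and the losses and threshold are made-up placeholders.

import numpy as np

def loss_threshold_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership: True where the per-example loss falls below the
    threshold (members were fit during training, so their loss is typically
    lower). A toy decision rule, not a calibrated attack."""
    return losses < threshold

# Hypothetical per-example cross-entropy losses.
member_losses = np.array([0.05, 0.10, 0.02, 0.20])      # seen in training
nonmember_losses = np.array([0.90, 1.40, 0.75, 2.10])   # held out

threshold = 0.5  # would be calibrated (e.g. via shadow models) in practice
print(loss_threshold_mia(member_losses, threshold))     # mostly True
print(loss_threshold_mia(nonmember_losses, threshold))  # mostly False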
Stegozoa: Enhanced covert communications over WebRTC video streams
Presenter: Diogo Barradas
Table 4
Abstract: Nowadays, totalitarian states implement strict Internet surveillance and censorship mechanisms, preventing their citizens from freely accessing certain content on the Internet. To tackle these restrictions, Protozoa, a recent censorship-circumvention tool, generates covert channels over peer-to-peer encrypted WebRTC video conferencing links. By replacing compressed video frames with covert payload, Protozoa produces WebRTC streams that remain indistinguishable from legitimate video calls, thus evading censors' traffic analysis attacks aimed at the detection of covert channels. For performance reasons, an increasing number of WebRTC applications, e.g., Discord, enforce the use of WebRTC gateways. Briefly, WebRTC gateways are servers that mediate a call between participants and can decrypt incoming video streams to perform content validation, media re-encoding, or call-quality assessment before re-encrypting them and forwarding them to their final destination.
However, since Protozoa streams do not exchange valid media data, WebRTC gateways can easily detect and thwart the operation of Protozoa. Worse yet, if these gateways are controlled by censors, Protozoa users can be pinpointed for later prosecution. In this talk, I will describe our progress towards building Stegozoa, a system based on Protozoa that leverages video steganography techniques to generate undetectable covert channels over video calls mediated by WebRTC gateways. Central to the operation of Stegozoa is the ability to adopt video steganography techniques that can:
a) be implemented efficiently to provide real-time embedding of covert content;
b) resist state-of-the-art video steganalysis; and
c) provide reasonable throughput for conducting simple web tasks.
Our preliminary evaluation suggests that Stegozoa can evade detection while achieving a throughput of 8.2 Kbps.
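As a rough illustration of the embedding idea, consider hiding covert bits in the least significant bits of nonzero quantized transform coefficients. This is a toy sketch only; Stegozoa's actual encoder works inside the video codec and is designed to resist steganalysis.

def embed_bits(coeffs, bits):
    """Toy steganographic embedding: overwrite the LSB of each nonzero
    quantized coefficient with one covert bit. Illustration only; not
    Stegozoa's actual, steganalysis-resistant embedding scheme."""
    out, i = [], 0
    for c in coeffs:
        if c != 0 and i < len(bits):
            sign = -1 if c < 0 else 1
            c = sign * ((abs(c) & ~1) | bits[i])  # keep magnitude, set LSB
            i += 1
        out.append(c)
    return out

# Hypothetical quantized DCT coefficients from one video block.
block = [12, -7, 0, 3, 0, -1, 5, 0]
stego = embed_bits(block, [1, 0, 1, 1, 0])
print(stego)  # coefficients now carry the covert bits in their LSBs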
The “Quantum Annoying” Property of Password-Authenticated Key Exchange Protocols
Presenter: Ted Eaton
Table 5
Abstract: During the Crypto Forum Research Group (CFRG)'s standardization of password-authenticated key exchange (PAKE) protocols, a novel property emerged: a PAKE scheme is said to be "quantum-annoying" if a quantum computer can compromise the security of the scheme, but only by solving one discrete logarithm for each guess of a password. Considering that early quantum computers will likely take quite a long time to solve even a single discrete logarithm, a quantum-annoying PAKE, combined with a large password space, could delay the need for a post-quantum replacement by years, or even decades. In this paper, we take the first steps toward formalizing the quantum-annoying property. We consider a classical adversary in an extension of the generic group model in which the adversary has access to an oracle that solves discrete logarithms. While this idealized model does not fully capture the range of operations available to an adversary with a general-purpose quantum computer, it does allow us to quantify security in terms of the number of discrete logarithms solved. We apply this approach to the CPace protocol, a balanced PAKE advancing through the CFRG standardization process, and show that the CPaceBase variant is secure in the generic group model with a discrete logarithm oracle.
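To see why this property buys time, a back-of-the-envelope calculation helps. The numbers below are illustrative assumptions, not estimates from the paper: if each password guess forces one discrete-log computation, and an early quantum computer needs an hour per discrete log, a password space of one million takes roughly 57 years to search in expectation.

# Illustrative arithmetic only; the cost-per-discrete-log figure is an
# assumption, not a projection from the paper.
password_space = 10**6          # plausible size for a decent password policy
hours_per_dlog = 1.0            # assumed cost on an early quantum computer

expected_guesses = password_space / 2                  # expected tries until hit
expected_years = expected_guesses * hours_per_dlog / (24 * 365)
print(f"expected attack time: {expected_years:.1f} years")  # ~57.1 years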
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks
Presenter: Nils Lukas
Table 6
Abstract: Training deep neural networks (DNNs) is expensive and for this reason, third parties provide computational resources to train models. This makes DNNs vulnerable to backdoor attacks, in which the third party maliciously injects hidden functionalities in the model at training time. Removing a backdoor is challenging because although the defender has access to a clean, labeled dataset, they only have limited computational resources which are a fraction of the resources required to train a model from scratch. We propose Feature Grinding as an efficient, randomized backdoor sanitation technique against seven contemporary backdoors on CIFAR-10 and ImageNet. Feature Grinding requires at most six percent of the model's training time on CIFAR-10 and at most two percent on ImageNet for sanitizing the surveyed backdoors. We compare Feature Grinding with five other sanitation methods and find that it is often the most effective at decreasing the backdoor's success rate while preserving a high model accuracy. Our experiments include an ablation study over multiple parameters for each backdoor attack and sanitation technique to ensure a fair evaluation of all methods. Models suspected of containing a backdoor can be Feature Grinded using limited resources, which makes it a practical defense against backdoors that can be incorporated into any standard training procedure.
You May Also Like ... Privacy: Recommendation Systems Meet PIR
Presenter: Adithya Vadapalli
Table 7
Abstract: We describe the design, analysis, implementation, and evaluation of Pirsona, a digital content delivery system that realizes collaborative-filtering recommendations atop private information retrieval (PIR). This combination of seemingly antithetical primitives makes possible, for the first time, the construction of practically efficient e-commerce and digital media delivery systems that can provide personalized content recommendations based on their users' historical consumption patterns while simultaneously keeping said consumption patterns private. In designing Pirsona, we have opted for the most performant primitives available (at the expense of rather strong non-collusion assumptions); namely, we use the recent computationally 1-private PIR protocol of Hafiz and Henry (PETS 2019.4) together with a carefully optimized 4PC Boolean matrix factorization.
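The overall pipeline can be sketched at a toy level: the client derives a recommended item index and then fetches that item with PIR. In the sketch below the factorization is plain and public (Pirsona instead computes it under 4PC), and the fetch uses a classic 2-server XOR PIR rather than the Hafiz and Henry protocol; everything here is an illustrative stand-in.

import numpy as np

# --- Toy recommendation step (public factors; Pirsona computes the
# --- factorization under 4PC so no party sees the raw consumption data).
user_vec = np.array([0.9, 0.1])
item_mat = np.array([[0.8, 0.2], [0.1, 0.9], [0.7, 0.3]])  # 3 items
best_item = int(np.argmax(item_mat @ user_vec))

# --- Toy 2-server XOR PIR fetch (illustration only; Pirsona uses the
# --- computationally 1-private protocol of Hafiz and Henry).
db = np.array([11, 22, 33], dtype=np.uint8)      # replicated on two servers
rng = np.random.default_rng(0)
q1 = rng.integers(0, 2, size=len(db)).astype(bool)
q2 = q1.copy()
q2[best_item] ^= True            # queries differ only at the target index

a1 = np.bitwise_xor.reduce(db[q1]) if q1.any() else np.uint8(0)
a2 = np.bitwise_xor.reduce(db[q2]) if q2.any() else np.uint8(0)
print(a1 ^ a2)                   # recovers db[best_item]; neither server learns it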
Constant-weight PIR: Single-round Keyword PIR via Constant-weight Equality Operators
Presenter: Rasoul Akhavan Mahdavi
Table 8
Abstract: Equality operators are an essential building block in tasks over secure computation such as private information retrieval. In private information retrieval (PIR), a user queries a database such that the server does not learn which element is queried. In this work, we propose equality operators for constant-weight codewords. A constant-weight code is a collection of binary codewords that share the same Hamming weight. Our proposed constant-weight equality operators have a multiplicative depth that depends only on the Hamming weight of the code, not the bit-length of the elements. In our experiments, we show that these equality operators are up to 10 times faster than existing equality operators. Furthermore, we propose PIR based on the constant-weight equality operator, which we call constant-weight PIR; this protocol uses an approach previously deemed impractical. We show that for private retrieval of large, streaming data, constant-weight PIR has a smaller communication complexity and lower runtime compared to SEALPIR and MulPIR, respectively, two state-of-the-art solutions for PIR. Moreover, we show how constant-weight PIR can be extended to keyword PIR. In keyword PIR, the desired element is retrieved by a unique identifier pertaining to the sought item, e.g., the name of a file. Previous solutions to keyword PIR require one or multiple rounds of communication to reduce the problem to normal PIR. We show that constant-weight PIR is the first practical single-round solution to single-server keyword PIR.
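The core trick can be sketched in the clear (the actual protocol evaluates this as an arithmetic circuit under homomorphic encryption, with x encrypted): for two codewords of the same Hamming weight k, equality holds exactly when the query's bits are 1 at all k one-positions of the database word, so the circuit multiplies only k values regardless of the elements' bit-length.

from math import prod

def cw_equal(x, y):
    """Equality of two constant-weight codewords with the same weight k:
    multiply the k bits of x selected by the one-positions of y. In the
    actual protocol x is encrypted and this product is an arithmetic
    circuit of multiplicative depth ~log2(k), independent of bit-length."""
    return prod(x[j] for j, bit in enumerate(y) if bit)

a = [1, 0, 1, 0, 0, 1]  # weight-3 codeword
b = [1, 0, 1, 0, 0, 1]
c = [0, 1, 1, 0, 1, 0]  # a different weight-3 codeword
print(cw_equal(a, b))   # 1 -> equal
print(cw_equal(a, c))   # 0 -> not equal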
PRSONA: Private Reputation Supporting Ongoing Network Avatars
Presenter: Stan Gurtler
Table 9
Abstract: As an increasing amount of social activity moves online, online communities have become important outlets for their members to interact and communicate with one another in ways that advance their mutual interests. At times, these communities may identify opportunities where providing their members specific privacy guarantees would promote new opportunities for healthy social interaction, giving members assurances that their participation can be conducted safely. On the other hand, communities also face the threat of bad actors, who may wish to disrupt their activities, or even to bring harm to members for their status as members of such groups. Reputation can be used to help ameliorate the threat of bad actors, and there has been a wide body of work on privacy-preserving reputation systems. However, previous work has overlooked the needs of certain kinds of niche communities, failing to provide important privacy guarantees or address shortcomings with common implementations of reputation. This work features a novel design for a privacy-preserving reputation system which is targeted to fill these gaps. Further, this work implements and benchmarks said system to determine its viability in real-world deployment. This novel construction addresses shortcomings with previous approaches and provides new opportunity to a heretofore underrepresented audience.
Verifying Verified Code
Presenter: Siddharth Priya
Table 10
Abstract: A recent case study from Amazon Web Services (AWS) by Chong et al. proposes an effective methodology for Bounded Model Checking in industry. In this paper, we report on a follow-up case study that explores the methodology from the perspective of three research questions:
a) can proof artifacts be used across verification tools;
b) are there bugs in verified code; and
c) can specifications be improved?
To study these questions, we port the verification tasks for the aws-c-common library to the SeaHorn and KLEE verification tools. We show the benefits of using compiler semantics and cross-checking specifications with different verification techniques, and we call for standardizing proof library extensions to increase specification reuse. The verification tasks discussed are publicly available online.
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks
Presenters: Vasisht Duddu & Sebastian Szyller
Table 11
Abstract: Machine learning models are typically made available to potential client users via inference APIs. Model extraction attacks occur when a malicious client uses information gleaned from queries to the inference API of a victim model FV to build a surrogate model FA with comparable functionality. Recent research has shown successful model extraction attacks against image classification and NLP models. In this paper, we show the first model extraction attack against real-world generative adversarial network (GAN) image translation models. We present a framework for conducting model extraction attacks against image translation models and show that the adversary can successfully extract functional surrogate models. The adversary is not required to know FV's architecture or any other information about it beyond its intended image translation task, and queries FV's inference interface using data drawn from the same domain as the training data for FV.
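The extraction loop itself is simple to sketch. The helper names and the stand-in victim below are hypothetical, not the authors' framework: query the victim's inference API with in-domain images, keep the input/output pairs, and train the surrogate on them as a paired image translation dataset.

import numpy as np

def query_victim_api(image):
    """Stand-in for FV's inference API (here just a horizontal flip so the
    sketch runs; the real victim is a GAN image translation model)."""
    return image[:, ::-1]

def collect_surrogate_dataset(in_domain_images):
    # The adversary only needs query access: pair each input with the
    # victim's output and use the pairs to train the surrogate FA.
    return [(x, query_victim_api(x)) for x in in_domain_images]

images = [np.random.rand(4, 4) for _ in range(3)]  # in-domain samples
pairs = collect_surrogate_dataset(images)
print(len(pairs), pairs[0][0].shape)  # 3 (4, 4)
# A training step (e.g. fitting a pix2pix-style FA on these pairs) would follow.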
We evaluate the effectiveness of our attacks using three different instances of two popular categories of image translation:
- Selfie-to-Anime (image style transfer)
- Monet-to-Photo (image style transfer)
- Super-Resolution (super resolution)
Using standard performance metrics for GANs, we show that our attacks are effective in each of the three cases. The differences between FV and FA, relative to the target, are in the following ranges: Selfie-to-Anime: FID 13.36–68.66; Monet-to-Photo: FID 3.57–4.40; Super-Resolution: SSIM 0.06–0.08 and PSNR 1.43–4.46. Furthermore, we conducted a large-scale user study (125 participants) on Selfie-to-Anime and Monet-to-Photo to show that human perception of the images produced by the victim and surrogate models can be considered equivalent, within an equivalence bound of Cohen's d = 0.3.
Finding Specification Blind Spots with Fuzz Testing
Presenter: Meng Xu
Table 12
Abstract: A formally verified program is only as correct as its specifications. But how do we know that the specifications are free of loopholes? This poster presents Fuzzing-Assisted Specification Testing (FAST) to find specification loopholes in an automated way. The key insight is to exploit and synergize the “redundancy” and “diversity” in formally verified programs for cross-checking; after all, specifications, code, and test suites are all derived from the same set of business requirements. To be specific, FAST first applies evolutionary mutation testing on specifications and code to locate gaps in the specifications, and then leverages the test suites to infer whether a gap is introduced by intention or by mistake.
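At a high level the cross-check can be sketched as follows. The callables below are stand-ins, not the FAST implementation: mutate the specification, re-verify the unchanged code against each mutant, and use the test suite to judge what survives.

# High-level sketch of the FAST cross-check (placeholder callables, not
# the actual implementation).

def fast_cross_check(spec_mutants, verify, tests_pass):
    """If the *unchanged* code still verifies against a mutated spec, the
    original spec did not constrain that behaviour: a candidate gap. The
    test suite then hints whether the looseness is intentional (tests
    still pass) or a likely blind spot."""
    findings = []
    for mutant in spec_mutants:
        if verify(mutant):                      # code verifies against mutant
            intentional = tests_pass(mutant)    # cross-check with test suite
            findings.append((mutant, "by intention" if intentional
                                     else "possible blind spot"))
    return findings

# Toy run with stand-in callables so the sketch executes.
mutants = ["weaken_postcondition", "drop_precondition"]
print(fast_cross_check(mutants,
                       verify=lambda m: True,
                       tests_pass=lambda m: m == "weaken_postcondition"))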
PACStack: an Authenticated Call Stack
Presenter: Hans Liljestrand
Table 13
Abstract: A popular run-time attack technique is to compromise the control-flow integrity of a program by modifying function return addresses on the stack. So far, shadow stacks have proven to be essential for comprehensively preventing return address manipulation. Shadow stacks record return addresses in integrity-protected memory secured with hardware-assistance or software access control. Software shadow stacks incur high overheads or trade off security for efficiency. Hardware-assisted shadow stacks are efficient and secure, but require the deployment of special-purpose hardware.
We present the authenticated call stack (ACS), an approach that uses chained message authentication codes (MACs). Our prototype, PACStack, uses the ARMv8.3-A general-purpose hardware mechanism for pointer authentication (PA) to implement ACS. Via a rigorous security analysis, we show that PACStack achieves security comparable to hardware-assisted shadow stacks without requiring dedicated hardware. We demonstrate that PACStack's performance overhead is small (≈3%).
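The chaining idea can be illustrated in a few lines. Python's hmac stands in for the ARM PA instructions, so this is a sketch of the concept rather than the PACStack implementation: each authentication token binds the current return address to the previous token, so an attacker cannot swap in or replay a stale return address without breaking the chain.

import hmac, hashlib

KEY = b"per-process key"  # PACStack uses an ARMv8.3-A PA key instead

def chain_token(ret_addr: int, prev_token: bytes) -> bytes:
    """auth_i = MAC(key, ret_i || auth_{i-1}): binds each return address
    to the whole chain of earlier ones (hmac stands in for the PA
    primitive; illustration only)."""
    msg = ret_addr.to_bytes(8, "little") + prev_token
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:8]

# Simulate nested calls: push a new token on each call.
tokens = [b"\x00" * 8]                    # auth_0: constant initial token
for ret in (0x400123, 0x400456):
    tokens.append(chain_token(ret, tokens[-1]))

# On return, recompute and compare; a tampered address breaks the chain.
ok = hmac.compare_digest(tokens[-1], chain_token(0x400456, tokens[-2]))
bad = hmac.compare_digest(tokens[-1], chain_token(0x400BAD, tokens[-2]))
print(ok, bad)  # True False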
Once you have registered, we will email you a link on Tuesday, October 19. When you click the link, you will be redirected to the event. The host will then start things off by outlining what the event will look like. Afterward, you'll see a visual grid of sessions that you can join; select the room for the session you want to attend.
Please register for this poster session via the CPI Cybersecurity Month 2021 page.