Advanced Research in Cryptography

Research

A Waterlog for Detecting and Tracing Synthetic Text from Large Language Models

We propose waterlogs, a new direction to detect and trace synthetic text outputs from large language models based on transparency logs. Waterlogs offer major categorical advantages over watermarking: they (1) allow for the inclusion of arbitrary metadata to facilitate tracing, (2) are publicly verifiable by third parties, and (3) operate in a distributed manner while remaining robust and efficient.

Waterlogs rely on a verifiable Hamming distance index, a novel data structure that we describe, to efficiently search multi-dimensional semantic hashes of natural language embeddings in a verifiable manner. This data structure may be of independent interest.

We implement a waterlog, which we call DREDGE, and benchmark it using synthetic text generated by GPT-2 1.5B and OPT-13B; embeddings are generated via OpenAI’s text-embedding-ada-002 model. We provide empirical benchmarks on the efficiency of appending text to the log and querying it for matches. We compare our results to watermarking and outline areas for further research.

@misc{cryptoeprint:2024/1424,
      author = {Brennon Brimhall and Orion Weller and Matthew Green and Ian Miers},
      title = {A Waterlog for Detecting and Tracing Synthetic Text from Large Language Models},
      howpublished = {Cryptology ePrint Archive, Paper 2024/1424},
      year = {2024},
      url = {https://eprint.iacr.org/2024/1424}
}

On Soundness Notions for Interactive Oracle Proofs

Interactive oracle proofs (IOPs) (Ben-Sasson et al., TCC 2016; Reingold et al., SICOMP 2021) have emerged as a powerful model for proof systems combining IP and PCP. While IOPs are not any more powerful than PCPs from a complexity theory perspective, their potential to create succinct proofs and arguments has been demonstrated by many recent constructions achieving better parameters such as total proof length, alphabet size, and query complexity. In this work, we establish new results on the relationship between various notions of soundness for IOPs. First, we formally generalize the notion of round-by-round soundness (Canetti et al., STOC 2019) and round-by-round knowledge soundness (Chiesa et al., TCC 2019). Given this generalization, we then examine its relationship to the notions of generalized special soundness (Attema et al., CRYPTO 2021) and generalized special unsoundness (Attema et al., TCC 2022). We show that:

  1. generalized special soundness implies generalized round-by-round soundness;
  2. generalized round-by-round knowledge soundness implies generalized special soundness;
  3. generalized special soundness does not imply generalized round-by-round knowledge soundness;
  4. generalized round-by-round soundness (resp., special unsoundness) is an upper bound (resp., a lower bound) on standard soundness, and this relationship is tight when the round-by-round soundness and special unsoundness errors are equal; and
  5. any special sound IOP can be transformed via (a variant of) the Fiat-Shamir transformation (in the Random Oracle Model) into a non-interactive proof that is adaptively sound in the Quantum Random Oracle Model.
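Restating items 1–3 compactly (all notions in their generalized form), the relations form a strict one-way hierarchy:

```latex
\[
  \text{RbR knowledge soundness} \Rightarrow \text{special soundness}
  \Rightarrow \text{RbR soundness},
\]
\[
  \text{special soundness} \nRightarrow \text{RbR knowledge soundness}.
\]
```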
@misc{cryptoeprint:2023/1256,
  author = {Alexander R. Block and Albert Garreta and Pratyush Ranjan Tiwari and Michał Zając},
  title = {On Soundness Notions for Interactive Oracle Proofs},
  howpublished = {Cryptology ePrint Archive, Paper 2023/1256},
  year = {2023},
  doi = {10.1007/978-981-99-8724-5_1},
  note = {\url{https://eprint.iacr.org/2023/1256}},
  url = {https://eprint.iacr.org/2023/1256}
}

Abuse-Resistant Location Tracking: Balancing Privacy and Safety in the Offline Finding Ecosystem

Location tracking accessories (or “tracking tags”) such as those sold by Apple, Samsung, and Tile, allow owners to track the location of their property and devices via offline tracking networks. The tracking protocols have been designed to ensure some level of user privacy against surveillance by the vendor. Such privacy mechanisms, however, seem to be at odds with the phenomenon of tracker-based stalking, where attackers use these very tags to monitor a target’s movements. Numerous such criminal incidents have been reported, and in response, manufacturers have chosen to weaken privacy guarantees in order to allow users to detect malicious stalker tags. In this work we show how to achieve an improved trade-off between user privacy and stalker detection within the constraints of existing tracking protocols. We implement our new protocol using families of list-decodable error-correcting codes, and give efficient algorithms for stalker detection under realistic conditions.
@misc{cryptoeprint:2023/1332,
      author = {Harry Eldridge and Gabrielle Beck and Matthew Green and Nadia Heninger and Abhishek Jain},
      title = {Abuse-Resistant Location Tracking: Balancing Privacy and Safety in the Offline Finding Ecosystem},
      howpublished = {Cryptology ePrint Archive, Paper 2023/1332},
      year = {2023},
      url = {https://eprint.iacr.org/2023/1332}
}

SoK: Privacy-Preserving Signatures

Modern security systems depend fundamentally on the ability of users to authenticate their communications to other parties in a network. Unfortunately, cryptographic authentication can substantially undermine the privacy of users. One possible solution to this problem is to use privacy-preserving cryptographic authentication. These protocols allow users to authenticate their communications without revealing their identity to the verifier. In the non-interactive setting, the most common protocols include blind, ring, and group signatures, each of which has been the subject of enormous research in the security and cryptography literature. These primitives are now being deployed at scale in major applications, including Intel’s SGX software attestation framework. The depth of the research literature and the prospect of large-scale deployment motivate us to systematize our understanding of the research in this area. This work provides an overview of these techniques, focusing on applications and efficiency.
@misc{cryptoeprint:2023/1039,
      author = {Alishah Chator and Matthew Green and Pratyush Ranjan Tiwari},
      title = {SoK: Privacy-Preserving Signatures},
      howpublished = {Cryptology ePrint Archive, Paper 2023/1039},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/1039}},
      url = {https://eprint.iacr.org/2023/1039}
}

hinTS: Threshold Signatures with Silent Setup

We propose hinTS — a new threshold signature scheme built on top of the widely used BLS signatures. Our scheme enjoys the following attractive features:

  • A silent setup process where the joint public key of the parties is computed as a deterministic function of their locally computed public keys.
  • Support for dynamic choice of thresholds and signers, after the silent setup, without further interaction.
  • Support for general access policies; in particular, native support for weighted thresholds with zero additional overhead over standard threshold setting.
  • Strong security guarantees, including proactive security and forward security.

We prove the security of our scheme in the algebraic group model and provide an implementation and extensive evaluation. Our scheme outperforms all prior proposals that aim to avoid distributed key generation in terms of aggregation time, signature size, and verification time. As an example, the aggregation time for 1000 signers is under 0.5 seconds, while both signing and verification are constant-time algorithms, taking roughly 1 ms and 17.5 ms respectively. The key technical contribution of our work involves the design of special-purpose succinct proofs to efficiently prove the well-formedness of aggregated public keys. Our solution uses public “hints” released by the signers as part of their public keys (hence the name hinTS).
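hinTS itself builds on BLS signatures and pairing-based hints, which a short sketch cannot reproduce. As background only, the threshold-reconstruction pattern that threshold BLS aggregation also relies on, Lagrange interpolation at zero, can be sketched with a dealer-based Shamir toy. The field modulus and parameters are illustrative, and note that hinTS specifically avoids this dealer/DKG step:

```python
import random

P = 2**61 - 1  # a Mersenne prime; illustrative field modulus

def share(secret: int, t: int, n: int):
    """Dealer-based Shamir t-of-n sharing (hinTS has NO dealer)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(123456789, t=3, n=5)
```

Any 3 of the 5 shares recover the secret; threshold BLS applies the same Lagrange coefficients "in the exponent" when aggregating partial signatures.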
@misc{cryptoeprint:2023/567,
      author = {Sanjam Garg and Abhishek Jain and Pratyay Mukherjee and Rohit Sinha and Mingyuan Wang and Yinuo Zhang},
      title = {hinTS: Threshold Signatures with Silent Setup},
      howpublished = {Cryptology ePrint Archive, Paper 2023/567},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/567}},
      url = {https://eprint.iacr.org/2023/567}
}

On the Possibility of a Backdoor in the Micali-Schnorr Generator

In this paper, we study both the implications and potential impact of backdoored parameters for two RSA-based pseudorandom number generators: the ISO-standardized Micali-Schnorr generator and a closely related design, the RSA PRG. We observe, contrary to common understanding, that the security of the Micali-Schnorr PRG is not tightly bound to the difficulty of inverting RSA. We show that the Micali-Schnorr construction remains secure even if one replaces RSA with a publicly evaluatable PRG, or a function modeled as an efficiently invertible random permutation. This implies that any cryptographic backdoor must somehow exploit the algebraic structure of RSA, rather than an attacker’s ability to invert RSA or the presence of secret keys. We exhibit two such backdoors in related constructions: a family of exploitable parameters for the RSA PRG, and a second vulnerable construction for a finite-field variant of Micali-Schnorr. We also observe that the parameters allowed by the ISO standard are incompletely specified, and allow insecure choices of exponent. Several of our backdoor constructions make use of lattice techniques, in particular multivariate versions of Coppersmith’s method for finding small solutions to polynomials modulo integers.
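The Micali-Schnorr iteration itself is simple to state: each step raises the state to the public exponent modulo an RSA modulus, keeps the high bits as the next state, and emits the low bits as output. A toy sketch with deliberately tiny, insecure, purely illustrative parameters (real instantiations use a large modulus, and the function name is ours):

```python
# Toy Micali-Schnorr PRG. Parameters are tiny and NOT secure.
p, q = 1000003, 1000033        # illustrative primes
n = p * q                      # RSA-style modulus
e = 5                          # public exponent, gcd(e, (p-1)(q-1)) = 1
N = n.bit_length()             # modulus bit length
r = N // 2                     # bits kept as the next state
out_bits = N - r               # bits emitted per iteration

def ms_prg(seed: int, rounds: int):
    state = seed % (1 << r)
    output = []
    for _ in range(rounds):
        v = pow(state, e, n)
        state = v >> out_bits                      # high bits -> next state
        output.append(v & ((1 << out_bits) - 1))   # low bits -> output
    return output
```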
@misc{cryptoeprint:2023/440,
      author = {Hannah Davis and Matthew Green and Nadia Heninger and Keegan Ryan and Adam Suhl},
      title = {On the Possibility of a Backdoor in the Micali-Schnorr Generator},
      howpublished = {Cryptology ePrint Archive, Paper 2023/440},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/440}},
      url = {https://eprint.iacr.org/2023/440}
}

Scalable Multiparty Garbling

Multiparty garbling is the most popular approach for constant-round secure multiparty computation (MPC). Despite being the focus of significant research effort, instantiating prior approaches to multiparty garbling results in constant-round MPC that cannot realistically accommodate large numbers of parties. In this work we present the first global-scale multiparty garbling protocol. The per-party communication complexity of our protocol decreases as the number of parties participating in the protocol increases, for the first time matching the asymptotic communication complexity of non-constant-round MPC protocols. Our protocol achieves malicious security in the honest-majority setting and relies on the hardness of the Learning Parity with Noise assumption.
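The multiparty, LPN-based protocol of the paper cannot be condensed here, but the object being constructed, a garbled gate, can. Below is a single-garbler Yao-style AND gate using SHA-256, with a zero-byte tag standing in for a real point-and-permute mechanism; all names and parameters are illustrative:

```python
import hashlib, os, random

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()  # 32 bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two random 16-byte labels per wire: index 0 encodes bit 0, index 1 bit 1.
labels = {w: (os.urandom(16), os.urandom(16)) for w in ("a", "b", "out")}

# Garbled table for AND: each row encrypts the correct output label
# (padded with 16 zero bytes as a validity tag) under both input labels.
table = []
for bit_a in (0, 1):
    for bit_b in (0, 1):
        pad = H(labels["a"][bit_a], labels["b"][bit_b])
        table.append(xor(pad, labels["out"][bit_a & bit_b] + bytes(16)))
random.shuffle(table)  # real schemes order rows via point-and-permute bits

def evaluate(la: bytes, lb: bytes) -> bytes:
    """Trial-decrypt each row; the zero tag marks the valid one."""
    pad = H(la, lb)
    for row in table:
        pt = xor(row, pad)
        if pt[16:] == bytes(16):
            return pt[:16]
    raise ValueError("no row decrypted")
```

Holding one label per input wire, the evaluator learns exactly one output label and nothing about the other wire values.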
@misc{cryptoeprint:2023/099,
    author = {Gabrielle Beck and Aarushi Goel and Aditya Hegde and Abhishek Jain and Zhengzhong Jin and Gabriel Kaptchuk},
    title = {Scalable Multiparty Garbling},
    howpublished = {Cryptology ePrint Archive, Paper 2023/099},
    year = {2023},
    note = {\url{https://eprint.iacr.org/2023/099}},
    url = {https://eprint.iacr.org/2023/099}
}

Efficient Set Membership Encryption and Applications

The emerging area of laconic cryptography [Cho et al., CRYPTO'17] involves the design of two-party protocols involving a sender and a receiver, where the receiver’s input is large. The key efficiency requirement is that the protocol communication complexity must be independent of the receiver’s input size. In recent years, many tasks have been studied under this umbrella, including laconic oblivious transfer (ℓOT). In this work, we introduce the notion of Set Membership Encryption (SME) - a new member in the area of laconic cryptography. SME allows a sender to encrypt to one recipient from a universe of receivers, while using a small digest from a large subset of receivers. A recipient is able to decrypt the message if and only if it is part of the large subset. We show that ℓOT can be derived from SME. We provide efficient constructions of SME using bilinear groups. Our solutions achieve orders of magnitude improvements in decryption times over the state-of-the-art (on ℓOT) and significant improvements overall in concrete efficiency over initial works in the area of laconic cryptography, albeit at the cost of worse asymptotics.
@inproceedings{10.1145/3576915.3623131,
author = {Green, Matthew and Jain, Abhishek and Van Laer, Gijs},
title = {Efficient Set Membership Encryption and Applications},
year = {2023},
isbn = {9798400700507},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3576915.3623131},
doi = {10.1145/3576915.3623131},
booktitle = {Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security},
pages = {1080–1092},
numpages = {13},
keywords = {bilinear diffie-hellman, laconic ot, multiparty computation, oblivious transfer, set membership encryption},
location = {Copenhagen, Denmark},
series = {CCS '23}
}

McFIL: Model Counting Functionality-Inherent Leakage

Protecting the confidentiality of private data and using it for useful collaboration have long been at odds. Modern cryptography is bridging this gap through rapid growth in secure protocols such as multi-party computation, fully-homomorphic encryption, and zero-knowledge proofs. However, even with provable indistinguishability or zero-knowledgeness, confidentiality loss from leakage inherent to the functionality may partially or even completely compromise secret values without ever falsifying proofs of security. In this work, we describe McFIL, an algorithmic approach and accompanying software implementation which automatically quantifies intrinsic leakage for a given functionality. Extending and generalizing the Chosen-Ciphertext attack framework of Beck et al. with a practical heuristic, our approach not only quantifies but maximizes functionality-inherent leakage using Maximum Model Counting within a SAT solver. As a result, McFIL automatically derives approximately-optimal adversary inputs that, when used in secure protocols, maximize information leakage of private values.
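A brute-force stand-in for the model-counting step conveys the idea: for a toy functionality (a comparison, as in Yao's millionaires' problem; this functionality and the tiny secret space are assumptions of the sketch, not the paper's benchmarks), the adversary picks the query whose output partitions the secret space most finely. McFIL performs this search with Maximum Model Counting inside a SAT solver rather than by enumeration:

```python
import math
from collections import Counter

SECRETS = range(16)              # toy 4-bit secret space

def f(secret: int, query: int) -> bool:
    return secret >= query       # toy functionality (a comparison)

def leakage_bits(query: int) -> float:
    """Expected information gain (bits) from observing f(s, query)."""
    counts = Counter(f(s, query) for s in SECRETS)
    n = len(SECRETS)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Adversary's best single query: the one splitting the space most evenly.
best = max(range(16), key=leakage_bits)
```

Here the best query bisects the secret space (1 bit leaked per round), so repeating the attack recovers the full secret in four protocol runs without ever violating the protocol's security proof.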

zkSaaS: ZK-SNARKs as a Service

A decade of active research has led to practical constructions of zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) that are now being used in a wide variety of applications. Despite this astonishing progress, overheads in proof generation time remain significant. In this work, we envision a world where consumers with low computational resources can outsource the task of proof generation to a group of untrusted servers in a privacy-preserving manner. The main requirement is that these servers should be able to collectively generate proofs at a faster speed (than the consumer). Towards this goal, we introduce a framework called zk-SNARKs-as-a-service (ZkSaaS) for faster computation of zk-SNARKs. Our framework allows for distributing proof computation across multiple servers such that each server is expected to run for a shorter duration than a single prover. Moreover, the privacy of the prover’s witness is ensured against any minority of colluding servers. We design custom protocols in this framework that can be used to obtain faster runtimes for widely used zk-SNARKs, such as Groth16 [EUROCRYPT 2016], Marlin [EUROCRYPT 2020], and Plonk [EPRINT 2019]. We implement a proof-of-concept zkSaaS for the Groth16 and Plonk provers. In comparison to generating these proofs on commodity hardware, we show that not only can we generate proofs for a larger number of constraints (without memory exhaustion), but we also obtain a ~22x speed-up when run with 128 parties for 2^25 constraints with Groth16 and 2^21 gates with Plonk.
@misc{cryptoeprint:2023/905,
      author = {Sanjam Garg and Aarushi Goel and Abhishek Jain and Guru-Vamsi Policharla and Sruthi Sekar},
      title = {$\mathsf{zkSaaS}$: Zero-Knowledge SNARKs as a Service},
      howpublished = {Cryptology ePrint Archive, Paper 2023/905},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/905}},
      url = {https://eprint.iacr.org/2023/905}
}

A Note on Non-Interactive Zero-Knowledge Proofs for NP from CDH

We build non-interactive zero-knowledge (NIZK) and ZAP arguments for all NP where soundness holds for infinitely-many security parameters, and against uniform adversaries, assuming the subexponential hardness of the Computational Diffie-Hellman (CDH) assumption. We additionally prove the existence of NIZK arguments with these same properties assuming the polynomial hardness of both CDH and the Learning Parity with Noise (LPN) assumption. In both cases, the CDH assumption does not require a group equipped with a pairing. Infinitely-often uniform security is a standard byproduct of commonly used non-black-box techniques that build on disjunction arguments on the (in)security of some primitive. In the course of proving our results, we develop a new variant of this non-black-box technique that yields improved guarantees: we obtain explicit constructions (previous works generally only obtained existential results) where security holds for a relatively dense set of security parameters (as opposed to an arbitrary infinite set of security parameters). We demonstrate that our technique can have applications beyond our main results.
@misc{cryptoeprint:2023/970,
      author = {Geoffroy Couteau and Abhishek Jain and Zhengzhong Jin and Willy Quach},
      title = {A Note on Non-Interactive Zero-Knowledge from CDH},
      howpublished = {Cryptology ePrint Archive, Paper 2023/970},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/970}},
      url = {https://eprint.iacr.org/2023/970}
}

Cryptography with Weights: MPC, Encryption and Signatures

The security of several cryptosystems rests on the trust assumption that a certain fraction of the parties are honest. This trust assumption has enabled a diverse range of cryptographic applications such as secure multiparty computation, threshold encryption, and threshold signatures. However, current and emerging practical use cases suggest that this paradigm of one-person-one-vote is outdated. In this work, we consider weighted cryptosystems where every party is assigned a certain weight and the trust assumption is that a certain fraction of the total weight is honest. This setting can be translated to the standard setting (where each party has a unit weight) via virtualization. However, this method is quite expensive, incurring a multiplicative overhead in the weight. We present new weighted cryptosystems with significantly better efficiency. Specifically, our proposed schemes incur only an additive overhead in weights.

  • We first present a weighted ramp secret-sharing scheme where the size of the secret share is as short as O(w) (where w corresponds to the weight). In comparison, Shamir’s secret sharing with virtualization requires secret shares of size w * lambda, where lambda is the security parameter.
  • Next, we use our weighted secret-sharing scheme to construct weighted versions of (semi-honest) secure multiparty computation (MPC), threshold encryption, and threshold signatures. All these schemes inherit the efficiency of our secret sharing scheme and incur only an additive overhead in the weights. Our weighted secret-sharing scheme is based on the Chinese remainder theorem. Interestingly, this secret-sharing scheme is non-linear and only achieves statistical privacy. These distinct features introduce several technical hurdles in applications to MPC and threshold cryptosystems. We resolve these challenges by developing several new ideas.
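The CRT connection can be sketched with a Mignotte-style toy: a party's weight shows up as the size of its pairwise-coprime modulus, and reconstruction succeeds once the product of the held moduli exceeds the secret. This is illustrative only; the paper's ramp scheme additionally randomizes shares to achieve statistical privacy, and the moduli below are assumptions of the sketch:

```python
# Toy CRT-based (Mignotte-style) secret sharing: heavier parties would
# hold larger moduli, so they contribute more "weight" to reconstruction.
MODULI = [101, 103, 107, 109, 113]        # pairwise coprime; illustrative

def share(secret: int):
    return [(m, secret % m) for m in MODULI]

def crt(pairs):
    """Garner-style incremental Chinese remainder combination."""
    x, M = 0, 1
    for m, r in pairs:
        inv = pow(M, -1, m)               # M and m are coprime (Python 3.8+)
        x = x + M * (((r - x) * inv) % m)
        M *= m
    return x, M

secret = 123456
shares = share(secret)
value, M3 = crt(shares[:3])   # 101*103*107 > secret: fully recoverable
low, M2 = crt(shares[:2])     # 101*103 < secret: only secret mod M2 leaks
```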
@misc{cryptoeprint:2022/1632,
      author = {Sanjam Garg and Abhishek Jain and Pratyay Mukherjee and Rohit Sinha and Mingyuan Wang and Yinuo Zhang},
      title = {Cryptography with Weights: MPC, Encryption and Signatures},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1632},
      year = {2022},
      note = {\url{https://eprint.iacr.org/2022/1632}},
      url = {https://eprint.iacr.org/2022/1632}
}

Correlation Intractability and SNARGs from Sub-exponential DDH

We provide the first constructions of SNARGs for Batch-NP and P based solely on the sub-exponential Decisional Diffie Hellman (DDH) assumption. Our schemes achieve poly-logarithmic proof sizes. Central to our results and of independent interest is a new construction of correlation-intractable hash functions for “small input” product relations verifiable in TC^0, based on sub-exponential DDH.
@misc{cryptoeprint:2022/1486,
      author = {Arka Rai Choudhuri and Sanjam Garg and Abhishek Jain and Zhengzhong Jin and Jiaheng Zhang},
      title = {Correlation Intractability and SNARGs from Sub-exponential DDH},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1486},
      year = {2022},
      note = {\url{https://eprint.iacr.org/2022/1486}},
      url = {https://eprint.iacr.org/2022/1486}
}

End-to-End Secure Messaging with Traceability Only for Illegal Content

As end-to-end encrypted messaging services become widely adopted, law enforcement agencies have increasingly expressed concern that such services interfere with their ability to maintain public safety. Indeed, there is a direct tension between preserving user privacy and enabling content moderation on these platforms. Recent research has begun to address this tension, proposing systems that purport to strike a balance between the privacy of “honest” users and traceability of “malicious” users. Unfortunately, these systems suffer from a lack of protection against malicious or coerced service providers. In this work, we address the privacy vs. content moderation question through the lens of pre-constrained cryptography [Ananth et al., ITCS 2022]. We introduce the notion of set pre-constrained (SPC) group signatures that guarantees security against malicious key generators. SPC group signatures offer the ability to trace users in messaging systems who originate pre-defined illegal content (such as child sexual abuse material), while providing security against malicious service providers. We construct concretely efficient protocols for SPC group signatures, and demonstrate the real-world feasibility of our approach via an implementation. The starting point for our solution is the recently introduced Apple PSI system, which we significantly modify to improve security and expand functionality.
@misc{cryptoeprint:2022/1643,
      author = {James Bartusek and Sanjam Garg and Abhishek Jain and Guru-Vamsi Policharla},
      title = {End-to-End Secure Messaging with Traceability Only for Illegal Content},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1643},
      year = {2022},
      note = {\url{https://eprint.iacr.org/2022/1643}},
      url = {https://eprint.iacr.org/2022/1643}
}

Threshold Signatures in the Multiverse

We introduce a new notion of multiverse threshold signatures (MTS). In an MTS scheme, multiple universes – each defined by a set of (possibly overlapping) signers, their weights, and a specific security threshold – can co-exist. A universe can be (adaptively) created via a non-interactive asynchronous setup. Crucially, each party in the multiverse holds constant-sized keys and releases compact signatures with size and computation time both independent of the number of universes. Given sufficient partial signatures over a message from the members of a specific universe, an aggregator can produce a short aggregate signature relative to that universe. We construct an MTS scheme building on BLS signatures. Our scheme is practical, and can be used to reduce bandwidth complexity and computational costs in decentralized oracle networks. As an example data point, consider a multiverse containing 2000 nodes and 100 universes (parameters inspired by Chainlink’s use in the wild) each of which contains arbitrarily large subsets of nodes and arbitrary thresholds. Each node computes and outputs 1 group element as its partial signature; the aggregator performs under 0.7 seconds of work for each aggregate signature, and the final signature of size 192 bytes takes 6.4 ms (or 198K EVM gas units) to verify. For this setting, prior approaches, when used to construct MTS, yield schemes that have one of the following drawbacks: (i) partial signatures that are 97× larger, (ii) aggregation times that are 311× worse, or (iii) signature sizes 39× larger and verification gas costs 3.38× higher. We also provide an open-source implementation and a detailed evaluation.
@misc{cryptoeprint:2023/063,
      author = {Leemon Baird and Sanjam Garg and Abhishek Jain and Pratyay Mukherjee and Rohit Sinha and Mingyuan Wang and Yinuo Zhang},
      title = {Threshold Signatures in the Multiverse},
      howpublished = {Cryptology ePrint Archive, Paper 2023/063},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/063}},
      url = {https://eprint.iacr.org/2023/063}
}

Credibility in Private Set Membership

A private set membership (PSM) protocol allows a “receiver” to learn whether its input x is contained in a large database DB held by a “sender”. In this work, we define and construct credible private set membership (C-PSM) protocols: in addition to the conventional notions of privacy, C-PSM provides a soundness guarantee that it is hard for a sender (that does not know x) to convince the receiver that x is in DB. Furthermore, the communication complexity must be logarithmic in the size of DB. We provide 2-round (i.e., round-optimal) C-PSM constructions based on standard assumptions:

  • We present a black-box construction in the plain model based on DDH or LWE.
  • Next, we consider protocols that support predicates f beyond string equality, i.e., the receiver can learn if there exists w in DB such that f(x, w) = 1. We present two results with transparent setups: (1) A black-box protocol, based on DDH or LWE, for the class of NC functions f which are efficiently searchable. (2) An LWE-based construction for all bounded-depth circuits. The only non-black-box use of cryptography in this construction is through the bootstrapping procedure in fully homomorphic encryption. As an application, our protocols can be used to build enhanced round-optimal leaked password notification services, where unlike existing solutions, a dubious sender cannot fool a receiver into changing its password.
@inproceedings{pkc-2023-32806,
  title={Credibility in Private Set Membership},
  publisher={Springer-Verlag},
  doi={10.1007/978-3-031-31371-4_6},
  author={Sanjam Garg and Mohammad Hajiabadi and Abhishek Jain and Zhengzhong Jin and Omkant Pandey and Sina Shiehian},
  year=2023
}

Time-Deniable Signatures

In this work we propose time-deniable signatures (TDS), a new primitive that facilitates deniable authentication in protocols such as DKIM-signed email. As with traditional signatures, TDS provide strong authenticity for message content, at least for a sender-chosen period of time. Once this time period has elapsed, however, time-deniable signatures can be forged by any party who obtains a signature. This forgery property ensures that signatures serve a useful authentication purpose for a bounded time period, while also allowing signers to plausibly disavow the creation of older signed content. Most critically, and unlike many past proposals for deniable authentication, TDS do not require interaction with the receiver or the deployment of any persistent cryptographic infrastructure or services beyond the signing process (e.g., APIs to publish secrets or author timestamp certificates). We first investigate the security definitions for time-deniability, demonstrating that past definitional attempts are insufficient (and indeed, allow for broken signature schemes). We then propose an efficient construction of TDS based on well-studied assumptions.
@misc{cryptoeprint:2022/1018,
    author = {Gabrielle Beck and Arka Rai Choudhuri and Matthew Green and Abhishek Jain and Pratyush Ranjan Tiwari},
    title = {Time-Deniable Signatures},
    howpublished = {Cryptology ePrint Archive, Paper 2022/1018},
    year = {2022},
    note = {\url{https://eprint.iacr.org/2022/1018}},
    url = {https://eprint.iacr.org/2022/1018}
}

Efficient Proofs of Software Exploitability for Real-world Processors

We consider the problem of proving in zero-knowledge the existence of vulnerabilities in executables compiled to run on real-world processors. We demonstrate that it is practical to prove knowledge of real exploits for real-world processor architectures without the need for source code and without limiting our consideration to narrow vulnerability classes. To achieve this, we devise a novel circuit compiler and a toolchain that produce highly optimized, non-interactive zero-knowledge proofs for programs executed on the MSP430, an ISA commonly used in embedded hardware. Our toolchain applies a number of novel optimizations to construct efficient proofs for program binaries. To demonstrate the capability of our system, we test our toolchain by constructing proofs for challenges in the Microcorruption capture-the-flag exercises.
@misc{cryptoeprint:2022/1223,
      author = {Matthew Green and Mathias Hall-Andersen and Eric Hennenfent and Gabriel Kaptchuk and Benjamin Perez and Gijs Van Laer},
      title = {Efficient Proofs of Software Exploitability for Real-world Processors},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1223},
      year = {2022},
      note = {\url{https://eprint.iacr.org/2022/1223}},
      url = {https://eprint.iacr.org/2022/1223}
}

Squint Hard Enough: Evaluating Perceptual Hashing with Machine Learning

Many online communications systems use perceptual hash matching systems to detect illicit files in user content. These systems employ specialized perceptual hash functions such as Microsoft’s PhotoDNA or Facebook’s PDQ to produce a compact digest of an image file that can be approximately compared to a database of known illicit-content digests. Recently, several proposals have suggested that hash-based matching systems be incorporated into client-side and end-to-end encrypted (E2EE) systems: in these designs, files that register as illicit content will be reported to the provider, while the remaining content will be sent confidentially. By using perceptual hashing to determine confidentiality guarantees, this new setting significantly changes the function of existing perceptual hashing – thus motivating the need to evaluate these functions from an adversarial perspective, using their perceptual capabilities against them. For example, an attacker may attempt to trigger a match on innocuous, but politically charged, content in an attempt to stifle speech. In this work we develop threat models for perceptual hashing algorithms in an adversarial setting, and present attacks against the two most widely deployed algorithms: PhotoDNA and PDQ. Our results show that it is possible to efficiently generate targeted second-preimage attacks in which an attacker creates a variant of some source image that matches some target digest. As a complement to this main result, we also further investigate the production of images that facilitate detection avoidance attacks, continuing a recent investigation of Jain et al. Our work shows that existing perceptual hash functions are likely insufficiently robust to survive attacks in this new setting.
@misc{cryptoeprint:2021/1531,
      author = {Jonathan Prokos and Tushar M. Jois and Neil Fendley and Roei Schuster and Matthew Green and Eran Tromer and Yinzhi Cao},
      title = {Squint Hard Enough: Evaluating Perceptual Hashing with Machine Learning},
      howpublished = {Cryptology ePrint Archive, Paper 2021/1531},
      year = {2021},
      url = {https://eprint.iacr.org/2021/1531}
}
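
The matching step described above reduces to a Hamming-distance comparison between fixed-length digests. The sketch below illustrates this; PDQ digests are 256 bits, but the threshold value here is illustrative rather than an official PhotoDNA/PDQ parameter:

```python
# Toy sketch of perceptual-hash matching: digests are fixed-length bit strings,
# and a candidate image "matches" a database entry when the Hamming distance
# between their digests falls below a threshold. The threshold below is
# illustrative, not a deployed PhotoDNA/PDQ parameter.
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two equal-length digests."""
    return bin(a ^ b).count("1")

def matches(digest: int, database: list[int], threshold: int = 31) -> bool:
    """True if any known digest is within `threshold` bits of `digest`."""
    return any(hamming(digest, known) <= threshold for known in database)
```

In these terms, a targeted second-preimage attack corresponds to perturbing a source image until its digest lands within the threshold of a chosen target digest.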

SoK: Cryptographic Confidentiality of Data on Mobile Devices

Mobile devices have become an indispensable component of modern life. Their high storage capacity gives these devices the capability to store vast amounts of sensitive personal data, which makes them a high-value target: these devices are routinely stolen by criminals for data theft, and are increasingly viewed by law enforcement agencies as a valuable source of forensic data. Over the past several years, providers have deployed a number of advanced cryptographic features intended to protect data on mobile devices, even in the strong setting where an attacker has physical access to a device. Many of these techniques draw from the research literature, but have been adapted to this entirely new problem setting. This involves a number of novel challenges, which are incompletely addressed in the literature. In this work, we outline those challenges, and systematize the known approaches to securing user data against extraction attacks. Our work proposes a methodology that researchers can use to analyze cryptographic data confidentiality for mobile devices. We evaluate the existing literature for securing devices against data extraction adversaries with powerful capabilities including access to devices and to the cloud services they rely on. We then analyze existing mobile device confidentiality measures to identify research areas that have not received proper attention from the community and represent opportunities for future research.
@article{zinkus2022sok,
title={SoK: Cryptographic Confidentiality of Data on Mobile Devices}, 
author={Zinkus, Maximilian and Jois, Tushar M and Green, Matthew},
journal={Proceedings on Privacy Enhancing Technologies}, 
volume={2022}, 
number={1}, 
pages={586--607}, 
year={2022}
}

India’s Aadhaar Biometric ID: Structure, Security, and Vulnerabilities

India’s Aadhaar is the largest biometric identity system in history, designed to help deliver subsidies, benefits, and services to India’s 1.36 billion residents. The Unique Identification Authority of India (UIDAI) is responsible for providing each resident (not each citizen) with a distinct identity—a 12-digit Aadhaar number—using their biometric and demographic details. We provide the first comprehensive description of the Aadhaar infrastructure, collating information across thousands of pages of public documents and releases, as well as direct discussions with Aadhaar developers. Critically, we describe the first known cryptographic issue within the system, and discuss how a workaround prevents it from being exploitable at scale. Further, we categorize and rate various security and privacy limitations and the corresponding threat actors, examine the legitimacy of alleged security breaches, and discuss improvements and mitigation strategies.
@InProceedings{10.1007/978-3-031-18283-9_34,
author="Tiwari, Pratyush Ranjan
and Agarwal, Dhruv
and Jain, Prakhar
and Dasgupta, Swagam
and Datta, Preetha
and Reddy, Vineet
and Gupta, Debayan",
editor="Eyal, Ittay
and Garay, Juan",
title="India's ``Aadhaar'' Biometric ID: Structure, Security, and Vulnerabilities",
booktitle="Financial Cryptography and Data Security",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="672--693",
isbn="978-3-031-18283-9"
}

One-Time Programs from Commodity Hardware

One-time programs, originally formulated by Goldwasser et al. [CRYPTO'08], are a powerful cryptographic primitive with compelling applications. Known solutions for one-time programs, however, require specialized secure hardware that is not widely available (or, alternatively, access to blockchains and very strong cryptographic tools). In this work we investigate the possibility of realizing one-time programs from a recent and now more commonly available hardware functionality: the counter lockbox. A counter lockbox is a stateful functionality that protects an encryption key under a user-specified password, and enforces a limited number of incorrect guesses. Counter lockboxes have become widely available in consumer devices and cloud platforms. We show that counter lockboxes can be used to realize one-time programs for general functionalities. We develop a number of techniques to reduce the number of counter lockboxes required for our constructions, that may be of independent interest.
@misc{cryptoeprint:2022/1257,
      author = {Harry Eldridge and Aarushi Goel and Matthew Green and Abhishek Jain and Maximilian Zinkus},
      title = {One-Time Programs from Commodity Hardware},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1257},
      year = {2022},
      url = {https://eprint.iacr.org/2022/1257}
}
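
The counter lockbox functionality can be modeled as a small stateful object. The following is a software illustration of the interface only; real lockboxes enforce the guess limit in tamper-resistant hardware or at a cloud provider, and the class and parameter names here are illustrative:

```python
import hashlib
import os

# Software model of a counter lockbox: stores a key under a password and
# permanently locks itself after a bounded number of wrong guesses. This
# sketch only illustrates the stateful functionality that the one-time
# program constructions build on.
class CounterLockbox:
    def __init__(self, password: str, key: bytes, max_guesses: int = 10):
        self._salt = os.urandom(16)
        self._check = hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 10_000)
        self._key = key
        self._remaining = max_guesses

    def open(self, password: str) -> bytes:
        if self._remaining <= 0:
            raise PermissionError("lockbox permanently locked")
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 10_000)
        if attempt != self._check:
            self._remaining -= 1
            raise ValueError("wrong password")
        return self._key
```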

Secure Multiparty Computation with Free Branching

We study secure multi-party computation (MPC) protocols for branching circuits that contain multiple sub-circuits (i.e., branches), where the output of the circuit is that of a single “active” branch. Crucially, the identity of the active branch must remain hidden from the protocol participants. While such circuits can be securely computed by evaluating each branch and then multiplexing the output, such an approach incurs a communication cost linear in the size of the entire circuit. To alleviate this, a series of recent works have investigated the problem of reducing the communication cost of branching executions inside MPC (without relying on fully homomorphic encryption). Most notably, the stacked garbling paradigm [Heath and Kolesnikov, CRYPTO'20] yields garbled circuits for branching circuits whose size only depends on the size of the largest branch. Presently, however, it is not known how to obtain similar communication improvements for secure computation involving more than two parties. In this work, we provide a generic framework for branching multi-party computation that supports any number of parties. The communication complexity of our scheme is proportional to the size of the largest branch and the computation is linear in the size of the entire circuit. We provide an implementation and benchmarks to demonstrate the practicality of our approach.
@InProceedings{10.1007/978-3-031-06944-4_14,
author="Goel, Aarushi
and Hall-Andersen, Mathias
and Hegde, Aditya
and Jain, Abhishek",
editor="Dunkelman, Orr
and Dziembowski, Stefan",
title="Secure Multiparty Computation with Free Branching",
booktitle="Advances in Cryptology -- EUROCRYPT 2022",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="397--426",
isbn="978-3-031-06944-4"
}

Attaining GOD beyond Honest Majority with Friends and Foes

In the classical notion of multiparty computation (MPC), an honest party learning private inputs of others, either as a part of protocol specification or due to a malicious party’s unspecified messages, is not considered a potential breach. Several works in the literature exploit this seemingly minor loophole to achieve the strongest security of guaranteed output delivery via a trusted third party, which nullifies the purpose of MPC. Alon et al. (CRYPTO 2020) presented the notion of Friends and Foes (FaF) security, which accounts for such undesired leakage towards honest parties by modeling them as semi-honest (friends) who do not collude with malicious parties (foes). With real-world applications in mind, it is more realistic to assume that parties are semi-honest rather than completely honest; hence, it is imperative to design efficient protocols conforming to the FaF security model. Our contributions are not only motivated by the practical viewpoint, but also consider the theoretical aspects of FaF security. We prove the necessity of semi-honest oblivious transfer for FaF-secure protocols with optimal resiliency. On the practical side, we present QuadSquad, a ring-based 4PC protocol, which achieves fairness and GOD in the FaF model, with an optimal corruption of 1 malicious and 1 semi-honest party. QuadSquad is, to the best of our knowledge, the first practically efficient FaF secure protocol with optimal resiliency. Its performance is comparable to the state-of-the-art dishonest majority protocols while improving the security guarantee from abort to fairness and GOD. Further, QuadSquad elevates the security by tackling a stronger adversarial model over the state-of-the-art honest-majority protocols, while offering a comparable performance for the input-dependent computation. We corroborate these claims by benchmarking the performance of QuadSquad.
We also consider the application of liquidity matching that deals with highly sensitive financial transaction data, where FaF security is apt. We design a range of FaF secure building blocks to securely realize liquidity matching as well as other popular applications such as privacy-preserving machine learning (PPML). Inclusion of these blocks makes QuadSquad a comprehensive framework.
@misc{cryptoeprint:2022/1207,
      author = {Aditya Hegde and Nishat Koti and Varsha Bhat Kukkala and Shravani Patil and Arpita Patra and Protik Paul},
      title = {Attaining GOD Beyond Honest Majority With Friends and Foes},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1207},
      year = {2022},
      url = {https://eprint.iacr.org/2022/1207}
}

Pre-Constrained Encryption

In all existing encryption systems, the owner of the master secret key has the ability to decrypt all ciphertexts. In this work, we propose a new notion of pre-constrained encryption (PCE) where the owner of the master secret key does not have “full” decryption power. Instead, it is constrained in a pre-specified manner during system setup. We present formal definitions and constructions of PCE, and discuss societal applications and implications for some well-studied cryptographic primitives.
@inproceedings{ananth2022pre,
  title={Pre-Constrained Encryption},
  author={Ananth, Prabhanjan and Jain, Abhishek and Jin, Zhengzhong and Malavolta, Giulio},
  booktitle={13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  year={2022},
  organization={Schloss Dagstuhl-Leibniz-Zentrum f{\"u}r Informatik}
}

Indistinguishability Obfuscation via Mathematical Proofs of Equivalence

Over the last decade, indistinguishability obfuscation (iO) has emerged as a seemingly omnipotent primitive in cryptography. Moreover, recent breakthrough work has demonstrated that iO can be realized from well-founded assumptions. A thorn to all this remarkable progress is a limitation of all known constructions of general-purpose iO: the security reduction incurs a loss that is exponential in the input length of the function. This “input-length barrier” to iO stems from the non-falsifiability of the iO definition and is discussed in folklore as being possibly inherent. It has many negative consequences; notably, constructing iO for programs with inputs of unbounded length remains elusive due to this barrier. We present a new framework aimed towards overcoming the input-length barrier. Our approach relies on short mathematical proofs of functional equivalence of circuits (and Turing machines) to avoid the brute-force “input-by-input” check employed in prior works.

  • We show how to obfuscate circuits that have efficient proofs of equivalence in Propositional Logic with a security loss independent of input length.
  • Next, we show how to obfuscate Turing machines with unbounded length inputs, whose functional equivalence can be proven in Cook’s Theory PV.
  • Finally, we demonstrate applications of our results to succinct non-interactive arguments and witness encryption, and provide guidance on using our techniques for building new applications. To realize our approach, we depart from prior work and develop a new gate-by-gate obfuscation template that preserves the topology of the input circuit.
@misc{cryptoeprint:2022/1430,
      author = {Abhishek Jain and Zhengzhong Jin},
      title = {Indistinguishability Obfuscation via Mathematical Proofs of Equivalence},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1430},
      year = {2022},
      url = {https://eprint.iacr.org/2022/1430}
}

Succinct Zero Knowledge for Floating Point Computations

Steganography-Free Zero Knowledge

We revisit the well-studied problem of preventing steganographic communication in multi-party communications. While this is known to be a provably impossible task, we propose a new model that allows circumventing this impossibility. In our model, the parties first publish a single message during an honest non-interactive pre-processing phase and then later interact in an execution phase. We show that in this model, it is indeed possible to prevent any steganographic communication in zero-knowledge protocols. Our solutions rely on standard cryptographic assumptions.
@misc{cryptoeprint:2022/1263,
      author = {Behzad Abdolmaleki and Nils Fleischhacker and Vipul Goyal and Abhishek Jain and Giulio Malavolta},
      title = {Steganography-Free Zero-Knowledge},
      howpublished = {Cryptology ePrint Archive, Paper 2022/1263},
      year = {2022},
      url = {https://eprint.iacr.org/2022/1263}
}

Order-C Secure Multiparty Computation for Highly Repetitive Circuits

Running secure multiparty computation (MPC) protocols with hundreds or thousands of players would allow leveraging large volunteer networks (such as blockchains and Tor) and help justify honest majority assumptions. However, most existing protocols have at least a linear (multiplicative) dependence on the number of players, making scaling difficult. Known protocols with asymptotic efficiency independent of the number of parties (excluding additive factors) require expensive circuit transformations that induce large overheads. We observe that the circuits used in many important applications of MPC such as training algorithms used to create machine learning models have a highly repetitive structure. We formalize this class of circuits and propose an MPC protocol that achieves O(|C|) total complexity for this class. We implement our protocol and show that it is practical and outperforms O(n|C|) protocols for modest numbers of players.
@inproceedings{BGJK21,
  author    = {Gabrielle Beck and
              Aarushi Goel and
              Abhishek Jain and 
              Gabriel Kaptchuk},
  title     = {Order-C Secure Multiparty Computation for Highly Repetitive Circuits},
  booktitle = {{EUROCRYPT} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12697},
  pages     = {663--693},
  publisher = {Springer},
  year      = {2021}}

Abuse Resistant Law Enforcement Access Systems

The increasing deployment of end-to-end encrypted communications services has ignited a debate between technology firms and law enforcement agencies over the need for lawful access to encrypted communications. Unfortunately, existing solutions to this problem suffer from serious technical risks, such as the possibility of operator abuse and theft of escrow key material. In this work we investigate the problem of constructing law enforcement access systems that mitigate the possibility of unauthorized surveillance. We first define a set of desirable properties for an abuse-resistant law enforcement access system (ARLEAS), and motivate each of these properties. We then formalize these definitions in the Universal Composability (UC) framework, and present two main constructions that realize this definition. The first construction enables prospective access, allowing surveillance only if encryption occurs after a warrant has been issued and activated. The second, more powerful construction, allows retrospective access to communications that occurred prior to a warrant’s issuance. To illustrate the technical challenge of constructing the latter type of protocol, we conclude by investigating the minimal assumptions required to realize these systems.
@inproceedings{GKV21,
  author    = {Matthew Green and
               Gabriel Kaptchuk and
               Gijs Van Laer},
  title     = {Abuse Resistant Law Enforcement Access Systems},
  booktitle = {{EUROCRYPT} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12698},
  pages     = {553--583},
  publisher = {Springer},
  year      = {2021}}

Non-Interactive Zero Knowledge from Sub-exponential DDH

We provide the first constructions of non-interactive zero-knowledge and Zap arguments for NP based on the sub-exponential hardness of Decisional Diffie-Hellman against polynomial time adversaries (without use of groups with pairings). Central to our results, and of independent interest, is a new notion of interactive trapdoor hashing protocols.
@inproceedings{JJ21,
  author    = {Abhishek Jain and
              Zhengzhong Jin},
  title     = {Non-Interactive Zero Knowledge from Sub-exponential DDH},
  booktitle = {{EUROCRYPT} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12696},
  pages     = {3--32},
  publisher = {Springer},
  year      = {2021}}

Unbounded Multi-Party Computation from Learning with Errors

We consider the problem of round-optimal unbounded MPC: in the first round, parties publish a message that depends only on their input. In the second round, any subset of parties can jointly and securely compute any function f over their inputs in a single round of broadcast. We do not impose any a-priori bound on the number of parties nor on the size of the functions that can be computed. Our main result is a semi-honest two-round protocol for unbounded MPC in the plain model from the hardness of the standard learning with errors (LWE) problem. Prior work in the same setting assumes the hardness of problems over bilinear maps. Thus, our protocol is the first example of unbounded MPC that is post-quantum secure. The central ingredient of our protocol is a new scheme of attribute-based secure function evaluation (AB-SFE) with public decryption. Our construction combines techniques from the realm of homomorphic commitments with delegation of lattice basis. We believe that such a scheme may find further applications in the future.
@inproceedings{AJJM21,
  author    = {Prabhanjan Ananth and
               Abhishek Jain and
               Zhengzhong Jin and
               Giulio Malavolta},
  title     = {Unbounded Multi-Party Computation from Learning with Errors},
  booktitle = {{EUROCRYPT} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12697},
  pages     = {754--781},
  publisher = {Springer},
  year      = {2021}}

Fluid MPC: Secure Multiparty Computation with Dynamic Participants

Existing approaches to secure multiparty computation (MPC) require all participants to commit to the entire duration of the protocol. As interest in MPC continues to grow, it is inevitable that there will be a desire to use it to evaluate increasingly complex functionalities, resulting in computations spanning several hours or days. Such scenarios call for a dynamic participation model for MPC where participants have the flexibility to go offline as needed and (re)join when they have available computational resources. Such a model would also democratize access to privacy-preserving computation by facilitating an “MPC-as-a-service” paradigm—the deployment of MPC in volunteer-operated networks (such as blockchains, where dynamism is inherent) that perform computation on behalf of clients. In this work, we initiate the study of fluid MPC, where parties can dynamically join and leave the computation. The minimum commitment required from each participant is referred to as fluidity, measured in the number of rounds of communication that it must stay online. Our contributions are threefold:

  • We provide a formal treatment of fluid MPC, exploring various possible modeling choices.
  • We construct information-theoretic fluid MPC protocols in the honest-majority setting. Our protocols achieve maximal fluidity, meaning that a party can exit the computation after receiving and sending messages in one round.
  • We implement our protocol and test it in multiple network settings.
@inproceedings{CGGJK21,
  author    = {Arka Rai Choudhuri and
               Aarushi Goel and
               Matthew Green and
               Abhishek Jain and
               Gabriel Kaptchuk},
  title     = {Fluid MPC: Secure Multiparty Computation with Dynamic Participants},
  booktitle = {{CRYPTO} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12826},
  pages     = {94--123},
  publisher = {Springer},
  year      = {2021}}

Non-Interactive Batch Arguments for NP from Standard Assumptions

We study the problem of designing non-interactive batch arguments for NP. Such an argument system allows an efficient prover to prove multiple NP statements, with a proof size smaller than the combined witness length. We provide the first construction of such an argument system for NP in the common reference string model based on standard cryptographic assumptions. Prior works either require non-standard assumptions (or the random oracle model) or can only support private verification. At the heart of our result is a new dual-mode interactive batch argument system for NP. We show how to apply the correlation-intractability framework for Fiat-Shamir, which has primarily been applied to proof systems, to such interactive arguments.
@inproceedings{CJJ21a,
  author    = {Arka Rai Choudhuri and
               Abhishek Jain and
               Zhengzhong Jin},
  title     = {Non-Interactive Batch Arguments for NP from Standard Assumptions},
  booktitle = {{CRYPTO} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12828},
  pages     = {394--423},
  publisher = {Springer},
  year      = {2021}}

On Actively-Secure Elementary MPC Reductions

We introduce the notion of elementary MPC reductions that allow us to securely compute a functionality f by making a single call to a constant-degree “non-cryptographic” functionality g without requiring any additional interaction. Roughly speaking, “non-cryptographic” means that g does not make use of cryptographic primitives, though the parties can locally call such primitives. Classical MPC results yield such elementary reductions in various cases including the setting of passive security with full corruption threshold t < n (Yao, FOCS'86; Beaver, Micali, and Rogaway, STOC'90), the setting of full active security against a corrupted minority t < n/2 (Damgård and Ishai, Crypto'05), and, for NC1 functionalities, even for the setting of full active (information-theoretic) security with full corruption threshold of t < n (Ishai and Kushilevitz, FOCS'00). This leaves open the existence of an elementary reduction that achieves full active security in the dishonest majority setting for all efficiently computable functions. Our main result shows that such a reduction is unlikely to exist. Specifically, the existence of a computationally secure elementary reduction that makes black-box use of a PRG and achieves a very weak form of partial fairness (e.g., that holds only when the first party is not corrupted) would allow us to realize any efficiently-computable function by a constant-round protocol that achieves a non-trivial notion of information-theoretic passive security. The existence of the latter is a well-known 3-decade old open problem in information-theoretic cryptography (Beaver, Micali, and Rogaway, STOC'90). On the positive side, we observe that this barrier can be bypassed under any of the following relaxations: (1) non-black-box use of a pseudorandom generator; (2) weaker security guarantees such as security with identifiable abort; or (3) an additional round of communication with the functionality g.
@inproceedings{AG21,
author={Benny Applebaum and Aarushi Goel},
editor={Kobbi Nissim and Brent Waters},
title={On Actively-Secure Elementary {MPC} Reductions},
booktitle = {Theory of Cryptography - 19th International Conference, {TCC} 2021, Raleigh, NC, USA, November 8-11, 2021, Proceedings, Part {I}},
series    = {Lecture Notes in Computer Science},
volume    = {13042},
pages     = {717--749},
publisher = {Springer},
year      = {2021}}

Oblivious Transfer from Trapdoor Permutations in Minimal Rounds

Oblivious transfer (OT) is a foundational primitive within cryptography owing to its connection with secure computation. One of the oldest constructions of oblivious transfer was from certified trapdoor permutations (TDPs). However, several decades later, we do not know if a similar construction can be obtained from TDPs in general. In this work, we study the problem of constructing round-optimal oblivious transfer from trapdoor permutations. In particular, we obtain the following new results (in the plain model) relying on TDPs in a black-box manner:

  • Three-round oblivious transfer protocol that guarantees indistinguishability-security against malicious senders (and semi-honest receivers).
  • Four-round oblivious transfer protocol secure against malicious adversaries with black-box simulation-based security. By combining our second result with an already known compiler, we obtain the first round-optimal 2-party computation protocol that relies in a black-box way on TDPs. A key technical tool underlying our results is a new primitive we call dual witness encryption (DWE) that may be of independent interest.
@inproceedings{CCGJO21,
  author    = {Arka Rai Choudhuri and
              Michele Ciampi and
              Vipul Goyal and
              Abhishek Jain and
              Rafail Ostrovsky},
  editor    = {Kobbi Nissim and
              Brent Waters},
  title     = {Oblivious Transfer from Trapdoor Permutations in Minimal Rounds},
  booktitle = {Theory of Cryptography - 19th International Conference, {TCC} 2021,
              Raleigh, NC, USA, November 8-11, 2021, Proceedings, Part {II}},
  series    = {Lecture Notes in Computer Science},
  volume    = {13043},
  pages     = {518--549},
  publisher = {Springer},
  year      = {2021},
  url       = {https://doi.org/10.1007/978-3-030-90453-1\_18},
  doi       = {10.1007/978-3-030-90453-1\_18},
}

On Communication Models and Best-Achievable Security in Two-Round MPC

Recently, a sequence of works have made strong advances in two-round (i.e., round-optimal) secure multi-party computation (MPC). In the honest-majority setting – the focus of this work – Ananth et al. [CRYPTO’18, EC’19], Applebaum et al. [TCC’18, EC’19] and Garg et al. [TCC’18] have established the feasibility of general two-round MPC in standard communication models involving broadcast (BC) and private point-to-point (P2P) channels. In this work, we set out to understand what features of the communication model are necessary for these results, and more broadly the design of two-round MPC. Focusing our study on the plain model – the most natural model for honest-majority MPC – we obtain the following results:

  • Dishonest majority from Honest majority: In the two round setting, honest-majority MPC and dishonest-majority MPC are surprisingly close, and often equivalent. This follows from our results that the former implies 2-message oblivious transfer, in many settings. (i) We show that without private point-to-point (P2P) channels, i.e., when we use only broadcast (BC) channels, honest-majority MPC implies 2-message oblivious transfer. (ii) Furthermore, this implication holds even when we use both P2P and BC channels, provided that the MPC protocol is robust against “fail-stop” adversaries.
  • Best-Achievable Security: While security with guaranteed output delivery (and even fairness) against malicious adversaries is impossible in two rounds, nothing is known with regards to the “next best” security notion, namely, security with identifiable abort (IA). We show that IA is also impossible to achieve with honest-majority even if we use both P2P and BC channels. However, if we replace P2P channels with a “bare” (i.e., untrusted) public-key infrastructure (PKI), then even security with guaranteed output delivery (and hence IA) is possible to achieve. These results “explain” that the reliance on P2P channels (together with BC) in the recent two-round protocols in the plain model was in fact necessary, and that these protocols couldn’t have achieved a stronger security guarantee, namely, IA. Overall, our results (put together with prior works) fully determine the best-achievable security for honest-majority MPC in different communication models in two rounds. As a consequence, they yield the following hierarchy of communication models: BC < P2P < BC + P2P < BC + PKI. This shows that BC channel is the weakest communication model, and that BC + PKI model is strictly stronger than BC + P2P model.
@inproceedings{GJPR21,
  author    = {Aarushi Goel and
              Abhishek Jain and
              Manoj Prabhakaran and
              Rajeev Raghunath},
  editor    = {Kobbi Nissim and
              Brent Waters},
  title     = {On Communication Models and Best-Achievable Security in Two-Round
              {MPC}},
  booktitle = {Theory of Cryptography - 19th International Conference, {TCC} 2021,
              Raleigh, NC, USA, November 8-11, 2021, Proceedings, Part {II}},
  series    = {Lecture Notes in Computer Science},
  volume    = {13043},
  pages     = {97--128},
  publisher = {Springer},
  year      = {2021},
  url       = {https://doi.org/10.1007/978-3-030-90453-1\_4},
  doi       = {10.1007/978-3-030-90453-1\_4},
}

Meteor: Cryptographically Secure Steganography for Realistic Distributions

Despite a long history of research and widespread applications to censorship-resistant systems, practical steganographic systems capable of embedding messages into realistic communication distributions, like text, do not exist. We identify two primary impediments to deploying universal steganography: (1) prior work leaves the difficult problem of finding samplers for non-trivial distributions unaddressed, and (2) prior constructions have impractical minimum entropy requirements. We investigate using generative models as steganographic samplers, as they represent the best known technique for approximating human communication. Additionally, we study methods to overcome the entropy requirement, including evaluating existing techniques and designing a new steganographic protocol, called Meteor. The resulting protocols are provably indistinguishable from honest model output and represent an important step towards practical steganographic communication for mundane communication channels. We implement Meteor and evaluate it on multiple computation environments with multiple generative models.
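
The core sampling idea, using (pseudorandom) ciphertext bits as the randomness when drawing tokens from the model, can be sketched in the easy case of a uniform next-token distribution, where each token encodes exactly b hidden bits. This toy deliberately ignores the non-uniform, low-entropy distributions that Meteor is actually designed to handle:

```python
# Toy sketch of sampling-based steganography: hidden (ciphertext) bits serve
# as the randomness used to draw tokens. With a uniform distribution over
# 2**b tokens, each draw deterministically encodes b bits and the receiver
# inverts it exactly. Meteor's contribution is making this work for realistic,
# non-uniform model distributions, which this example does not capture.
def embed(bits: str, b: int) -> list[int]:
    """Map each b-bit chunk of `bits` (zero-padded) to a token id in [0, 2**b)."""
    padded = bits.ljust(-(-len(bits) // b) * b, "0")
    return [int(padded[i:i + b], 2) for i in range(0, len(padded), b)]

def extract(tokens: list[int], b: int) -> str:
    """Recover the (padded) bit string from the token ids."""
    return "".join(format(t, f"0{b}b") for t in tokens)
```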

Fuzzy Message Detection

Many privacy-preserving protocols employ a primitive that allows a sender to “flag” a message to a recipient’s public key, such that only the recipient (who possesses the corresponding secret key) can detect that the message is intended for their use. Examples of such protocols include anonymous messaging, privacy-preserving payments, and anonymous tracing. A limitation of the existing techniques is that recipients cannot easily outsource the detection of messages to a remote server, without revealing to the server the exact set of matching messages. In this work we propose a new class of cryptographic primitives called fuzzy message detection schemes. These schemes allow a recipient to derive a specialized message detection key that can identify correct messages, while also incorrectly identifying non-matching messages with a specific and chosen false positive rate p. This allows recipients to outsource detection work to an untrustworthy server, without revealing precisely which messages belong to the receiver. We show how to construct these schemes under a variety of assumptions; describe several applications of the new technique; and show that our schemes are efficient enough to use in real applications.
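To make the tunable false-positive mechanism concrete, here is a minimal symmetric-key sketch of the idea. This is not the paper's construction (the paper's schemes are public-key, so anyone can flag a message to a recipient); all names and parameters below are illustrative. A flag carries GAMMA pseudorandom bits, and a detection key revealing only n of the GAMMA subkeys identifies matching messages always and non-matching ones with probability p = 2**-n.

```python
import hashlib
import secrets

GAMMA = 16  # flag width; the recipient's full key has false-positive rate 2**-16

def prf_bit(subkey: bytes, nonce: bytes) -> int:
    """One pseudorandom bit derived from a per-position subkey and the flag nonce."""
    return hashlib.sha256(subkey + nonce).digest()[0] & 1

def keygen() -> list[bytes]:
    """Recipient key: one subkey per flag position (toy stand-in for a secret key)."""
    return [secrets.token_bytes(16) for _ in range(GAMMA)]

def make_flag(key: list[bytes]) -> tuple[bytes, list[int]]:
    """Sender attaches GAMMA key-dependent bits under a fresh nonce."""
    nonce = secrets.token_bytes(16)
    return nonce, [prf_bit(sk, nonce) for sk in key]

def detection_key(key: list[bytes], n: int) -> list[bytes]:
    """Reveal only n of the subkeys: the outsourced server then has p = 2**-n."""
    return key[:n]

def detect(dk: list[bytes], flag: tuple[bytes, list[int]]) -> bool:
    """Server-side test: matching flags always pass; others pass w.p. 2**-len(dk)."""
    nonce, bits = flag
    return all(prf_bit(sk, nonce) == bits[i] for i, sk in enumerate(dk))
```

A matching flag passes every revealed check, while a flag made under an unrelated key passes each of the n checks independently with probability 1/2, so the server's matches are a cover set that hides the recipient's true messages.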

SNARGs for P from LWE

We provide the first construction of a succinct non-interactive argument (SNARG) for all polynomial time deterministic computations based on standard assumptions. For T steps of computation, the size of the proof and the common random string (CRS) as well as the verification time are poly-logarithmic in T. The security of our scheme relies on the hardness of the Learning with Errors (LWE) problem against polynomial-time adversaries. Previously, SNARGs based on standard assumptions could support bounded-depth computations and required sub-exponential hardness assumptions [Jawale-Kalai-Khurana-Zhang, STOC'21]. Along the way, we also provide the first construction of non-interactive batch arguments for NP based solely on the LWE assumption.

Breaking the O(√n)-Bit Barrier: Byzantine Agreement with Polylog Bits per Party

Byzantine agreement (BA), the task of n parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in BA protocols with the best overall communication, the demands on the parties are highly unbalanced: the amortized cost is Õ(1) bits per party, but some parties must send Ω(n) bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating Õ(√n) bits. In this work, we ask whether this asymmetry is inherent for optimizing total communication. In particular, is BA possible where each party communicates only Õ(1) bits? Our contributions in this line are as follows: We define a cryptographic primitive, succinctly reconstructed distributed signatures (SRDS), that suffices for constructing Õ(1) balanced BA. We provide two constructions of SRDS from different cryptographic and Public-Key Infrastructure (PKI) assumptions. The SRDS-based BA follows a paradigm of boosting from almost-everywhere agreement to full agreement, and does so in a single round. Complementarily, we prove that PKI setup and cryptographic assumptions are necessary for such protocols in which every party sends o(n) messages. We further explore connections between a natural approach toward attaining SRDS and average-case succinct non-interactive argument systems (SNARGs) for a particular type of NP-complete problems (generalizing Subset-Sum and Subset-Product). Our results provide new approaches forward, as well as limitations and barriers, towards minimizing the per-party communication of BA. In particular, we construct the first two BA protocols with Õ(1) balanced communication, offering a tradeoff between setup and cryptographic assumptions, and answering an open question presented by King and Saia (DISC'09).
@inproceedings{BCG21,
  author    = {Elette Boyle and
              Ran Cohen and
              Aarushi Goel},
  editor    = {Avery Miller and
              Keren Censor{-}Hillel and
              Janne H. Korhonen},
  title     = {Breaking the O({\(\surd\)} n)-Bit Barrier: Byzantine Agreement with
              Polylog Bits Per Party},
  booktitle = {{PODC} '21: {ACM} Symposium on Principles of Distributed Computing,
              Virtual Event, Italy, July 26-30, 2021},
  pages     = {319--330},
  publisher = {{ACM}},
  year      = {2021},
  url       = {https://doi.org/10.1145/3465084.3467897},
  doi       = {10.1145/3465084.3467897},
  timestamp = {Mon, 26 Jul 2021 09:21:43 +0200},
  biburl    = {https://dblp.org/rec/conf/podc/BoyleCG21.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Multi-key Fully-Homomorphic Encryption in the Plain Model

The notion of multi-key fully homomorphic encryption (multi-key FHE) [López-Alt, Tromer, Vaikuntanathan, STOC'12] was proposed as a generalization of fully homomorphic encryption to the multiparty setting. In a multi-key FHE scheme for n parties, each party can individually choose a key pair and use it to encrypt its own private input. Given n ciphertexts computed in this manner, the parties can homomorphically evaluate a circuit C over them to obtain a new ciphertext containing the output of C, which can then be decrypted via a decryption protocol. The key efficiency property is that the size of the (evaluated) ciphertext is independent of the size of the circuit. Multi-key FHE with one-round decryption [Mukherjee and Wichs, Eurocrypt'16] has found several powerful applications in cryptography over the past few years. However, an important drawback of all such known schemes is that they require a trusted setup. In this work, we address the problem of constructing multi-key FHE in the plain model. We obtain the following results:

  • A multi-key FHE scheme with one-round decryption based on the hardness of learning with errors (LWE), ring LWE, and decisional small polynomial ratio (DSPR) problems.
  • A variant of multi-key FHE where we relax the decryption algorithm to be non-compact – i.e., where the decryption complexity can depend on the size of C – based on the hardness of LWE. We call this variant multi-homomorphic encryption (MHE). We observe that MHE is already sufficient for some applications of multi-key FHE.
@inproceedings{AJJM20,
  author    = {Prabhanjan Ananth and
              Abhishek Jain and
              Zhengzhong Jin and
              Giulio Malavolta},
  title     = {Multi-key Fully-Homomorphic Encryption in the Plain Model},
  booktitle = {{TCC} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12550},
  pages     = {28--57},
  publisher = {Springer},
  year      = {2020}
}

Characterizing Deterministic-Prover Zero Knowledge

Randomness is typically thought to be essential for zero knowledge protocols. Following this intuition, Goldreich and Oren (Journal of Cryptology, 1994) proved that auxiliary-input zero knowledge cannot be achieved with a deterministic prover. On the other hand, positive results are only known in the honest-verifier setting, or when the prover is given at least a restricted source of entropy. We prove that once the verifier's auxiliary input is removed (or just bounded), deterministic-prover zero knowledge becomes feasible:

  • Assuming non-interactive witness-indistinguishable proofs and subexponential indistinguishability obfuscation and one-way functions, we construct deterministic-prover zero-knowledge arguments for 𝖭𝖯 ∩ 𝖼𝗈𝖭𝖯 against verifiers with bounded non-uniform auxiliary input.
  • Assuming also keyless hash functions that are collision-resistant against bounded-auxiliary-input quasipolynomial-time attackers, we construct similar arguments for all of 𝖭𝖯. Together with the result of Goldreich and Oren, this characterizes when deterministic-prover zero knowledge is feasible. We also demonstrate the necessity of strong assumptions, by showing that deterministic prover zero knowledge arguments for a given language imply witness encryption for that language. We further prove that such arguments can always be collapsed to two messages and be made laconic. These implications rely on a more general connection with the notion of predictable arguments by Faonio, Nielsen, and Venturi (PKC 17).
@inproceedings{BC20,
  author    = {Nir Bitansky and
              Arka Rai Choudhuri},
  title     = {Characterizing Deterministic-Prover Zero Knowledge},
  booktitle = {{TCC} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12550},
  pages     = {535--566},
  publisher = {Springer},
  year      = {2020}
}

Round Optimal Secure Multiparty Computation from Minimal Assumptions

We construct a four-round secure multiparty computation (MPC) protocol in the plain model that achieves security against any dishonest majority. The security of our protocol relies only on the existence of four-round oblivious transfer. This culminates the long line of research on constructing round-efficient MPC from minimal assumptions (at least w.r.t. black-box simulation).
@inproceedings{CCGJO20,
  author    = {Arka Rai Choudhuri and
              Michele Ciampi and
              Vipul Goyal and
              Abhishek Jain and
              Rafail Ostrovsky},
  title     = {Round Optimal Secure Multiparty Computation from Minimal Assumptions},
  booktitle = {{TCC} {(2)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12551},
  pages     = {291--319},
  publisher = {Springer},
  year      = {2020}
}

Towards Efficiency-Preserving Round Compression in MPC: Do fewer rounds mean more computation?

Reducing the rounds of interaction in secure multiparty computation (MPC) protocols has been the topic of study of many works. One popular approach to reducing rounds is to construct round compression compilers. A round compression compiler takes a highly interactive protocol and transforms it into a protocol with far fewer rounds. The design of round compression compilers has traditionally focused on preserving the security properties of the underlying protocol; in particular, not much attention has been given to preserving their computational and communication efficiency. Indeed, the recent round compression compilers that yield round-optimal MPC protocols incur large computational and communication overhead. In this work, we initiate the study of efficiency-preserving round compression compilers, i.e., compilers that translate the efficiency benefits of the underlying highly interactive protocols to the fewer-round setting. Focusing on the honest majority setting (with near-optimal corruption threshold 1/2 − ε, for any ε > 0), we devise a new compiler that yields two-round (i.e., round-optimal) semi-honest MPC with communication efficiency similar to that of the underlying (arbitrary round) protocol. By applying our compiler to the most efficient known MPC protocols, we obtain a two-round semi-honest protocol based on one-way functions, with total communication (and per-party computation) cost Õ(s + n²d), a significant improvement over prior two-round protocols with cost Õ(n^τ s + n^(τ+1) d), where τ ≥ 2, s is the size of the circuit computing the function, and d the corresponding depth. Our result can also be extended to handle malicious adversaries, either using stronger assumptions in the public key infrastructure (PKI) model, or in the plain model using an extra round. An artifact of our approach is that the resultant protocol is "unbalanced" in the amount of computation performed by different parties. We give evidence that this is necessary in our setting. Our impossibility result makes novel use of the "MPC-in-the-head" paradigm, which has typically been used to demonstrate feasibility results.
@inproceedings{ACGJ20,
  author    = {Prabhanjan Ananth and
              Arka Rai Choudhuri and
              Aarushi Goel and
              Abhishek Jain},
  title     = {Towards Efficiency-Preserving Round Compression in {MPC} - Do Fewer
              Rounds Mean More Computation?},
  booktitle = {{ASIACRYPT} {(3)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12493},
  pages     = {181--212},
  publisher = {Springer},
  year      = {2020}
}

Self-Processing Private Sensor Data via Garbled Encryption

We introduce garbled encryption, a relaxation of secret-key multi-input functional encryption (MiFE) where a function key can be used to jointly compute upon only a particular subset of all possible tuples of ciphertexts. We construct garbled encryption for general functionalities based on one-way functions. We show that garbled encryption can be used to build a self-processing private sensor data system where after a one-time trusted setup phase, sensors deployed in the field can periodically broadcast encrypted readings of private data that can be computed upon by anyone holding function keys to learn processed output, without any interaction. Such a system can be used to periodically check, e.g., whether a cluster of servers are in an “alarm” state. We implement our garbled encryption scheme and find that it performs quite well, with function evaluations in the microseconds. The performance of our scheme was tested on a standard commodity laptop.
@article{MJS20,
  author    = {Nathan Manohar and
               Abhishek Jain and
               Amit Sahai},
  title     = {Self-Processing Private Sensor Data via Garbled Encryption},
  journal   = {Proceedings on Privacy Enhancing Technologies},
  volume    = {2020},
  number    = {4},
  pages     = {434--460},
  publisher = {Sciendo},
  address   = {Berlin},
  month     = oct,
  year      = {2020},
  doi       = {10.2478/popets-2020-0081},
  url       = {https://content.sciendo.com/view/journals/popets/2020/4/article-p434.xml}
}

ZEXE: Enabling Decentralized Private Computation

Ledger-based systems that support rich applications often suffer from two limitations. First, validating a transaction requires re-executing the state transition that it attests to. Second, transactions not only reveal which application had a state transition but also reveal the application’s internal state. We design, implement, and evaluate ZEXE, a ledger-based system where users can execute offline computations and subsequently produce transactions, attesting to the correctness of these computations, that satisfy two main properties. First, transactions hide all information about the offline computations. Second, transactions can be validated in constant time by anyone, regardless of the offline computation. The core of ZEXE is a construction for a new cryptographic primitive that we introduce, decentralized private computation (DPC) schemes. In order to achieve an efficient implementation of our construction, we leverage tools in the area of cryptographic proofs, including succinct zero knowledge proofs and recursive proof composition. Overall, transactions in ZEXE are 968 bytes regardless of the offline computation, and generating them takes less than a minute plus a time that grows with the offline computation. We demonstrate how to use ZEXE to realize privacy-preserving analogues of popular applications: private decentralized exchanges for user-defined fungible assets and regulation-friendly private stablecoins.
@inproceedings{BCGMMW20,
  author    = {Sean Bowe and
              Alessandro Chiesa and
              Matthew Green and
              Ian Miers and
              Pratyush Mishra and
              Howard Wu},
  title     = {{ZEXE:} Enabling Decentralized Private Computation},
  booktitle = {{IEEE} Symposium on Security and Privacy},
  pages     = {947--964},
  publisher = {{IEEE}},
  year      = {2020}
}

New Methods and Abstractions for RSA-Based Forward Secure Signatures

We put forward a new abstraction for achieving forward-secure signatures that are (1) short, (2) have fast update and signing and (3) have small private key size. Prior work that achieved these parameters was pioneered by the pebbling techniques of Itkis and Reyzin (CRYPTO 2001) which showed a process for generating a sequence of roots h^(1/e_1), h^(1/e_2), …, h^(1/e_T) for a group element h in ℤ^*_N. However, the current state of the art has limitations. First, while many works claim that Itkis-Reyzin pebbling can be applied, it is seldom shown how this non-trivial step is concretely done. Second, setting up the pebbling data structure takes T time which makes key generation using this approach expensive. Third, many past works require either random oracles and/or the Strong RSA assumption; we will work in the standard model under the RSA assumption. We introduce a new abstraction that we call an RSA sequencer. Informally, the job of an RSA sequencer is to store roots of a public key U, so that at time period t, it can provide U^(1/e_t), where the value e_t is an RSA exponent computed from a certain function. This separation allows us to focus on building a sequencer that efficiently stores such values, in a forward-secure manner and with better setup times than other comparable solutions. Our sequencer abstraction also has certain re-randomization properties that allow for constructing forward-secure signatures with a single trusted setup that takes T time and individual key generation takes lg(T) time. We demonstrate the utility of our abstraction by using it to provide concrete forward-secure signature schemes. We first give a random-oracle construction that closely matches the performance and structure of the Itkis-Reyzin scheme with the important exception that key generation is much faster (after the one-time setup). We then move on to designing a standard model scheme. This abstraction and illustration of how to use it may be useful for other future works.
We include a detailed performance evaluation of our constructions, with an emphasis on the time and space costs for large caps on the maximum number of time periods T supported. Our philosophy is that frequently updating forward secure keys should be part of “best practices” in key maintenance. To make this practical, even for bounds as high as T=2^32, we show that after an initial global setup, it takes only seconds to generate a key pair, and only milliseconds to update keys, sign messages and verify signatures. The space requirements for the public parameters and private keys are also a modest number of kilobytes, with signatures being a single element in ℤ_N and one smaller value.
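The root sequence underlying these schemes is easy to state concretely: knowing φ(N), one can compute h^(1/e_t) as h raised to e_t^(-1) mod φ(N), and anyone can publicly verify a claimed root by raising it back to the e_t-th power. The toy-parameter sketch below is illustrative only; real schemes use moduli of 2048 bits or more and derive the exponents e_t from a prime sequence or hash, and the point of the sequencer is to store and update such roots forward-securely without keeping φ(N) around.

```python
# Toy RSA parameters; illustrative only (real schemes use >= 2048-bit moduli).
p, q = 1009, 1013
N, phi = p * q, (p - 1) * (q - 1)
h = 42  # public group element in Z*_N

def root(base: int, e: int) -> int:
    """Compute base^(1/e) mod N using the secret phi(N); needs gcd(e, phi) == 1."""
    return pow(base, pow(e, -1, phi), N)

def verify(r: int, e: int, base: int) -> bool:
    """Public check: a claimed e-th root must satisfy r^e = base (mod N)."""
    return pow(r, e, N) == base % N

# A short exponent sequence e_1, ..., e_T, each coprime to phi(N).
exponents = [5, 13, 17, 19]
roots = [root(h, e) for e in exponents]
assert all(verify(r, e, h) for r, e in zip(roots, exponents))
```

The public verifiability of each root (the `verify` check needs only N, not φ(N)) is what lets signatures remain checkable while the signer's stored state moves forward in time.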
@inproceedings{HW20,
  author    = {Susan Hohenberger and
              Brent Waters},
  title     = {New Methods and Abstractions for RSA-Based Forward Secure Signatures},
  booktitle = {{ACNS} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12146},
  pages     = {292--312},
  publisher = {Springer},
  year      = {2020}
}

Chosen Ciphertext Security from Injective Trapdoor Functions

We provide a construction of chosen ciphertext secure public-key encryption from (injective) trapdoor functions. Our construction is black box and assumes no special properties (e.g. “lossy”, “correlated product secure”) of the trapdoor function.
@inproceedings{HKW20,
  author    = {Susan Hohenberger and
              Venkata Koppula and
              Brent Waters},
  title     = {Chosen Ciphertext Security from Injective Trapdoor Functions},
  booktitle = {{CRYPTO} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12170},
  pages     = {836--866},
  publisher = {Springer},
  year      = {2020}
}

The Round Complexity of Secure Computation Against Covert Adversaries

We investigate the exact round complexity of secure multiparty computation (MPC) against covert adversaries, who may attempt to cheat but do not wish to be caught doing so. Covert adversaries lie in between semi-honest adversaries, who follow the protocol specification, and malicious adversaries, who may deviate arbitrarily. Recently, two-round protocols for semi-honest MPC and four-round protocols for malicious-secure MPC were constructed, both of which are round-optimal. While these results can be viewed as constituting two end points of a security spectrum, we investigate the design of protocols that potentially span the spectrum. Our main result is an MPC protocol against covert adversaries with variable round complexity: when the detection probability is set to the lowest setting, our protocol requires two rounds and offers the same security as semi-honest MPC. By increasing the detection probability, we can increase the security guarantees, with round complexity five in the extreme case. The security of our protocol is based on standard cryptographic assumptions. We supplement our positive result with a negative result, ruling out strict three-round protocols with respect to black-box simulation.
@inproceedings{CGJ20,
  author    = {Arka Rai Choudhuri and
              Vipul Goyal and
              Abhishek Jain},
  title     = {The Round Complexity of Secure Computation Against Covert Adversaries},
  booktitle = {{SCN}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12238},
  pages     = {600--620},
  publisher = {Springer},
  year      = {2020}
}

BlockSci: Design and applications of a blockchain analysis platform

Analysis of blockchain data is useful for both scientific research and commercial applications. We present BlockSci, an open-source software platform for blockchain analysis. BlockSci is versatile in its support for different blockchains and analysis tasks. It incorporates an in-memory, analytical (rather than transactional) database, making it orders of magnitude faster than using general-purpose graph databases. We describe BlockSci's design and present four analyses that illustrate its capabilities, shedding light on the security, privacy, and economics of cryptocurrencies.
@inproceedings{KMLGPCN20,
  author    = {Harry A. Kalodner and
               Malte M{\"{o}}ser and
               Kevin Lee and
               Steven Goldfeder and
               Martin Plattner and
               Alishah Chator and
               Arvind Narayanan},
  title     = {BlockSci: Design and applications of a blockchain analysis platform},
  booktitle = {29th {USENIX} Security Symposium ({USENIX} Security 20)},
  year      = {2020},
  month     = aug,
  publisher = {{USENIX} Association},
  url       = {https://www.usenix.org/conference/usenixsecurity20/presentation/kalodner}
}

Automating the Development of Chosen Ciphertext Attacks

In this work we investigate the problem of automating the development of adaptive chosen ciphertext attacks on systems that contain vulnerable format oracles. Unlike previous attempts, which simply automate the execution of known attacks, we consider a more challenging problem: to programmatically derive a novel attack strategy, given only a machine-readable description of the plaintext verification function and the malleability characteristics of the encryption scheme. We present a new set of algorithms that use SAT and SMT solvers to reason deeply over the design of the system, producing an automated attack strategy that can entirely decrypt protected messages. Developing our algorithms required us to adapt techniques from a diverse range of research fields, as well as to explore and develop new ones. We implement our algorithms using existing theory solvers. The result is a practical tool called Delphinium that succeeds against real-world and contrived format oracles. To our knowledge, this is the first work to automatically derive such complex chosen ciphertext attacks.
@inproceedings{BZG20,
  author    = {Gabrielle Beck and
               Maximilian Zinkus and
               Matthew Green},
  title     = {Automating the Development of Chosen Ciphertext Attacks},
  booktitle = {29th {USENIX} Security Symposium ({USENIX} Security 20)},
  year      = {2020},
  month     = aug,
  address   = {Boston, MA},
  publisher = {{USENIX} Association},
  url       = {https://www.usenix.org/conference/usenixsecurity20/presentation/beck}
}

Statistical Zaps and New Oblivious Transfer Protocols

We study the problem of achieving statistical privacy in interactive proof systems and oblivious transfer – two of the most well-studied two-party protocols – when limited rounds of interaction are available.

  • Statistical Zaps: We give the first construction of statistical Zaps, namely, two-round statistical witness-indistinguishable (WI) protocols with a public-coin verifier. Our construction achieves computational soundness based on the quasi-polynomial hardness of learning with errors assumption.
  • Three-Round Statistical Receiver-Private Oblivious Transfer: We give the first construction of a three-round oblivious transfer (OT) protocol – in the plain model – that achieves statistical privacy for receivers and computational privacy for senders against malicious adversaries, based on polynomial-time assumptions. The round-complexity of our protocol is optimal. We obtain our first result by devising a public-coin approach to compress sigma protocols, without relying on trusted setup. To obtain our second result, we devise a general framework via a new notion of statistical hash commitments that may be of independent interest.
@inproceedings{GJJM20,
  author    = {Vipul Goyal and
              Abhishek Jain and
              Zhengzhong Jin and
              Giulio Malavolta},
  title     = {Statistical Zaps and New Oblivious Transfer Protocols},
  booktitle = {{EUROCRYPT} {(3)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {12107},
  pages     = {668--699},
  publisher = {Springer},
  year      = {2020}
}

The Broadcast Message Complexity of Secure Multiparty Computation

We study the broadcast message complexity of secure multiparty computation (MPC), namely, the total number of messages required for securely computing any functionality in the broadcast model of communication. MPC protocols are traditionally designed in the simultaneous broadcast model, where each round consists of every party broadcasting a message to the other parties. We show that this method of communication is sub-optimal: by eliminating simultaneity, it is possible to reduce the broadcast message complexity of MPC. More concretely, we establish tight lower and upper bounds on the broadcast message complexity of n-party MPC for every corruption threshold t < n, both in the plain model and in common setup models. For example, our results show that the optimal broadcast message complexity of semi-honest MPC can be much lower than 2n, but necessarily requires at least three rounds of communication. We also extend our results to the malicious setting in setup models.
@inproceedings{GGJ19,
  author    = {Sanjam Garg and
              Aarushi Goel and
              Abhishek Jain},
  title     = {The Broadcast Message Complexity of Secure Multiparty Computation},
  booktitle = {{ASIACRYPT} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {11921},
  pages     = {426--455},
  publisher = {Springer},
  year      = {2019}
}

UC-Secure Multiparty Computation from One-Way Functions using Stateless Tokens

We revisit the problem of universally composable (UC) secure multiparty computation in the stateless hardware token model.

  • We construct a three round multi-party computation protocol for general functions based on one-way functions where each party sends two tokens to every other party. Relaxing to the two-party case, we also construct a two round protocol based on one-way functions where each party sends a single token to the other party, and at the end of the protocol, both parties learn the output.
  • One of the key components in the above constructions is a new two-round oblivious transfer protocol based on one-way functions using only one token, which can be reused an unbounded polynomial number of times. All prior constructions required either stronger complexity assumptions, a larger number of rounds, or a larger number of tokens.
@inproceedings{BJOV19,
  author    = {Saikrishna Badrinarayanan and
              Abhishek Jain and
              Rafail Ostrovsky and
              Ivan Visconti},
  title     = {UC-Secure Multiparty Computation from One-Way Functions Using Stateless
              Tokens},
  booktitle = {{ASIACRYPT} {(2)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {11922},
  pages     = {577--605},
  publisher = {Springer},
  year      = {2019}
}

Public-Key Function-Private Hidden Vector Encryption (and More)

We construct public-key function-private predicate encryption for the “small superset functionality,” recently introduced by Beullens and Wee (PKC 2019). This functionality captures several important classes of predicates:

  • Point functions. For point function predicates, our construction is equivalent to public-key function-private anonymous identity-based encryption.
  • Conjunctions. If the predicate computes a conjunction, our construction is a public-key function-private hidden vector encryption scheme. This addresses an open problem posed by Boneh, Raghunathan, and Segev (ASIACRYPT 2013).
  • d-CNFs and read-once conjunctions of d-disjunctions for constant-size d. Our construction extends the group-based obfuscation schemes of Bishop et al. (CRYPTO 2018), Beullens and Wee (PKC 2019), and Bartusek et al. (EUROCRYPT 2019) to the setting of public-key function-private predicate encryption. We achieve an average-case notion of function privacy, which guarantees that a decryption key sk_f reveals nothing about f as long as f is drawn from a distribution with sufficient entropy. We formalize this security notion as a generalization of the (enhanced) real-or-random function privacy definition of Boneh, Raghunathan, and Segev (CRYPTO 2013). Our construction relies on bilinear groups, and we prove security in the generic bilinear group model.
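To fix intuition for the predicate classes listed above, here is their plaintext logic (purely illustrative; the point of the schemes is to evaluate these predicates under encryption while keeping the vector inside sk_f hidden, and the names below are not from the paper):

```python
# Plaintext view of predicates captured by the small-superset functionality.
WILDCARD = "*"

def hve_match(key_vec: list[str], attr_vec: list[str]) -> bool:
    """Hidden-vector predicate: a conjunction where '*' positions are unconstrained."""
    return len(key_vec) == len(attr_vec) and all(
        k == WILDCARD or k == a for k, a in zip(key_vec, attr_vec)
    )

def point_match(target: list[str], attr_vec: list[str]) -> bool:
    """Point-function predicate: an HVE key with no wildcards (anonymous IBE)."""
    return hve_match(target, attr_vec)
```

Function privacy then asks that, for key vectors drawn with enough entropy, the key reveals nothing about which non-wildcard values it constrains.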
@inproceedings{BCJJLMMMR19,
  author    = {James Bartusek and
              Brent Carmer and
              Abhishek Jain and
              Zhengzhong Jin and
              Tancr{\`{e}}de Lepoint and
              Fermi Ma and
              Tal Malkin and
              Alex J. Malozemoff and
              Mariana Raykova},
  title     = {Public-Key Function-Private Hidden Vector Encryption (and More)},
  booktitle = {{ASIACRYPT} {(3)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {11923},
  pages     = {489--519},
  publisher = {Springer},
  year      = {2019}
}

Interactive Non-Malleable Codes

Non-malleable codes (NMC) introduced by Dziembowski et al. [ICS’10] allow one to encode “passive” data in such a manner that when a codeword is tampered, the original data either remains completely intact or is essentially destroyed. In this work, we initiate the study of interactive non-malleable codes (INMCs) that allow for encoding “active communication” rather than passive data. An INMC allows two parties to engage in an interactive protocol such that an adversary who is able to tamper with the protocol messages either leaves the original transcript intact (i.e., the parties are able to reconstruct the original transcript) or the transcript is completely destroyed and replaced with an unrelated one. We formalize a tampering model for interactive protocols and put forward the notion of INMCs. Since constructing INMCs for general adversaries is impossible (as in the case of non-malleable codes), we construct INMCs for several specific classes of tampering functions. These include bounded state, split state, and fragmented sliding window tampering functions. We also obtain lower bounds for threshold tampering functions via a connection to interactive coding. All of our results are unconditional.
@inproceedings{FGJPR19,
  author    = {Nils Fleischhacker and
              Vipul Goyal and
              Abhishek Jain and
              Anat Paskin{-}Cherniavsky and
              Slava Radune},
  title     = {Interactive Non-malleable Codes},
  booktitle = {{TCC} {(2)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {11892},
  pages     = {233--263},
  publisher = {Springer},
  year      = {2019}
}

Are These Pairing Elements Correct?: Automated Verification and Applications

Using a set of pairing product equations (PPEs) to verify the correctness of an untrusted set of pairing elements with respect to another set of trusted elements has numerous cryptographic applications. These include the design of basic and structure-preserving signature schemes, building oblivious transfer schemes from “blind” IBE, finding new verifiable random functions and keeping the IBE/ABE authority “accountable” to the user. A natural question to ask is: are all trusted-untrusted pairing element groups in the literature PPE testable? We provide original observations demonstrating that the answer is no, and moreover, it can be non-trivial to determine whether or not there exists a set of PPEs that can verify some pairing elements with respect to others. Many IBE schemes have PPE-testable private keys (with respect to the public parameters), while others, such as those based on dual-system encryption, provably do not. To aid those wishing to use PPE-based element verification in their cryptosystems, we devised rules to systematically search for a set of PPEs that can verify untrusted elements with respect to a set of trusted elements. We prove the correctness of each rule and combine them into a main searching algorithm for which we also prove correctness. We implemented this algorithm in a new software tool, called AutoPPE. Tested on over two dozen case studies, AutoPPE found a set of PPEs (on schemes where they exist) usually in just a matter of seconds. This work represents an important step towards the larger goal of improving the speed and accuracy of pairing-based cryptographic design via computer automation.
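The PPE test itself is easy to illustrate in a toy model. The sketch below represents group elements by their discrete logs modulo a small prime order, so the bilinear map e(g^a, g^b) = gT^(ab) becomes multiplication of exponents; this is purely illustrative (a real pairing group does not expose discrete logs), and the BLS-style equation shown is one standard example of a PPE, not a construction from the paper.

```python
# Toy model of a pairing-product-equation (PPE) check. Group elements are
# represented by their discrete logs mod a small prime order q, so the
# bilinear map e(g^a, g^b) = gT^(a*b) becomes multiplication mod q.
q = 101  # toy group order (real schemes use ~256-bit orders)

def e(a, b):
    """Toy pairing on exponent representatives."""
    return (a * b) % q

# Trusted elements: generator g (exponent 1) and public key pk = g^x.
x = 7
g, pk = 1, x

# Untrusted element: candidate BLS-style signature sigma = H(m)^x,
# where H(m) = g^h for a known exponent h in this toy model.
h = 23
sigma = (h * x) % q

# The PPE test: e(sigma, g) == e(H(m), pk) accepts the honest element...
assert e(sigma, g) == e(h, pk)
# ...and rejects a tampered one.
assert e((sigma + 1) % q, g) != e(h, pk)
```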
@inproceedings{HV19,
  author    = {Susan Hohenberger and
              Satyanarayana Vusirikala},
  title     = {Are These Pairing Elements Correct?: Automated Verification and Applications},
  booktitle = {{ACM} Conference on Computer and Communications Security},
  pages     = {923--939},
  publisher = {{ACM}},
  year      = {2019}
}

Finding a Nash Equilibrium is No Easier than Breaking Fiat-Shamir

The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol’s transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography. We show that solving the End-of-Metered-Line problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n). Combining our construction with a hash family proposed by Canetti et al. [STOC 2019] gives rise to a distribution in the class CLS, which is provably hard under the assumption that any one of a class of fully homomorphic encryption (FHE) schemes has almost-optimal security against quasi-polynomial time adversaries, and under the additional worst-case assumption that there is no polynomial time algorithm for counting the number of satisfying assignments for formulas over a polylogarithmic number of variables.
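The core of the heuristic is mechanical: each public-coin verifier challenge is replaced by a hash of the transcript so far. A minimal sketch (names and the use of SHA-256 are illustrative, not from the paper):

```python
import hashlib

def fs_challenge(transcript: bytes, modulus: int) -> int:
    """Derive a verifier challenge by hashing the transcript so far."""
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % modulus

# Deriving round-by-round challenges for a sumcheck-style protocol over F_97:
transcript = b"statement||round1-polynomial"
r1 = fs_challenge(transcript, 97)       # challenge for round 1
transcript += b"||round2-polynomial"
r2 = fs_challenge(transcript, 97)       # challenge for round 2 depends on both rounds
assert 0 <= r1 < 97 and 0 <= r2 < 97
```

Soundness of the resulting non-interactive argument hinges on the hash function; the paper's point is that breaking it for the sumcheck protocol is at least as hard as a PPAD-type total search problem.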
@inproceedings{CHKPRR19,
  author    = {Arka Rai Choudhuri and
              Pavel Hub{\'a}{\v c}ek and
              Chethan Kamath and
              Krzysztof Pietrzak and
              Alon Rosen and
              Guy N. Rothblum},
  title     = {Finding a Nash equilibrium is no easier than breaking Fiat-Shamir},
  booktitle = {Proceedings of the 51st Annual {ACM} {SIGACT} Symposium on Theory
              of Computing, {STOC} 2019, Phoenix, AZ, USA, June 23-26, 2019.},
  pages     = {1103--1114},
  year      = {2019},
  url       = {https://doi.org/10.1145/3313276.3316400},
  doi       = {10.1145/3313276.3316400},
}

Founding Secure Computation on Blockchains

We study the foundations of secure computation in the blockchain-hybrid model, where a blockchain – modeled as a global functionality – is available as an oracle to all the participants of a cryptographic protocol. We demonstrate both destructive and constructive applications of blockchains:

  • We show that classical rewinding-based simulation techniques used in many security proofs fail against blockchain-active adversaries that have read and post access to a global blockchain. In particular, we show that zero-knowledge (ZK) proofs with black-box simulation are impossible against blockchain-active adversaries.
  • Nevertheless, we show that achieving security against blockchain-active adversaries is possible if the honest parties are also blockchain active. We construct an ω(1)-round ZK protocol with black-box simulation. We show that this result is tight by proving the impossibility of constant-round ZK with black-box simulation.
  • Finally, we demonstrate a novel application of blockchains to overcome the known impossibility results for concurrent secure computation in the plain model. We construct a concurrent self-composable secure computation protocol for general functionalities in the blockchain-hybrid model based on standard cryptographic assumptions. We develop a suite of techniques for constructing secure protocols in the blockchain-hybrid model that we hope will find applications to future research in this area.
@inproceedings{CGJ19,
  author    = {Arka Rai Choudhuri and
              Vipul Goyal and
              Abhishek Jain},
  title     = {Founding Secure Computation on Blockchains},
  booktitle = {Advances in Cryptology - {EUROCRYPT} 2019 - 38th Annual International
              Conference on the Theory and Applications of Cryptographic Techniques,
              Darmstadt, Germany, May 19-23, 2019, Proceedings, Part {II}},
  pages     = {351--380},
  year      = {2019},
  doi       = {10.1007/978-3-030-17656-3\_13},
}

Two Round Information-Theoretic MPC with Malicious Security

We provide the first constructions of two round information-theoretic (IT) secure multiparty computation (MPC) protocols in the plain model that tolerate any t < n/2 malicious corruptions. Our protocols satisfy the strongest achievable standard notions of security in two rounds in different communication models. Previously, IT-MPC protocols in the plain model either required a larger number of rounds, or a smaller minority of corruptions.
@inproceedings{ACGJ19,
  author    = {Prabhanjan Ananth and
              Arka Rai Choudhuri and
              Aarushi Goel and
              Abhishek Jain},
  title     = {Two Round Information-Theoretic {MPC} with Malicious Security},
  booktitle = {Advances in Cryptology - {EUROCRYPT} 2019 - 38th Annual International
              Conference on the Theory and Applications of Cryptographic Techniques,
              Darmstadt, Germany, May 19-23, 2019, Proceedings, Part {II}},
  pages     = {532--561},
  year      = {2019},
  url       = {https://doi.org/10.1007/978-3-030-17656-3\_19},
}

Block Edit Errors with Transpositions: Deterministic Document Exchange Protocols and Almost Optimal Binary Codes

Document exchange and error correcting codes are two fundamental problems regarding communications. In the first problem, Alice and Bob each hold a string, and the goal is for Alice to send a short sketch to Bob, so that Bob can recover Alice’s string. In the second problem, Alice sends a message with some redundant information to Bob through a channel that can add adversarial errors, and the goal is for Bob to correctly recover the message despite the errors. In a recent work [CJLW18], the authors constructed explicit deterministic document exchange protocols and binary error correcting codes for edit errors with almost optimal parameters. Unfortunately, the constructions in [CJLW18] do not work for other common errors such as block transpositions. In this paper, we generalize the constructions in [CJLW18] to handle a much larger class of errors. These include bursts of insertions and deletions, as well as block transpositions. Specifically, we consider document exchange and error correcting codes where the total number of block insertions, block deletions, and block transpositions is at most k ≤ αn/ log n for some constant 0 < α < 1. In addition, the total number of bits inserted and deleted by the first two kinds of operations is at most t ≤ βn for some constant 0 < β < 1, where n is the length of Alice’s string or message. We construct explicit, deterministic document exchange protocols with sketch size O((k log n + t) log^2 (n / (k log n + t))) and explicit binary error correcting codes with O(k log n log log log n + t) redundant bits.
@inproceedings{CJLW19,
  author    = {Kuan Cheng and
              Zhengzhong Jin and
              Xin Li and
              Ke Wu},
  title     = {Block Edit Errors with Transpositions: Deterministic Document Exchange
              Protocols and Almost Optimal Binary Codes},
  booktitle = {46th International Colloquium on Automata, Languages, and Programming,
              {ICALP} 2019, July 9-12, 2019, Patras, Greece.},
  pages     = {37:1--37:15},
  year      = {2019},
  doi       = {10.4230/LIPIcs.ICALP.2019.37},
}

Generic and Practical Key Establishment from Lattice

In this work, we abstract some key ingredients in previous key establishment and public-key encryption schemes from LWE and its variants. Specifically, we explicitly formalize the building tool, referred to as key consensus (KC) and its asymmetric variant AKC. KC and AKC allow two communicating parties to reach consensus from close values, which plays the fundamental role in lattice-based cryptography. We then prove the upper bounds on parameters for any KC and AKC, which reveal the inherent constraints on the parameters among security, bandwidth, error probability, and consensus range. As a conceptual contribution, this simplifies the design and analysis of these cryptosystems in the future. Guided by the proved upper bounds, we design and analyze both generic and highly practical KC and AKC schemes, which are referred to as OKCN and AKCN respectively for presentation simplicity. We present a generic protocol structure for key establishment from learning with rounding (LWR), which can be instantiated with either KC or AKC. We then provide an analysis breaking the correlation between the rounded deterministic noise and the secret, and design an algorithm to calculate the error probability numerically. When applied to LWE-based key establishment, OKCN and AKCN can result in more practical or well-balanced schemes, compared to existing LWE-based protocols in the literature.
@inproceedings{JZ19,
  author    = {Zhengzhong Jin and
              Yunlei Zhao},
  title     = {Generic and Practical Key Establishment from Lattice},
  booktitle = {Applied Cryptography and Network Security - 17th International Conference,
              {ACNS} 2019, Bogota, Colombia, June 5-7, 2019, Proceedings},
  pages     = {302--322},
  year      = {2019},
  doi       = {10.1007/978-3-030-21568-2\_15},
}

Giving State to the Stateless: Augmenting Trustworthy Computation with Ledgers

In this work we investigate the problem of achieving secure computation by combining stateless trusted devices with public ledgers. We consider a hybrid paradigm in which a client-side device (such as a co-processor or trusted enclave) performs secure computation, while interacting with a public ledger via a possibly malicious host computer. We explore both the constructive and potentially destructive implications of such systems. We first show that this combination allows for the construction of stateful interactive functionalities (including general computation) even when the device has no persistent storage; this allows us to build sophisticated applications using inexpensive trusted hardware or even pure cryptographic obfuscation techniques. We further show how to use this paradigm to achieve censorship-resistant communication with a network, even when network communications are mediated by a potentially malicious host. Finally we describe a number of practical applications that can be achieved today. These include the synchronization of private smart contracts; rate limited mandatory logging; strong encrypted backups from weak passwords; enforcing fairness in multi-party computation; and destructive applications such as autonomous ransomware, which allows for payments without an online party.
@inproceedings{KMG19,
  author    = {Gabriel Kaptchuk and
              Matthew Green and
              Ian Miers},
  title     = {Giving State to the Stateless: Augmenting Trustworthy Computation
              with Ledgers},
  booktitle = {{NDSS}},
  publisher = {The Internet Society},
  year      = {2019}
}

Non-Interactive Secure Computation from One-Way Functions

The notion of non-interactive secure computation (NISC) first introduced in the work of Ishai et al. [EUROCRYPT 2011] studies the following problem: Suppose a receiver R wishes to publish an encryption of her secret input y so that any sender S with input x can then send a message m that reveals f(x,y) to R (for some function f). Here, m can be viewed as an encryption of f(x,y) that can be decrypted by R. NISC requires security against both malicious senders and receivers, and also requires the receiver’s message to be reusable across multiple computations (w.r.t. a fixed input of the receiver). All previous solutions to this problem necessarily rely upon OT (or specific number-theoretic assumptions) even in the common reference string model or the random oracle model or to achieve weaker notions of security such as super-polynomial-time simulation. In this work, we construct a NISC protocol based on the minimal assumption of one way functions, in the stateless hardware token model. Our construction achieves UC security and requires a single token sent by the receiver to the sender.
@inproceedings{BJOV18,
  author    = {Saikrishna Badrinarayanan and
              Abhishek Jain and
              Rafail Ostrovsky and
              Ivan Visconti},
  title     = {Non-interactive Secure Computation from One-Way Functions},
  booktitle = {{ASIACRYPT} {(3)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {11274},
  pages     = {118--138},
  publisher = {Springer},
  year      = {2018}
}

Practical State Recovery Attacks against Legacy RNG Implementations

The ANSI X9.17/X9.31 pseudorandom number generator design was first standardized in 1985, with variants incorporated into numerous cryptographic standards over the next three decades. The design uses timestamps together with a statically keyed block cipher to produce pseudo-random output. It has been known since 1998 that the key must remain secret in order for the output to be secure. However, neither the FIPS 140-2 standardization process nor NIST’s later descriptions of the algorithm specified any process for key generation. We performed a systematic study of publicly available FIPS 140-2 certifications for hundreds of products that implemented the ANSI X9.31 random number generator, and found twelve whose certification documents disclose the use of static, hard-coded keys in source code, leaving the implementation vulnerable to an attacker who can learn this key from the source code or binary. In order to demonstrate the practicality of such an attack, we develop a full passive decryption attack against FortiGate VPN gateway products using FortiOS v4 that recovers the private key in seconds. We measure the prevalence of this vulnerability on the visible Internet using active scans, and demonstrate state recovery and full private key recovery in the wild. Our work highlights the extent to which the validation and certification process has failed to provide even modest security guarantees.
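The X9.31 iteration, and why a static key is fatal, can be sketched directly from the design: I = E_K(T), R = E_K(I XOR V), V' = E_K(I XOR R). Below, a hash-based PRF stands in for the block cipher (the real design uses 3DES or AES; the prediction attack only needs the forward direction), and the timestamps are assumed known or guessable, as in the paper's attack model:

```python
import hashlib

BLOCK = 8  # toy 64-bit blocks

def E(key: bytes, x: bytes) -> bytes:
    # PRF stand-in for the keyed block cipher; illustrative only.
    return hashlib.sha256(key + x).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def x931_step(key, state, timestamp):
    """One ANSI X9.31 iteration: returns (output block, next state)."""
    i = E(key, timestamp)
    r = E(key, xor(i, state))      # pseudorandom output
    v = E(key, xor(i, r))          # next state
    return r, v

# Victim generates two outputs with a static, hard-coded key.
key = b"hardcoded!!!!!!!"          # the vulnerability: K is fixed and learnable
v0, t0, t1 = b"\x01" * BLOCK, b"\x00" * BLOCK, b"\x02" * BLOCK
r0, v1 = x931_step(key, v0, t0)
r1, _ = x931_step(key, v1, t1)

# Attacker: sees r0, knows key, guesses timestamps -> recovers the state
# and predicts the next output exactly.
i0 = E(key, t0)
v1_guess = E(key, xor(i0, r0))    # state recovery from one output block
r1_guess, _ = x931_step(key, v1_guess, t1)
assert r1_guess == r1
```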
@inproceedings{CGH18,
  author    = {Shaanan N. Cohney and
              Matthew D. Green and
              Nadia Heninger},
  title     = {Practical State Recovery Attacks against Legacy {RNG} Implementations},
  booktitle = {Proceedings of the 2018 {ACM} {SIGSAC} Conference on Computer and
              Communications Security, {CCS} 2018, Toronto, ON, Canada, October
              15-19, 2018},
  pages     = {265--280},
  year      = {2018},
  url       = {http://doi.acm.org/10.1145/3243734.3243756},
  doi       = {10.1145/3243734.3243756},
}

Deterministic Document Exchange Protocols, and Almost Optimal Binary Codes for Edit Errors

We study two basic problems regarding edit error, i.e. document exchange and error correcting codes for edit errors (insdel codes). For message length n and edit error upper bound k, it is known that in both problems the optimal sketch size or the optimal number of redundant bits is Θ(k log(n/k)). However, known constructions are far from achieving these bounds. We significantly improve previous results on both problems. For document exchange, we give an efficient deterministic protocol with sketch size O(k log^2(n/k)). This significantly improves the previous best known deterministic protocol, which has sketch size O(k^2 + k log^2 n) (Belazzougui15). For binary insdel codes, we obtain the following results:

  1. An explicit binary insdel code which encodes an n-bit message x against k errors with redundancy O(k log^2(n/k)). In particular this implies an explicit family of binary insdel codes that can correct ε fraction of insertions and deletions with rate 1 − O(ε log^2(1/ε)) = 1 − Õ(ε).
  2. An explicit binary insdel code which encodes an n-bit message x against k errors with redundancy O(k log n). This is the first explicit construction of binary insdel codes that has optimal redundancy for a wide range of error parameters k, and this brings our understanding of binary insdel codes much closer to that of standard binary error correcting codes. In obtaining our results we introduce the notion of ε-self matching hash functions and ε-synchronization hash functions. We believe our techniques can have further applications in the literature.
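The gap between these bounds is concrete even at modest parameters. A rough numerical comparison of the quoted sketch-size asymptotics (constants ignored, purely illustrative):

```python
import math

# Sketch sizes (up to constants) for document exchange on n-bit strings
# with at most k edit errors, per the bounds quoted in the abstract.
def optimal(n, k):      return k * math.log2(n / k)            # Theta(k log(n/k))
def belazzougui(n, k):  return k * k + k * math.log2(n) ** 2   # O(k^2 + k log^2 n)
def this_work(n, k):    return k * math.log2(n / k) ** 2       # O(k log^2(n/k))

n, k = 2**20, 32
assert this_work(n, k) < belazzougui(n, k)   # new protocol beats the previous best
assert optimal(n, k) < this_work(n, k)       # still a log(n/k) factor above optimal
```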
@inproceedings{CJLW18,
  author    = {Kuan Cheng and
              Zhengzhong Jin and
              Xin Li and
              Ke Wu},
  title     = {Deterministic Document Exchange Protocols, and Almost Optimal Binary
              Codes for Edit Errors},
  booktitle = {{FOCS}},
  pages     = {200--211},
  publisher = {{IEEE} Computer Society},
  year      = {2018}
}

Round-Optimal Secure Multiparty Computation with Honest Majority

We study the exact round complexity of secure multiparty computation (MPC) in the honest majority setting. We construct several round-optimal n-party protocols, tolerating any t < n/2 corruptions.

  • Security with abort: We give the first construction of two round MPC for general functions that achieves security with abort against malicious adversaries in the plain model. The security of our protocol only relies on one-way functions.
  • Guaranteed output delivery: We also construct protocols that achieve security with guaranteed output delivery: (i) Against fail-stop adversaries, we construct two round MPC either in the (bare) public-key infrastructure model with no additional assumptions, or in the plain model assuming two-round semi-honest oblivious transfer. In three rounds, however, we can achieve security assuming only one-way functions. (ii) Against malicious adversaries, we construct three round MPC in the plain model, assuming public-key encryption and Zaps. Previously, such protocols were only known based on specific learning assumptions and required the use of common reference strings. All of our results are obtained via general compilers that may be of independent interest.
@inproceedings{ACGJ18,
  author    = {Prabhanjan Ananth and
              Arka Rai Choudhuri and
              Aarushi Goel and
              Abhishek Jain},
  title     = {Round-Optimal Secure Multiparty Computation with Honest Majority},
  booktitle = {Advances in Cryptology - {CRYPTO} 2018 - 38th Annual International
              Cryptology Conference, Santa Barbara, CA, USA, August 19-23, 2018,
              Proceedings, Part {II}},
  pages     = {395--424},
  year      = {2018},
  url       = {https://doi.org/10.1007/978-3-319-96881-0\_14},
  doi       = {10.1007/978-3-319-96881-0\_14}
}

Promise Zero Knowledge and its Applications to Round-Optimal MPC

We devise a new partitioned simulation technique for MPC where the simulator uses different strategies for simulating the view of aborting adversaries and non-aborting adversaries. The protagonist of this technique is a new notion of promise zero knowledge (ZK) where the ZK property only holds against non-aborting verifiers. We show how to realize promise ZK in three rounds in the simultaneous-message model assuming polynomially hard DDH (or QR or Nth-Residuosity). We demonstrate the following applications of our new technique:

  • We construct the first round-optimal (i.e., four round) MPC protocol for general functions based on polynomially hard DDH (or QR or Nth-Residuosity).
  • We further show how to overcome the four-round barrier for MPC by constructing a three-round protocol for “list coin-tossing” – a slight relaxation of coin-tossing that suffices for most conceivable applications – based on polynomially hard DDH (or QR or Nth-Residuosity). This result generalizes to randomized input-less functionalities. Previously, four round MPC protocols required sub-exponential-time hardness assumptions and no multi-party three-round protocols were known for any relaxed security notions with polynomial-time simulation against malicious adversaries. In order to base security on polynomial-time standard assumptions, we also rely upon a leveled rewinding security technique that can be viewed as a polynomial-time alternative to leveled complexity leveraging for achieving “non-malleability” across different primitives.
@inproceedings{BGJKKS18,
  author    = {Saikrishna Badrinarayanan 
              and Vipul Goyal 
              and Abhishek Jain 
              and Yael Tauman Kalai 
              and Dakshita Khurana 
              and Amit Sahai},
  title     = {Promise Zero Knowledge and Its Applications to Round Optimal {MPC}},
  booktitle = {Advances in Cryptology - {CRYPTO} 2018 - 38th Annual International
              Cryptology Conference, Santa Barbara, CA, USA, August 19-23, 2018,
              Proceedings, Part {II}},
  pages     = {459--487},
  year      = {2018},
  url       = {https://doi.org/10.1007/978-3-319-96881-0\_16},
  doi       = {10.1007/978-3-319-96881-0\_16}
}

The Bottleneck Complexity of Secure Multiparty Computation

In this work, we initiate the study of bottleneck complexity as a new communication efficiency measure for secure multiparty computation (MPC). Roughly, the bottleneck complexity of an MPC protocol is defined as the maximum communication complexity required by any party within the protocol execution. We observe that even without security, bottleneck communication complexity is an interesting measure of communication complexity for (distributed) functions and propose it as a fundamental area to explore. While achieving O(n) bottleneck complexity (where n is the number of parties) is straightforward, we show that: (1) achieving sublinear bottleneck complexity is not always possible, even when no security is required. (2) On the other hand, several useful classes of functions do have o(n) bottleneck complexity, when no security is required. Our main positive result is a compiler that transforms any (possibly insecure) efficient protocol with fixed communication-pattern for computing any functionality into a secure MPC protocol while preserving the bottleneck complexity of the underlying protocol (up to security parameter overhead). Given our compiler, an efficient protocol for any function f with sublinear bottleneck complexity can be transformed into an MPC protocol for f with the same bottleneck complexity. Along the way, we build cryptographic primitives - incremental fully-homomorphic encryption, succinct non-interactive arguments of knowledge with ID-based simulation-extractability property and verifiable protocol execution - that may be of independent interest.
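Why some functions have o(n) bottleneck complexity is easy to see without any security: summation along a chain, where each party adds its input to a running total and forwards it, costs every party only O(1) communication, whereas sending all inputs to one party concentrates Ω(n) at that party. A minimal illustration (an insecure protocol; the paper's compiler is what adds security while preserving the bottleneck):

```python
# Bottleneck complexity of an (insecure) chain protocol for summation:
# party i receives a running sum, adds its input, and forwards it.
def chain_sum(inputs):
    comm = [0] * len(inputs)              # messages sent + received per party
    running = 0
    for i, x in enumerate(inputs):
        comm[i] += (i > 0)                # receive running sum (except party 0)
        running += x
        comm[i] += (i < len(inputs) - 1)  # forward it (except the last party)
    return running, max(comm)             # output and bottleneck complexity

total, bottleneck = chain_sum([3, 1, 4, 1, 5])
assert total == 14 and bottleneck == 2    # bottleneck stays 2 for any n >= 2
```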
@inproceedings{BJPY18,
  author    = {Elette Boyle and
              Abhishek Jain and
              Manoj Prabhakaran and
              Ching{-}Hua Yu},
  title     = {The Bottleneck Complexity of Secure Multiparty Computation},
  booktitle = {45th International Colloquium on Automata, Languages, and Programming,
              {ICALP} 2018, July 9-13, 2018, Prague, Czech Republic},
  pages     = {24:1--24:16},
  year      = {2018},
  url       = {https://doi.org/10.4230/LIPIcs.ICALP.2018.24},
  doi       = {10.4230/LIPIcs.ICALP.2018.24}
}

On the Existence of Three Round Zero-Knowledge Proofs

We study the round complexity of zero-knowledge (ZK) proof systems. While five round ZK proofs for NP are known from standard assumptions [Goldreich-Kahan, J. Cryptology'96], Katz [TCC'08] proved that four rounds are insufficient for this task w.r.t. black-box simulation. In this work, we study the feasibility of ZK proofs using non-black-box simulation. Our main result is that three round private-coin ZK proofs for NP do not exist (even w.r.t. non-black-box simulation), under certain assumptions on program obfuscation. Our approach builds upon the recent work of Kalai et al. [Crypto'17] who ruled out constant round public-coin ZK proofs under the same assumptions as ours.
@inproceedings{FGJ18,
  author    = {Nils Fleischhacker and
              Vipul Goyal and
              Abhishek Jain},
  title     = {On the Existence of Three Round Zero-Knowledge Proofs},
  booktitle = {Advances in Cryptology - {EUROCRYPT} 2018 - 37th Annual International
              Conference on the Theory and Applications of Cryptographic Techniques,
              Tel Aviv, Israel, April 29 - May 3, 2018 Proceedings, Part {III}},
  pages     = {3--33},
  year      = {2018},
  url       = {https://doi.org/10.1007/978-3-319-78372-7\_1},
  doi       = {10.1007/978-3-319-78372-7\_1},
}

Synchronized Aggregate Signatures from the RSA Assumption

In this work we construct efficient aggregate signatures from the RSA assumption in the synchronized setting. In this setting, the signing algorithm takes as input a (time) period t as well the secret key and message. A signer should sign at most once for each t. A set of signatures can be aggregated so long as they were all created for the same period t. Synchronized aggregate signatures are useful in systems where there is a natural reporting period such as log and sensor data, or for signatures embedded in a blockchain protocol where the creation of an additional block is a natural synchronization event. We design a synchronized aggregate signature scheme that works for a bounded number of periods T that is given as a parameter to a global system setup. The big technical question is whether we can create solutions that will perform well with the large T values that we might use in practice. For instance, if one wanted signing keys to last up to ten years and be able to issue signatures every second, then we would need to support a period bound of upwards of 2^28. We build our solution in stages where we start with an initial solution that establishes feasibility, but has an impractically large signing time where the number of exponentiations and prime searches grows linearly with T. We prove this scheme secure in the standard model under the RSA assumption with respect to honestly-generated keys. We then provide a tradeoff method where one can tradeoff the time to create signatures with the space required to store private keys. One point in the tradeoff is where each scales with sqrt(T). Finally, we reach our main innovation which is a scheme where both the signing time and storage scale with lg T which allows for us to keep both computation and storage costs modest even for large values of T. Conveniently, our final scheme uses the same verification algorithm, and has the same distribution of public keys and signatures as the first scheme. 
Thus we are able to recycle the existing security proof for the new scheme. We also show how to extend our results to the identity-based setting in the random oracle model, which can further reduce the overall cryptographic overhead. We conclude with a detailed evaluation of the signing time and storage requirements for various practical settings of the system parameters.
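The practical impact of the three design points is visible from the arithmetic alone (a worked check of the parameters quoted above; the costs are unit-free proxies for exponentiations/stored elements):

```python
import math

# Ten years of one signature per second gives T on the order of 2^28.
T = 10 * 365 * 24 * 3600           # ~3.15e8 periods
assert 2**28 < T < 2**29

naive   = T                        # initial scheme: cost linear in T
sqrt_tr = math.isqrt(T)            # balanced tradeoff: each cost ~ sqrt(T)
log_tr  = T.bit_length()           # final scheme: both costs ~ lg T
assert log_tr < sqrt_tr < naive    # lg T (~29) << sqrt(T) (~2^14) << T
```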
@inproceedings{HW18,
  author    = {Susan Hohenberger and
              Brent Waters},
  title     = {Synchronized Aggregate Signatures from the {RSA} Assumption},
  booktitle = {Advances in Cryptology - {EUROCRYPT} 2018 - 37th Annual International
              Conference on the Theory and Applications of Cryptographic Techniques,
              Tel Aviv, Israel, April 29 - May 3, 2018 Proceedings, Part {II}},
  pages     = {197--229},
  year      = {2018},
  url       = {https://doi.org/10.1007/978-3-319-78375-8\_7},
  doi       = {10.1007/978-3-319-78375-8\_7},
}

How to Squeeze a Crowd: Reducing Bandwidth in Mixing Cryptocurrencies

Several popular cryptocurrencies incorporate privacy features that “mix” real transactions with cover traffic in order to obfuscate the public transaction graph. The underlying protocols, which include CryptoNote and Monero’s RingCT, work by first identifying a real transaction output (TXO), sampling a number of cover outputs, and transmitting the entire resulting set to verifiers, along with a zero knowledge (or WI) proof that hides the identity of the real transaction. Unfortunately, many of these schemes suffer from a practical limitation: the description of the combined input set grows linearly with size of the anonymity set. In this work we propose a simple technique for efficiently sampling cover traffic from a finite (and public) set of known values, while deriving a compact description of the resulting transaction set. This technique, which is based on programmable hash functions, allows us to dramatically reduce transaction bandwidth when large cover sets are used. We refer to our construction as a recoverable sampling scheme, and note that it may be of independent interest for other privacy applications. We present formal security definitions; prove our constructions secure; and show how these constructions can be integrated with various currencies and different cover sampling distributions.
@inproceedings{CG18b,
  author    = {Alishah Chator and
              Matthew Green},
  title     = {How to Squeeze a Crowd: Reducing Bandwidth in Mixing Cryptocurrencies},
  booktitle = {2018 {IEEE} European Symposium on Security and Privacy Workshops
              (EuroS{\&}PW)},
  pages     = {40--49},
  publisher = {{IEEE}},
  year      = {2018},
  doi       = {10.1109/EuroSPW.2018.00012}
}

Don’t Talk to Strangers: On the Challenges of Intelligent Vehicle Authentication

Vehicle-to-vehicle (V2V) communications offer an unprecedented opportunity to increase driver safety. At the same time, the use of computer networking technologies raises new concerns around information security and privacy. Specifically, V2V communications systems provide the opportunity for malicious individuals to transmit false data, with unknown effects on future vehicle systems. A number of proposals have been advanced in order to add authenticity guarantees to V2V systems using cryptographic techniques. Unfortunately, many of these proposals have a number of side effects related to efficiency and driver privacy. In this work we discuss these tradeoffs and explain why it is challenging to achieve all desired properties in a single system. We then suggest alternative approaches that may be more realistic than current proposals.
@conference{CG18a,
  author={Alishah Chator and Matthew Green},
  title={Don’t Talk to Strangers - On the Challenges of Intelligent Vehicle Authentication},
  booktitle={Proceedings of the 4th International Conference on Vehicle Technology and Intelligent Transport Systems - Volume 1: {VEHITS}},
  year={2018},
  pages={522-528},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0006790205220528},
  isbn={978-989-758-293-6},
}

Non-Interactive Multiparty Computation without Correlated Randomness

We study the problem of non-interactive multiparty computation (NI-MPC) where a group of completely asynchronous parties can evaluate a function over their joint inputs by sending a single message to an evaluator who computes the output. Previously, the only general solutions to this problem that resisted collusions between the evaluator and a set of parties were based on multi-input functional encryption and required the use of complex correlated randomness setup. In this work, we present a new solution for NI-MPC against arbitrary collusions using a public-key infrastructure (PKI) setup supplemented with a common random string. A PKI is, in fact, the minimal setup that one can hope for in this model in order to achieve a meaningful “best possible” notion of security, namely, that an adversary that corrupts the evaluator and an arbitrary set of parties only learns the residual function obtained by restricting the function to the inputs of the uncorrupted parties. Our solution is based on indistinguishability obfuscation and DDH both with sub-exponential security. We extend this main result to the case of general interaction patterns, providing the above best possible security that is achievable for the given interaction. Our main result gives rise to a novel notion of (public-key) multiparty obfuscation, where n parties can independently obfuscate program modules Mi such that the obfuscated modules, when put together, exhibit the functionality of the program obtained by “combining” the underlying modules Mi. This notion may be of independent interest.
@inproceedings{HIJKSY17,
  author    = {Shai Halevi and
              Yuval Ishai and
              Abhishek Jain and
              Ilan Komargodski and
              Amit Sahai and
              Eylon Yogev},
  title     = {Non-Interactive Multiparty Computation Without Correlated Randomness},
  booktitle = {{ASIACRYPT} {(3)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {10626},
  pages     = {181--211},
  publisher = {Springer},
  year      = {2017}
}

A Generic Approach to Constructing and Proving Verifiable Random Functions

Verifiable Random Functions (VRFs) as introduced by Micali, Rabin and Vadhan are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment. Prior works have approached the problem of constructing VRFs by proposing a candidate under a specific number-theoretic setting — mostly in bilinear groups — and then grappling with the challenges of proving security in the VRF environments. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions and cryptographic assumptions. In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. Then we can turn to exploring solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from the ingredients of: (1) a 1-bounded constrained pseudo random function for a functionality that is “admissible hash friendly”, (2) a non-interactive statistically binding commitment scheme (without trusted setup) and (3) non-interactive witness-indistinguishable proofs (NIWIs). The first primitive can be replaced with a more basic puncturable PRF constraint if one is willing to settle for selective security or assume sub-exponential hardness of assumptions. In the second half of our work we support our generic approach by giving new constructions of the underlying primitives. We first provide new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions. Second, we give two new constructions of 1-bounded constrained PRFs for admissible hash friendly constructions. Our first construction is from the n-powerDDH assumption. The next is from the ϕ-hiding assumption.
@inproceedings{GHKW17,
  author    = {Rishab Goyal and
              Susan Hohenberger and
              Venkata Koppula and
              Brent Waters},
  title     = {A Generic Approach to Constructing and Proving Verifiable Random Functions},
  booktitle = {Theory of Cryptography - 15th International Conference, {TCC} 2017,
              Baltimore, MD, USA, November 12-15, 2017, Proceedings, Part {II}},
  pages     = {537--566},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-70503-3_18},
  doi       = {10.1007/978-3-319-70503-3_18}
}

On Secure Two-Party Computation in Three Rounds

We revisit the exact round complexity of secure two-party computation. While four rounds are known to be sufficient for securely computing general functions that provide output to one party [Katz-Ostrovsky, CRYPTO’04], Goldreich-Krawczyk [SIAM J. Computing’96] proved that three rounds are insufficient for this task w.r.t. black-box simulation. In this work, we study the feasibility of secure computation in three rounds using non-black-box simulation. Our main result is a three-round two-party computation protocol for general functions against adversaries with auxiliary inputs of a priori bounded size. This result relies on a new two-round input-extraction protocol based on succinct randomized encodings. We also provide a partial answer to the question of achieving security against non-uniform adversaries. Assuming sub-exponentially secure iO and one-way functions, we rule out three-round protocols that achieve polynomial simulation-based security against the output party and exponential indistinguishability-based security against the other party.
@inproceedings{AJ17,
  author    = {Prabhanjan Ananth and
              Abhishek Jain},
  title     = {On Secure Two-Party Computation in Three Rounds},
  booktitle = {Theory of Cryptography - 15th International Conference, {TCC} 2017,
              Baltimore, MD, USA, November 12-15, 2017, Proceedings, Part {I}},
  pages     = {612--644},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-70500-2_21},
  doi       = {10.1007/978-3-319-70500-2_21}
}

Round Optimal Concurrent MPC via Strong Simulation

In this paper, we study the round complexity of concurrently secure multi-party computation (MPC) with super-polynomial simulation (SPS) in the plain model. In the plain model, there are known explicit attacks that show that concurrently secure MPC with polynomial simulation is impossible to achieve; SPS security is the most widely studied model for concurrently secure MPC in the plain model. We obtain the following results: – Three-round concurrent MPC with SPS security against Byzantine adversaries, assuming sub-exponentially secure DDH and LWE. – Two-round concurrent MPC with SPS security against Byzantine adversaries for input-less randomized functionalities, assuming sub-exponentially secure indistinguishability obfuscation and DDH. In particular, this class includes sampling functionalities that allow parties to jointly sample a secure common reference string for cryptographic applications. Prior to our work, to the best of our knowledge, concurrent MPC with SPS security required roughly 20 rounds, although we are not aware of any work that even gave an approximation of the constant round complexity sufficient for the multi-party setting. We also improve over the previous best round complexity for the two-party setting, where 5 rounds were needed (Garg, Kiyoshima, and Pandey, Eurocrypt 2017). To obtain our results, we compile protocols that already achieve security against “semi-malicious” adversaries, to protocols secure against fully malicious adversaries, additionally assuming sub-exponential DDH. Our protocols develop new techniques to use two-round zero-knowledge with super-polynomial strong simulation, defined by Pass (Eurocrypt 2003) and very recently realized by Khurana and Sahai (FOCS 2017). These remain zero-knowledge against adversaries running in time larger than the running time of the simulator.
@inproceedings{BGJKS17,
  author    = {Saikrishna Badrinarayanan and
              Vipul Goyal and
              Abhishek Jain and
              Dakshita Khurana and
              Amit Sahai},
  title     = {Round Optimal Concurrent {MPC} via Strong Simulation},
  booktitle = {Theory of Cryptography - 15th International Conference, {TCC} 2017,
              Baltimore, MD, USA, November 12-15, 2017, Proceedings, Part {I}},
  pages     = {743--775},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-70500-2_25},
  doi       = {10.1007/978-3-319-70500-2_25}
}

Verified Correctness and Security of mbedTLS HMAC-DRBG

We have formalized the functional specification of HMAC-DRBG (NIST 800-90A), and we have proved its cryptographic security — that its output is pseudorandom — using a hybrid game-based proof. We have also proved that the mbedTLS implementation (C program) correctly implements this functional specification. That proof composes with an existing C compiler correctness proof to guarantee, end-to-end, that the machine language program gives strong pseudorandomness. All proofs (hybrid games, C program verification, compiler, and their composition) are machine-checked in the Coq proof assistant. Our proofs are modular: the hybrid game proof holds on any implementation of HMAC-DRBG that satisfies our functional specification. Therefore, our functional specification can serve as a high-assurance reference.
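For readers unfamiliar with the object being verified, a minimal Python sketch of the HMAC-DRBG update/generate loop from NIST SP 800-90A follows. This is not the verified mbedTLS C code or the paper's Coq specification, just an executable illustration of the state machine; it omits the reseed counter, prediction resistance, and entropy-input checks.

```python
import hmac
import hashlib

class HmacDrbg:
    """Minimal HMAC-DRBG over SHA-256, following the SP 800-90A
    instantiate/update/generate structure. Illustrative only."""

    def __init__(self, seed_material: bytes):
        # Instantiate: K = 0x00..00, V = 0x01..01, then absorb the seed.
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(seed_material)

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, provided: bytes = b""):
        # HMAC_DRBG_Update: fold optional input into (K, V).
        self.K = self._hmac(self.K, self.V + b"\x00" + provided)
        self.V = self._hmac(self.K, self.V)
        if provided:
            self.K = self._hmac(self.K, self.V + b"\x01" + provided)
            self.V = self._hmac(self.K, self.V)

    def generate(self, n: int) -> bytes:
        # Generate: iterate V = HMAC(K, V) and concatenate.
        out = b""
        while len(out) < n:
            self.V = self._hmac(self.K, self.V)
            out += self.V
        self._update()  # post-generate state update
        return out[:n]
```

The paper's modularity claim is visible even here: any implementation exposing the same `(K, V)` transition function satisfies the functional specification the hybrid-game proof is stated over.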
@inproceedings{YGSBPA17,
  author = {Ye, Katherine Q. and Green, Matthew and Sanguansin, Naphat and Beringer, Lennart and Petcher, Adam and Appel, Andrew W.},
  title = {Verified Correctness and Security of mbedTLS HMAC-DRBG},
  booktitle = {Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security},
  series = {CCS '17},
  year = {2017},
  isbn = {978-1-4503-4946-8},
  location = {Dallas, Texas, USA},
  pages = {2007--2020},
  numpages = {14},
  url = {http://doi.acm.org/10.1145/3133956.3133974},
  doi = {10.1145/3133956.3133974},
  acmid = {3133974},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {formal verification, functional specification, hmac-drbg, pseudo-random generator},
} 

Bolt: Anonymous Payment Channels for Decentralized Currencies

Bitcoin owes its success to the fact that transactions are transparently recorded in the blockchain, a global public ledger that removes the need for trusted parties. Unfortunately, recording every transaction in the blockchain causes privacy, latency, and scalability issues. Building on recent proposals for “micropayment channels” — two party associations that use the ledger only for dispute resolution — we introduce techniques for constructing anonymous payment channels. Our proposals allow for secure, instantaneous and private payments that substantially reduce the storage burden on the payment network. Specifically, we introduce three channel proposals, including a technique that allows payments via untrusted intermediaries. We build a concrete implementation of our scheme and show that it can be deployed via a soft fork to existing anonymous currencies such as ZCash.
@inproceedings{GM17,
  author = {Green, Matthew and Miers, Ian},
  title = {Bolt: Anonymous Payment Channels for Decentralized Currencies},
  booktitle = {Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security},
  series = {CCS '17},
  year = {2017},
  isbn = {978-1-4503-4946-8},
  location = {Dallas, Texas, USA},
  pages = {473--489},
  numpages = {17},
  url = {http://doi.acm.org/10.1145/3133956.3134093},
  doi = {10.1145/3133956.3134093},
  acmid = {3134093},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {bitcoin, blockchain, off chain, payments},
} 

Fairness in an Unfair World: Fair Multiparty Computation from Public Bulletin Boards

Secure multiparty computation allows mutually distrusting parties to compute a function on their private inputs such that nothing but the function output is revealed. Achieving fairness — that all parties learn the output or no one does — is a long-studied problem with known impossibility results in the standard model if a majority of parties are dishonest. We present a new model for achieving fairness in MPC against a dishonest majority by using public bulletin boards implemented via existing infrastructure such as blockchains or Google’s certificate transparency logs. We present both theoretical and practical constructions using either witness encryption or trusted hardware (such as Intel SGX). Unlike previous works that either penalize an aborting party or achieve weaker notions such as Δ-fairness, we achieve complete fairness using existing infrastructure.
@inproceedings{CGJKM17,
  author    = {Arka Rai Choudhuri and
              Matthew Green and
              Abhishek Jain and
              Gabriel Kaptchuk and
              Ian Miers},
  title     = {Fairness in an Unfair World: Fair Multiparty Computation from Public
              Bulletin Boards},
  booktitle = {Proceedings of the 2017 {ACM} {SIGSAC} Conference on Computer and
              Communications Security, {CCS} 2017, Dallas, TX, USA, October 30 -
              November 03, 2017},
  pages     = {719--728},
  year      = {2017},
  url       = {http://doi.acm.org/10.1145/3133956.3134092},
  doi       = {10.1145/3133956.3134092}
}

A New Approach to Round-Optimal Secure Multiparty Computation

We present a new approach towards constructing round-optimal secure multiparty computation (MPC) protocols against malicious adversaries without trusted setup assumptions. Our approach builds on ideas previously developed in the context of covert multiparty computation [Chandran et al., FOCS'07] even though we do not seek covert security. Using our new approach, we obtain the following results:

  1. A five round MPC protocol based on the Decisional Diffie-Hellman (DDH) assumption.
  2. A four round MPC protocol based on one-way permutations and sub-exponentially secure DDH. This result is optimal in the number of rounds. Previously, no four-round MPC protocol for general functions was known and five-round protocols were only known based on indistinguishability obfuscation (and some additional assumptions) [Garg et al., EUROCRYPT'16].
@inproceedings{ACJ17b,
  author    = {Prabhanjan Ananth and
              Arka Rai Choudhuri and
              Abhishek Jain},
  title     = {A New Approach to Round-Optimal Secure Multiparty Computation},
  booktitle = {Advances in Cryptology - {CRYPTO} 2017 - 37th Annual International
              Cryptology Conference, Santa Barbara, CA, USA, August 20-24, 2017,
              Proceedings, Part {I}},
  pages     = {468--499},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-63688-7_16},
  doi       = {10.1007/978-3-319-63688-7_16}
}

Distinguisher Dependent Simulation in Two Rounds and its Applications

We devise a novel simulation technique that makes black-box use of the adversary as well as the distinguisher. Using this technique we construct several round-optimal protocols, many of which were previously unknown even using non-black-box simulation techniques:

  • Two-round witness indistinguishable (WI) arguments for NP from different assumptions than previously known.
  • Two-round arguments and three-round arguments of knowledge for NP that achieve strong WI, witness hiding (WH) and distributional weak zero knowledge (WZK) properties in a setting where the instance is only determined by the prover in the last round of the interaction. The soundness of these protocols is guaranteed against adaptive provers.
  • Three-round two-party computation satisfying input-indistinguishable security as well as a weaker notion of simulation security against malicious adversaries.
  • Three-round extractable commitments with guaranteed correctness of extraction from polynomial hardness assumptions. Our three-round protocols can be based on DDH or QR or N^th residuosity and our two-round protocols require quasi-polynomial hardness of the same assumptions. In particular, prior to this work, two-round WI arguments for NP were only known based on assumptions such as the existence of trapdoor permutations, hardness assumptions on bilinear maps, or the existence of program obfuscation; we give the first construction based on (quasi-polynomial) DDH. Our simulation technique bypasses known lower bounds on black-box simulation [Goldreich-Krawczyk'96] by using the distinguisher’s output in a meaningful way. We believe that this technique is likely to find more applications in the future.
@inproceedings{JKKR17,
  author    = {Abhishek Jain and
              Yael Tauman Kalai and
              Dakshita Khurana and
              Ron Rothblum},
  title     = {Distinguisher-Dependent Simulation in Two Rounds and its Applications},
  booktitle = {Advances in Cryptology - {CRYPTO} 2017 - 37th Annual International
              Cryptology Conference, Santa Barbara, CA, USA, August 20-24, 2017,
              Proceedings, Part {II}},
  pages     = {158--189},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-63715-0_6},
  doi       = {10.1007/978-3-319-63715-0_6}
}

Indistinguishability Obfuscation for Turing Machines: Constant Overhead and Amortization

We study the asymptotic efficiency of indistinguishability obfuscation (iO) on two fronts:

  • Obfuscation size: Present constructions of indistinguishability obfuscation (iO) create obfuscated programs where the size of the obfuscated program is at least a multiplicative factor of the security parameter larger than the size of the original program. In this work, we construct the first iO scheme for (bounded-input) Turing machines that achieves only a constant multiplicative overhead in size. The constant in our scheme is, in fact, 2.
  • Amortization: Suppose we want to obfuscate an arbitrary polynomial number of (bounded-input) Turing machines M1,…,Mn. We ask whether it is possible to obfuscate M1,…,Mn using a single application of an iO scheme for a circuit family where the size of any circuit is independent of n as well as the size of any Turing machine Mi. In this work, we resolve this question in the affirmative, obtaining a new bootstrapping theorem for obfuscating arbitrarily many Turing machines. Our results rely on the existence of sub-exponentially secure iO for circuits and re-randomizable encryption schemes. In order to obtain these results, we develop a new template for obfuscating Turing machines that is of independent interest and has recently found application in subsequent work on patchable obfuscation [Ananth et al., EUROCRYPT'17].
@inproceedings{AJS17b,
  author    = {Prabhanjan Ananth and
              Abhishek Jain and
              Amit Sahai},
  title     = {Indistinguishability Obfuscation for Turing Machines: Constant Overhead
              and Amortization},
  booktitle = {Advances in Cryptology - {CRYPTO} 2017 - 37th Annual International
              Cryptology Conference, Santa Barbara, CA, USA, August 20-24, 2017,
              Proceedings, Part {II}},
  pages     = {252--279},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-63715-0_9},
  doi       = {10.1007/978-3-319-63715-0_9}
}

Signature Schemes with Randomized Verification

A signature scheme consists of a setup, signing and verification algorithms. In most existing works, the verification algorithm is assumed to be deterministic. However, there could be signature schemes where the verification algorithm is randomized. In this work, we study signature schemes with randomized verification. Our results can be summarized as follows. First, we present a security definition for signature schemes with randomized verification. The standard EUFCMA notion of security for signature schemes with deterministic verification is very restrictive when we consider randomized verification. Therefore, we propose a new security definition called χ-EUFCMA which captures a broad class of signature schemes with randomized verification. Next, we analyse the security of Naor’s transformation from Identity Based Encryption to signature schemes. Such a transformation results in a scheme with randomized verification. We show that this transformation can be proven χ-EUFCMA secure by choosing χ appropriately. Finally, we show how a scheme with randomized verification can be generically transformed to one with deterministic verification.
@inproceedings{FGHKLOTW17,
  author    = {Cody Freitag and
              Rishab Goyal and
              Susan Hohenberger and
              Venkata Koppula and
              Eysa Lee and
              Tatsuaki Okamoto and
              Jordan Tran and
              Brent Waters},
  title     = {Signature Schemes with Randomized Verification},
  booktitle = {Applied Cryptography and Network Security - 15th International Conference,
              {ACNS} 2017, Kanazawa, Japan, July 10-12, 2017, Proceedings},
  pages     = {373--389},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-61204-1_19},
  doi       = {10.1007/978-3-319-61204-1_19}
}

Decentralized Anonymous Micropayments

Micropayments (payments worth a few pennies) have numerous potential applications. A challenge in achieving them is that payment networks charge fees that are high compared to “micro” sums of money. Wheeler (1996) and Rivest (1997) proposed probabilistic payments as a technique to achieve micropayments: a merchant receives a macro-value payment with a given probability so that, in expectation, he receives a micro-value payment. Despite much research and trial deployment, micropayment schemes have not seen adoption, partly because a trusted party is required to process payments and resolve disputes. The widespread adoption of decentralized currencies such as Bitcoin (2009) suggests that decentralized micropayment schemes are easier to deploy. Pass and Shelat (2015) proposed several micropayment schemes for Bitcoin, but their schemes provide no more privacy guarantees than Bitcoin itself, whose transactions are recorded in plaintext in a public ledger. We formulate and construct decentralized anonymous micropayment (DAM) schemes, which enable parties with access to a ledger to conduct offline probabilistic payments with one another, directly and privately. Our techniques extend those of Zerocash (2014) with a new probabilistic payment scheme; we further provide an efficient instantiation based on a new fractional message transfer protocol. Double spending in our setting cannot be prevented. Our second contribution is an economic analysis that bounds the additional utility gain of any cheating strategy, and applies to virtually any probabilistic payment scheme with offline validation. In our construction, this bound allows us to deter double spending by way of advance deposits that are revoked when cheating is detected.
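The Wheeler/Rivest lottery mechanism the abstract builds on can be made concrete in a few lines. The sketch below is our illustration of the plain (non-private) idea only: a merchant receives the macro value with probability micro/macro, so expected revenue per payment equals the micro value; the paper's contribution is making such draws anonymous, ledger-backed, and double-spend-deterred.

```python
import random

def probabilistic_payment(micro_value: float, macro_value: float,
                          rng: random.Random) -> float:
    """One probabilistic micropayment: pay `macro_value` with probability
    micro/macro, and nothing otherwise, so the merchant's expected revenue
    per payment is exactly `micro_value`."""
    p = micro_value / macro_value
    return macro_value if rng.random() < p else 0.0

# A merchant serving many customers concentrates around the micro value.
rng = random.Random(0)
n = 200_000
total = sum(probabilistic_payment(0.01, 1.00, rng) for _ in range(n))
avg = total / n  # close to the 0.01 micro value by the law of large numbers
```

Each individual draw is a full-size (macro) transaction or nothing, which is why only a small fraction of payments ever touch the payment network or ledger.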
@inproceedings{CGLMMM17,
  author    = {Alessandro Chiesa and
              Matthew Green and
              Jingcheng Liu and
              Peihan Miao and
              Ian Miers and
              Pratyush Mishra},
  title     = {Decentralized Anonymous Micropayments},
  booktitle = {Advances in Cryptology - {EUROCRYPT} 2017 - 36th Annual International
              Conference on the Theory and Applications of Cryptographic Techniques,
              Paris, France, April 30 - May 4, 2017, Proceedings, Part {II}},
  pages     = {609--642},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-56614-6_21},
  doi       = {10.1007/978-3-319-56614-6_21}
}

Cryptography with Updates

Starting with the work of Bellare, Goldreich and Goldwasser [CRYPTO’94], a rich line of work has studied the design of updatable cryptographic primitives. For example, in an updatable signature scheme, it is possible to efficiently transform a signature over a message into a signature over a related message without recomputing a fresh signature. In this work, we continue this line of research, and perform a systematic study of updatable cryptography. We take a unified approach towards adding updatability features to recently studied cryptographic objects such as attribute-based encryption, functional encryption, witness encryption, indistinguishability obfuscation, and many others that support non-interactive computation over inputs. We, in fact, go further and extend our approach to classical protocols such as zero-knowledge proofs and secure multiparty computation. To accomplish this goal, we introduce a new notion of updatable randomized encodings that extends the standard notion of randomized encodings to incorporate updatability features. We show that updatable randomized encodings can be used to generically transform cryptographic primitives to their updatable counterparts. We provide various definitions and constructions of updatable randomized encodings based on varying assumptions, ranging from one-way functions to compact functional encryption.
@inproceedings{ACJ17a,
  author    = {Prabhanjan Ananth and
              Aloni Cohen and
              Abhishek Jain},
  title     = {Cryptography with Updates},
  booktitle = {Advances in Cryptology - {EUROCRYPT} 2017 - 36th Annual International
              Conference on the Theory and Applications of Cryptographic Techniques,
              Paris, France, April 30 - May 4, 2017, Proceedings, Part {II}},
  pages     = {445--472},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-56614-6_15},
  doi       = {10.1007/978-3-319-56614-6_15},
}

Patchable Indistinguishability Obfuscation: iO for Evolving Software

In this work, we introduce patchable indistinguishability obfuscation: our notion adapts the notion of indistinguishability obfuscation (iO) to a very general setting where obfuscated software evolves over time. We model this broadly by considering software patches P as arbitrary Turing Machines that take as input the description of a Turing Machine M, and output a new Turing Machine description M′=P(M). Thus, a short patch P can cause changes everywhere in the description of M and can even cause the description length of the machine to increase by an arbitrary polynomial amount. We further consider multi-program patchable indistinguishability obfuscation where a patch is applied not just to a single machine M, but to an unbounded set of machines M1,…,Mn to yield P(M1),…,P(Mn). We consider both single-program and multi-program patchable indistinguishability obfuscation in a setting where there are an unbounded number of patches that can be adaptively chosen by an adversary. We show that sub-exponentially secure iO for circuits and sub-exponentially secure re-randomizable encryption schemes (Re-randomizable encryption schemes can be instantiated under standard assumptions such as DDH, LWE.) imply single-program patchable indistinguishability obfuscation; and we show that sub-exponentially secure iO for circuits and sub-exponentially secure DDH imply multi-program patchable indistinguishability obfuscation. At the heart of our results is a new notion of splittable iO that allows us to transform any iO scheme into a patchable one. Finally, we exhibit some simple applications of patchable indistinguishability obfuscation, to demonstrate how these concepts can be applied.
@inproceedings{AJS17a,
  author    = {Prabhanjan Ananth and
              Abhishek Jain and
              Amit Sahai},
  title     = {Patchable Indistinguishability Obfuscation: i\emph{O} for Evolving
              Software},
  booktitle = {Advances in Cryptology - {EUROCRYPT} 2017 - 36th Annual International
              Conference on the Theory and Applications of Cryptographic Techniques,
              Paris, France, April 30 - May 4, 2017, Proceedings, Part {III}},
  pages     = {127--155},
  year      = {2017},
  url       = {https://doi.org/10.1007/978-3-319-56617-7_5},
  doi       = {10.1007/978-3-319-56617-7_5}
}

Outsourcing Medical Dataset Analysis: A Possible Solution

We explore the possible ways modern cryptographic methods can be applied to the field of medical data analysis. Current systems require large computational facilities owned by the data owners or excessive trust given to the researchers. We implement one possible solution in which researchers operate directly on homomorphically encrypted data and the data owner decrypts the results. We test our implementation on large datasets and show that it is sufficiently practical that it could be a helpful tool for modern researchers. We also perform a heuristic analysis of the security of our system.
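The researcher-computes/owner-decrypts workflow can be illustrated with textbook Paillier encryption, which is additively homomorphic. This is our own sketch of the pattern with deliberately toy parameters, not the paper's implementation or its parameter choices: the researcher sums ciphertexts without seeing plaintexts, and only the data owner (holding the secret key) decrypts the aggregate.

```python
import math
import random

# Textbook Paillier with toy primes -- real deployments use 2048+ bit keys.
p, q = 1789, 1861
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # secret key (with mu below)
g = n + 1                              # standard public generator choice
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int, rng: random.Random) -> int:
    """Data owner encrypts a record value m < n."""
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Data owner recovers the plaintext with the secret (lam, mu)."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add_encrypted(c1: int, c2: int) -> int:
    """Researcher's step: multiplying ciphertexts adds plaintexts,
    so aggregates are computed without ever decrypting records."""
    return (c1 * c2) % n2
```

In the system the abstract describes, the researcher's analysis runs entirely in the `add_encrypted`-style ciphertext domain; only the final aggregate is returned to the owner for decryption.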
@inproceedings{KGR17,
  author    = {Gabriel Kaptchuk and
              Matthew Green and
              Aviel D. Rubin},
  title     = {Outsourcing Medical Dataset Analysis: {A} Possible Solution},
  booktitle = {Financial Cryptography and Data Security - 21st International Conference,
              {FC} 2017},
  year      = {2017}
}

Significantly Improved Multi-bit Differentials for Reduced Round Salsa and ChaCha

ChaCha and Salsa are two software-oriented stream ciphers that have attracted serious attention in academic as well as commercial domains. The most important cryptanalysis of reduced versions of these ciphers was presented by Aumasson et al. in FSE 2008. One part of their attack was to apply input difference(s) to investigate biases after a few rounds. So far there have been certain kinds of limited exhaustive search to obtain such biases. For the first time, in this paper, we show how to theoretically choose the combinations of the output bits to obtain significantly improved biases. The main idea here is to consider the multi-bit differentials as extensions of suitable single-bit differentials with linear approximations, which is essentially a differential-linear attack. As we consider combinations of many output bits (for example, 19 for Salsa and 21 for ChaCha), exhaustive search is not possible here. By this method we obtain very high biases for linear combinations of bits in Salsa after 6 rounds and in ChaCha after 5 rounds. These are clearly two rounds of improvement for both the ciphers over the existing works. Using these biases we obtain several significantly improved cryptanalytic results for reduced round Salsa and ChaCha that could not be obtained earlier. In fact, with our results it is now possible to cryptanalyse 6-round Salsa and 5-round ChaCha in practical time.
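The paper's starting point, measuring the bias of an output-difference bit under a single-bit input difference, is easy to demonstrate empirically. The sketch below implements the standard ChaCha quarter-round and a Monte Carlo bias estimator over one quarter-round; the specific bit positions are arbitrary choices of ours for illustration, not the differentials the paper analyzes, and the paper works over full rounds rather than a lone quarter-round.

```python
import random

MASK = 0xFFFFFFFF

def rotl(x: int, r: int) -> int:
    """32-bit left rotation."""
    return ((x << r) | (x >> (32 - r))) & MASK

def quarter_round(a, b, c, d):
    """The ChaCha quarter-round (as specified in RFC 8439)."""
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

def estimate_bias(in_bit, out_word, out_bit, trials=20000, seed=0):
    """Empirical bias of one output-difference bit under a one-bit input
    difference in word a -- the single-bit differential the paper extends
    to multi-bit differential-linear distinguishers."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(trials):
        s = [rng.getrandbits(32) for _ in range(4)]
        s2 = list(s)
        s2[0] ^= 1 << in_bit                 # inject the input difference
        y, y2 = quarter_round(*s), quarter_round(*s2)
        ones += (y[out_word] ^ y2[out_word]) >> out_bit & 1
    return ones / trials - 0.5               # bias = Pr[diff bit = 1] - 1/2
```

Because the paper combines roughly 19–21 such output bits, exhausting all combinations is infeasible; its contribution is choosing them analytically rather than by this kind of sampling alone.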
@article{CM16,
  author    = {Arka Rai Choudhuri and
              Subhamoy Maitra},
  title     = {Significantly Improved Multi-bit Differentials for Reduced Round Salsa
              and ChaCha},
  journal   = {{IACR} Trans. Symmetric Cryptol.},
  volume    = {2016},
  number    = {2},
  pages     = {261--287},
  year      = {2016},
  url       = {https://doi.org/10.13154/tosc.v2016.i2.261-287},
  doi       = {10.13154/tosc.v2016.i2.261-287}
}

Indistinguishability Obfuscation from Functional Encryption for Simple Functions

We show how to construct indistinguishability obfuscation (iO) for circuits from any non-compact functional encryption (FE) scheme with sub-exponential security against unbounded collusions. We accomplish this by giving a generic transformation from any such FE scheme into a compact FE scheme. By composing this with the transformation from sub-exponentially secure compact FE to iO (Ananth and Jain [CRYPTO'15], Bitansky and Vaikuntanathan [FOCS'15]), we obtain our main result. Our result provides a new pathway to iO. We use our technique to identify a simple function family for FE that suffices for our general result. We show that the function family F is complete, where every f in F consists of three evaluations of a Weak PRF followed by finite operations. We believe that this may be useful for realizing iO from weaker assumptions in the future.
@misc{cryptoeprint:2015/730,
      author = {Prabhanjan Ananth and Abhishek Jain and Amit Sahai},
      title = {Indistinguishability Obfuscation from Functional Encryption for Simple Functions},
      howpublished = {Cryptology ePrint Archive, Paper 2015/730},
      year = {2015},
      note = {\url{https://eprint.iacr.org/2015/730}},
      url = {https://eprint.iacr.org/2015/730}
}