FLandEncryption

Papers and Code

Asynchronous Federated Optimization https://arxiv.org/pdf/1903.03934

Towards Federated Learning at Scale: System Design https://arxiv.org/pdf/1902.01046

Robust and Communication-Efficient Federated Learning from Non-IID Data https://arxiv.org/pdf/1903.02891

One-Shot Federated Learning https://arxiv.org/pdf/1902.11175

High Dimensional Restrictive Federated Model Selection with multi-objective Bayesian Optimization over shifted distributions https://arxiv.org/pdf/1902.08999

Federated Machine Learning: Concept and Applications https://arxiv.org/pdf/1902.04885

Agnostic Federated Learning https://arxiv.org/pdf/1902.00146

Peer-to-peer Federated Learning on Graphs https://arxiv.org/pdf/1901.11173

Federated Collaborative Filtering for Privacy-Preserving Personalized Recommendation System https://arxiv.org/pdf/1901.09888

SecureBoost: A Lossless Federated Learning Framework https://arxiv.org/pdf/1901.08755

Federated Reinforcement Learning https://arxiv.org/pdf/1901.08277

Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems https://arxiv.org/pdf/1901.06455

Federated Learning via Over-the-Air Computation https://arxiv.org/pdf/1812.11750

Broadband Analog Aggregation for Low-Latency Federated Edge Learning (Extended Version) https://arxiv.org/pdf/1812.11494

Multi-objective Evolutionary Federated Learning https://arxiv.org/pdf/1812.07478

Federated Optimization for Heterogeneous Networks https://arxiv.org/pdf/1812.06127

Efficient Training Management for Mobile Crowd-Machine Learning: A Deep Reinforcement Learning Approach https://arxiv.org/pdf/1812.03633

No Peek: A Survey of private distributed deep learning https://arxiv.org/pdf/1812.03288

A Hybrid Approach to Privacy-Preserving Federated Learning https://arxiv.org/pdf/1812.03224

Applied Federated Learning: Improving Google Keyboard Query Suggestions https://arxiv.org/pdf/1812.02903

Differentially Private Data Generative Models https://arxiv.org/pdf/1812.02274

Protection Against Reconstruction and Its Applications in Private Federated Learning https://arxiv.org/pdf/1812.00984

Split learning for health: Distributed deep learning without sharing raw patient data https://arxiv.org/pdf/1812.00564

Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning https://arxiv.org/pdf/1812.00535

LoAdaBoost: Loss-Based AdaBoost Federated Machine Learning on Medical Data https://arxiv.org/pdf/1811.12629

Analyzing Federated Learning through an Adversarial Lens https://arxiv.org/pdf/1811.12470

Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data https://arxiv.org/pdf/1811.11479

Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning https://arxiv.org/pdf/1811.09904

Dancing in the Dark: Private Multi-Party Machine Learning in an Untrusted Setting https://arxiv.org/pdf/1811.09712

Weekly Dig in Privacy-Preserving Machine Learning

18 May 2018

Small but good: we only dug up one paper this week but it comes with very interesting claims.

Papers

  • SecureNN: Efficient and Private Neural Network Training
    Follows recent approaches but reports significant performance improvements via specialized protocols for the 3- and 4-server settings: the claimed cost of encrypted training is in some cases only 13-33 times that of training on cleartext data. A big factor in this is the avoidance of bit-decomposition and garbled circuits when computing comparisons and ReLUs.
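
The low overhead rests on keeping linear operations entirely local on additive secret shares, with interaction reserved for the nonlinear steps. As a minimal sketch of that substrate (not the SecureNN protocol itself; party count and ring size are illustrative choices), here is 3-party additive sharing over a 64-bit ring:

```python
import secrets

MOD = 2 ** 64  # shares live in the ring Z_{2^64}, a common MPC choice

def share(x, parties=3):
    """Split x into `parties` additive shares that sum to x mod 2^64."""
    shares = [secrets.randbelow(MOD) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recombine the shares; any strict subset of them reveals nothing about x."""
    return sum(shares) % MOD

# Addition is entirely local: party i just adds its shares of x and y.
x_shares, y_shares = share(20), share(22)
z_shares = [(a + b) % MOD for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
```

Comparisons and ReLUs on such shares are where protocols usually resort to garbled circuits; SecureNN's claimed gains come from specialized interactive protocols for exactly those steps.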

11 May 2018

If anyone had any doubt that private machine learning is a growing area then this week might take care of that.

Papers

Homomorphic encryption:

  • Unsupervised Machine Learning on Encrypted Data
    Implements K-means privately using fully homomorphic encryption and a bit-wise rational encoding, with suggestions for tweaking K-means to make it more practical for this setting. The TFHE library (see below) is used for experiments.
  • TFHE: Fast Fully Homomorphic Encryption over the Torus
    Proclaimed as the fastest FHE library currently available, this paper is the extended version of previous descriptions of the underlying scheme and optimizations.
  • Homomorphic Secret Sharing: Optimizations and Applications
    Further work on a hybrid scheme between homomorphic encryption and secret sharing: operations can be performed locally by each share holder as in the former, yet a final combination is needed in the end to recover the result as in the latter: “this enables a level of compactness and efficiency of reconstruction that is impossible to achieve via standard FHE”.

27 April 2018

Papers

  • Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning
    “in this paper we propose to add an additional dustbin class containing natural out-distribution samples” “We show that such an augmented CNN has a lower error rate in the presence of adversarial examples because it either correctly classifies adversarial samples or rejects them to a dustbin class.”
  • Weak labeling for crowd learning
    “weak labeling for crowd learning is proposed, where the annotators may provide more than a single label per instance to try not to miss the real label”
  • Decentralized learning with budgeted network load using Gaussian copulas and classifier ensembles
    “In this article, we place ourselves in a context where the amount of transferred data must be anticipated but a limited portion of the local training sets can be shared. We also suppose a minimalist topology where each node can only send information unidirectionally to a single central node which will aggregate models trained by the nodes” “Using shared data on the central node, we then train a probabilistic model to aggregate the base classifiers in a second stage.”
  • Securing Distributed Machine Learning in High Dimensions
    Some results towards the issue of input pollution in federated learning, where a fraction of gradient providers may give arbitrarily malicious inputs to an aggregation protocol. “The core of our method is a robust gradient aggregator based on the iterative filtering algorithm for robust mean estimation”.
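As a toy illustration of robust aggregation in that last setting (the paper uses a stronger iterative-filtering estimator suited to high dimensions; coordinate-wise median is just the simplest baseline with a similar goal):

```python
def coordinate_median(gradients):
    """Aggregate worker gradients by the median of every coordinate.

    Unlike the plain mean, a single arbitrarily malicious worker can
    only shift each coordinate of the result towards a neighbouring
    honest value, not to an arbitrary point.
    """
    def median(values):
        s = sorted(values)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [median(col) for col in zip(*gradients)]

honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9]]
poisoned = honest + [[1e9, -1e9]]  # one arbitrarily malicious update
aggregate = coordinate_median(poisoned)  # stays close to the honest mean
```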

20 April 2018

News

  • Sharemind, one of the biggest and earliest players pushing MPC to industry, has launched a new privacy service based on secure computation using secure enclaves with the promise that it can handle big data. Via @positium.
  • Interesting interview with Lea Kissner, the head of Google’s privacy team NightWatch. Few details are given but “She recently tried to obscure some data using cryptography, so that none of it would be visible to Google upon upload … but it turned out that [it] would require more spare computing power than Google has” sounds like techniques that could be related to MPC or HE. Via @rosa.
  • Google had two AI presentations at this year’s RSA conference, one on fraud detection and one on adversarial techniques. Via @goodfellow_ian.

2 March 2018

Papers

  • Scalable Private Learning with PATE
    Follow-up work to the celebrated Student-Teacher way of ensuring privacy of training data via differential privacy, now with better privacy bounds and hence less added noise. This is partially achieved by switching to Gaussian noise and more advanced (trusted) aggregation mechanisms.
  • Privacy-Preserving Logistic Regression Training
    Fitting a logistic model from homomorphically encrypted data using the Newton-Raphson iterative method, but with a fixed and approximated Hessian matrix. Performance is evaluated on the iDASH cancer detection scenario.
  • Privacy-Preserving Boosting with Random Linear Classifiers for Learning from User-Generated Data
    Presents the SecureBoost framework for mixing boosting algorithms with secure computation. The former uses randomly generated linear classifiers at the base and the latter comes in three variants: RLWE+GC, Paillier+GC, and SecretSharing+GC. Performance experiments on both the model itself and on the secure versions are provided.
  • Machine learning and genomics: precision medicine vs. patient privacy
    Non-technical paper illustrating that secure computation techniques are finding their way into otherwise unrelated research areas, and hitting a home run with “data access restrictions are a burden for researchers, particularly junior researchers or small labs that do not have the clout to set up collaborations with major data curators”.
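
The PATE aggregation step mentioned in the first bullet can be sketched as follows; this is an illustrative reimplementation with made-up names, not the authors' code. Each teacher votes for a label, Gaussian noise is added to the per-class tallies, and only the noisy argmax is released:

```python
import random

def noisy_aggregate(teacher_votes, num_classes, sigma, rng=random):
    """PATE-style label release: tally teacher votes per class, add
    Gaussian noise to each tally, output only the noisy winner.

    Larger sigma gives stronger privacy but a higher chance of
    releasing the wrong label when teachers disagree.
    """
    counts = [0] * num_classes
    for vote in teacher_votes:
        counts[vote] += 1
    noisy = [c + rng.gauss(0, sigma) for c in counts]
    return noisy.index(max(noisy))

# 250 teachers, 200 of which agree on class 2: the noisy winner is
# overwhelmingly likely to be class 2 at this noise scale.
votes = [2] * 200 + [0] * 30 + [1] * 20
label = noisy_aggregate(votes, num_classes=3, sigma=5.0, rng=random.Random(0))
```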
