Rafael Pinot

Postdoctoral researcher in Computer Science

École Polytechnique Fédérale de Lausanne


I am currently a postdoctoral researcher at École Polytechnique Fédérale de Lausanne, where I work with Prof. Rachid Guerraoui and Prof. Anne-Marie Kermarrec within the Ecocloud Research Center. From October 2017 to December 2020, I completed my PhD in Computer Science at Université Paris Dauphine-PSL and the CEA LIST institute, Université Paris-Saclay, where I was advised by Prof. Jamal Atif, Dr. Florian Yger, and Dr. Cédric Gouy-Pailler. Prior to that, I obtained an MSc in Theoretical and Applied Statistics from Sorbonne Université (Paris 6) in 2017.

My main line of research is in statistical machine learning with a focus on the security and privacy of machine learning applications. I also like to work on statistical analysis of complex data structures such as graphs.


  • Robustness in Machine Learning
  • Privacy Preserving Data Analysis
  • Distributed Machine Learning
  • Theory of Deep Learning
  • Graph Theory


  • PhD in Computer Science, 2020

    PSL University (Paris-Dauphine)

  • MSc in Applied Mathematics, 2017

    Sorbonne Université (Paris 6)

  • BSc in Applied Mathematics, 2016

    Université Paris 1 Panthéon-Sorbonne


(2020). On the impact of randomization on robustness in machine learning. PhD Thesis in Computer Science, PSL University (Paris-Dauphine).

PhD Thesis Slides

(2017). Minimum spanning tree release under differential privacy constraints. Master Thesis in Statistics, Sorbonne Université (Paris 6).

Master Thesis
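
As a rough illustration of the approach studied in the master thesis, a common way to release a minimum spanning tree under differential privacy is to perturb the edge weights with the Laplace mechanism and then run an ordinary MST algorithm on the noisy weights. The sketch below (simplifying assumptions: unit weight sensitivity, plain Python lists; the helper names are mine, not the thesis') samples Laplace noise as the difference of two exponentials and uses Kruskal's algorithm:

```python
import random

def private_mst_weights(edges, epsilon):
    """Laplace mechanism on edge weights: noise of scale 1/epsilon,
    sampled as the difference of two Exp(epsilon) variables.
    `edges` is a list of (u, v, weight) tuples with sensitivity 1."""
    return [(u, v, w + random.expovariate(epsilon) - random.expovariate(epsilon))
            for (u, v, w) in edges]

def kruskal_mst(n, edges):
    """Kruskal's algorithm with path-halving union-find on n nodes."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
    return mst
```

The released tree is the MST of the noisy graph; privacy follows from the Laplace mechanism, while utility depends on how much the noise reorders edge weights.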

Some Recent Publications

Towards Consistency in Adversarial Classification

In this paper, we study the problem of consistency in the context of adversarial examples. Specifically, we tackle the following question: can surrogate losses still be used as a proxy for minimizing the 0/1 loss in the presence of an adversary that alters the inputs at test time? Unlike the standard classification task, this question cannot be reduced to a point-wise minimization problem, and calibration need not be sufficient to ensure consistency. We expose some pathological behaviors specific to the adversarial problem and show that no convex surrogate loss can be consistent or calibrated in this context. It is therefore necessary to design another class of surrogate functions that can be used to solve the adversarial consistency issue. As a first step towards designing such a class, we identify necessary and sufficient conditions for a surrogate loss to be calibrated in both the adversarial and standard settings. Finally, we give some directions for building a class of losses that could be consistent in the adversarial framework.
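
To make the adversarial 0/1 loss concrete, consider a linear classifier sign(⟨w, x⟩) facing an l2-bounded adversary of budget eps. The adversary can flip the prediction exactly when the signed margin y⟨w, x⟩/‖w‖ is at most eps, so the worst-case loss has a closed form (an illustrative sketch with plain Python lists, not code from the paper):

```python
import math

def adversarial_01_loss(w, x, y, eps):
    """Worst-case 0/1 loss of sign(<w, x>) against an l2 adversary of
    budget eps: the point is misclassified under attack iff its signed
    margin y * <w, x> / ||w|| is at most eps."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    norm = math.sqrt(sum(wi * wi for wi in w))
    margin = y * dot / norm
    return 1 if margin <= eps else 0
```

Note that this loss depends on the margin, not just the sign of the prediction, which is one reason point-wise arguments from standard calibration theory break down.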

On the robustness of randomized classifiers to adversarial examples

This paper investigates the theory of robustness against adversarial attacks. We focus on randomized classifiers (i.e., classifiers that output random variables) and provide a thorough analysis of their behavior through the lens of statistical learning theory and information theory. To this end, we introduce a new notion of robustness for randomized classifiers, enforcing local Lipschitzness using probability metrics. Equipped with this definition, we make two new contributions. The first consists in devising upper bounds on both the generalization gap and the adversarial gap (i.e., the gap between the risk and the worst-case risk under attack) of randomized classifiers. The second is a simple yet efficient noise injection method for designing robust randomized classifiers. We show that our results apply to a wide range of machine learning models under mild hypotheses. We further corroborate our findings with experiments using deep neural networks on standard image datasets, namely CIFAR-10 and CIFAR-100. On these tasks, we manage to design robust models that simultaneously achieve state-of-the-art accuracy (over 0.82 clean accuracy on CIFAR-10) and enjoy guaranteed robust accuracy bounds (0.45 against l2 adversaries with magnitude 0.5 on CIFAR-10).
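
The noise-injection idea can be sketched in a few lines: a randomized classifier is obtained from any base classifier by adding Gaussian noise to the input and taking a majority vote over noisy copies. This is only a minimal illustration of the principle, not the paper's certified procedure:

```python
import random
from collections import Counter

def randomized_predict(base_classifier, x, sigma, n_samples=100):
    """Randomized classifier via Gaussian noise injection: query the base
    classifier on n_samples noisy copies of the input x (a plain Python
    list) and return the majority vote."""
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + random.gauss(0.0, sigma) for xi in x]
        votes[base_classifier(noisy)] += 1
    return votes.most_common(1)[0][0]
```

Intuitively, the injected noise makes the classifier's output distribution locally Lipschitz in the input (in a probability metric), which is the property the robustness bounds build on.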

Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums

Byzantine resilience has emerged as a prominent topic within the distributed machine learning community. Essentially, the goal is to enhance distributed optimization algorithms, such as distributed SGD, so that convergence is guaranteed despite the presence of some misbehaving (a.k.a. Byzantine) workers. Although a myriad of techniques addressing the problem have been proposed, the field arguably rests on fragile foundations. These techniques are hard to prove correct and rely on assumptions that are (a) quite unrealistic, i.e., often violated in practice, and (b) heterogeneous, i.e., making it difficult to compare approaches. We present RESAM (RESilient Averaging of Momentums), a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions. Our framework is mainly composed of two operators: resilient averaging at the server and distributed momentum at the workers. We prove a general theorem stating the convergence of distributed SGD under RESAM. Interestingly, the convergence proofs of many existing techniques then follow as direct corollaries of our theorem, without resorting to stringent assumptions. We also present an empirical evaluation of the practical relevance of RESAM.
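
The two operators can be sketched as follows: each worker maintains a momentum (an exponential moving average of its stochastic gradients), and the server combines the workers' momentums with a resilient aggregator. Here the aggregator is instantiated with the coordinate-wise median, one of several choices that fit the framework; this is a sketch of the structure, not the paper's full algorithm:

```python
import statistics

def worker_momentum(prev_momentum, gradient, beta=0.9):
    """Distributed momentum at a worker: exponential moving average of
    its stochastic gradients (vectors as plain Python lists)."""
    return [beta * m + (1 - beta) * g for m, g in zip(prev_momentum, gradient)]

def resilient_average(momentums):
    """Resilient averaging at the server, here the coordinate-wise
    median of the workers' momentums."""
    return [statistics.median(coord) for coord in zip(*momentums)]
```

With honest workers in the majority, the median discards outlying (possibly Byzantine) momentums coordinate by coordinate, while the momentum step reduces the variance the aggregator has to cope with.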

The Universal Gossip Fighter

The notion of adversary is a staple of distributed computing. An adversary typically models hostile assumptions about the underlying distributed environment, e.g., a network that can drop messages, an operating system that can delay processes, or an attacker that can hack machines. So far, the goal of distributed computing researchers has mainly been to develop distributed algorithms that can face a given adversary, the abstraction characterizing worst-case scenarios. This paper initiates the study of the somewhat opposite approach: given a distributed algorithm, the adversary is the abstraction we seek to implement. More specifically, we consider the problem of controlling the spread of messages in a large-scale system, conveying the practical motivation of limiting the dissemination of fake news or viruses. Essentially, we assume a general class of gossip protocols, called all-to-all gossip protocols, and devise a practical method to hinder the dissemination. We present the Universal Gossip Fighter (UGF). Just like classical adversaries in distributed computing, UGF can observe the status of a dissemination and decide to stop some processes or delay some messages. The originality of UGF lies in the fact that it is universal, i.e., it applies to any all-to-all gossip protocol. We show that any gossip protocol attacked by UGF ends up exhibiting a quadratic message complexity (in the total number of processes) if it achieves a sublinear dissemination time. We also show that if a gossip protocol aims to achieve a message complexity α times smaller than quadratic, then its time complexity rises exponentially in α. We convey the practical relevance of our theoretical findings by implementing UGF and conducting a set of empirical experiments that confirm some of our results.
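
For readers unfamiliar with gossip dissemination, the trade-off between dissemination time and message complexity can be seen in a tiny simulation of an unattacked push-based gossip: every informed process forwards the rumor to a few random peers each round, so the rumor reaches almost everyone in logarithmically many rounds at the cost of many redundant messages. This is only background illustration; UGF itself is not implemented here:

```python
import random

def push_gossip(n, fanout, rounds, seed=0):
    """Simulate push-based gossip among n processes: each round, every
    informed process sends the rumor to `fanout` uniformly random peers.
    Returns (number of informed processes, total messages sent)."""
    rng = random.Random(seed)
    informed = {0}       # process 0 starts with the rumor
    messages = 0
    for _ in range(rounds):
        newly = set()
        for _p in informed:
            for _ in range(fanout):
                newly.add(rng.randrange(n))
                messages += 1
        informed |= newly
    return len(informed), messages
```

The paper's lower bounds concern what happens when an adversary like UGF interferes with such protocols: keeping dissemination fast then forces the message count up to quadratic.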

Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML). Specifically, we study whether a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and (α,f)-Byzantine resilience. To the best of our knowledge, this is the first work to tackle this problem from a theoretical point of view. A key finding of our analysis is that the classical approaches to these two (seemingly) orthogonal issues are incompatible. More precisely, we show that a direct composition of these techniques makes the guarantees of the resulting SGD algorithm depend unfavourably upon the number of parameters in the ML model, making the training of large models practically infeasible. We validate our theoretical results through numerical experiments on publicly available datasets, showing that it is impractical to ensure DP and Byzantine resilience simultaneously.
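
The "direct composition" in question can be sketched as each worker clipping its gradient and adding Gaussian noise (the standard Gaussian mechanism for DP), with the server then applying a robust aggregator such as the coordinate-wise median. This sketch uses standard building blocks under simplifying assumptions, not the paper's exact construction:

```python
import math
import random
import statistics

def dp_gradient(gradient, clip, sigma, rng):
    """Per-worker DP step: clip the gradient to l2 norm `clip`, then add
    Gaussian noise of standard deviation sigma * clip to each coordinate
    (the standard Gaussian mechanism)."""
    norm = math.sqrt(sum(g * g for g in gradient))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    return [g * scale + rng.gauss(0.0, sigma * clip) for g in gradient]

def robust_aggregate(grads):
    """Byzantine-resilient aggregation via the coordinate-wise median."""
    return [statistics.median(coord) for coord in zip(*grads)]
```

Note that the injected noise has expected l2 norm on the order of sigma * clip * sqrt(d) for a d-dimensional model, which hints at the unfavourable dimension dependence the paper establishes for this composition.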


Université Paris Dauphine PSL

  • Lecturer in Trustworthy Machine Learning - MBA C.S (2019-2020)

  • Lecturer in Mathematics for Machine Learning - MSc C.S (2019-2020)

  • Teaching assistant in Introduction to Machine Learning - MSc C.S (2017-2020)

  • Teaching assistant in Introduction to Machine Learning - MBA B.I (2017-2020)

Université Paris 1 Panthéon-Sorbonne

  • Teaching assistant in Calculus - BSc Mathematics (2017-2018)


Contact at EPFL

  • E-Mail address: {firstname}.{lastname}@epfl.ch
  • Postal address: Distributed Computing Lab, INR building, room 315

Contact at Dauphine University (may be outdated)

  • E-Mail address: {firstname}.{lastname}@dauphine.fr