A unified view on differential privacy and robustness to adversarial examples

Abstract

This short note highlights some links between two lines of research within the emerging topic of trustworthy machine learning: differential privacy and robustness to adversarial examples. By abstracting the definitions of both notions, we show that they build upon the same theoretical ground, and hence results obtained so far in one domain can be transferred to the other. More precisely, our analysis is based on two key elements: probabilistic mappings (also called randomized algorithms in the differential privacy community) and the Rényi divergence, which subsumes a large family of divergences. We first generalize the definition of robustness against adversarial examples to encompass probabilistic mappings. Then we observe that Rényi differential privacy (a recently proposed generalization of differential privacy) and our definition of robustness share several similarities. We finally discuss how both communities can benefit from this connection by transferring technical tools from one research field to the other.
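For context, the abstract's two building blocks can be stated with the standard definitions (this is a reference sketch based on the usual formulations, not an excerpt from the paper). The Rényi divergence of order $\alpha > 1$ between two probability distributions $P$ and $Q$ is

$$
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],
$$

and a probabilistic mapping (randomized mechanism) $\mathcal{M}$ satisfies $(\alpha, \varepsilon)$-Rényi differential privacy if, for all adjacent inputs $d$ and $d'$,

$$
D_\alpha\big(\mathcal{M}(d) \,\|\, \mathcal{M}(d')\big) \;\le\; \varepsilon.
$$

Replacing "adjacent inputs" with inputs that are close in the feature space is what suggests the analogy with robustness to adversarial examples developed in the paper.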

Publication
In European Conference on Machine Learning
Rafael Pinot
Junior Professor in Machine Learning