Privacy in Neural Network Learning: Threats and Countermeasures
by Shan Chang, Chao Li
Institute of Electrical and Electronics Engineers (IEEE)
Algorithmic breakthroughs, the feasibility of collecting huge amounts of data, and increasing computational power have contributed to the remarkable achievements of neural networks (NNs). In particular, since Deep Neural Network (DNN) learning has produced astonishing results in speech and image recognition, the number of sophisticated applications built on it has exploded. However, a growing number of privacy-leakage incidents have been reported, and their severe consequences have raised serious concerns in this area. In this article, we focus on privacy issues in NN learning. First, we identify the privacy threats that arise during NN training and present privacy-preserving training schemes for both centralized and distributed settings. Second, we consider the privacy of prediction requests and discuss privacy-preserving protocols for NN prediction. We also analyze the privacy vulnerabilities of trained models: three types of attacks on the private information embedded in trained NN models are discussed, and a differential privacy-based solution is introduced.
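The abstract names a differential privacy-based defense without detailing it. As a rough illustration only (the specific scheme in the article is not given here), the sketch below shows the common DP-SGD-style pattern of clipping per-example gradients and adding Gaussian noise before aggregation; all function names and parameter values are hypothetical, chosen for the example rather than taken from the article.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                     lr=0.1, rng=np.random.default_rng(0)):
    """One illustrative differentially private SGD step (Gaussian mechanism).

    per_example_grads: array of shape (batch_size, n_params),
    one gradient vector per training example.
    """
    # Clip each example's gradient so no single record can influence
    # the update by more than clip_norm (bounds the sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum the clipped gradients and add Gaussian noise calibrated
    # to the clipping bound (noise_multiplier is illustrative).
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped.shape[1])

    # Average over the batch and return the parameter update.
    return -lr * noisy_sum / per_example_grads.shape[0]

# Toy usage: a batch of 8 examples, each with a 4-parameter gradient.
grads = np.random.default_rng(1).normal(size=(8, 4))
print(dp_gradient_step(grads))
```

The clipping step is what makes the noise scale meaningful: without a bound on each example's contribution, no finite amount of noise yields a differential privacy guarantee.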