Privacy Preserving Adversarial Perturbations for Neural Decoders

Machine Learning Diploma Theses (cancelled)

Supervisor

Emilio Balda

Abstract

Replacing conventional channel decoders with neural networks has become a popular research topic in recent years, owing to the potentially lower complexity of neural networks compared with classical iterative decoders. Nevertheless, neural networks have been shown to be particularly unstable when maliciously designed noise is added to their inputs. Such noise is known as adversarial noise or an adversarial perturbation. ...
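As a concrete illustration of the instability the abstract refers to, the sketch below crafts a one-step adversarial perturbation with the fast gradient sign method (FGSM, Goodfellow et al.) against a toy neural decoder. Everything here is an assumption for illustration: the decoder architecture, the dimensions n and k, the loss, and the budget eps are not taken from the thesis, which may consider a different attack or decoder.

```python
import torch
import torch.nn as nn

# Toy stand-in for a neural decoder (illustrative, untrained): it maps
# length-n noisy channel observations to per-bit logits of a length-k message.
n, k = 16, 8
decoder = nn.Sequential(
    nn.Linear(n, 64), nn.ReLU(),
    nn.Linear(64, k),  # sigmoid(logits) > 0.5 gives the decoded bits
)

def fgsm_perturbation(x, target_bits, eps):
    """One-step FGSM: a perturbation of L-infinity size eps that locally
    maximizes the decoder's bit-wise cross-entropy loss at x."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.binary_cross_entropy_with_logits(
        decoder(x), target_bits)
    loss.backward()
    return eps * x.grad.sign()

# Example: a random noisy observation and its ground-truth message bits.
x = torch.randn(1, n)
bits = torch.randint(0, 2, (1, k)).float()
delta = fgsm_perturbation(x, bits, eps=0.1)
clean = (torch.sigmoid(decoder(x)) > 0.5).float()
adv = (torch.sigmoid(decoder(x + delta)) > 0.5).float()
print("bit flips caused by the perturbation:", (clean != adv).sum().item())
```

Even with this small eps, the sign of the loss gradient points in the direction that flips decoded bits most efficiently, which is why such perturbations destabilize neural decoders far more than random noise of the same magnitude.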

Research Area

Machine Learning for Communication Systems

Keywords

Adversarial examples, deep learning, privacy

Requirements
