Adversarial Examples in Deep Neural Networks: An Overview

Authors

E. R. Balda, A. Behboodi, R. Mathar

Abstract

Deep learning architectures are vulnerable to adversarial perturbations: small changes that, when added to the input, drastically alter the output of deep networks. The resulting instances are called adversarial examples, and they have been observed across learning tasks ranging from supervised learning to unsupervised and reinforcement learning. In this chapter, we review some of the most important highlights in the theory and practice of adversarial examples. The focus is on designing adversarial attacks, theoretical investigations into the nature of adversarial examples, and defenses against adversarial attacks. A common thread in the design of adversarial attacks is the perturbation analysis of learning algorithms; many existing algorithms rely implicitly on perturbation analysis to generate adversarial examples. A summary of the most powerful attacks is presented in this light. We overview various theories on the existence of adversarial examples, as well as theories relating the generalization error to adversarial robustness. Finally, various defenses against adversarial examples are discussed.
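To illustrate the perturbation-analysis viewpoint mentioned in the abstract, the following is a minimal sketch of a gradient-sign attack in the style of the Fast Gradient Sign Method, applied to a toy linear softmax classifier. The model, function names, and epsilon value are illustrative assumptions, not taken from the chapter; the chapter surveys a much broader family of attacks.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(x, y, W, b):
    # negative log-likelihood of the true class y
    return -np.log(softmax(W @ x + b)[y])

def fgsm(x, y, W, b, eps):
    """Gradient-sign attack on a linear softmax classifier (toy example).

    Moves x by eps in the sign direction of the loss gradient,
    increasing the loss under the constraint ||x_adv - x||_inf <= eps.
    """
    p = softmax(W @ x + b)        # predicted class probabilities
    grad_logits = p.copy()
    grad_logits[y] -= 1.0         # d(cross-entropy) / d(logits)
    grad_x = W.T @ grad_logits    # chain rule back to the input
    return x + eps * np.sign(grad_x)

# Toy usage with random weights and input (illustrative only)
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))
b = np.zeros(3)
x = rng.standard_normal(5)
y = 0
x_adv = fgsm(x, y, W, b, eps=0.1)
```

For this convex toy model the single gradient-sign step is guaranteed not to decrease the loss; for deep networks the same first-order step is only a heuristic, which is one motivation for the iterative attacks surveyed in the chapter.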

Keywords

Adversarial examples · Deep learning · Classification · Regression · Perturbation analysis · Statistical learning · Adversarial training · Adversarial defenses

DOI: 10.1007/978-3-030-31760-7_2

BibTeX Reference Entry

@inbook{BaBeMa20,
	author = {Emilio Rafael Balda and Arash Behboodi and Rudolf Mathar},
	title = {Adversarial Examples in Deep Neural Networks: An Overview},
	pages = {31--65},
	volume = {865},
	series = {Studies in Computational Intelligence},
	isbn = {978-3-030-31759-1},
	month = jan,
	year = {2020},
	hsb = {RWTH-2019-11739},
}


This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by the authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.