Toward Secure Deep Learning Systems

Toward Secure Deep Learning Systems

Author: Xinyang Zhang
Release: 2021
ISBN-10: OCLC:1258263365
Rating: 4/5

Book Synopsis: Toward Secure Deep Learning Systems, by Xinyang Zhang

Toward Secure Deep Learning Systems, written by Xinyang Zhang, was released in 2021. Book excerpt: Machine learning (ML) and deep learning (DL) methods achieve state-of-the-art performance on various intelligence tasks, such as visual recognition and natural language processing. Yet the technical community has overlooked the security threats against ML and DL systems. As DL systems are increasingly deployed in online services and infrastructure, they face more malicious attacks from adversaries. It is therefore urgent to understand the space of threats and to propose solutions against them. This dissertation studies the security and privacy of DL systems. Three common security threats against DL systems are adversarial examples, data poisoning and backdoor attacks, and privacy leakage. Adversarial examples are maliciously perturbed inputs that cause DL models to misbehave. To keep a deployed system that relies on DL models safe, the community seeks methods to detect adversarial examples or to design robust DL models. In a data poisoning attack, the adversary plants poisoned inputs in a target task's training set; a DL classifier trained on this polluted dataset will misclassify the adversary's target input. Backdoor attacks are an advanced variant of data poisoning attacks: the adversary poisons a DL model, either by polluting its training data or by modifying its parameters directly, so that the poisoned model responds abnormally to inputs embedded with trigger patterns (e.g., patches or stickers in an image). Against these two types of attacks, DL developers need techniques to ensure that training sets are clean and that models used as components are unpolluted. The popularity of DL applications also raises many privacy concerns.
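As a concrete illustration of the adversarial-example threat described above, here is a minimal sketch of a one-step gradient-sign (FGSM-style) perturbation against a toy logistic classifier. The weights, input, and epsilon are hypothetical, chosen only to show a prediction flip; this is not any specific attack from the dissertation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM: move x in the sign of the loss gradient to increase the loss."""
    grad = (sigmoid(w @ x) - y) * w   # gradient of the logistic loss w.r.t. x
    return x + eps * np.sign(grad)

w = np.array([1.0, -1.0])             # toy linear classifier (hypothetical weights)
x = np.array([0.6, 0.5])              # clean input with true label y = 1
y = 1.0

print(int(w @ x > 0))                 # clean prediction: 1
x_adv = fgsm(x, y, w, eps=0.1)
print(int(w @ x_adv > 0))             # adversarial prediction: 0
```

A small, visually negligible perturbation (0.1 per feature) suffices to flip the decision, which is exactly why detection and robust training are sought.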
On the one hand, DL models encode knowledge from training sets containing sensitive information about their contributors, so it is critical to develop methods that prevent leakage of sensitive information from DL models. On the other hand, because high-performance DL models demand many training examples, multiple data owners may collectively train a model in an asynchronous and distributed manner; a proper private learning mechanism is necessary for such distributed learning to protect each party's proprietary information. In this dissertation, we present our contributions to understanding the security vulnerabilities of DL systems and to mitigating their privacy concerns. We first explore the interaction of model interpretability with adversarial examples. An interpretable deep learning system is built upon a classifier for classification and an interpreter for explaining the classifier's decisions. We show that the additional model interpretation does not enhance the security of DL systems against adversarial examples. In particular, we develop ADV^2 attacks that simultaneously cause the target classifier to misclassify the target input and induce a target interpretation map in the interpreter. Empirical studies demonstrate that our attack is effective across different DL models and datasets. We also analyze the root cause of the attack and discuss potential countermeasures. We then present two studies on data poisoning and backdoor attacks against DL systems. In the first, we challenge the practice of fine-tuning pre-trained models for downstream tasks. Since state-of-the-art DL models demand ever more computational resources to train, developers tend to build their models from third parties' pre-trained models. We propose model-reuse attacks that directly modify a clean DL model's parameters so that it misclassifies a target input after the poisoned model is fine-tuned for the target task.
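The dual objective behind an ADV^2-style attack — misclassify the input while steering the interpreter toward a target map — can be sketched as gradient descent on a joint loss. Everything below is an illustrative assumption, not the dissertation's actual setup: a toy linear classifier stands in for the DL model, and input-times-gradient saliency stands in for the interpreter:

```python
import numpy as np

# Toy "interpretable classifier" (hypothetical weights): logits = W @ x; the
# "interpretation map" for class c is input-times-gradient saliency, x * W[c].
W = np.array([[1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss(x, c_t, m_t, lam=1.0):
    """ADV^2-style objective: target-class cross-entropy + interpretation distance."""
    ce = -np.log(softmax(W @ x)[c_t])
    interp = np.sum((x * W[c_t] - m_t) ** 2)
    return ce + lam * interp

def num_grad(f, x, h=1e-5):
    """Central-difference numeric gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([1.0, 0.0, 0.0])             # clean input: predicted class 0
c_t, m_t = 1, np.array([0.0, 0.5, 0.0])   # attacker's target class and target map

for _ in range(300):                      # plain gradient descent on the joint loss
    x = x - 0.2 * num_grad(lambda v: joint_loss(v, c_t, m_t), x)

print(int(np.argmax(W @ x)))              # adversarial prediction: 1 (target class)
```

Descending the single joint loss drives both terms down at once, which mirrors the abstract's claim that interpretation does not add a second line of defense: one optimization fools classifier and interpreter together.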
The degradation of the model's performance on the pre-trained task during this attack is kept negligible. We validate the effectiveness and ease of model-reuse attacks in three different case studies. As in the ADV^2 work, we explore the causes of this attack and discuss defenses against it. In the second work, we extend backdoor attacks to the natural language processing domain. Our Trojan^{LM} attacks poison pre-trained Transformer language models (LMs) so that, after they are fine-tuned for an adversary's target task, the final models misbehave when keywords defined by the adversary appear in the input sequence. Trojan^{LM} is evaluated under both supervised and unsupervised tasks. We also supply experiments on two approaches for defending against Trojan^{LM} attacks. Finally, we move to private ML and DL. We develop $\propto$MDL, a new multi-party DL paradigm built upon three primitives: asynchronous optimization, lightweight homomorphic encryption, and threshold secret sharing. Through extensive empirical evaluation using benchmark datasets and deep learning architectures, we demonstrate the efficacy of $\propto$MDL in supporting secure and private distributed DL among multiple parties. At the end of this dissertation, we highlight three future directions at the intersection of computer security and DL: defending against adversarial examples in physical systems, discovering vulnerabilities in reinforcement learning, and applying machine learning to software security.
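Of the primitives behind $\propto$MDL, secret sharing is the easiest to illustrate. The sketch below shows plain n-out-of-n additive sharing of gradient vectors — a deliberate simplification of the threshold scheme and homomorphic encryption the dissertation actually combines; the party count and gradient values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def share(vec, n):
    """Split vec into n additive shares that sum back to vec; any n-1 shares alone look random."""
    shares = [rng.normal(size=vec.shape) for _ in range(n - 1)]
    shares.append(vec - sum(shares))
    return shares

# Three parties, each holding a private "gradient" vector (hypothetical values).
grads = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([-0.5, 0.0])]
n = len(grads)

# Each party sends one share to every peer; peer i sums the shares it received.
all_shares = [share(g, n) for g in grads]
partial_sums = [sum(all_shares[p][i] for p in range(n)) for i in range(n)]

aggregate = sum(partial_sums)   # equals the true gradient sum, [1., 1.]
print(aggregate)
```

Only the aggregate is ever reconstructed, so each party's proprietary gradient stays hidden — the property the abstract asks of a private multi-party learning mechanism.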


Toward Secure Deep Learning Systems Related Books

Federated Learning Systems
Language: en
Pages: 207
Authors: Muhammad Habib ur Rehman
Categories: Technology & Engineering
Type: BOOK - Published: 2021-06-11 - Publisher: Springer Nature

This book covers the research area from multiple viewpoints including bibliometric analysis, reviews, empirical analysis, platforms, and future applications. …
AI, Machine Learning and Deep Learning
Language: en
Pages: 347
Authors: Fei Hu
Categories: Computers
Type: BOOK - Published: 2023-06-05 - Publisher: CRC Press

Today, Artificial Intelligence (AI) and Machine Learning/Deep Learning (ML/DL) have become the hottest areas in information technology. …
Machine Learning and Security
Language: en
Pages: 385
Authors: Clarence Chio
Categories: Computers
Type: BOOK - Published: 2018-01-26 - Publisher: "O'Reilly Media, Inc."

Can machine learning techniques solve our computer security problems and finally put an end to the cat-and-mouse game between attackers and defenders? …
Crypto and AI
Language: en
Pages: 229
Authors: Behrouz Zolfaghari
Categories: Technology & Engineering
Type: BOOK - Published: 2023-11-14 - Publisher: Springer Nature

This book studies the intersection between cryptography and AI, highlighting the significant cross-impact and potential between the two technologies. …