update readme

This commit is contained in:
ZhidanLiu
2020-09-22 10:28:06 +08:00
parent 5e02eeba8b
commit 8fdbaffa12
12 changed files with 68 additions and 9 deletions


@@ -12,17 +12,50 @@
## What is MindArmour
A toolbox for MindSpore users to enhance model security and trustworthiness and to protect privacy data.
MindArmour focuses on the security and privacy of artificial intelligence. It can be used as a toolbox for MindSpore users to enhance model security and trustworthiness and to protect privacy data.
MindArmour contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.
The MindArmour model security module is designed for adversarial examples and includes four submodules: adversarial example generation, adversarial example detection, model defense, and evaluation. The architecture is shown as follows:
### Adversarial Robustness Module
![mindarmour_architecture](docs/mindarmour_architecture.png)
The Adversarial Robustness Module evaluates the robustness of a model against adversarial examples,
and provides model enhancement methods to strengthen the model's resistance to adversarial attacks and improve its robustness.
This module includes four submodules: Adversarial Examples Generation, Adversarial Examples Detection, Model Defense, and Evaluation.
The MindArmour differential privacy module (Differential-Privacy) implements differential privacy optimizers. Currently, SGD, Momentum, and Adam are supported; they are differential privacy optimizers based on the Gaussian mechanism.
This mechanism supports both non-adaptive and adaptive policies. Rényi differential privacy (RDP) and zero-concentrated differential privacy (ZCDP) are provided to monitor differential privacy budgets.
The architecture is shown as follows:
![mindarmour_architecture](docs/adversarial_robustness_en.png)
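As a minimal illustration of the Adversarial Examples Generation submodule, the sketch below implements the one-step Fast Gradient Sign Method (FGSM) in plain NumPy. The function name and the `eps` default are illustrative assumptions, not MindArmour's API.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1, clip_min=0.0, clip_max=1.0):
    """One-step FGSM: move each input element by +/- eps in the
    direction that increases the loss, then clip to the valid range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, clip_min, clip_max)

# Toy usage with a hand-made gradient.
x = np.array([0.5, 0.2, 0.9])
grad = np.array([0.3, -0.7, 0.0])
x_adv = fgsm_perturb(x, grad, eps=0.1)
```

In practice `grad` comes from backpropagating the target model's loss with respect to the input.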
### Fuzz Testing Module
The Fuzz Testing Module provides security testing for AI models. Based on the characteristics of neural networks, we introduce neuron coverage gain to guide fuzz testing.
Fuzzing is guided to generate samples that increase the neuron coverage rate, so that inputs activate more neurons and neuron values span a wider distribution, fully testing the neural network and exploring different types of model outputs and erroneous behaviors.
The architecture is shown as follows:
![fuzzer_architecture](docs/fuzzer_architecture_en.png)
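To make the coverage-guided loop concrete, here is a sketch of a simple neuron coverage metric and the coverage gain of a candidate input. MindArmour's fuzzer uses its own coverage criteria, so treat these names and the 0.5 activation threshold as assumptions for illustration.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons activated (above `threshold`) by at
    least one sample; `activations` has shape (samples, neurons)."""
    covered = (activations > threshold).any(axis=0)
    return float(covered.mean())

def coverage_gain(seen_covered, new_activations, threshold=0.5):
    """Number of previously uncovered neurons a candidate input
    newly activates -- the fuzzer keeps inputs with gain > 0."""
    newly = (new_activations > threshold) & ~seen_covered
    return int(newly.sum())
```

Inputs with positive gain are added to the seed corpus, steering generation toward higher coverage.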
### Privacy Protection and Evaluation Module
The Privacy Protection and Evaluation Module includes two submodules: the Differential Privacy Training Module and the Privacy Leakage Evaluation Module.
#### Differential Privacy Training Module
The Differential Privacy Training Module implements differential privacy optimizers. Currently, SGD, Momentum, and Adam are supported; they are differential privacy optimizers based on the Gaussian mechanism.
This mechanism supports both non-adaptive and adaptive policies. Rényi differential privacy (RDP) and zero-concentrated differential privacy (ZCDP) are provided to monitor differential privacy budgets.
The architecture is shown as follows:
![dp_architecture](docs/differential_privacy_architecture_en.png)
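The Gaussian-mechanism step described above can be sketched as follows: clip each per-sample gradient, average, then add calibrated Gaussian noise before the update. Parameter names such as `clip_norm` and `noise_multiplier` are illustrative assumptions, not MindArmour's actual optimizer API.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update using the Gaussian mechanism."""
    rng = rng or np.random.default_rng(0)
    # Clip each per-sample gradient to L2 norm <= clip_norm.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    mean_grad = np.mean(clipped, axis=0)
    # After averaging, noise std is noise_multiplier * clip_norm / batch size.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_sample_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The cumulative privacy budget spent by repeated applications of this step is what the RDP and ZCDP accountants monitor.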
#### Privacy Leakage Evaluation Module
The Privacy Leakage Evaluation Module assesses the risk of a model revealing user privacy. The privacy security of a deep learning model's data is evaluated by using the membership inference method to infer whether a sample belongs to the training dataset.
The architecture is shown as follows:
![privacy_leakage](docs/privacy_leakage_en.png)
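As a simplified illustration of membership inference, the baseline below flags a sample as a training member when the model is unusually confident on it. MindArmour trains dedicated attack models, so this threshold rule and its names are assumptions for illustration only.

```python
import numpy as np

def is_member(confidences, threshold=0.9):
    """Predict membership: overfitted models tend to be more
    confident on training samples than on unseen ones."""
    return np.asarray(confidences) > threshold

def membership_advantage(member_conf, nonmember_conf, threshold=0.9):
    """Attack advantage = true-positive rate - false-positive rate;
    values near 0 suggest low privacy leakage risk."""
    tpr = is_member(member_conf, threshold).mean()
    fpr = is_member(nonmember_conf, threshold).mean()
    return float(tpr - fpr)
```

A large advantage means the attack separates members from non-members well, i.e. the model leaks membership information.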
## Setting up MindArmour
### Dependencies


@@ -12,16 +12,42 @@
## Introduction
MindArmour can be used to enhance model security and trustworthiness and to protect users' data privacy.
MindArmour focuses on AI security and privacy issues. It is dedicated to enhancing model security and trustworthiness and to protecting users' data privacy, and mainly contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.
Model security mainly targets adversarial examples and includes four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation. The adversarial example architecture is shown below:
### Adversarial Robustness Module
The Adversarial Robustness Module evaluates a model's robustness against adversarial examples and provides model enhancement methods to strengthen the model's resistance to adversarial attacks and improve its robustness. The module includes four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.
![mindarmour_architecture](docs/mindarmour_architecture_cn.png)
The architecture of the Adversarial Robustness Module is shown below:
Privacy protection supports differential privacy, including adaptive and non-adaptive differentially private SGD, Momentum, and Adam optimizers. The noise mechanisms support Gaussian and Laplacian noise, and differential privacy budget monitoring includes ZCDP and RDP. The differential privacy architecture is shown below:
![mindarmour_architecture](docs/adversarial_robustness_cn.png)
### Fuzz Testing Module
The Fuzz Testing Module provides security testing for AI models. Based on the characteristics of neural networks, neuron coverage is introduced as a metric to guide fuzz testing, steering the fuzzer to generate samples that increase neuron coverage so that inputs activate more neurons and neuron values span a wider distribution, fully testing the neural network and exploring different types of model outputs and erroneous behaviors.
The architecture of the Fuzz Testing Module is shown below:
![fuzzer_architecture](docs/fuzzer_architecture_cn.png)
### Privacy Protection Module
The Privacy Protection Module includes differential privacy training and privacy leakage evaluation.
#### Differential Privacy Training Module
Differential privacy training includes adaptive and non-adaptive differentially private SGD, Momentum, and Adam optimizers. The noise mechanisms support Gaussian and Laplacian noise, and differential privacy budget monitoring includes ZCDP and RDP.
The differential privacy architecture is shown below:
![dp_architecture](docs/differential_privacy_architecture_cn.png)
#### Privacy Leakage Evaluation Module
The Privacy Leakage Evaluation Module assesses the risk of a model revealing user privacy. The membership inference method is used to infer whether a sample belongs to the user's training dataset, thereby evaluating the privacy security of a deep learning model's data.
The architecture of the Privacy Leakage Evaluation Module is shown below:
![privacy_leakage](docs/privacy_leakage_cn.png)
## Getting Started


BIN docs/privacy_leakage_cn.png (new file, 326 KiB)

BIN docs/privacy_leakage_en.png (new file, 363 KiB)