update readme
README.md
@@ -12,17 +12,50 @@
## What is MindArmour
-A tool box for MindSpore users to enhance model security and trustworthiness and protect privacy data.
+MindArmour focuses on the security and privacy of artificial intelligence. It can be used as a toolbox by MindSpore users to enhance model security and trustworthiness and to protect privacy data.
+
+MindArmour contains three modules: Adversarial Robustness Module, Fuzz Testing Module, and Privacy Protection and Evaluation Module.
-MindArmour model security module is designed for adversarial examples, including four submodule: adversarial examples generation, adversarial examples detection, model defense and evaluation. The architecture is shown as follow:
+### Adversarial Robustness Module
+
+
+
+Adversarial Robustness Module is designed to evaluate the robustness of a model against adversarial examples,
+and provides model enhancement methods to strengthen the model's resistance to adversarial attacks and improve its robustness.
+This module includes four submodules: Adversarial Examples Generation, Adversarial Examples Detection, Model Defense, and Evaluation.
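To make the Adversarial Examples Generation idea concrete, here is a minimal NumPy sketch of an FGSM-style perturbation step. The function name, `loss_grad_fn`, and the parameter defaults are hypothetical placeholders for illustration, not MindArmour's actual attack API.

```python
# A minimal FGSM-style sketch (illustrative only, not MindArmour's API):
# one signed-gradient step in the direction that increases the loss.
import numpy as np

def fgsm_perturb(x, loss_grad_fn, eps=0.03, clip_min=0.0, clip_max=1.0):
    grad = loss_grad_fn(x)                     # dLoss/dx, same shape as x
    x_adv = x + eps * np.sign(grad)            # move each feature by +/- eps
    return np.clip(x_adv, clip_min, clip_max)  # keep inputs in a valid range

# Toy usage with an analytic gradient standing in for a real model.
x = np.array([0.2, 0.8])
x_adv = fgsm_perturb(x, loss_grad_fn=lambda v: 2.0 * (v - 0.5), eps=0.05)
```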
-MindArmour differential privacy module Differential-Privacy implements the differential privacy optimizer. Currently, SGD, Momentum and Adam are supported. They are differential privacy optimizers based on the Gaussian mechanism.
-This mechanism supports both non-adaptive and adaptive policy. Rényi differential privacy (RDP) and Zero-Concentrated differential privacy(ZDP) are provided to monitor differential privacy budgets. The architecture is shown as follow:
-The architecture is shown as follow:
-
+### Fuzz Testing Module
+
+Fuzz Testing Module provides security testing for AI models. Guided by the characteristics of neural networks, it introduces neuron coverage gain to steer the fuzzing.
+Fuzzing generates samples in the direction of increasing neuron coverage, so that inputs activate more neurons and neuron values spread over a wider range, fully testing the neural network and exploring different types of model outputs and erroneous behaviors.
+
+The architecture is shown as follows:
+
+
+### Privacy Protection and Evaluation Module
+
+Privacy Protection and Evaluation Module includes two submodules: Differential Privacy Training Module and Privacy Leakage Evaluation Module.
+
+#### Differential Privacy Training Module
+
+Differential Privacy Training Module implements differential privacy optimizers. Currently, SGD, Momentum, and Adam are supported. They are differential privacy optimizers based on the Gaussian mechanism.
+This mechanism supports both non-adaptive and adaptive policies. Rényi differential privacy (RDP) and Zero-Concentrated differential privacy (ZCDP) are provided to monitor differential privacy budgets.
+
+The architecture is shown as follows:
+
+
+#### Privacy Leakage Evaluation Module
+
+Privacy Leakage Evaluation Module is used to assess the risk of a model revealing user privacy. It evaluates the privacy data security of a deep learning model by using membership inference to determine whether a sample belongs to the training dataset.
+
+The architecture is shown as follows:
+
+
## Setting up MindArmour
### Dependencies
README_CN.md
@@ -12,16 +12,42 @@
## Introduction
-MindArmour can be used to enhance the security and trustworthiness of models and to protect users' data privacy.
+MindArmour focuses on the security and privacy of AI. It is dedicated to enhancing the security and trustworthiness of models and protecting users' data privacy. It mainly contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.
-Model security mainly targets adversarial examples and contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack-defense evaluation. The architecture for adversarial examples is shown below:
+### Adversarial Robustness Module
+
+The Adversarial Robustness Module evaluates a model's robustness against adversarial examples and provides model enhancement methods to strengthen the model's resistance to adversarial attacks and improve its robustness. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack-defense evaluation.
+
+The architecture of the Adversarial Robustness Module is shown below:
+
+
-Privacy protection supports differential privacy, including adaptive or non-adaptive differential privacy SGD, Momentum, and Adam optimizers; the noise mechanisms support Gaussian and Laplace noise; differential privacy budget monitoring includes ZDP and RDP. The architecture of differential privacy is shown below:
-
+### Fuzz Testing Module
+
+The Fuzz Testing Module provides security testing for AI models. Guided by the characteristics of neural networks, it introduces neuron coverage to steer the fuzzer toward samples that increase neuron coverage, so that inputs activate more neurons and neuron values spread over a wider range, fully testing the neural network and exploring different types of model outputs and erroneous behaviors.
+
+The architecture of the Fuzz Testing Module is shown below:
+
+
+### Privacy Protection Module
+
+The Privacy Protection Module includes differential privacy training and privacy leakage evaluation.
+
+#### Differential Privacy Training Module
+
+Differential privacy training includes adaptive or non-adaptive differential privacy SGD, Momentum, and Adam optimizers; the noise mechanisms support Gaussian and Laplace noise; differential privacy budget monitoring includes ZCDP and RDP.
+
+The architecture of differential privacy is shown below:
+
+
+#### Privacy Leakage Evaluation Module
+
+The Privacy Leakage Evaluation Module assesses the risk of a model leaking user privacy. It uses membership inference to determine whether a sample belongs to the user's training dataset, thereby evaluating the privacy data security of a deep learning model.
+
+The architecture of the Privacy Leakage Evaluation Module is shown below:
+
+
## Getting Started
BIN docs/adversarial_robustness_cn.png (new file) | Size: 232 KiB
BIN docs/adversarial_robustness_en.png (new file) | Size: 290 KiB
BIN (modified image) | Size: 37 KiB -> 231 KiB
BIN (modified image) | Size: 48 KiB -> 213 KiB
BIN docs/fuzzer_architecture_cn.png (new file) | Size: 385 KiB
BIN docs/fuzzer_architecture_en.png (new file) | Size: 385 KiB
BIN (deleted image) | Size: 27 KiB
BIN (deleted image) | Size: 17 KiB
BIN docs/privacy_leakage_cn.png (new file) | Size: 326 KiB
BIN docs/privacy_leakage_en.png (new file) | Size: 363 KiB