Attacks:
Fast Gradient Sign Method (FGSM): generates an adversarial example by adding a perturbation in the direction of the sign of the gradient of the loss with respect to the input:
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples.
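A minimal FGSM sketch in PyTorch; `model` (a trained classifier), `x` (an input batch in [0, 1]), and `y` (the true labels) are assumed, and the `eps` default is an illustrative choice, not a value from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step attack: move each pixel by eps in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```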
Basic Iterative Method (BIM): an improved version of FGSM. Instead of a single step, BIM applies the FGSM update repeatedly with a small step size, clipping back into the allowed perturbation range after each iteration:
Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world.
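A sketch of BIM under the same assumptions as above; the step size `alpha` and step count are illustrative defaults:

```python
import torch
import torch.nn.functional as F

def bim(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterated FGSM: take small signed-gradient steps of size alpha,
    clipping back into the eps-ball around x after every step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project into the L-infinity ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```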
Projected Gradient Descent (PGD): a multi-step iterative gradient method. Like BIM it loops signed-gradient ascent on the loss, but it starts from a random point inside the perturbation ball and projects back onto the ball after every step:
Madry, Aleksander, et al. Towards deep learning models resistant to adversarial attacks.
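A PGD sketch, again assuming `model`, `x`, `y` as above; it differs from the BIM sketch only in the random initialization:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD: random start inside the eps-ball, then iterated signed-gradient
    ascent on the loss with projection back onto the ball each step."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    x_adv = x_adv.clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```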
CW: Carlini and Wagner design an effective optimization objective that trades off the size of the perturbation against a misclassification margin, in order to find the minimal perturbation that fools the model:
Carlini, Nicholas, and David Wagner. Towards evaluating the robustness of neural networks.
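A heavily simplified sketch of the L2 variant of the C&W attack: it optimizes in tanh space so the image stays in [0, 1] and minimizes an L2 term plus `c` times a margin objective. The full attack also binary-searches over `c`, which this sketch omits; the defaults are illustrative:

```python
import torch

def cw_l2(model, x, y, c=1.0, steps=200, lr=0.01, kappa=0.0):
    """Simplified C&W L2: minimize ||x_adv - x||^2 + c * f(x_adv), where
    f penalizes the true-class logit exceeding the best other logit."""
    # Reparameterize so x_adv = 0.5*(tanh(w)+1) always lies in [0, 1].
    w = torch.atanh((x * 2 - 1).clamp(-0.999999, 0.999999))
    w = w.detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        other_logit = logits.scatter(1, y.unsqueeze(1), float("-inf")).max(1).values
        f = (true_logit - other_logit + kappa).clamp(min=0)  # margin objective
        loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * f  # L2 + c*f
        opt.zero_grad()
        loss.sum().backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```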
DeepFool: modifies the original image by the smallest amount needed to fool the model, by repeatedly linearizing the decision boundaries around the current point and stepping just past the nearest one:
Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks.
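A simplified single-image DeepFool sketch; it assumes `x` has shape `[1, C, H, W]`, `model` returns logits, and `num_classes`, `max_iter`, and `overshoot` are illustrative defaults:

```python
import torch

def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """At each step, linearize every decision boundary around x_adv and
    take the smallest perturbation that crosses the nearest one."""
    x_adv = x.clone().detach()
    orig_label = model(x).argmax(1).item()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != orig_label:
            break  # the prediction has flipped; stop
        # Gradient of each class logit with respect to the input.
        grads = [torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
                 for k in range(num_classes)]
        best_dist, best_step = float("inf"), None
        for k in range(num_classes):
            if k == orig_label:
                continue
            w_k = grads[k] - grads[orig_label]             # boundary normal
            f_k = (logits[k] - logits[orig_label]).item()  # boundary offset
            dist = abs(f_k) / (w_k.norm().item() + 1e-8)
            if dist < best_dist:
                best_dist = dist
                best_step = dist * w_k / (w_k.norm() + 1e-8)
        # Overshoot slightly past the nearest linearized boundary.
        x_adv = x_adv.detach() + (1 + overshoot) * best_step
    return x_adv.detach()
```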
