Paper: https://arxiv.org/abs/1412.6572
Source: https://github.com/Harry24k/adversarial-attacks-pytorch/tree/master
Source code
import torch
import torch.nn as nn

from ..attack import Attack


class FGSM(Attack):
    r"""
    FGSM in the paper 'Explaining and harnessing adversarial examples'
    [https://arxiv.org/abs/1412.6572]

    Distance Measure : Linf

    Arguments:
        model (nn.Module): model to attack.
        eps (float): maximum perturbation. (Default: 8/255)

    Shape:
        - images: :math:`(N, C, H, W)` where `N = number of batches`, `C = number of channels`, `H = height` and `W = width`. It must have a range [0, 1].
        - labels: :math:`(N)` where each value :math:`y_i` is :math:`0 \leq y_i \leq` `number of labels`.
        - output: :math:`(N, C, H, W)`.

    Examples::
        >>> attack = torchattacks.FGSM(model, eps=8/255)
        >>> adv_images = attack(images, labels)
    """

    def __init__(self, model, eps=8/255):
        super().__init__("FGSM", model)
        self.eps = eps
        self.supported_mode = ['default', 'targeted']

    def forward(self, images, labels):
        r"""
        Overridden.
        """
        self._check_inputs(images)

        images = images.clone().detach().to(self.device)
        labels = labels.clone().detach().to(self.device)

        if self.targeted:
            target_labels = self.get_target_label(images, labels)

        loss = nn.CrossEntropyLoss()

        images.requires_grad = True
        outputs = self.get_logits(images)

        # Calculate loss: maximized for untargeted attacks, negated
        # (i.e. minimized) when driving the output toward a target class
        if self.targeted:
            cost = -loss(outputs, target_labels)
        else:
            cost = loss(outputs, labels)

        # Update adversarial images: a single step along the sign of the
        # gradient of the loss with respect to the input
        grad = torch.autograd.grad(cost, images,
                                   retain_graph=False, create_graph=False)[0]

        adv_images = images + self.eps * grad.sign()
        adv_images = torch.clamp(adv_images, min=0, max=1).detach()

        return adv_images
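For completeness, here is a minimal usage sketch (not part of the original source). The torchvision model and the random batch are placeholder assumptions; any classifier that takes (N, C, H, W) inputs in [0, 1] works the same way. It runs the default untargeted attack and checks the Linf bound:

import torch
import torchattacks
from torchvision import models

# Placeholder setup: resnet18 is just an assumed example model
model = models.resnet18(weights="IMAGENET1K_V1").eval()

images = torch.rand(4, 3, 224, 224)    # dummy batch, already in [0, 1]
labels = torch.randint(0, 1000, (4,))  # dummy ground-truth labels

attack = torchattacks.FGSM(model, eps=8/255)
adv_images = attack(images, labels)    # default (untargeted) mode

# FGSM takes exactly one eps-sized step, so the Linf distance is at most eps
assert (adv_images - images).abs().max() <= 8/255 + 1e-6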

This post walks through FGSM (Fast Gradient Sign Method), a white-box attack that supports both untargeted and targeted modes. It perturbs the input along the sign of the gradient of the loss function with respect to the input. The method assumes the loss is approximately linear in the input, which is enough to generate effective adversarial examples in a single step.
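In the paper's notation, with loss :math:`J(\theta, x, y)`, the untargeted update is one signed-gradient step (the clipping to [0, 1] matches the clamp in the code above, not the paper's formula):

x^{adv} = \operatorname{clip}_{[0,1]}\big( x + \varepsilon \cdot \operatorname{sign}\!\left( \nabla_x J(\theta, x, y) \right) \big)

The targeted variant (implemented above via cost = -loss) instead descends toward the target label:

x^{adv} = \operatorname{clip}_{[0,1]}\big( x - \varepsilon \cdot \operatorname{sign}\!\left( \nabla_x J(\theta, x, y_{\text{target}}) \right) \big)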