Adversarial Attack and Defense: A Round-up of 2022 Top-Venue Papers (AAAI, ACM, ECCV, NeurIPS, ICLR, CVPR)

This post lists research papers from AAAI, CVPR, ACM, ECCV, and ICLR on adversarial attacks against deep learning models and the corresponding defense strategies. It covers attack methods such as black-box attacks, white-box attacks, data poisoning, and backdoor attacks, as well as defenses such as adversarial training, robust initialization, and adversarial augmentation. The research focus is on improving model robustness and countering the transferability of adversarial examples.
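Many of the attack papers below build on gradient-based adversarial examples. As a concrete illustration (not taken from any specific paper in this list), here is a minimal sketch of the classic FGSM white-box attack in PyTorch; the toy linear model and random inputs are placeholders for a real classifier and dataset:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x by epsilon in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Each pixel moves by at most epsilon; clamp back to the valid [0, 1] range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Toy usage: a tiny linear classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
print(x_adv.shape)  # same shape as the input batch
```

White-box attacks like this assume gradient access; the black-box and transfer attacks listed below instead craft perturbations on surrogate models or via queries alone.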

AAAI 2022 Paper Round-up

  • attack

Learning to Learn Transferable Attack

Towards Transferable Adversarial Attacks on Vision Transformers

Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks

Shape Prior Guided Attack: Sparser Perturbations on 3D Point Clouds

Adversarial Attack for Asynchronous Event-Based Data

CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets

TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text

Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning

Attacking Video Recognition Models with Bullet-Screen Comments

Context-Aware Transfer Attacks for Object Detection

A Fusion-Denoising Attack on InstaHide with Data Augmentation

FCA: Learning a 3D Full-Coverage Vehicle Camouflage for Multi-View Physical Adversarial Attack

Backdoor Attacks on the DNN Interpretation System

Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs

Synthetic Disinformation Attacks on Automated Fact Verification Systems

Adversarial Bone Length Attack on Action Recognition

Improved Gradient Based Adversarial Attacks for Quantized Networks

Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification

Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search

Boosting the Transferability of Video Adversarial Examples via Temporal Translation

Learning Universal Adversarial Perturbation by Adversarial Example

Making Adversarial Examples More Transferable and Indistinguishable

Vision Transformers are Robust Learners

  • defense

Certified Robustness of Nearest Neighbors Against Data Poisoning and Backdoor Attacks

Preemptive Image Robustification for Protecting Users Against Man-in-the-Middle Adversarial Attacks

Practical Fixed-Parameter Algorithms for Defending Active Directory Style Attack Graphs

When Can the Defender Effectively Deceive Attackers in Security Games?

Robust Heterogeneous Graph Neural Networks against Adversarial Attacks

Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

Consistency Regularization for Adversarial Robustness

Adversarial Robustness in Multi-Task Learning: Promises and Illusions

LogicDef: An Interpretable Defense Framework Against Adversarial Examples via Inductive Scene Graph Reasoning

Efficient Robust Training via Backward Smoothing

Input-Specific Robustness Certification for Randomized Smoothing

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks

CVPR 2022 Paper Round-up

Adversarial Texture for Fooling Person Detectors in the Physical World

Adversarial Eigen Attack on Black-Box Models

Bounded Adversarial Attack on Deep Content Features

Backdoor Attacks on Self-Supervised Learning

Bandits for Structure Perturbation-Based Black-Box Attacks To Graph Neural Networks With Theoretical Guarantees

Boosting Black-Box Attack With Partially Transferred Conditional Adversarial Distribution

BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning

Cross-Modal Transferable Adversarial Attacks From Images to Videos

Can You Spot the Chameleon? Adversarially Camouflaging Images From Co-Salient Object Detection

DTA: Physical Camouflage Attack

The Conference on Computer Vision and Pattern Recognition (CVPR) is an annual IEEE academic conference focused on computer vision and pattern recognition, and one of the world's top computer vision venues (one of the "big three", alongside ICCV and ECCV). In recent years it has drawn roughly 1,500 attendees per year and accepted around 300 papers, with fixed workshop themes each year and corporate sponsors who receive exhibition space at the venue[^1].

The European Conference on Computer Vision (ECCV) is a biennial research conference regarded as one of the top-tier venues in computer vision, on par with CVPR and ICCV. It is held in the years when ICCV is not, and, like most computer science conferences, it includes tutorials, technical sessions, and poster sessions[^2].

CVPR's full name is the IEEE/CVF Conference on Computer Vision and Pattern Recognition; it is held once a year and has so far never taken place outside the United States. Besides computer vision papers, the conference also features a fair number of pattern recognition papers, and the combination of the two fields is a key focus[^3].

To translate a paper implementing a three-layer convolutional neural network and present it as a PPT, you can proceed as follows:

1. **Finding the paper**: search the conference's official website. For example, the NeurIPS 2022 accepted-paper list is available at https://openreview.net/group?id=NeurIPS.cc/2022/Conference and https://nips.cc/Conferences/2022/Schedule?type=Poster [^4].
2. **Translating the paper**: first read it through to grasp its overall content and structure. For technical terms, consult a specialized dictionary or the standard translations used in the field. Translate sentence by sentence, making sure the wording is fluent and the meaning accurate.
3. **Building the PPT**:
   - **Cover**: paper title, conference name, presenter, etc.
   - **Introduction**: background and significance of three-layer convolutional neural networks.
   - **Overview**: the paper's main content and contributions.
   - **Technical details**: the implementation principles and architecture of the three-layer CNN.
   - **Experiments**: the paper's experimental data and result analysis.
   - **Conclusion**: the paper's main contributions and findings.
   - **Acknowledgments**: thank the audience.

### Code example

Below is a simple PyTorch implementation of a three-layer convolutional neural network:

```python
import torch
import torch.nn as nn

class ThreeLayerCNN(nn.Module):
    def __init__(self):
        super(ThreeLayerCNN, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
        self.relu1 = nn.ReLU()
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
        self.relu2 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1)
        self.relu3 = nn.ReLU()
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        # Three 2x poolings reduce a 32x32 input to 4x4, hence 64 * 4 * 4 features.
        self.fc1 = nn.Linear(64 * 4 * 4, 128)
        self.relu4 = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.pool1(self.relu1(self.conv1(x)))
        x = self.pool2(self.relu2(self.conv2(x)))
        x = self.pool3(self.relu3(self.conv3(x)))
        x = x.view(-1, 64 * 4 * 4)
        x = self.relu4(self.fc1(x))
        x = self.fc2(x)
        return x

# Example usage
model = ThreeLayerCNN()
input_tensor = torch.randn(1, 3, 32, 32)
output = model(input_tensor)
print(output.shape)
```