Adversarial Attack and Defense: A Roundup of 2022 Top-Venue Papers (AAAI, ACM, ECCV, NeurIPS, ICLR, CVPR)

This article lists research papers from AAAI, CVPR, ACM, ECCV, and ICLR on adversarial attacks against deep-learning models and on defense strategies. The attack side covers methods such as black-box attacks, white-box attacks, data poisoning, and backdoor attacks; the defense side covers strategies such as adversarial training, robust initialization, and adversarial augmentation. A recurring research focus is improving model robustness and limiting the transferability of adversarial examples.
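To make the attack side of this summary concrete, below is a minimal sketch of the classic white-box Fast Gradient Sign Method (FGSM), which many of the listed transferability papers build on. This is not the method of any specific paper above; the toy logistic-regression model, the weights `w`, `b`, and the `eps` value are all illustrative assumptions, implemented in pure NumPy.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """FGSM on a binary logistic-regression model.

    Moves the input one step in the direction that increases the
    cross-entropy loss: x_adv = x + eps * sign(dL/dx).
    """
    z = w @ x + b                    # logit of the linear model
    p = 1.0 / (1.0 + np.exp(-z))     # predicted probability of class 1
    grad_x = (p - y) * w             # analytic dL/dx for cross-entropy
    return x + eps * np.sign(grad_x)

# Toy point that the model correctly classifies as class 1 (logit = 1.0)
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])

x_adv = fgsm(x, w, b, y=1.0, eps=0.5)
print(x_adv)            # each coordinate shifted by +/- eps: [1.5, 1.0]
print(w @ x_adv + b)    # logit drops from 1.0 to -0.5 -> misclassified
```

With `eps = 0.5` the perturbed point crosses the decision boundary, which is the basic phenomenon the black-box and transfer-attack papers below exploit without direct gradient access.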

AAAI 2022 Papers

  • attack

Learning to Learn Transferable Attack

Towards Transferable Adversarial Attacks on Vision Transformers

Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks

Shape Prior Guided Attack: Sparser Perturbations on 3D Point Clouds

Adversarial Attack for Asynchronous Event-Based Data

CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets

TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text

Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning

Attacking Video Recognition Models with Bullet-Screen Comments

Context-Aware Transfer Attacks for Object Detection

A Fusion-Denoising Attack on InstaHide with Data Augmentation

FCA: Learning a 3D Full-Coverage Vehicle Camouflage for Multi-View Physical Adversarial Attack

Backdoor Attacks on the DNN Interpretation System

Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs

Synthetic Disinformation Attacks on Automated Fact Verification Systems

Adversarial Bone Length Attack on Action Recognition

Improved Gradient Based Adversarial Attacks for Quantized Networks

Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification

Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search

Boosting the Transferability of Video Adversarial Examples via Temporal Translation

Learning Universal Adversarial Perturbation by Adversarial Example

Making Adversarial Examples More Transferable and Indistinguishable

Vision Transformers are Robust Learners

  • defense

Certified Robustness of Nearest Neighbors Against Data Poisoning and Backdoor Attacks

Preemptive Image Robustification for Protecting Users Against Man-in-the-Middle Adversarial Attacks

Practical Fixed-Parameter Algorithms for Defending Active Directory Style Attack Graphs

When Can the Defender Effectively Deceive Attackers in Security Games?

Robust Heterogeneous Graph Neural Networks against Adversarial Attacks

Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

Consistency Regularization for Adversarial Robustness

Adversarial Robustness in Multi-Task Learning: Promises and Illusions

LogicDef: An Interpretable Defense Framework Against Adversarial Examples via Inductive Scene Graph Reasoning

Efficient Robust Training via Backward Smoothing

Input-Specific Robustness Certification for Randomized Smoothing

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
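Several of the defense papers above rely on adversarial training, the min-max recipe of fitting the model on perturbed inputs. As a hedged illustration only (a toy NumPy logistic-regression setup with an FGSM inner step; the data, learning rate, and `eps` are all invented for the example, not taken from any listed paper):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Inner maximization: one FGSM step per sample."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y)[:, None] * w           # dL/dx for each row of x
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, steps=200):
    """Outer minimization: gradient descent on adversarial inputs."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1]) * 0.01
    b = 0.0
    for _ in range(steps):
        x_adv = fgsm_perturb(x, w, b, y, eps)       # craft worst-case inputs
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        w -= lr * (x_adv.T @ (p - y)) / len(y)      # update on perturbed data
        b -= lr * np.mean(p - y)
    return w, b

# Linearly separable toy data
x = np.array([[2.0, 1.0], [1.5, 0.5], [-2.0, -1.0], [-1.0, -1.5]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w, b = adversarial_train(x, y)
preds = (x @ w + b > 0).astype(float)
print(preds)  # classifies the clean training points correctly
```

The key design choice, mirrored in the certified-robustness papers listed above, is that the loss is evaluated at the perturbed inputs rather than the clean ones, so the learned boundary keeps a margin of at least `eps` around the training data.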

CVPR 2022 Papers

Adversarial Texture for Fooling Person Detectors in the Physical World

Adversarial Eigen Attack on Black-Box Models

Bounded Adversarial Attack on Deep Content Features

Backdoor Attacks on Self-Supervised Learning

Bandits for Structure Perturbation-Based Black-Box Attacks To Graph Neural Networks With Theoretical Guarantees

Boosting Black-Box Attack With Partially Transferred Conditional Adversarial Distribution

BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning

Cross-Modal Transferable Adversarial Attacks From Images to Videos

Can You Spot the Chameleon? Adversarially Camouflaging Images From Co-Salient Object Detection

DTA: Physical Camouflage Attack
