AAAI 2022 Paper Roundup
- attack
Learning to Learn Transferable Attack
Towards Transferable Adversarial Attacks on Vision Transformers
Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks
Shape Prior Guided Attack: Sparser Perturbations on 3D Point Clouds
Adversarial Attack for Asynchronous Event-Based Data
CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets
TextHoaxer: Budgeted Hard-Label Adversarial Attacks on Text
Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Attacking Video Recognition Models with Bullet-Screen Comments
Context-Aware Transfer Attacks for Object Detection
A Fusion-Denoising Attack on InstaHide with Data Augmentation
FCA: Learning a 3D Full-Coverage Vehicle Camouflage for Multi-View Physical Adversarial Attack
Backdoor Attacks on the DNN Interpretation System
Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs
Synthetic Disinformation Attacks on Automated Fact Verification Systems
Adversarial Bone Length Attack on Action Recognition
Improved Gradient Based Adversarial Attacks for Quantized Networks
Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification
Has CEO Gender Bias Really Been Fixed? Adversarial Attacking and Improving Gender Fairness in Image Search
Boosting the Transferability of Video Adversarial Examples via Temporal Translation
Learning Universal Adversarial Perturbation by Adversarial Example
Making Adversarial Examples More Transferable and Indistinguishable
Vision Transformers are Robust Learners
- defense
Certified Robustness of Nearest Neighbors Against Data Poisoning and Backdoor Attacks
Preemptive Image Robustification for Protecting Users Against Man-in-the-Middle Adversarial Attacks
Practical Fixed-Parameter Algorithms for Defending Active Directory Style Attack Graphs
When Can the Defender Effectively Deceive Attackers in Security Games?
Robust Heterogeneous Graph Neural Networks against Adversarial Attacks
Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation
Consistency Regularization for Adversarial Robustness
Adversarial Robustness in Multi-Task Learning: Promises and Illusions
LogicDef: An Interpretable Defense Framework Against Adversarial Examples via Inductive Scene Graph Reasoning
Efficient Robust Training via Backward Smoothing
Input-Specific Robustness Certification for Randomized Smoothing
CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
CVPR 2022 Paper Roundup
- Partially compiled from an external source
- attack
Adversarial Texture for Fooling Person Detectors in the Physical World
Adversarial Eigen Attack on Black-Box Models
Bounded Adversarial Attack on Deep Content Features
Backdoor Attacks on Self-Supervised Learning
Bandits for Structure Perturbation-Based Black-Box Attacks To Graph Neural Networks With Theoretical Guarantees
Boosting Black-Box Attack With Partially Transferred Conditional Adversarial Distribution
BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
Cross-Modal Transferable Adversarial Attacks From Images to Videos
Can You Spot the Chameleon? Adversarially Camouflaging Images From Co-Salient Object Detection
DTA: Physical Camouflage Attacks Using Differentiable Transformation Network

This article catalogs research papers from AAAI, CVPR, ACM, ECCV, and ICLR on adversarial attacks against deep learning models and the corresponding defense strategies. The attacks covered include black-box attacks, white-box attacks, data poisoning, and backdoor attacks; the defenses include adversarial training, robust initialization, and adversarial augmentation. The overall research focus is on improving model robustness and limiting the transferability of adversarial examples.
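As a rough illustration of the white-box attacks that recur throughout these lists, here is a minimal PGD (projected gradient descent) sketch in PyTorch. It is not taken from any of the listed papers; `model`, `images`, and `labels` are hypothetical placeholders, and the `eps`/`alpha`/`steps` values are common defaults rather than prescriptions.

```python
# Minimal, illustrative PGD white-box attack (not from any listed paper).
# `model`, `images`, `labels` are placeholders the caller must supply.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Generate L-infinity-bounded adversarial examples via PGD."""
    images = images.clone().detach()
    # Random start inside the eps-ball, then clamp to the valid pixel range.
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # ascend the loss
            adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
            adv = adv.clamp(0, 1)                           # keep pixels valid
    return adv.detach()
```

The same loop, run on training batches with the resulting adversarial examples fed back as training inputs, is the core idea behind the adversarial training defenses listed above.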