
Adversarial Attacks and Defenses
薄荷奶绿Yena
Graduate student at a 211 university; research interests: visual question answering, visual dialogue, and multimodal adversarial attack and defense.
[Adversarial Robustness] Fight Perturbations with Perturbations: Defending Adversarial Attacks via Neuron Influence
The vulnerabilities of deep learning models towards adversarial attacks have attracted increasing attention, especially when models are deployed in security-critical domains. Numerous defense methods, including reactive and proactive ones, have been proposed…
[Adversarial Prompts] GuardT2I: Defending Text-to-Image Models from Adversarial Prompts
Recent advancements in Text-to-Image models have raised significant safety concerns about their potential misuse for generating inappropriate or Not-Safe-For-Work contents, despite existing countermeasures such as NSFW classifiers or model fine-tuning…
[Adversarial Robustness] Semantically Consistent Visual Representation for Adversarial Robustness
To overcome these limitations, this paper focuses on the basic image classification task and proposes the Semantically Constrained Adversarial Robust Learning (SCARL) framework. Specifically, it first formulates a semantic mutual information term to bridge the information gap between visual representations and semantic word vectors, since the representation spaces of visual images and semantic words are not aligned. A tractable lower bound on this mutual information is derived and can be optimized efficiently to bring the two distributions closer. Second, a semantic structure constraint loss is introduced to keep the structure of the visual representations consistent with that of the word vectors, so that the visual representations reflect the semantic relations between different classes in the same way the word vectors do. Finally, these two techniques are combined with adversarial training to learn a robust model.
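To make the three ingredients concrete, here is a minimal PyTorch-style sketch of how a SCARL-like objective could be assembled: an InfoNCE-style bound stands in for the semantic mutual-information term, a pairwise-similarity matching loss stands in for the structure constraint, and both are added on top of PGD adversarial training. The `model` interface (returning features and logits), the specific bound, the loss weights, and all hyperparameters are assumptions for illustration, not the paper's actual formulation or released code.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes: model(x) -> (features, logits); `word_vecs` is a (num_classes, d) tensor of
# fixed class word embeddings with the same dimensionality as the visual features
# (in practice a projection head would reconcile the dimensions).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD used to craft training-time adversarial examples."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        _, logits = model(x_adv)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def semantic_mi_lower_bound(feats, word_vecs, labels, tau=0.07):
    """InfoNCE-style lower bound on the mutual information between visual
    features and class word vectors (one plausible instantiation)."""
    f = F.normalize(feats, dim=-1)           # (B, d)
    w = F.normalize(word_vecs, dim=-1)       # (C, d)
    logits = f @ w.t() / tau                 # similarity of each image to every class word vector
    return -F.cross_entropy(logits, labels)  # maximizing this term tightens the bound

def structure_constraint_loss(feats, word_vecs, labels):
    """Match the pairwise-similarity structure of visual features in a batch
    to the pairwise-similarity structure of the corresponding word vectors."""
    f = F.normalize(feats, dim=-1)
    w = F.normalize(word_vecs[labels], dim=-1)
    return F.mse_loss(f @ f.t(), w @ w.t())

def scarl_step(model, x, y, word_vecs, lam_mi=1.0, lam_struct=1.0):
    """One training step: adversarial CE + semantic MI bound + structure constraint."""
    x_adv = pgd_attack(model, x, y)
    feats, logits = model(x_adv)
    return (F.cross_entropy(logits, y)
            - lam_mi * semantic_mi_lower_bound(feats, word_vecs, y)
            + lam_struct * structure_constraint_loss(feats, word_vecs, y))
```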
[Universal Adversarial Perturbations] Universal Adversarial Perturbations for Vision-Language Pre-trained Models
Original title: Universal Adversarial Perturbations for Vision-Language Pre-trained Models
Code: https://github.com/sduzpf/UAP_VLP
Year: 2024
To address these challenges, we propose a novel black-box UAP generation method, called the Effective and Transferable Universal adversarial attack (ETU). ETU focuses on attacking a variety of VLP models without any prior knowledge of model details such as architecture, downstream tasks, or training datasets. It thoroughly accounts for the characteristics of UAPs and the intrinsic … between different modalities
[Self-Universality] Enhancing the Self-Universality for Transferable Targeted Attacks
Original title: Enhancing the Self-Universality for Transferable Targeted Attacks
Code: https://github.com/zhipeng-wei/Self-Universality
Year: 2023
Venue: CVPR
In this paper, we propose a novel transfer-based targeted attack method that optimizes the adversarial perturbation…
[Realistic Adversarial Settings] MC-Net: Realistic Sample Generation for Black-Box Attacks
This paper focuses on constructing a more realistic attack scenario in order to generate more effective attacks.
[Fast Adversarial Training] Improving Fast Adversarial Training With Prior-Guided Knowledge
Based on the above observations, this paper asks whether it is possible to obtain adversarial example initializations without incurring extra training time. It therefore proposes a prior-guided initialization, generated from high-quality adversarial perturbations collected over the training history. Specifically, buffered gradients from all previous epochs are used as an additional prior via a momentum mechanism, and the buffered gradients are accumulated with different weights according to the quality of the historical adversarial examples. Beyond that, a simple yet effective regularization method is proposed to prevent the learned model's outputs on the current adversarial examples from deviating too far from its outputs on the samples initialized by the prior-guided initialization. In the minimization step, the two kinds of adversarial examples, produced from the prior-guided initialization and the adversarial perturbation,…
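As a rough illustration of how such a prior-guided initialization could be wired into single-step adversarial training, the sketch below keeps a momentum-accumulated buffer of past perturbation gradients per example, uses it to initialize the perturbation, and adds an output-consistency regularizer. The buffer layout, the quality weighting, the regularizer, and every name and hyperparameter are illustrative assumptions rather than the paper's exact algorithm.

```python
# Illustrative sketch only -- not the paper's implementation.
# Assumes: model(x) -> logits; `idx` gives each example's index into the full dataset.
import torch
import torch.nn.functional as F

class PriorGuidedInit:
    """Momentum-accumulated buffer of past perturbation gradients, one slot per example.
    (In practice the buffer could live on CPU or at reduced precision to save memory.)"""
    def __init__(self, dataset_shape, momentum=0.9, device="cpu"):
        self.buffer = torch.zeros(dataset_shape, device=device)
        self.momentum = momentum

    def get(self, idx, eps, device):
        idx = idx.to(self.buffer.device)
        return eps * self.buffer[idx].to(device).sign()

    def update(self, idx, grad, quality):
        # Weight the new gradient by a quality score of the current adversarial examples.
        idx = idx.to(self.buffer.device)
        new = self.momentum * self.buffer[idx] + quality * grad.sign().to(self.buffer.device)
        self.buffer[idx] = new

def fast_at_step(model, x, y, idx, prior, eps=8/255, lam=0.5):
    # 1. Prior-guided initialization instead of a random or zero start.
    init = prior.get(idx, eps, x.device)
    x_init = (x + init).clamp(0, 1)

    # 2. Single FGSM-style step starting from the prior-guided initialization.
    x_init.requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_init), y), x_init)[0]
    with torch.no_grad():
        delta = (init + eps * grad.sign()).clamp(-eps, eps)
        x_adv = (x + delta).clamp(0, 1)

    # 3. Adversarial cross-entropy plus a regularizer that keeps the outputs on the
    #    adversarial examples close to the outputs on the prior-initialized samples.
    logits_adv = model(x_adv)
    logits_init = model(x_init.detach())
    loss = F.cross_entropy(logits_adv, y) + lam * F.mse_loss(logits_adv, logits_init)

    # 4. Refresh the gradient buffer, crudely weighting by how often the attack succeeded.
    quality = (logits_adv.argmax(1) != y).float().mean().item()
    prior.update(idx, grad.detach(), quality)
    return loss
```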
[Diffusion-Based Adversarial Attacks] AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models
Original title: AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models
Code: https://github.com/lafeat/advdiffuser
Year: 2023
Venue: ICCV
Previous work on adversarial examples typically involves a fixed norm perturbation budget, which fails to capture…
[Multimodal Attacks] Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models
Original title: Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models
Code: https://github.com/Zoky-2020/SGA
Year: 2023
Venue: ICCV
Vision-language pre-training (VLP) models have shown vulnerability to adversarial examples…
[Adversarial Transferability] FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks
Original title: FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks
Code: not available
Year: 2024
Venue: AAAI
Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples. …
[Multimodal Adversarial Attacks] VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models
Original title: VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models
Code: https://github.com/ericyinyzy/VQAttack
Year: 2024
Venue: AAAI
Visual Question Answering (VQA) is a fundamental task in computer vision and natural language…
[Physical Adversarial Attacks] Adversarial Attack with Raindrops
Original title: Adversarial Attack with Raindrops
Code: not available
Year: 2023
Venue: CVPR
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples, which are usually designed artificially to fool DNNs, but rarely exist in real-world scenarios. In this paper,…
[Multimodal Adversarial Attacks] VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Original title: VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Code: https://github.com/ericyinyzy/VLAttack
Year: 2023
Venue: NeurIPS
Vision-Language (VL) pre-trained models have shown their superiority on many multimodal tasks…
[Multimodal Adversarial Attacks] AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Original title: AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Code: https://github.com/CGCL-codes/AdvCLIP
Year: 2023
Venue: ACM MM
Multimodal contrastive learning aims to train a general-purpose feature extractor, such as CLIP,…
[Textual Adversarial Attacks] Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework
Original title: Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework
Code: https://github.com/Phantivia/T-PGD
Year: 2023
Venue: ACL
Despite recent success on various tasks, deep learning techniques still perform poorly on adversarial…
[Semantic-Perturbation Adversarial Attacks] Mutual-modality Adversarial Attack with Semantic Perturbation
Original title: Mutual-modality Adversarial Attack with Semantic Perturbation
Code: not available
Year: 2024
Venue: AAAI
Adversarial attacks constitute a notable threat to machine learning systems, given their potential to induce erroneous predictions and classifications. However,…
[Graph Adversarial Attacks] Local-Global Defense against Unsupervised Adversarial Attacks on Graphs
Original title: Local-Global Defense against Unsupervised Adversarial Attacks on Graphs
Code: https://github.com/jindi-tju/ULGD/blob/main
Year: 2023
Venue: AAAI
Unsupervised pre-training algorithms for graph representation learning are vulnerable to adversarial attacks…