Efficient Defenses Against Adversarial Attacks
Deep learning has proven its prowess across a wide range of computer vision applications, from visual recognition to image generation [17]. The rapid deployment of deep learning models in critical systems, such as medical imaging, surveillance, and other security-sensitive applications, mandates that their reliability and security be established a priori. Like any computer-based system, deep learning models can be attacked with standard methods (such as denial-of-service or spoofing attacks), and their protection against these depends only on the security measures deployed around the system. In addition, deep neural networks (DNNs) have been shown to be vulnerable to a threat specific to prediction models: adversarial examples. These are input samples that have been deliberately modified to produce a desired response from a model (often a misclassification, or a specific incorrect prediction that would benefit the attacker).
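To make the notion concrete, the sketch below crafts an adversarial example with the fast gradient sign method (FGSM), one common attack of this kind. The PyTorch interface, the `epsilon` perturbation budget, and the [0, 1] pixel range are illustrative assumptions, not details from the original text.

```python
# A minimal FGSM sketch: perturb an input so the model's loss on the
# true label increases, within an L-infinity budget of epsilon.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of x perturbed to push the model away from `label`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, scaled by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    # Assumes inputs are images normalized to [0, 1]; clamp to stay valid.
    return perturbed.clamp(0.0, 1.0).detach()
```

A small, visually imperceptible `epsilon` is usually enough to flip the prediction of an undefended classifier, which is what makes this threat model distinct from conventional attacks on the surrounding system.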