Zero-delay Lightweight Defenses against Website Fingerprinting

This post covers two new website-fingerprinting (WF) defenses, FRONT and GLUE. FRONT obfuscates the front of a trace with randomly distributed dummy packets to make the attacker's learning harder, while GLUE glues separate traces together to deceive the attacker, preventing classification of multi-page browsing. Both outperform prior schemes such as WTF-PAD in data overhead and defense performance.

USENIX Security 2020

code

Prior work

  • WTF-PAD [2016] -> broken by the DF (Deep Fingerprinting) attack
  • Tamaraw [2014] -> effective but heavyweight (large latency and data overhead)

FRONT: focuses on obfuscating the front of a trace with dummy packets. It randomizes both the number and the distribution of dummy packets to achieve trace-to-trace randomness, hindering the attacker's learning process.
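As a sketch of this front-padding idea: the padding budget and padding window are sampled per trace, and dummy-packet timestamps are drawn from a Rayleigh distribution so that most padding lands near the trace front. The default parameter values below are illustrative, not the paper's exact settings.

```python
import random

def schedule_front_padding(n_max=1700, w_min=1.0, w_max=14.0, seed=None):
    """Sample a FRONT-style padding schedule: a list of timestamps (in
    seconds) at which to send dummy packets, concentrated near the
    start of the trace. Parameter defaults are illustrative."""
    rng = random.Random(seed)
    n = rng.randint(1, n_max)      # per-trace padding budget (randomized)
    w = rng.uniform(w_min, w_max)  # per-trace padding window (randomized)
    # Rayleigh(sigma=w) == Weibull(scale=w*sqrt(2), shape=2):
    # density peaks early, so most dummies fall at the trace front.
    times = sorted(rng.weibullvariate(w * 2 ** 0.5, 2) for _ in range(n))
    return times
```

Because both `n` and `w` are re-sampled for every page load, two visits to the same page get different padding, which is the source of the trace-to-trace randomness described above.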

GLUE: adds dummy packets between separate traces so that it looks as if the client were visiting pages back to back without pausing. The attacker can find neither the start nor the end of any page, and therefore cannot classify it.

With 33% data overhead, FRONT outperforms WTF-PAD both in attacker performance and in information-leakage analysis. With roughly 22%-44% data overhead, GLUE reduces the accuracy, TPR, and precision of the best WF attacks to levels comparable to the best heavyweight defenses. Neither defense incurs any latency overhead.

FRONT

Overview

Deployability requires three properties: zero delay (no latency overhead), lightweight (low data overhead), and ease of implementation.

The only known defense sharing these properties with FRONT is WTF-PAD. In WTF-PAD, the client and server each maintain two histograms, from which they sample inter-packet gaps to decide when to inject dummy packets (adaptive padding).
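A minimal sketch of this histogram sampling (simplified: real WTF-PAD also consumes and refills tokens per bin; the bin layout here is illustrative):

```python
import bisect
import random

def sample_gap(histogram, rng=random):
    """Sample an inter-packet gap from a WTF-PAD-style histogram.
    histogram: list of ((low, high), tokens) pairs; a bin is chosen
    with probability proportional to its token count, then a gap is
    drawn uniformly within that bin's range."""
    bins, counts = zip(*histogram)
    cum, total = [], 0
    for c in counts:
        total += c
        cum.append(total)
    r = rng.uniform(0, total)
    i = bisect.bisect_left(cum, r)   # pick bin by cumulative tokens
    lo, hi = bins[i]
    return rng.uniform(lo, hi)
```

The defense then waits this sampled gap after each real packet; if no real packet arrives in time, a dummy packet is sent instead, filling the statistically unusual silences that fingerprinting attacks exploit.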
