DiffIR: Efficient Diffusion Model for Image Restoration

Problem Introduction

  • Unlike image synthesis, an IR task already has the low-quality (LQ) image as a strong prior, so it need not follow the full image-generation paradigm; this paper instead runs the diffusion model (DM) in a compact IPR space;
  • The proposed model has three parts: 1) CPEN (compact IR prior extraction network), which extracts the IPR (IR prior representation) that guides the restoration network; 2) DIRformer, the restoration network, analogous to a decoder; 3) a DM that recovers the IPR from the LQ image;
  • Training has two stages: stage 1 trains CPEN and DIRformer, with CPEN taking the high-quality image as part of its input; in stage 2 the IPR used by DIRformer comes from the DM (see the sketch after this list);
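As a rough sketch, the two-stage flow described above can be written as follows. All names (`cpen_s1`, `cpen_s2`, `dirformer`, `reverse_diffusion`) and the pixel-unshuffle factor are placeholders inferred from this summary, not the authors' actual API:

```python
import torch
import torch.nn.functional as F

def stage1_flow(cpen_s1, dirformer, I_gt, I_lq):
    # Stage 1: CPEN_S1 extracts the IPR from GT+LQ; DIRformer restores under its guidance.
    x = F.pixel_unshuffle(torch.cat([I_gt, I_lq], dim=1), downscale_factor=4)
    Z = cpen_s1(x)                 # compact IPR vector
    I_hq = dirformer(I_lq, Z)      # restoration guided by the IPR
    return F.l1_loss(I_hq, I_gt)   # (+ perceptual/adversarial losses for SR and inpainting)

def stage2_flow(cpen_s2, dirformer, reverse_diffusion, I_lq, I_gt):
    # Stage 2: the DM estimates the IPR from the LQ image alone.
    D = cpen_s2(F.pixel_unshuffle(I_lq, downscale_factor=4))  # condition
    Z_hat = reverse_diffusion(D)   # denoise Z_T -> Z_hat conditioned on D
    I_hq = dirformer(I_lq, Z_hat)
    return F.l1_loss(I_hq, I_gt)   # plus an IPR loss L_diff (detailed in Methods)
```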

Methods


  • Stage 1: train CPEN and DIRformer. The GT and LQ images are first concatenated, then passed through PixelUnshuffle to form CPEN's input, giving the IPR $Z = \mathrm{CPEN}_{S1}(\mathrm{PixelUnshuffle}(\mathrm{Concat}(I_{GT}, I_{LQ})))$ with $Z \in \mathbb{R}^{4C'}$; the IPR is then fed into the DMTA and DGFN modules of DIRformer. The stage-1 training loss is the L1 loss between the GT and the restored HQ image; super-resolution and inpainting additionally use perceptual and adversarial losses;
  • DMTA: $F' = W_l^1 Z \odot \mathrm{Norm}(F) + W_l^2 Z$, where the $W_l$ are linear layers and $F, F'$ are the input and output feature maps. Then $Q = W_d^Q W_c^Q F'$, $K = W_d^K W_c^K F'$, $V = W_d^V W_c^V F'$, where $W_d$ is a depthwise convolution and $W_c$ a pointwise convolution; these are reshaped into $\widehat{Q} \in \mathbb{R}^{\widehat{H}\widehat{W} \times \widehat{C}}$, $\widehat{K} \in \mathbb{R}^{\widehat{C} \times \widehat{H}\widehat{W}}$, $\widehat{V} \in \mathbb{R}^{\widehat{H}\widehat{W} \times \widehat{C}}$, and finally $\widehat{F} = W_c \widehat{V} \cdot \mathrm{Softmax}(\widehat{K} \cdot \widehat{Q} / \gamma) + F$ (a PyTorch sketch follows this list);
  • DGFN: $\widehat{F} = \mathrm{GELU}(W_d^1 W_c^1 F') \odot W_d^2 W_c^2 F' + F$ (also sketched below);
  • Stage 2: all three parts are trained jointly. First $\mathrm{CPEN}_{S1}$ produces $Z$, and the forward diffusion process maps it to $Z_T \in \mathbb{R}^{4C'}$; $\mathrm{CPEN}_{S2}$ gives the condition $D = \mathrm{CPEN}_{S2}(\mathrm{PixelUnshuffle}(I_{LQ}))$. The DM then runs its denoising iterations conditioned on $D$ to obtain $\widehat{Z}$, which is compared with the $Z$ from $\mathrm{CPEN}_{S1}$ through $L_{diff} = \frac{1}{4C'}\sum_{i=1}^{4C'} |\widehat{Z}(i) - Z(i)|$; this loss is combined with the stage-1 losses to form the total loss (a training-step sketch follows below).
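Below is a minimal PyTorch sketch of a DMTA block as read off the formulas above. It is not the official implementation: single-head attention, LayerNorm as $\mathrm{Norm}$, and a scalar learnable temperature $\gamma$ are all assumptions.

```python
import torch
import torch.nn as nn

class DMTA(nn.Module):
    """DMTA block (single-head sketch built from the formulas above)."""
    def __init__(self, dim, prior_dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.to_scale = nn.Linear(prior_dim, dim)   # W_l^1
        self.to_shift = nn.Linear(prior_dim, dim)   # W_l^2
        # W_c (pointwise) followed by W_d (depthwise), producing Q, K, V
        self.qkv_point = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.qkv_depth = nn.Conv2d(dim * 3, dim * 3, kernel_size=3,
                                   padding=1, groups=dim * 3)
        self.out_point = nn.Conv2d(dim, dim, kernel_size=1)  # output W_c
        self.gamma = nn.Parameter(torch.ones(1))             # temperature (assumed scalar)

    def forward(self, x, z):  # x: (B, C, H, W), z: (B, prior_dim)
        b, c, h, w = x.shape
        normed = self.norm(x.flatten(2).transpose(1, 2))     # LayerNorm over channels
        normed = normed.transpose(1, 2).reshape(b, c, h, w)
        # F' = W_l^1 Z ⊙ Norm(F) + W_l^2 Z, broadcast over spatial positions
        x_mod = self.to_scale(z)[:, :, None, None] * normed \
              + self.to_shift(z)[:, :, None, None]
        q, k, v = self.qkv_depth(self.qkv_point(x_mod)).chunk(3, dim=1)
        q_hat = q.reshape(b, c, h * w).transpose(1, 2)       # (B, HW, C)
        k_hat = k.reshape(b, c, h * w)                       # (B, C, HW)
        v_hat = v.reshape(b, c, h * w).transpose(1, 2)       # (B, HW, C)
        # channel-wise ("transposed") attention: a C x C map instead of HW x HW
        attn = torch.softmax(k_hat @ q_hat / self.gamma, dim=-1)  # (B, C, C)
        out = (v_hat @ attn).transpose(1, 2).reshape(b, c, h, w)
        return self.out_point(out) + x                       # residual: ... + F
```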
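A matching sketch of DGFN. The same $Z$-modulation is assumed to precede the gating (as in DMTA); the hidden expansion factor and the final pointwise projection back to `dim` channels are assumptions needed to make the shapes close.

```python
import torch.nn as nn
import torch.nn.functional as F

class DGFN(nn.Module):
    """Dynamic gated feed-forward network (sketch from the formula above)."""
    def __init__(self, dim, prior_dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.norm = nn.LayerNorm(dim)
        self.to_scale = nn.Linear(prior_dim, dim)
        self.to_shift = nn.Linear(prior_dim, dim)
        # two parallel branches: W_d^1 W_c^1 (gate) and W_d^2 W_c^2 (value)
        self.point1 = nn.Conv2d(dim, hidden, kernel_size=1)
        self.depth1 = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.point2 = nn.Conv2d(dim, hidden, kernel_size=1)
        self.depth2 = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.out_point = nn.Conv2d(hidden, dim, kernel_size=1)  # projection (assumed)

    def forward(self, x, z):  # x: (B, C, H, W), z: (B, prior_dim)
        b, c, h, w = x.shape
        normed = self.norm(x.flatten(2).transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        x_mod = self.to_scale(z)[:, :, None, None] * normed \
              + self.to_shift(z)[:, :, None, None]
        # F_hat = GELU(W_d^1 W_c^1 F') ⊙ W_d^2 W_c^2 F' + F
        gated = F.gelu(self.depth1(self.point1(x_mod))) * self.depth2(self.point2(x_mod))
        return self.out_point(gated) + x
```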
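Finally, a hedged sketch of one stage-2 training step. The step count `T`, the schedule tensor `alphas_cumprod`, and the one-step reverse helper `denoise_step` are placeholders; the point is the joint objective, the L1 restoration loss plus $L_{diff}$ on the IPR, backpropagated through the denoising chain.

```python
import torch
import torch.nn.functional as F

def stage2_step(cpen_s1, cpen_s2, denoise_step, denoiser, dirformer,
                alphas_cumprod, I_gt, I_lq, T=4):
    """One joint stage-2 step (sketch; all module/helper names are placeholders)."""
    # target IPR from the stage-1 encoder
    Z = cpen_s1(F.pixel_unshuffle(torch.cat([I_gt, I_lq], dim=1), downscale_factor=4))
    # forward diffusion Z -> Z_T (standard DDPM closed form, assumed)
    noise = torch.randn_like(Z)
    a_T = alphas_cumprod[T - 1]
    Z_t = a_T.sqrt() * Z + (1.0 - a_T).sqrt() * noise
    # condition from the LQ image
    D = cpen_s2(F.pixel_unshuffle(I_lq, downscale_factor=4))
    # reverse process: iteratively denoise, conditioned on D
    for t in reversed(range(T)):
        Z_t = denoise_step(denoiser, Z_t, D, t)  # one conditional reverse step
    Z_hat = Z_t
    # L_diff = (1/4C') * sum_i |Z_hat(i) - Z(i)|
    L_diff = (Z_hat - Z).abs().mean()
    I_hq = dirformer(I_lq, Z_hat)
    return F.l1_loss(I_hq, I_gt) + L_diff
```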

Experiments
