Feb 27 paper reading ~Gradient-based low rank method

The paper proposes a low-rank image inpainting method that incorporates the gradient characteristics of the image, called the grad-LR method. The optimization model takes both the image itself and its gradient into account, which yields finer inpainting results. By turning the image completion problem into a standard low-rank approximation problem, using a flexible form of the regularization function such as the $L_{p,q}$ norm, and applying a special treatment of the patch matrix such as a matrix factorization, the method effectively improves the quality of the inpainted image.


1. Gradient-based low rank method and its application in image inpainting

https://link.springer.com/article/10.1007/s11042-017-4509-0

This paper studies the inpainting problem in a low-rank setting: given an observation y, we want to recover the image x. It starts from the classical dictionary-based formulation with an L1-norm sparsity penalty. Then, exploiting the structure of the data, similar patches are arranged into a matrix and the problem is turned into a standard low-rank approximation problem. Next, the regularisation function is generalised by introducing the $L_{p,q}$ norm; when p and q are set to 1 and 2 respectively the form simplifies, and at the same time A can be written as a diagonal matrix multiplied by another matrix, which simplifies the whole model once more. The resulting problem can then be solved, usually alternately. This is the SAIST algorithm.

The essence is the relaxation of the regularization term, which ends up as the nuclear norm, together with the treatment of the patch matrix, namely the matrix factorization.

Only after that does the paper introduce its most important idea, the grad-LR method. This algorithm is related to the one above: it considers both the properties of the whole image and the properties of the image gradient, so its optimization model includes both the image itself and the image gradient, combines the two parts, and then solves the resulting optimization problem.

1 Theory introduction

1.1 Importance of utilizing the priors from gradient domain

Our motivation comes from a fundamental property of a regular image: if the rank of the matrix of similar patches extracted from the image is low, then the rank of the matrix of similar patches extracted from the gradient image is also low, at the same order of magnitude or smaller.
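As a rough numerical illustration of this prior, here is a small numpy sketch (my own toy example, not from the paper): it stacks patches taken from a synthetic image and from a simple stand-in for its gradient image into matrices and compares their effective ranks. The synthetic image, the use of a horizontal forward difference as the "gradient image", the patch locations, and the rank threshold are all assumptions made purely for illustration.

```python
import numpy as np

def patch_matrix(img, top_left, size=8):
    """Flatten size x size patches and stack them as the columns of a matrix."""
    return np.stack([img[r:r + size, c:c + size].ravel() for r, c in top_left], axis=1)

def eff_rank(Y, tol=1e-3):
    """Effective rank: number of singular values above tol * (largest singular value)."""
    s = np.linalg.svd(Y, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Synthetic smooth image with repeating structure (toy assumption).
t = np.linspace(0, 1, 64)
img = np.outer(np.sin(8 * np.pi * t), np.ones(64)) + np.outer(np.ones(64), np.cos(8 * np.pi * t))

# Horizontal forward difference as a simple stand-in for the gradient image.
grad = np.diff(img, axis=1, append=img[:, -1:])

# "Similar" patches: patches sampled along the direction in which the image repeats.
coords = [(0, c) for c in range(0, 40, 4)]
print("effective rank of image patch matrix:   ", eff_rank(patch_matrix(img, coords)))
print("effective rank of gradient patch matrix:", eff_rank(patch_matrix(grad, coords)))
```

Both patch matrices come out with a small effective rank; the point of the prior is that this low-rank structure survives (or is even strengthened) in the gradient domain.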

1.2 Review of SAIST

The spatially adaptive iterative singular-value thresholding (SAIST) method first assumes that the inpainted/desired image is sparse in some dictionary, which leads to the following minimization problem:
$$(U,\alpha)=\underset{U,\alpha}{\arg\min}\sum_{i=1}^{N}||y_i-U\alpha_i||_2^2+\tau\sum_{i=1}^{N}||\alpha_i||_1$$

U is a dictionary and $y_i\in R^n$ (n is the patch size).
In the SAIST method, the authors use group sparsity: a set of similar patches is grouped into
$$Y = [y_1,y_2,\ldots ,y_m]\in R^{n\times m},$$
e.g. by finding the k nearest neighbours of an exemplar patch $y_1$, and a pseudo-matrix norm $||A||_{p,q}$ is exploited to define the group sparsity:
$$(U,A)=\underset{U,A}{\arg\min}||Y-UA||_F^2+\tau||A||_{p,q}$$

where $A = [\alpha^1, \alpha^2, \ldots, \alpha^n]$ is related to the image patches by $X = UA$, and the pseudo-matrix norm is defined by
$$||A||_{p,q}=\sum_{i=1}^{n}||\alpha^i||_q^p = \sum_{i=1}^{n}\Big(\sum_{j=1}^m|\alpha_{i,j}|^q\Big)^{p/q},$$
where $\alpha^i=[\alpha_{i,1},\alpha_{i,2},\ldots,\alpha_{i,m}]$ is the i-th row of the matrix A.
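A minimal sketch of this pseudo-matrix norm (the helper name pq_norm and the random test matrix are my own, for illustration only):

```python
import numpy as np

def pq_norm(A, p, q):
    """||A||_{p,q} = sum_i ||alpha^i||_q^p, summed over the rows alpha^i of A."""
    row_q_norms = np.sum(np.abs(A) ** q, axis=1) ** (1.0 / q)
    return float(np.sum(row_q_norms ** p))

A = np.random.default_rng(0).standard_normal((5, 7))

# The p = 1, q = 2 case used below: the sum of the 2-norms of the rows of A.
print(pq_norm(A, p=1, q=2))
print(float(np.sum(np.linalg.norm(A, axis=1))))  # same value
```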

When p = 1 and q = 2,
$$||A||_{1,2}=\sum_{i=1}^{n}||\alpha^i||_2=\sum_{i=1}^{n}\Big(\sum_{j=1}^m|\alpha_{i,j}|^2\Big)^{1/2},$$
which is, up to a factor of $\sqrt{m}$, the sum of the standard deviations associated with the sparse coefficient vectors in each row.
The main innovation of SAIST is that, under the assumption that the basis U is orthogonal, the simultaneous sparse coding (SSC) problem can be recast as a low-rank approximation problem. Specifically, when the pseudo-matrix norm $||\cdot||_{1,2}$ is used in SSC, the minimization is:
$$(U,A)=\underset{U,A}{\arg\min}||Y-UA||_F^2+\tau||A||_{1,2}$$
U is a dictionary.
$Y = [y_1,y_2,\ldots ,y_m]\in R^{n\times m}$ is a group of similar patches.
By rewriting $A =\Sigma V^T$, where $\Sigma=\mathrm{diag}\{\lambda_1, \lambda_2,\ldots, \lambda_i,\ldots,\lambda_I\}$ (with $I =\min\{n,m\}$) is a diagonal matrix in $R^{I\times I}$ and $V\in R^{m\times I}$ is a right-multiplying matrix whose columns can be written as $v_i = \frac{1}{\lambda_i}(\alpha^i)^T$,
we also have $||A||_{1,2}=\sum_{i=1}^{I}\sqrt{m}\,\sigma_i$, where $\sigma_i^2 = \lambda_i^2/m$.

Then the model can be rewritten as
$$(U,\Sigma,V)=\underset{U,\Sigma,V}{\arg\min}||Y-U\Sigma V^T||_F^2+\tau\sum_{i=1}^{I}\lambda_i,$$
which is a standard low-rank approximation problem.

$\sum_{i=1}^{I}\lambda_i$ is a nuclear norm (defined as the sum of the singular values), and it is a relaxation of the rank function, which counts the number of non-zero singular values.
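The step from the group-sparsity term to the nuclear norm can be checked numerically: for $A = \Sigma V^T$ with orthonormal columns of V, the i-th row of A is $\lambda_i v_i^T$, so its 2-norm is exactly $\lambda_i$ and $||A||_{1,2}$ equals the sum of singular values of Y. A small numpy sketch of this check (the toy patch-group matrix is my own assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((16, 10))           # a toy group of m = 10 "similar" patches

U, lam, Vt = np.linalg.svd(Y, full_matrices=False)
A = np.diag(lam) @ Vt                       # A = Sigma V^T, so that Y = U A

l12 = np.sum(np.linalg.norm(A, axis=1))     # ||A||_{1,2}: sum of the row 2-norms
nuclear = np.sum(lam)                       # nuclear norm of Y: sum of singular values
print(l12, nuclear)                         # agree up to floating-point error
```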

In practice,
$$(U,\Sigma,V)= \mathrm{svd}(Y); \qquad \Sigma = S_{\tau}(\Sigma),$$

where $S_{\tau}$ denotes the soft-thresholding operator with threshold $\tau$ (the regularization parameter), and the reconstructed data matrix is conveniently obtained by $Y = U\Sigma V^T$.
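A minimal sketch of this singular-value soft-thresholding step (the threshold value, the noise level, and the toy test matrix are assumptions made for illustration):

```python
import numpy as np

def svt(Y, tau):
    """Soft-threshold the singular values of Y by tau and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)     # S_tau applied to the singular values
    return (U * s_shrunk) @ Vt              # equals U diag(s_shrunk) V^T

rng = np.random.default_rng(2)
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
Y = low_rank + 0.1 * rng.standard_normal((20, 15))   # noisy observation of a rank-3 matrix

Y_hat = svt(Y, tau=1.0)
# The noisy matrix is full rank; the shrunken matrix typically has rank close to 3.
print(np.linalg.matrix_rank(Y, tol=1e-6), np.linalg.matrix_rank(Y_hat, tol=1e-6))
```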

2 Proposed grad-LR method

(the image is replaced by its gradient in the objective function)

2.1 Grad-LR method

Model:
$$(U_l,\Sigma_l,V_l,y)= \underset{U_l,\Sigma_l,V_l,\,y}{\arg\min}\;\sum_{l}\Big(||\widetilde{R}_l(\nabla y)-U_l\Sigma_l V_l^T||_F^2+\tau\sum_{i=1}^{I}\lambda_{l,i}\Big)$$
$$\text{s.t. } y(\Omega)= y_0(\Omega),$$
where $\widetilde{R}_l(\nabla y)$ denotes the l-th group of similar patches extracted from the gradient image and $\Omega$ is the set of observed pixels.

This completes the model description.
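To make the structure of the model concrete, here is a very rough, self-contained numpy sketch of the kind of alternation described above: a singular-value soft-thresholding (low-rank) step on groups of gradient patches, an image update that pulls $\nabla y$ toward the low-rank target, and the hard data constraint $y(\Omega)=y_0(\Omega)$. This is not the paper's algorithm: the patch grouping is simplified to non-overlapping tiles instead of k-nearest-neighbour groups, the image update is a single gradient-descent step, and all parameter values and the toy test image are arbitrary assumptions.

```python
import numpy as np

def svt(Y, tau):
    """Soft-threshold the singular values of a patch-group matrix."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def grad(y):
    """Forward-difference gradients, stacked along the last axis."""
    gx = np.diff(y, axis=1, append=y[:, -1:])
    gy = np.diff(y, axis=0, append=y[-1:, :])
    return np.stack([gx, gy], axis=-1)

def grad_adjoint(g):
    """Adjoint of the forward-difference operator above (a negative divergence)."""
    gx, gy = g[..., 0], g[..., 1]
    dx = np.empty_like(gx)
    dx[:, 0] = -gx[:, 0]
    dx[:, 1:-1] = gx[:, :-2] - gx[:, 1:-1]
    dx[:, -1] = gx[:, -2]
    dy = np.empty_like(gy)
    dy[0, :] = -gy[0, :]
    dy[1:-1, :] = gy[:-2, :] - gy[1:-1, :]
    dy[-1, :] = gy[-2, :]
    return dx + dy

def grad_lr_sketch(y0, mask, tau=0.2, step=0.1, iters=100, patch=8):
    """Alternate: (1) low-rank step on gradient patch groups, (2) gradient step
    pulling grad(y) toward the low-rank target, (3) re-impose observed pixels."""
    y = y0.copy()
    H, W = y.shape
    for _ in range(iters):
        g = grad(y)
        target = g.copy()
        # Grouping simplified to non-overlapping tiles; the columns of each tile
        # play the role of a group of similar patches.
        for r in range(0, H - patch + 1, patch):
            for c in range(0, W - patch + 1, patch):
                for d in range(2):
                    target[r:r + patch, c:c + patch, d] = svt(g[r:r + patch, c:c + patch, d], tau)
        # One gradient-descent step on 0.5 * ||grad(y) - target||_F^2.
        y = y - step * grad_adjoint(g - target)
        y[mask] = y0[mask]                     # data constraint y(Omega) = y0(Omega)
    return y

# Toy usage: a smooth synthetic image with roughly 40% of its pixels missing.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 64)
clean = np.outer(np.sin(4 * np.pi * t), np.cos(4 * np.pi * t))
mask = rng.random(clean.shape) > 0.4           # True where the pixel is observed
y0 = np.where(mask, clean, 0.0)
recon = grad_lr_sketch(y0, mask)
print(np.linalg.norm((recon - clean)[~mask]) / np.linalg.norm(clean[~mask]))
```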

2. Gradient based Low rank Method for Highly Under-sampled Magnetic resonance Imaging Reconstruction

The model used in this paper is similar to the one used in the previous paper.
