Adaptive Block Compressive Sensing for Noisy Images

These notes come from one chapter of a book.

The chapter discusses image denoising: to recover the original image X, it introduces a new method. In the early processing stage, a Lipschitz-based approximation is brought in, and with a Taylor expansion the original objective function is transformed into a convex function plus a penalty term. The objective is then handled in two parts: one part can still be treated with gradient descent, while the other is set aside and processed separately.

Beyond that, the way the image is handled also differs from earlier methods: previous work usually operated on one large image, mostly square in shape, whereas the paper discussed here allows for diversity in the block sizes and shapes.

Compressive Sensing Methodology

For a noisy image, compressive sensing can be expressed as:

$$y = \Phi \Psi s + \omega \tag{1}$$

where $\omega$ is an $N$-dimensional noise signal, $\Psi$ is an $N \times N$ orthogonal basis matrix, and $\Phi$ is an $M \times N$ random measurement matrix ($M < N$). The signal $s$ in (1) can be estimated from the measurement $y$ by solving the following convex minimization problem.

$$\hat{x} = \arg\min_x \; \|\Phi x - y\|_2^2 + \lambda \|x\|_1 \tag{2}$$

(2) is an unconstrained minimization problem with a convex objective (an $\ell_1$-regularized least-squares, i.e. LASSO-type, problem).
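To make the setup concrete, here is a minimal MATLAB sketch of the model (1) and the data entering (2); the sizes, the Gaussian $\Phi$, the DCT basis $\Psi$, and the sparsity level are all illustrative assumptions, not values from the paper:

```matlab
% Minimal sketch of the measurement model (1): y = Phi*Psi*s + omega.
N = 256;                                   % signal length (illustrative)
M = 64;                                    % number of measurements, M < N
Psi = dctmtx(N)';                          % orthogonal basis (inverse DCT)
s = zeros(N, 1);
s(randperm(N, 8)) = randn(8, 1);           % an 8-sparse coefficient vector
Phi = randn(M, N) / sqrt(M);               % random Gaussian measurement matrix
omega = 0.01 * randn(M, 1);                % additive noise
y = Phi * (Psi * s) + omega;               % noisy compressive measurements
```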
We can solve this kind of problem with a gradient-based method, which generates a sequence $\{x_k\}$ via:

$$x_0 \in \mathbb{R}^N, \qquad x_k = x_{k-1} - t_k \nabla g(x_{k-1})$$

where $g(x)$ is a smooth convex function and $t_k$ is the step size.
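As a minimal sketch of this iteration for the smooth part $g(x) = \|\Phi x - y\|_2^2$, reusing `Phi`, `y`, and `N` from the snippet above (the step size and iteration count are illustrative):

```matlab
% Plain gradient descent on g(x) = ||Phi*x - y||_2^2 (sketch).
t = 1 / (2 * norm(Phi)^2);                 % a fixed step size (see 1/L below)
x = zeros(N, 1);                           % x_0 in R^N
for k = 1:100
    grad = 2 * Phi' * (Phi*x - y);         % gradient of g at x_{k-1}
    x = x - t * grad;                      % x_k = x_{k-1} - t_k * grad g(x_{k-1})
end
```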

For (2), the objective function can be rewritten as $F(x) = g(x) + f(x)$, where $g(x) = \|\Phi x - y\|_2^2$ is a smooth convex function and $f(x) = \lambda \|x\|_1$.

The smooth part $g(x)$ can then be approximated around $x_{k-1}$ by a quadratic function:

$$g(x, x_{k-1}) = g(x_{k-1}) + \langle x - x_{k-1},\, \nabla g(x_{k-1}) \rangle + \frac{1}{2 t_k} \|x - x_{k-1}\|_2^2$$

In this function, the step size $t_k$ can be replaced by the constant $1/L$, where $L$ is the Lipschitz constant of $\nabla g$.
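For $g(x) = \|\Phi x - y\|_2^2$ the gradient is $\nabla g(x) = 2\Phi^T(\Phi x - y)$, which is Lipschitz with $L = 2\lambda_{\max}(\Phi^T\Phi) = 2\|\Phi\|^2$, so $L$ is one line to compute; a sketch reusing `Phi` from above:

```matlab
% Lipschitz constant of grad g for g(x) = ||Phi*x - y||_2^2:
% grad g(x) = 2*Phi'*(Phi*x - y), so L = 2 * lambda_max(Phi'*Phi).
L = 2 * norm(Phi)^2;                       % norm(Phi) = largest singular value
t = 1 / L;                                 % the constant step size 1/L
```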

Combining this with other papers I have read, the same idea can be applied to the non-smooth $\ell_1$-norm regularized problem:

$$\min_x F(x) = \min_x \left\{ g(x) + \lambda \|x\|_1 \right\}$$

which leads to the following iterative scheme:

$$x_k = \arg\min_x \left\{ g(x_{k-1}) + \langle x - x_{k-1},\, \nabla g(x_{k-1}) \rangle + \frac{1}{2 t_k} \|x - x_{k-1}\|_2^2 + \lambda \|x\|_1 \right\}$$

After the constant terms are ignored (by completing the square in $x$), we get:

$$x_k = \arg\min_x \left( \frac{1}{2 t_k} \left\| x - \bigl( x_{k-1} - t_k \nabla g(x_{k-1}) \bigr) \right\|_2^2 + \lambda \|x\|_1 \right)$$

According to the Lipschitz gradient condition

$$\|\nabla g(x) - \nabla g(y)\| \leq L \|x - y\| \quad \text{for all } x, y,$$

we know that when $x$ is close to $y$, the ratio $\frac{\|\nabla g(x) - \nabla g(y)\|}{\|x - y\|}$ approximates the curvature of $g$ (in one dimension, the second derivative $g''(x)$) at the point $x$.

So our model and objective function can be approximated by:

$$F(x) \approx g(x_{k-1}) + \langle x - x_{k-1},\, \nabla g(x_{k-1}) \rangle + \frac{L}{2} \|x - x_{k-1}\|_2^2 + f(x)$$

$$x_k = \arg\min_x \left( \frac{L}{2} \left\| x - \Bigl( x_{k-1} - \frac{1}{L} \nabla g(x_{k-1}) \Bigr) \right\|_2^2 + \lambda \|x\|_1 \right)$$

Or equivalently:

$$x_k = \arg\min_x \left( \frac{L}{2} \|x - d_k\|_2^2 + \lambda \|x\|_1 \right)$$

This subproblem separates coordinate-wise and has the closed-form soft-thresholding solution $x_k = \operatorname{sign}(d_k) \max\bigl(|d_k| - \tfrac{\lambda}{L},\, 0\bigr)$, applied element-wise.
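In MATLAB this shrinkage is one line; a minimal sketch (the helper name `soft_threshold` is mine, not from the paper):

```matlab
% Soft thresholding: closed-form minimizer of (L/2)*||x - d||_2^2 + lambda*||x||_1.
function x = soft_threshold(d, lambda, L)
    x = sign(d) .* max(abs(d) - lambda / L, 0);   % element-wise shrinkage
end
```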
Recalling the earlier equations, we know that:

$$y = \Phi \Psi s + \omega$$

$$g(x) = \|\Phi x - y\|_2^2 = \|\Phi \Psi s - y\|_2^2$$

$$d_k = x_{k-1} - \frac{1}{L} \nabla g(x_{k-1})$$

$$d_k = x_{k-1} - \frac{1}{L} (\Phi \Psi^T)^T (\Phi \Psi^T x_{k-1} - y)$$

where $\frac{1}{L}$ is the step size (the factor of 2 from the gradient of the squared norm is absorbed into $L$).
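Putting the pieces together gives an ISTA-style loop; a minimal sketch reusing `Phi`, `Psi`, `y`, and `N` from the earlier snippets. Since the $\Psi$ there is orthogonal and $y$ was simulated as $\Phi\Psi s$, I take the operator $A = \Phi\Psi$ (the notes' $\Phi\Psi^T$ is the same construction under the transposed basis convention); $\lambda$ and the iteration count are illustrative:

```matlab
% ISTA-style iteration for problem (2) (illustrative sketch, not the paper's code).
A = Phi * Psi;                             % sensing operator on coefficients
L = 2 * norm(A)^2;                         % Lipschitz constant of grad g
lambda = 0.05;                             % regularization weight (illustrative)
x = zeros(N, 1);                           % x_0
for k = 1:200
    d = x - (1/L) * 2 * A' * (A*x - y);    % d_k = x_{k-1} - (1/L)*grad g(x_{k-1})
    x = sign(d) .* max(abs(d) - lambda/L, 0);   % soft-thresholding step
end
```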

The Adaptive Block CS with Sparsity

$l_0 = \#\{j : c_j = 0\}$, the number of exactly zero coefficients;
$l_\varepsilon^0 = \#\{j : |c_j| \leq \varepsilon\}$, its $\varepsilon$-relaxed count of near-zero coefficients.
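A quick MATLAB illustration of these two counts (the coefficient vector `c` and the tolerance are made up for the example):

```matlab
% Count exactly-zero and near-zero coefficients of a vector c (sketch).
c = randn(64, 1) .* (rand(64, 1) < 0.2);   % a mostly-zero coefficient vector
epsilon = 1e-3;                            % tolerance (illustrative value)
l0     = nnz(c == 0);                      % #{ j : c_j = 0 }
l0_eps = nnz(abs(c) <= epsilon);           % #{ j : |c_j| <= epsilon }
```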

THE END of notes.

MATLAB provides a rich set of toolboxes for digital image processing, including noise removal. Common digital-image denoising techniques include the following:

1. **Mean filtering**: one of the simplest smoothing methods; it reduces random noise by averaging each pixel with its neighborhood.

```matlab
img = imread('your_image.jpg');               % load the image
noisy_img = imnoise(img, 'gaussian', 0.01);   % add Gaussian noise
h = fspecial('average', [3 3]);               % 3x3 averaging kernel
filtered_img = imfilter(noisy_img, h);        % apply the mean filter
```

2. **Median filtering**: suited to salt-and-pepper and other impulsive noise; it replaces each pixel with the median of its neighbors, which preserves edges well.

```matlab
filtered_img = medfilt2(noisy_img);           % for a grayscale image
```

3. **Wavelet denoising**: uses a wavelet transform to separate the image into different frequency components and thresholds the coefficients, e.g. with the `wavedec2` and `waverec2` functions.

```matlab
% wavelet decomposition (2 levels, Daubechies-4)
[C, S] = wavedec2(noisy_img, 2, 'db4');
% denoising step: threshold the coefficients (soft or hard thresholding),
% with noise_threshold chosen by the user
C_clean = wthresh(C, 's', noise_threshold);
% reconstruction
denoised_img = waverec2(C_clean, S, 'db4');
```

4. **Adaptive filtering**: filters whose weights adapt to local image statistics, such as MATLAB's pixel-wise adaptive Wiener filter `wiener2`, which adjusts its smoothing according to the local mean and variance.

```matlab
filtered_img = wiener2(noisy_img, [5 5]);     % adaptive Wiener filter, 5x5 window
```

5. **Sparse representation and compressive sensing**: for certain noise types, the image can be reconstructed from fewer measurement data, as in the notes above; this relies on linear-algebra techniques.

6. **Deep learning for denoising**: neural networks such as U-Net or CycleGAN provide stronger denoising; a model must be trained (or a pretrained one loaded) before use, e.g. the pretrained DnCNN shipped with the Image Processing Toolbox (`denoisingNetwork` / `denoiseImage`).

These are only the basic methods. In practice, the most suitable one depends on the specific image and noise type, and may be combined with other image-processing techniques such as edge detection or morphological operations.