CS recovery algorithms (OMP, ROMP, IRLS, GBP, ...)


Many compressed sensing (CS) recovery algorithms have been proposed. Here I list some of them and provide the corresponding experimental results. Essentially, these recovery algorithms are similar to sparse coding over an over-complete dictionary. The corresponding MATLAB code for the following algorithms can be downloaded from my homepage, http://home.ustc.edu.cn/~roy.


1. Orthogonal Matching Pursuit (OMP) [1]


Fig. 1. Recovery result by using OMP


I employ OMP to recover an image in a column-by-column fashion, i.e. each column is processed separately. The recovery result is shown in Fig. 1. The advantages of OMP are its easy implementation and fast speed.
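As a rough illustration of the greedy loop, here is a minimal NumPy sketch of OMP (an illustrative reimplementation, not the MATLAB code linked above; the function name and toy setup are my own):

```python
import numpy as np

def omp(A, y, k, tol=1e-6):
    """Orthogonal Matching Pursuit: repeatedly pick the column of A most
    correlated with the residual, then re-fit the coefficients by least
    squares on the selected support."""
    n = A.shape[1]
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # best-matching atom
        if j not in support:
            support.append(j)
        # orthogonal projection: least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(n)
    x[support] = coef
    return x

# toy example: recover a 3-sparse vector from 20 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)                # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, k=3)
```

For image recovery, one simply applies this routine to each column of the measured image.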


2. Compressive Sampling Matching Pursuit (CoSaMP) [2]


The foundation of this algorithm is that "For an s-sparse signal x, the vector y can serve as a proxy for the signal, because the energy in each set of s components of y approximates the energy in the corresponding s components of x. In particular, the largest s entries of the proxy y point toward the largest s entries of the signal x."



Fig. 2. Recovery result by using CoSaMP


Fig. 2 illustrates the recovery result of CoSaMP. Again, the image is recovered column by column so as to reduce the complexity. In my experiments, OMP actually performs better than CoSaMP, and CoSaMP is also much more expensive computationally.
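The proxy idea quoted above leads directly to the iteration: pick the 2s largest proxy entries, merge them with the current support, fit by least squares, then prune back to s entries. A minimal NumPy sketch (my own illustrative version, not the MATLAB code linked above):

```python
import numpy as np

def cosamp(A, y, s, iters=30):
    """CoSaMP: use the proxy A.T @ r to propose 2s candidate atoms,
    merge with the current support, solve least squares, prune to s."""
    n = A.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]    # 2s new candidates
        T = np.union1d(omega, np.flatnonzero(x))      # merge supports
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        keep = np.argsort(np.abs(b))[-s:]             # prune to s largest
        x = np.zeros(n)
        x[T[keep]] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-6:
            break
    return x

# toy example: recover a 4-sparse vector from 30 random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(80)
x_true[[5, 22, 60, 71]] = [1.0, -1.3, 0.8, 2.1]
x_hat = cosamp(A, A @ x_true, s=4)
```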



3. Greedy Basis Pursuit (GBP) [3]


The authors state that GBP can discard inappropriate atoms from the selected set of atoms. Note that this property is not reflected in the following description.



Fig. 3. Recovery result by using GBP


According to my experimental results, the performance of GBP, shown in Fig. 3, is better than that of OMP and CoSaMP. However, GBP is also much more complex than OMP and CoSaMP.

4. Iteratively Reweighted Least Squares (IRLS) [4]



Fig. 4. Recovery result by using IRLS
<wbr></wbr>

Fig. 4 shows the recovery result of this algorithm. As can be seen, there is some improvement over the previous algorithms in terms of PSNR.
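The idea behind IRLS [4] is that each iterate solves a weighted least-squares problem subject to Ax = y, with weights computed from the previous iterate and a decreasing smoothing parameter. A minimal NumPy sketch in the Chartrand and Yin style (the fixed epsilon schedule here is my own simplification, not necessarily what my MATLAB code uses):

```python
import numpy as np

def irls(A, y, p=1.0, iters=50):
    """IRLS for noiseless CS: minimize an l_p objective by solving
    x = Q A^T (A Q A^T)^{-1} y, where Q = diag((x_i^2 + eps)^{1 - p/2})
    is rebuilt from the previous iterate; eps is shrunk each iteration."""
    x = np.linalg.pinv(A) @ y          # minimum-l2 starting point
    eps = 1.0
    for _ in range(iters):
        q = (x**2 + eps) ** (1 - p / 2)    # inverse weights
        AQ = A * q                         # A @ diag(q), via broadcasting
        x = q * (A.T @ np.linalg.solve(AQ @ A.T, y))
        eps = max(eps / 10, 1e-8)
    return x

# toy example: recover a 3-sparse vector from 20 random measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[4, 19, 33]] = [1.2, -0.9, 2.0]
x_hat = irls(A, A @ x_true)
```

Because eps never reaches zero, the iterates are only approximately sparse; in practice the spurious entries shrink to negligible magnitude.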


5. Subspace Pursuit (SP) [5]


As stated by the authors, the main difference between SP and CoSaMP is the manner of adding new candidates: SP adds only K new candidates in each iteration, while CoSaMP adds 2K. This makes SP computationally more efficient, but its underlying analysis more involved.



Fig. 5. Recovery result by using SP


Fig. 5 illustrates the recovery result of SP. As can be seen, its performance is better than that of CoSaMP in terms of PSNR.
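The K-candidates-per-iteration structure can be sketched as follows (my own illustrative NumPy version; the stopping rule on the residual norm follows the spirit of [5] but is simplified):

```python
import numpy as np

def subspace_pursuit(A, y, K, iters=30):
    """Subspace Pursuit: like CoSaMP, but only K new candidates are
    added per iteration, and iteration stops once the residual norm
    no longer decreases."""
    n = A.shape[1]
    # initial support: K largest correlations with y
    T = np.argsort(np.abs(A.T @ y))[-K:]
    b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
    r = y - A[:, T] @ b
    for _ in range(iters):
        # add K candidates from the residual proxy, then refit and prune
        T_new = np.union1d(T, np.argsort(np.abs(A.T @ r))[-K:])
        b, *_ = np.linalg.lstsq(A[:, T_new], y, rcond=None)
        T = T_new[np.argsort(np.abs(b))[-K:]]        # keep K largest
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        r_new = y - A[:, T] @ b
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break
        r = r_new
    x = np.zeros(n)
    x[T] = b
    return x

# toy example: recover a 4-sparse vector from 30 random measurements
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 80))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(80)
x_true[[7, 20, 44, 66]] = [1.0, 2.0, -1.2, 0.6]
x_hat = subspace_pursuit(A, A @ x_true, K=4)
```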


6. Regularized Orthogonal Matching Pursuit (ROMP) [6]

This algorithm is similar to OMP and SP; the main difference is that ROMP has a regularization stage. This stage is described by the authors as "Among all subsets J0 ⊂ J with comparable coordinates: |u(i)| ≤ 2|u(j)| for all i, j ∈ J0, choose J0 with the maximal energy ‖u|J0‖2." Furthermore, the authors explain the regularization more specifically: "The regularization step of ROMP, i.e. selecting J0, can be done fast by observing that J0 is an interval in the decreasing rearrangement of coefficients. Moreover, the analysis of the algorithm shows that instead of searching over all intervals J0, it suffices to look for J0 among O(log n) consecutive intervals with endpoints where the magnitude of coefficients decreases by a factor of 2."

I do not fully understand the above description, so I have not been able to implement it properly.


7. Iterative Hard Thresholding (IHT) [7]


I implemented this scheme according to the IHT iteration x^(n+1) = H_s(x^n + Φ^T(y − Φx^n)), where H_s keeps the s largest-magnitude entries. However, the recovery result is terrible, as depicted in Fig. 6. The reason may be that the operator norm ‖Φ‖2 is not smaller than one, i.e. the measurement matrix does not meet the theoretical requirement of IHT.


Fig. 6. Recovery result by using IHT
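The operator-norm issue can be side-stepped by rescaling the measurement matrix before iterating, which leaves the solution unchanged. A minimal NumPy sketch of this fix (my own illustration, not the implementation used for Fig. 6):

```python
import numpy as np

def iht(A, y, s, iters=1000):
    """Iterative Hard Thresholding: a gradient step on ||y - Ax||^2
    followed by keeping the s largest-magnitude entries. IHT requires
    ||A||_2 < 1, so A and y are rescaled first (same solution x)."""
    scale = np.linalg.norm(A, 2) * 1.01        # spectral norm + margin
    As, ys = A / scale, y / scale
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + As.T @ (ys - As @ x)           # gradient step
        keep = np.argsort(np.abs(x))[-s:]      # hard threshold H_s
        mask = np.zeros_like(x, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x

# toy example: recover a 3-sparse vector from 40 random measurements
rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[10, 50, 90]] = [2.0, -1.5, 1.0]
x_hat = iht(A, A @ x_true, s=3)
```

The rescaling makes the effective step size small, so many more iterations are needed than for the greedy methods above.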


8. Linear Programming (LP) [8]


Fig. 7 shows the recovery result by solving the standard BP problem using a primal-dual algorithm.



Fig. 7. Recovery result by using BP


Besides solving the l1 minimization with an equality constraint, the signal can also be recovered by solving the l1 minimization with a quadratic constraint.

Fig. 8 illustrates the recovery result of solving the l1 minimization with a quadratic constraint. As can be seen, the performance is better than the above.


Fig. 8. Recovery result by solving l1 minimization with quadratic constraint
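To make the LP formulation concrete: the equality-constrained problem min ‖x‖1 s.t. Ax = y becomes a standard linear program after splitting x = u − v with u, v ≥ 0. This is a generic-solver sketch using SciPy rather than the l1-Magic primal-dual code of [8]:

```python
import numpy as np
from scipy.optimize import linprog

def bp_linprog(A, y):
    """Basis Pursuit as an LP: with x = u - v, u >= 0, v >= 0, the
    objective ||x||_1 becomes sum(u + v), and Ax = y becomes
    A(u - v) = y."""
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    uv = res.x
    return uv[:n] - uv[n:]

# toy example: recover a 3-sparse vector from 25 random measurements
rng = np.random.default_rng(5)
A = rng.standard_normal((25, 50))
x_true = np.zeros(50)
x_true[[3, 24, 41]] = [1.0, -2.0, 0.5]
x_hat = bp_linprog(A, A @ x_true)
```

The quadratic-constraint variant replaces the equality constraint with ‖Ax − y‖2 ≤ ε, which is a second-order cone program rather than an LP and needs a different solver.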


All the above descriptions and experiments reflect my own understanding; if anything is inaccurate, please contact me at <roy@mail.ustc.edu.cn>.


References

[1] J. Tropp and A. Gilbert, “Signal Recovery from Random Measurements via Orthogonal Matching Pursuit,” 2007.

[2] D. Needell and J. Tropp, “CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples,” 2008.

[3] P. Huggins and S. Zucker, “Greedy Basis Pursuit,” 2006.

[4] R. Chartrand and W. Yin, “Iteratively Reweighted Algorithms for Compressed Sensing,” 2008.

[5] W. Dai and O. Milenkovic, “Subspace Pursuit for Compressive Sensing Signal Reconstruction,” 2009.

[6] D. Needell and R. Vershynin, “Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit,” 2007.

[7] T. Blumensath and M. Davies, “Iterative Hard Thresholding for Compressed Sensing,” 2008.

[8] E. Candes and J. Romberg, “l1-Magic: Recovery of Sparse Signals via Convex Programming,” 2005.

