Image blurring occurs when the object being captured is out of the camera's focus. The top two figures on the right are an example of an image and its blurred version. Restoring the original image given only the blurred version is one of the most interesting topics in image processing. This process is called deblurring, which will be your task for this problem.
In this problem, all images are in grey-scale (no colours). Images are represented as a 2-dimensional matrix of real numbers, where each cell corresponds to the brightness of the corresponding pixel. Although not mathematically accurate, one way to describe a blurred image is through averaging all the pixels that are within (less than or equal to) a certain Manhattan distance from each pixel (including the pixel itself). Here's an example of how to calculate the blurring of a 3x3 image with a blurring distance of 1:
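As a concrete illustration of the averaging rule, here is a minimal C++ sketch. It is not part of the original problem; the helper name blur and the choice of the 3x3 image from the second sample case further below are my own. Running it with distance 1 on the original image 2 30 17 / 25 7 13 / 14 0 35 reproduces the blurred matrix 19 14 20 / 12 15 18 / 13 14 16 from the sample input.

#include <cstdio>
#include <cstdlib>

// Sketch of the blurring rule: each output pixel is the average of all input
// pixels whose Manhattan distance from it is at most d (including itself).
void blur(const double *in, double *out, int n, int m, int d)
{
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
        {
            double sum = 0;
            int cnt = 0;
            for (int k = 0; k < n; ++k)
                for (int l = 0; l < m; ++l)
                    if (abs(k - i) + abs(l - j) <= d)
                    {
                        sum += in[k * m + l];
                        ++cnt;
                    }
            out[i * m + j] = sum / cnt;   // average over the cnt cells in range
        }
}

int main()
{
    // Original 3x3 image from the second sample case, blurred with d = 1;
    // this prints the blurred matrix 19 14 20 / 12 15 18 / 13 14 16.
    double img[9] = {2, 30, 17, 25, 7, 13, 14, 0, 35}, res[9];
    blur(img, res, 3, 3, 1);
    for (int i = 0; i < 9; ++i)
        printf("%6.2f%c", res[i], (i + 1) % 3 == 0 ? '\n' : ' ');
    return 0;
}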
Given the blurred version of an image, we are interested in reconstructing the original version assuming that the image was blurred as explained above.


Input: each test case begins with a line of three integers giving the image dimensions and the blurring distance (read below as M columns, N rows and D), followed by N lines of M real numbers each describing the blurred image. Zero or more lines made entirely of white space may appear between cases. The last line of the input file consists of three zeros.
Sample Input
2 2 1
1 1
1 1
3 3 1
19 14 20
12 15 18
13 14 16
4 4 2
14 15 14 15
14 15 14 15
14 15 14 15
14 15 14 15
0 0 0

Sample Output
1.00 1.00
1.00 1.00
2.00 30.00 17.00
25.00 7.00 13.00
14.00 0.00 35.00
1.00 27.00 2.00 28.00
21.00 12.00 17.00 8.00
21.00 12.00 17.00 8.00
1.00 27.00 2.00 28.00
The Manhattan distance (sometimes called the taxicab distance) between two points is the sum of the absolute differences of their coordinates; for example, the distance between cells (0,0) and (1,2) is |0-1| + |0-2| = 3. The grid on the lower right illustrates the Manhattan distances from the grayed cell.

Problem summary: given an M*N matrix of real numbers (the blurred image), recover the original matrix. Each blurred value is the average of the original values at all cells whose Manhattan distance from that cell is at most d.
Approach: floating-point Gaussian elimination. Treat the n*m original pixel values as unknowns. Each blurred pixel contributes one linear equation: the sum of the original values at all cells within Manhattan distance d of it equals the blurred value multiplied by the number of cells in range. Solving the resulting n*m x n*m system recovers the original image.
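For instance, in the first sample case (a 2x2 blurred image with d = 1 and every blurred value equal to 1.00), exactly 3 cells lie within distance 1 of each cell, so the system is

x(0,0) + x(0,1) + x(1,0) = 3 * 1.00
x(0,0) + x(0,1) + x(1,1) = 3 * 1.00
x(0,0) + x(1,0) + x(1,1) = 3 * 1.00
x(0,1) + x(1,0) + x(1,1) = 3 * 1.00

whose unique solution is x(0,0) = x(0,1) = x(1,0) = x(1,1) = 1.00, matching the first block of the sample output.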
#include <iostream>
#include <cstdio>
#include <cstring>
#include <cmath>
#include <cstdlib>
#include <algorithm>
using namespace std;

const double eps = 1e-7;
int n, m, d;            // n = rows, m = columns, d = blurring distance
double a[110][110];     // augmented matrix: n*m rows, n*m+1 columns
double x[110];          // recovered original pixel values
// Build the coefficient part of the linear system: row t corresponds to the
// blurred pixel (i, j), and its unknowns are the n*m original pixel values.
void init()
{
    memset(a, 0, sizeof(a));
    memset(x, 0, sizeof(x));
    for(int i = 0; i < n; ++i)
    {
        for(int j = 0; j < m; ++j)
        {
            int t = i*m + j, num = 0;
            for(int k = 0; k < n; ++k)
            {
                for(int l = 0; l < m; ++l)
                {
                    if(abs(k-i) + abs(l-j) <= d)
                        a[t][k*m+l] = 1, ++num;
                    else
                        a[t][k*m+l] = 0;
                }
            }
            // The blurred value is an average, so the right-hand side of this
            // equation is num * (blurred value). Store num here; main()
            // multiplies it by the blurred value while reading the input.
            a[t][n*m] = num * 1.0;
        }
    }
}
// Gaussian elimination with partial pivoting on the n*m x (n*m+1) augmented
// matrix, followed by back substitution into x[].
void gauss()
{
    int r = 0, c = 0;
    // Forward elimination.
    while(r < n*m && c < n*m)
    {
        // Pick the row with the largest absolute value in column c as pivot.
        int id = r;
        for(int i = r+1; i < n*m; ++i)
            if(fabs(a[i][c]) - fabs(a[id][c]) > eps)
                id = i;
        if(id != r)
            for(int i = 0; i <= n*m; ++i)
                swap(a[id][i], a[r][i]);
        if(fabs(a[r][c]) > eps)
        {
            // Eliminate column c from all rows below the pivot row.
            for(int i = r+1; i < n*m; ++i)
            {
                if(fabs(a[i][c]) > eps)
                {
                    double t = a[i][c] / a[r][c];
                    for(int j = c; j <= n*m; ++j)
                        a[i][j] -= a[r][j] * t;
                }
            }
            ++r;
        }
        ++c;
    }
    // Back substitution (rows with a zero pivot are skipped).
    for(int i = n*m-1; i >= 0; --i)
    {
        if(fabs(a[i][i]) < eps) continue;
        double tmp = a[i][n*m];
        for(int j = i+1; j < n*m; ++j)
            tmp -= a[i][j] * x[j];
        x[i] = tmp / a[i][i];
    }
}
int main()
{
    int cas = 0;
    // Each case: m n d followed by n rows of m blurred values;
    // a line of three zeros terminates the input.
    while(~scanf("%d%d%d", &m, &n, &d), n+m+d)
    {
        init();
        double num;
        if(cas++) puts("");         // blank line between consecutive cases
        for(int i = 0; i < n; ++i)
        {
            for(int j = 0; j < m; ++j)
            {
                int t = i*m + j;
                scanf("%lf", &num);
                // a[t][n*m] currently holds the cell count; since
                // average * count = sum, this forms the right-hand side.
                a[t][n*m] *= num;
            }
        }
        gauss();
        for(int i = 0; i < n*m; ++i)
        {
            printf("%8.2f", x[i]);
            if((i+1) % m == 0) puts("");
        }
    }
    return 0;
}
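Assuming the source is saved as deblur.cpp (the file name is arbitrary), it can be compiled and checked against the sample with something like g++ -O2 -o deblur deblur.cpp and ./deblur < sample.txt, which should reproduce the values shown in the sample output above (the exact column spacing comes from the %8.2f field width).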
This post presents a deblurring method for grey-scale images: the Manhattan-distance averaging that produced the blurred image is modelled as a system of linear equations, and the original image is recovered by solving that system with floating-point Gaussian elimination.
