Problem link: http://acm.hdu.edu.cn/showproblem.php?pid=1078
FatMouse and Cheese
Time Limit: 2000/1000 MS (Java/Others)    Memory Limit: 65536/32768 K (Java/Others)
FatMouse begins by standing at location (0,0). He eats up the cheese where he stands and then runs either horizontally or vertically to another location. The problem is that there is a super Cat named Top Killer sitting near his hole, so each time he can run at most k locations to get into the hole before being caught by Top Killer. What is worse -- after eating up the cheese at one location, FatMouse gets fatter. So in order to gain enough energy for his next run, he has to run to a location which has more blocks of cheese than the current one.
Given n, k, and the number of blocks of cheese at each grid location, compute the maximum amount of cheese FatMouse can eat before being unable to move.
Input: there are several test cases. Each test case consists of a line containing two integers between 1 and 100, n and k, followed by n lines, each with n numbers: the first line contains the number of blocks of cheese at locations (0,0) (0,1) ... (0,n-1); the next line contains the number of blocks of cheese at locations (1,0), (1,1), ... (1,n-1), and so on. The input ends with a pair of -1's.
Output: for each test case, output a single line containing the maximum number of blocks of cheese FatMouse can collect.
Sample Input:
3 1
1 2 5
10 11 6
12 12 7
-1 -1
Sample Output:
37
Idea: sort all the cells by their cheese count; then, when a cell is processed, every cell with less cheese has already been given its final (optimal) dp value. Note that the flag array records whether a cell is unreachable from the top-left corner; unreachable cells must not be used in updates. (On the sample, one optimal route is 1→2→5→6→11→12, giving 37.)
Code:
#include <iostream>
#include <cstdio>
#include <cstring>
#include <algorithm>
using namespace std;

#define N 110

int m[N][N];      // cheese at each cell
int flag[N][N];   // flag[x][y] = 1 means (x,y) is unreachable from (0,0)
int dp[N][N];     // dp[x][y] = max cheese collectible on a path ending at (x,y)

struct Node
{
    int num;      // cheese at this cell
    int x, y;
} node[N * N];

bool cmp(Node a, Node b)
{
    return a.num < b.num;    // process cells in increasing cheese order
}

int main()
{
    int n, k;
    while (scanf("%d %d", &n, &k) && n > 0 && k > 0)
    {
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n; j++)
            {
                scanf("%d", &m[i][j]);
                node[i * n + j].num = m[i][j];
                node[i * n + j].x = i;
                node[i * n + j].y = j;
            }
        }
        sort(node, node + n * n, cmp);
        memset(dp, 0, sizeof(dp));
        memset(flag, 0, sizeof(flag));
        dp[0][0] = m[0][0];
        int ans = dp[0][0];
        int maxn;
        for (int i = 0; i < n * n; i++)
        {
            int x = node[i].x, y = node[i].y;
            maxn = -1;
            // scan up to k cells in each of the four directions;
            // only strictly smaller, reachable cells may feed this one
            for (int j = 1; j <= k; j++)
            {
                if (x - j >= 0 && m[x - j][y] < m[x][y] && !flag[x - j][y])
                    maxn = max(maxn, dp[x - j][y]);
                if (x + j < n && m[x + j][y] < m[x][y] && !flag[x + j][y])
                    maxn = max(maxn, dp[x + j][y]);
                if (y - j >= 0 && m[x][y - j] < m[x][y] && !flag[x][y - j])
                    maxn = max(maxn, dp[x][y - j]);
                if (y + j < n && m[x][y + j] < m[x][y] && !flag[x][y + j])
                    maxn = max(maxn, dp[x][y + j]);
            }
            if (maxn == -1)          // no reachable smaller neighbor
            {
                if (x == 0 && y == 0)
                    continue;        // the start is always reachable
                flag[x][y] = 1;      // mark (x,y) as unreachable
                continue;
            }
            dp[x][y] = max(dp[x][y], maxn + m[x][y]);
            ans = max(ans, dp[x][y]);
        }
        printf("%d\n", ans);
    }
    return 0;
}