poj3259

While exploring FJ's farms, a series of mysterious wormholes was discovered. FJ wants to exploit them: start at some field, travel through paths and wormholes, and return to a point in time before his departure. This post walks through solving the problem with the SPFA algorithm and deciding, for each farm, whether such time travel is possible.
Wormholes

Description

While exploring his many farms, Farmer John has discovered a number of amazing wormholes. A wormhole is very peculiar because it is a one-way path that delivers you to its destination at a time that is BEFORE you entered the wormhole! Each of FJ's farms comprises N (1 ≤ N ≤ 500) fields conveniently numbered 1..N, M (1 ≤ M ≤ 2500) paths, and W (1 ≤ W ≤ 200) wormholes.

As FJ is an avid time-traveling fan, he wants to do the following: start at some field, travel through some paths and wormholes, and return to the starting field a time before his initial departure. Perhaps he will be able to meet himself :) .

To help FJ find out whether this is possible or not, he will supply you with complete maps to F (1 ≤ F ≤ 5) of his farms. No paths will take longer than 10,000 seconds to travel and no wormhole can bring FJ back in time by more than 10,000 seconds.

Input

Line 1: A single integer, F. F farm descriptions follow.
Line 1 of each farm: Three space-separated integers respectively: N, M, and W.
Lines 2..M+1 of each farm: Three space-separated numbers (S, E, T) that describe, respectively: a bidirectional path between S and E that requires T seconds to traverse. Two fields might be connected by more than one path.
Lines M+2..M+W+1 of each farm: Three space-separated numbers (S, E, T) that describe, respectively: a one-way path from S to E that also moves the traveler back T seconds.

Output

Lines 1.. F: For each farm, output "YES" if FJ can achieve his goal, otherwise output "NO" (do not include the quotes).

Sample Input

2
3 3 1
1 2 2
1 3 4
2 3 1
3 1 3
3 2 1
1 2 3
2 3 4
3 1 8

Sample Output

NO
YES

Hint

For farm 1, FJ cannot travel back in time. 
For farm 2, FJ could travel back in time by the cycle 1->2->3->1, arriving back at his starting location 1 second before he leaves. He could start from anywhere on the cycle to accomplish this.

Solution

In SPFA, when a negative cycle is present, some node must enter the queue at least n times, where n is the number of nodes. So it suffices to count each node's enqueues and report a negative cycle as soon as any count reaches n.
#include <iostream>
#include <queue>
#include <cstring>
using namespace std;

const int MAXN = 505;        // N <= 500, so 505 is enough (5500x5500 would blow the memory limit)
const int INF  = 100000000;

int g[MAXN][MAXN];           // adjacency matrix of edge weights ("map" shadows std::map)
int dist[MAXN];
int cnt[MAXN];               // how many times each node has been enqueued
int inqueue[MAXN];

// Returns 1 if a negative cycle is found, 0 otherwise.
int spfa(int n) {
    queue<int> Q;
    memset(inqueue, 0, sizeof(inqueue));
    memset(cnt, 0, sizeof(cnt));
    for (int i = 1; i <= n; i++)
        dist[i] = INF;
    dist[1] = 0;
    inqueue[1] = 1;
    cnt[1]++;
    Q.push(1);
    while (!Q.empty()) {
        int tmp = Q.front();
        Q.pop();
        inqueue[tmp] = 0;
        for (int i = 1; i <= n; i++) {
            // Skip non-edges so we never relax through weight INF.
            if (g[tmp][i] < INF && dist[tmp] + g[tmp][i] < dist[i]) {
                dist[i] = dist[tmp] + g[tmp][i];
                if (!inqueue[i]) {
                    cnt[i]++;
                    // A node enqueued n or more times implies a negative cycle.
                    if (cnt[i] >= n)
                        return 1;
                    inqueue[i] = 1;
                    Q.push(i);
                }
            }
        }
    }
    return 0;
}

int main() {
    int farms;
    cin >> farms;
    while (farms--) {
        int n, m, w;
        cin >> n >> m >> w;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                g[i][j] = INF;
        for (int i = 0; i < m; i++) {      // bidirectional paths; keep the cheapest
            int s, e, t;
            cin >> s >> e >> t;
            if (g[s][e] > t)
                g[s][e] = g[e][s] = t;
        }
        for (int i = 0; i < w; i++) {      // one-way wormholes: negative weight
            int s, e, t;
            cin >> s >> e >> t;
            if (-t < g[s][e])
                g[s][e] = -t;
        }
        cout << (spfa(n) ? "YES" : "NO") << "\n";
    }
    return 0;
}

