1030. Travel Plan (30) - PAT Advanced Level

A traveler’s map gives the distances between cities along the highways, together with the cost of each highway. Now you are supposed to write a program to help a traveler to decide the shortest path between his/her starting city and the destination. If such a shortest path is not unique, you are supposed to output the one with the minimum cost, which is guaranteed to be unique.

Input Specification:

Each input file contains one test case. Each case starts with a line containing 4 positive integers N, M, S, and D, where N (<=500) is the number of cities (and hence the cities are numbered from 0 to N-1); M is the number of highways; S and D are the starting and the destination cities, respectively. Then M lines follow, each provides the information of a highway, in the format:

City1 City2 Distance Cost

where the numbers are all integers no more than 500, and are separated by a space.

Output Specification:

For each test case, print in one line the cities along the shortest path from the starting point to the destination, followed by the total distance and the total cost of the path. The numbers must be separated by a space and there must be no extra space at the end of output.

Sample Input
4 5 0 3
0 1 1 20
1 3 2 30
0 3 4 10
0 2 2 20
2 3 1 20
Sample Output
0 2 3 3 40
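
In this sample there are two routes with the shortest total distance 3: 0 -> 1 -> 3 (cost 20 + 30 = 50) and 0 -> 2 -> 3 (cost 20 + 20 = 40), while the direct highway 0 -> 3 has distance 4. The tie is broken by the smaller cost, so the answer is 0 2 3 with total distance 3 and total cost 40.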


Analysis: This is a standard shortest-path problem with an extra cost attribute. Run Dijkstra to compute the shortest distances, keeping for every node a pre list of all of its predecessors on shortest paths; then DFS backwards from the destination along those predecessor lists to enumerate every shortest path, sum its cost, and keep the one with the minimum cost.

#include <cstdio>
#include <algorithm>
#include <vector>
using namespace std;
int n , m , s , d;
int e[510][510] , dis[510] , cost[510][510];   // distance matrix, shortest-distance array, cost matrix
vector<int> pre[510];                          // pre[v]: every predecessor of v on some shortest path
bool visit[510];
const int inf = 99999999;
vector<int> path , temppath;                   // best path found so far, and the path being built by dfs
int mincost = inf;
// Walk back from v along the pre[] lists; every route that reaches s is one of
// the shortest paths, and the cheapest one is kept in path / mincost.
void dfs (int v)
{
    if (v == s)
    {
        temppath.push_back (v);
        int tempcost = 0;
        for (int i = temppath.size () - 1; i > 0; i--)
        {
            int id = temppath[i] , nextid = temppath[i - 1];
            tempcost += cost[id][nextid];
        }
        if (tempcost < mincost)
        {
            mincost = tempcost;
            path = temppath;
        }
        temppath.pop_back ();
        return;
    }
    temppath.push_back (v);
    for (int i = 0; i < pre[v].size (); i++)
        dfs (pre[v][i]);
    temppath.pop_back ();
}
int main ()
{
    fill (e[0] , e[0] + 510 * 510 , inf);
    fill (dis , dis + 510 , inf);
    scanf_s ("%d%d%d%d" , &n , &m , &s , &d);
    for (int i = 0; i < m; i++)
    {
        int a , b;
        scanf ("%d%d" , &a , &b);
        scanf ("%d" , &e[a][b]);
        e[b][a] = e[a][b];                     // highways are bidirectional
        scanf ("%d" , &cost[a][b]);
        cost[b][a] = cost[a][b];
    }
    pre[s].push_back (s);
    dis[s] = 0;
    for (int i = 0; i < n; i++)
    {
        int u = -1 , minn = inf;
        for (int j = 0; j < n; j++)
        {
            if (visit[j] == false && dis[j] < minn)
            {
                u = j;
                minn = j;
            }
        }
        if (u == -1) break;
        visit[u] = true;
        for (int v = 0; v < n; v++)
        {
            if (visit[v] == false && e[u][v] != inf)
            {
                if (dis[v] > dis[u] + e[u][v])
                {
                    dis[v] = dis[u] + e[u][v];
                    pre[v].clear ();
                    pre[v].push_back (u);
                }
                else if (dis[v] == dis[u] + e[u][v])
                {
                    pre[v].push_back (u);
                }
            }
        }
    }
    dfs (d);                                   // pick the cheapest among all shortest paths
    for (int i = path.size () - 1; i >= 0; i--)
        printf ("%d " , path[i]);              // path was collected from d back to s, so print it reversed
    printf ("%d %d" , dis[d] , mincost);
    return 0;
}
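
An equally common way to solve this problem is to break the tie inside Dijkstra itself: carry a running total cost and a single predecessor per node, relax on the (distance, cost) pair, and skip the DFS entirely. The sketch below is an alternative to the code above, not a rewrite of it; the names w, c and parent are illustrative, and it assumes the destination is reachable from the start.

#include <cstdio>
#include <algorithm>
#include <vector>
using namespace std;
const int inf = 99999999;
int e[510][510] , w[510][510];                 // edge length and edge cost
int dis[510] , c[510] , parent[510];           // shortest distance, its total cost, single predecessor
bool vis[510];
int main ()
{
    int n , m , s , d;
    fill (e[0] , e[0] + 510 * 510 , inf);
    fill (dis , dis + 510 , inf);
    fill (c , c + 510 , inf);
    scanf ("%d%d%d%d" , &n , &m , &s , &d);
    for (int i = 0; i < m; i++)
    {
        int a , b , len , cst;
        scanf ("%d%d%d%d" , &a , &b , &len , &cst);
        e[a][b] = e[b][a] = len;
        w[a][b] = w[b][a] = cst;
    }
    dis[s] = 0; c[s] = 0; parent[s] = -1;
    for (int i = 0; i < n; i++)
    {
        int u = -1 , minn = inf;
        for (int j = 0; j < n; j++)
            if (!vis[j] && dis[j] < minn) { u = j; minn = dis[j]; }
        if (u == -1) break;
        vis[u] = true;
        for (int v = 0; v < n; v++)
        {
            if (vis[v] || e[u][v] == inf) continue;
            // relax on distance first, then on total cost when distances tie
            if (dis[u] + e[u][v] < dis[v] ||
                (dis[u] + e[u][v] == dis[v] && c[u] + w[u][v] < c[v]))
            {
                dis[v] = dis[u] + e[u][v];
                c[v] = c[u] + w[u][v];
                parent[v] = u;                 // one predecessor is enough with this tie-break
            }
        }
    }
    vector<int> route;                         // walk parent[] back from d (assumes d is reachable)
    for (int v = d; v != -1; v = parent[v]) route.push_back (v);
    for (int i = (int)route.size () - 1; i >= 0; i--) printf ("%d " , route[i]);
    printf ("%d %d" , dis[d] , c[d]);
    return 0;
}

Because all edge lengths and costs are non-negative, relaxing the (distance, cost) pair greedily is as safe as relaxing the distance alone, and the problem guarantees that the minimum-cost shortest path is unique.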