The Largest Generation (25)

This post discusses an algorithm problem: given the membership information of a family (pedigree) tree, find the most populous generation and its level. A child-to-parent mapping is built from the given parent–child relations; for each member, the parent chain is walked up to the root to determine that member's generation, and the per-generation counts then give the largest generation.



Time limit: 1000 ms    Memory limit: 65536 KB    Code length limit: 100 KB    Judge: Standard (from 小小)
Problem Description
A family hierarchy is usually presented by a pedigree tree where all the nodes on the same level belong to the same generation. Your task is to find the generation with the largest population.

Input Description:
Each input file contains one test case. Each case starts with two positive integers N (<100) which is the total number of family members in the tree (and hence assume that all the members are numbered from 01 to N), and M ( < N ) which is the number of family members who have children. Then M lines follow, each contains the information of a family member in the following format:

ID K ID[1] ID[2] … ID[K]
where ID is a two-digit number representing a family member, K (>0) is the number of his/her children, followed by a sequence of two-digit ID’s of his/her children. For the sake of simplicity, let us fix the root ID to be 01. All the numbers in a line are separated by a space.
Output Description:
For each test case, print in one line the largest population number and the level of the corresponding generation. It is assumed that such a generation is unique, and the root level is defined to be 1.
Sample Input:
23 13
21 1 23
01 4 03 02 04 05
03 3 06 07 08
06 2 12 13
13 1 21
08 2 15 16
02 2 09 10
11 2 19 20
17 1 22
05 1 11
07 1 14
09 1 17
10 1 18

Sample Output:
9 4
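
A quick check of the sample: level 1 is just the root 01; level 2 is 02, 03, 04, 05 (4 members); level 3 is 06, 07, 08, 09, 10, 11 (6 members); level 4 is 12 through 20 (9 members); level 5 is 21 and 22 (2 members); level 6 is 23 (1 member). Level 4 is the largest with 9 members, which matches the expected output 9 4.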

#include <iostream>
#include <cstdlib>
#include <fstream>
using namespace std;

/*
    Problem summary: given the total number of family members, the number of members
    who have children, and a two-digit ID for each member, find how many people are
    in the most populous generation and which generation (level) that is.
    Approach: build family[child] = parent, then compute each member's generation
    bottom-up by walking from that member to the root.
*/
int getLayer(int *family, int child)
{
    int gen = 1;
    //walk up the parent chain one step at a time
    while (family[child] != child)
    {
        child = family[child];
        gen++;
    }
    return gen;
}

int main()
{
    int n,m;
    //ifstream fin("test.txt");
    cin >> n >> m;
    //index: child ID, value: parent ID
    int *family = (int *)malloc((n+1)*sizeof(int));
    family[1] = 1;  //the root (ID 01) is its own parent
    for (int i = 0; i < m; i++)
    {
        int parent, childNum;
        cin >> parent >> childNum;
        for (int j = 0; j < childNum; j++)
        {
            int child;
            cin >> child;
            family[child] = parent;
        }
    }
    //count how many members fall on each level (generation)
    int *layer = (int *)malloc((n + 1)*sizeof(int));
    for (int i = 1; i <= n; i++)
        layer[i] = 0;
    for (int i = 1; i <= n; i++)
        layer[getLayer(family, i)]++;   //accumulate the count for this member's generation
    int maxPeople=1,p=1;
    for (int i = 1; i <= n; i++)
    {
        if (layer[i] > maxPeople)
        { 
            maxPeople = layer[i];
            p = i;
        }
    }
    cout << maxPeople << ' ' << p << endl;
    //getchar();
    free(family);
    free(layer);
    return 0;
}
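
For comparison, below is a minimal alternative sketch (not part of the original solution; the variable names are mine) that stores child lists instead of parents and counts each generation with a breadth-first traversal, so the parent chain never has to be re-walked.

#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main()
{
    int n, m;
    cin >> n >> m;
    //adjacency list: children[parent] holds the IDs of that member's children
    vector<vector<int>> children(n + 1);
    for (int i = 0; i < m; i++)
    {
        int parent, k;
        cin >> parent >> k;
        for (int j = 0; j < k; j++)
        {
            int child;
            cin >> child;
            children[parent].push_back(child);
        }
    }
    //breadth-first traversal from the root (ID 01); each pass over the queue is one generation
    int maxPeople = 0, bestLevel = 1;
    queue<int> q;
    q.push(1);
    for (int level = 1; !q.empty(); level++)
    {
        int count = (int)q.size();  //number of members on this level
        if (count > maxPeople)
        {
            maxPeople = count;
            bestLevel = level;
        }
        for (int i = 0; i < count; i++)
        {
            int node = q.front(); q.pop();
            for (int child : children[node])
                q.push(child);
        }
    }
    cout << maxPeople << ' ' << bestLevel << endl;
    return 0;
}

The breadth-first version is O(N) overall, while walking the parent chain for every member is O(N * depth); with N < 100 both run instantly, so the difference is purely stylistic.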