1094 The Largest Generation (25 point(s))

This post presents a solution to a programming problem that asks for the most populous generation in a family tree. A depth-first search (DFS) traverses the tree structure, counting the population of each generation, and the largest generation together with its population is then reported.

A family hierarchy is usually presented by a pedigree tree where all the nodes on the same level belong to the same generation. Your task is to find the generation with the largest population.

Input Specification:

Each input file contains one test case. Each case starts with two positive integers N (<100) which is the total number of family members in the tree (and hence assume that all the members are numbered from 01 to N), and M (<N) which is the number of family members who have children. Then M lines follow, each contains the information of a family member in the following format:

ID K ID[1] ID[2] ... ID[K]

where ID is a two-digit number representing a family member, K (>0) is the number of his/her children, followed by a sequence of two-digit ID's of his/her children. For the sake of simplicity, let us fix the root ID to be 01. All the numbers in a line are separated by a space.

Output Specification:

For each test case, print in one line the largest population number and the level of the corresponding generation. It is assumed that such a generation is unique, and the root level is defined to be 1.

Sample Input:

23 13
21 1 23
01 4 03 02 04 05
03 3 06 07 08
06 2 12 13
13 1 21
08 2 15 16
02 2 09 10
11 2 19 20
17 1 22
05 1 11
07 1 14
09 1 17
10 1 18

Sample Output:

9 4

Tree traversal; compare problem 1004.

Key concept: DFS.

Pitfall: initialize maxLevel to 1, otherwise a tree consisting of only the root node produces no output (test case #1 gets WA, 23/25 points).

```cpp
#include<iostream>
#include<vector>
using namespace std;
const int MAX = 107;
vector<int> graph[MAX];  // graph[id] holds the child IDs of member id
int level[MAX] = {0};    // level[id] = generation of member id (root = 1)
int cnt[MAX] = {0};      // cnt[g] = population of generation g
int maxLevel = 1;        // at least 1: a lone root still forms generation 1
void dfs(int v){
	cnt[level[v]]++;
	for(int i = 0; i < graph[v].size(); i++){
		int cur = graph[v][i];
		level[cur] = level[v] + 1;
		maxLevel = max(maxLevel, level[cur]);
		dfs(cur);
	}
}
int main(void){
	int N, M;
	cin >> N >> M;
	int id, k, a;
	while(M--){
		cin >> id >> k;
		while(k--){
			cin >> a;
			graph[id].push_back(a);
		}
	}
	int root = 1;
	level[root] = 1;
	dfs(root);
	int maxCnt = 0, generation = 1;
	for(int i = 1; i <= maxLevel; i++){
		if(cnt[i] > maxCnt){
			maxCnt = cnt[i];
			generation = i;
		}
	}
	cout << maxCnt << " " << generation << endl;
	return 0;
}
```

For comparison, take a look at other people's more concise solutions that use iterators!
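As a sketch of that style: the same per-generation counting can also be done with a level-order (BFS) traversal, iterating each node's child list with an explicit iterator. The names `children` and `largestGeneration` are chosen here for illustration and are not from the original code.

```cpp
#include <queue>
#include <utility>
#include <vector>
using namespace std;

vector<int> children[107];  // children[id] lists the child IDs of member id

// Level-order traversal: at the start of each outer loop iteration the queue
// holds exactly one generation, so its size is that generation's population.
pair<int, int> largestGeneration() {
    queue<int> q;
    q.push(1);  // the root ID is fixed to 01
    int best = 0, bestLevel = 1, level = 0;
    while (!q.empty()) {
        ++level;
        int population = q.size();
        if (population > best) { best = population; bestLevel = level; }
        for (int i = 0; i < population; ++i) {
            int v = q.front(); q.pop();
            for (auto it = children[v].begin(); it != children[v].end(); ++it)
                q.push(*it);  // enqueue the next generation
        }
    }
    return {best, bestLevel};  // {population, level of that generation}
}
```

Reading the input exactly as in the DFS version and then printing `largestGeneration().first` and `.second` reproduces the sample output `9 4`.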

### ONNXRuntime-GPU Version 1.18.1 Installation and Usage

ONNX Runtime is an open-source runtime for machine learning models that supports multiple frameworks, including PyTorch, TensorFlow, Keras, and Scikit-learn, enabling high-performance inference across various platforms. Below are the details for installing `onnxruntime-gpu` version 1.18.1.

#### Prerequisites

Before installing `onnxruntime-gpu`, ensure the following prerequisites are met:

1. **CUDA Toolkit**: Install the CUDA toolkit matching your GPU architecture. For NVIDIA GPUs, verify compatibility between the CUDA toolkit version and the driver.
2. **cuDNN Library**: Confirm cuDNN has been set up to work alongside CUDA.
3. **Python Environment**: Use a virtual environment (e.g., via Conda or venv) so that dependencies do not conflict with other projects.

#### Installing onnxruntime-gpu Using Pip

To install `onnxruntime-gpu` version 1.18.1 through pip while leveraging a mirror repository such as TUNA at Tsinghua University, run:

```bash
pip install --upgrade onnxruntime-gpu==1.18.1 -i https://pypi.tuna.tsinghua.edu.cn/simple
```

The specified URL speeds up downloads by using China's largest academic software mirror service. Alternatively, with access to the global repositories, run the standard command instead:

```bash
pip install onnxruntime-gpu==1.18.1
```

For users who prefer Anaconda environments over traditional Python setups, the conda-forge channel often provides precompiled binaries optimized for its ecosystem:

```bash
conda install -c conda-forge onnxruntime-gpu=1.18.1
```

#### Verifying Successful Installation

After completing the setup, validate the installation by importing the package in Python:

```python
import onnxruntime
print(onnxruntime.__version__)
```

If no errors occur and the output is `1.18.1`, the installation succeeded. Additionally, check the available execution providers, which indicate whether hardware-acceleration backends (e.g., CUDA, TensorRT, or DirectML) are supported by your system configuration:

```python
providers = onnxruntime.get_available_providers()
if 'CUDAExecutionProvider' in providers:
    print('GPU Support Enabled')
else:
    print('Only CPU Execution Available.')
```

#### Basic Inference Example with an ONNX Model

The following minimal example loads a pretrained network saved in `.onnx` format and runs a prediction on dummy input data, taking advantage of the accelerated tensor operations common in modern deep learning pipelines:

```python
import numpy as np
import onnxruntime as ort

session_options = ort.SessionOptions()
# Enable optimizations & parallelism settings here...

model_path = './path/to/model.onnx'
ort_session = ort.InferenceSession(model_path, session_options)

input_name = ort_session.get_inputs()[0].name
output_names = [o.name for o in ort_session.get_outputs()]

dummy_input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = ort_session.run(output_names, input_feed={input_name: dummy_input_data})

for idx, out_tensor in enumerate(outputs):
    print(f"Output {idx}: Shape={out_tensor.shape}")
```