UVA 10859 Placing Lampposts -- Tree DP

This post solves UVA 10859 with tree DP: light every edge with the minimum number of lamps and, subject to that, maximize the number of edges lit by two lamps. It shows how to fold the two objectives into a single value and minimize it with dynamic programming.


Problem link: http://vjudge.net/problem/UVA-10859


Problem statement:

You are given an undirected acyclic graph with N vertices and M edges, i.e., a forest. A lamp may be placed at any vertex, and each lamp lights every edge that has that vertex as an endpoint. Every edge must be lit. The primary goal is to minimize the total number of lamps; among all such placements, maximize the number of edges lit by two lamps. Report the minimum number of lamps, the number of edges lit by two lamps, and the number of edges lit by only one lamp.


Analysis:

A very useful trick: when two quantities must be optimized with priorities, say minimize a first and then minimize b,
they can be combined into a single objective, minimize M*a + b, where M is a constant strictly larger than the largest value b can ever take.
The individual answers are then recovered as a = ans / M and b = ans % M.

Back to this problem: we must place as few lamps as possible and, among such placements, light as many edges as possible with two lamps.
Since every edge is lit by either one lamp or two, maximizing the doubly-lit edges is equivalent to minimizing the singly-lit edges.
So the task becomes: minimize M*a + b, where a is the number of lamps placed and b is the number of edges lit by only one lamp.
The arrays below are sized for n up to about 1000, so b stays below 1000 and M = 2000 is safely larger than any value b can take.
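
A quick standalone sketch (with made-up values for a and b, not tied to any input of this problem) of how the combined value is split back into its two parts:

#include <cstdio>

int main()
{
	const int M = 2000;            // must be larger than any possible value of b
	int a = 3, b = 2;              // hypothetical result: 3 lamps, 2 singly-lit edges
	int combined = M * a + b;      // minimizing this minimizes a first, then b
	// recover the two answers
	printf("lamps = %d, singly-lit edges = %d\n", combined / M, combined % M);
	return 0;
}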

dp[u][1]: the minimum value of M*(lamps) + (singly-lit edges) over the subtree rooted at u, given that a lamp is placed at u.
dp[u][0]: the same minimum given that no lamp is placed at u (the edge between u and its parent is accounted for when the parent is processed).
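
Written out, the transitions that the code below implements are (v ranging over the children of u in the DFS tree):

dp[u][0] = sum over children v of ( dp[v][1] + 1 )
dp[u][1] = M + sum over children v of min( dp[v][1], dp[v][0] + 1 )

Each tree of the forest contributes min(dp[root][0], dp[root][1]) to the total ans.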


#define _CRT_SECURE_NO_DEPRECATE 

#include<iostream>
#include<vector>
#include<cstring>
#include<queue>
#include<stack>
#include<algorithm>
#include<cmath>
#include<string>
#include<stdio.h>
#define INF 99999999
#define eps 0.0001
using namespace std;

int t;
int n, m;
const int M = 2000;        // weight of one lamp in the combined objective; larger than any possible count of singly-lit edges
int dp[1005][2];           // dp[u][0] / dp[u][1]: best combined value for u's subtree with u unlit / lit
int cnt[1005];             // not used by this solution
bool vis[1005];
vector<int> vec[1005];     // adjacency list

void tree_dp(int u)
{
	vis[u] = 1;
	dp[u][0] = 0;  // u has no lamp
	dp[u][1] = M;  // u has a lamp: one lamp costs M in the combined objective
	for (int i = 0; i < vec[u].size(); i++)
	{
		int v = vec[u][i];
		if (!vis[v])
		{
			tree_dp(v);
			// u unlit: the child must place a lamp, and edge (u,v) is lit by only one lamp, hence +1
			dp[u][0] += (dp[v][1] + 1);
			// u lit: the child may skip its lamp, but then edge (u,v) is lit by only one lamp (+1)
			dp[u][1] += min(dp[v][1], dp[v][0] + 1);
		}
	}
}

int main() 
{
	scanf("%d", &t);
	while (t--)
	{
		scanf("%d%d", &n, &m);
		for (int i = 0; i < n; i++)
		{
			vec[i].clear();
			cnt[i] = 0;
			vis[i] = 0;
		}

		int u, v;
		for (int i = 0; i < m; i++)
		{
			scanf("%d%d", &u, &v);
			vec[u].push_back(v);
			vec[v].push_back(u);
		}

		int ans = 0;
		// the graph may be a forest: start the DP from every unvisited vertex
		for (int i = 0; i < n; i++)
			if (!vis[i])
			{
				tree_dp(i);
				ans += min(dp[i][0], dp[i][1]);
			}

		printf("%d %d %d\n", ans / M, m - ans%M, ans%M);  // lamps, edges lit by two lamps, edges lit by one lamp
	}
	return 0;
}


