UVA 12166 Equilibrium Mobile (balance property + DFS)

This post describes a DFS-based recursive solution to this mobile (balance) problem: choose a reference weight so that the number of weights that must change is minimized, use a map to count, for each candidate total weight, how many leaves are already consistent with it, and recursively split the expression to handle the sub-mobiles.

Approach: the problem asks for the minimum number of weights that must be changed, so we pick a reference weight as follows. For any leaf weight m, if that leaf is kept unchanged and the whole mobile is balanced, the total weight of the mobile must be m * 2^depth (i.e. m << depth), where depth is the depth of that leaf, counted from 0 at the root. Use a map<long long, int> that maps each candidate total weight (m << depth) to the number of leaves consistent with it; the candidate supported by the most leaves is chosen as the reference, and the answer is the total number of leaves minus that count.
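As a quick worked example (an illustrative mobile, not the official sample): for [[1,3],2], the leaves 1 and 3 are at depth 2 and the leaf 2 is at depth 1, so the candidate totals are 1<<2 = 4, 3<<2 = 12 and 2<<1 = 4. The total 4 is supported by two leaves, so at most two leaves can stay unchanged and the answer is 3 - 2 = 1 (for instance, change the 3 to a 1, giving the balanced mobile [[1,1],2]).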

DFS recursion: split the expression into its two sub-mobiles at the top-level comma (the ',' encountered while the bracket counter is 0) and recurse into each half. For example, [[1,3],2] splits into [1,3] and 2; the comma inside [1,3] is skipped because the bracket counter is positive there.

#include <iostream>
#include <string>
#include <map>
#include <algorithm>
using namespace std;

map<long long, int> map1;   // candidate total weight -> number of leaves consistent with it
int total;                  // number of leaves (weights) in the mobile
string str;

// Parse the sub-mobile str[cur..last] at the given depth.
// A sub-mobile is either a number or "[left,right]".
void dfs(int cur, int last, int depth) {
    if (str[cur] == '[') {
        int bal = 0;                             // bracket balance inside this sub-mobile
        for (int i = cur + 1; i < last; i++) {
            if (str[i] == '[') bal++;
            else if (str[i] == ']') bal--;
            else if (str[i] == ',' && bal == 0) {
                dfs(cur + 1, i - 1, depth + 1);  // left half
                dfs(i + 1, last - 1, depth + 1); // right half
                break;                           // stop once the top-level comma is found
            }
        }
    } else {
        total++;
        long long sum = 0;                       // must be long long: sum << depth can exceed int
        while (cur <= last) {
            sum = sum * 10 + str[cur] - '0';
            cur++;
        }
        map1[sum << depth]++;                    // total weight of the mobile if this leaf is kept
    }
}

int main() {
    int T;
    cin >> T;
    while (T--) {
        cin >> str;
        total = 0;
        dfs(0, (int)str.length() - 1, 0);
        int max1 = 0;                            // most leaves consistent with a single total weight
        for (map<long long, int>::iterator iter = map1.begin(); iter != map1.end(); ++iter)
            max1 = max(max1, iter->second);
        cout << total - max1 << "\n";
        map1.clear();
    }
    return 0;
}
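Sanity check (on the illustrative mobile from above, not the official UVA sample): with the input

1
[[1,3],2]

the program prints 1, matching the hand computation.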

