UVa 10410 - Tree Reconstruction

This post covers UVa 10410: given the BFS order and the DFS order of a tree (not necessarily binary), reconstruct the tree. The idea is to split the DFS sequence into segments and keep the pending subtrees in a queue; once the relationship between the two traversals is clear, the reconstruction follows directly. AC code is included.

First, note that the tree is not necessarily a binary tree. The approach is to cut the DFS (preorder) sequence into segments, one per subtree, and keep the pending segments in a queue. Once the relationship between BFS and DFS is clear, the rest is straightforward: since the queue hands out segments in BFS order, the children of the current segment's root are exactly the next unmatched values in the BFS sequence, so the segment is split wherever one of those values appears.
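
To make the splitting concrete, here is a small example of my own (not from the judge's data). Take the tree with root 1 whose children are 2 and 3, where node 2 in turn has children 4 and 5. Its BFS order is 1 2 3 4 5 and one valid DFS (preorder) order is 1 2 4 5 3. The whole DFS range [0, 5) enters the queue first; its root is dfs[0] = 1, and while scanning the rest of the range, dfs[1] = 2 and dfs[4] = 3 match the next unused BFS values, so 2 and 3 become children of 1 and the range splits into the child segments [1, 4) and [4, 5). Segment [1, 4) is then processed the same way with root 2, yielding children 4 and 5; segments of size 0 or 1 are skipped by the size check.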

AC code:

#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cctype>
#include <cstring>
#include <string>
#include <sstream>
#include <vector>
#include <set>
#include <map>
#include <algorithm>
#include <stack>
#include <queue>
#include <bitset> 
#include <cassert> 
#include <cmath>

using namespace std;

const int maxn = 1005;

struct Seg {
	int lef, rig;
	Seg(int lef, int rig) : lef(lef), rig(rig) {}
};

queue<Seg> q;
vector<int> v[maxn];

int bfs[maxn], dfs[maxn];
int n;

void solve()
{
	// push the whole tree (the entire DFS range) onto the queue first
	q.push(Seg(0, n));
	int p = 1;
	int root;
	while (!q.empty()) {
		Seg s = q.front();
		q.pop();
		if (s.rig - s.lef <= 1 || p == n) {
			continue; // single-node (or empty) segment, or every BFS value is already matched
		}

		root = dfs[s.lef];
		int pre = s.lef + 1; // skip the root; its children start after it
		// scan the DFS order for the root's child subtrees
		for (int i = pre; i < s.rig; i++) {
			if (dfs[i] == bfs[p]) { // dfs[i] is the next unmatched BFS value, so it is a child of root
				q.push(Seg(pre, i)); // [pre, i) holds the previous child's subtree (empty on the first match)
				v[root].push_back(dfs[i]);
				p++;
				pre = i;
			}
		}
		if (pre < s.rig) {
			q.push(Seg(pre, s.rig)); // the last child's subtree
		}
	}
}


int main()
{
	ios::sync_with_stdio(false);
	while (cin >> n) {
		for (int i = 0; i < n; i++) {
			cin >> bfs[i];
		}
		for (int i = 0; i < n; i++) {
			cin >> dfs[i];
		}
		solve();
		for (int i = 1; i <= n; i++) {
			cout << i << ":";
			for (int j = 0; j < (int)v[i].size(); j++) {
				cout << " " << v[i][j];
			}
			cout << endl;
			v[i].clear();
		}

	}

	return 0;
}
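
For reference, feeding the program the small example from above (my own test, not the official sample), i.e. n = 5 with BFS order 1 2 3 4 5 and DFS order 1 2 4 5 3, produces:

1: 2 3
2: 4 5
3:
4:
5: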




Reposted from: https://www.cnblogs.com/zhangyaoqi/p/4591535.html
