834. Sum of Distances in Tree

This post presents an efficient algorithm for computing, for every node of a tree, the sum of its distances to all other nodes. Using two depth-first searches (DFS), the algorithm finishes in O(N) time and avoids repeated traversals.


Description

An undirected, connected tree with N nodes labelled 0…N-1 and N-1 edges is given.

The ith edge connects nodes edges[i][0] and edges[i][1] together.

Return a list ans, where ans[i] is the sum of the distances between node i and all other nodes.

Example 1:

Input: N = 6, edges = [[0,1],[0,2],[2,3],[2,4],[2,5]]
Output: [8,12,6,10,10,10]
Explanation:
Here is a diagram of the given tree:
  0
 / \
1   2
   /|\
  3 4 5
We can see that dist(0,1) + dist(0,2) + dist(0,3) + dist(0,4) + dist(0,5)
equals 1 + 1 + 2 + 2 + 2 = 8. Hence, answer[0] = 8, and so on.
Note: 1 <= N <= 10000

Problem URL


Solution

Given a tree described by a 2D edge array, compute for each node the sum of the distances from that node to every other node.

Intuition:
What if we are given a tree with a fixed root, say node 0?
In O(N) we can find the sum of distances from that root to all other nodes.
Now, what about all N nodes?
Of course, we can repeat this N times and solve it in O(N^2), as in the brute-force sketch below.
C++ and Java might get accepted if we are lucky, but that is not what we want.
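
For reference, here is a minimal sketch of that O(N^2) baseline, assuming a plain BFS from every node; the BruteForce class name and bfsSum helper are illustrative and not part of the original solution.

import java.util.*;

class BruteForce {
    // O(N^2): run a BFS from every node and add up the distances to all reached nodes.
    public int[] sumOfDistancesInTree(int N, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < N; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        int[] res = new int[N];
        for (int i = 0; i < N; i++) res[i] = bfsSum(i, adj, N);
        return res;
    }

    // Sum of BFS distances from src to every other node.
    private int bfsSum(int src, List<List<Integer>> adj, int N) {
        int[] dist = new int[N];
        Arrays.fill(dist, -1);
        dist[src] = 0;
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(src);
        int sum = 0;
        while (!queue.isEmpty()) {
            int cur = queue.poll();
            sum += dist[cur];
            for (int next : adj.get(cur)) {
                if (dist[next] == -1) {
                    dist[next] = dist[cur] + 1;
                    queue.add(next);
                }
            }
        }
        return sum;
    }
}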

When we move the root from one node to an adjacent node, one part of the nodes gets closer while the other part gets further away.
If we know exactly how many nodes are in each part, we can solve this problem.

With one single traversal of the tree, we can gather enough information, so we don't need to traverse it again and again.
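
For instance, in Example 1, moving the root from node 0 to node 2 brings the 4 nodes in node 2's subtree one step closer and pushes the remaining 6 - 4 = 2 nodes one step further away.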

Explanation:
0. Let’s solve it with node 0 as root.

Initialize an array of hash sets tree, where tree[i] contains all nodes connected to i.
Initialize an array count, where count[i] is the number of nodes in the subtree rooted at i (including i itself).
Initialize an array res, where res[i] is the sum of distances from i to all nodes in its subtree.

Post-order DFS traversal, updating count and res (summing over the children i of root):
count[root] = sum(count[i]) + 1
res[root] = sum(res[i]) + sum(count[i])

Pre-order DFS traversal, updating res:
When we move the root from a parent to its child i, the count[i] nodes in i's subtree get 1 closer to the new root, while the other N - count[i] nodes get 1 further away, so:
res[i] = res[root] - count[i] + (N - count[i])

Return res, done.
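
As a quick check of these formulas on Example 1 (rooted at node 0): the post-order pass gives count = [6, 1, 4, 1, 1, 1] and res[0] = (0 + 3) + (1 + 4) = 8. The pre-order pass then gives res[1] = 8 - 1 + (6 - 1) = 12 and res[2] = 8 - 4 + (6 - 4) = 6, and from node 2, res[3] = 6 - 1 + (6 - 1) = 10, matching the expected output [8, 12, 6, 10, 10, 10].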

Time Complexity:
dfs: O(N)
dfs2: O(N)

Code
class Solution {
    private int[] res;
    private int[] count;
    private ArrayList<HashSet<Integer>> tree;
    private int n;
    public int[] sumOfDistancesInTree(int N, int[][] edges) {
        tree = new ArrayList<>();
        res = new int[N];
        count = new int[N];
        n = N;
        for (int i = 0; i < N; i++){
            tree.add(new HashSet<Integer>());
        }
        for (int[] edge : edges){
            tree.get(edge[0]).add(edge[1]);
            tree.get(edge[1]).add(edge[0]);
        }
        dfs(0, new HashSet<Integer>());
        dfs2(0, new HashSet<Integer>());
        return res;
    }
    
    // Post-order DFS: compute count[i] (subtree size) and res[i] (sum of distances within i's subtree).
    private void dfs(int root, HashSet<Integer> seen){
        seen.add(root);
        for (int i : tree.get(root)){
            if (!seen.contains(i)){
                dfs(i, seen);
                count[root] += count[i];
                res[root] += res[i] + count[i];
            }
        }
        count[root]++;
    }
    
    // Pre-order DFS: re-root from the parent to each child i using res[i] = res[parent] - count[i] + (N - count[i]).
    private void dfs2(int root, HashSet<Integer> seen){
        seen.add(root);
        for (int i : tree.get(root)){
            if (!seen.contains(i)){
                res[i] = res[root] - count[i] + n - count[i];
                dfs2(i, seen);
            }
        }
    }
}

Time Complexity: O(N)
Space Complexity: O(N)
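
To make the interface concrete, here is a small driver for Example 1; the Main class and the printing are illustrative and not part of the original post.

import java.util.Arrays;

public class Main {
    public static void main(String[] args) {
        int N = 6;
        int[][] edges = {{0, 1}, {0, 2}, {2, 3}, {2, 4}, {2, 5}};
        int[] ans = new Solution().sumOfDistancesInTree(N, edges);
        System.out.println(Arrays.toString(ans)); // expected: [8, 12, 6, 10, 10, 10]
    }
}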

