No. 11 - Print Binary Trees from Top to Bottom

Problem: Please print a binary tree from its top level to its bottom level, printing the nodes in each level from left to right.

For example, the binary tree in Figure 1 is printed in the order 8, 6, 10, 5, 7, 9, 11.

A binary tree node is defined as follows:
struct BinaryTreeNode
{
    int                    m_nValue;
    BinaryTreeNode*        m_pLeft; 
    BinaryTreeNode*        m_pRight;
};
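
For the examples below, it helps to have the Figure 1 tree in code. This is a minimal sketch; the CreateNode helper is an assumption for illustration and is not part of the original problem:

#include <cstddef>   // for NULL

// Hypothetical helper: allocates a node and wires up its children.
BinaryTreeNode* CreateNode(int nValue,
                           BinaryTreeNode* pLeft = NULL,
                           BinaryTreeNode* pRight = NULL)
{
    BinaryTreeNode* pNode = new BinaryTreeNode;
    pNode->m_nValue = nValue;
    pNode->m_pLeft  = pLeft;
    pNode->m_pRight = pRight;
    return pNode;
}

// The tree in Figure 1:
//          8
//        /   \
//       6     10
//      / \    / \
//     5   7  9   11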


Analysis: This problem examines candidates’ understanding of tree traversal algorithms, but the traversal here is not the traditional pre-order, in-order, or post-order traversal. If we are not familiar with it, we may analyze the printing process with some examples during the interview. Let us take the binary tree in Figure 1 as an example.

Since we begin printing from the top level of the tree in Figure 1, we can start our analysis from its root node. First, we print the value in the root node, which is 8. We need to store its child nodes, with values 6 and 10, in a data container so that we can print them after the root. There are two nodes in our container at this point.

Second, we retrieve node 6 from the container, since nodes 6 and 10 are in the same level and we need to print them from left to right. After we print node 6, we store its children, nodes 5 and 7, as well. There are three nodes in the container now: nodes 10, 5, and 7.

Third, we retrieve node 10 from the container. Notice that node 10 was stored into the container before nodes 5 and 7, and it is also retrieved ahead of them. This is typical “first in, first out” behavior, so the container is essentially a queue. After printing node 10, we store its two children, nodes 9 and 11, into the container too.

Since nodes 5, 7, 9, and 11 do not have children, we simply print them in order.

The printing process is summarized in Table 1 below:

Step    Operation        Nodes in queue
1       Print node 8     node 6, node 10
2       Print node 6     node 10, node 5, node 7
3       Print node 10    node 5, node 7, node 9, node 11
4       Print node 5     node 7, node 9, node 11
5       Print node 7     node 9, node 11
6       Print node 9     node 11
7       Print node 11    (empty)

Table 1: The process to print the binary tree in Figure 1 from top to bottom

We can now summarize the rule for printing a binary tree from its top level to its bottom level: whenever we print a node, we push its children (if any) onto the tail of a queue. We keep printing the node at the head of the queue, popping it, and pushing its children, until no nodes are left in the queue.

The following sample code is based on the std::deque class of the STL, used here as a queue:

#include <cstdio>
#include <deque>

void PrintFromTopToBottom(BinaryTreeNode* pTreeRoot)
{
    if(!pTreeRoot)
        return;

    std::deque<BinaryTreeNode*> dequeTreeNode;
    dequeTreeNode.push_back(pTreeRoot);

    while(!dequeTreeNode.empty())
    {
        // Pop the node at the head of the queue and print it
        BinaryTreeNode* pNode = dequeTreeNode.front();
        dequeTreeNode.pop_front();

        printf("%d ", pNode->m_nValue);

        // Push its children, left before right, so that nodes in
        // the same level are printed from left to right
        if(pNode->m_pLeft)
            dequeTreeNode.push_back(pNode->m_pLeft);

        if(pNode->m_pRight)
            dequeTreeNode.push_back(pNode->m_pRight);
    }
}
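
A short driver, reusing the hypothetical CreateNode helper sketched earlier, checks the traversal order against Figure 1:

int main()
{
    // Build the Figure 1 tree: 8 at the root, 6 and 10 on the
    // second level, and leaves 5, 7, 9, 11.
    BinaryTreeNode* pRoot = CreateNode(8,
        CreateNode(6, CreateNode(5), CreateNode(7)),
        CreateNode(10, CreateNode(9), CreateNode(11)));

    PrintFromTopToBottom(pRoot);   // prints: 8 6 10 5 7 9 11

    return 0;   // node cleanup omitted for brevity
}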
