Fixing the PLUGINLIB_DECLARE_CLASS(turtlebot_follower, TurtlebotFollower, turtlebot_follower::TurtlebotFollower, ...) compile error

This post records a compile error hit while developing an autonomous robot on a Jetson Nano with a TI mmWave IWR1443 sensor. catkin_make failed on follower.cpp, reporting a syntax error at the PLUGINLIB_DECLARE_CLASS macro; replacing that macro with PLUGINLIB_EXPORT_CLASS fixed the build.

Hardware: Jetson Nano, TI mmWave IWR1443
Error log #1 from the "Autonomous Robotics with ROS for mmWave" project.

Problem: running catkin_make in the ROS workspace fails with:

/home/jetbot/TI_ws/src/turtlebot/turtlebot_apps/turtlebot_follower/src/follower.cpp:317:24: error: expected constructor, destructor, or type conversion before ‘(’ token
PLUGINLIB_DECLARE_CLASS(turtlebot_follower, TurtlebotFollower, turtlebot_follower::TurtlebotFollower, nodelet::Nodelet);


Solution: open follower.cpp at the path above and change
PLUGINLIB_DECLARE_CLASS(turtlebot_follower, TurtlebotFollower, turtlebot_follower::TurtlebotFollower, nodelet::Nodelet);
to:
PLUGINLIB_EXPORT_CLASS(turtlebot_follower::TurtlebotFollower, nodelet::Nodelet);

In general, do not use PLUGINLIB_DECLARE_CLASS; it was removed in newer pluginlib releases, and PLUGINLIB_EXPORT_CLASS replaces it. The old macro took four arguments (package name, class name, fully qualified class type, base class type), while the new one takes only two: the fully qualified derived class and its base class.
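For context, the end of follower.cpp after the fix looks roughly like the sketch below (the class body is abbreviated here, so treat this as illustrative rather than the actual upstream source). Note that the macro must appear at global scope, which is why the class name is spelled with its full namespace:

```cpp
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>

namespace turtlebot_follower
{

// Nodelet implementation (members and logic abbreviated).
class TurtlebotFollower : public nodelet::Nodelet
{
public:
  virtual void onInit()
  {
    // Set up subscribers/publishers on getNodeHandle() here.
  }
};

}  // namespace turtlebot_follower

// New-style registration: two arguments only, fully qualified derived
// class first, base class second. Placed at global scope, outside any
// namespace block.
PLUGINLIB_EXPORT_CLASS(turtlebot_follower::TurtlebotFollower, nodelet::Nodelet)
```

Building this requires the ROS pluginlib and nodelet development headers; the snippet is not standalone-compilable outside a catkin workspace.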
