TensorFlow: tf.scan and tf.map_fn

This article introduces TensorFlow's scan and map_fn functions, which are used to process sequences of tensors. scan applies a callable cumulatively to the elements unpacked along the first dimension, carrying an accumulator from step to step, while map_fn applies the callable independently to each element of the input. Both support parallel iterations and backpropagation, and can optionally swap memory between GPU and CPU to trade speed for memory. The caller specifies parameters such as the function's output type and the number of parallel iterations.
scan(
    fn,
    elems,
    initializer=None,
    parallel_iterations=10,
    back_prop=True,
    swap_memory=False,
    infer_shape=True,
    name=None
)
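A minimal sketch of tf.scan, assuming a recent TensorFlow release where tf.scan is available at the top level; the element values and the running-sum fn are purely illustrative:

import tensorflow as tf

elems = tf.constant([1, 2, 3, 4, 5])

# fn receives (accumulator, current_element); without an initializer the
# first element itself seeds the accumulator.
running_sum = tf.scan(lambda acc, x: acc + x, elems)
# -> [1, 3, 6, 10, 15]

# With an initializer, the accumulator starts from that value instead.
offset_sum = tf.scan(lambda acc, x: acc + x, elems, initializer=tf.constant(100))
# -> [101, 103, 106, 110, 115]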
map_fn(
    fn,
    elems,
    dtype=None,
    parallel_iterations=10,
    back_prop=True,
    swap_memory=False,
    infer_shape=True,
    name=None
)
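A matching map_fn sketch under the same assumption. dtype must be supplied whenever fn's output type differs from that of elems (newer TF 2.x releases prefer the fn_output_signature argument for the same purpose):

import tensorflow as tf

elems = tf.constant([[1, 2], [3, 4], [5, 6]])

# fn is applied to each slice along the first dimension independently.
squares = tf.map_fn(lambda row: row * row, elems)
# -> [[1, 4], [9, 16], [25, 36]]

# Here the output dtype (float32) differs from elems (int32), so dtype is required.
halves = tf.map_fn(lambda row: tf.cast(row, tf.float32) / 2.0, elems, dtype=tf.float32)
# -> [[0.5, 1.0], [1.5, 2.0], [2.5, 3.0]]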
Parameters of map_fn and scan:
fn:

The callable to apply. For map_fn it takes a single argument with the same (possibly nested) structure as elems; for scan it takes two arguments, the accumulator (the previous result, or initializer on the first step) and the current element. Its output must have the same structure as dtype if one is provided; otherwise it must have the same structure as elems.

elems:

A tensor or (possibly nested) sequence of tensors, each of which will be unstacked along its first dimension. The resulting nested sequence of slices is what gets passed to fn.

dtype (initializer for scan):

(Optional) The output type of fn. For map_fn, if fn returns a structure of tensors that differs from the structure of elems, then dtype is not optional and must match the structure of fn's output. For scan, the corresponding argument is initializer: the initial value of the accumulator, which also fixes the structure and types that fn is expected to return (see the example after this list).

parallel_iterations:

(Optional) The number of iterations allowed to run in parallel.

back_prop:

(Optional) True enables support for backpropagation.

swap_memory:

(Optional) True enables GPU-CPU memory swapping.

infer_shape:

(Optional) False disables the test for consistent output shapes.

name:

(Optional) Name prefix for the returned tensors.
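A sketch, under the same assumptions, of the dtype/initializer case described above: fn returning a structure that differs from elems. For map_fn the output structure is declared via dtype; for scan the initializer fixes both the starting accumulator and the structure fn must return:

import tensorflow as tf

elems = tf.constant([1.0, 2.0, 3.0, 4.0])

# map_fn: fn returns a (tensor, tensor) pair, which does not match the single
# tensor in elems, so dtype must describe that pair structure.
pairs = tf.map_fn(lambda x: (x * 2.0, x * x), elems, dtype=(tf.float32, tf.float32))
# -> ([2., 4., 6., 8.], [1., 4., 9., 16.])

# scan: the initializer is a (sum, count) pair, so fn must accept and return
# the same pair structure as its accumulator.
sums_and_counts = tf.scan(
    lambda acc, x: (acc[0] + x, acc[1] + 1),
    elems,
    initializer=(tf.constant(0.0), tf.constant(0)))
# -> ([1., 3., 6., 10.], [1, 2, 3, 4])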

