A STIT-Logic Interpretation of Intentionality

When we turn to the logical questions raised by intentionality, we first encounter a number of points concerning logical properties and axiomatization. The key points are analyzed in detail below.

Logical Properties

We begin by reviewing some properties of the logic of the so-called intentional epistemic STIT theory. These properties are expressed through the validity or invalidity of formulas in iebt-models.

Basic Properties of the Modal Operators
  • □ϕ and [α]ϕ: the logical properties of □ϕ and [α]ϕ are the same as in traditional STIT theory.
  • The knowledge operator Kα: Kα is an S5 operator, and both the formula Kαϕ → [α]ϕ associated with the frame condition (OAC) and the formula ◇Kαϕ → Kα◇ϕ associated with (Unif-H) are valid.
  • The intention operator Iα: Iα is a KD45 operator, and the validity of the KD45 schemata has the following consequences for our notion of intention:
    • The (K) schema: the validity of Iα(ϕ → ψ) → (Iαϕ → Iαψ) means that if at some moment an agent has a p-d intention (a particular type of intention) to bring about ϕ, then the agent also has a p-d intention to bring about every logical consequence of ϕ. This shows that the (possibly non-deliberative) p-d intentions we are describing are susceptible to the so-called side-effect problem (see the sketch after this list).
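
To make the side-effect reading of the (K) schema concrete, here is a minimal sketch of a toy single-agent Kripke model evaluated on a serial, transitive, euclidean (KD45) frame. The worlds, the accessibility relation, and the "medicine / side effect" reading of the atoms are invented for this illustration and are not taken from the iebt-models of the theory itself.

```python
# A minimal sketch (not from the article): a toy single-agent Kripke model on a
# serial, transitive, euclidean (KD45) frame, used to illustrate how the (K)
# schema commits an agent to the side effects of what she intends. The world
# names, the relation, and the "medicine / side effect" atoms are assumptions.

worlds = {"w0", "w1", "w2"}

# Accessibility relation for the intention operator I (serial, transitive, euclidean).
R = {("w0", "w1"), ("w0", "w2"),
     ("w1", "w1"), ("w1", "w2"),
     ("w2", "w1"), ("w2", "w2")}

# Valuation: p = "the agent takes the medicine", q = "the agent suffers the side effect".
# In every world compatible with the agent's intentions, p comes together with q.
val = {
    "w0": set(),
    "w1": {"p", "q"},
    "w2": {"p", "q"},
}

def holds(world, formula):
    """Evaluate a formula given as a nested tuple:
    atoms are strings, ("->", a, b) is implication, ("I", a) is the intention modality."""
    if isinstance(formula, str):
        return formula in val[world]
    op = formula[0]
    if op == "->":
        return (not holds(world, formula[1])) or holds(world, formula[2])
    if op == "I":
        return all(holds(v, formula[1]) for (u, v) in R if u == world)
    raise ValueError(f"unknown operator {op!r}")

p, q = "p", "q"

print("I p        :", holds("w0", ("I", p)))             # True: the agent intends p
print("I (p -> q) :", holds("w0", ("I", ("->", p, q))))  # True: p -> q is settled in all intended worlds
print("I q        :", holds("w0", ("I", q)))             # True: by (K), the side effect q is "intended" too
```

Since ϕ → ψ is true at every world compatible with the agent's intentions, Iα(ϕ → ψ) holds at w0, and together with Iαϕ the (K) schema yields Iαψ: the agent counts as intending the side effect, which is exactly the closure behavior described above.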