An OBB-Line Segment Test

This article describes a separating-axis method for testing whether a line segment intersects an axis-aligned bounding box (AABB) or an oriented bounding box (OBB), gives a concrete code listing, and discusses how to convert an OBB to an AABB.


FW: http://www.gamasutra.com/features/19991018/Gomez_6.htm

Testing a box and a line segment for intersection requires checking only six separating axes: the box's three principal axes, and the vector cross products of these axes with l, the line direction. Again, the vectors used for these tests do not have to be normalized, and these tests can be simplified by transforming the line segment into the box's coordinate frame.

One application of this test is to see if a camera's line of sight is obscured. Testing every polygon in a scene could be prohibitively expensive, but if these polygons are stored in an AABB or an OBB tree, a box-segment test can quickly determine a potential set of polygons. A segment-polygon test can then be used to determine if any polygons in this subset are actually obscuring the line of sight.
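
As a rough sketch of that traversal, a recursive query against an AABB tree might look like the following. The AABBTreeNode layout and the Polygon type are assumptions of this example, not part of the original article; AABB_LineSegmentOverlap is the function from Listing 7 below.

#include <vector>
#include "aabb.h"

struct Polygon;  // whatever polygon type the scene stores (hypothetical)

// Declared in Listing 7 below.
const bool AABB_LineSegmentOverlap( const VECTOR& l, const VECTOR& mid,
                                    const SCALAR hl, const AABB& b );

// Hypothetical AABB-tree node; the article does not specify a layout.
struct AABBTreeNode
{
    AABB box;                        // bounds of everything below this node
    AABBTreeNode* child[2];          // both NULL at a leaf
    std::vector<Polygon*> polygons;  // polygons stored at a leaf
};

// Collect the polygons of every leaf whose box the segment touches.
// Subtrees whose boxes the segment misses are pruned wholesale.
void GatherCandidates( const AABBTreeNode* n,
                       const VECTOR& l,    // line direction
                       const VECTOR& mid,  // segment midpoint
                       const SCALAR hl,    // segment half-length
                       std::vector<Polygon*>& out )
{
    if( !AABB_LineSegmentOverlap(l, mid, hl, n->box) )
        return;  // a separating axis exists: skip this whole subtree

    if( n->child[0] )  // interior node: recurse into both children
    {
        GatherCandidates(n->child[0], l, mid, hl, out);
        GatherCandidates(n->child[1], l, mid, hl, out);
    }
    else               // leaf: report its polygons as candidates
    {
        out.insert(out.end(), n->polygons.begin(), n->polygons.end());
    }
}

Only the candidates collected in out then need the exact segment-polygon test.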

 

The function in Listing 7 assumes the line segment has already been transformed to box space.

 

Listing 7.

#include "aabb.h"

const bool AABB_LineSegmentOverlap
(

 

const VECTOR& l, //line direction
const VECTOR& mid, //midpoint of the line
// segment
const SCALAR hl, //segment half-length
const AABB& b //box

 

)

{

 

/* ALGORITHM: Use the separating axis
theorem to
see if the line segment
and the box overlap. A
line
segment is a degenerate OBB. */

const VECTOR T = b.P - mid;
VECTOR v;
SCALAR r;

//do any of the principal axes
//form a separating axis?

if( fabs(T.x) > b.E.x + hl*fabs(l.x) )
return false;

if( fabs(T.y) > b.E.y + hl*fabs(l.y) )
return false;

if( fabs(T.z) > b.E.z + hl*fabs(l.z) )
return false;

/* NOTE: Since the separating axis is
perpendicular to the line in these
last four cases, the line does not
contribute to the projection. */

//l.cross(x-axis)?

r = b.E.y*fabs(l.z) + b.E.z*fabs(l.y);

if( fabs(T.y*l.z - T.z*l.y) > r )
return false;

//l.cross(y-axis)?

r = b.E.x*fabs(l.z) + b.E.z*fabs(l.x);

if( fabs(T.z*l.x - T.x*l.z) > r )
return false;

//l.cross(z-axis)?

r = b.E.x*fabs(l.y) + b.E.y*fabs(l.x);

if( fabs(T.x*l.y - T.y*l.x) > r )
return false;

return true;

 

}
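
To run the same test against a box that is not axis-aligned, first express the segment in the box's coordinate frame; in its own frame an OBB is an AABB centered at the origin, so the function above can be reused. The following is a minimal sketch that assumes an OBB type with center P, half-extents E, and orthonormal local axes A[0..2], plus a dot-product helper; none of these are defined in this excerpt.

// Hypothetical OBB layout: center, half-extents, and an orthonormal
// basis giving the box's local axes in the parent frame.
struct OBB
{
    VECTOR P;     // center
    VECTOR E;     // half-extents along the local axes
    VECTOR A[3];  // local x, y, z axes (unit length, mutually orthogonal)
};

// Assumed helper; the VECTOR type in aabb.h may already provide one.
inline SCALAR dot( const VECTOR& a, const VECTOR& b )
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

const bool OBB_LineSegmentOverlap
(
    const VECTOR& l,    // line direction, parent frame
    const VECTOR& mid,  // segment midpoint, parent frame
    const SCALAR hl,    // segment half-length
    const OBB& b        // box
)
{
    // Rotate the segment into the box's frame: project the direction
    // and the recentered midpoint onto the box axes.
    const VECTOR d = mid - b.P;  // assumes VECTOR subtraction, as in Listing 7

    VECTOR lLocal, midLocal;
    lLocal.x = dot(l, b.A[0]);   midLocal.x = dot(d, b.A[0]);
    lLocal.y = dot(l, b.A[1]);   midLocal.y = dot(d, b.A[1]);
    lLocal.z = dot(l, b.A[2]);   midLocal.z = dot(d, b.A[2]);

    // In its own frame the OBB is an AABB centered at the origin.
    AABB local;
    local.P.x = local.P.y = local.P.z = 0;
    local.E = b.E;

    return AABB_LineSegmentOverlap(lLocal, midLocal, hl, local);
}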

 

OBB to AABB conversion

 

Converting an OBB to an AABB merely involves calculating the extents of the OBB along the x-, y-, and z-axes of its parent frame. For example, for a box with half-extents E0, E1, E2 along unit local axes A0, A1, A2 (expressed in the parent frame), the extent of the OBB along the parent's x-axis is

    e_x = E0*|A0.x| + E1*|A1.x| + E2*|A2.x|

where Ai.x denotes the x-component of axis Ai. The extents along the y- and z-axes are calculated similarly.
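
A minimal sketch of the conversion in code, under the same assumed OBB layout as above:

// Smallest AABB, in the OBB's parent frame, that encloses the OBB.
// Each parent-axis extent sums the absolute projections of the three
// scaled box axes onto that parent axis.
AABB AABB_FromOBB( const OBB& b )
{
    AABB out;
    out.P = b.P;  // the center does not move

    out.E.x = b.E.x*fabs(b.A[0].x) + b.E.y*fabs(b.A[1].x) + b.E.z*fabs(b.A[2].x);
    out.E.y = b.E.x*fabs(b.A[0].y) + b.E.y*fabs(b.A[1].y) + b.E.z*fabs(b.A[2].y);
    out.E.z = b.E.x*fabs(b.A[0].z) + b.E.y*fabs(b.A[1].z) + b.E.z*fabs(b.A[2].z);

    return out;
}

The enclosing AABB is generally looser than the OBB, so the conversion trades a tighter fit for the cheaper axis-aligned test.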

 
