DNN Online Help

Getting Started With DotNetNuke

Key Concepts

  • DotNetNuke is a portal framework that stores all information in a database, typically Microsoft SQL Server.
  • DotNetNuke supports the hosting, design and maintenance of multiple online business portals using a single database.
  • A portal consists of Pages (web pages), which automatically build a dynamic portal menu for navigating from page to page, and Modules, which each present a type of content to the user, e.g. a list of Links, a list of Documents, or a News Feed.
  • DotNetNuke uses Security Roles to control access to areas of the portal. Security Roles can be created by the Portal Administrator.
  • Two important Security Roles are provided with DotNetNuke: the Portal Administrator (Admin) and the Host Administrator (Host). Users who belong to these Security Roles have access to a range of additional tools. The Admin role maintains control over a single portal, whereas the Host role maintains control over all portals within the database and is able to create new portals.

Logging in

  1. Click Login or navigate to an Account Login module.
  2. In the User Name field, enter a user name.
  3. In the Password field, enter the password.
  4. Click the Login button.

Update the Administrator Account Login Details

To protect your portal from any visitor logging in with the default credentials and uploading potentially dangerous files, you must first update the Administrator and Host account login details.

  1. Click Login.
  2. In the User Name field, enter admin.
  3. In the Password field, enter admin.
  4. Click the Login button.
  5. Click Administrator Account.
  6. In the Old Password field, enter admin.
  7. In the New Password field, enter a new password. Remember that passwords are case sensitive.
  8. In the Confirm New Password field, re-enter the new password.
  9. Modify any other field details as desired.
  10. Click Update.

(Figure: Update Admin Login)

Update the Host Account Login Details

To protect your portal from any visitor logging in with the default credentials and uploading potentially dangerous files, you must first update the Administrator and Host account login details.

  1. Click Login.
  2. In the User Name field, enter host.
  3. In the Password field, enter admin.
  4. Click the Login button.
  5. Click Host.
  6. In the Old Password field, enter host.
  7. In the New Password field, enter a new password. Remember that passwords are case sensitive.
  8. In the Confirm New Password field, re-enter the new password.
  9. Modify any other fields as desired.
  10. Click Update.

Reference

Color Codes and Names

AliceBlue
F0F8FF

AntiqueWhite
FAEBD7

Aqua
00FFFF

Aquamarine
7FFFD4

Azure
F0FFFF

Beige
F5F5DC

Bisque
FFE4C4

Black
000000

BlanchedAlmond
FFEBCD

Blue
0000FF

BlueViolet
8A2BE2

Brown
A52A2A

BurlyWood
DEB887

CadetBlue
5F9EA0

Chartreuse
7FFF00

Chocolate
D2691E

Coral
FF7F50

CornflowerBlue
6495ED

Cornsilk
FFF8DC

Crimson
DC143C

Cyan
00FFFF

DarkBlue
00008B

DarkCyan
008B8B

DarkGoldenRod
B8860B

DarkGray
A9A9A9

DarkGreen
006400

DarkKhaki
BDB76B

DarkMagenta
8B008B

DarkOliveGreen
556B2F

DarkOrange
FF8C00

DarkOrchid
9932CC

DarkRed
8B0000

DarkSalmon
E9967A

DarkSeaGreen
8FBC8F

DarkSlateBlue
483D8B

DarkSlateGray
2F4F4F

DarkTurquoise
00CED1

DarkViolet
9400D3

DeepPink
FF1493

DeepSkyBlue
00BFFF

DimGray
696969

DodgerBlue
1E90FF

FireBrick
B22222

FloralWhite
FFFAF0

ForestGreen
228B22

Fuchsia
FF00FF

Gainsboro
DCDCDC

GhostWhite
F8F8FF

Gold
FFD700

GoldenRod
DAA520

Gray
808080

Green
008000

GreenYellow
ADFF2F

HoneyDew
F0FFF0

HotPink
FF69B4

IndianRed
CD5C5C

Indigo
4B0082

Ivory
FFFFF0

Khaki
F0E68C

Lavender
E6E6FA

LavenderBlush
FFF0F5

LawnGreen
7CFC00

LemonChiffon
FFFACD

LightBlue
ADD8E6

LightCoral
F08080

LightCyan
E0FFFF

LightGoldenRodYellow
FAFAD2

LightGreen
90EE90

LightGray
D3D3D3

LightPink
FFB6C1

LightSalmon
FFA07A

LightSeaGreen
20B2AA

LightSkyBlue
87CEFA

LightSlateGray
778899

LightSteelBlue
B0C4DE

LightYellow
FFFFE0

Lime
00FF00

LimeGreen
32CD32

Linen
FAF0E6

Magenta
FF00FF

Maroon
800000

MediumAquamarine
66CDAA

MediumBlue
0000CD

MediumOrchid
BA55D3

MediumPurple
9370DB

MediumSeaGreen
3CB371

MediumSlateBlue
7B68EE

MediumSpringGreen
00FA9A

MediumTurquoise
48D1CC

MediumVioletRed
C71585

MidnightBlue
191970

MintCream
F5FFFA

MistyRose
FFE4E1

Moccasin
FFE4B5

NavajoWhite
FFDEAD

Navy
000080

OldLace
FDF5E6

Olive
808000

OliveDrab
6B8E23

Orange
FFA500

OrangeRed
FF4500

Orchid
DA70D6

PaleGoldenRod
EEE8AA

PaleGreen
98FB98

PaleTurquoise
AFEEEE

PaleVioletRed
DB7093

PapayaWhip
FFEFD5

PeachPuff
FFDAB9

Peru
CD853F

Pink
FFC0CB

Plum
DDA0DD

PowderBlue
B0E0E6

Purple
800080

Red
FF0000

RosyBrown
BC8F8F

RoyalBlue
4169E1

SaddleBrown
8B4513

Salmon
FA8072

SandyBrown
F4A460

SeaGreen
2E8B57

SeaShell
FFF5EE

Sienna
A0522D

Silver
C0C0C0

SkyBlue
87CEEB

SlateBlue
6A5ACD

SlateGray
708090

Snow
FFFAFA

SpringGreen
00FF7F

SteelBlue
4682B4

Tan
D2B48C

Teal
008080

Thistle
D8BFD8

Tomato
FF6347

Turquoise
40E0D0

Violet
EE82EE

Wheat
F5DEB3

White
FFFFFF

WhiteSmoke
F5F5F5

Yellow
FFFF00

YellowGreen
9ACD32
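
The color names and hex values above can be used interchangeably in skin and container style sheets. A minimal sketch of a rule for a skin's .css file (the class name .SkinHeader is illustrative, not part of DotNetNuke):

```css
/* Both declarations refer to colors from the table above. */
.SkinHeader {
    background-color: CornflowerBlue;  /* by name */
    color: #FFF8DC;                    /* Cornsilk, by hex value */
}
```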


Glossary of Terms

Administrator (Admin)
A Security Role granting full portal administration rights. Portal administrators have full access to every tab and module in the portal and can add, edit and update all of them.

The Administrator also has access to the Admin tab, which permits management of security roles, members/users, bulk email, site settings, vendors and files.

Members Services
A Members Service is a Security Role that has been created as a Public Role. All registered users can sign up for Members Services, which may include a trial fee and/or a service fee. Access to a Members Service can also be limited to a number of days, weeks or months. See Security Roles for more details.

Module
A module is a building block that permits the administrator to add content to a tab. Typically, one or more modules are added to each tab. Each module is designed to manage a common type of online content, such as a list of FAQs, a calendar of events, or a list of downloadable documents. See Modules for more details.

Module Settings
Module settings control the Security Role access for the module, its page location, its module container design and more. See Module Settings for more details.

Pane
A pane is a column on a tab. By default, each page can display one, two or three panes. Modules appear in the main pane by default and can be moved to either the left or right pane. The width of the left and right panes is set under Tab Settings.

Portal
A portal is another term for a website; typically, a portal is a website containing many links to other websites. DotNetNuke can be used as either a portal or a website, but it is referred to as a portal throughout this documentation.

Registered User
Any visitor who registers as a member of the portal is a Registered User. This is the default Security Role to which all users are added, and a user cannot be removed from it. Users can, however, be set as Unauthorised, which removes Registered User privileges.

Roles
See Security Roles below.

Security Roles
Security Roles control user access to view and edit tabs and modules on the portal. Each user can belong to one or more Security Roles. Each new portal begins with three Security Roles (Administrator, Demo User and Registered User); the portal Administrator can then add new Security Roles according to business needs.

Site
A portal or website.

Tab
In DotNetNuke versions 1 and 2 a page was called a Tab. This term is no longer used.

Page Settings
Page settings control the Security Role access for the page, mobile telephone accessibility, design and more. See The Admin Tab, Working with Tabs for more details.

User
A user is any person who visits your portal.

Vendor
A vendor is a person or company that has been given advertising rights on your portal. Advertising methods are inclusion in the portal Service Directory and/or banner advertising on the portal.

Skin Objects

Each entry below shows the token, the equivalent control declaration, and a description.

[SOLPARTMENU]
<dnn:SolPartMenu runat="server" id="dnnSolPartMenu" />
Displays the hierarchical navigation menu (formerly [MENU]).

[LOGIN]
<dnn:Login runat="server" id="dnnLogin" />
Dual-state control: displays "Login" for anonymous users and "Logout" for authenticated users.

[BANNER]
<dnn:Banner runat="server" id="dnnBanner" />
Displays a random banner advertisement.

[BREADCRUMB]
<dnn:Breadcrumb runat="server" id="dnnBreadcrumb" />
Displays the path to the currently selected tab in the form TabName1 > TabName2 > TabName3.

[COPYRIGHT]
<dnn:Copyright runat="server" id="dnnCopyright" />
Displays the copyright notice for the portal.

[CURRENTDATE]
<dnn:CurrentDate runat="server" id="dnnCurrentDate" />
Displays the current date.

[DOTNETNUKE]
<dnn:DotNetNuke runat="server" id="dnnDotnetNuke" />
Displays the copyright notice for DotNetNuke (not required).

[HELP]
<dnn:Help runat="server" id="dnnHelp" />
Displays a Help link that launches the user's email client addressed to the portal Administrator.

[HOSTNAME]
<dnn:HostName runat="server" id="dnnHostName" />
Displays the Host Title linked to the Host URL.

[LINKS]
<dnn:Links runat="server" id="dnnLinks" />
Displays a flat menu of links related to the current tab level and parent node; useful for search engine spiders and robots.

[LOGO]
<dnn:Logo runat="server" id="dnnLogo" />
Displays the portal logo.

[PRIVACY]
<dnn:Privacy runat="server" id="dnnPrivacy" />
Displays a link to the Privacy Information for the portal.

[SIGNIN]
<dnn:Signin runat="server" id="dnnSignin" />
Displays the sign-in control for entering a user name and password.

[TERMS]
<dnn:Terms runat="server" id="dnnTerms" />
Displays a link to the Terms and Conditions for the portal.

[USER]
<dnn:User runat="server" id="dnnUser" />
Dual-state control: displays a "Register" link for anonymous users or the user's name for authenticated users.

[CONTENTPANE]
<div runat="server" id="ContentPane"></div>
Injects a placeholder for module content.
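
The tokens above are placed directly in an HTML skin file and are swapped for their controls when the skin is parsed. A minimal sketch of such a file (the table layout is illustrative only, not a required structure):

```html
<!-- Minimal HTML skin sketch: each token is replaced by its control at parse time. -->
<table width="100%">
  <tr>
    <td>[LOGO]</td>
    <td align="right">[LOGIN] [USER]</td>
  </tr>
  <tr><td colspan="2">[SOLPARTMENU]</td></tr>
  <tr><td colspan="2">[BREADCRUMB]</td></tr>
  <tr><td colspan="2">[CONTENTPANE]</td></tr>
  <tr><td colspan="2">[COPYRIGHT] [TERMS] [PRIVACY]</td></tr>
</table>
```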



The Host Administrator Role

Overview of Host Administrator Role

The Host pages are accessible only to SuperUser accounts. A SuperUser account is a different type of user account; SuperUsers are added via the SuperUser page under the Host pages. For the purposes of this guide we refer to the SuperUser as the Host Administrator, although it can also be referred to as the Host Account, the Host User or simply the Host.

The Host pages permit a portal Host to configure the settings of the parent portal (e.g. http://www.dotnetnuke.com/) and any child portals created under it (e.g. http://www.dotnetnuke.com/Child), manage portal users, manage Host site vendor accounts and banners, manage security roles, manage files, and send bulk email.

Find a User’s Verification Code

  1. Navigate to the Host > SQL page.
  2. Enter the following into the SQL window:

    SELECT dbo.Portals.PortalID, dbo.Users.UserID
    FROM dbo.Users INNER JOIN
         dbo.UserPortals ON dbo.Users.UserID = dbo.UserPortals.UserId INNER JOIN
         dbo.Portals ON dbo.UserPortals.PortalId = dbo.Portals.PortalID
    WHERE (dbo.Users.FirstName = N'FIRSTNAME') AND (dbo.Users.LastName = N'LASTNAME')
  3. Click Execute.
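
The same joins can be adapted when only the user's email address is known. A hedged sketch, assuming the Users table includes an Email column (as in the DotNetNuke 2.x/3.x schema); the address shown is a placeholder:

```sql
-- Hypothetical variation: look up the same IDs by email address.
SELECT dbo.Portals.PortalID, dbo.Users.UserID
FROM dbo.Users INNER JOIN
     dbo.UserPortals ON dbo.Users.UserID = dbo.UserPortals.UserId INNER JOIN
     dbo.Portals ON dbo.UserPortals.PortalId = dbo.Portals.PortalID
WHERE dbo.Users.Email = N'someone@example.com'
```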


Reposted from: https://www.cnblogs.com/henry_zjk/articles/139104.html
