Time, Duration, and Epoch

This article walks through working with time in Go: getting the current time, constructing a specific time, reading time fields (year, month, day, hour, minute, second, time zone), comparing times, computing durations, doing time arithmetic, formatting and parsing, and converting to and from Unix timestamps, with examples for each operation.

Current time

now := time.Now()
fmt.Println("now:", now)

now: 2022-04-16 10:50:52.283976 +0800 CST m=+0.000111228

A specific time

then := time.Date(2009, 11, 17, 20, 34, 58, 651387237, time.UTC)
fmt.Println("then:", then)

then: 2009-11-17 20:34:58.651387237 +0000 UTC

The arguments are, in order: year, month, day, hour, minute, second, nanosecond, and location (time zone).
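
The last argument accepts any *time.Location, not just time.UTC. A minimal sketch, assuming the IANA zone name "Asia/Shanghai" is present in the host's time zone database:

loc, err := time.LoadLocation("Asia/Shanghai") // assumed zone name; returns an error if it cannot be resolved
if err != nil {
    panic(err)
}
then := time.Date(2009, 11, 17, 20, 34, 58, 651387237, loc)
fmt.Println("then:", then)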

Time fields

now := time.Now()
fmt.Println("now:", now)
fmt.Println("year:", now.Year())
fmt.Println("month:", now.Month())
fmt.Println("day:", now.Day())
fmt.Println("hour:", now.Hour())
fmt.Println("minute:", now.Minute())
fmt.Println("second:", now.Second())
fmt.Println("nanosecond:", now.Nanosecond())
fmt.Println("location:", now.Location())
fmt.Println("weekday:", now.Weekday())

now: 2022-04-16 11:10:21.76101 +0800 CST m=+0.000108970
year: 2022
month: April
day: 16
hour: 11
minute: 10
second: 21
nanosecond: 761010000
location: Local
weekday: Saturday
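
time.Time also exposes the day of the year and the ISO week number; a minimal sketch:

fmt.Println("year day:", now.YearDay())
isoYear, isoWeek := now.ISOWeek()
fmt.Println("iso week:", isoYear, isoWeek)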

Comparing times

now := time.Now()
then := time.Date(2009, 11, 17, 20, 34, 58, 651387237, time.UTC)
fmt.Println("then before now:", then.Before(now))
fmt.Println("then after now:", then.After(now))
fmt.Println("then equal now:", then.Equal(now))

then before now: true
then after now: false
then equal now: false

Durations

now := time.Now()
then := time.Date(2009, 11, 17, 20, 34, 58, 651387237, time.UTC)
diff := now.Sub(then)
fmt.Println("diff:", diff)
fmt.Println("hours:", diff.Hours())
fmt.Println("minutes:", diff.Minutes())
fmt.Println("seconds:", diff.Seconds())
fmt.Println("nanoseconds:", diff.Nanoseconds())

diff: 108774h59m46.516185763s
hours: 108774.99625449604
minutes: 6.526499775269763e+06
seconds: 3.9158998651618576e+08
nanoseconds: 391589986516185763
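
time.Since and time.Until are convenient shorthands for the same subtraction; a minimal sketch:

elapsed := time.Since(then)                       // equivalent to time.Now().Sub(then)
remaining := time.Until(then.Add(24 * time.Hour)) // duration until a future instant
fmt.Println("elapsed:", elapsed)
fmt.Println("remaining:", remaining)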

Time arithmetic

now := time.Now()
fmt.Println("now:", now)
fmt.Println("now add 3 hour", now.Add(3*time.Hour))
fmt.Println("now add -3 hour", now.Add(-3*time.Hour))

now: 2022-04-16 11:47:08.20477 +0800 CST m=+0.000109580
now add 3 hour 2022-04-16 14:47:08.20477 +0800 CST m=+10800.000109580
now add -3 hour 2022-04-16 08:47:08.20477 +0800 CST m=-10799.999890420
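
Add takes a Duration, so it is most natural for offsets of hours and smaller. For calendar arithmetic in days, months, or years, AddDate can be used instead; a minimal sketch:

fmt.Println("now add 1 month:", now.AddDate(0, 1, 0))
fmt.Println("now minus 2 days:", now.AddDate(0, 0, -2))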

Timestamps

now := time.Now()
fmt.Println("now:", now)
fmt.Println("now unix:", now.Unix())
fmt.Println("now unix milli:", now.UnixMilli())
fmt.Println("now unix nano:", now.UnixNano())

now: 2022-04-16 13:18:19.23993 +0800 CST m=+0.000111504
now unix: 1650086299
now unix milli: 1650086299239
now unix nano: 1650086299239930000

Timestamp conversion

fmt.Println("time from unix:", time.Unix(1650080700, 0))
fmt.Println("time from unix nano:", time.Unix(0, 1650080700587122000))

time from unix: 2022-04-16 11:45:00 +0800 CST
time from unix nano: 2022-04-16 11:45:00.587122 +0800 CST
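
Go 1.17 also added direct constructors for millisecond and microsecond timestamps; a minimal sketch:

fmt.Println("time from unix milli:", time.UnixMilli(1650080700587))
fmt.Println("time from unix micro:", time.UnixMicro(1650080700587122))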

Formatting times

The layout components used for formatting are as follows:

// Year: "2006" "06"
// Month: "Jan" "January"
// Textual day of the week: "Mon" "Monday"
// Numeric day of the month: "2" "_2" "02"
// Numeric day of the year: "__2" "002"
// Hour: "15" "3" "03" (PM or AM)
// Minute: "4" "04"
// Second: "5" "05"
// AM/PM mark: "PM"
//
// Numeric time zone offsets format as follows:
// "-0700"  ±hhmm
// "-07:00" ±hh:mm
// "-07"    ±hh
// Replacing the sign in the format with a Z triggers
// the ISO 8601 behavior of printing Z instead of an
// offset for the UTC zone. Thus:
// "Z0700"  Z or ±hhmm
// "Z07:00" Z or ±hh:mm
// "Z07"    Z or ±hh

now := time.Now()
fmt.Println(now.Format(time.RFC3339))
fmt.Println(now.Format("3:04PM"))
fmt.Println(now.Format("Mon Jan _2 15:04:05 2006"))
fmt.Println(now.Format("2006-01-02T15:04:05.999999-07:00"))

2022-04-16T14:24:19+08:00
2:24PM
Sat Apr 16 14:24:19 2022
2022-04-16T14:24:19.73223+08:00

now := time.Now()
fmt.Printf("%d-%02d-%02dT%02d:%02d:%02d-00:00\n", now.Year(), now.Month(), now.Day(), now.Hour(), now.Minute(), now.Second())

2022-04-16T14:31:59-00:00

Parsing time strings

layout := "3 04 PM"
t, _ := time.Parse(layout, "8 41 PM")
fmt.Println(t)

0000-01-01 20:41:00 +0000 UTC

t, _ := time.Parse(time.RFC3339, "2012-11-01T22:08:41+00:00")
fmt.Println(t)

2012-11-01 22:08:41 +0000 +0000

If parsing fails, an error is returned.

layout := "Mon Jan _2 15:04:05 2006"
_, e := time.Parse(layout, "8:41PM")
fmt.Println(e)

parsing time "8:41PM" as "Mon Jan _2 15:04:05 2006": cannot parse "8:41PM" as "Mon"
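
time.Parse interprets a layout without zone information as UTC. To parse the same string in a specific location, time.ParseInLocation can be used; a minimal sketch, assuming the zone name "Asia/Shanghai" resolves on the host:

loc, _ := time.LoadLocation("Asia/Shanghai") // assumed zone name; error handling omitted
t, _ := time.ParseInLocation("2006-01-02 15:04:05", "2022-04-16 14:31:59", loc)
fmt.Println(t)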
