D:\anaconda\envs\mamba\python.exe E:\ultralytics-v8.3.63\ultralytics\models\yolo\pwcmamba\ceshi.py
WARNING ⚠️ no model scale passed. Assuming scale='n'.
Parsing layer 0, from=-1, current ch list length=1
After parsing layer 0, save list: []
Parsing layer 1, from=-1, current ch list length=1
After parsing layer 1, save list: []
Parsing layer 2, from=-1, current ch list length=2
After parsing layer 2, save list: []
Parsing layer 3, from=-1, current ch list length=3
After parsing layer 3, save list: []
Parsing layer 4, from=-1, current ch list length=4
After parsing layer 4, save list: []
Parsing layer 5, from=-1, current ch list length=5
After parsing layer 5, save list: []
Parsing layer 6, from=-1, current ch list length=6
After parsing layer 6, save list: []
Parsing layer 7, from=-1, current ch list length=7
After parsing layer 7, save list: []
Parsing layer 8, from=-1, current ch list length=8
After parsing layer 8, save list: []
Parsing layer 9, from=-1, current ch list length=9
After parsing layer 9, save list: []
Parsing layer 10, from=-1, current ch list length=10
After parsing layer 10, save list: []
Parsing layer 11, from=-1, current ch list length=11
After parsing layer 11, save list: []
Parsing layer 12, from=[-1, 6], current ch list length=12
Concat layer 12: adding earlier layer 6 to save list
After parsing layer 12, save list: [6]
Parsing layer 13, from=-1, current ch list length=13
After parsing layer 13, save list: [6]
Parsing layer 14, from=-1, current ch list length=14
After parsing layer 14, save list: [6]
Parsing layer 15, from=-1, current ch list length=15
After parsing layer 15, save list: [6]
Parsing layer 16, from=-1, current ch list length=16
After parsing layer 16, save list: [6]
Parsing layer 17, from=[-1, 4], current ch list length=17
Concat layer 17: adding earlier layer 4 to save list
After parsing layer 17, save list: [4, 6]
Parsing layer 18, from=-1, current ch list length=18
After parsing layer 18, save list: [4, 6]
Parsing layer 19, from=-1, current ch list length=19
After parsing layer 19, save list: [4, 6]
Parsing layer 20, from=-1, current ch list length=20
After parsing layer 20, save list: [4, 6]
Parsing layer 21, from=[-1, 14], current ch list length=21
Concat layer 21: adding earlier layer 14 to save list
After parsing layer 21, save list: [4, 6, 14]
Parsing layer 22, from=-1, current ch list length=22
After parsing layer 22, save list: [4, 6, 14]
Parsing layer 23, from=-1, current ch list length=23
After parsing layer 23, save list: [4, 6, 14]
Parsing layer 24, from=-1, current ch list length=24
After parsing layer 24, save list: [4, 6, 14]
Parsing layer 25, from=[-1, 9], current ch list length=25
Concat layer 25: adding earlier layer 9 to save list
After parsing layer 25, save list: [4, 6, 9, 14]
Parsing layer 26, from=-1, current ch list length=26
After parsing layer 26, save list: [4, 6, 9, 14]
Parsing layer 27, from=-1, current ch list length=27
After parsing layer 27, save list: [4, 6, 9, 14]
Parsing layer 28, from=[19, 23, 27], current ch list length=28
Layer 28: adding layer 19 from the from list to save list
Layer 28: adding layer 23 from the from list to save list
Layer 28: adding layer 27 from the from list to save list
After parsing layer 28, save list: [4, 6, 9, 14, 19, 23, 27]
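The save-list bookkeeping traced above mirrors what Ultralytics' `parse_model` does: every non-`-1` index in a layer's `from` field is recorded so that layer's output is retained during the forward pass. A minimal pure-Python sketch of that rule (the `from` fields are copied from this trace; `build_save_list` is a hypothetical helper, not an Ultralytics API):

```python
def build_save_list(froms):
    """Collect indices of earlier layers whose outputs must be kept.

    froms: entry i is layer i's `from` field, either an int or a
    list of ints; -1 means "previous layer" and needs no saving.
    """
    save = set()
    for f in froms:
        for x in ([f] if isinstance(f, int) else f):
            if x != -1:
                save.add(x)
    return sorted(save)

# `from` fields for layers 0-28 as printed in the trace above
froms = [-1] * 12 + [[-1, 6]] + [-1] * 4 + [[-1, 4]] + [-1] * 3 + \
        [[-1, 14]] + [-1] * 3 + [[-1, 9]] + [-1] * 2 + [[19, 23, 27]]
print(build_save_list(froms))  # → [4, 6, 9, 14, 19, 23, 27]
```

The result matches the final save list in the trace: the three Concat skip sources (6, 4, 14, 9) plus the three Detect inputs (19, 23, 27).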
Layer 0: Conv, input device: cuda:0
Layer 0: Conv, output device: cuda:0
Layer 1: Conv, input device: cuda:0
Layer 1: Conv, output device: cuda:0
Layer 2: C2f, input device: cuda:0
Layer 2: C2f, output device: cuda:0
Layer 3: Conv, input device: cuda:0
Layer 3: Conv, output device: cuda:0
Layer 4: C2f, input device: cuda:0
Layer 4: C2f, output device: cuda:0
Layer 5: Conv, input device: cuda:0
Layer 5: Conv, output device: cuda:0
Layer 6: C2f, input device: cuda:0
Layer 6: C2f, output device: cuda:0
Layer 7: Conv, input device: cuda:0
Layer 7: Conv, output device: cuda:0
Layer 8: PWCMamba, input device: cuda:0
Layer 8: PWCMamba, output device: cuda:0
Layer 9: SPPF, input device: cuda:0
Layer 9: SPPF, output device: cuda:0
Layer 10: Conv, input device: cuda:0
Layer 10: Conv, output device: cuda:0
Layer 11: Upsample, input device: cuda:0
Layer 11: Upsample, output device: cuda:0
Layer 12: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 16, 16]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 16, 16]) (channels: 128)
Concat output shape: torch.Size([1, 192, 16, 16]) (channels after concat: 192)
Layer 12: Concat, output device: cuda:0
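The Concat shapes printed in this trace follow the usual rule for channel-wise concatenation: spatial dimensions must match and the channel counts add. A small sketch of that arithmetic (pure Python, no torch required; `concat_channels` is a hypothetical helper):

```python
def concat_channels(shapes, dim=1):
    """Shape of torch.cat(inputs, dim) for NCHW inputs that
    agree on every dimension except `dim`."""
    out = list(shapes[0])
    out[dim] = sum(s[dim] for s in shapes)
    return tuple(out)

# Concat layer 12 from the trace: 64 + 128 = 192 channels
print(concat_channels([(1, 64, 16, 16), (1, 128, 16, 16)]))  # → (1, 192, 16, 16)
```

The same rule reproduces layers 17 and 21 (32 + 64 = 96) and layer 25 (64 + 128 = 192).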
Layer 13: Conv, input device: cuda:0
Layer 13: Conv, output device: cuda:0
Layer 14: PWCMamba, input device: cuda:0
Layer 14: PWCMamba, output device: cuda:0
Layer 15: Conv, input device: cuda:0
Layer 15: Conv, output device: cuda:0
Layer 16: Upsample, input device: cuda:0
Layer 16: Upsample, output device: cuda:0
Layer 17: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 32, 32, 32]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 32, 32]) (channels: 64)
Concat output shape: torch.Size([1, 96, 32, 32]) (channels after concat: 96)
Layer 17: Concat, output device: cuda:0
Layer 18: Conv, input device: cuda:0
Layer 18: Conv, output device: cuda:0
Layer 19: PWCMamba, input device: cuda:0
Layer 19: PWCMamba, output device: cuda:0
Layer 20: Conv, input device: cuda:0
Layer 20: Conv, output device: cuda:0
Layer 21: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 32, 16, 16]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 16, 16]) (channels: 64)
Concat output shape: torch.Size([1, 96, 16, 16]) (channels after concat: 96)
Layer 21: Concat, output device: cuda:0
Layer 22: Conv, input device: cuda:0
Layer 22: Conv, output device: cuda:0
Layer 23: PWCMamba, input device: cuda:0
Layer 23: PWCMamba, output device: cuda:0
Layer 24: Conv, input device: cuda:0
Layer 24: Conv, output device: cuda:0
Layer 25: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 8, 8]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 8, 8]) (channels: 128)
Concat output shape: torch.Size([1, 192, 8, 8]) (channels after concat: 192)
Layer 25: Concat, output device: cuda:0
Layer 26: Conv, input device: cuda:0
Layer 26: Conv, output device: cuda:0
Layer 27: PWCMamba, input device: cuda:0
Layer 27: PWCMamba, output device: cuda:0
Layer 28: Detect, input device: cuda:0
Layer 28: Detect, output device: cuda:0
[... the identical per-layer forward trace repeats 5 more times ...]
Ultralytics 8.3.63 🚀 Python-3.10.18 torch-2.1.1+cu118 CUDA:0 (NVIDIA GeForce RTX 2060, 6144MiB)
engine\trainer: task=detect, mode=train, model=E:/ultralytics-v8.3.63/ultralytics/models/yolo/pwcmamba/yolov8pwcm.yaml, data=E:/ultralytics-v8.3.63/motogp.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=768, save=True, save_period=-1, cache=True, device=0, workers=2, project=None, name=train12, exist_ok=False, pretrained=False, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs\detect\train12
WARNING ⚠️ no model scale passed. Assuming scale='n'.
from n params module arguments
Parsing layer 0, from=-1, current ch list length=1
0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
After parsing layer 0, save list: []
Parsing layer 1, from=-1, current ch list length=1
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
After parsing layer 1, save list: []
Parsing layer 2, from=-1, current ch list length=2
2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
After parsing layer 2, save list: []
Parsing layer 3, from=-1, current ch list length=3
3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
After parsing layer 3, save list: []
Parsing layer 4, from=-1, current ch list length=4
4 -1 1 29056 ultralytics.nn.modules.block.C2f [64, 64, 1, True]
After parsing layer 4, save list: []
Parsing layer 5, from=-1, current ch list length=5
5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
After parsing layer 5, save list: []
Parsing layer 6, from=-1, current ch list length=6
6 -1 1 115456 ultralytics.nn.modules.block.C2f [128, 128, 1, True]
After parsing layer 6, save list: []
Parsing layer 7, from=-1, current ch list length=7
7 -1 1 184640 ultralytics.nn.modules.conv.Conv [128, 160, 3, 2]
After parsing layer 7, save list: []
Parsing layer 8, from=-1, current ch list length=8
8 -1 1 644690 ultralytics.nn.modules.block.PWCMamba [160, 64]
After parsing layer 8, save list: []
Parsing layer 9, from=-1, current ch list length=9
9 -1 1 18752 ultralytics.nn.modules.block.SPPF [64, 128, 5]
After parsing layer 9, save list: []
Parsing layer 10, from=-1, current ch list length=10
10 -1 1 8320 ultralytics.nn.modules.conv.Conv [128, 64, 1, 1]
After parsing layer 10, save list: []
Parsing layer 11, from=-1, current ch list length=11
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
After parsing layer 11, save list: []
Parsing layer 12, from=[-1, 6], current ch list length=12
Concat layer 12: adding earlier layer 6 to save list
12 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]
After parsing layer 12, save list: [6]
Parsing layer 13, from=-1, current ch list length=13
13 -1 1 12416 ultralytics.nn.modules.conv.Conv [192, 64, 1, 1]
After parsing layer 13, save list: [6]
Parsing layer 14, from=-1, current ch list length=14
14 -1 1 118178 ultralytics.nn.modules.block.PWCMamba [64, 64]
After parsing layer 14, save list: [6]
Parsing layer 15, from=-1, current ch list length=15
15 -1 1 2112 ultralytics.nn.modules.conv.Conv [64, 32, 1, 1]
After parsing layer 15, save list: [6]
Parsing layer 16, from=-1, current ch list length=16
16 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
After parsing layer 16, save list: [6]
Parsing layer 17, from=[-1, 4], current ch list length=17
Concat layer 17: adding earlier layer 4 to save list
17 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]
After parsing layer 17, save list: [4, 6]
Parsing layer 18, from=-1, current ch list length=18
18 -1 1 3136 ultralytics.nn.modules.conv.Conv [96, 32, 1, 1]
After parsing layer 18, save list: [4, 6]
Parsing layer 19, from=-1, current ch list length=19
19 -1 1 34770 ultralytics.nn.modules.block.PWCMamba [32, 32]
After parsing layer 19, save list: [4, 6]
Parsing layer 20, from=-1, current ch list length=20
20 -1 1 9280 ultralytics.nn.modules.conv.Conv [32, 32, 3, 2]
After parsing layer 20, save list: [4, 6]
Parsing layer 21, from=[-1, 14], current ch list length=21
Concat layer 21: adding earlier layer 14 to save list
21 [-1, 14] 1 0 ultralytics.nn.modules.conv.Concat [1]
After parsing layer 21, save list: [4, 6, 14]
Parsing layer 22, from=-1, current ch list length=22
22 -1 1 6272 ultralytics.nn.modules.conv.Conv [96, 64, 1, 1]
After parsing layer 22, save list: [4, 6, 14]
Parsing layer 23, from=-1, current ch list length=23
23 -1 1 118178 ultralytics.nn.modules.block.PWCMamba [64, 64]
After parsing layer 23, save list: [4, 6, 14]
Parsing layer 24, from=-1, current ch list length=24
24 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
After parsing layer 24, save list: [4, 6, 14]
Parsing layer 25, from=[-1, 9], current ch list length=25
Concat layer 25: adding earlier layer 9 to save list
25 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]
After parsing layer 25, save list: [4, 6, 9, 14]
Parsing layer 26, from=-1, current ch list length=26
26 -1 1 24832 ultralytics.nn.modules.conv.Conv [192, 128, 1, 1]
After parsing layer 26, save list: [4, 6, 9, 14]
Parsing layer 27, from=-1, current ch list length=27
27 -1 1 430914 ultralytics.nn.modules.block.PWCMamba [128, 128]
After parsing layer 27, save list: [4, 6, 9, 14]
Parsing layer 28, from=[19, 23, 27], current ch list length=28
Layer 28: adding layer 19 from the from list to save list
Layer 28: adding layer 23 from the from list to save list
Layer 28: adding layer 27 from the from list to save list
28 [19, 23, 27] 1 267123 ultralytics.nn.modules.head.Detect [1, [32, 64, 128]]
After parsing layer 28, save list: [4, 6, 9, 14, 19, 23, 27]
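The parameter counts in the `Conv` rows of the table can be checked by hand: an Ultralytics `Conv` block is a bias-free Conv2d followed by BatchNorm2d, so its parameter count is c1·c2·k·k + 2·c2. A quick sanity check against three rows above (`conv_params` is a hypothetical helper written for this check):

```python
def conv_params(c1, c2, k):
    """Params of Conv2d(c1, c2, k, bias=False) + BatchNorm2d(c2):
    c1*c2*k*k conv weights plus 2*c2 for the BN scale and shift."""
    return c1 * c2 * k * k + 2 * c2

print(conv_params(3, 16, 3))    # layer 0  → 464
print(conv_params(16, 32, 3))   # layer 1  → 4672
print(conv_params(192, 64, 1))  # layer 13 → 12416
```

All three match the table, which confirms the custom layers (PWCMamba) are the only modules whose counts cannot be derived from the standard Conv formula.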
Layer 0: Conv, input device: cuda:0
Layer 0: Conv, output device: cuda:0
Layer 1: Conv, input device: cuda:0
Layer 1: Conv, output device: cuda:0
Layer 2: C2f, input device: cuda:0
Layer 2: C2f, output device: cuda:0
Layer 3: Conv, input device: cuda:0
Layer 3: Conv, output device: cuda:0
Layer 4: C2f, input device: cuda:0
Layer 4: C2f, output device: cuda:0
Layer 5: Conv, input device: cuda:0
Layer 5: Conv, output device: cuda:0
Layer 6: C2f, input device: cuda:0
Layer 6: C2f, output device: cuda:0
Layer 7: Conv, input device: cuda:0
Layer 7: Conv, output device: cuda:0
Layer 8: PWCMamba, input device: cuda:0
Layer 8: PWCMamba, output device: cuda:0
Layer 9: SPPF, input device: cuda:0
Layer 9: SPPF, output device: cuda:0
Layer 10: Conv, input device: cuda:0
Layer 10: Conv, output device: cuda:0
Layer 11: Upsample, input device: cuda:0
Layer 11: Upsample, output device: cuda:0
Layer 12: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 16, 16]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 16, 16]) (channels: 128)
Concat output shape: torch.Size([1, 192, 16, 16]) (channels after concat: 192)
Layer 12: Concat, output device: cuda:0
Layer 13: Conv, input device: cuda:0
Layer 13: Conv, output device: cuda:0
Layer 14: PWCMamba, input device: cuda:0
Layer 14: PWCMamba, output device: cuda:0
Layer 15: Conv, input device: cuda:0
Layer 15: Conv, output device: cuda:0
Layer 16: Upsample, input device: cuda:0
Layer 16: Upsample, output device: cuda:0
Layer 17: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 32, 32, 32]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 32, 32]) (channels: 64)
Concat output shape: torch.Size([1, 96, 32, 32]) (channels after concat: 96)
Layer 17: Concat, output device: cuda:0
Layer 18: Conv, input device: cuda:0
Layer 18: Conv, output device: cuda:0
Layer 19: PWCMamba, input device: cuda:0
Layer 19: PWCMamba, output device: cuda:0
Layer 20: Conv, input device: cuda:0
Layer 20: Conv, output device: cuda:0
Layer 21: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 32, 16, 16]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 16, 16]) (channels: 64)
Concat output shape: torch.Size([1, 96, 16, 16]) (channels after concat: 96)
Layer 21: Concat, output device: cuda:0
Layer 22: Conv, input device: cuda:0
Layer 22: Conv, output device: cuda:0
Layer 23: PWCMamba, input device: cuda:0
Layer 23: PWCMamba, output device: cuda:0
Layer 24: Conv, input device: cuda:0
Layer 24: Conv, output device: cuda:0
Layer 25: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 8, 8]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 8, 8]) (channels: 128)
Concat output shape: torch.Size([1, 192, 8, 8]) (channels after concat: 192)
Layer 25: Concat, output device: cuda:0
Layer 26: Conv, input device: cuda:0
Layer 26: Conv, output device: cuda:0
Layer 27: PWCMamba, input device: cuda:0
Layer 27: PWCMamba, output device: cuda:0
Layer 28: Detect, input device: cuda:0
Layer 28: Detect, output device: cuda:0
Layer 0: Conv, input device: cuda:0
Layer 0: Conv, output device: cuda:0
Layer 1: Conv, input device: cuda:0
Layer 1: Conv, output device: cuda:0
Layer 2: C2f, input device: cuda:0
Layer 2: C2f, output device: cuda:0
Layer 3: Conv, input device: cuda:0
Layer 3: Conv, output device: cuda:0
Layer 4: C2f, input device: cuda:0
Layer 4: C2f, output device: cuda:0
Layer 5: Conv, input device: cuda:0
Layer 5: Conv, output device: cuda:0
Layer 6: C2f, input device: cuda:0
Layer 6: C2f, output device: cuda:0
Layer 7: Conv, input device: cuda:0
Layer 7: Conv, output device: cuda:0
Layer 8: PWCMamba, input device: cuda:0
Layer 8: PWCMamba, output device: cuda:0
Layer 9: SPPF, input device: cuda:0
Layer 9: SPPF, output device: cuda:0
Layer 10: Conv, input device: cuda:0
Layer 10: Conv, output device: cuda:0
Layer 11: Upsample, input device: cuda:0
Layer 11: Upsample, output device: cuda:0
Layer 12: Concat, input device: cuda:0
Concat层输入张量数量: 2
输入0形状: torch.Size([1, 64, 16, 16]) (通道数: 64)
输入1形状: torch.Size([1, 128, 16, 16]) (通道数: 128)
Concat输出形状: torch.Size([1, 192, 16, 16]) (拼接后通道数: 192)
Layer 12: Concat, output device: cuda:0
Layer 13: Conv, input device: cuda:0
Layer 13: Conv, output device: cuda:0
Layer 14: PWCMamba, input device: cuda:0
Layer 14: PWCMamba, output device: cuda:0
Layer 15: Conv, input device: cuda:0
Layer 15: Conv, output device: cuda:0
Layer 16: Upsample, input device: cuda:0
Layer 16: Upsample, output device: cuda:0
Layer 17: Concat, input device: cuda:0
Concat层输入张量数量: 2
输入0形状: torch.Size([1, 32, 32, 32]) (通道数: 32)
输入1形状: torch.Size([1, 64, 32, 32]) (通道数: 64)
Concat输出形状: torch.Size([1, 96, 32, 32]) (拼接后通道数: 96)
Layer 17: Concat, output device: cuda:0
Layer 18: Conv, input device: cuda:0
Layer 18: Conv, output device: cuda:0
Layer 19: PWCMamba, input device: cuda:0
Layer 19: PWCMamba, output device: cuda:0
Layer 20: Conv, input device: cuda:0
Layer 20: Conv, output device: cuda:0
Layer 21: Concat, input device: cuda:0
Concat层输入张量数量: 2
输入0形状: torch.Size([1, 32, 16, 16]) (通道数: 32)
输入1形状: torch.Size([1, 64, 16, 16]) (通道数: 64)
Concat输出形状: torch.Size([1, 96, 16, 16]) (拼接后通道数: 96)
Layer 21: Concat, output device: cuda:0
Layer 22: Conv, input device: cuda:0
Layer 22: Conv, output device: cuda:0
Layer 23: PWCMamba, input device: cuda:0
Layer 23: PWCMamba, output device: cuda:0
Layer 24: Conv, input device: cuda:0
Layer 24: Conv, output device: cuda:0
Layer 25: Concat, input device: cuda:0
Concat层输入张量数量: 2
输入0形状: torch.Size([1, 64, 8, 8]) (通道数: 64)
输入1形状: torch.Size([1, 128, 8, 8]) (通道数: 128)
Concat输出形状: torch.Size([1, 192, 8, 8]) (拼接后通道数: 192)
Layer 25: Concat, output device: cuda:0
Layer 26: Conv, input device: cuda:0
Layer 26: Conv, output device: cuda:0
Layer 27: PWCMamba, input device: cuda:0
Layer 27: PWCMamba, output device: cuda:0
Layer 28: Detect, input device: cuda:0
Layer 28: Detect, output device: cuda:0
Layer 0: Conv, input device: cuda:0
Layer 0: Conv, output device: cuda:0
Layer 1: Conv, input device: cuda:0
Layer 1: Conv, output device: cuda:0
Layer 2: C2f, input device: cuda:0
Layer 2: C2f, output device: cuda:0
Layer 3: Conv, input device: cuda:0
Layer 3: Conv, output device: cuda:0
Layer 4: C2f, input device: cuda:0
Layer 4: C2f, output device: cuda:0
Layer 5: Conv, input device: cuda:0
Layer 5: Conv, output device: cuda:0
Layer 6: C2f, input device: cuda:0
Layer 6: C2f, output device: cuda:0
Layer 7: Conv, input device: cuda:0
Layer 7: Conv, output device: cuda:0
Layer 8: PWCMamba, input device: cuda:0
Layer 8: PWCMamba, output device: cuda:0
Layer 9: SPPF, input device: cuda:0
Layer 9: SPPF, output device: cuda:0
Layer 10: Conv, input device: cuda:0
Layer 10: Conv, output device: cuda:0
Layer 11: Upsample, input device: cuda:0
Layer 11: Upsample, output device: cuda:0
Layer 12: Concat, input device: cuda:0
Concat层输入张量数量: 2
输入0形状: torch.Size([1, 64, 16, 16]) (通道数: 64)
输入1形状: torch.Size([1, 128, 16, 16]) (通道数: 128)
Concat输出形状: torch.Size([1, 192, 16, 16]) (拼接后通道数: 192)
Layer 12: Concat, output device: cuda:0
Layer 13: Conv, input device: cuda:0
Layer 13: Conv, output device: cuda:0
Layer 14: PWCMamba, input device: cuda:0
Layer 14: PWCMamba, output device: cuda:0
Layer 15: Conv, input device: cuda:0
Layer 15: Conv, output device: cuda:0
Layer 16: Upsample, input device: cuda:0
Layer 16: Upsample, output device: cuda:0
Layer 17: Concat, input device: cuda:0
Concat层输入张量数量: 2
输入0形状: torch.Size([1, 32, 32, 32]) (通道数: 32)
输入1形状: torch.Size([1, 64, 32, 32]) (通道数: 64)
Concat输出形状: torch.Size([1, 96, 32, 32]) (拼接后通道数: 96)
Layer 17: Concat, output device: cuda:0
Layer 18: Conv, input device: cuda:0
Layer 18: Conv, output device: cuda:0
Layer 19: PWCMamba, input device: cuda:0
Layer 19: PWCMamba, output device: cuda:0
Layer 20: Conv, input device: cuda:0
Layer 20: Conv, output device: cuda:0
Layer 21: Concat, input device: cuda:0
Concat层输入张量数量: 2
输入0形状: torch.Size([1, 32, 16, 16]) (通道数: 32)
输入1形状: torch.Size([1, 64, 16, 16]) (通道数: 64)
Concat输出形状: torch.Size([1, 96, 16, 16]) (拼接后通道数: 96)
Layer 21: Concat, output device: cuda:0
Layer 22: Conv, input device: cuda:0
Layer 22: Conv, output device: cuda:0
Layer 23: PWCMamba, input device: cuda:0
Layer 23: PWCMamba, output device: cuda:0
Layer 24: Conv, input device: cuda:0
Layer 24: Conv, output device: cuda:0
Layer 25: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 8, 8]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 8, 8]) (channels: 128)
Concat output shape: torch.Size([1, 192, 8, 8]) (channels after concat: 192)
Layer 25: Concat, output device: cuda:0
Layer 26: Conv, input device: cuda:0
Layer 26: Conv, output device: cuda:0
Layer 27: PWCMamba, input device: cuda:0
Layer 27: PWCMamba, output device: cuda:0
Layer 28: Detect, input device: cuda:0
Layer 28: Detect, output device: cuda:0
Layer 0: Conv, input device: cuda:0
Layer 0: Conv, output device: cuda:0
Layer 1: Conv, input device: cuda:0
Layer 1: Conv, output device: cuda:0
Layer 2: C2f, input device: cuda:0
Layer 2: C2f, output device: cuda:0
Layer 3: Conv, input device: cuda:0
Layer 3: Conv, output device: cuda:0
Layer 4: C2f, input device: cuda:0
Layer 4: C2f, output device: cuda:0
Layer 5: Conv, input device: cuda:0
Layer 5: Conv, output device: cuda:0
Layer 6: C2f, input device: cuda:0
Layer 6: C2f, output device: cuda:0
Layer 7: Conv, input device: cuda:0
Layer 7: Conv, output device: cuda:0
Layer 8: PWCMamba, input device: cuda:0
Layer 8: PWCMamba, output device: cuda:0
Layer 9: SPPF, input device: cuda:0
Layer 9: SPPF, output device: cuda:0
Layer 10: Conv, input device: cuda:0
Layer 10: Conv, output device: cuda:0
Layer 11: Upsample, input device: cuda:0
Layer 11: Upsample, output device: cuda:0
Layer 12: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 4, 4]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 4, 4]) (channels: 128)
Concat output shape: torch.Size([1, 192, 4, 4]) (channels after concat: 192)
Layer 12: Concat, output device: cuda:0
Layer 13: Conv, input device: cuda:0
Layer 13: Conv, output device: cuda:0
Layer 14: PWCMamba, input device: cuda:0
Layer 14: PWCMamba, output device: cuda:0
Layer 15: Conv, input device: cuda:0
Layer 15: Conv, output device: cuda:0
Layer 16: Upsample, input device: cuda:0
Layer 16: Upsample, output device: cuda:0
Layer 17: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 32, 8, 8]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 7, 7]) (channels: 64)
Concat failed: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 7 for tensor number 1 in the list.
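Every successful Concat above obeys the rule torch.cat enforces along dim=1: all dimensions except the channel dimension must match exactly, and channels add up. The failure here (8 vs 7 in the spatial dims) is that rule firing. A minimal pure-Python sketch of the shape rule, so it can be checked without running the model (`cat_channels` is an illustrative helper, not part of torch or Ultralytics):

```python
def cat_channels(shapes):
    """Shape rule for torch.cat(x, dim=1): every dim except dim 1 must match."""
    base = shapes[0]
    for i, s in enumerate(shapes[1:], start=1):
        for d, (a, b) in enumerate(zip(base, s)):
            if d != 1 and a != b:
                raise RuntimeError(
                    f"Sizes of tensors must match except in dimension 1. "
                    f"Expected size {a} but got size {b} for tensor number {i} in the list."
                )
    out = list(base)
    out[1] = sum(s[1] for s in shapes)  # channels are summed
    return tuple(out)

cat_channels([(1, 32, 8, 8), (1, 64, 8, 8)])    # (1, 96, 8, 8), as in the passing Concats
# cat_channels([(1, 32, 8, 8), (1, 64, 7, 7)])  # RuntimeError, as in the failing one
```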
Layer 0: Conv, input device: cuda:0
Layer 0: Conv, output device: cuda:0
Layer 1: Conv, input device: cuda:0
Layer 1: Conv, output device: cuda:0
Layer 2: C2f, input device: cuda:0
Layer 2: C2f, output device: cuda:0
Layer 3: Conv, input device: cuda:0
Layer 3: Conv, output device: cuda:0
Layer 4: C2f, input device: cuda:0
Layer 4: C2f, output device: cuda:0
Layer 5: Conv, input device: cuda:0
Layer 5: Conv, output device: cuda:0
Layer 6: C2f, input device: cuda:0
Layer 6: C2f, output device: cuda:0
Layer 7: Conv, input device: cuda:0
Layer 7: Conv, output device: cuda:0
Layer 8: PWCMamba, input device: cuda:0
Layer 8: PWCMamba, output device: cuda:0
Layer 9: SPPF, input device: cuda:0
Layer 9: SPPF, output device: cuda:0
Layer 10: Conv, input device: cuda:0
Layer 10: Conv, output device: cuda:0
Layer 11: Upsample, input device: cuda:0
Layer 11: Upsample, output device: cuda:0
Layer 12: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 40, 40]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 40, 40]) (channels: 128)
Concat output shape: torch.Size([1, 192, 40, 40]) (channels after concat: 192)
Layer 12: Concat, output device: cuda:0
Layer 13: Conv, input device: cuda:0
Layer 13: Conv, output device: cuda:0
Layer 14: PWCMamba, input device: cuda:0
Layer 14: PWCMamba, output device: cuda:0
Layer 15: Conv, input device: cuda:0
Layer 15: Conv, output device: cuda:0
Layer 16: Upsample, input device: cuda:0
Layer 16: Upsample, output device: cuda:0
Layer 17: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 32, 80, 80]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 80, 80]) (channels: 64)
Concat output shape: torch.Size([1, 96, 80, 80]) (channels after concat: 96)
Layer 17: Concat, output device: cuda:0
Layer 18: Conv, input device: cuda:0
Layer 18: Conv, output device: cuda:0
Layer 19: PWCMamba, input device: cuda:0
Layer 19: PWCMamba, output device: cuda:0
Layer 20: Conv, input device: cuda:0
Layer 20: Conv, output device: cuda:0
Layer 21: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 32, 40, 40]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 40, 40]) (channels: 64)
Concat output shape: torch.Size([1, 96, 40, 40]) (channels after concat: 96)
Layer 21: Concat, output device: cuda:0
Layer 22: Conv, input device: cuda:0
Layer 22: Conv, output device: cuda:0
Layer 23: PWCMamba, input device: cuda:0
Layer 23: PWCMamba, output device: cuda:0
Layer 24: Conv, input device: cuda:0
Layer 24: Conv, output device: cuda:0
Layer 25: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([1, 64, 20, 20]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 20, 20]) (channels: 128)
Concat output shape: torch.Size([1, 192, 20, 20]) (channels after concat: 192)
Layer 25: Concat, output device: cuda:0
Layer 26: Conv, input device: cuda:0
Layer 26: Conv, output device: cuda:0
Layer 27: PWCMamba, input device: cuda:0
Layer 27: PWCMamba, output device: cuda:0
Layer 28: Detect, input device: cuda:0
Layer 28: Detect, output device: cuda:0
YOLOv8pwcm summary: 643 layers, 2,170,157 parameters, 2,170,141 gradients, 5.0 GFLOPs
Freezing layer 'model.28.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks...
AMP: checks skipped ⚠️. Unable to load YOLO11n for AMP checks due to possible Ultralytics package modifications. Setting 'amp=True'. If you experience zero-mAP or NaN losses you can disable AMP with amp=False.
WARNING ⚠️ imgsz=[768] must be multiple of max stride 51, updating to [816]
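The warning's arithmetic is just rounding the requested image size up to the nearest multiple of the detected max stride, which is how 768 becomes 816 here (and 816 later becomes 832 once the stride is re-detected as 32). A sketch of that computation (`round_imgsz` is a hypothetical helper mirroring the warning, not the actual Ultralytics function):

```python
import math

def round_imgsz(imgsz: int, stride: int) -> int:
    """Round an image size up to the nearest multiple of the max stride."""
    return math.ceil(imgsz / stride) * stride

round_imgsz(768, 51)  # 816, matching this warning (stride detected as 51)
round_imgsz(816, 32)  # 832, matching the later warning (stride 32)
```

Note that a max stride of 51 is itself unusual for a YOLO-style model (strides are normally powers of two, up to 32); it is a symptom of the same spatial-size mismatch seen in the failed dummy forward pass above.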
train: Scanning E:\ultralytics-v8.3.63\datasets\Motogp.v1i.yolov8\train\labels.cache... 97 images, 11 backgrounds, 0 corrupt: 100%|██████████| 97/97 [00:00<?, ?it/s]
WARNING ⚠️ cache='ram' may produce non-deterministic training results. Consider cache='disk' as a deterministic alternative if your disk space allows.
train: Caching images (0.1GB RAM): 100%|██████████| 97/97 [00:00<00:00, 549.04it/s]
val: Scanning E:\ultralytics-v8.3.63\datasets\Motogp.v1i.yolov8\valid\labels.cache... 28 images, 4 backgrounds, 0 corrupt: 100%|██████████| 28/28 [00:00<?, ?it/s]
WARNING ⚠️ cache='ram' may produce non-deterministic training results. Consider cache='disk' as a deterministic alternative if your disk space allows.
val: Caching images (0.0GB RAM): 100%|██████████| 28/28 [00:00<00:00, 282.27it/s]
WARNING ⚠️ imgsz=[816] must be multiple of max stride 32, updating to [832]
No module named 'seaborn'
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically...
optimizer: AdamW(lr=0.002, momentum=0.9) with parameter groups 145 weight(decay=0.0), 177 weight(decay=0.0005), 161 bias(decay=0.0)
Image sizes 816 train, 816 val
Using 2 dataloader workers
Logging results to runs\detect\train12
Starting training for 100 epochs...
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
Layer 0: Conv, input device: cuda:0
0%|          | 0/7 [00:00<?, ?it/s]
Layer 0: Conv, output device: cuda:0
Layer 1: Conv, input device: cuda:0
Layer 1: Conv, output device: cuda:0
Layer 2: C2f, input device: cuda:0
Layer 2: C2f, output device: cuda:0
Layer 3: Conv, input device: cuda:0
Layer 3: Conv, output device: cuda:0
Layer 4: C2f, input device: cuda:0
Layer 4: C2f, output device: cuda:0
Layer 5: Conv, input device: cuda:0
Layer 5: Conv, output device: cuda:0
Layer 6: C2f, input device: cuda:0
Layer 6: C2f, output device: cuda:0
Layer 7: Conv, input device: cuda:0
Layer 7: Conv, output device: cuda:0
Layer 8: PWCMamba, input device: cuda:0
Layer 8: PWCMamba, output device: cuda:0
Layer 9: SPPF, input device: cuda:0
Layer 9: SPPF, output device: cuda:0
Layer 10: Conv, input device: cuda:0
Layer 10: Conv, output device: cuda:0
Layer 11: Upsample, input device: cuda:0
Layer 11: Upsample, output device: cuda:0
Layer 12: Concat, input device: cuda:0
Concat layer input tensor count: 2
Input 0 shape: torch.Size([16, 64, 52, 52]) (channels: 64)
Input 1 shape: torch.Size([16, 128, 51, 51]) (channels: 128)
Concat failed: Sizes of tensors must match except in dimension 1. Expected size 52 but got size 51 for tensor number 1 in the list.
0%| | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
File "E:\ultralytics-v8.3.63\ultralytics\models\yolo\pwcmamba\ceshi.py", line 21, in <module>
train()
File "E:\ultralytics-v8.3.63\ultralytics\models\yolo\pwcmamba\ceshi.py", line 8, in train
results = model.train(
File "E:\ultralytics-v8.3.63\ultralytics\engine\model.py", line 806, in train
self.trainer.train()
File "E:\ultralytics-v8.3.63\ultralytics\engine\trainer.py", line 207, in train
self._do_train(world_size)
File "E:\ultralytics-v8.3.63\ultralytics\engine\trainer.py", line 381, in _do_train
self.loss, self.loss_items = self.model(batch)
File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 525, in forward
return super().forward(x, *args, **kwargs)
File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 110, in forward
return self.predict(x, *args, **kwargs)
File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 128, in predict
return self._predict_once(x, profile, visualize, embed)
File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 175, in _predict_once
x = m(x)  # forward pass
File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "E:\ultralytics-v8.3.63\ultralytics\nn\modules\conv.py", line 350, in forward
result = torch.cat(x, dim=self.d)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 52 but got size 51 for tensor number 1 in the list.
Process finished with exit code 1
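The 52-vs-51 mismatch in the traceback follows directly from the stride arithmetic at imgsz=816, which is not a multiple of 32. Assuming standard k=3, s=2, p=1 downsampling convs (helper names below are ours, for illustration): the stride-16 backbone feature (layer 6, the skip into Concat 12) comes out 51×51, while upsampling the stride-32 feature (26×26) by 2 yields 52×52. At imgsz=832 both paths agree at 52×52, which is why the stride-32 warning rounds up to 832:

```python
def conv_out(size: int, k: int = 3, s: int = 2, p: int = 1) -> int:
    """Spatial output size of a stride-2 conv (floor division, as in PyTorch)."""
    return (size + 2 * p - k) // s + 1

def trace(imgsz: int) -> list[int]:
    """Feature-map sizes after each of the five stride-2 downsamples (P1..P5)."""
    sizes = [imgsz]
    for _ in range(5):
        sizes.append(conv_out(sizes[-1]))
    return sizes

trace(816)  # [816, 408, 204, 102, 51, 26]: P4 = 51, but 2 * P5 = 52 -> mismatch
trace(832)  # [832, 416, 208, 104, 52, 26]: P4 = 52 = 2 * P5 -> shapes align
```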