MotoGP Ignition: Spotlight Event #8 Is Here!

Join the MotoGP Ignition Circuit of the Americas event: complete a quiz about the track's history, predict the Championship riders' points, hold an Epic+ tier card, and take part in the weekly Championship. The players with the top quiz scores and the closest points predictions will win REVV token rewards. Prizes are 2,000, 1,000, and 500 REVV. More information is available on the official social media channels.

The Circuit of the Americas Takes the Spotlight!

Hi, MotoGP fans! We hope you enjoyed last week's seventh MotoGP Ignition Spotlight event featuring the Ricardo Tormo circuit! This week, we're excited to put the spotlight on the Circuit of the Americas! Read on to find out how to take part and win!

How to Participate

To enter this event, you'll need to complete a quick quiz of fun facts about the featured circuit and its history (the quiz link is below). As the expected tiebreaker, you'll also need to guess the total cumulative points scored by three riders in next weekend's Championship. To be eligible, winners must also hold an Epic+ tier Rider, Team, Bike, or any rider's Top Card (you must keep holding the card until winners are announced on Discord) and take part in the weekly Championship event.

The players with the top quiz scores and the closest guesses will win bonus REVV! Any remaining ties will be settled by your personal Championship ranking. You must complete the quiz before the weekly Championship begins.

Prizes

This week's top quiz scorer, with ties broken by the closest guess of the selected riders' points, will win the grand prize of 2,000 REVV.

2nd place will receive 1,000 REVV.

3rd through 10th place will each receive 500 REVV.

Quick recap:

1. Complete the quiz: https://forms.gle/7D5nZkeBDnY6pnzo6

2. Hold an Epic+ tier MotoGP Ignition card (Rider / Team / Bike / Top Card).

3. Take part in the weekly MotoGP Ignition Championship event.

Got questions? Reach out to us!

You can learn more about MotoGP™ Ignition, the REVV token, and the REVV Motorsport platform on our official Twitter, Facebook, and Telegram channels, and contact us there with any questions or comments!

 

13: Conv, output device: cuda:0 Layer 14: PWCMamba, input device: cuda:0 Layer 14: PWCMamba, output device: cuda:0 Layer 15: Conv, input device: cuda:0 Layer 15: Conv, output device: cuda:0 Layer 16: Upsample, input device: cuda:0 Layer 16: Upsample, output device: cuda:0 Layer 17: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 32, 32, 32]) (通道数: 32) 输入1形状: torch.Size([1, 64, 32, 32]) (通道数: 64) Concat输出形状: torch.Size([1, 96, 32, 32]) (拼接后通道数: 96) Layer 17: Concat, output device: cuda:0 Layer 18: Conv, input device: cuda:0 Layer 18: Conv, output device: cuda:0 Layer 19: PWCMamba, input device: cuda:0 Layer 19: PWCMamba, output device: cuda:0 Layer 20: Conv, input device: cuda:0 Layer 20: Conv, output device: cuda:0 Layer 21: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 32, 16, 16]) (通道数: 32) 输入1形状: torch.Size([1, 64, 16, 16]) (通道数: 64) Concat输出形状: torch.Size([1, 96, 16, 16]) (拼接后通道数: 96) Layer 21: Concat, output device: cuda:0 Layer 22: Conv, input device: cuda:0 Layer 22: Conv, output device: cuda:0 Layer 23: PWCMamba, input device: cuda:0 Layer 23: PWCMamba, output device: cuda:0 Layer 24: Conv, input device: cuda:0 Layer 24: Conv, output device: cuda:0 Layer 25: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 64, 8, 8]) (通道数: 64) 输入1形状: torch.Size([1, 128, 8, 8]) (通道数: 128) Concat输出形状: torch.Size([1, 192, 8, 8]) (拼接后通道数: 192) Layer 25: Concat, output device: cuda:0 Layer 26: Conv, input device: cuda:0 Layer 26: Conv, output device: cuda:0 Layer 27: PWCMamba, input device: cuda:0 Layer 27: PWCMamba, output device: cuda:0 Layer 28: Detect, input device: cuda:0 Layer 28: Detect, output device: cuda:0 Layer 0: Conv, input device: cuda:0 Layer 0: Conv, output device: cuda:0 Layer 1: Conv, input device: cuda:0 Layer 1: Conv, output device: cuda:0 Layer 2: C2f, input device: cuda:0 Layer 2: C2f, output device: cuda:0 Layer 3: Conv, input device: cuda:0 Layer 3: Conv, output device: cuda:0 Layer 4: C2f, 
input device: cuda:0 Layer 4: C2f, output device: cuda:0 Layer 5: Conv, input device: cuda:0 Layer 5: Conv, output device: cuda:0 Layer 6: C2f, input device: cuda:0 Layer 6: C2f, output device: cuda:0 Layer 7: Conv, input device: cuda:0 Layer 7: Conv, output device: cuda:0 Layer 8: PWCMamba, input device: cuda:0 Layer 8: PWCMamba, output device: cuda:0 Layer 9: SPPF, input device: cuda:0 Layer 9: SPPF, output device: cuda:0 Layer 10: Conv, input device: cuda:0 Layer 10: Conv, output device: cuda:0 Layer 11: Upsample, input device: cuda:0 Layer 11: Upsample, output device: cuda:0 Layer 12: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 64, 16, 16]) (通道数: 64) 输入1形状: torch.Size([1, 128, 16, 16]) (通道数: 128) Concat输出形状: torch.Size([1, 192, 16, 16]) (拼接后通道数: 192) Layer 12: Concat, output device: cuda:0 Layer 13: Conv, input device: cuda:0 Layer 13: Conv, output device: cuda:0 Layer 14: PWCMamba, input device: cuda:0 Layer 14: PWCMamba, output device: cuda:0 Layer 15: Conv, input device: cuda:0 Layer 15: Conv, output device: cuda:0 Layer 16: Upsample, input device: cuda:0 Layer 16: Upsample, output device: cuda:0 Layer 17: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 32, 32, 32]) (通道数: 32) 输入1形状: torch.Size([1, 64, 32, 32]) (通道数: 64) Concat输出形状: torch.Size([1, 96, 32, 32]) (拼接后通道数: 96) Layer 17: Concat, output device: cuda:0 Layer 18: Conv, input device: cuda:0 Layer 18: Conv, output device: cuda:0 Layer 19: PWCMamba, input device: cuda:0 Layer 19: PWCMamba, output device: cuda:0 Layer 20: Conv, input device: cuda:0 Layer 20: Conv, output device: cuda:0 Layer 21: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 32, 16, 16]) (通道数: 32) 输入1形状: torch.Size([1, 64, 16, 16]) (通道数: 64) Concat输出形状: torch.Size([1, 96, 16, 16]) (拼接后通道数: 96) Layer 21: Concat, output device: cuda:0 Layer 22: Conv, input device: cuda:0 Layer 22: Conv, output device: cuda:0 Layer 23: PWCMamba, input device: cuda:0 Layer 23: PWCMamba, output 
device: cuda:0 Layer 24: Conv, input device: cuda:0 Layer 24: Conv, output device: cuda:0 Layer 25: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 64, 8, 8]) (通道数: 64) 输入1形状: torch.Size([1, 128, 8, 8]) (通道数: 128) Concat输出形状: torch.Size([1, 192, 8, 8]) (拼接后通道数: 192) Layer 25: Concat, output device: cuda:0 Layer 26: Conv, input device: cuda:0 Layer 26: Conv, output device: cuda:0 Layer 27: PWCMamba, input device: cuda:0 Layer 27: PWCMamba, output device: cuda:0 Layer 28: Detect, input device: cuda:0 Layer 28: Detect, output device: cuda:0 Layer 0: Conv, input device: cuda:0 Layer 0: Conv, output device: cuda:0 Layer 1: Conv, input device: cuda:0 Layer 1: Conv, output device: cuda:0 Layer 2: C2f, input device: cuda:0 Layer 2: C2f, output device: cuda:0 Layer 3: Conv, input device: cuda:0 Layer 3: Conv, output device: cuda:0 Layer 4: C2f, input device: cuda:0 Layer 4: C2f, output device: cuda:0 Layer 5: Conv, input device: cuda:0 Layer 5: Conv, output device: cuda:0 Layer 6: C2f, input device: cuda:0 Layer 6: C2f, output device: cuda:0 Layer 7: Conv, input device: cuda:0 Layer 7: Conv, output device: cuda:0 Layer 8: PWCMamba, input device: cuda:0 Layer 8: PWCMamba, output device: cuda:0 Layer 9: SPPF, input device: cuda:0 Layer 9: SPPF, output device: cuda:0 Layer 10: Conv, input device: cuda:0 Layer 10: Conv, output device: cuda:0 Layer 11: Upsample, input device: cuda:0 Layer 11: Upsample, output device: cuda:0 Layer 12: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 64, 16, 16]) (通道数: 64) 输入1形状: torch.Size([1, 128, 16, 16]) (通道数: 128) Concat输出形状: torch.Size([1, 192, 16, 16]) (拼接后通道数: 192) Layer 12: Concat, output device: cuda:0 Layer 13: Conv, input device: cuda:0 Layer 13: Conv, output device: cuda:0 Layer 14: PWCMamba, input device: cuda:0 Layer 14: PWCMamba, output device: cuda:0 Layer 15: Conv, input device: cuda:0 Layer 15: Conv, output device: cuda:0 Layer 16: Upsample, input device: cuda:0 Layer 16: Upsample, output 
device: cuda:0 Layer 17: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 32, 32, 32]) (通道数: 32) 输入1形状: torch.Size([1, 64, 32, 32]) (通道数: 64) Concat输出形状: torch.Size([1, 96, 32, 32]) (拼接后通道数: 96) Layer 17: Concat, output device: cuda:0 Layer 18: Conv, input device: cuda:0 Layer 18: Conv, output device: cuda:0 Layer 19: PWCMamba, input device: cuda:0 Layer 19: PWCMamba, output device: cuda:0 Layer 20: Conv, input device: cuda:0 Layer 20: Conv, output device: cuda:0 Layer 21: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 32, 16, 16]) (通道数: 32) 输入1形状: torch.Size([1, 64, 16, 16]) (通道数: 64) Concat输出形状: torch.Size([1, 96, 16, 16]) (拼接后通道数: 96) Layer 21: Concat, output device: cuda:0 Layer 22: Conv, input device: cuda:0 Layer 22: Conv, output device: cuda:0 Layer 23: PWCMamba, input device: cuda:0 Layer 23: PWCMamba, output device: cuda:0 Layer 24: Conv, input device: cuda:0 Layer 24: Conv, output device: cuda:0 Layer 25: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 64, 8, 8]) (通道数: 64) 输入1形状: torch.Size([1, 128, 8, 8]) (通道数: 128) Concat输出形状: torch.Size([1, 192, 8, 8]) (拼接后通道数: 192) Layer 25: Concat, output device: cuda:0 Layer 26: Conv, input device: cuda:0 Layer 26: Conv, output device: cuda:0 Layer 27: PWCMamba, input device: cuda:0 Layer 27: PWCMamba, output device: cuda:0 Layer 28: Detect, input device: cuda:0 Layer 28: Detect, output device: cuda:0 Layer 0: Conv, input device: cuda:0 Layer 0: Conv, output device: cuda:0 Layer 1: Conv, input device: cuda:0 Layer 1: Conv, output device: cuda:0 Layer 2: C2f, input device: cuda:0 Layer 2: C2f, output device: cuda:0 Layer 3: Conv, input device: cuda:0 Layer 3: Conv, output device: cuda:0 Layer 4: C2f, input device: cuda:0 Layer 4: C2f, output device: cuda:0 Layer 5: Conv, input device: cuda:0 Layer 5: Conv, output device: cuda:0 Layer 6: C2f, input device: cuda:0 Layer 6: C2f, output device: cuda:0 Layer 7: Conv, input device: cuda:0 Layer 7: Conv, output 
device: cuda:0 Layer 8: PWCMamba, input device: cuda:0 Layer 8: PWCMamba, output device: cuda:0 Layer 9: SPPF, input device: cuda:0 Layer 9: SPPF, output device: cuda:0 Layer 10: Conv, input device: cuda:0 Layer 10: Conv, output device: cuda:0 Layer 11: Upsample, input device: cuda:0 Layer 11: Upsample, output device: cuda:0 Layer 12: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 64, 4, 4]) (通道数: 64) 输入1形状: torch.Size([1, 128, 4, 4]) (通道数: 128) Concat输出形状: torch.Size([1, 192, 4, 4]) (拼接后通道数: 192) Layer 12: Concat, output device: cuda:0 Layer 13: Conv, input device: cuda:0 Layer 13: Conv, output device: cuda:0 Layer 14: PWCMamba, input device: cuda:0 Layer 14: PWCMamba, output device: cuda:0 Layer 15: Conv, input device: cuda:0 Layer 15: Conv, output device: cuda:0 Layer 16: Upsample, input device: cuda:0 Layer 16: Upsample, output device: cuda:0 Layer 17: Concat, input device: cuda:0 Concat层输入张量数量: 2 输入0形状: torch.Size([1, 32, 8, 8]) (通道数: 32) 输入1形状: torch.Size([1, 64, 7, 7]) (通道数: 64) Concat拼接失败: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 7 for tensor number 1 in the list. 
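The size-mismatch failure above is the generic `torch.cat` constraint: input tensors may differ only along the concatenation dimension, so a one-pixel drift in spatial size (8×8 vs 7×7) is fatal even when the channels are fine. A minimal sketch reproducing the mismatch, plus one defensive workaround; `safe_concat` is a hypothetical helper for illustration, not part of Ultralytics:

```python
import torch
import torch.nn.functional as F

# Two feature maps whose spatial sizes drifted apart by one pixel,
# as in the log: [1, 32, 8, 8] vs [1, 64, 7, 7].
a = torch.zeros(1, 32, 8, 8)
b = torch.zeros(1, 64, 7, 7)

try:
    torch.cat([a, b], dim=1)  # channel-wise concat, like the Concat module
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 1 ...

# Hypothetical workaround: resample every input to the spatial size of
# the first tensor before concatenating, so off-by-one maps still stack.
def safe_concat(tensors, dim=1):
    h, w = tensors[0].shape[2:]
    aligned = [t if t.shape[2:] == (h, w)
               else F.interpolate(t, size=(h, w), mode="nearest")
               for t in tensors]
    return torch.cat(aligned, dim=dim)

print(safe_concat([a, b]).shape)  # torch.Size([1, 96, 8, 8])
```

Resampling hides the symptom rather than fixing the cause; the cleaner fix is to make the branch that produces 7×7 preserve size (padding/stride choices in the custom module), so the skip connections stay aligned at every scale.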
Layer 0: Conv, input device: cuda:0
... (layers 0-11 as above)
Layer 12: Concat, input device: cuda:0
Number of Concat input tensors: 2
Input 0 shape: torch.Size([1, 64, 40, 40]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 40, 40]) (channels: 128)
Concat output shape: torch.Size([1, 192, 40, 40]) (channels after concatenation: 192)
Layer 12: Concat, output device: cuda:0
... (layers 13-16 as above)
Layer 17: Concat, input device: cuda:0
Number of Concat input tensors: 2
Input 0 shape: torch.Size([1, 32, 80, 80]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 80, 80]) (channels: 64)
Concat output shape: torch.Size([1, 96, 80, 80]) (channels after concatenation: 96)
Layer 17: Concat, output device: cuda:0
... (layers 18-20 as above)
Layer 21: Concat, input device: cuda:0
Number of Concat input tensors: 2
Input 0 shape: torch.Size([1, 32, 40, 40]) (channels: 32)
Input 1 shape: torch.Size([1, 64, 40, 40]) (channels: 64)
Concat output shape: torch.Size([1, 96, 40, 40]) (channels after concatenation: 96)
Layer 21: Concat, output device: cuda:0
... (layers 22-24 as above)
Layer 25: Concat, input device: cuda:0
Number of Concat input tensors: 2
Input 0 shape: torch.Size([1, 64, 20, 20]) (channels: 64)
Input 1 shape: torch.Size([1, 128, 20, 20]) (channels: 128)
Concat output shape: torch.Size([1, 192, 20, 20]) (channels after concatenation: 192)
Layer 25: Concat, output device: cuda:0
... (layers 26-27 as above)
Layer 28: Detect, input device: cuda:0
Layer 28: Detect, output device: cuda:0
YOLOv8pwcm summary: 643 layers, 2,170,157 parameters, 2,170,141 gradients, 5.0 GFLOPs
Freezing layer 'model.28.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks...
AMP: checks skipped ⚠️. Unable to load YOLO11n for AMP checks due to possible Ultralytics package modifications. Setting 'amp=True'. If you experience zero-mAP or NaN losses you can disable AMP with amp=False.
WARNING ⚠️ imgsz=[768] must be multiple of max stride 51, updating to [816]
train: Scanning E:\ultralytics-v8.3.63\datasets\Motogp.v1i.yolov8\train\labels.cache... 97 images, 11 backgrounds, 0 corrupt: 100%|██████████| 97/97 [00:00<?, ?it/s]
WARNING ⚠️ cache='ram' may produce non-deterministic training results. Consider cache='disk' as a deterministic alternative if your disk space allows.
train: Caching images (0.1GB RAM): 100%|██████████| 97/97 [00:00<00:00, 549.04it/s]
val: Scanning E:\ultralytics-v8.3.63\datasets\Motogp.v1i.yolov8\valid\labels.cache... 28 images, 4 backgrounds, 0 corrupt: 100%|██████████| 28/28 [00:00<?, ?it/s]
WARNING ⚠️ cache='ram' may produce non-deterministic training results. Consider cache='disk' as a deterministic alternative if your disk space allows.
val: Caching images (0.0GB RAM): 100%|██████████| 28/28 [00:00<00:00, 282.27it/s]
WARNING ⚠️ imgsz=[816] must be multiple of max stride 32, updating to [832]
No module named 'seaborn'
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically...
optimizer: AdamW(lr=0.002, momentum=0.9) with parameter groups 145 weight(decay=0.0), 177 weight(decay=0.0005), 161 bias(decay=0.0)
Image sizes 816 train, 816 val
Using 2 dataloader workers
Logging results to runs\detect\train12
Starting training for 100 epochs...

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
  0%|          | 0/7 [00:00<?, ?it/s]
Layer 0: Conv, input device: cuda:0
... (layers 0-11 as above, now with batch size 16)
Layer 12: Concat, input device: cuda:0
Number of Concat input tensors: 2
Input 0 shape: torch.Size([16, 64, 52, 52]) (channels: 64)
Input 1 shape: torch.Size([16, 128, 51, 51]) (channels: 128)
Concat failed: Sizes of tensors must match except in dimension 1. Expected size 52 but got size 51 for tensor number 1 in the list.
  0%|          | 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\ultralytics-v8.3.63\ultralytics\models\yolo\pwcmamba\ceshi.py", line 21, in <module>
    train()
  File "E:\ultralytics-v8.3.63\ultralytics\models\yolo\pwcmamba\ceshi.py", line 8, in train
    results = model.train(
  File "E:\ultralytics-v8.3.63\ultralytics\engine\model.py", line 806, in train
    self.trainer.train()
  File "E:\ultralytics-v8.3.63\ultralytics\engine\trainer.py", line 207, in train
    self._do_train(world_size)
  File "E:\ultralytics-v8.3.63\ultralytics\engine\trainer.py", line 381, in _do_train
    self.loss, self.loss_items = self.model(batch)
  File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 525, in forward
    return super().forward(x, *args, **kwargs)
  File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 110, in forward
    return self.predict(x, *args, **kwargs)
  File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 128, in predict
    return self._predict_once(x, profile, visualize, embed)
  File "E:\ultralytics-v8.3.63\ultralytics\nn\tasks.py", line 175, in _predict_once
    x = m(x)  # forward pass
  File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda\envs\mamba\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ultralytics-v8.3.63\ultralytics\nn\modules\conv.py", line 350, in forward
    result = torch.cat(x, dim=self.d)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 52 but got size 51 for tensor number 1 in the list.
Process finished with exit code 1
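One plausible reading of the crash: the log rounds imgsz twice against two different "max stride" values (768 → 816 for stride 51, then 816 → 832 for stride 32), and 832 is not a multiple of 51. So one branch of the network arrives at the 1/16-scale grid as 52×52 (832 / 16) while the other produces 51×51 (816 / 16), which is exactly the 52-vs-51 mismatch in the traceback. The arithmetic can be checked with a small sketch; `check_imgsz` here is a simplified stand-in for Ultralytics' own image-size check, not the actual implementation:

```python
import math

def check_imgsz(imgsz: int, max_stride: int) -> int:
    """Round imgsz up to the nearest multiple of max_stride,
    loosely mirroring the WARNING lines in the log above."""
    new = math.ceil(imgsz / max_stride) * max_stride
    if new != imgsz:
        print(f"WARNING imgsz={imgsz} must be multiple of max stride {max_stride}, updating to {new}")
    return new

# Two successive roundings with *different* stride values:
print(check_imgsz(768, 51))  # 816
print(check_imgsz(816, 32))  # 832

# 832 is not a multiple of 51, so the two constraints cannot both hold:
print(832 % 51)  # non-zero remainder -> the 52 vs 51 crash
```

The odd stride estimate of 51 (instead of the usual 32) suggests the custom PWCMamba layers shrink their feature maps by a pixel at some sizes, so the stride probe measured a non-power-of-two reduction; fixing that module's padding so every stage preserves the expected power-of-two grid should make both warnings agree and remove the mismatch.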