model.train() error: 'bool' object is not callable

This post shares a personally verified solution to the "'bool' object is not callable" error that comes up when switching a PyTorch model into evaluation or training mode, helping developers quickly fix this common mistake.


Reposted from:
https://discuss.pytorch.org/t/model-eval-error-bool-object-is-not-callable/53851/2
Tested it myself: it works!
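In short, "'bool' object is not callable" on model.train() or model.eval() almost always means the method has been shadowed by a plain boolean assigned to an attribute of the same name somewhere in your own code (e.g. model.train = True), which is also what the linked thread boils down to. Below is a minimal sketch that reproduces the error and the fix; the nn.Linear module and the is_training flag name are illustrative choices, not code from the original post.

import torch.nn as nn

model = nn.Linear(4, 2)

# Bug: this overwrites nn.Module.train() on this instance with a plain bool,
# so the method can no longer be called.
model.train = True

# model.train()   # would now raise: TypeError: 'bool' object is not callable

# Fix: remove the shadowing attribute (better yet, never assign to it and
# keep such state in a separately named flag, e.g. is_training).
del model.train
model.train()      # works again and puts the module in training mode

Note that the boolean flag PyTorch itself maintains is model.training; model.train() and model.eval() are the methods that toggle it, so assigning a bool directly to either method name is exactly what triggers the error.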

RuntimeError                              Traceback (most recent call last)
Cell In[19], line 2
      1 # Train the model on the COCO8 example dataset for 100 epochs
----> 2 results = model.train(data="C:\\Users\\asus\\Downloads\\coco8.yaml", epochs=100, imgsz=640)

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\model.py:799, in Model.train(self, trainer, **kwargs)
    796 self.model = self.trainer.model
    798 self.trainer.hub_session = self.session  # attach optional HUB session
--> 799 self.trainer.train()
    800 # Update model and cfg after training
    801 if RANK in {-1, 0}:

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\trainer.py:227, in BaseTrainer.train(self)
    224     ddp_cleanup(self, str(file))
    226 else:
--> 227     self._do_train(world_size)

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\trainer.py:348, in BaseTrainer._do_train(self, world_size)
    346 if world_size > 1:
    347     self._setup_ddp(world_size)
--> 348 self._setup_train(world_size)
    350 nb = len(self.train_loader)  # number of batches
    351 nw = max(round(self.args.warmup_epochs * nb), 100) if self.args.warmup_epochs > 0 else -1  # warmup iterations

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\trainer.py:285, in BaseTrainer._setup_train(self, world_size)
    283 if self.amp and RANK in {-1, 0}:  # Single-GPU and DDP
    284     callbacks_backup = callbacks.default_callbacks.copy()  # backup callbacks as check_amp() resets them
--> 285     self.amp = torch.tensor(check_amp(self.model), device=self.device)
    286     callbacks.default_callbacks = callbacks_backup  # restore callbacks
    287 if RANK > -1 and world_size > 1:  # DDP

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\utils\checks.py:782, in check_amp(model)
    779 try:
    780     from ultralytics import YOLO
--> 782     assert amp_allclose(YOLO("yolo11n.pt"), im)
    783     LOGGER.info(f"{prefix}checks passed ✅")
    784 except ConnectionError:

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\utils\checks.py:770, in check_amp.<locals>.amp_allclose(m, im)
    768 batch = [im] * 8
    769 imgsz = max(256, int(model.stride.max() * 4))  # max stride P5-32 and P6-64
--> 770 a = m(batch, imgsz=imgsz, device=device, verbose=False)[0].boxes.data  # FP32 inference
    771 with autocast(enabled=True):
    772     b = m(batch, imgsz=imgsz, device=device, verbose=False)[0].boxes.data  # AMP inference

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\model.py:185, in Model.__call__(self, source, stream, **kwargs)
    156 def __call__(
    157     self,
    158     source: Union[str, Path, int, Image.Image, list, tuple, np.ndarray, torch.Tensor] = None,
    159     stream: bool = False,
    160     **kwargs: Any,
    161 ) -> list:
    162     """
    163     Alias for the predict method, enabling the model instance to be callable for predictions.
   (...)
    183     ...     print(f"Detected {len(r)} objects in image")
    184     """
--> 185     return self.predict(source, stream, **kwargs)

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\model.py:555, in Model.predict(self, source, stream, predictor, **kwargs)
    553 if prompts and hasattr(self.predictor, "set_prompts"):  # for SAM-type models
    554     self.predictor.set_prompts(prompts)
--> 555 return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\predictor.py:227, in BasePredictor.__call__(self, source, model, stream, *args, **kwargs)
    225     return self.stream_inference(source, model, *args, **kwargs)
    226 else:
--> 227     return list(self.stream_inference(source, model, *args, **kwargs))

File D:\anaconda\envs\pytorch_env\lib\site-packages\torch\autograd\grad_mode.py:43, in _DecoratorContextManager._wrap_generator.<locals>.generator_context(*args, **kwargs)
     40 try:
     41     # Issuing `None` to a generator fires it up
     42     with self.clone():
---> 43         response = gen.send(None)
     45     while True:
     46         try:
     47             # Forward the response to our caller and get its next request

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\predictor.py:326, in BasePredictor.stream_inference(self, source, model, *args, **kwargs)
    324 # Preprocess
    325 with profilers[0]:
--> 326     im = self.preprocess(im0s)
    328 # Inference
    329 with profilers[1]:

File D:\anaconda\envs\pytorch_env\lib\site-packages\ultralytics\engine\predictor.py:167, in BasePredictor.preprocess(self, im)
    165     im = im.transpose((0, 3, 1, 2))  # BHWC to BCHW, (n, 3, h, w)
    166     im = np.ascontiguousarray(im)  # contiguous
--> 167     im = torch.from_numpy(im)
    169 im = im.to(self.device)
    170 im = im.half() if self.model.fp16 else im.float()  # uint8 to fp16/32

RuntimeError: Numpy is not available