Qt6 Source (20): The macro Q_LIKELY(expr) = __builtin_expect(!!(expr), true)

(1)
[Screenshots of the Q_LIKELY macro definition and its documentation comment in the Qt 6 source.]
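Since the screenshots are not reproduced here, the following is a condensed paraphrase of the definition the article discusses; the actual preprocessor guards in Qt's qcompilerdetection.h are more elaborate (they also cover Clang/ICC and compiler versions), so treat this only as a sketch.

```cpp
// Paraphrased sketch of the Qt 6 definition (qcompilerdetection.h);
// the real guards are more detailed than shown here.
#if defined(__GNUC__)
#  define Q_LIKELY(expr)    __builtin_expect(!!(expr), true)   // expr expected to be true
#  define Q_UNLIKELY(expr)  __builtin_expect(!!(expr), false)  // expr expected to be false
#else
#  define Q_LIKELY(expr)    (expr)   // no-op fallback when __builtin_expect is unavailable
#  define Q_UNLIKELY(expr)  (expr)
#endif
```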

++ Translation

Hints to the compiler that the enclosed condition expr is very likely to evaluate to true. Using this macro can help the compiler optimize the code. The !!(expr) in the expansion first normalizes the condition to a boolean 0 or 1, which is what __builtin_expect compares against the expected value true (1).
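A minimal usage sketch (not from the article; the function and the error branch are made up for illustration): mark the common case with Q_LIKELY so the compiler can lay out the hot path as straight-line, fall-through code.

```cpp
#include <QtGlobal>   // provides Q_LIKELY / Q_UNLIKELY
#include <cstdio>

// Hypothetical scanning loop: the valid-byte case is by far the common one,
// so we hint that the check usually succeeds.
int countAsciiBytes(const unsigned char *data, int len)
{
    int valid = 0;
    for (int i = 0; i < len; ++i) {
        if (Q_LIKELY(data[i] < 0x80)) {   // expected to be true most iterations
            ++valid;
        } else {
            // Rare path; the hint encourages the compiler to move it off the hot path.
            std::fprintf(stderr, "non-ASCII byte at index %d\n", i);
        }
    }
    return valid;
}
```

Note that the hint only influences code layout and branch-prediction heuristics; it does not change program semantics, and a wrong hint merely costs performance.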
(2)

Thanks.
