Fixing the error Python raises when reading the IEMOCAP_text_context.pickle file

When Python hits an ascii codec error while loading the IEMOCAP_text_context.pickle file, it can be decoded by specifying the latin1 encoding. Calling `pickle.load(handle, encoding='latin1')` reads the file successfully and avoids the "ordinal not in range(128)" error.

Python raises the following error when loading the IEMOCAP_text_context.pickle file:

'ascii' codec can't decode byte 0xd2 in position 33: ordinal not in range(128)

The cause is an encoding mismatch between how the file was written and how it is being read: in Python 3, `pickle.load` decodes legacy 8-bit string instances as ASCII by default, so a pickle produced under Python 2 that contains non-ASCII bytes fails with this error.
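For reference, this is roughly what the failing call looks like; no `encoding` argument is passed, so the ASCII default applies:

import pickle

with open('./IEMOCAP_text_context.pickle', 'rb') as handle:
    # Without an explicit encoding, pickle.load falls back to ASCII and
    # raises the UnicodeDecodeError shown above on non-ASCII bytes.
    data = pickle.load(handle)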


Solution: pass `encoding='latin1'` to `pickle.load`:

import pickle

with open('./IEMOCAP_text_context.pickle', 'rb') as handle:
    data = pickle.load(handle, encoding='latin1')
print(list(data.keys())[:20])

Output:
['Ses01F_script02_2_F042', 'Ses01F_script02_2_F043', 'Ses01F_script02_2_F040', 'Ses01F_script02_2_F041', 'Ses01F_script02_2_F046', 'Ses01F_script02_2_F044', 'Ses01F_script02_2_F045', 'Ses05F_script01_1_M010', 'Ses05F_script01_1_M011', 'Ses05F_script01_1_M012', 'Ses05F_script01_1_M013', 'Ses05F_script01_1_M014', 'Ses05F_script01_1_M015', 'Ses05F_script01_1_M016', 'Ses05F_script01_1_M017', 'Ses05M_script02_2_F030', 'Ses05M_script02_2_F031', 'Ses05M_script02_2_F032', 'Ses05M_script02_2_F034', 'Ses05M_script02_2_F037']
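Optionally, the loaded data can be re-saved as a native Python 3 pickle so that later loads no longer need the latin1 workaround. This is only a sketch; the output filename IEMOCAP_text_context_py3.pickle is an example name, not part of the dataset:

import pickle

# Load the Python 2 pickle with the latin1 workaround, then dump it
# again with a modern protocol so later loads need no encoding argument.
with open('./IEMOCAP_text_context.pickle', 'rb') as handle:
    data = pickle.load(handle, encoding='latin1')

with open('./IEMOCAP_text_context_py3.pickle', 'wb') as handle:
    pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL)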