How to fix the error: InvalidArgument=Value of '0' is not valid for 'index'

This article explains a common cause of this error when working with the ListView control and gives concrete fixes. For example, operating on an item that does not exist (e.g. when the ListView holds too few items) raises this exception. A worked example shows how to make sure enough items and sub-items exist before assigning to them.


This post focuses on how this problem arises with the ListView control; the same reasoning carries over to other controls.

This error generally means an index went outside the allowed range. For example, when a ListView contains no items, assigning to Items[0] is not possible. Check whether the program called ListView.Items.Clear() earlier; after a Clear(), items must be re-added with ListView.Items.Add(...) before they can be used. Keep in mind: you can only assign to an item that already exists.
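A minimal sketch of the point above (assuming a WinForms ListView named `listView1` in Details view with at least one column; the names are illustrative, not from the original post):

```csharp
// Hypothetical WinForms snippet; listView1 is assumed to be a
// ListView in View.Details with at least one column defined.
listView1.Items.Clear();

// WRONG: after Clear() there is no item at index 0, so this throws
// "InvalidArgument=Value of '0' is not valid for 'index'".
// listView1.Items[0].Text = "first row";

// RIGHT: re-add the item first, then it can be assigned to.
listView1.Items.Add(new ListViewItem("first row"));
listView1.Items[0].Text = "first row (updated)";
```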

The other common cause is that an item does not have enough sub-items (item.SubItems). For example, I wanted to refresh only the third column of a ListView, as shown in the screenshot below.


At first, my code looked like this:
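The original screenshot is not available; the following is a hedged reconstruction of what the initial loading code plausibly looked like (the `data` collection and property names are made up). Note that each row is created with only two columns populated:

```csharp
// Hypothetical reconstruction of the original loading code.
// Each row gets a first-column text plus ONE extra sub-item,
// i.e. only columns 1 and 2 are populated.
for (int i = 0; i < data.Count; i++)
{
    var item = new ListViewItem(data[i].Name);   // column 1
    item.SubItems.Add(data[i].Value);            // column 2
    // column 3 is never added here
    listView1.Items.Add(item);
}
```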

When I then started assigning values to the third column, the program threw the same error. The code that updates only the third column looked like this:
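Again the screenshot is lost; a sketch of the failing refresh loop, under the same assumed names as above:

```csharp
// Hypothetical reconstruction of the refresh code that failed.
for (int i = 0; i < listView1.Items.Count; i++)
{
    // SubItems[2] is the third column, but each item only has
    // SubItems[0] and SubItems[1] -> index out of range, and the
    // "InvalidArgument ... not valid for 'index'" error is thrown.
    listView1.Items[i].SubItems[2].Text = newValues[i];
}
```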


Items[i] is row i+1, and SubItems[j] is column j+1.

Because each of my items had only two SubItems, assigning to the third column (the third sub-item, SubItems[2]) went out of range and the program reported the error. The fix is to pre-populate the third column in advance.

The modified code is as follows:
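A sketch of the fix, with the same hypothetical names: pre-add an empty third sub-item during the initial load, so the later refresh has an existing sub-item to write into.

```csharp
// Hypothetical reconstruction of the corrected loading code.
for (int i = 0; i < data.Count; i++)
{
    var item = new ListViewItem(data[i].Name);   // column 1
    item.SubItems.Add(data[i].Value);            // column 2
    item.SubItems.Add("");                       // column 3, pre-filled empty
    listView1.Items.Add(item);
}

// Refreshing only the third column now works:
// listView1.Items[i].SubItems[2].Text = newValue;
```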


As you can see, the third column is assigned an empty value up front, so it can be updated later without error.

