Just rambling a bit

A friend recommended CSDN to me, so I've been gradually moving my notes over.

Most of what's here are notes from my own learning, kept mainly for my own reference, but anyone passing by is welcome to discuss.

Of course I'm bound to write some bugs along the way; feel free to point them out. If you've read this far, you're probably working in the same area.

The code above has the following problems; let's analyze and fix them. Running it produces:

```
(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
Map:   0%|          | 0/1 [00:00<?, ? examples/s]
ERROR:__main__:无法解析anchor_input_ids: 你如何看待气候变化?
ERROR:__main__:无法解析positive_input_ids: 气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!
ERROR:__main__:无法解析negative_input_ids: 哈哈天气什么的随便啦,不如游戏?
Map: 100%|██████████| 1/1 [00:00<00:00, 13.02 examples/s]
Map:   0%|          | 0/1 [00:00<?, ? examples/s]
ERROR:__main__:无法解析anchor_input_ids: 你如何看待气候变化?
ERROR:__main__:无法解析positive_input_ids: 气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!
ERROR:__main__:无法解析negative_input_ids: 哈哈天气什么的随便啦,不如游戏?
Map: 100%|██████████| 1/1 [00:00<00:00, 67.37 examples/s]
训练集样本示例: {'anchor_input_ids': '你如何看待气候变化?', 'positive_input_ids': '气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!', 'negative_input_ids': '哈哈天气什么的随便啦,不如游戏?'}
验证集样本示例: {'anchor_input_ids': '你如何看待气候变化?', 'positive_input_ids': '气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!', 'negative_input_ids': '哈哈天气什么的随便啦,不如游戏?'}
  0%|          | 0/3 [00:00<?, ?it/s]
ERROR:__main__:无法解析token IDs: 你如何看待气候变化?
ERROR:__main__:无法解析token IDs: 气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!
ERROR:__main__:无法解析token IDs: 哈哈天气什么的随便啦,不如游戏?
You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
```

(The `无法解析` lines mean "unable to parse" the given field.) The run then crashes with:

```
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 281, in <module>
    trainer.train()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2171, in train
    return inner_training_loop(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2480, in _inner_training_loop
    batch_samples, num_items_in_batch = self.get_batch_samples(epoch_iterator, num_batches)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 5156, in get_batch_samples
    batch_samples += [next(epoch_iterator)]
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\accelerate\data_loader.py", line 567, in __iter__
    current_batch = next(dataloader_iter)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\dataloader.py", line 701, in __next__
    data = self._next_data()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\dataloader.py", line 757, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\_utils\fetch.py", line 55, in fetch
    return self.collate_fn(data)
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 96, in __call__
    "positive_attention_mask": create_to_attention_mask(batch_positive["input_ids"]),
NameError: name 'create_to_attention_mask' is not defined. Did you mean: 'create_attention_mask'?
  0%|          | 0/3 [00:00<?, ?it/s]
```
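Two separate problems are visible in the output. First, the dataset columns named `anchor_input_ids`, `positive_input_ids`, and `negative_input_ids` still hold raw strings, so anything that tries to treat them as token IDs fails with the "unable to parse" errors; the strings need to be run through the tokenizer's `__call__` (as the Qwen2TokenizerFast hint suggests). Second, the collator calls `create_to_attention_mask`, which is simply a typo for the existing `create_attention_mask`, causing the `NameError`. A minimal sketch of the intended collator logic, with a stub tokenizer standing in for `Qwen2TokenizerFast` (the stub, pad ID, and exact field handling are assumptions, not the original code):

```python
# Sketch only: a stub tokenizer stands in for Qwen2TokenizerFast so the
# collator logic can be shown self-contained. Pad token id 0 is assumed.

PAD_ID = 0

def create_attention_mask(input_ids):
    """1 for real tokens, 0 for padding (the helper the traceback hints at)."""
    return [[0 if tok == PAD_ID else 1 for tok in seq] for seq in input_ids]

def stub_tokenize(texts, max_len=8):
    """Stand-in for tokenizer.__call__: map each char to a fake id, then pad."""
    batch = []
    for text in texts:
        ids = [ord(ch) % 1000 + 1 for ch in text][:max_len]
        ids += [PAD_ID] * (max_len - len(ids))
        batch.append(ids)
    return {"input_ids": batch}

def collate(examples):
    # Fix 1: the *_input_ids columns contain raw strings, so tokenize them
    # here instead of trying to parse them as token IDs.
    anchors = stub_tokenize([e["anchor_input_ids"] for e in examples])
    positives = stub_tokenize([e["positive_input_ids"] for e in examples])
    negatives = stub_tokenize([e["negative_input_ids"] for e in examples])
    return {
        "anchor_input_ids": anchors["input_ids"],
        "anchor_attention_mask": create_attention_mask(anchors["input_ids"]),
        "positive_input_ids": positives["input_ids"],
        # Fix 2: call create_attention_mask, not the misspelled
        # create_to_attention_mask that raised the NameError.
        "positive_attention_mask": create_attention_mask(positives["input_ids"]),
        "negative_input_ids": negatives["input_ids"],
        "negative_attention_mask": create_attention_mask(negatives["input_ids"]),
    }
```

In the real script the same two changes apply: tokenize the three text fields (either up front in `Dataset.map` or inside the collator) and correct the helper name on line 96.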
Posted: 07-21