K Desktop Environment 3.5 Officially Released


  • Konqueror has passed the Acid2 CSS test (ahead of both Firefox and Internet Explorer) and gains an ad-block feature;
  • SuperKaramba is now included (easy-to-install widgets);
  • Kopete adds support for the MSN and Yahoo! instant-messaging services.