Target data format:
[[{"text": "Hi, I need an ID.", "speaker": "Ses05F_impro01_F", "label": "neu", "feature": [-0.16701631247997284, -0.5138905048370361, …]},
  {"text": "ahh Yeah, this is the wrong line. I'm sorry. You need to go back over to line two B. That's where you should have started from.", "speaker": "Ses05F_impro01_M", "label": "neu", "feature": [-0.0958164632320404, -0.3956751525402069, …]}],  <<<<< note: the first dialogue (one batch) ends here
 [{"text": "Hi, I need an ID.", "speaker": "Ses05F_impro01_F", "label": "neu", "feature": [-0.16701631247997284, -0.5138905048370361, …]},
  {"text": "ahh Yeah, this is the wrong line. I'm sorry. You need to go back over to line two B. That's where you should have started from.", "speaker": "Ses05F_impro01_M", "label": "neu", "feature": [-0.0958164632320404, -0.3956751525402069, …]}]]
Explanation: data[i] is the i-th dialogue, and data[i][j] is the j-th utterance of the i-th dialogue; each utterance is represented by a 768-dimensional vector.
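As a quick illustration of that indexing, here is a minimal sketch of loading the saved structure from a pickle file and reading one utterance (the file name meld_dev_features.pkl is hypothetical, matching the sketch further below):

import pickle as pkl

with open('./meld_dev_features.pkl', 'rb') as f:   # hypothetical output file of the extraction script
    data = pkl.load(f)

first = data[0][0]              # first utterance of the first dialogue
print(first['text'])            # "Hi, I need an ID."
print(first['speaker'])         # speaker ID
print(first['label'])           # emotion label, e.g. "neu"
print(len(first['feature']))    # 768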
MELD dataset download: https://affective-meld.github.io/
bert-base-uncased model download: https://huggingface.co/bert-base-uncased/tree/main
For PyTorch, download pytorch_model.bin, config.json, and vocab.txt.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '0'   # use GPU 0 only
import pandas as pd
import torch
import gc
import pickle as pkl
from transformers import BertTokenizer, BertModel
from tqdm import tqdm

file_path = './MELD/dev_sent_emo.csv'      # MELD dev split: utterances with emotion labels
model_path = './bert-base-uncased'         # assumed local folder holding pytorch_model.bin, config.json, vocab.txt
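The rest of the extraction code is not shown above, so the following is a minimal sketch of how it could continue, assuming the standard transformers API and the MELD CSV columns Utterance, Speaker, Emotion, and Dialogue_ID. It encodes each utterance with BERT, takes the 768-dim [CLS] vector from the last hidden layer as the sentence feature, groups utterances by dialogue, and pickles the nested list in the target format. The max_length value and the output file name are assumptions, not the author's exact code.

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertModel.from_pretrained(model_path).to(device).eval()

df = pd.read_csv(file_path)    # assumed columns: Utterance, Speaker, Emotion, Dialogue_ID
data = []
with torch.no_grad():
    for _, dialogue in tqdm(df.groupby('Dialogue_ID', sort=False)):
        utterances = []
        for _, row in dialogue.iterrows():
            enc = tokenizer(row['Utterance'], return_tensors='pt',
                            truncation=True, max_length=128).to(device)
            out = model(**enc)
            # [CLS] token of the last hidden layer -> 768-dim sentence vector
            feature = out.last_hidden_state[:, 0, :].squeeze(0).cpu().tolist()
            utterances.append({'text': row['Utterance'],
                               'speaker': row['Speaker'],
                               'label': row['Emotion'],
                               'feature': feature})
        data.append(utterances)
        gc.collect()

with open('./meld_dev_features.pkl', 'wb') as f:   # hypothetical output file name
    pkl.dump(data, f)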

This article describes how to process the MELD emotion dialogue dataset with a pretrained BERT model, extract sentence-level vectors, and use them for emotion label prediction. It walks through loading the model, encoding the text, and obtaining the feature vectors, with the key code snippets.