BERT News Headline Clustering
The dataset is the same one used in my earlier post, “lstm 新闻标题聚类” (LSTM news headline clustering).
Since my own machine does not have a GPU-enabled PyTorch setup, the code was written and run on Colab.
First, install the dependencies:
!pip install transformers
Then import the required libraries:
import torch
import time
import numpy as np
import pandas as pd
import torch.nn as nn
from transformers import BertModel
from transformers import BertTokenizer
from sklearn.model_selection import train_test_split
from transformers import get_linear_schedule_with_warmup
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from torch.optim import AdamW
import random
from imblearn.over_sampling import RandomOverSampler
Next, read the data and set the random seed:
pre_path="./drive/MyDrive/"
SEED = 721
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def seed_init(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
seed_init(SEED)
data = pd.read_csv(pre_path + 'data/news.csv')
Split the data into training and validation sets, and inspect the class distribution:
x = data['comment'].to_numpy()
y = data['pos'].to_numpy()
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size=0.3, random_state=SEED)
(unique, counts) = np.unique(y_train, return_counts=True)
np.asarray((unique, counts)).T

Comparing runs with and without resampling, oversampling the training data improves accuracy, so the minority classes are oversampled with RandomOverSampler:
ros = RandomOverSampler()
# fit_resample expects a 2-D feature array, so the 1-D array of headlines is reshaped to a column
x_train, y_train = ros.fit_resample(x_train.reshape(-1, 1), y_train)
x_train = x_train.flatten()
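
As a quick sanity check (not shown in the original), re-running the same count on the resampled labels should confirm that every class now has as many samples as the original majority class, since RandomOverSampler duplicates minority-class rows by default:

unique, counts = np.unique(y_train, return_counts=True)
print(np.asarray((unique, counts)).T)   # every class should now show the same count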

To summarize: this post showed how, without a local GPU environment, the BERT model can be used to cluster news headlines, using RandomOverSampler to improve the data distribution; the final accuracy reaches over 91%.
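
The training step itself is not reproduced above, but the imports at the top (BertTokenizer, BertModel, TensorDataset, DataLoader, AdamW, get_linear_schedule_with_warmup) already outline it. What follows is only a minimal sketch of how those pieces are typically wired together, not the article's exact code; the checkpoint name 'bert-base-chinese', MAX_LEN, BATCH_SIZE, EPOCHS, and the learning rate are assumptions.

# --- Minimal fine-tuning sketch (assumed hyper-parameters, not from the article) ---
MAX_LEN = 32          # assumed maximum headline length in tokens
BATCH_SIZE = 32       # assumed batch size
EPOCHS = 4            # assumed number of epochs

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')   # assumed checkpoint

def encode(texts):
    # Tokenize a list of headlines into padded input-id / attention-mask tensors
    enc = tokenizer(list(texts), padding='max_length', truncation=True,
                    max_length=MAX_LEN, return_tensors='pt')
    return enc['input_ids'], enc['attention_mask']

train_ids, train_mask = encode(x_train)
train_labels = torch.tensor(y_train, dtype=torch.long)   # assumes integer class labels

train_data = TensorDataset(train_ids, train_mask, train_labels)
train_loader = DataLoader(train_data, sampler=RandomSampler(train_data),
                          batch_size=BATCH_SIZE)

class BertClassifier(nn.Module):
    # BERT encoder with a linear classification head on the pooled output
    def __init__(self, n_classes):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-chinese')
        self.fc = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.fc(out.pooler_output)

model = BertClassifier(n_classes=len(np.unique(y_train))).to(device)
optimizer = AdamW(model.parameters(), lr=2e-5)            # assumed learning rate
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0,
    num_training_steps=len(train_loader) * EPOCHS)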