Convolutional Neural Networks for Sentence Classification

This post explains how to use a convolutional neural network (CNN) for sentence classification, walking through the model structure: the word-vector representation, the convolution operation, the max-pooling layer, and the regularization strategies. The model variants combine static and trainable (fine-tuned) word-vector channels, and dropout is applied on the penultimate layer to improve generalization. L2 regularization, in the form of a weight-norm constraint, is also used to prevent overfitting.
This post is just a set of personal study notes; corrections are welcome if anything is wrong.
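
Since dropout on the penultimate layer and the l2 weight-norm constraint come up repeatedly below, here is a minimal NumPy sketch of both mechanisms. All data is random and purely illustrative; the dropout rate p = 0.5 and the norm cap s = 3 are the values reported in the paper, everything else is placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 300))    # penultimate-layer features: (batch, features)
W = rng.standard_normal((5, 300))    # softmax weights: one row per class

# Dropout: at train time, zero each unit with probability p and rescale
# the survivors so the expected activation is unchanged (inverted dropout).
p = 0.5
mask = rng.random(z.shape) >= p
z_train = z * mask / (1.0 - p)

# l2 max-norm constraint: after each gradient step, rescale any class
# weight vector whose l2 norm exceeds s back onto the norm ball.
s = 3.0
norms = np.linalg.norm(W, axis=1, keepdims=True)
W = W * np.minimum(1.0, s / norms)
```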

References:

Paper: Convolutional Neural Networks for Sentence Classification

Code: https://github.com/dennybritz/cnn-text-classification-tf

Model:

Model architecture diagram: (figure omitted; see Figure 1 of the paper)

Model overview:

  • $x_i \in \mathbb{R}^k$ denotes the $k$-dimensional word vector of the $i$-th word in a sentence; the sentence length is fixed at $n$, with shorter sentences padded and longer ones truncated. $x_{1:n} = x_1 \oplus x_2 \oplus \dots \oplus x_n$ represents the whole sentence, where $\oplus$ is the concatenation operator, and $x_{i:i+j}$ refers to the concatenation of $x_i, x_{i+1}, \dots, x_{i+j}$.
  • A filter $w \in \mathbb{R}^{hk}$ applied to a window of $h$ words yields a feature $c_i = f(w \cdot x_{i:i+h-1} + b)$, where $b$ is a bias term and $f$ is a non-linearity; sliding the window over the sentence gives the feature map $c = [c_1, c_2, \dots, c_{n-h+1}]$.
  • Max-over-time pooling takes $\hat{c} = \max\{c\}$ as the single feature for that filter; the model uses multiple filters with several window sizes, and the pooled features feed a fully connected softmax layer (with dropout) that outputs the class distribution, as sketched below.
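
To make the notation concrete, here is a small NumPy sketch of a single filter applied to one sentence, following the convolution and max-over-time pooling equations above. The sizes (n = 7, k = 5, h = 3) and the random data are placeholders only.

```python
import numpy as np

# One filter, one sentence: n = 7 words, k = 5 embedding dims, window h = 3.
rng = np.random.default_rng(0)
n, k, h = 7, 5, 3
X = rng.standard_normal((n, k))    # word vectors x_1 .. x_n, one per row
w = rng.standard_normal((h, k))    # filter w ∈ R^{hk}, reshaped to (h, k)
b = 0.1                            # bias term

# c_i = f(w · x_{i:i+h-1} + b) with f = ReLU, for i = 1 .. n-h+1
c = np.array([max(0.0, np.sum(w * X[i:i + h]) + b) for i in range(n - h + 1)])

# Max-over-time pooling keeps the strongest feature from this filter.
c_hat = c.max()
print(c.round(3), c_hat)           # feature map of length n-h+1, then its max
```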
### CNN sentence-classification example on the SST-1 dataset

Below is an example implementation of the CNN sentence classifier for the SST-1 dataset, built on the MindSpore framework. It covers data preprocessing, model construction, and the training loop (plugging pretrained word vectors into the model is sketched after the training code).

#### Data preprocessing

To adapt the SST-1 dataset, first preprocess it into a form the CNN can consume:

```python
import numpy as np
from mindspore import Tensor, context
from mindspore.dataset import GeneratorDataset

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")

def load_data(file_path, max_len=50):
    """
    Load and preprocess the SST-1 dataset.
    :param file_path: path to a tab-separated file of "sentence<TAB>label" lines
    :param max_len: maximum sentence length
    :return: processed data and labels as NumPy arrays
    """
    data_list = []
    label_list = []
    with open(file_path, 'r', encoding='utf-8') as f:
        for line in f:
            parts = line.strip().split('\t')
            if len(parts) != 2:
                continue
            tokens = parts[0].split()[:max_len]
            label = int(float(parts[1]))
            # `vocab` is assumed to be a token-to-id dict built elsewhere,
            # with id 0 reserved for padding and an '<UNK>' entry.
            token_ids = [vocab.get(token, vocab['<UNK>']) for token in tokens]
            padded_tokens = token_ids + [0] * (max_len - len(token_ids))  # pad to fixed length
            data_list.append(padded_tokens)
            label_list.append(label)
    return np.array(data_list, dtype=np.int32), np.array(label_list, dtype=np.int32)

class DatasetGenerator:
    def __init__(self, inputs, targets):
        self.inputs = inputs
        self.targets = targets

    def __getitem__(self, index):
        return self.inputs[index], self.targets[index]

    def __len__(self):
        return len(self.inputs)
```

#### Building the CNN model

The following defines a simple CNN architecture for sentence-level classification:

```python
import mindspore.nn as nn
import mindspore.ops as ops
import mindspore.common.initializer as init

class TextCNN(nn.Cell):
    def __init__(self, vocab_size, embedding_dim, num_classes,
                 kernel_sizes=(3, 4, 5), num_channels=100):
        super(TextCNN, self).__init__()
        self.embedding = nn.Embedding(vocab_size=vocab_size,
                                      embedding_size=embedding_dim,
                                      padding_idx=0,
                                      embedding_table=init.Uniform(0.1))
        # One conv layer per window size; pad_mode='valid' so each filter
        # spans the full embedding dimension and slides only over time.
        self.convs = nn.CellList([
            nn.Conv2d(in_channels=1, out_channels=num_channels,
                      kernel_size=(k, embedding_dim), pad_mode='valid',
                      has_bias=True, weight_init=init.HeUniform())
            for k in kernel_sizes])
        self.relu = nn.ReLU()
        self.reduce_max = ops.ReduceMax()
        self.concat = ops.Concat(axis=1)
        self.dropout = nn.Dropout(keep_prob=0.5)
        self.fc = nn.Dense(num_channels * len(kernel_sizes), num_classes)

    def construct(self, x):
        embedded_x = self.embedding(x)            # (batch, seq_len, embed_dim)
        embedded_x = embedded_x.expand_dims(1)    # add channel dim: (batch, 1, seq_len, embed_dim)
        conv_outs = []
        for conv in self.convs:
            z = self.relu(conv(embedded_x))       # (batch, num_channels, seq_len-k+1, 1)
            z = z.squeeze(-1)                     # drop the width dim
            z = self.reduce_max(z, 2)             # max-over-time pooling: (batch, num_channels)
            conv_outs.append(z)
        pooled_output = self.concat(conv_outs)    # combine outputs of all kernel sizes
        pooled_output = self.dropout(pooled_output)
        logits = self.fc(pooled_output)
        return logits
```

#### Training loop

Finally, configure the loss function and optimizer and run training:

```python
import mindspore
from mindspore import Tensor
from mindspore.nn import SoftmaxCrossEntropyWithLogits, Adam

# Hyperparameters
vocab_size = 10000       # adjust to the actual vocabulary size
embedding_dim = 300
num_classes = 5          # SST-1 has five classes
learning_rate = 0.001
epochs = 10
batch_size = 64

# Instantiate the model
model = TextCNN(vocab_size=vocab_size, embedding_dim=embedding_dim,
                num_classes=num_classes)

# Loss function and optimizer
loss_fn = SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
optimizer = Adam(model.trainable_params(), learning_rate=learning_rate)

# Load the data
train_inputs, train_labels = load_data('sst1_train.txt')
test_inputs, test_labels = load_data('sst1_test.txt')
dataset_generator = DatasetGenerator(train_inputs, train_labels)
ds_train = GeneratorDataset(dataset_generator, column_names=["data", "label"])
ds_train = ds_train.batch(batch_size=batch_size)

# Wrap the model with the loss and a single-step training cell
net_with_criterion = nn.WithLossCell(model, loss_fn)
train_network = nn.TrainOneStepCell(net_with_criterion, optimizer)
train_network.set_train()

for epoch in range(epochs):
    total_loss = 0
    steps = 0
    for batch in ds_train.create_dict_iterator(output_numpy=True):
        data = Tensor(batch["data"], dtype=mindspore.int32)
        label = Tensor(batch["label"], dtype=mindspore.int32)
        loss = train_network(data, label)
        total_loss += loss.asnumpy()
        steps += 1
    avg_loss = total_loss / steps
    print(f"Epoch {epoch+1}/{epochs}, Average Loss: {avg_loss:.4f}")
```

The code above implements CNN-based sentence classification for SST-1. By combining word embeddings with convolution kernels of several window sizes, the model extracts local features effectively and improves classification performance.
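
The introduction above mentions pretrained word vectors, but the model initializes its embedding table randomly. Below is a hypothetical sketch of plugging a pretrained matrix into the `TextCNN` above; `word2vec_sst1.npy` is a placeholder path, and `pretrained_matrix` is assumed to be a float32 array of shape `(vocab_size, embedding_dim)` whose row order matches the ids in `vocab`.

```python
import numpy as np
import mindspore
from mindspore import Tensor

# Hypothetical pretrained matrix; row 0 is reserved for padding.
pretrained_matrix = np.load('word2vec_sst1.npy').astype(np.float32)  # placeholder path

model = TextCNN(vocab_size=pretrained_matrix.shape[0],
                embedding_dim=pretrained_matrix.shape[1],
                num_classes=5)

# Overwrite the randomly initialized embedding table with the pretrained vectors.
model.embedding.embedding_table.set_data(Tensor(pretrained_matrix))

# For the paper's "static" variant, freeze the table so training leaves it fixed;
# keep requires_grad = True for the "non-static" (fine-tuned) variant.
model.embedding.embedding_table.requires_grad = False
```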