Classifying Relations by Ranking with Convolutional Neural Networks: a PyTorch Implementation

1. Problem Description

This is a classic relation-extraction paper, Classifying Relations by Ranking with Convolutional Neural Networks. It uses the SemEval-2010 Task 8 dataset, which defines 9 binary relations. Each relation is sensitive to the order of the two entities (for example, Cause-Effect(e1,e2) and Cause-Effect(e2,e1) are different labels), so together with the Other class there are 19 classes in total.

2. Network Architecture

The paper mainly builds on the CNN of Zeng et al. (2014). The biggest change is the loss function: instead of softmax with cross-entropy, it uses a margin-based ranking loss. There is also a special treatment of the Other class.

Looking at the architecture first, and setting the new loss aside, the network itself is essentially unchanged: a single convolutional layer followed by a fully connected layer (a minimal model sketch follows the list below):
[Figure: CR-CNN network architecture]

  • Input layer: word embedding + position embedding
  • Convolutional layer: kernels of fixed width 3 (the actual implementation uses multiple filters)
  • Pooling: max pooling
  • Fully connected layer: a linear map producing a score for each class
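
Below is a minimal sketch of this architecture in PyTorch. The class name `CRCNN`, the hyperparameter defaults, and the use of two position-embedding tables (relative distance of each token to the two entities) are illustrative assumptions, not the article's actual code:

```python
import torch
import torch.nn as nn

class CRCNN(nn.Module):
    """Word + position embeddings -> 1-D convolution -> max pooling -> class scores."""

    def __init__(self, vocab_size, num_classes, word_dim=100, pos_dim=25,
                 max_len=100, num_filters=200, kernel_size=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        # Two position-embedding tables: relative distance of each token to
        # entity 1 and entity 2, shifted by max_len so indices are non-negative.
        self.pos1_emb = nn.Embedding(2 * max_len, pos_dim)
        self.pos2_emb = nn.Embedding(2 * max_len, pos_dim)
        in_dim = word_dim + 2 * pos_dim
        self.conv = nn.Conv1d(in_dim, num_filters, kernel_size, padding=kernel_size // 2)
        self.dropout = nn.Dropout(0.5)
        # Final linear layer: one score per relation class (no softmax).
        self.fc = nn.Linear(num_filters, num_classes, bias=False)

    def forward(self, tokens, pos1, pos2):
        # tokens, pos1, pos2: (batch, seq_len) index tensors
        x = torch.cat([self.word_emb(tokens),
                       self.pos1_emb(pos1),
                       self.pos2_emb(pos2)], dim=-1)   # (B, L, D)
        x = x.transpose(1, 2)                          # (B, D, L) for Conv1d
        x = torch.tanh(self.conv(x))                   # (B, F, L)
        x = x.max(dim=-1).values                       # max pooling over time -> (B, F)
        x = self.dropout(x)
        return self.fc(x)                              # (B, num_classes) class scores
```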

After the fully connected layer, the usual softmax with cross-entropy is not used; instead the paper defines a margin-based ranking loss:

$$L = \log\left(1 + \exp\left(\gamma\left(m^{+} - s_{\theta}(x)_{y^{+}}\right)\right)\right) + \log\left(1 + \exp\left(\gamma\left(m^{-} + s_{\theta}(x)_{c^{-}}\right)\right)\right)$$

Here $m^{+}$ and $m^{-}$ are margins and $\gamma$ is a scaling factor. Two scores appear in the loss: $s_{\theta}(x)_{y^{+}}$ is the score of the correct class $y^{+}$ for sentence $x$, while $s_{\theta}(x)_{c^{-}}$ is the largest component of the score vector from the fully connected layer other than $s_{\theta}(x)_{y^{+}}$, i.e. the score of the highest-scoring incorrect class. During training this drives $s_{\theta}(x)_{y^{+}}$ up while pushing $s_{\theta}(x)_{c^{-}}$ down.
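
A minimal sketch of this ranking loss in PyTorch is shown below. The function name and tensor shapes are assumptions; the default margins and scaling factor ($m^{+}=2.5$, $m^{-}=0.5$, $\gamma=2$) are the values reported in the paper, and the paper's special handling of the Other class is omitted for brevity:

```python
import torch
import torch.nn.functional as F

def ranking_loss(scores, target, m_pos=2.5, m_neg=0.5, gamma=2.0):
    """Margin-based ranking loss over class scores.

    scores: (batch, num_classes) output of the fully connected layer
    target: (batch,) indices of the correct class y+
    Note: the paper's special treatment of the Other class is not shown here.
    """
    batch_idx = torch.arange(scores.size(0), device=scores.device)
    s_pos = scores[batch_idx, target]            # s_theta(x)_{y+}

    # Mask out the correct class, then take the best remaining (wrong) score.
    masked = scores.clone()
    masked[batch_idx, target] = float('-inf')
    s_neg = masked.max(dim=1).values             # s_theta(x)_{c-}

    # softplus(z) = log(1 + exp(z)), matching the two terms of the loss
    loss = F.softplus(gamma * (m_pos - s_pos)) + F.softplus(gamma * (m_neg + s_neg))
    return loss.mean()
```

In training this simply replaces `nn.CrossEntropyLoss`, e.g. `loss = ranking_loss(model(tokens, pos1, pos2), labels)`; the rest of the training loop stays the same.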
