Knowledge Graph Convolutional Networks for Recommender Systems

This paper proposes KGCN, a model that fuses the characteristics of knowledge graphs with graph convolutional networks for recommender systems. By aggregating neighbor information and computing relation-dependent weights, KGCN better captures the local neighborhood structure and users' personalized interests. Experiments show that KGCN achieves the best performance on the MovieLens dataset, and the model works best when the neighbor sampling size is 4 or 8. Future research directions include exploring non-uniform samplers and incorporating a user-side knowledge graph.


Paper Walkthrough

1. Comparison with RippleNet

Methods like RippleNet likewise have certain problems. First, RippleNet underplays the importance of the relation R: its direct quadratic-form score $\mathbf{v}^\top \mathbf{R} \mathbf{h}$ can hardly capture the information carried by the relation. Second, as the ripple set expands hop by hop, the number of entities pulled into training grows sharply, which brings a heavy computational burden and a lot of redundancy.
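For concreteness, below is a minimal NumPy sketch of the quadratic-form relevance score being criticized, $p_i = \operatorname{softmax}_i(\mathbf{v}^\top \mathbf{R}_i \mathbf{h}_i)$ computed over a ripple set. The function name and the toy shapes are illustrative only, not RippleNet's actual code.

```python
import numpy as np

def ripple_relevance(v, R, heads):
    """Quadratic-form relevance p_i = softmax_i(v^T R_i h_i) over a ripple set.

    v:     (d,)      item embedding
    R:     (n, d, d) relation matrices of the n triples in the ripple set
    heads: (n, d)    head-entity embeddings of those triples
    """
    scores = np.einsum('d,nde,ne->n', v, R, heads)  # v^T R_i h_i for each triple
    exp = np.exp(scores - scores.max())             # numerically stable softmax
    return exp / exp.sum()

# toy example: 3 triples, embedding dimension 4
rng = np.random.default_rng(0)
p = ripple_relevance(rng.normal(size=4), rng.normal(size=(3, 4, 4)), rng.normal(size=(3, 4)))
print(p, p.sum())  # probabilities over the ripple set, summing to 1
```

Note how the relation matrix only enters through this bilinear product, which is the sense in which the relation is "less characterized".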

The paper proposes a model that fuses the characteristics of the KG with a graph convolutional network (KGCN): when computing the representation of a given entity in the KG, neighborhood information is aggregated into it together with a bias. This brings the following advantages:

  • By aggregating neighbor information, the local proximity structure can be better captured and stored in each entity.
  • The weight of each neighbor depends on the connecting relation and on the specific user u, which better reflects the user's personalized interests when characterizing the entity (see the sketch after this list).
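A minimal sketch of the second point, assuming, as in the paper, that the user-relation score g(u, r) is an inner product normalized over the neighborhood; the tensor names are illustrative:

```python
import torch
import torch.nn.functional as F

def user_relation_weights(user_emb, rel_embs):
    """pi^u_r = softmax over the neighborhood of the score g(u, r) = <u, r>.

    user_emb: (d,)   embedding of user u
    rel_embs: (k, d) embeddings of the relations linking an entity to its k neighbors
    """
    scores = rel_embs @ user_emb     # inner-product score for each relation
    return F.softmax(scores, dim=0)  # normalized, user-specific neighbor weights

# toy example: the same entity gets different neighbor weights for different users
rels = torch.randn(4, 8)
print(user_relation_weights(torch.randn(8), rels))
print(user_relation_weights(torch.randn(8), rels))
```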

2. Model Explanation

2.1 Problem Formulation

At its core this is still a KG-based recommender system. Assume we already have a constructed knowledge graph made up of (entity, relation, entity) triples, denoted $\mathcal{G}$.

Given the user-item interaction matrix $\mathbf{Y}$ (itself a sparse matrix) and the knowledge graph $\mathcal{G}$, the task is to judge whether user $u$ will be interested in item $v$, i.e. to learn a prediction function $\hat{y}_{uv} = \mathcal{F}(u, v \mid \Theta, \mathbf{Y}, \mathcal{G})$.

Here $\Theta$ denotes the set of learnable parameters of the whole model.
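A toy illustration of the two inputs just defined, the sparse interaction matrix $\mathbf{Y}$ and the triple set $\mathcal{G}$; all sizes, ids, and entries below are made up:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Implicit-feedback interaction matrix Y: y_uv = 1 if user u interacted with item v, else 0.
num_users, num_items = 3, 4
rows, cols = [0, 0, 1, 2], [1, 3, 0, 2]
Y = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(num_users, num_items))

# Knowledge graph G as a set of (head entity, relation, tail entity) id triples.
G = {(1, 0, 5), (1, 1, 6), (3, 0, 7)}

print(Y[0, 1], Y[0, 2])  # 1.0 (observed interaction) and 0.0 (unobserved)
```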

2.2 The KGCN Idea

Brief overview of the idea:

RippleNet propagates a user's interests over the knowledge graph to extract user features. Can we, conversely, propagate an item's features over the knowledge graph to extract item features? This is exactly where KGCN comes from. The KGCN model is shown in the figure below. Figure (a) shows two hops of propagation starting from a single item. As shown in Figure (b), each item's feature vector is the sum of the feature vectors of the outer-layer entities directly connected to it. Crucially, an attention mechanism is applied before the summation, and the attention weights are determined by the user features and the relation features, which makes the recommendation results personalized.
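Below is a compact, single-item sketch of the two ideas in this paragraph: fixed-size neighbor sampling followed by hop-by-hop aggregation weighted by user-relation attention. The adjacency format, function names, and the sum-style aggregator are simplifications; the actual model keeps one representation per entity per depth and works on batches.

```python
import random
import torch
import torch.nn.functional as F

def item_representation(item, user_emb, ent_emb, rel_emb, adj, W, b, hops=2, k=4):
    """User-specific representation of one item via hop-by-hop neighborhood aggregation.

    ent_emb / rel_emb: torch.nn.Embedding tables for entities and relations
    adj: dict entity_id -> list of (relation_id, neighbor_entity_id) pairs
    """
    # 1) sample a fixed-size (k) neighborhood once per entity in the receptive field
    sampled, layers = {}, [[item]]
    for _ in range(hops):
        frontier = []
        for e in layers[-1]:
            if e not in sampled:
                sampled[e] = [random.choice(adj[e]) for _ in range(k)]
            frontier.extend(n for _, n in sampled[e])
        layers.append(frontier)

    # 2) initialize every entity in the receptive field with its base embedding
    reps = {e: ent_emb(torch.tensor(e)) for layer in layers for e in layer}

    # 3) aggregate from the outermost hop inward, weighting neighbors by
    #    user-relation attention before summing (this is what personalizes the result)
    for h in range(hops, 0, -1):
        updated = {}
        for e in layers[h - 1]:
            if e in updated:
                continue  # an entity can appear more than once in a sampled layer
            rels, neighs = zip(*sampled[e])
            r = rel_emb(torch.tensor(rels))                       # (k, d)
            n = torch.stack([reps[x] for x in neighs])            # (k, d)
            pi = F.softmax(r @ user_emb, dim=0)                   # user-relation weights
            v_neigh = (pi.unsqueeze(-1) * n).sum(dim=0)           # weighted neighbor sum
            updated[e] = torch.tanh(W @ (reps[e] + v_neigh) + b)  # sum aggregator
        reps.update(updated)
    return reps[item]

# toy usage with made-up sizes: final score = sigmoid(<user embedding, item representation>)
d = 16
ent_emb, rel_emb = torch.nn.Embedding(10, d), torch.nn.Embedding(3, d)
adj = {e: [(random.randrange(3), random.randrange(10)) for _ in range(5)] for e in range(10)}
u = torch.randn(d)
v_u = item_representation(0, u, ent_emb, rel_emb, adj, torch.randn(d, d), torch.randn(d))
print(torch.sigmoid(u @ v_u))
```

Changing `k` here corresponds to the neighbor sampling size discussed in the abstract (4 or 8 worked best in the reported experiments).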
