Recommended Papers on Graph Self-Supervised Learning and Pre-training

Self-supervised Learning and Pre-training: this line of work studies self-supervised learning and pre-training on graphs. Pre-training has been remarkably successful in NLP, and researchers hope the same idea carries over to graphs, so that pre-trained graph models can better support downstream tasks. A classic family of self-supervised methods is contrastive learning, which trains an encoder to pull representations of matching (positive) pairs together while pushing non-matching (negative) pairs apart.
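To make the contrastive objective concrete, below is a minimal sketch of an InfoNCE-style loss, as used (in various forms) by methods like GCC and Graph Contrastive Learning with Augmentations. The function name and NumPy implementation are illustrative assumptions, not code from any of the listed papers: `z1` and `z2` are node embeddings from two augmented views of the same graph, matching rows form positive pairs, and all other rows serve as negatives.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two views of node embeddings.

    z1, z2: (n_nodes, dim) arrays from two graph augmentations; row i of z1
    and row i of z2 are a positive pair, all other rows are negatives.
    (Illustrative sketch, not the exact loss of any single paper.)
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature          # (n, n) similarity matrix
    # Cross-entropy with the diagonal (the matching pairs) as targets
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Correctly aligned views should score a lower loss than mismatched ones
rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
aligned = info_nce_loss(z, z)
shuffled = info_nce_loss(z, z[::-1].copy())
```

The loss decreases as each embedding becomes most similar to its own positive counterpart relative to all negatives in the batch; the temperature controls how sharply the softmax concentrates on the hardest negatives.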

1. Paper: Strategies for Pre-training Graph Neural Networks
Link: https://www.aminer.cn/pub/5e5e18eb93d709897ce3ce41
2. Paper: CommDGI: Community Detection Oriented Deep Graph Infomax
Link: https://www.aminer.cn/pub/5f8ebbb99fced0a24b4e1994
3. Paper: Inductive Representation Learning on Large Graphs
Link: https://www.aminer.cn/pub/599c7988601a182cd2648a09
4. Paper: InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization
Link: https://www.aminer.cn/pub/5e5e189a93d709897ce1e760
5. Paper: GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
Link: https://www.aminer.cn/pub/5eede0b091e0116a23aafbd3
6. Paper: Contrastive Multi-View Representation Learning on Graphs
Link: https://www.aminer.cn/pub/5ede0553e06a4c1b26a8419c
7. Paper: Graph Contrastive Learning with Augmentations
