Contrastive learning

Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data

Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning

Text and Code Embeddings by Contrastive Pre-Training

Momentum Contrastive Pre-training for Question Answering

LiT: Zero-Shot Transfer with Locked-image text Tuning

Understanding Dimensional Collapse in Contrastive Self-Supervised Learning

Unsupervised Feature Learning via Non-Parametric Instance Discrimination

Exploring simple siamese representation learning

Contrastive Learning for Prompt-Based Few-Shot Language Learners

UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation


Multi-granularity Item-based Contrastive Recommendation


Contrastive Collaborative Filtering for Cold-Start Item Recommendation

The main idea is to teach the CF module to memorize the co-occurrence collaborative signals during training, and to rectify the blurry CBCEs of cold-start items according to those memorized signals when the model is applied.

Review-based Multi-intention Contrastive Learning for Recommendation

A Contrastive Sharing Model for Multi-Task Recommendation

Re4: Learning to Re-contrast, Re-attend, Re-construct for Multi-interest Recommendation


Key factors in contrastive learning:

  1. L2 normalization: converting embeddings to unit vectors stabilizes training.
  2. The temperature parameter, which should generally be set fairly small.
  3. Alignment: pulling positive pairs close together.
  4. Uniformity: negative samples spread the embeddings uniformly over the hypersphere. If the negatives are too easy to distinguish, uniformity fails and the model collapses easily.
  5. Avoiding model collapse: (1) optimization via asymmetric architectures; (2) optimization via redundancy reduction; (3)
  6. Stop-gradient is important for avoiding collapse.
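Factors 1–4 above all appear in the standard in-batch InfoNCE loss. A minimal NumPy sketch (function name and shapes are illustrative, not from any specific paper):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """In-batch InfoNCE loss. z1, z2: (batch, dim) embeddings of two
    views; row i of z1 and row i of z2 are a positive pair, every
    other row in the batch serves as a negative."""
    # 1. L2-normalize onto the unit hypersphere; keeps logits bounded
    #    and training stable.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # 2. Cosine similarities scaled by a small temperature.
    logits = z1 @ z2.T / temperature
    # 3/4. Positives sit on the diagonal (alignment pulls them up);
    #      the softmax over in-batch negatives pushes everything else
    #      apart (uniformity).
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With identical views the diagonal dominates and the loss is near zero; with unrelated views it approaches log(batch_size).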

The key factor for avoiding collapse, according to different papers:

  1. SimSiam -> stop gradient
  2. In-batch negatives -> negative samples
  3. BYOL -> momentum encoder
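The SimSiam-style stop-gradient amounts to one `.detach()` call on the target branch. A minimal PyTorch sketch (the symmetric negative-cosine loss follows the SimSiam paper; the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def simsiam_loss(p1, z1, p2, z2):
    """Symmetric negative cosine similarity with stop-gradient on the
    target branch. p1, p2 are predictor outputs; z1, z2 are encoder
    outputs of the two views, all of shape (batch, dim)."""
    # .detach() implements stop-gradient: no gradient flows into the
    # target branch, which is what prevents the representations from
    # collapsing to a constant vector without any negatives.
    return -(F.cosine_similarity(p1, z2.detach(), dim=1).mean()
             + F.cosine_similarity(p2, z1.detach(), dim=1).mean()) / 2
```

BYOL replaces the plain `.detach()` target with a slowly moving momentum encoder, while in-batch-negative methods avoid collapse through the repulsion term of the softmax instead.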