Paper List

  • Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
  • Generalizable Person Re-identification via Balancing Alignment and Uniformity
  • Rethinking Graph Masked Autoencoders through Alignment and Uniformity
  • Alignment-Uniformity aware Representation Learning for Zero-shot Video Classification
  • Towards Representation Alignment and Uniformity in Collaborative Filtering
  • Correlation between Alignment-Uniformity and Performance of Dense Contrastive Representations
  • Geodesic Multi-Modal Mixup for Robust Fine-Tuning
  • Rethinking Prototypical Contrastive Learning through Alignment, Uniformity and Correlation
  • uCTRL: Unbiased Contrastive Representation Learning via Alignment and Uniformity for Collaborative Filtering
  • It’s Not a Modality Gap: Characterizing and Addressing the Contrastive Gap
  • Debiased Contrastive Learning of Unsupervised Sentence Representations
  • SimCSE: Simple Contrastive Learning of Sentence Embeddings
  • Toward Human-AI Alignment in Large-Scale Multi-Player Games
  • Graph-based Alignment and Uniformity for Recommendation
  • The Role of Local Alignment and Uniformity in Image-Text Contrastive Learning on Medical Images
  • Feature Alignment and Uniformity for Test Time Adaptation
  • Graph-Sequential Alignment and Uniformity: Toward Enhanced Recommendation Systems
  • Conditional Alignment and Uniformity for Contrastive Learning with Continuous Proxy Labels
  • Rethinking Alignment and Uniformity in Unsupervised Image Semantic Segmentation
  • Prototypical Contrastive Learning through Alignment and Uniformity for Recommendation
  • Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation
  • Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
  • Training language models to follow instructions with human feedback
  • Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing
  • Whose Opinions Do Language Models Reflect?
  • Whitening Sentence Representations for Better Semantics and Faster Retrieval
  • Mitigate the Gap: Investigating Approaches for Improving Cross-Modal Alignment in CLIP
  • Multi-task Item-attribute Graph Pre-training for Strict Cold-start Item Recommendation
  • Aligning Large Multimodal Models with Factually Augmented RLHF
  • RAIN: Your Language Models Can Align Themselves without Finetuning
  • Towards Better Understanding with Uniformity and Explicit Regularization of Embeddings in Embedding-based Neural Topic Models
  • RotoGrad: Gradient Homogenization in Multitask Learning
  • Dual Distribution Alignment Network for Generalizable Person Re-Identification
  • Improving neural network representations using human similarity judgments
  • Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
  • ControlRec: Bridging the Semantic Gap between Language Model and Personalized Recommendation
  • On the Sentence Embeddings from Pre-trained Language Models
  • Can AI writing be salvaged? Mitigating Idiosyncrasies and Improving Human-AI Alignment in the Writing Process through Edits
  • Robustifying Vision Transformer without Retraining from Scratch by Test-Time Class-Conditional Feature Alignment
  • CyCLIP: Cyclic Contrastive Language-Image Pretraining
  • Chain of Hindsight Aligns Language Models with Feedback
  • Rethinking The Uniformity Metric in Self-Supervised Learning
  • Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts
  • Align on the Fly: Adapting Chatbot Behavior to Established Norms
  • RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness
  • Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
  • Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels
  • Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  • Towards the Generalization of Contrastive Self-Supervised Learning
  • Unsupervised Learning by Predicting Noise
  • IterAlign: Iterative Constitutional Alignment of Large Language Models
  • Zephyr: Direct Distillation of LM Alignment
  • Perfect Alignment May be Poisonous to Graph Contrastive Learning
  • Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning
  • LIMA: Less Is More for Alignment
  • Towards Mitigating Dimensional Collapse of Representations in Collaborative Filtering
  • Understanding the Behaviour of Contrastive Loss
  • DaRec: A Disentangled Alignment Framework for Large Language Model and Recommender System
  • Training Socially Aligned Language Models on Simulated Social Interactions
  • Self-Instruct: Aligning Language Models with Self-Generated Instructions
  • Preference Ranking Optimization for Human Alignment
  • Human alignment of neural network representations
  • Barlow Twins: Self-Supervised Learning via Redundancy Reduction
  • LiT: Zero-Shot Transfer with Locked-image text Tuning
  • As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making
  • The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning
  • Improving alignment of dialogue agents via targeted human judgements
  • ULMA: Unified Language Model Alignment with Human Demonstration and Point-wise Preference
  • RRHF: Rank Responses to Align Language Models with Human Feedback without tears
  • Investigating Uncertainty Calibration of Aligned Language Models under the Multiple-Choice Setting
  • Uniform Learning in a Deep Neural Network via “Oddball” Stochastic Gradient Descent
  • Is Reinforcement Learning (Not) for Natural Language Processing: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization
  • ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation
  • Representation Learning with Large Language Models for Recommendation
  • Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
  • RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment
  • Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences
  • RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
  • Towards Reliable Alignment: Uncertainty-aware RLHF
  • FLAVA: A Foundational Language And Vision Alignment Model
  • Criticality versus uniformity in deep neural networks
  • OpenAssistant Conversations – Democratizing Large Language Model Alignment
  • Generative Judge for Evaluating Alignment
  • LLaRA: Large Language-Recommendation Assistant
  • In conversation with Artificial Intelligence: aligning language models with human values
  • FaceNet: A Unified Embedding for Face Recognition and Clustering
  • Aligning AI With Shared Human Values
  • Human-Instruction-Free LLM Self-Alignment with Limited Samples
  • The Wisdom of Hindsight Makes Language Models Better Instruction Followers
  • Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior
  • ValueCompass: A Framework of Fundamental Values for Human-AI Alignment
  • Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
  • Unified Deep Supervised Domain Adaptation and Generalization
  • A General Language Assistant as a Laboratory for Alignment
  • Learning Human-like Representations to Enable Learning Human Values
  • Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
  • Specific versus General Principles for Constitutional AI
  • Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
  • RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback
  • Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models
  • Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap
  • Doubly Stochastic Neighbor Embedding on Spheres
  • RecRanker: Instruction Tuning Large Language Model as Ranker for Top-k Recommendation
  • RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms
  • Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models’ Alignment
  • Self-Alignment with Instruction Backtranslation
  • What are human values, and how do we align AI to them?
  • Getting aligned on representational alignment
  • Designing for Human-Agent Alignment: Understanding what humans want from their agents
  • Alignment with human representations supports robust few-shot learning
  • Cooperative Inverse Reinforcement Learning
  • Reward Design with Language Models
  • Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive
  • TARN: Temporal Attentive Relation Network for Few-Shot and Zero-Shot Action Recognition
  • Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency
  • Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation
  • CommunityLM: Probing Partisan Worldviews from Language Models
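
Many of the papers above build on the alignment and uniformity metrics introduced in the first entry (Wang & Isola, 2020). For quick reference, here is a minimal PyTorch-style sketch of those two losses; the function names, batch shapes, and hyperparameter defaults are illustrative and not taken from any specific paper's released code.

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    # x, y: L2-normalized embeddings of positive pairs, shape (N, d).
    # Alignment: positive pairs should map to nearby points on the hypersphere.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # x: L2-normalized embeddings, shape (N, d).
    # Uniformity: embeddings should spread out on the unit hypersphere,
    # measured as the log of the average pairwise Gaussian potential.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# Illustrative usage with random unit-norm embeddings:
x = F.normalize(torch.randn(128, 64), dim=1)
y = F.normalize(x + 0.1 * torch.randn(128, 64), dim=1)
print(align_loss(x, y).item(), uniform_loss(x).item())
```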