[Paper Reading] Retrieval-Augmented Hypergraph for Multimodal Social Media Popularity Prediction

Published at KDD 2024.

Background:

With the surge of multimodal user-generated content (UGC) on social media platforms such as TikTok, Triller, and Instagram, predicting the popularity of UGC has become crucial. It matters for many real-world applications such as online advertising, recommendation, and helping governments identify potential public-opinion crises.

Motivation:

  1. When predicting UGC popularity, existing methods mainly focus on the limited context of an individual UGC and overlook the potential benefit of mining useful knowledge from related UGCs.
  2. In real social media, even identical UGC can receive drastically different social feedback because the source users' follower distributions differ; modeling a single UGC therefore provides limited knowledge and can lead to prediction errors.
  3. Humans can learn by observing related things, which inspires the authors to retrieve related UGCs and exploit their meaningful knowledge to enhance the MSMPP task.

Challenges:

  • Computing the similarity between the target UGC and related instances is complex: multimodal similarity must be evaluated to identify the Top-K neighboring instances. Existing retrieval methods mainly focus on single-modality knowledge encoding and retrieval and cannot effectively exploit multimodal data and its complex correlations. Moreover, UGC on social media is usually noisy, e.g., discrepancies between textual and visual content and incomplete modalities.
  • The correlations between the target UGC and retrieved instances are usually high-order. Existing methods model the neighborhood knowledge of related instances via summation or attention operations, which cannot effectively capture such complex high-order correlations.

Contributions:

• We propose RAGTrans, pioneering an aspect-aware retrieval augmented pipeline that bridges target multimodal UGCs and relevant instances to enhance the multimedia social media popularity prediction (MSMPP) task.

• We propose a bootstrapping hypergraph transformer that extends information aggregation to the multimodal mixture. Intra-modal and inter-modal propagations are designed to capture correlations within and across modalities as well as fine-grained and aligned UGC representations.

• We conduct extensive experiments on real-world multimodal datasets to evaluate RAGTrans. The results demonstrate that RAGTrans can effectively learn multimodal representations from visual and textual UGC modalities, and achieve up to a 20% gain over strong baseline approaches on the ICIP dataset. The code for reproducing the results is available at https://github.com/CZTAO12/RAGTrans

Related Works

  1. Feature-engineering based methods
  2. Deep-learning based methods

Methodology

Problem definition:

C = {c_1, ..., c_N} denotes a collection of user-generated content (UGC) items on social media.

Each UGC contains a textual description (t) and an image (v).

The goal is to learn three kinds of representations: a textual representation, a visual representation, and a user representation.

The ground-truth popularity is the total number of future user interactions, such as reposts, likes, and comments.

The hypergraph is built from the instances closest to the target UGC retrieved from a memory bank (these retrieved instances become the hypergraph nodes), while the hyperedges represent the UGC's aspect information (e.g., user, category).

Framework:

Module 1: Aspect-aware UGC Retrieval
  • Memory bank construction: build a memory bank containing the visual, textual, and aspect information of a large number of UGCs, stored as <image, text, aspect> triplets.
  • Retrieving related instances: treat the target UGC as the query and every UGC in the memory bank as a document; compute similarity scores over the aspect information and use search-engine technology (Elasticsearch) with a ranking function (e.g., BM25) to retrieve the Top-K most relevant <image, text, aspect> triplets (a minimal sketch follows below this list).
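To make the retrieval step concrete, here is a minimal Python sketch that scores memory-bank entries against the query UGC with BM25 and keeps the Top-K <image, text, aspect> triplets. The paper uses Elasticsearch as the search backend; the rank_bm25 package is used here only as a lightweight stand-in, and all documents, field names, and the retrieve_top_k helper are illustrative assumptions.

```python
# Minimal BM25-based retrieval over a toy memory bank of <image, text, aspect> triplets.
# The paper uses Elasticsearch; rank_bm25 is only a lightweight stand-in here.
from rank_bm25 import BM25Okapi

memory_bank = [
    {"image": "img_001.jpg", "text": "sunset over the bay", "aspect": "user:alice category:travel"},
    {"image": "img_002.jpg", "text": "homemade ramen recipe", "aspect": "user:bob category:food"},
    {"image": "img_003.jpg", "text": "city skyline at dusk", "aspect": "user:alice category:travel"},
]

# Index every memory-bank document by its aspect + text tokens.
corpus_tokens = [(d["aspect"] + " " + d["text"]).split() for d in memory_bank]
bm25 = BM25Okapi(corpus_tokens)

def retrieve_top_k(query_text: str, query_aspect: str, k: int = 2):
    """Return the k memory-bank triplets most similar to the query UGC."""
    query_tokens = (query_aspect + " " + query_text).split()
    scores = bm25.get_scores(query_tokens)
    top_idx = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return [memory_bank[i] for i in top_idx]

print(retrieve_top_k("evening view of the harbour", "user:alice category:travel"))
```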
Module 2: Bootstrapping Hypergraph Transformer (BHT)

1. Aspect-aware Multimodal Hypergraph Construction

The instances retrieved in the previous step are converted into a hypergraph. For each attribute in the aspect information of c_q, a hyperedge is constructed to connect the retrieved instances involved, representing the relevant high-order relations between the target UGC and the retrieved instances.
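A toy sketch of this construction, under the assumption that aspect information can be flattened into strings such as "user:alice" or "category:travel": every distinct aspect value of the target UGC c_q becomes one hyperedge, and a node (the target or a retrieved instance) is attached to the hyperedge if it shares that aspect, yielding a node-by-hyperedge incidence matrix H.

```python
# Build an aspect-aware incidence matrix H (nodes x hyperedges) for the target UGC
# and its retrieved instances. Aspect strings and the helper are illustrative.
import numpy as np

def build_incidence(target_aspects, retrieved_aspects):
    """target_aspects: set of aspect values of the target UGC.
    retrieved_aspects: list of aspect-value sets, one per retrieved instance.
    Node 0 is the target; nodes 1..K are the retrieved instances."""
    all_aspects = sorted(target_aspects)                 # one hyperedge per target aspect
    node_aspects = [target_aspects] + retrieved_aspects
    H = np.zeros((len(node_aspects), len(all_aspects)))
    for e, aspect in enumerate(all_aspects):
        for v, aspects in enumerate(node_aspects):
            if aspect in aspects:
                H[v, e] = 1.0
    return H, all_aspects

H, edges = build_incidence({"user:alice", "category:travel"},
                           [{"user:alice", "category:food"},
                            {"user:carol", "category:travel"}])
print(edges)   # hyperedge labels, one per target aspect
print(H)       # 3 nodes x 2 hyperedges incidence matrix
```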

2. Intra-modal Propagation

BHT first propagates on the single-modality hypergraphs to capture intra-modal neighborhood knowledge. Two operations are defined:

Node -> hyperedge

Here the attention coefficient a_jk injects information used by the cross-modal propagation and ensures that only the most influential and informative messages from a modality's nodes are passed into that modality's hyperedges; the exact formula is given in the paper.

Hyperedge -> node
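To illustrate the node -> hyperedge and hyperedge -> node pattern, here is a generic attention-weighted hypergraph message-passing sketch in PyTorch. It is only an illustration: the projection matrices W_q and W_k, the softmax dot-product scores, and the mean-style hyperedge -> node update are assumptions standing in for the paper's exact formulas.

```python
# Generic intra-modal propagation on one single-modality hypergraph:
# (1) node -> hyperedge with attention weights, (2) hyperedge -> node aggregation.
import torch
import torch.nn.functional as F

def intra_modal_propagation(X, H, W_q, W_k):
    """X: (num_nodes, d) node features of one modality.
    H: (num_nodes, num_edges) float incidence matrix.
    W_q, W_k: (d, d) projection matrices for the attention scores."""
    num_nodes, num_edges = H.shape
    # Step 1: node -> hyperedge, each hyperedge aggregates its member nodes with attention.
    edge_feats = []
    for e in range(num_edges):
        members = H[:, e].nonzero(as_tuple=True)[0]          # nodes on this hyperedge
        q = X[members] @ W_q                                  # member queries
        k = X[members] @ W_k                                  # member keys
        scores = (q * k).sum(-1) / (X.size(1) ** 0.5)         # scaled dot-product scores
        a = F.softmax(scores, dim=0)                          # attention over members
        edge_feats.append((a.unsqueeze(-1) * X[members]).sum(0))
    E = torch.stack(edge_feats)                               # (num_edges, d) hyperedge features
    # Step 2: hyperedge -> node (mean over the hyperedges each node belongs to).
    deg = H.sum(1, keepdim=True).clamp(min=1)
    X_new = (H @ E) / deg
    return X_new, E
```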

3. Inter-modal Propagation

Inspired by the advantages of prefix-tuning, BHT reformulates the information propagation process as prefix-guided multimodal information propagation, which reduces modality heterogeneity in advance and captures cross-modal interactions.

(This part provides the propagation formulas, but they are hard to map onto the framework figure and somewhat difficult to follow.)

Roughly, it does the following (a rough sketch follows after this list):

- From text-modality nodes -> image-modality hyperedges

  This uses the previously mentioned attention coefficients (a) of the text-modality nodes on the visual-modality hyperedges.

- From image-modality hyperedges -> text-modality nodes
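The following PyTorch sketch is one possible reading of this prefix-guided inter-modal step: text-node features are pushed into the visual hyperedges using the attention coefficients of text nodes on those hyperedges, and the updated visual hyperedges are then propagated back to the text nodes. It is an interpretation of the flow above, not the paper's formulation; the residual-style updates and the normalization are assumptions.

```python
# One possible reading of prefix-guided inter-modal propagation:
# text nodes -> visual hyperedges, then visual hyperedges -> text nodes.
import torch

def inter_modal_propagation(X_text, E_vis, H_text_to_vis, A_text_on_vis):
    """X_text: (n_t, d) text node features.
    E_vis: (m_v, d) visual hyperedge features.
    H_text_to_vis: (n_t, m_v) incidence of text nodes on visual hyperedges.
    A_text_on_vis: (n_t, m_v) attention coefficients of text nodes on visual hyperedges."""
    # Text nodes -> visual hyperedges (attention-weighted sum of text messages).
    W = A_text_on_vis * H_text_to_vis                    # mask attention by incidence
    W = W / W.sum(0, keepdim=True).clamp(min=1e-9)       # normalize per hyperedge
    E_vis_new = E_vis + W.t() @ X_text                   # prefix text messages onto visual edges
    # Visual hyperedges -> text nodes (mean over the hyperedges each node touches).
    deg = H_text_to_vis.sum(1, keepdim=True).clamp(min=1)
    X_text_new = X_text + (H_text_to_vis @ E_vis_new) / deg
    return X_text_new, E_vis_new
```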

4. Feed-forward Network (FFN)

To alleviate modality heterogeneity and obtain fine-grained representations covering both intra- and inter-modal correlations, the paper merges the corresponding output features of the prefix-guided interaction module.
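As an illustration of this merge-and-FFN step, the minimal PyTorch module below concatenates two output feature streams for one modality, fuses them with a linear layer, and applies a standard two-layer feed-forward network with a residual connection and layer normalization. The concatenation-based merge and the layer norm are assumptions; the paper only states that the corresponding outputs are merged.

```python
# Merge two feature streams for one modality, then apply a transformer-style FFN.
import torch
import torch.nn as nn

class MergeFFN(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.merge = nn.Linear(2 * d_model, d_model)     # fuse intra- and inter-modal outputs
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h_intra, h_inter):
        h = self.merge(torch.cat([h_intra, h_inter], dim=-1))
        return self.norm(h + self.ffn(h))                # residual + layer norm
```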

Module 3: User-Aware Fusion

The user representation is concatenated separately with each of the two modality representations output by the previous module, yielding two vectors: T and V.

Attention computation: the two vectors are projected to dimension d and used as queries, while the visual and textual representations serve as keys and values; they are fed into the user-aware fusion layer for attention computation.
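A hedged PyTorch sketch of this user-aware fusion: the user embedding is concatenated with each modality's output and projected back to dimension d to form the queries T and V, which then attend over the textual and visual representations stacked as keys and values. Standard multi-head attention is used here, and the final regression head is added only to make the example end-to-end; both are assumptions rather than the paper's exact layers.

```python
# User-aware fusion: user-conditioned queries attend over the two modality representations.
import torch
import torch.nn as nn

class UserAwareFusion(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        # d_model must be divisible by n_heads for multi-head attention.
        self.proj_t = nn.Linear(2 * d_model, d_model)    # [text ; user] -> query T
        self.proj_v = nn.Linear(2 * d_model, d_model)    # [visual ; user] -> query V
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.predict = nn.Linear(2 * d_model, 1)         # illustrative popularity regression head

    def forward(self, h_text, h_vis, h_user):
        # h_text, h_vis, h_user: (batch, d_model)
        T = self.proj_t(torch.cat([h_text, h_user], dim=-1)).unsqueeze(1)  # (B, 1, d)
        V = self.proj_v(torch.cat([h_vis, h_user], dim=-1)).unsqueeze(1)   # (B, 1, d)
        kv = torch.stack([h_text, h_vis], dim=1)                           # (B, 2, d) keys/values
        t_out, _ = self.attn(T, kv, kv)                                    # user-aware textual view
        v_out, _ = self.attn(V, kv, kv)                                    # user-aware visual view
        fused = torch.cat([t_out.squeeze(1), v_out.squeeze(1)], dim=-1)
        return self.predict(fused).squeeze(-1)                             # predicted popularity
```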

Experiments

Datasets:

Overall performance

Ablation experiments

Conclusions:

The paper proposes RAGTrans, an aspect-aware retrieval-augmented multimodal hypergraph transformer, which reframes the prediction process in a retrieve-then-predict manner.
