Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach

Paper:
Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach

Summary:

Background:
Answering questions over a long document:
Input: a long document (~100k tokens)
LC: feed the full text directly to the LLM
RAG: split the long text into chunks, retrieve the chunks relevant to the query, and feed only those to the LLM

Retrieval-augmented generation (RAG) has become a standard add-on for LLMs, retrieving relevant knowledge for a given query. Meanwhile, recently released LLMs such as Gemini and GPT-4 have shown a remarkable ability to understand long contexts directly: feeding the full content as input gives better results but costs far more, while RAG is comparatively cheap. This paper explores how to combine long-context reading with RAG:

[Figure: comparison of LC, RAG, and Self-Route]
LC: long-context
Self-Route: the strategy proposed in this paper

Datasets:

LongBench: 21 datasets, average length 7k words
∞Bench: average length 100k tokens

Models:

Gemini1.5-Pro
GPT-4o
GPT-3.5-Turbo

Retrievers (dense):

Contriever
Dragon

Retrieval setup:

Chunks of 300 words each
Top-k chunks are kept as the retrieval result, with k = 5 by default
To avoid data leakage (the LLM having seen the content during pretraining), the instruction
“based only on the provided passage” is added to the prompt (a minimal sketch of this pipeline follows below)
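Below is a minimal sketch of this retrieval pipeline, assuming a generic `embed(texts) -> vectors` callable standing in for a dense retriever such as Contriever or Dragon; the helper names and the exact prompt template are illustrative assumptions, not the paper's code.

```python
import numpy as np

def chunk_text(text: str, chunk_words: int = 300) -> list[str]:
    """Split a long document into fixed-size word chunks (300 words, as above)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]

def retrieve_top_k(query: str, chunks: list[str], embed, k: int = 5) -> list[str]:
    """Rank chunks by cosine similarity to the query under a dense embedder."""
    q = np.asarray(embed([query]))[0]
    c = np.asarray(embed(chunks))
    q = q / np.linalg.norm(q)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    top = np.argsort(-(c @ q))[:k]
    return [chunks[i] for i in sorted(top)]  # keep original document order

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the RAG prompt, including the leakage-mitigation instruction."""
    context = "\n\n".join(passages)
    return (f"Answer the question based only on the provided passage.\n\n"
            f"Passage:\n{context}\n\nQuestion: {query}\nAnswer:")
```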
Motivation:
[Figure: overlap between RAG and LC predictions]

  1. 63% of RAG and LC predictions are identical
  2. Among these identical predictions, both correct and incorrect answers occur
    Conclusion: running the expensive LC computation just for the small fraction of queries where the two differ is wasteful, which motivates a strategy that fuses the two

Strategy: Self-Route

  1. Run standard RAG, but add one instruction to the prompt:
    “Write unanswerable if the query can not be answered based on the provided text”
    If the query can be answered, accept the RAG result; if it is judged unanswerable, go to step 2 (see the sketch after this list)
  2. Feed the full context to the LLM
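A minimal sketch of this routing logic, reusing the hypothetical helpers from the retrieval sketch above; `llm(prompt) -> str` stands in for the model API, and detecting the route by a substring match on “unanswerable” is a simplification, not the paper's exact implementation.

```python
# Reuses the hypothetical chunk_text / retrieve_top_k / build_prompt helpers above.
UNANSWERABLE_HINT = ("Write unanswerable if the query can not be answered "
                     "based on the provided text.")

def self_route(query: str, full_context: str, llm, embed, k: int = 5) -> str:
    """Two-step Self-Route: try cheap RAG first, fall back to LC only when needed."""
    # Step 1 (RAG-and-Route): answer from the retrieved chunks, or decline.
    chunks = chunk_text(full_context)
    passages = retrieve_top_k(query, chunks, embed, k=k)
    rag_answer = llm(build_prompt(query, passages) + "\n" + UNANSWERABLE_HINT)
    if "unanswerable" not in rag_answer.lower():
        return rag_answer  # accept the cheap RAG answer
    # Step 2 (LC fallback): pay the full-context cost only for the routed queries.
    return llm(build_prompt(query, [full_context]))
```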

Experimental results:

[Table: LC, RAG, and Self-Route results across datasets and models]
Columns suffixed -4 give the fraction of queries answered by RAG; columns suffixed -5 give the fraction of tokens used.

  1. LC outperforms RAG, except for the cells marked in red: GPT-3.5-Turbo supports at most 16k tokens, while those two datasets average 147k tokens
  2. Overall, Self-Route outperforms RAG by about 5%
  3. With Self-Route, RAG answers more than 50% of the queries (see the *-4 columns)

Analysis:

  1. Effect of the top-k setting
    [Figure: results for different top-k values]

  2. Why does RAG fail?
    For the examples where the RAG-and-Route step predicts “unanswerable”, a handful are manually labeled and used as few-shot demonstrations to classify the query types.
    Four failure types are identified, plus an “other” bucket:
    (1) Multi-step reasoning, e.g. “What nationality is the performer of song XXX”
    (2) Overly general queries, e.g. “What does the group think about XXX”
    (3) Long and complex queries
    (4) Implicit queries, e.g. “What caused the shadow behind the spaceship?”
    (5) Other
    [Figure: distribution of RAG failure types]
    Most of the “other” category also turns out to be multi-step cases (a few-shot classification sketch follows below)
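A minimal sketch of this few-shot classification step; the label names follow the list above, but the demonstration set and prompt wording are illustrative assumptions rather than the authors' exact prompt.

```python
# Failure-type labels from the analysis above; "other" is the catch-all bucket.
FAILURE_TYPES = ["multi-step", "general", "long-and-complex", "implicit", "other"]

FEW_SHOT = """Classify why retrieval failed for each query.
Labels: multi-step, general, long-and-complex, implicit, other.

Query: What nationality is the performer of song XXX?
Label: multi-step

Query: What does the group think about XXX?
Label: general

Query: What caused the shadow behind the spaceship?
Label: implicit
"""

def classify_failure(query: str, llm) -> str:
    """Ask the LLM to assign one failure type to an 'unanswerable' RAG query."""
    label = llm(f"{FEW_SHOT}\nQuery: {query}\nLabel:").strip().lower()
    return label if label in FAILURE_TYPES else "other"
```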

Different retrievers:

[Figures: results with different retrieval models]
With Gemini, swapping in different retrieval models leads to the same conclusions.

On synthetic data:

“PassKey” dataset: a sentence containing a passkey (e.g. “the passkey is 123456”) is hidden within chunks of irrelevant text, and the model is asked “What is the passkey?”. In other words, a needle-in-a-haystack test.
[Figure: results on PassKey and its variants]
Variant-1: “What is the special token hidden inside the texts”
Variant-2: two passkeys are hidden, and the question is “Which passkey is larger? First or second?”
Conclusion: evaluation results are highly sensitive to artifacts of dataset construction, exposing the limitations of testing on synthetic data (a sample-construction sketch follows below).
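A minimal sketch of how such a sample might be constructed; the filler sentences and exact phrasing are assumptions, and only the task format follows the paper's description.

```python
import random

FILLER = "The grass is green. The sky is blue. The sun is yellow. "

def make_passkey_sample(context_words: int = 100_000) -> tuple[str, str]:
    """Hide a passkey sentence at a random position inside irrelevant filler text."""
    passkey = str(random.randint(100000, 999999))
    needle = f"The passkey is {passkey}. Remember it."
    # Repeat the filler until we have enough words, then truncate.
    filler_words = (FILLER * (context_words // len(FILLER.split()) + 1)).split()[:context_words]
    pos = random.randint(0, len(filler_words))
    words = filler_words[:pos] + needle.split() + filler_words[pos:]
    prompt = " ".join(words) + "\n\nWhat is the passkey?"
    return prompt, passkey
```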

Excluding the LLM's internal knowledge:

The goal is to rule out interference from content the LLM already learned during pretraining, so that the model relies only on the provided knowledge.
Two approaches are used:

  1. Show that the sentence “based only on the provided passage” is effective
  2. Strip out the common knowledge the LLM has already absorbed, then re-evaluate
    Method for the first approach: add the sentence “based only on the provided passage” to the prompt
    [Figure: performance with vs. without the grounding instruction]

With this sentence in the prompt, the model's performance drops, showing that the instruction effectively makes the model set aside its internal knowledge.
The second approach instead filters out the common-knowledge questions:
[Figure: performance after filtering out common-knowledge questions]
This likewise shows the setup is effective (a sketch of the leakage check follows below).
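A minimal sketch of that leakage check, assuming hypothetical `llm(prompt) -> str` and `score(prediction, answer) -> float` callables; a drop in accuracy once the grounding instruction is added suggests the model had been leaning on memorized content.

```python
GROUNDING = "based only on the provided passage"

def leakage_check(samples, llm, score):
    """Score the same (passage, question, answer) triples with and without the instruction."""
    with_inst, without_inst = [], []
    for passage, question, answer in samples:
        base = f"Passage:\n{passage}\n\nQuestion: {question}\nAnswer:"
        with_inst.append(score(llm(f"Answer {GROUNDING}.\n\n{base}"), answer))
        without_inst.append(score(llm(base), answer))
    n = len(samples)
    return sum(with_inst) / n, sum(without_inst) / n
```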
Conclusion:
This paper presents a comprehensive comparison of RAG and LC, highlighting the trade-off between performance and computational cost. While LC excels at long-context understanding, RAG remains a viable option thanks to its lower cost and its advantage when inputs far exceed the model's context window. The proposed method dynamically routes queries based on the model's self-reflection, effectively combining the strengths of RAG and LC, and achieves performance comparable to LC at a significantly reduced cost. The authors argue that these findings offer valuable insight for the practical application of long-context LLMs and pave the way for future research on optimizing RAG techniques.
