Integrating pgai with LangChain: A Developer's Guide to Building Advanced RAG Applications
Overview
pgai is a PostgreSQL extension that simplifies data storage and retrieval for retrieval-augmented generation (RAG) and other AI applications. It automatically creates and synchronizes embeddings for data stored in PostgreSQL, simplifies semantic search, and lets you call LLMs directly from SQL.
This article shows how to integrate pgai with LangChain to build an advanced RAG application. We will use pgai's vectorizer to manage document embeddings and LangChain's chains to assemble the RAG pipeline.
Architecture
A pgai + LangChain RAG application consists of the following layers (a minimal retrieval sketch follows the list):
- Storage layer: a PostgreSQL database with the pgai extension, holding documents and their embedding vectors.
- Vectorization layer: the pgai vectorizer, which handles document chunking and embedding generation automatically.
- Retrieval layer: pgai's semantic search, used to fetch relevant document chunks.
- Generation layer: a LangChain chain and an LLM that produce the final answer.
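Because pgai exposes embedding functions such as ai.openai_embed() directly in SQL, the retrieval layer can be a single SQL statement executed from Python. A minimal sketch, assuming psycopg 3 is installed and that the table and column names (documents_embedding_storage, embedding, chunk) match the vectorizer configured later in this article:

import psycopg

# Hypothetical names; they must match the vectorizer configuration below.
QUERY = """
SELECT chunk
FROM documents_embedding_storage
ORDER BY embedding <=> ai.openai_embed('text-embedding-3-small', %s, dimensions => 768)
LIMIT 5
"""

with psycopg.connect("postgresql://user:password@localhost:5432/pgai_db") as conn:
    for (chunk,) in conn.execute(QUERY, ("What is pgai?",)):
        print(chunk)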
Environment Setup
Installing dependencies
First, install the required dependencies:
# Clone the repository
git clone https://gitcode.com/GitHub_Trending/pg/pgai
cd pgai
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
# Install dependencies
pip install -r examples/simple_fastapi_app/requirements.txt
pip install langchain
Configuring the database
Create a .env file with the database connection string and API key:
DATABASE_URL=postgresql+asyncpg://user:password@localhost:5432/pgai_db
OPENAI_API_KEY=your_openai_api_key
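Later snippets refer to an async SQLAlchemy engine named engine and a session factory named async_session. A minimal setup sketch, assuming python-dotenv is installed:

import os
from dotenv import load_dotenv
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker

load_dotenv()

# Shared by all later snippets in this article.
engine = create_async_engine(os.environ["DATABASE_URL"])
async_session = async_sessionmaker(engine, expire_on_commit=False)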
Data Preparation
Defining the document table
Define a document model with SQLAlchemy; pgai's vectorizer_relationship ties it to the vectorizer-managed embedding table:
from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.orm import declarative_base
from pgai.sqlalchemy import vectorizer_relationship

Base = declarative_base()

class Document(Base):
    __tablename__ = "documents"

    id = Column(Integer, primary_key=True)
    file_name = Column(String())
    content = Column(Text())

    # Exposes the vectorizer-managed embeddings as a relationship.
    content_embeddings = vectorizer_relationship(
        dimensions=768,
    )
Creating the vectorizer
Use pgai's vectorizer to manage document embeddings automatically:
from sqlalchemy import text
from pgai.vectorizer import CreateVectorizer, LoadingColumnConfig, EmbeddingOpenAIConfig

async def create_vectorizer():
    vectorizer_statement = CreateVectorizer(
        source="documents",
        target_table='documents_embedding_storage',
        loading=LoadingColumnConfig(column_name='content'),
        embedding=EmbeddingOpenAIConfig(model='text-embedding-3-small', dimensions=768)
    ).to_sql()
    # `engine` is the async engine created during environment setup.
    async with engine.connect() as conn:
        await conn.execute(text(vectorizer_statement))
        await conn.commit()
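The pgai catalog must be installed in the database before the first vectorizer is created. A minimal sketch, assuming the top-level pgai package exposes install() as in recent releases:

import os
import pgai

# pgai connects with psycopg, so pass a plain postgresql:// URL here,
# not the +asyncpg SQLAlchemy URL.
pgai.install(os.environ["DATABASE_URL"].replace("+asyncpg", ""))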
LangChain Integration
Creating a pgai vector store
We wrap pgai's retrieval in a custom LangChain vector store. The async methods follow LangChain's naming convention (aadd_documents, asimilarity_search) so the store works with as_retriever(); a complete implementation would also provide the synchronous similarity_search and from_texts methods required by the VectorStore base class:
from langchain.vectorstores.base import VectorStore
from langchain.schema import Document as LangChainDocument
from sqlalchemy.ext.asyncio import AsyncSession

class PgaiVectorStore(VectorStore):
    def __init__(self, session: AsyncSession):
        self.session = session

    async def aadd_documents(self, documents: list[LangChainDocument], **kwargs):
        # Convert LangChain documents into pgai document models; the
        # vectorizer picks up new rows and embeds them in the background.
        pgai_docs = [
            Document(
                file_name=doc.metadata.get("file_name", "unknown"),
                content=doc.page_content,
            )
            for doc in documents
        ]
        async with self.session.begin():
            self.session.add_all(pgai_docs)

    async def asimilarity_search(self, query: str, k: int = 4, **kwargs) -> list[LangChainDocument]:
        # Retrieve relevant chunks through pgai's semantic search.
        relevant_docs = await retrieve_relevant_documents(query)
        return [
            LangChainDocument(
                page_content=doc["chunk"],
                metadata={"file_name": doc["file_name"]},
            )
            for doc in relevant_docs
        ]
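Using the store is then straightforward; a brief sketch:

async def ingest_and_search():
    async with async_session() as session:
        store = PgaiVectorStore(session)
        await store.aadd_documents([
            LangChainDocument(
                page_content="pgai adds AI helpers to PostgreSQL.",
                metadata={"file_name": "intro.md"},
            )
        ])
        return await store.asimilarity_search("What does pgai do?")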
Implementing the retrieval function
Implement a function that retrieves the document chunks most relevant to a query:
from sqlalchemy import select, func, text
from sqlalchemy.orm import joinedload

async def retrieve_relevant_documents(query: str) -> list[dict]:
    async with async_session() as session:
        statement = (
            select(Document.content_embeddings)
            # Eagerly load the parent document so file_name is available.
            .options(joinedload(Document.content_embeddings.parent))
            .order_by(
                Document.content_embeddings.embedding.cosine_distance(
                    # Embed the query in-database via pgai's SQL function.
                    func.ai.openai_embed(
                        "text-embedding-3-small",
                        query,
                        text("dimensions => 768"),
                    )
                )
            )
            .limit(5)
        )
        result = await session.execute(statement)
        relevant_docs = result.scalars().all()
        return [
            {"file_name": doc.parent.file_name, "chunk": doc.chunk}
            for doc in relevant_docs
        ]
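A quick way to try it from a script:

import asyncio

docs = asyncio.run(retrieve_relevant_documents("How do pgai vectorizers work?"))
for doc in docs:
    print(doc["file_name"], "->", doc["chunk"][:80])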
Building the RAG chain
Use LangChain's RetrievalQA chain to assemble the RAG application:
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

def build_rag_chain(vector_store):
    prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer:"""
    PROMPT = PromptTemplate(
        template=prompt_template, input_variables=["context", "question"]
    )
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
    chain = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vector_store.as_retriever(),
        chain_type_kwargs={"prompt": PROMPT}
    )
    return chain
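Invoking the chain, reusing the session factory and store from the previous sections:

async def answer_question(question: str) -> str:
    async with async_session() as session:
        chain = build_rag_chain(PgaiVectorStore(session))
        return await chain.arun(question)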
Application Examples
FastAPI application
Below is a RAG endpoint built with FastAPI:
from fastapi import FastAPI, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from pydantic import BaseModel

app = FastAPI()

class QueryRequest(BaseModel):
    question: str

async def get_db() -> AsyncSession:
    async with async_session() as session:
        yield session

@app.post("/query")
async def query(request: QueryRequest, db: AsyncSession = Depends(get_db)):
    vector_store = PgaiVectorStore(db)
    chain = build_rag_chain(vector_store)
    result = await chain.arun(request.question)
    return {"answer": result}
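To exercise the endpoint, here is a small client sketch using httpx (any HTTP client works):

import httpx

resp = httpx.post(
    "http://localhost:8000/query",
    json={"question": "What is a pgai vectorizer?"},
    timeout=60,
)
print(resp.json()["answer"])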
Discord bot
pgai also ships a Discord bot example that shows how to embed RAG functionality in a chat application:
# Snippet from examples/discord_bot/pgai_discord_bot/main.py
async def on_message(self, message: discord.WebhookMessage):
    assert self.user is not None
    should_process_message, channel = await self.check_message(message)
    if not should_process_message:
        return
    assert channel is not None
    # Create a thread and answer using retrieved context.
    thread = await message.create_thread(name=f"Discussion with {message.author.name}")
    docs = await retrieve_relevant_documents(message.content)
    response = await ask_ai(self.user, [message], docs)
    await thread.send(response)
Advanced Features
Document chunking strategies
The pgai vectorizer supports several chunking strategies, implemented on top of LangChain's text splitters. The configuration below mirrors the parameters of LangChain's RecursiveCharacterTextSplitter (the exact class name and fields follow pgai's vectorizer worker; verify them against your installed version):
from pgai.vectorizer.chunking import LangChainRecursiveCharacterTextSplitter

chunking_config = LangChainRecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    separators=["\n\n", "\n", " ", ""],
)
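The chunking config then plugs into the vectorizer definition; a sketch, assuming CreateVectorizer accepts a chunking parameter mirroring the SQL API:

vectorizer_statement = CreateVectorizer(
    source="documents",
    target_table='documents_embedding_storage',
    loading=LoadingColumnConfig(column_name='content'),
    chunking=chunking_config,  # assumed parameter name
    embedding=EmbeddingOpenAIConfig(model='text-embedding-3-small', dimensions=768),
).to_sql()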
Asynchronous processing
The pgai vectorizer worker runs asynchronously and integrates cleanly with LangChain's async APIs:
# Snippet from examples/simple_fastapi_app/with_psycopg.py
import asyncio
from pgai import Worker

# DB_URL is a plain postgresql:// connection string.
worker = Worker(DB_URL)
task = asyncio.create_task(worker.run())
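In a web application the worker is usually tied to the application lifecycle. A minimal sketch using FastAPI's lifespan hook (structure assumed, not copied from the example):

import asyncio
import os
from contextlib import asynccontextmanager
from fastapi import FastAPI
from pgai import Worker

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Start the embedding worker alongside the web app.
    worker = Worker(os.environ["DATABASE_URL"].replace("+asyncpg", ""))
    task = asyncio.create_task(worker.run())
    yield
    # Stop it on shutdown.
    task.cancel()

app = FastAPI(lifespan=lifespan)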
Performance Optimization
Batch processing
For large document sets, combine LangChain's document loaders and splitters with the pgai vectorizer:
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import CharacterTextSplitter

loader = DirectoryLoader('./documents', glob="**/*.md")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Add documents in bulk; the vectorizer embeds them in the background.
await vector_store.aadd_documents(texts)
Caching
If your pgai version supports embedding caching, enable it on the embedding config to avoid recomputing embeddings:
# Embedding cache configuration (availability depends on your pgai version).
embedding_config = EmbeddingOpenAIConfig(
    model='text-embedding-3-small',
    dimensions=768,
    cache=True
)
Summary
This article walked through integrating pgai with LangChain to build an advanced RAG application, covering environment setup, data modeling, vectorizer configuration, LangChain integration, advanced features, and performance optimization.
With pgai's automatic embedding management and LangChain's chain abstractions, you can build efficient, scalable RAG applications that return accurate, relevant answers.
For more details, see the following resource:
pgai (Helper functions for AI workflows): https://gitcode.com/GitHub_Trending/pg/pgai




