In modern AI applications, managing and retrieving data effectively is a key challenge. LlamaIndex provides a flexible data framework that lets developers easily build and manage applications around large language models (LLMs). In this article, we take a close look at how to use LlamaIndex to create and query a knowledge-base index.
1. Environment Setup
pip install llama_index
pip install llama-index-llms-ollama
pip install llama-index-embeddings-ollama
pip install llama-index-readers-database
pip install llama-index-vector-stores-postgres
pip install langchain
pip install langchain-core
pip install llama-index-graph-stores-neo4j
pip install langchain-text-splitters
pip install spacy
2. Enable Diagnostic Logging
import logging, sys

# Send INFO-level logs and above to stdout so indexing progress is visible.
# (basicConfig already attaches a stdout handler; adding a second
# StreamHandler would print every record twice.)
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
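The effect of the log level can be sanity-checked with a small standard-library sketch (no LlamaIndex required, and the logger name is invented for the demo); records below the configured threshold are dropped:

```python
import io
import logging

# Capture log output in a buffer so the level filter's effect is visible
buffer = io.StringIO()
logger = logging.getLogger("diagnostics-demo")  # hypothetical logger name
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(buffer))
logger.propagate = False  # keep the demo output out of the root logger

logger.debug("hidden: below the INFO threshold")
logger.info("shown: ingestion started")

print(buffer.getvalue().strip())  # → shown: ingestion started
```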
3. 配置本地模型
请到 https://ollama.com/
安装 Ollama
,并下载大模型,比如:Llama 3、 Phi 3、 Mistral、Gemma、qwen等。为了测试方便,我们选用速度更快、效果较好的 qwen2:7B
模型。
from llama_index.llms.ollama import Ollama

llm_ollama = Ollama(base_url="http://127.0.0.1:11434", model="qwen2:7b", request_timeout=600.0)
4. Configure the Local Embedding Model
Here we use the nomic-embed-text text embedding model.
from llama_index.embeddings.ollama import OllamaEmbedding

nomic_embed_text = OllamaEmbedding(base_url="http://127.0.0.1:11434", model_name="nomic-embed-text")
5. LlamaIndex Global Settings
from llama_index.core import Settings

# Use the local Ollama LLM
Settings.llm = llm_ollama
# Default chunk size for document splitting
Settings.chunk_size = 500
# Use the local embedding model
Settings.embed_model = nomic_embed_text
6. Create the Vector Store and Knowledge Graph Store
6.1 Custom Vector Store
# Use PostgreSQL (pgvector) as the vector store
from llama_index.vector_stores.postgres import PGVectorStore
vector_store = PGVectorStore.from_params(
    database="langchat",
    host="syg-node",
    password="AaC43.#5",
    port=5432,
    user="postgres",
    table_name="llama_vector_store",
    embed_dim=768  # must match the output dimension of nomic-embed-text (768)
)
6.2 Knowledge Graph Store: Neo4j
from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore

graph_store = Neo4jPropertyGraphStore(
    username="neo4j",
    password="1+dZo#eG6*H=9.2",
    url="bolt://syg-node:7687",
    database="neo4j"
)
6.3 Configure the Storage Context
from llama_index.core import StorageContext
storage_context = StorageContext.from_defaults(
    vector_store=vector_store,
    property_graph_store=graph_store
)
7. Load Data from the Database
from llama_index.readers.database import DatabaseReader
db = DatabaseReader(
    scheme="mysql",
    host="syg-node",      # Database Host
    port="3206",          # Database Port
    user="root",          # Database User
    password="AaC43.#5",  # Database Password
    dbname="stock_db",    # Database Name
)
query = """
select concat(title, '。\n', summary, '\n', content) as text
from tb_article_info
where content_flag = 1
order by id
limit 0, 10
"""
documents = db.load_data(query=query)
print(f"Loaded {len(documents)} Files")
print(documents[0])
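DatabaseReader turns each returned row into one document whose text comes from the selected column, which is why the query concatenates title, summary, and content into a single text field. That convention can be illustrated with an in-memory SQLite stand-in (illustration only; the real reader connects to MySQL, and `||` replaces MySQL's concat):

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL table (illustration only)
conn = sqlite3.connect(":memory:")
conn.execute(
    "create table tb_article_info "
    "(id integer, title text, summary text, content text, content_flag integer)"
)
conn.execute("insert into tb_article_info values (1, 'T1', 'S1', 'C1', 1)")
conn.execute("insert into tb_article_info values (2, 'T2', 'S2', 'C2', 0)")

# Same shape as the article's query; '||' is SQLite's string concatenation
# and char(10) stands in for the '\n' escape in the MySQL version
rows = conn.execute(
    "select title || '。' || char(10) || summary || char(10) || content as text "
    "from tb_article_info where content_flag = 1 order by id"
).fetchall()

# Each row's text column becomes one document, mirroring DatabaseReader
texts = [row[0] for row in rows]
print(texts)  # → ['T1。\nS1\nC1']
```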
8. Text Splitter: SpacyTextSplitter
Install the zh_core_web_sm model:
## https://github.com/explosion/spacy-models/releases/download/zh_core_web_sm-3.7.0/zh_core_web_sm-3.7.0-py3-none-any.whl
python -m spacy download zh_core_web_sm
from llama_index.core.node_parser import LangchainNodeParser
from langchain.text_splitter import SpacyTextSplitter
spacy_text_splitter = LangchainNodeParser(SpacyTextSplitter(
    pipeline="zh_core_web_sm",
    chunk_size=512,
    chunk_overlap=128
))
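The interplay of chunk_size and chunk_overlap can be sketched as a simplified character-based sliding window (SpacyTextSplitter additionally respects sentence boundaries, which this sketch ignores):

```python
def chunk_with_overlap(text: str, chunk_size: int, chunk_overlap: int) -> list:
    # Each new chunk starts chunk_size - chunk_overlap characters after the
    # previous one, so consecutive chunks share chunk_overlap characters
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = chunk_with_overlap("a" * 1000, chunk_size=512, chunk_overlap=128)
print(len(chunks))  # → 3
# The tail of one chunk repeats at the head of the next
print(chunks[0][-128:] == chunks[1][:128])  # → True
```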
9. Configure the Ingestion Pipeline
from llama_index.core.ingestion import IngestionPipeline
pipeline = IngestionPipeline(
    transformations=[
        spacy_text_splitter
    ],
    vector_store=vector_store
)
# Split the documents into nodes and write their embeddings into the vector store
nodes = pipeline.run(documents=documents)
print(f"Ingested {len(nodes)} Nodes")
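Conceptually, an ingestion pipeline applies its transformations in order and hands the resulting nodes to the attached store. A minimal standard-library sketch of that control flow (all names here are hypothetical, not the LlamaIndex API):

```python
def run_pipeline(documents, transformations, store):
    # Apply each transformation in sequence, then persist the result
    nodes = documents
    for transform in transformations:
        nodes = transform(nodes)
    store.extend(nodes)
    return nodes

# A toy "splitter" transformation that halves every document
def split_in_half(docs):
    return [part for d in docs for part in (d[: len(d) // 2], d[len(d) // 2:])]

store = []
nodes = run_pipeline(["abcdef", "ghijkl"], [split_in_half], store)
print(nodes)  # → ['abc', 'def', 'ghi', 'jkl']
```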
10. Create the Knowledge Graph Index
from llama_index.core import PropertyGraphIndex
index = PropertyGraphIndex(
    nodes=nodes,
    llm=llm_ollama,
    use_async=False,
    storage_context=storage_context,
    show_progress=True
)
The framework currently has a bug in its asynchronous processing path, so use_async must be set to False.
11. Create the Query Engine
index = PropertyGraphIndex.from_existing(
    property_graph_store=graph_store,
    vector_store=vector_store
)
query_engine = index.as_query_engine(llm=llm_ollama)
res = query_engine.query("孩子连着上七天八天的课,确实挺累的")
print(res)
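Under the hood, the query engine embeds the question, retrieves the nearest stored chunks by vector similarity, and passes them to the LLM as context. The ranking step can be sketched with cosine similarity over toy vectors (real nomic-embed-text vectors have 768 dimensions; the chunk names below are invented for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" of two stored chunks
chunks = {
    "article about school schedules": [0.9, 0.1, 0.0],
    "article about stock prices":     [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of the user's question

# The chunk whose vector is closest to the query wins retrieval
best = max(chunks, key=lambda name: cosine(query_vec, chunks[name]))
print(best)  # → article about school schedules
```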