LLM: Deploying a local embedding model and GLM-4 with llama-index to build a basic RAG (other LLMs work too; an Ollama-based setup is included)

Preface

No time for this right now; I'll fill it in later.

llama-index overview

Official docs: https://docs.llamaindex.ai/en/stable/

No time for the overview either; I'll write it later.

Note: as the official API evolves, this code may also change. If it doesn't run for you, check the official docs for the current usage.

加载本地embedding模型

如果没有找到 llama_index.embeddings.huggingface

那么:pip install llama_index-embeddings-huggingface

还不行进入官网,输入huggingface进行搜索

from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import Settings

# embed_model_path is the local path of the embedding model (e.g. BAAI/bge-m3, as in the full example below)
Settings.embed_model = HuggingFaceEmbedding(
    model_name=embed_model_path, device='cuda'
)
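To confirm the embedding model is loaded correctly before building any index, you can embed a short string and check the vector size. A minimal sanity check using llama-index's get_text_embedding (the 1024-dimension figure is what bge-m3 typically returns):

vec = Settings.embed_model.get_text_embedding("hello world")
print(len(vec))  # e.g. 1024 for bge-m3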

Loading a local LLM

Again, if the code below doesn't work, search the official docs for "Custom LLM Model".

from typing import Any

from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback
from transformers import AutoTokenizer, AutoModelForCausalLM

class GLMCustomLLM(CustomLLM):
    context_window: int = 8192  # context window size
    num_output: int = 8000  # maximum number of tokens to generate
    model_name: str = "glm-4-9b-chat"  # model name
    tokenizer: object = None  # tokenizer
    model: object = None  # model
    dummy_response: str = "My response"

    def __init__(self, pretrained_model_name_or_path):
        super().__init__()

        # Load the model on GPU
        self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path, device_map="cuda", trust_remote_code=True)
        self.model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="cuda", trust_remote_code=True).eval()

        # Load the model on CPU instead
        # self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path, device_map="cpu", trust_remote_code=True)
        # self.model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="cpu", trust_remote_code=True)
        # self.model = self.model.float()  # cast to float32 only when running on CPU

    @property
    def metadata(self) -> LLMMetadata:
        """Get LLM metadata."""
        # Return the LLM's metadata
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    # @llm_completion_callback()
    # def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
    #     return CompletionResponse(text=self.dummy_response)
    #
    # @llm_completion_callback()
    # def stream_complete(
    #     self, prompt: str, **kwargs: Any
    # ) -> CompletionResponseGen:
    #     response = ""
    #     for token in self.dummy_response:
    #         response += token
    #         yield CompletionResponse(text=response, delta=token)

    @llm_completion_callback()  # completion callback
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Non-streaming completion
        print("complete()")

        inputs = self.tokenizer.encode(prompt, return_tensors='pt').cuda()  # GPU
        # inputs = self.tokenizer.encode(prompt, return_tensors='pt')  # CPU
        outputs = self.model.generate(inputs, max_length=self.num_output)
        response = self.tokenizer.decode(outputs[0])
        return CompletionResponse(text=response)

    @llm_completion_callback()
    def stream_complete(
        self, prompt: str, **kwargs: Any
    ) -> CompletionResponseGen:
        # Streaming completion (here the decoded text is emitted character by character)
        print("stream_complete()")

        inputs = self.tokenizer.encode(prompt, return_tensors='pt').cuda()  # GPU
        # inputs = self.tokenizer.encode(prompt, return_tensors='pt')  # CPU
        outputs = self.model.generate(inputs, max_length=self.num_output)
        response = self.tokenizer.decode(outputs[0])
        text = ""
        for token in response:
            text += token
            yield CompletionResponse(text=text, delta=token)
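The class can be sanity-checked on its own before it is wired into a RAG pipeline. A minimal sketch, assuming the GLM-4 weights live under the same path used in the full example below:

llm = GLMCustomLLM('/home/nlp/model/LLM/THUDM/glm-4-9b-chat')  # adjust to your local path
print(llm.metadata)  # context_window / num_output / model_name
print(llm.complete("Introduce yourself in one sentence.").text)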

Building a simple RAG on top of the local models

from typing import Any

from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback
from transformers import AutoTokenizer, AutoModelForCausalLM
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.huggingface import HuggingFaceEmbedding


class GLMCustomLLM(CustomLLM):
    context_window: int = 8192  # context window size
    num_output: int = 8000  # maximum number of tokens to generate
    model_name: str = "glm-4-9b-chat"  # model name
    tokenizer: object = None  # tokenizer
    model: object = None  # model
    dummy_response: str = "My response"

    def __init__(self, pretrained_model_name_or_path):
        super().__init__()

        # Load the model on GPU
        self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path, device_map="cuda", trust_remote_code=True)
        self.model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="cuda", trust_remote_code=True).eval()

        # Load the model on CPU instead
        # self.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path, device_map="cpu", trust_remote_code=True)
        # self.model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="cpu", trust_remote_code=True)
        # self.model = self.model.float()  # cast to float32 only when running on CPU

    @property
    def metadata(self) -> LLMMetadata:
        """Get LLM metadata."""
        # Return the LLM's metadata
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    # @llm_completion_callback()
    # def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
    #     return CompletionResponse(text=self.dummy_response)
    #
    # @llm_completion_callback()
    # def stream_complete(
    #     self, prompt: str, **kwargs: Any
    # ) -> CompletionResponseGen:
    #     response = ""
    #     for token in self.dummy_response:
    #         response += token
    #         yield CompletionResponse(text=response, delta=token)

    @llm_completion_callback()  # completion callback
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Non-streaming completion
        print("complete()")

        inputs = self.tokenizer.encode(prompt, return_tensors='pt').cuda()  # GPU
        # inputs = self.tokenizer.encode(prompt, return_tensors='pt')  # CPU
        outputs = self.model.generate(inputs, max_length=self.num_output)
        response = self.tokenizer.decode(outputs[0])
        return CompletionResponse(text=response)

    @llm_completion_callback()
    def stream_complete(
        self, prompt: str, **kwargs: Any
    ) -> CompletionResponseGen:
        # Streaming completion (here the decoded text is emitted character by character)
        print("stream_complete()")

        inputs = self.tokenizer.encode(prompt, return_tensors='pt').cuda()  # GPU
        # inputs = self.tokenizer.encode(prompt, return_tensors='pt')  # CPU
        outputs = self.model.generate(inputs, max_length=self.num_output)
        response = self.tokenizer.decode(outputs[0])
        text = ""
        for token in response:
            text += token
            yield CompletionResponse(text=text, delta=token)


if __name__ == "__main__":

    # Local model paths (adjust to your own environment)
    pretrained_model_name_or_path = r'/home/nlp/model/LLM/THUDM/glm-4-9b-chat'
    embed_model_path = '/home/nlp/model/Embedding/BAAI/bge-m3'

    # Local embedding model
    Settings.embed_model = HuggingFaceEmbedding(
        model_name=embed_model_path, device='cuda'
    )

    # Local LLM
    Settings.llm = GLMCustomLLM(pretrained_model_name_or_path)

    # Load documents and build the vector index
    documents = SimpleDirectoryReader(input_dir="/home/xxxx/input").load_data()
    index = VectorStoreIndex.from_documents(
        documents,
    )

    # Query and print the result
    query_engine = index.as_query_engine()
    response = query_engine.query("萧炎的表妹是谁?")  # "Who is Xiao Yan's cousin?"

    print(response)
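Rebuilding the index on every run becomes slow once the document set grows. A minimal sketch of persisting and reloading the index, following the pattern from the official starter example (the "./storage" directory is an arbitrary choice, and Settings.embed_model / Settings.llm still need to be set before reloading):

from llama_index.core import StorageContext, load_index_from_storage

# Persist the freshly built index to disk
index.storage_context.persist(persist_dir="./storage")

# On later runs, reload it instead of re-embedding all documents
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)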

Ollama

The same RAG setup also works with an LLM served through Ollama (if llama_index.llms.ollama is not found, pip install llama-index-llms-ollama):

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()

# bge-base embedding model
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

# ollama
Settings.llm = Ollama(model="llama3", request_timeout=360.0)

index = VectorStoreIndex.from_documents(
    documents,
)
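The snippet above only builds the index; querying works the same way as in the previous example. This assumes the llama3 model has already been pulled locally (for example with "ollama pull llama3") and that the Ollama server is running:

query_engine = index.as_query_engine()
response = query_engine.query("Replace this with a question about your own documents")
print(response)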

Likes and bookmarks are welcome

Your likes and bookmarks encourage the author to update faster~

The author has also set up a study group; everyone is welcome to join and learn together.

Get the QR code via the link below:
Link: https://pan.baidu.com/s/1ZZZZ-ANJvMFHlanl-tbtew?pwd=9el1

Reference links:

CustomLLM in LlamaIndex (loading a local model)
LlamaIndex: loading a local embedding model on GPU

Official documentation

Official docs: starter_example_local

Official docs: usage_custom
