1. Deployment purpose:
Expose the model as a local HTTP API so that a backend service can call it directly over the port.
2. Deployment steps
Step 1: Download the embedding model (m3e is used as the example here)
cd models
# clone the m3e embedding model
git clone https://www.modelscope.cn/xrunda/m3e-base.git
Step 2: Modify the model paths (point the server at your local ChatGLM3 weights and at the m3e model downloaded above)

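In ChatGLM3's openai_api_demo, the model locations are usually set near the top of api_server.py. The exact variable names differ across versions of the repo, so the snippet below is only an illustrative sketch: MODEL_PATH, EMBEDDING_PATH, and the default values are assumptions to check against your checkout.

```python
import os

# Hypothetical sketch of the path configuration in api_server.py --
# verify the real variable names in your copy of the file.
# MODEL_PATH: local directory (or hub id) of the chatglm3-6b weights.
# EMBEDDING_PATH: the m3e-base directory cloned in Step 1.
MODEL_PATH = os.environ.get("MODEL_PATH", "THUDM/chatglm3-6b")
EMBEDDING_PATH = os.environ.get("EMBEDDING_PATH", os.path.join("models", "m3e-base"))
```

Reading the paths from environment variables (instead of hard-coding them) lets you switch models without editing the script.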
Step 3: Start the server
cd openai_api_demo
python api_server.py
Step 4: Test
Test with curl:
curl -X POST "http://127.0.0.1:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
-d "{\"model\": \"chatglm3-6b\", \"messages\": [{\"role\": \"system\", \"content\": \"You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.\"}, {\"role\": \"user\", \"content\": \"你好,你是谁?\"}], \"stream\": false, \"max_tokens\": 100, \"temperature\": 0.8, \"top_p\": 0.8}"
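The same request can also be issued from Python using only the standard library; a minimal sketch, with the URL, model name, and parameters taken from the curl command above:

```python
import json
import urllib.request

# Request body mirroring the curl example above.
payload = {
    "model": "chatglm3-6b",
    "messages": [
        {"role": "system",
         "content": "You are ChatGLM3, a large language model trained by "
                    "Zhipu.AI. Follow the user's instructions carefully."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
    "stream": False,
    "max_tokens": 100,
    "temperature": 0.8,
    "top_p": 0.8,
}
body = json.dumps(payload).encode("utf-8")

# Uncomment once the server from Step 3 is running on port 8000:
# req = urllib.request.Request(
#     "http://127.0.0.1:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Setting "stream": true instead returns the reply incrementally as server-sent events rather than one JSON object.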