Problem

Using LlamaIndex to connect to OpenAI from inside mainland China throws an error. How can a custom model be used instead?

As is well known, OpenAI restricts access from mainland China, which is why the connection fails.
Solution

Method 1: Use a third-party relay to the OpenAI servers

If it is specifically OpenAI's models you want, a domestic third-party relay that forwards requests to the OpenAI servers is enough.

Note that for hosted models, LlamaIndex's OpenAI integration only accepts OpenAI model names; if you want another vendor's model, you would have to modify the source code.
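As an aside, if your endpoint serves a non-OpenAI model behind an OpenAI-compatible API, a gentler route than patching the source is LlamaIndex's OpenAILike wrapper, which skips the model-name check. A minimal sketch, assuming the llama-index-llms-openai-like package is installed; the model name, URL, and key below are placeholders:

from llama_index.llms.openai_like import OpenAILike

# OpenAILike talks to any OpenAI-compatible endpoint without
# validating the model name against OpenAI's official list.
llm = OpenAILike(
    model="your-model-name",           # placeholder: whatever the endpoint serves
    api_base="https://your-relay/v1",  # placeholder relay URL
    api_key="your-key",                # placeholder key
    is_chat_model=True,                # treat the endpoint as a chat API
)
print(llm.complete("Hello"))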
Then create a .env file:

OPENAI_BASE_URL = "https://sg.uiuiapi.com/v1"
OPENAI_API_KEY = "replace this with the key you applied for"
Install the dependencies with pip (note that the PyPI package behind "from dotenv import ..." is python-dotenv, and llama-index itself is needed as well): pip install llama-index python-dotenv openai
Create a starter.py file; running it returns the result successfully! (The key and base URL from .env are passed to the LLM explicitly, so the requests are guaranteed to go through the relay.)
import asyncio
import os

from dotenv import load_dotenv
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

# Load the environment variables stored in the .env file in the current directory
load_dotenv()

# Define a simple calculator tool
def multiply(a: float, b: float) -> float:
    """Useful for multiplying two numbers."""
    return a * b

# Create an agent workflow with our calculator tool.
# Pass the key and the relay's base URL to the LLM explicitly, so the
# relay is used regardless of how LlamaIndex resolves credentials.
agent = FunctionAgent(
    tools=[multiply],
    llm=OpenAI(
        model="gpt-4o-mini",
        api_key=os.environ["OPENAI_API_KEY"],
        api_base=os.environ["OPENAI_BASE_URL"],
    ),
    system_prompt="You are a helpful assistant that can multiply two numbers.",
)

async def main():
    # Run the agent
    response = await agent.run("What is 1234 * 4567?")
    print(str(response))

if __name__ == "__main__":
    asyncio.run(main())
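If the agent call fails, it helps to first confirm that the key and the relay URL work at all, independently of LlamaIndex. A minimal sanity check using only the openai SDK (same .env as above; gpt-4o-mini is just an example model):

import os

import openai
from dotenv import load_dotenv

load_dotenv()

# Build a client pointed at the relay instead of api.openai.com
client = openai.OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=os.environ["OPENAI_BASE_URL"],
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hi in one word."}],
)
print(response.choices[0].message.content)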
Method 2: Deploy llama or qwen locally with Ollama

If you deploy the model with Ollama, no environment variables need to be set. Install the integration first (pip install llama-index-llms-ollama) and make sure the model has been pulled (ollama pull llama3.1).

Create a file starter_ollama.py; it runs successfully:
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.ollama import Ollama

# Define a simple calculator tool
def multiply(a: float, b: float) -> float:
    """Useful for multiplying two numbers."""
    return a * b

# Create an agent workflow with our calculator tool
agent = FunctionAgent(
    tools=[multiply],
    llm=Ollama(
        model="llama3.1:latest",
        request_timeout=360.0,
        # Manually set the context window to limit memory usage
        context_window=8000,
    ),
    system_prompt="You are a helpful assistant that can multiply two numbers.",
)

async def main():
    # Run the agent
    response = await agent.run("What is 1234 * 4567?")
    print(str(response))

if __name__ == "__main__":
    asyncio.run(main())
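The heading above also mentions qwen; with Ollama only the model string changes. A minimal sketch, assuming a Qwen model has already been pulled locally (e.g. ollama pull qwen2.5) and the Ollama server is running on its default port:

from llama_index.llms.ollama import Ollama

# Same wrapper as above, pointed at the locally pulled Qwen model
llm = Ollama(
    model="qwen2.5:latest",  # assumes `ollama pull qwen2.5` has been run
    request_timeout=360.0,
    context_window=8000,
)
print(llm.complete("What is 1234 * 4567?"))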