Calling Large Models Asynchronously

When there are many prompts to send, calling the large model asynchronously speeds things up considerably. First, the ordinary synchronous version, with 20 prompts:

```python
import time
from openai import OpenAI

client = OpenAI(
    api_key="",
    base_url="https://open.bigmodel.cn/api/paas/v4/"
)

def call_large_model(prompt):
    completion = client.chat.completions.create(
        model="glm-4-plus",
        messages=[
            {"role": "system", "content": "你是一个聪明且富有创造力的小说作家"},
            {"role": "user", "content": prompt}
        ],
        top_p=0.7,
        temperature=0.9
    )
    # Return just the reply text rather than the whole message object
    return completion.choices[0].message.content

def main_sync():
    prompts = [f"说出数字,不用说其他内容 {i}" for i in range(20)]
    start_time = time.time()
    # Each call blocks until the previous one has finished
    for prompt in prompts:
        result = call_large_model(prompt)
        print(result)
    print(f"Sync execution took {time.time() - start_time:.2f} seconds")

if __name__ == "__main__":
    main_sync()
```
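
The blocking client can also be parallelized without async at all, via a thread pool. Below is a minimal sketch reusing `call_large_model` and the `time` import from the script above; the worker count of 10 is an arbitrary assumption, not a tuned value:

```python
from concurrent.futures import ThreadPoolExecutor

def main_threaded():
    prompts = [f"说出数字,不用说其他内容 {i}" for i in range(20)]
    start_time = time.time()
    # Each blocking call runs in its own worker thread;
    # pool.map yields results in the same order as the input prompts
    with ThreadPoolExecutor(max_workers=10) as pool:  # 10 workers: an assumption
        for result in pool.map(call_large_model, prompts):
            print(result)
    print(f"Threaded execution took {time.time() - start_time:.2f} seconds")
```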

With sequential calls the total time is roughly the sum of all 20 request latencies. The asynchronous version with aiohttp issues every request concurrently:

```python
import aiohttp
import asyncio
import time

API_KEY = ""
BASE_URL = "https://open.bigmodel.cn/api/paas/v4"

async def call_large_model(prompt, session):
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    messages = [
        {"role": "system", "content": "你是一个聪明且富有创造力的小说作家"},
        {"role": "user", "content": prompt}
    ]
    payload = {
        "model": "glm-4-plus",
        "messages": messages,
        "top_p": 0.7,
        "temperature": 0.9
    }
    # Posting through the shared session keeps connections pooled across requests
    async with session.post(url, json=payload, headers=headers) as response:
        completion = await response.json()
        # Return just the reply text, as in the synchronous version
        return completion['choices'][0]['message']['content']

async def main_async():
    prompts = [f"说出数字,不用说其他内容 {i}" for i in range(20)]
    start_time = time.time()
    async with aiohttp.ClientSession() as session:
        # Create all coroutines up front, then run them concurrently
        tasks = [call_large_model(prompt, session) for prompt in prompts]
        results = await asyncio.gather(*tasks)
        for result in results:
            print(result)
    print(f"Async execution took {time.time() - start_time:.2f} seconds")

if __name__ == "__main__":
    asyncio.run(main_async())
```
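
Note that `asyncio.gather` returns results in the same order as the tasks passed in, so each reply still lines up with its prompt. In practice the provider usually enforces a concurrency or rate limit, so a cap on in-flight requests is often needed. Here is a minimal sketch that plugs into the script above using `asyncio.Semaphore`; the cap of 5 is an arbitrary assumption, not a documented quota:

```python
MAX_CONCURRENCY = 5  # arbitrary assumption; check your provider's real quota
semaphore = asyncio.Semaphore(MAX_CONCURRENCY)

async def call_with_limit(prompt, session):
    # At most MAX_CONCURRENCY requests are in flight at any moment;
    # the remaining coroutines wait here instead of hitting the server at once
    async with semaphore:
        return await call_large_model(prompt, session)

# In main_async, build the task list with the wrapper instead:
#   tasks = [call_with_limit(prompt, session) for prompt in prompts]
```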


The same pattern with the openai SDK's built-in `AsyncOpenAI` client:

```python
import asyncio
import time
from openai import AsyncOpenAI

# Create the async client
client = AsyncOpenAI(
    base_url='',
    api_key='',  # ModelScope Token
)

# Extra request parameters
extra_body = {
    "enable_thinking": False,
    # "thinking_budget": 4096
}


async def process_request(user_query: str):
    """Handle a single request asynchronously."""
    response = await client.chat.completions.create(
        model='Qwen/Qwen3-235B-A22B',
        messages=[
            {
                'role': 'user',
                'content': user_query
            }
        ],
        stream=False,
        extra_body=extra_body
    )
    # No tools are passed in this request, so read the text content
    # rather than message.tool_calls (which would be None here)
    return response.choices[0].message.content


async def process_batch_requests(queries: list):
    """Run a batch of requests concurrently."""
    tasks = [process_request(query) for query in queries]
    results = await asyncio.gather(*tasks)
    return results


# Usage example
if __name__ == "__main__":
    queries = [
        "9.9和9.11谁大",
        "10.1和10.10哪个更大",
        "比较5.5和5.55的大小"
    ]

    # Run the async batch
    start = time.time()
    results = asyncio.run(process_batch_requests(queries))

    # Print all results
    for i, answer in enumerate(results):
        print(f"Result for query {queries[i]}:")
        print(answer)
        print("-" * 50)

    end = time.time()
    print(f"Done in {end - start:.2f} seconds")
```
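
As written, a single failed request makes `asyncio.gather` raise and the other results are lost. A minimal sketch of tolerating per-request failures with gather's standard `return_exceptions=True` option:

```python
async def process_batch_requests_safe(queries: list):
    tasks = [process_request(query) for query in queries]
    # With return_exceptions=True, a failing task puts its exception object
    # into the results list instead of cancelling the whole batch
    results = await asyncio.gather(*tasks, return_exceptions=True)
    for query, result in zip(queries, results):
        if isinstance(result, Exception):
            print(f"Query {query!r} failed: {result}")
    return results
```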

### Implementing Async Calls in LangChain

In LangChain, to use resources efficiently and improve performance, you can use an `AsyncCallbackHandler` to handle asynchronous operations. This lets the application continue executing other tasks while waiting for long-running ones to complete[^1].

Async methods matter especially for large models: they are compute-intensive and can respond with high latency, so non-blocking calls let you load or query these components without degrading the user experience. The concrete implementation depends on the non-blocking request/response functions supported by your chosen platform and libraries.

The following Python snippet shows how to create a simple async handler:

```python
from langchain.callbacks.base import AsyncCallbackHandler
import asyncio

class MyCustomAsyncHandler(AsyncCallbackHandler):
    async def on_llm_start(self, serialized, prompts, **kwargs):
        print("LLM started.")

    async def on_llm_end(self, response, **kwargs):
        print(f"LLM finished with result {response}.")

async def main():
    handler = MyCustomAsyncHandler()
    await handler.on_llm_start({}, ["prompt"])
    await asyncio.sleep(2)  # Simulate LLM processing time.
    await handler.on_llm_end("example_result")

if __name__ == "__main__":
    asyncio.run(main())
```

This script defines a new class inheriting from `AsyncCallbackHandler` and overrides two key methods, `on_llm_start()` and `on_llm_end()`, which are triggered automatically at the corresponding lifecycle events. The main program instantiates the handler and walks through the whole flow with a simulated call.

Beyond this basic skeleton, the class can be extended to fit specific needs, for example by integrating error-handling logic. Separately, if you want to delete a model version that is no longer used, you can do it as follows:

```bash
ollama rm qwen2.5-3bnsfw:latest
```

This command removes the model files and associated data for the `qwen2.5-3bnsfw:latest` pretrained language model[^2].
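
The simulated call above never touches a real model. Below is a hedged sketch of wiring the same handler into an actual LangChain LLM call, assuming the classic langchain 0.0.x API in which callbacks can be passed to the LLM constructor; import paths and class names vary across LangChain versions, and an `OPENAI_API_KEY` must be set in the environment:

```python
import asyncio
from langchain.callbacks.base import AsyncCallbackHandler
from langchain.llms import OpenAI  # import path varies by LangChain version

class MyCustomAsyncHandler(AsyncCallbackHandler):
    async def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM started with {len(prompts)} prompt(s).")

    async def on_llm_end(self, response, **kwargs):
        print("LLM finished.")

async def main():
    # Registering the handler at construction makes it fire on every call
    llm = OpenAI(callbacks=[MyCustomAsyncHandler()])
    # agenerate is the async entry point, so on_llm_start/on_llm_end
    # now wrap a real request instead of the simulated sleep above
    result = await llm.agenerate(["Write a one-line story."])
    print(result.generations[0][0].text)

if __name__ == "__main__":
    asyncio.run(main())
```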