Example: streaming output with AutoModelForCausalLM

The example below uses TextIteratorStreamer: model.generate() runs in a background thread and pushes decoded text into the streamer, while the main thread iterates over the streamer and prints each chunk as it arrives.

import spaces
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from threading import Thread

import os

# Let the CUDA caching allocator grow segments, which reduces fragmentation-related OOMs
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'expandable_segments:True'

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "NyxKrage/Microsoft_Phi-4", 
    device_map="cuda", 
    torch_dtype="auto", 
    trust_remote_code=True, 
)
tokenizer = AutoTokenizer.from_pretrained("NyxKrage/Microsoft_Phi-4")

# skip_prompt avoids re-printing the input prompt; skip_special_tokens drops markers such as end-of-text
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

# Apply the chat template and append the assistant generation prompt
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

generation_kwargs = dict(
    input_ids=input_ids,
    max_new_tokens=500,
    do_sample=False,  # greedy decoding; temperature is ignored when sampling is off
    streamer=streamer,
)

@spaces.GPU
def tuili():
    # Run generation in a separate thread so the streamed text can be consumed without blocking
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()
    # Print the generated text in real time as the streamer yields it
    for new_text in streamer:
        print(new_text, end="", flush=True)
    thread.join()

tuili()
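
Since the example imports spaces (the ZeroGPU helper used in Hugging Face Gradio Spaces), the same pattern can feed a web UI instead of stdout by yielding the accumulated text from a generator function. The following is a minimal sketch, not part of the original example: it assumes gradio is installed, keeps the @spaces.GPU decorator, and reuses the model and tokenizer defined above; the function and variable names (stream_chat, request_streamer) are illustrative, and the conversation history is ignored for brevity.

import gradio as gr

@spaces.GPU
def stream_chat(message, history):
    # history is ignored here; each call is treated as a fresh single-turn prompt
    msgs = [
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": message},
    ]
    ids = tokenizer.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt").to("cuda")
    # Use a fresh streamer per request so concurrent calls do not interleave
    request_streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    Thread(target=model.generate, kwargs=dict(
        input_ids=ids, max_new_tokens=500, do_sample=False, streamer=request_streamer,
    )).start()
    partial = ""
    for new_text in request_streamer:
        partial += new_text
        yield partial  # Gradio re-renders the reply with each yield

# gr.ChatInterface(stream_chat).launch()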

Reference: https://huggingface.co/docs/transformers/v4.47.1/en/internal/generation_utils#transformers.TextStreamer
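
The linked page also covers TextStreamer, the blocking counterpart of TextIteratorStreamer: instead of being iterated from another thread, it prints decoded text to stdout from inside generate(), so no Thread is needed. A minimal sketch reusing the model, tokenizer, and input_ids defined above:

from transformers import TextStreamer

plain_streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# generate() blocks until finished; text is printed to stdout as tokens are produced
model.generate(input_ids=input_ids, max_new_tokens=500, do_sample=False, streamer=plain_streamer)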
