Don't Let Your Qwen-Image "Gather Dust" in Jupyter! Three Steps to Turn It into a Money-Making API Service with FastAPI

[Free download] Qwen-Image — the image generation foundation model in the Qwen (Tongyi Qianwen) series, with major breakthroughs in complex text rendering and precise image editing. Project page: https://ai.gitcode.com/hf_mirrors/Qwen/Qwen-Image

Introduction

Can you already generate stunning images locally with Qwen-Image, and are you eager to share that visual creativity with the users of your website or app? A powerful text-to-image model sitting on your hard drive has limited value. Only when it becomes a stable, callable API service can it truly power real applications. This tutorial walks you through that transformation, the critical step from a local script to a cloud API.

Tech Stack Choice and Environment Setup

Why FastAPI?

FastAPI is a modern, high-performance Python web framework that is particularly well suited to building API services. Compared with Flask, it offers the following advantages:

  • Automatic API documentation: built-in Swagger UI and ReDoc, no hand-written docs needed
  • Type-hint support: built on Python type hints, giving better readability and IDE support
  • Async support: native async/await for handling highly concurrent requests efficiently
  • Data validation: automatic request validation that reduces error-handling boilerplate (see the minimal sketch after this list)
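
To see what these features buy you in practice, here is a minimal, self-contained sketch that is independent of the Qwen-Image service built later; the EchoRequest model, the /echo route, and the demo.py filename are illustrative only. The type-hinted Pydantic model gives you request validation for free, and interactive docs appear automatically at /docs.

from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="Demo API")

# A type-hinted request model: FastAPI validates incoming JSON against it
class EchoRequest(BaseModel):
    text: str = Field(..., description="Text to echo back")
    repeat: int = Field(1, ge=1, le=5, description="How many times to repeat")

# An async endpoint: invalid payloads get a 422 response automatically
@app.post("/echo")
async def echo(req: EchoRequest):
    return {"result": req.text * req.repeat}

# Run with: uvicorn demo:app --reload, then open http://localhost:8000/docs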

Environment dependencies

Create a requirements.txt file:

fastapi==0.104.1
uvicorn[standard]==0.24.0
diffusers==0.26.3
torch==2.1.0
torchvision==0.16.0
transformers==4.35.2
accelerate==0.24.1
Pillow==10.0.1
python-multipart==0.0.6

Install the dependencies:

pip install -r requirements.txt
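
Before going further, it is worth confirming that the installed stack can see your GPU. A quick check you can run in the same environment (the version prints are purely informational):

import torch
import diffusers

# Print library versions and whether a CUDA device is visible
print("torch:", torch.__version__)
print("diffusers:", diffusers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))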

Core Logic: Wrapping Qwen-Image Inference

The model loading function

First, we wrap the model-loading logic. Because the Qwen-Image model is large, we use a singleton pattern so that it is loaded only once:

import torch
from diffusers import DiffusionPipeline
from typing import Optional, Dict, Tuple
import logging

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class QwenImageModel:
    _instance = None
    _pipe = None
    
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance
    
    def load_model(self):
        """Load the Qwen-Image model."""
        if self._pipe is not None:
            logger.info("Model already loaded, skipping reload")
            return self._pipe
            
        model_name = "Qwen/Qwen-Image"
        
        # Device detection and configuration
        if torch.cuda.is_available():
            torch_dtype = torch.bfloat16
            device = "cuda"
            logger.info("CUDA device detected, using GPU acceleration")
        else:
            torch_dtype = torch.float32
            device = "cpu"
            logger.warning("No CUDA device detected, running on CPU; performance will be limited")
        
        try:
            logger.info(f"Loading model: {model_name}")
            self._pipe = DiffusionPipeline.from_pretrained(
                model_name, 
                torch_dtype=torch_dtype
            )
            self._pipe = self._pipe.to(device)
            logger.info("Model loaded successfully")
        except Exception as e:
            logger.error(f"Model loading failed: {str(e)}")
            raise
        
        return self._pipe
    
    def get_model(self):
        """Return the model instance, loading it on first access."""
        if self._pipe is None:
            return self.load_model()
        return self._pipe

The image generation function

Next, wrap the core image-generation logic:

from PIL import Image
import io
import base64
from datetime import datetime
import os

class QwenImageGenerator:
    def __init__(self):
        self.model = QwenImageModel()
        # Quality suffixes ("positive magic") appended to each prompt
        self.positive_magic = {
            "en": ", Ultra HD, 4K, cinematic composition.",
            "zh": ", 超清,4K,电影级构图."
        }
        self.aspect_ratios = {
            "1:1": (1328, 1328),
            "16:9": (1664, 928),
            "9:16": (928, 1664),
            "4:3": (1472, 1140),
            "3:4": (1140, 1472),
            "3:2": (1584, 1056),
            "2:3": (1056, 1584),
        }
    
    def generate_image(
        self,
        prompt: str,
        negative_prompt: str = " ",
        aspect_ratio: str = "16:9",
        num_inference_steps: int = 50,
        true_cfg_scale: float = 4.0,
        seed: Optional[int] = None
    ) -> Image.Image:
        """
        Core image-generation function.
        
        Args:
            prompt: positive prompt describing the desired image
            negative_prompt: negative prompt describing unwanted content
            aspect_ratio: aspect ratio key
            num_inference_steps: number of inference (denoising) steps
            true_cfg_scale: CFG guidance scale
            seed: random seed for reproducibility
            
        Returns:
            PIL.Image.Image: the generated image
        """
        pipe = self.model.get_model()
        
        # Resolve width and height from the aspect ratio
        if aspect_ratio not in self.aspect_ratios:
            logger.warning(f"Unsupported aspect ratio: {aspect_ratio}, falling back to 16:9")
            aspect_ratio = "16:9"
        width, height = self.aspect_ratios[aspect_ratio]
        
        # Set the random seed
        if seed is not None:
            generator = torch.Generator(device=pipe.device).manual_seed(seed)
        else:
            generator = None
        
        # Detect the prompt language (any CJK character) and append the matching quality suffix
        processed_prompt = prompt
        if any("\u4e00" <= char <= "\u9fff" for char in prompt):
            processed_prompt += self.positive_magic["zh"]
        else:
            processed_prompt += self.positive_magic["en"]
        
        logger.info(f"Starting image generation, prompt length: {len(prompt)}")
        
        # Run the diffusion pipeline
        try:
            result = pipe(
                prompt=processed_prompt,
                negative_prompt=negative_prompt,
                width=width,
                height=height,
                num_inference_steps=num_inference_steps,
                true_cfg_scale=true_cfg_scale,
                generator=generator
            )
            
            image = result.images[0]
            logger.info("Image generated successfully")
            return image
            
        except Exception as e:
            logger.error(f"Image generation failed: {str(e)}")
            raise
    
    def image_to_base64(self, image: Image.Image, format: str = "PNG") -> str:
        """Convert a PIL image to a base64 data URL string."""
        buffered = io.BytesIO()
        image.save(buffered, format=format)
        img_str = base64.b64encode(buffered.getvalue()).decode()
        return f"data:image/{format.lower()};base64,{img_str}"
    
    def save_image(self, image: Image.Image, filename: str = None) -> str:
        """Save the image to disk and return the file path."""
        if filename is None:
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = f"qwen_image_{timestamp}.png"
        
        # Make sure the output directory exists
        os.makedirs("outputs", exist_ok=True)
        filepath = os.path.join("outputs", filename)
        
        image.save(filepath, format="PNG")
        logger.info(f"Image saved to: {filepath}")
        return filepath
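
Before exposing any HTTP endpoints, it helps to smoke-test the wrapper directly. Below is a minimal sketch, assuming the two classes above live in a module named qwen_service.py (the module name and prompt are just examples):

# smoke_test.py -- quick local check of the wrapper classes above
from qwen_service import QwenImageGenerator

generator = QwenImageGenerator()

# The first call triggers the singleton's model load; later calls reuse it
image = generator.generate_image(
    prompt="A cozy reading nook by a rainy window",
    aspect_ratio="1:1",
    num_inference_steps=30,
    seed=42,
)
path = generator.save_image(image)
print("Saved to:", path)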

API Design: Handling Input and Output Gracefully

The complete FastAPI service

Now let's put together the full API service:

from fastapi import FastAPI, HTTPException, BackgroundTasks
from fastapi.responses import JSONResponse, FileResponse
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel, Field
from typing import Optional, List
import uuid
import json
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Request models
class ImageGenerationRequest(BaseModel):
    prompt: str = Field(..., description="Positive prompt describing the desired image")
    negative_prompt: Optional[str] = Field(" ", description="Negative prompt describing unwanted content")
    aspect_ratio: Optional[str] = Field("16:9", description="Image aspect ratio")
    num_inference_steps: Optional[int] = Field(50, description="Number of inference steps, affects quality")
    true_cfg_scale: Optional[float] = Field(4.0, description="CFG guidance scale")
    seed: Optional[int] = Field(None, description="Random seed for reproducible results")
    return_type: Optional[str] = Field("url", description="Return type: url or base64")

class BatchImageRequest(BaseModel):
    requests: List[ImageGenerationRequest]
    batch_id: Optional[str] = Field(None, description="Batch ID used for tracking")

# Initialize the FastAPI application
app = FastAPI(
    title="Qwen-Image API Service",
    description="Text-to-image API service built on the Qwen-Image model",
    version="1.0.0"
)

# Configure CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Global objects
generator = QwenImageGenerator()
executor = ThreadPoolExecutor(max_workers=4)  # limits concurrent generations

# In-memory task status tracking
task_status = {}

@app.on_event("startup")
async def startup_event():
    """应用启动时预加载模型"""
    logger.info("应用启动,预加载模型...")
    # 在后台线程中加载模型,避免阻塞启动
    loop = asyncio.get_event_loop()
    await loop.run_in_executor(executor, generator.model.load_model)
    logger.info("模型预加载完成")

@app.post("/generate", response_model=dict)
async def generate_image(request: ImageGenerationRequest):
    """
    单张图像生成接口
    
    根据文本提示词生成高质量的图像,支持多种宽高比和参数配置。
    """
    try:
        # 生成任务ID
        task_id = str(uuid.uuid4())
        task_status[task_id] = {"status": "processing", "start_time": datetime.now()}
        
        # 在线程池中执行生成任务
        def generate_task():
            try:
                image = generator.generate_image(
                    prompt=request.prompt,
                    negative_prompt=request.negative_prompt,
                    aspect_ratio=request.aspect_ratio,
                    num_inference_steps=request.num_inference_steps,
                    true_cfg_scale=request.true_cfg_scale,
                    seed=request.seed
                )
                
                # 根据返回类型处理结果
                if request.return_type == "base64":
                    image_data = generator.image_to_base64(image)
                    result = {"image_data": image_data, "format": "base64"}
                else:
                    # 默认返回文件URL
                    filename = f"{task_id}.png"
                    filepath = generator.save_image(image, filename)
                    result = {"image_url": f"/download/{filename}", "format": "url"}
                
                task_status[task_id].update({
                    "status": "completed",
                    "result": result,
                    "end_time": datetime.now()
                })
                return result
                
            except Exception as e:
                task_status[task_id].update({
                    "status": "failed",
                    "error": str(e),
                    "end_time": datetime.now()
                })
                raise
        
        # 提交任务到线程池
        future = executor.submit(generate_task)
        result = await asyncio.wrap_future(future)
        
        return {
            "task_id": task_id,
            "status": "success",
            "data": result
        }
        
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"图像生成失败: {str(e)}")

@app.post("/generate/batch", response_model=dict)
async def generate_batch_images(request: BatchImageRequest):
    """
    批量图像生成接口
    
    一次性生成多张图像,适合需要大量生成内容的场景。
    """
    batch_id = request.batch_id or str(uuid.uuid4())
    task_ids = []
    
    for i, req in enumerate(request.requests):
        task_id = f"{batch_id}_{i}"
        task_ids.append(task_id)
        task_status[task_id] = {"status": "pending", "batch_id": batch_id}
    
    # 在后台处理批量任务
    async def process_batch():
        results = []
        for i, req in enumerate(request.requests):
            task_id = task_ids[i]
            try:
                # 这里简化处理,实际应该使用更复杂的并发控制
                image = generator.generate_image(
                    prompt=req.prompt,
                    negative_prompt=req.negative_prompt,
                    aspect_ratio=req.aspect_ratio,
                    num_inference_steps=req.num_inference_steps,
                    true_cfg_scale=req.true_cfg_scale,
                    seed=req.seed
                )
                
                filename = f"{task_id}.png"
                filepath = generator.save_image(image, filename)
                
                results.append({
                    "task_id": task_id,
                    "image_url": f"/download/{filename}",
                    "status": "completed"
                })
                
                task_status[task_id].update({
                    "status": "completed",
                    "result": {"image_url": f"/download/{filename}"}
                })
                
            except Exception as e:
                results.append({
                    "task_id": task_id,
                    "status": "failed",
                    "error": str(e)
                })
                task_status[task_id].update({
                    "status": "failed",
                    "error": str(e)
                })
        
        return results
    
    # 立即返回批次ID,实际处理在后台进行
    return {
        "batch_id": batch_id,
        "status": "processing",
        "task_count": len(request.requests),
        "message": "批量任务已提交,请使用batch_id查询状态"
    }

@app.get("/download/{filename}")
async def download_image(filename: str):
    """
    图像下载接口
    
    通过文件名下载生成的图像文件。
    """
    filepath = os.path.join("outputs", filename)
    if not os.path.exists(filepath):
        raise HTTPException(status_code=404, detail="文件不存在")
    
    return FileResponse(
        filepath,
        media_type="image/png",
        filename=filename
    )

@app.get("/task/{task_id}")
async def get_task_status(task_id: str):
    """
    查询任务状态
    
    根据任务ID查询图像生成任务的状态和结果。
    """
    if task_id not in task_status:
        raise HTTPException(status_code=404, detail="任务不存在")
    
    status = task_status[task_id]
    return {
        "task_id": task_id,
        "status": status["status"],
        "result": status.get("result"),
        "error": status.get("error")
    }

@app.get("/health")
async def health_check():
    """
    健康检查接口
    
    检查服务是否正常运行,模型是否加载成功。
    """
    try:
        pipe = generator.model.get_model()
        return {
            "status": "healthy",
            "model_loaded": pipe is not None,
            "device": str(pipe.device) if pipe else "unknown"
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"服务异常: {str(e)}")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

Why return a URL instead of the file contents?

When designing the response strategy, we chose to return a file URL by default rather than base64-encoded image data, for the following reasons:

  1. Performance: base64 encoding inflates the payload by roughly 33%, which noticeably increases transfer time for large images.

  2. Cache friendliness: URLs can take advantage of HTTP caching, so browsers and CDNs can cache generated images and avoid repeated transfers (see the sketch after this list).

  3. Error handling: if generation fails, the URL approach can surface a 404 status code, whereas with base64 the client only discovers the problem after decoding the response.

  4. Extensibility: features such as image editing or bulk download are easier to add on top of URLs.
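
To make the cache-friendliness point concrete, here is one way the /download endpoint shown earlier could set cache headers (replace the earlier definition rather than registering a second route). This is a sketch; the one-day max-age value is an arbitrary choice.

from fastapi.responses import FileResponse

@app.get("/download/{filename}")
async def download_image(filename: str):
    filepath = os.path.join("outputs", filename)
    if not os.path.exists(filepath):
        raise HTTPException(status_code=404, detail="File not found")
    
    # Generated files never change, so they can be cached aggressively
    return FileResponse(
        filepath,
        media_type="image/png",
        filename=filename,
        headers={"Cache-Control": "public, max-age=86400"},
    )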

Hands-On Testing: Verifying Your API Service

Testing with curl

# Generate a single image
curl -X POST "http://localhost:8000/generate" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A beautiful sunset over mountains with vibrant colors",
    "aspect_ratio": "16:9",
    "num_inference_steps": 30,
    "return_type": "url"
  }'

# Example response
{
  "task_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "status": "success",
  "data": {
    "image_url": "/download/a1b2c3d4-e5f6-7890-abcd-ef1234567890.png",
    "format": "url"
  }
}

# Download the generated image
curl -O "http://localhost:8000/download/a1b2c3d4-e5f6-7890-abcd-ef1234567890.png"

Testing with Python requests

import requests
import json

def test_qwen_image_api():
    # Single image generation (the prompt here is intentionally Chinese,
    # exercising the Chinese quality-suffix path)
    url = "http://localhost:8000/generate"
    payload = {
        "prompt": "一只可爱的猫咪在花园里玩耍,阳光明媚",
        "negative_prompt": "模糊,低质量,水印",
        "aspect_ratio": "1:1",
        "num_inference_steps": 40,
        "true_cfg_scale": 5.0,
        "return_type": "url"
    }
    
    response = requests.post(url, json=payload)
    if response.status_code == 200:
        result = response.json()
        print(f"Task ID: {result['task_id']}")
        print(f"Image URL: {result['data']['image_url']}")
        
        # Download the image
        image_url = f"http://localhost:8000{result['data']['image_url']}"
        image_response = requests.get(image_url)
        if image_response.status_code == 200:
            with open("generated_image.png", "wb") as f:
                f.write(image_response.content)
            print("Image downloaded successfully")
        else:
            print("Image download failed")
    else:
        print(f"Request failed: {response.status_code}")
        print(response.text)

if __name__ == "__main__":
    test_qwen_image_api()

Batch generation test

def test_batch_generation():
    url = "http://localhost:8000/generate/batch"
    
    prompts = [
        "科幻城市夜景,霓虹灯光,未来感",
        "宁静的山水画,中国传统风格",
        "抽象艺术,色彩斑斓的几何图案"
    ]
    
    batch_requests = []
    for prompt in prompts:
        batch_requests.append({
            "prompt": prompt,
            "aspect_ratio": "16:9",
            "num_inference_steps": 35
        })
    
    payload = {
        "requests": batch_requests,
        "batch_id": "test_batch_001"
    }
    
    response = requests.post(url, json=payload)
    print(response.json())
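
Because the batch endpoint returns immediately, a client typically polls /task/{task_id} to track progress. Here is a minimal polling sketch; it relies on the batch task IDs following the "{batch_id}_{index}" pattern used by the service above, and the 5-second interval and 10-minute timeout are arbitrary choices.

import time
import requests

def wait_for_batch(batch_id: str, task_count: int, timeout: float = 600.0):
    """Poll /task/{task_id} until every task in the batch completes or fails."""
    task_ids = [f"{batch_id}_{i}" for i in range(task_count)]
    deadline = time.time() + timeout
    
    while time.time() < deadline:
        statuses = [
            requests.get(f"http://localhost:8000/task/{tid}").json()
            for tid in task_ids
        ]
        if all(s["status"] in ("completed", "failed") for s in statuses):
            return statuses
        time.sleep(5)  # avoid hammering the API
    
    raise TimeoutError(f"Batch {batch_id} did not finish within {timeout} seconds")

# Example: wait for the three-prompt batch submitted above
# results = wait_for_batch("test_batch_001", task_count=3)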

Production Deployment and Optimization

Deployment options

Gunicorn + Uvicorn workers

# Install Gunicorn
pip install gunicorn

# Start the service (recommended configuration). Note: each worker process
# loads its own copy of the model, so size -w to your available GPU memory.
gunicorn -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 main:app --timeout 120

Docker deployment

Create a Dockerfile:

FROM python:3.9-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    libgl1 \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Copy the dependency file
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Create the output directory
RUN mkdir -p outputs

# Expose the port
EXPOSE 8000

# Start command
CMD ["gunicorn", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "-b", "0.0.0.0:8000", "main:app", "--timeout", "120"]

Optimization tips

  1. GPU memory management

    # Enable VAE slicing to reduce VRAM usage
    pipe.enable_vae_slicing()

    # Enable attention slicing
    pipe.enable_attention_slicing()

    # Offload to CPU if VRAM is insufficient
    pipe.enable_sequential_cpu_offload()

  2. Model cache optimization

    # Use a model cache to avoid repeated downloads
    from diffusers import DiffusionPipeline
    import os

    # Set the cache directory
    os.environ["HF_HOME"] = "/path/to/model/cache"

    # Or load from a local model path
    model_path = "/path/to/local/qwen-image"
    pipe = DiffusionPipeline.from_pretrained(model_path)

  3. Request queueing and rate limiting

    # Rate-limit requests with slowapi
    from fastapi import Request
    from slowapi import Limiter, _rate_limit_exceeded_handler
    from slowapi.errors import RateLimitExceeded
    from slowapi.util import get_remote_address

    limiter = Limiter(key_func=get_remote_address)
    app.state.limiter = limiter
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

    @app.post("/generate")
    @limiter.limit("5/minute")  # at most 5 requests per minute per client
    async def generate_image(request: Request, body: ImageGenerationRequest):
        ...  # generation logic as shown earlier

  4. Monitoring and logging

    # Add monitoring metrics
    import time
    from prometheus_client import Counter, Histogram

    # Define the metrics
    REQUEST_COUNT = Counter('request_count', 'API request count', ['method', 'endpoint'])
    GENERATION_TIME = Histogram('generation_time', 'Image generation time')

    @app.middleware("http")
    async def monitor_requests(request: Request, call_next):
        start_time = time.time()
        response = await call_next(request)
        process_time = time.time() - start_time

        REQUEST_COUNT.labels(method=request.method, endpoint=request.url.path).inc()
        if "generate" in request.url.path:
            GENERATION_TIME.observe(process_time)

        return response


Performance tuning parameters

Adjust the following parameters based on your actual hardware:

# Configure optimizations at application startup
@app.on_event("startup")
async def configure_optimizations():
    pipe = generator.model.get_model()
    
    # Skip GPU-specific tuning when running on CPU
    if not torch.cuda.is_available():
        logger.info("No GPU detected, skipping VRAM optimizations")
        return
    
    # Choose an optimization strategy based on available GPU memory
    gpu_memory = torch.cuda.get_device_properties(0).total_memory / 1024**3  # GB
    
    if gpu_memory < 16:
        # Low-VRAM configuration
        pipe.enable_vae_slicing()
        pipe.enable_attention_slicing()
        logger.info("Enabled memory optimizations (low VRAM)")
    elif gpu_memory < 24:
        # Medium-VRAM configuration
        pipe.enable_attention_slicing()
        logger.info("Enabled memory optimizations (medium VRAM)")
    else:
        # High-VRAM configuration: keep optimizations off for best performance
        logger.info("Memory optimizations disabled (high VRAM)")

With this complete API wrapper in place, you have turned a local Qwen-Image model into a scalable, high-performance, production-grade API service. It handles single image requests and also supports batch processing, status queries, and file downloads end to end.

Creation note: parts of this article were produced with AI assistance (AIGC) and are provided for reference only.
