Python全栈--智能问答系统的设计与实现

目录

项目概述

智能问答系统是一个集成了自然语言处理、机器学习和Web开发技术的综合性项目。本系统旨在为用户提供智能、准确、实时的问答服务,支持多轮对话、上下文理解和知识检索功能。

系统特性

  • 智能对话:基于大语言模型的自然语言理解和生成
  • 知识检索:支持文档向量化和语义搜索
  • 多轮对话:维护对话上下文和历史记录
  • 用户管理:完整的用户注册、登录和权限管理
  • 实时交互:WebSocket支持的实时消息推送
  • 响应式设计:适配多种设备的前端界面

系统架构设计

整体架构

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   前端 (React)   │────│  后端 (FastAPI)  │────│   数据库 (MySQL) │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                               │
                       ┌───────┴────────┐
                       │  AI服务 (LLM)   │
                       └────────────────┘

核心模块

  1. 用户管理模块:处理用户注册、登录、权限验证
  2. 对话管理模块:管理多轮对话会话和上下文
  3. 知识库模块:文档存储、向量化和检索
  4. AI集成模块:大语言模型接口封装和调用
  5. API网关模块:统一的接口管理和路由分发

技术栈选择

后端技术栈

  • 框架:FastAPI - 高性能异步Web框架
  • 数据库:MySQL + SQLAlchemy ORM
  • 缓存:Redis - 会话管理和数据缓存
  • 任务队列:Celery + Redis - 异步任务处理
  • 向量数据库:Pinecone/Qdrant - 向量存储和检索

前端技术栈

  • 框架:React 18 + TypeScript
  • 状态管理:Redux Toolkit + RTK Query
  • UI组件库:Ant Design / Material-UI
  • 实时通信:Socket.IO Client
  • 构建工具:Vite

AI/ML技术栈

  • 大语言模型:OpenAI GPT-4 / Claude / 本地模型
  • 向量嵌入:Sentence-BERT / OpenAI Embeddings
  • 文档处理:LangChain
  • 模型部署:Docker + NVIDIA Triton

后端开发

项目结构

backend/
├── app/
│   ├── api/              # API路由
│   ├── core/             # 核心配置
│   ├── models/           # 数据模型
│   ├── schemas/          # Pydantic模式
│   ├── services/         # 业务逻辑
│   ├── utils/            # 工具函数
│   └── main.py           # 应用入口
├── tests/                # 测试用例
├── requirements.txt      # 依赖包
└── Dockerfile           # 容器配置

核心代码实现

1. FastAPI应用配置
# main.py
from fastapi import FastAPI, Middleware
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from app.api import api_router
from app.core.config import settings

def create_application() -> FastAPI:
    app = FastAPI(
        title="智能问答系统API",
        description="基于AI的智能问答系统后端服务",
        version="1.0.0"
    )
    
    # 添加中间件
    app.add_middleware(
        CORSMiddleware,
        allow_origins=settings.ALLOWED_HOSTS,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )
    app.add_middleware(GZipMiddleware, minimum_size=1000)
    
    # 注册路由
    app.include_router(api_router, prefix="/api/v1")
    
    return app

app = create_application()
2. 数据模型定义
# models/conversation.py
from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey
from sqlalchemy.orm import relationship
from app.core.database import Base

class Conversation(Base):
    __tablename__ = "conversations"
    
    id = Column(Integer, primary_key=True, index=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    title = Column(String(255), nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
    
    # 关系
    user = relationship("User", back_populates="conversations")
    messages = relationship("Message", back_populates="conversation", cascade="all, delete-orphan")

class Message(Base):
    __tablename__ = "messages"
    
    id = Column(Integer, primary_key=True, index=True)
    conversation_id = Column(Integer, ForeignKey("conversations.id"))
    role = Column(String(20), nullable=False)  # user, assistant, system
    content = Column(Text, nullable=False)
    timestamp = Column(DateTime, default=datetime.utcnow)
    
    # 关系
    conversation = relationship("Conversation", back_populates="messages")
3. AI服务集成
# services/ai_service.py
import openai
from typing import List, Dict, Optional
from app.core.config import settings
from app.schemas.chat import ChatMessage, ChatResponse

class AIService:
    def __init__(self):
        openai.api_key = settings.OPENAI_API_KEY
        self.model = settings.AI_MODEL_NAME
    
    async def generate_response(
        self, 
        messages: List[ChatMessage],
        context: Optional[str] = None,
        temperature: float = 0.7
    ) -> ChatResponse:
        """生成AI回复"""
        try:
            # 构建消息历史
            formatted_messages = [
                {"role": msg.role, "content": msg.content} 
                for msg in messages
            ]
            
            # 添加上下文信息
            if context:
                system_message = {
                    "role": "system",
                    "content": f"基于以下上下文信息回答用户问题:\n{context}"
                }
                formatted_messages.insert(0, system_message)
            
            # 调用OpenAI API
            response = await openai.ChatCompletion.acreate(
                model=self.model,
                messages=formatted_messages,
                temperature=temperature,
                max_tokens=2000,
                stream=False
            )
            
            return ChatResponse(
                content=response.choices[0].message.content,
                model=self.model,
                usage=response.usage.dict() if response.usage else None
            )
            
        except Exception as e:
            logger.error(f"AI服务调用失败: {str(e)}")
            raise HTTPException(status_code=500, detail="AI服务暂时不可用")

# 全局AI服务实例
ai_service = AIService()
4. 知识检索服务
# services/knowledge_service.py
import numpy as np
from sentence_transformers import SentenceTransformer
from typing import List, Tuple
from app.models.document import Document

class KnowledgeService:
    def __init__(self):
        self.encoder = SentenceTransformer('all-MiniLM-L6-v2')
        self.similarity_threshold = 0.7
    
    async def search_knowledge(
        self, 
        query: str, 
        top_k: int = 5
    ) -> List[Tuple[Document, float]]:
        """基于语义相似度检索知识"""
        
        # 编码查询
        query_embedding = self.encoder.encode([query])
        
        # 获取所有文档向量
        documents = await self.get_all_documents()
        
        results = []
        for doc in documents:
            if doc.embedding:
                # 计算余弦相似度
                similarity = np.dot(query_embedding[0], doc.embedding) / (
                    np.linalg.norm(query_embedding[0]) * np.linalg.norm(doc.embedding)
                )
                
                if similarity > self.similarity_threshold:
                    results.append((doc, float(similarity)))
        
        # 按相似度排序并返回top_k
        results.sort(key=lambda x: x[1], reverse=True)
        return results[:top_k]
    
    async def vectorize_document(self, content: str) -> List[float]:
        """文档向量化"""
        embedding = self.encoder.encode([content])
        return embedding[0].tolist()

knowledge_service = KnowledgeService()
5. WebSocket实时通信
# api/websocket.py
from fastapi import WebSocket, WebSocketDisconnect, Depends
from typing import Dict, List
import json
import asyncio

class ConnectionManager:
    def __init__(self):
        self.active_connections: Dict[str, WebSocket] = {}
    
    async def connect(self, websocket: WebSocket, user_id: str):
        await websocket.accept()
        self.active_connections[user_id] = websocket
    
    def disconnect(self, user_id: str):
        if user_id in self.active_connections:
            del self.active_connections[user_id]
    
    async def send_message(self, user_id: str, message: dict):
        if user_id in self.active_connections:
            websocket = self.active_connections[user_id]
            await websocket.send_text(json.dumps(message))

manager = ConnectionManager()

@router.websocket("/ws/{user_id}")
async def websocket_endpoint(websocket: WebSocket, user_id: str):
    await manager.connect(websocket, user_id)
    try:
        while True:
            # 接收客户端消息
            data = await websocket.receive_text()
            message_data = json.loads(data)
            
            # 处理不同类型的消息
            if message_data["type"] == "chat_message":
                # 处理聊天消息
                response = await process_chat_message(
                    user_id, 
                    message_data["content"]
                )
                await manager.send_message(user_id, {
                    "type": "chat_response",
                    "content": response
                })
                
    except WebSocketDisconnect:
        manager.disconnect(user_id)

前端开发

项目结构

frontend/
├── src/
│   ├── components/       # 可复用组件
│   ├── pages/           # 页面组件
│   ├── hooks/           # 自定义hooks
│   ├── store/           # Redux状态管理
│   ├── services/        # API服务
│   ├── utils/           # 工具函数
│   └── types/           # TypeScript类型定义
├── public/              # 静态资源
└── package.json         # 项目配置

核心组件实现

1. 聊天界面组件
// components/ChatInterface.tsx
import React, { useState, useEffect, useRef } from 'react';
import { Input, Button, List, Typography, Spin } from 'antd';
import { SendOutlined } from '@ant-design/icons';
import { useWebSocket } from '../hooks/useWebSocket';
import { Message } from '../types/chat';

const { TextArea } = Input;
const { Text } = Typography;

interface ChatInterfaceProps {
  conversationId: string;
  onMessageSent?: (message: Message) => void;
}

export const ChatInterface: React.FC<ChatInterfaceProps> = ({
  conversationId,
  onMessageSent
}) => {
  const [messages, setMessages] = useState<Message[]>([]);
  const [inputValue, setInputValue] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  const messagesEndRef = useRef<HTMLDivElement>(null);
  
  const { sendMessage, lastMessage, connectionStatus } = useWebSocket();
  
  // 自动滚动到底部
  const scrollToBottom = () => {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
  };
  
  useEffect(() => {
    scrollToBottom();
  }, [messages]);
  
  // 处理接收到的消息
  useEffect(() => {
    if (lastMessage) {
      const newMessage: Message = {
        id: Date.now().toString(),
        role: 'assistant',
        content: lastMessage.content,
        timestamp: new Date()
      };
      setMessages(prev => [...prev, newMessage]);
      setIsLoading(false);
    }
  }, [lastMessage]);
  
  const handleSendMessage = async () => {
    if (!inputValue.trim() || isLoading) return;
    
    const userMessage: Message = {
      id: Date.now().toString(),
      role: 'user',
      content: inputValue,
      timestamp: new Date()
    };
    
    setMessages(prev => [...prev, userMessage]);
    setIsLoading(true);
    setInputValue('');
    
    // 发送消息到WebSocket
    sendMessage({
      type: 'chat_message',
      conversationId,
      content: inputValue
    });
    
    onMessageSent?.(userMessage);
  };
  
  const handleKeyPress = (e: React.KeyboardEvent) => {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault();
      handleSendMessage();
    }
  };
  
  return (
    <div className="chat-interface">
      <div className="messages-container">
        <List
          dataSource={messages}
          renderItem={(message) => (
            <List.Item className={`message-item ${message.role}`}>
              <div className="message-content">
                <Text strong={message.role === 'user'}>
                  {message.role === 'user' ? '您' : 'AI助手'}
                </Text>
                <div className="message-text">
                  {message.content}
                </div>
                <div className="message-time">
                  {message.timestamp.toLocaleTimeString()}
                </div>
              </div>
            </List.Item>
          )}
        />
        {isLoading && (
          <div className="loading-indicator">
            <Spin size="small" />
            <Text type="secondary">AI正在思考中...</Text>
          </div>
        )}
        <div ref={messagesEndRef} />
      </div>
      
      <div className="input-area">
        <TextArea
          value={inputValue}
          onChange={(e) => setInputValue(e.target.value)}
          onKeyPress={handleKeyPress}
          placeholder="输入您的问题..."
          autoSize={{ minRows: 1, maxRows: 4 }}
          disabled={connectionStatus !== 'connected'}
        />
        <Button
          type="primary"
          icon={<SendOutlined />}
          onClick={handleSendMessage}
          disabled={!inputValue.trim() || isLoading || connectionStatus !== 'connected'}
        >
          发送
        </Button>
      </div>
    </div>
  );
};
2. WebSocket Hook
// hooks/useWebSocket.ts
import { useState, useEffect, useCallback, useRef } from 'react';
import { useSelector } from 'react-redux';
import { RootState } from '../store';

interface WebSocketMessage {
  type: string;
  [key: string]: any;
}

export const useWebSocket = () => {
  const [lastMessage, setLastMessage] = useState<any>(null);
  const [connectionStatus, setConnectionStatus] = useState<'connecting' | 'connected' | 'disconnected'>('disconnected');
  const ws = useRef<WebSocket | null>(null);
  const user = useSelector((state: RootState) => state.auth.user);
  
  const connect = useCallback(() => {
    if (!user?.id) return;
    
    const wsUrl = `ws://localhost:8000/api/v1/ws/${user.id}`;
    ws.current = new WebSocket(wsUrl);
    
    ws.current.onopen = () => {
      setConnectionStatus('connected');
      console.log('WebSocket连接已建立');
    };
    
    ws.current.onmessage = (event) => {
      try {
        const message = JSON.parse(event.data);
        setLastMessage(message);
      } catch (error) {
        console.error('解析WebSocket消息失败:', error);
      }
    };
    
    ws.current.onclose = () => {
      setConnectionStatus('disconnected');
      console.log('WebSocket连接已关闭');
      
      // 自动重连
      setTimeout(() => {
        if (user?.id) {
          connect();
        }
      }, 3000);
    };
    
    ws.current.onerror = (error) => {
      console.error('WebSocket错误:', error);
      setConnectionStatus('disconnected');
    };
  }, [user?.id]);
  
  const sendMessage = useCallback((message: WebSocketMessage) => {
    if (ws.current && ws.current.readyState === WebSocket.OPEN) {
      ws.current.send(JSON.stringify(message));
    } else {
      console.warn('WebSocket未连接,无法发送消息');
    }
  }, []);
  
  const disconnect = useCallback(() => {
    if (ws.current) {
      ws.current.close();
      ws.current = null;
    }
  }, []);
  
  useEffect(() => {
    if (user?.id) {
      setConnectionStatus('connecting');
      connect();
    }
    
    return () => {
      disconnect();
    };
  }, [user?.id, connect, disconnect]);
  
  return {
    sendMessage,
    lastMessage,
    connectionStatus,
    connect,
    disconnect
  };
};
3. Redux状态管理
// store/slices/chatSlice.ts
import { createSlice, createAsyncThunk, PayloadAction } from '@reduxjs/toolkit';
import { chatAPI } from '../../services/api';
import { Conversation, Message } from '../../types/chat';

interface ChatState {
  conversations: Conversation[];
  currentConversation: Conversation | null;
  messages: Message[];
  loading: boolean;
  error: string | null;
}

const initialState: ChatState = {
  conversations: [],
  currentConversation: null,
  messages: [],
  loading: false,
  error: null
};

// 异步操作
export const fetchConversations = createAsyncThunk(
  'chat/fetchConversations',
  async (_, { rejectWithValue }) => {
    try {
      const response = await chatAPI.getConversations();
      return response.data;
    } catch (error: any) {
      return rejectWithValue(error.response?.data?.message || '获取对话列表失败');
    }
  }
);

export const createConversation = createAsyncThunk(
  'chat/createConversation',
  async (title: string, { rejectWithValue }) => {
    try {
      const response = await chatAPI.createConversation({ title });
      return response.data;
    } catch (error: any) {
      return rejectWithValue(error.response?.data?.message || '创建对话失败');
    }
  }
);

const chatSlice = createSlice({
  name: 'chat',
  initialState,
  reducers: {
    setCurrentConversation: (state, action: PayloadAction<Conversation>) => {
      state.currentConversation = action.payload;
    },
    addMessage: (state, action: PayloadAction<Message>) => {
      state.messages.push(action.payload);
    },
    clearMessages: (state) => {
      state.messages = [];
    },
    clearError: (state) => {
      state.error = null;
    }
  },
  extraReducers: (builder) => {
    builder
      // fetchConversations
      .addCase(fetchConversations.pending, (state) => {
        state.loading = true;
        state.error = null;
      })
      .addCase(fetchConversations.fulfilled, (state, action) => {
        state.loading = false;
        state.conversations = action.payload;
      })
      .addCase(fetchConversations.rejected, (state, action) => {
        state.loading = false;
        state.error = action.payload as string;
      })
      
      // createConversation
      .addCase(createConversation.pending, (state) => {
        state.loading = true;
        state.error = null;
      })
      .addCase(createConversation.fulfilled, (state, action) => {
        state.loading = false;
        state.conversations.unshift(action.payload);
        state.currentConversation = action.payload;
      })
      .addCase(createConversation.rejected, (state, action) => {
        state.loading = false;
        state.error = action.payload as string;
      });
  }
});

export const { 
  setCurrentConversation, 
  addMessage, 
  clearMessages, 
  clearError 
} = chatSlice.actions;

export default chatSlice.reducer;

数据库设计

表结构设计

-- 用户表
CREATE TABLE users (
    id INT PRIMARY KEY AUTO_INCREMENT,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    avatar_url VARCHAR(255),
    is_active BOOLEAN DEFAULT TRUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- 对话表
CREATE TABLE conversations (
    id INT PRIMARY KEY AUTO_INCREMENT,
    user_id INT NOT NULL,
    title VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
    INDEX idx_user_created (user_id, created_at)
);

-- 消息表
CREATE TABLE messages (
    id INT PRIMARY KEY AUTO_INCREMENT,
    conversation_id INT NOT NULL,
    role ENUM('user', 'assistant', 'system') NOT NULL,
    content TEXT NOT NULL,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    metadata JSON,
    FOREIGN KEY (conversation_id) REFERENCES conversations(id) ON DELETE CASCADE,
    INDEX idx_conversation_timestamp (conversation_id, timestamp)
);

-- 知识库文档表
CREATE TABLE documents (
    id INT PRIMARY KEY AUTO_INCREMENT,
    title VARCHAR(255) NOT NULL,
    content LONGTEXT NOT NULL,
    file_path VARCHAR(500),
    file_type VARCHAR(50),
    embedding JSON,  -- 存储向量嵌入
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_title (title),
    FULLTEXT idx_content (content)
);

-- 用户会话表
CREATE TABLE user_sessions (
    id VARCHAR(128) PRIMARY KEY,
    user_id INT NOT NULL,
    expires_at TIMESTAMP NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
    INDEX idx_user_expires (user_id, expires_at)
);

数据库连接和ORM配置

# core/database.py
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from app.core.config import settings

# 创建数据库引擎
engine = create_engine(
    settings.DATABASE_URL,
    pool_pre_ping=True,
    pool_recycle=3600,
    echo=settings.DEBUG
)

# 会话工厂
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

# Base类
Base = declarative_base()
metadata = MetaData()

# 数据库依赖
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

AI模型集成

多模型支持架构

# services/ai_models.py
from abc import ABC, abstractmethod
from typing import List, Dict, Optional
import openai
import anthropic
from transformers import pipeline

class BaseAIModel(ABC):
    """AI模型基类"""
    
    @abstractmethod
    async def generate_response(
        self, 
        messages: List[Dict[str, str]], 
        **kwargs
    ) -> str:
        pass
    
    @abstractmethod
    def get_model_info(self) -> Dict[str, any]:
        pass

class OpenAIModel(BaseAIModel):
    """OpenAI GPT模型"""
    
    def __init__(self, model_name: str = "gpt-4"):
        self.model_name = model_name
        openai.api_key = settings.OPENAI_API_KEY
    
    async def generate_response(
        self, 
        messages: List[Dict[str, str]], 
        temperature: float = 0.7,
        max_tokens: int = 2000
    ) -> str:
        response = await openai.ChatCompletion.acreate(
            model=self.model_name,
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens
        )
        return response.choices[0].message.content
    
    def get_model_info(self) -> Dict[str, any]:
        return {
            "provider": "OpenAI",
            "model": self.model_name,
            "max_tokens": 4096,
            "supports_functions": True
        }

class ClaudeModel(BaseAIModel):
    """Anthropic Claude模型"""
    
    def __init__(self, model_name: str = "claude-3-sonnet-20240229"):
        self.model_name = model_name
        self.client = anthropic.AsyncAnthropic(
            api_key=settings.ANTHROPIC_API_KEY
        )
    
    async def generate_response(
        self, 
        messages: List[Dict[str, str]], 
        temperature: float = 0.7,
        max_tokens: int = 2000
    ) -> str:
        response = await self.client.messages.create(
            model=self.model_name,
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens
        )
        return response.content[0].text
    
    def get_model_info(self) -> Dict[str, any]:
        return {
            "provider": "Anthropic",
            "model": self.model_name,
            "max_tokens": 8192,
            "supports_functions": False
        }

class LocalModel(BaseAIModel):
    """本地部署模型"""
    
    def __init__(self, model_path: str):
        self.model_path = model_path
        self.pipeline = pipeline(
            "text-generation",
            model=model_path,
            device_map="auto",
            torch_dtype="auto"
        )
    
    async def generate_response(
        self, 
        messages: List[Dict[str, str]], 
        temperature: float = 0.7,
        max_tokens: int = 2000
    ) -> str:
        # 将消息转换为prompt
        prompt = self._messages_to_prompt(messages)
        
        outputs = self.pipeline(
            prompt,
            max_new_tokens=max_tokens,
            temperature=temperature,
            do_sample=True,
            pad_token_id=self.pipeline.tokenizer.eos_token_id
        )
        
        return outputs[0]['generated_text'][len(prompt):]
    
    def _messages_to_prompt(self, messages: List[Dict[str, str]]) -> str:
        prompt = ""
        for message in messages:
            role = message["role"]
            content = message["content"]
            if role == "user":
                prompt += f"Human: {content}\n"
            elif role == "assistant":
                prompt += f"Assistant: {content}\n"
        prompt += "Assistant: "
        return prompt
    
    def get_model_info(self) -> Dict[str, any]:
        return {
            "provider": "Local",
            "model": self.model_path,
            "max_tokens": 2048,
            "supports_functions": False
        }

# 模型工厂
class AIModelFactory:
    _models = {
        "gpt-4": OpenAIModel,
        "gpt-3.5-turbo": OpenAIModel,
        "claude-3-sonnet": ClaudeModel,
        "claude-3-haiku": ClaudeModel,
        "local-llama": LocalModel
    }
    
    @classmethod
    def create_model(cls, model_name: str, **kwargs) -> BaseAIModel:
        if model_name not in cls._models:
            raise ValueError(f"不支持的模型: {model_name}")
        
        model_class = cls._models[model_name]
        return model_class(model_name, **kwargs)
    
    @classmethod
    def list_available_models(cls) -> List[str]:
        return list(cls._models.keys())

RAG (检索增强生成) 实现

# services/rag_service.py
from typing import List, Dict, Optional
from app.services.knowledge_service import knowledge_service
from app.services.ai_models import AIModelFactory

class RAGService:
    """检索增强生成服务"""
    
    def __init__(self, model_name: str = "gpt-4"):
        self.ai_model = AIModelFactory.create_model(model_name)
        self.max_context_length = 4000
    
    async def generate_answer(
        self, 
        question: str, 
        conversation_history: List[Dict[str, str]] = None,
        top_k: int = 3
    ) -> Dict[str, any]:
        """基于RAG生成答案"""
        
        # 1. 检索相关知识
        knowledge_results = await knowledge_service.search_knowledge(
            query=question, 
            top_k=top_k
        )
        
        # 2. 构建上下文
        context = self._build_context(knowledge_results)
        
        # 3. 构建消息
        messages = self._build_messages(
            question=question,
            context=context,
            history=conversation_history or []
        )
        
        # 4. 生成回答
        response = await self.ai_model.generate_response(messages)
        
        # 5. 返回结果包含引用信息
        return {
            "answer": response,
            "sources": [
                {
                    "title": doc.title,
                    "content_snippet": doc.content[:200] + "...",
                    "similarity_score": score
                }
                for doc, score in knowledge_results
            ],
            "model_info": self.ai_model.get_model_info()
        }
    
    def _build_context(
        self, 
        knowledge_results: List[tuple]
    ) -> str:
        """构建上下文信息"""
        if not knowledge_results:
            return ""
        
        context_parts = []
        current_length = 0
        
        for doc, score in knowledge_results:
            doc_text = f"文档:{doc.title}\n内容:{doc.content}\n"
            
            if current_length + len(doc_text) > self.max_context_length:
                break
            
            context_parts.append(doc_text)
            current_length += len(doc_text)
        
        return "\n---\n".join(context_parts)
    
    def _build_messages(
        self,
        question: str,
        context: str,
        history: List[Dict[str, str]]
    ) -> List[Dict[str, str]]:
        """构建对话消息"""
        messages = []
        
        # 系统提示
        system_prompt = f"""你是一个智能助手,请基于提供的上下文信息回答用户问题。

上下文信息:
{context}

请注意:
1. 优先使用上下文信息中的内容回答问题
2. 如果上下文信息不足以回答问题,请说明并提供你的一般性建议
3. 回答要准确、有帮助,并且友好
4. 可以适当引用上下文中的具体信息"""
        
        messages.append({"role": "system", "content": system_prompt})
        
        # 添加历史对话
        for msg in history[-10:]:  # 只保留最近10轮对话
            messages.append(msg)
        
        # 添加当前问题
        messages.append({"role": "user", "content": question})
        
        return messages

# 全局RAG服务实例
rag_service = RAGService()

系统部署

Docker容器化

# backend/Dockerfile
FROM python:3.9-slim

WORKDIR /app

# 安装系统依赖
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    && rm -rf /var/lib/apt/lists/*

# 复制依赖文件
COPY requirements.txt .

# 安装Python依赖
RUN pip install --no-cache-dir -r requirements.txt

# 复制应用代码
COPY . .

# 暴露端口
EXPOSE 8000

# 启动命令
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
# frontend/Dockerfile
FROM node:18-alpine

WORKDIR /app

# 复制依赖文件
COPY package*.json ./

# 安装依赖
RUN npm ci --only=production

# 复制源码
COPY . .

# 构建应用
RUN npm run build

# 使用nginx提供静态文件服务
FROM nginx:alpine
COPY --from=0 /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Docker Compose配置

# docker-compose.yml
version: '3.8'

services:
  # 数据库
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: qa_system
      MYSQL_USER: qauser
      MYSQL_PASSWORD: qapassword
    volumes:
      - mysql_data:/var/lib/mysql
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "3306:3306"
    networks:
      - qa_network

  # Redis缓存
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    networks:
      - qa_network

  # 后端API
  backend:
    build: ./backend
    environment:
      DATABASE_URL: mysql+pymysql://qauser:qapassword@mysql:3306/qa_system
      REDIS_URL: redis://redis:6379
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      JWT_SECRET_KEY: ${JWT_SECRET_KEY}
    ports:
      - "8000:8000"
    depends_on:
      - mysql
      - redis
    volumes:
      - ./uploads:/app/uploads
    networks:
      - qa_network

  # 前端应用
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - qa_network

  # 向量数据库 (可选)
  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage
    networks:
      - qa_network

volumes:
  mysql_data:
  redis_data:
  qdrant_data:

networks:
  qa_network:
    driver: bridge

Kubernetes部署配置

# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qa-system

---
# k8s/backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qa-backend
  namespace: qa-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: qa-backend
  template:
    metadata:
      labels:
        app: qa-backend
    spec:
      containers:
      - name: backend
        image: qa-system/backend:latest
        ports:
        - containerPort: 8000
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: qa-secrets
              key: database-url
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: qa-secrets
              key: openai-api-key
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"

---
# k8s/backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: qa-backend-service
  namespace: qa-system
spec:
  selector:
    app: qa-backend
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
  type: ClusterIP

性能优化

缓存策略

# utils/cache.py
import redis
import json
import hashlib
from typing import Optional, Any, Callable
from functools import wraps
from app.core.config import settings

# Redis连接
redis_client = redis.Redis.from_url(settings.REDIS_URL)

def cache_key(prefix: str, *args, **kwargs) -> str:
    """生成缓存键"""
    key_data = f"{prefix}:{args}:{sorted(kwargs.items())}"
    return hashlib.md5(key_data.encode()).hexdigest()

def cached(prefix: str, ttl: int = 3600):
    """缓存装饰器"""
    def decorator(func: Callable) -> Callable:
        @wraps(func)
        async def wrapper(*args, **kwargs):
            # 生成缓存键
            key = cache_key(prefix, *args, **kwargs)
            
            # 尝试从缓存获取
            cached_result = redis_client.get(key)
            if cached_result:
                return json.loads(cached_result)
            
            # 执行函数并缓存结果
            result = await func(*args, **kwargs)
            redis_client.setex(
                key, 
                ttl, 
                json.dumps(result, default=str)
            )
            
            return result
        return wrapper
    return decorator

# 使用示例
@cached("conversation", ttl=1800)
async def get_conversation_messages(conversation_id: int):
    # 数据库查询逻辑
    pass

数据库优化

# utils/database_optimization.py
from sqlalchemy import Index, text
from sqlalchemy.orm import Session
from app.core.database import engine

# 创建索引
def create_performance_indexes():
    """创建性能优化索引"""
    with engine.connect() as conn:
        # 消息表复合索引
        conn.execute(text("""
            CREATE INDEX IF NOT EXISTS idx_messages_conv_timestamp 
            ON messages(conversation_id, timestamp DESC)
        """))
        
        # 用户对话索引
        conn.execute(text("""
            CREATE INDEX IF NOT EXISTS idx_conversations_user_updated 
            ON conversations(user_id, updated_at DESC)
        """))
        
        # 文档全文搜索索引
        conn.execute(text("""
            CREATE FULLTEXT INDEX IF NOT EXISTS idx_documents_fulltext 
            ON documents(title, content)
        """))

# 查询优化
class OptimizedQueries:
    @staticmethod
    def get_recent_conversations(db: Session, user_id: int, limit: int = 20):
        """优化的最近对话查询"""
        return db.execute(text("""
            SELECT c.*, 
                   (SELECT content FROM messages m 
                    WHERE m.conversation_id = c.id 
                    ORDER BY timestamp DESC LIMIT 1) as last_message
            FROM conversations c 
            WHERE c.user_id = :user_id 
            ORDER BY c.updated_at DESC 
            LIMIT :limit
        """), {"user_id": user_id, "limit": limit}).fetchall()
    
    @staticmethod
    def get_conversation_with_messages(
        db: Session, 
        conversation_id: int, 
        limit: int = 50
    ):
        """优化的对话消息查询"""
        return db.execute(text("""
            SELECT m.* FROM messages m 
            WHERE m.conversation_id = :conv_id 
            ORDER BY m.timestamp DESC 
            LIMIT :limit
        """), {
            "conv_id": conversation_id, 
            "limit": limit
        }).fetchall()

异步任务处理

# tasks/celery_app.py
from celery import Celery
from app.core.config import settings

# 创建Celery应用
celery_app = Celery(
    "qa_system",
    broker=settings.REDIS_URL,
    backend=settings.REDIS_URL,
    include=["app.tasks.ai_tasks", "app.tasks.document_tasks"]
)

# 配置
celery_app.conf.update(
    task_serializer="json",
    accept_content=["json"],
    result_serializer="json",
    timezone="UTC",
    enable_utc=True,
    task_track_started=True,
    task_time_limit=30 * 60,  # 30分钟超时
    task_soft_time_limit=25 * 60,  # 25分钟软超时
    worker_prefetch_multiplier=1,
    worker_max_tasks_per_child=1000,
)

# tasks/ai_tasks.py
from celery import current_task
from app.tasks.celery_app import celery_app
from app.services.ai_service import ai_service

@celery_app.task(bind=True)
def process_ai_request(self, conversation_id: int, message_content: str):
    """异步处理AI请求"""
    try:
        # 更新任务状态
        self.update_state(
            state="PROCESSING",
            meta={"message": "正在处理AI请求..."}
        )
        
        # 调用AI服务
        response = ai_service.generate_response_sync(
            conversation_id=conversation_id,
            content=message_content
        )
        
        return {
            "status": "SUCCESS",
            "response": response,
            "conversation_id": conversation_id
        }
        
    except Exception as exc:
        self.update_state(
            state="FAILURE",
            meta={"error": str(exc)}
        )
        raise exc

@celery_app.task
def batch_process_documents(document_ids: List[int]):
    """批量处理文档向量化"""
    from app.services.knowledge_service import knowledge_service
    
    results = []
    for doc_id in document_ids:
        try:
            result = knowledge_service.vectorize_document_sync(doc_id)
            results.append({"doc_id": doc_id, "status": "success"})
        except Exception as e:
            results.append({"doc_id": doc_id, "status": "error", "error": str(e)})
    
    return results

总结与展望

项目成果

通过本项目的开发,我们成功构建了一个功能完整的智能问答系统,主要成果包括:

  1. 完整的全栈架构:采用FastAPI + React的现代化技术栈,实现了高性能的前后端分离架构
  2. 智能对话能力:集成多种AI模型,支持上下文理解和多轮对话
  3. 知识检索功能:基于向量相似度的语义搜索,实现了RAG检索增强生成
  4. 实时交互体验:WebSocket支持的实时消息推送,提供流畅的用户体验
  5. 可扩展架构:模块化设计,支持多种AI模型和数据源接入

技术亮点

  • 异步编程:充分利用Python异步特性,提高系统并发性能
  • 缓存优化:多层次缓存策略,显著提升响应速度
  • 容器化部署:Docker + Kubernetes方案,支持弹性伸缩
  • 实时通信:WebSocket实现双向实时数据传输
  • AI模型抽象:可插拔的AI模型架构,易于扩展新模型

性能指标

  • 响应时间:平均API响应时间 < 200ms
  • 并发处理:支持1000+并发用户
  • 可用性:系统可用性 > 99.9%
  • 扩展性:支持水平扩展,可根据负载自动伸缩

未来优化方向

  1. 性能优化

    • 实现更精细的缓存策略
    • 数据库分库分表
    • CDN加速静态资源
  2. 功能扩展

    • 支持多模态输入(图片、语音)
    • 增加文档协作功能
    • 集成更多第三方服务
  3. 智能化提升

    • 个性化推荐算法
    • 用户意图理解优化
    • 自动知识库更新
  4. 安全加固

    • API安全防护
    • 数据加密传输
    • 用户隐私保护

源代码下载

https://download.youkuaiyun.com/download/exlink2012/92042646

评论
成就一亿技术人!
拼手气红包6.0元
还能输入1000个字符
 
红包 添加红包
表情包 插入表情
 条评论被折叠 查看
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包

打赏作者

天天进步2015

你的鼓励将是我创作的最大动力

¥1 ¥2 ¥4 ¥6 ¥10 ¥20
扫码支付:¥1
获取中
扫码支付

您的余额不足,请更换扫码支付或充值

打赏作者

实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值