Spring AI and RAG in Practice: Building an Enterprise-Grade Intelligent Document Q&A System

Introduction

With AI technology advancing rapidly, enterprises face the challenge of putting it to work effectively to improve business efficiency. Traditional chatbots often suffer from stale knowledge and inaccurate answers. Spring AI combined with RAG (Retrieval-Augmented Generation) offers a practical way to address this problem. This article describes in detail how to build an enterprise-grade intelligent document Q&A system with the Spring AI framework.

Technology Stack Overview

The Spring AI Framework

Spring AI is the AI integration framework within the Spring ecosystem. It provides a unified API for accessing a variety of AI models and services, and supports multiple providers such as OpenAI, Azure OpenAI, and Google Vertex AI.

How RAG Works

RAG (Retrieval-Augmented Generation) is a technique that combines information retrieval with text generation. Its core idea is:

  1. First, retrieve relevant information from a knowledge base
  2. Then, pass the retrieved information to the generation model as context
  3. Finally, generate an answer that is grounded in that context

System Architecture

Overall Architecture

User request → API gateway → Spring AI service → vector database → generation model → answer
                      ↓
                  Document processing pipeline

Core Components

  1. Document loader: supports PDF, Word, Excel, TXT, and other formats
  2. Text splitter: splits long documents into chunks suitable for processing
  3. Embedding engine: converts text into vectors using an embedding model
  4. Vector database: stores and retrieves vector data
  5. Generation model: generates answers based on the retrieved context

Environment Setup and Configuration

Maven Dependencies

<dependencies>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
        <version>0.8.1</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
</dependencies>
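
The DocumentLoader shown later parses uploaded files with Apache Tika, which none of the starters above pull in. A minimal addition, assuming Tika is used for parsing (the versions shown are only illustrative, and tika-parsers-standard-package supplies the actual PDF/Word parsers):

<dependency>
    <groupId>org.apache.tika</groupId>
    <artifactId>tika-core</artifactId>
    <version>2.9.1</version>
</dependency>
<dependency>
    <groupId>org.apache.tika</groupId>
    <artifactId>tika-parsers-standard-package</artifactId>
    <version>2.9.1</version>
</dependency>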

Application Configuration

spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      chat:
        options:
          model: gpt-4
  data:
    redis:
      host: localhost
      port: 6379

Core Feature Implementation

Document Processing Module

Document loader implementation
@Component
public class DocumentLoader {
    
    // Tika is not auto-configured as a Spring bean; instantiate it directly
    // (or expose it via a @Bean method if you prefer injection).
    private final Tika tika = new Tika();
    
    public String loadDocument(MultipartFile file) {
        try {
            // Extract plain text from PDF, Word, Excel, TXT, ...
            return tika.parseToString(file.getInputStream());
        } catch (Exception e) {
            throw new RuntimeException("Failed to load document", e);
        }
    }
}
Text splitter
@Component
public class TextSplitter {
    
    private static final int CHUNK_SIZE = 1000;   // maximum characters per chunk
    private static final int OVERLAP_SIZE = 200;  // characters shared with the next chunk
    
    public List<String> splitText(String text) {
        List<String> chunks = new ArrayList<>();
        int length = text.length();
        
        // Advance by CHUNK_SIZE - OVERLAP_SIZE so consecutive chunks overlap
        for (int i = 0; i < length; i += CHUNK_SIZE - OVERLAP_SIZE) {
            int end = Math.min(i + CHUNK_SIZE, length);
            String chunk = text.substring(i, end);
            chunks.add(chunk);
            
            if (end == length) break;
        }
        
        return chunks;
    }
}
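
A quick sanity check of the overlap behaviour: with CHUNK_SIZE = 1000 and OVERLAP_SIZE = 200 the loop advances by 800 characters, so consecutive chunks share their last 200 characters. A hypothetical usage example:

// Hypothetical usage of the splitter above
TextSplitter splitter = new TextSplitter();
String text = "x".repeat(2500);                  // a 2,500-character document
List<String> chunks = splitter.splitText(text);
// chunks covers the ranges [0,1000), [800,1800), [1600,2500) -> 3 chunks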

Embedding and Storage

Embedding service
@Service
public class EmbeddingService {
    
    @Autowired
    private OpenAiEmbeddingClient embeddingClient;
    
    public List<Double> generateEmbedding(String text) {
        // In Spring AI 0.8.x, EmbeddingClient.embed(String) returns the embedding vector directly
        return embeddingClient.embed(text);
    }
}
Vector store operations
@Service
public class VectorStoreService {
    
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    public void storeEmbedding(String docId, String chunk, List<Double> embedding) {
        Map<String, Object> data = new HashMap<>();
        data.put("chunk", chunk);
        data.put("embedding", embedding);
        data.put("timestamp", System.currentTimeMillis());
        
        redisTemplate.opsForHash().putAll("doc:" + docId, data);
    }
    
    public List<Map<String, Object>> searchSimilar(List<Double> queryEmbedding, int topK) {
        // Similarity search logic goes here; see the brute-force sketch below
        return Collections.emptyList();
    }
}
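
The searchSimilar method above is left as a stub. Below is a brute-force sketch of what it could do, assuming the hash layout written by storeEmbedding and a JSON-friendly RedisTemplate serializer so the embedding round-trips as a List<Double>. The class name and helper are hypothetical, and imports (java.util.*, Spring annotations) are omitted as in the other listings; a production system would use a real vector index instead, for example Redis Stack's vector search or Spring AI's VectorStore abstraction.

@Service
public class SimilaritySearchService {
    
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    public List<Map<String, Object>> searchSimilar(List<Double> queryEmbedding, int topK) {
        Set<String> keys = redisTemplate.keys("doc:*");
        if (keys == null || keys.isEmpty()) {
            return Collections.emptyList();
        }
        
        List<Map<String, Object>> scored = new ArrayList<>();
        for (String key : keys) {
            Map<Object, Object> entries = redisTemplate.opsForHash().entries(key);
            @SuppressWarnings("unchecked")
            List<Double> embedding = (List<Double>) entries.get("embedding");
            if (embedding == null) {
                continue;
            }
            Map<String, Object> candidate = new HashMap<>();
            candidate.put("chunk", entries.get("chunk"));
            candidate.put("score", cosineSimilarity(queryEmbedding, embedding));
            scored.add(candidate);
        }
        
        // Highest score first, keep only the top K
        scored.sort((a, b) -> Double.compare((Double) b.get("score"), (Double) a.get("score")));
        return scored.subList(0, Math.min(topK, scored.size()));
    }
    
    private double cosineSimilarity(List<Double> a, List<Double> b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < Math.min(a.size(), b.size()); i++) {
            dot += a.get(i) * b.get(i);
            normA += a.get(i) * a.get(i);
            normB += b.get(i) * b.get(i);
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB) + 1e-10);
    }
}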

RAG Question-Answering Service

Core Q&A logic
@Service
public class RagService {
    
    @Autowired
    private OpenAiChatClient chatClient;
    
    @Autowired
    private VectorStoreService vectorStoreService;
    
    @Autowired
    private EmbeddingService embeddingService;
    
    public String answerQuestion(String question) {
        // 1. Embed the question
        List<Double> questionEmbedding = embeddingService.generateEmbedding(question);
        
        // 2. Retrieve the most relevant document chunks
        List<Map<String, Object>> relevantChunks = 
            vectorStoreService.searchSimilar(questionEmbedding, 3);
        
        // 3. Build the prompt from the retrieved context
        String context = buildContext(relevantChunks);
        String prompt = buildPrompt(question, context);
        
        // 4. Call the model to generate the answer
        return generateAnswer(prompt);
    }
    
    private String buildContext(List<Map<String, Object>> chunks) {
        StringBuilder context = new StringBuilder();
        for (Map<String, Object> chunk : chunks) {
            context.append(chunk.get("chunk")).append("\n\n");
        }
        return context.toString();
    }
    
    private String buildPrompt(String question, String context) {
        return String.format("""
            Answer the user's question based on the context below.
            If the context does not contain enough information to answer, say so honestly.
            
            Context:
            %s
            
            Question: %s
            
            Answer:
            """, context, question);
    }
    
    private String generateAnswer(String prompt) {
        return chatClient.call(prompt);
    }
}

REST API Design

Controller implementation

@RestController
@RequestMapping("/api/rag")
public class RagController {
    
    @Autowired
    private RagService ragService;
    
    @Autowired
    private DocumentService documentService;
    
    @PostMapping("/upload")
    public ResponseEntity<String> uploadDocument(@RequestParam("file") MultipartFile file) {
        try {
            documentService.processDocument(file);
            return ResponseEntity.ok("文档上传成功");
        } catch (Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("Document upload failed: " + e.getMessage());
        }
    }
    
    @PostMapping("/ask")
    public ResponseEntity<String> askQuestion(@RequestBody QuestionRequest request) {
        try {
            String answer = ragService.answerQuestion(request.getQuestion());
            return ResponseEntity.ok(answer);
        } catch (Exception e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("Failed to answer the question: " + e.getMessage());
        }
    }
}

@Data
class QuestionRequest {
    private String question;
}
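
The controller injects a DocumentService that is never shown in the article. A minimal sketch of what it might look like, wiring together the DocumentLoader, TextSplitter, EmbeddingService, and VectorStoreService defined earlier (the class itself and its docId scheme are assumptions):

@Service
public class DocumentService {
    
    @Autowired
    private DocumentLoader documentLoader;
    
    @Autowired
    private TextSplitter textSplitter;
    
    @Autowired
    private EmbeddingService embeddingService;
    
    @Autowired
    private VectorStoreService vectorStoreService;
    
    public void processDocument(MultipartFile file) {
        // 1. Extract plain text from the uploaded file
        String text = documentLoader.loadDocument(file);
        
        // 2. Split it into overlapping chunks
        List<String> chunks = textSplitter.splitText(text);
        
        // 3. Embed each chunk and store it under its own key
        for (int i = 0; i < chunks.size(); i++) {
            String chunk = chunks.get(i);
            List<Double> embedding = embeddingService.generateEmbedding(chunk);
            String docId = file.getOriginalFilename() + ":" + i;   // hypothetical id scheme
            vectorStoreService.storeEmbedding(docId, chunk, embedding);
        }
    }
}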

Performance Optimization

Caching

@Configuration
@EnableCaching
public class CacheConfig {
    
    @Bean
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager("embeddings", "answers");
    }
}

@Service
public class CachedRagService {
    
    @Autowired
    private RagService ragService;
    
    @Cacheable(value = "answers", key = "#question")
    public String answerQuestionWithCache(String question) {
        return ragService.answerQuestion(question);
    }
}

Batch Processing

@Slf4j
@Service
public class BatchProcessingService {
    
    @Autowired
    private DocumentService documentService;
    
    // Requires @EnableAsync on a configuration class
    @Async
    public CompletableFuture<Void> processDocumentsBatch(List<MultipartFile> files) {
        files.parallelStream().forEach(file -> {
            try {
                documentService.processDocument(file);
            } catch (Exception e) {
                log.error("Failed to process document: {}", file.getOriginalFilename(), e);
            }
        });
        return CompletableFuture.completedFuture(null);
    }
}

Monitoring and Logging

Micrometer metrics

@Configuration
public class MetricsConfig {
    
    // With spring-boot-starter-actuator and micrometer-registry-prometheus on the
    // classpath this registry is auto-configured; the explicit bean is shown for clarity.
    @Bean
    public MeterRegistry meterRegistry() {
        return new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
    }
}

@Service
public class MonitoringService {
    
    private final RagService ragService;
    private final Counter questionCounter;
    private final Timer responseTimer;
    
    public MonitoringService(RagService ragService, MeterRegistry registry) {
        this.ragService = ragService;
        questionCounter = registry.counter("rag.questions.total");
        responseTimer = registry.timer("rag.response.time");
    }
    
    public String monitorQuestion(String question) {
        return responseTimer.record(() -> {
            questionCounter.increment();
            return ragService.answerQuestion(question);
        });
    }
}

Structured logging

@Slf4j
@Service
public class LoggingService {
    
    // kv() is statically imported from logstash-logback-encoder:
    // net.logstash.logback.argument.StructuredArguments.kv
    public void logQuestion(String question, String answer, long processingTime) {
        MDC.put("question", question);
        MDC.put("processingTime", String.valueOf(processingTime));
        
        log.info("Question answered", 
            kv("question", question),
            kv("answer_length", answer.length()),
            kv("processing_time_ms", processingTime)
        );
        
        MDC.clear();
    }
}

Security Considerations

API security configuration

@Configuration
@EnableWebSecurity
public class SecurityConfig {
    
    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        return http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/rag/ask").authenticated()
                .anyRequest().permitAll()
            )
            .oauth2ResourceServer(OAuth2ResourceServerConfigurer::jwt)
            .build();
    }
}

Data masking

@Service
public class DataSanitizationService {
    
    private static final Pattern SENSITIVE_PATTERN = 
        Pattern.compile("\\b(\\d{4}[ -]?\\d{4}[ -]?\\d{4}[ -]?\\d{4})\\b");
    
    public String sanitizeText(String text) {
        return SENSITIVE_PATTERN.matcher(text).replaceAll("****-****-****-****");
    }
}

Deployment and Operations

Docker containerization

FROM openjdk:17-jdk-slim
WORKDIR /app
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]

Kubernetes deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rag-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rag-service
  template:
    metadata:
      labels:
        app: rag-service
    spec:
      containers:
      - name: rag-app
        image: rag-service:latest
        ports:
        - containerPort: 8080
        env:
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: openai-secret
              key: api-key
---
apiVersion: v1
kind: Service
metadata:
  name: rag-service
spec:
  selector:
    app: rag-service
  ports:
  - port: 80
    targetPort: 8080

Testing Strategy

Unit tests

@SpringBootTest
class RagServiceTest {
    
    @MockBean
    private OpenAiChatClient chatClient;
    
    @MockBean
    private VectorStoreService vectorStoreService;
    
    @MockBean
    private EmbeddingService embeddingService;
    
    @Autowired
    private RagService ragService;
    
    @Test
    void testAnswerQuestion() {
        // Stub the embedding and vector store lookups
        when(embeddingService.generateEmbedding(anyString()))
            .thenReturn(List.of(0.1, 0.2, 0.3));
        when(vectorStoreService.searchSimilar(any(), anyInt()))
            .thenReturn(createMockChunks());
        
        // Stub the model response
        when(chatClient.call(anyString())).thenReturn("This is a test answer");
        
        String answer = ragService.answerQuestion("Test question");
        
        assertEquals("This is a test answer", answer);
    }
    
    private List<Map<String, Object>> createMockChunks() {
        return List.of(Map.of("chunk", "Spring AI is an AI integration framework."));
    }
}

Integration tests

@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
class RagControllerIntegrationTest {
    
    @LocalServerPort
    private int port;
    
    @Test
    void testAskEndpoint() {
        // Note: this exercises the real pipeline, so a valid OPENAI_API_KEY must be configured
        RestTemplate restTemplate = new RestTemplate();
        
        QuestionRequest request = new QuestionRequest();
        request.setQuestion("What is Spring AI?");
        
        ResponseEntity<String> response = restTemplate.postForEntity(
            "http://localhost:" + port + "/api/rag/ask",
            request,
            String.class
        );
        
        assertEquals(HttpStatus.OK, response.getStatusCode());
        assertNotNull(response.getBody());
    }
}

Summary and Outlook

This article has walked through building an enterprise-grade intelligent document Q&A system with the Spring AI framework and RAG. With a sound architecture, performance optimization, and security measures in place, we can build AI applications that are both efficient and secure.

Directions for future improvement include:

  1. Support for multimodal documents (images, audio, etc.)
  2. Real-time document updates and incremental indexing
  3. Support for additional AI models
  4. Finer-grained access control
  5. Improved retrieval algorithms for higher accuracy

Spring AI is a young framework that is evolving quickly. As AI technology continues to advance, applications built on Spring AI will play an increasingly important role in the enterprise.
