Architecture: High-Concurrency Writes

Introduction

In internet applications, write operations account for a relatively small share of traffic, yet they form the lifeline of the system. Core business flows such as order creation, user registration, payment processing, and inventory deduction all depend on writes. When a system faces massive concurrent write requests, the key architectural challenge is to guarantee data consistency while keeping the system stable and scalable.

The high-concurrency-write principle states: for write-heavy scenarios, use message queues for asynchronous processing, and build a highly available architecture that can absorb massive concurrent writes through decoupling, peak shaving, and buffering. This is not only a pursuit of performance but also a safeguard for system stability.

Core Ideas of High-Concurrency Write Architecture

Why Asynchronous Processing?

High-concurrency writes pose challenges on several fronts:

  • Database pressure: connection exhaustion, heavy lock contention, disk I/O bottlenecks, transaction conflicts
  • Response latency: time lost in synchronous waits, complex business processing, third-party service calls, data consistency checks
  • Resource contention: database lock waits, connection pool exhaustion, thread contention, memory contention
  • Peak impact: sudden traffic bursts, flash-sale spikes, concentrated hot data, instantaneous system overload
  • System coupling: strong inter-service dependencies, failure propagation, limited scalability, high maintenance complexity

Asynchronous processing addresses these problems effectively:

  • Peak shaving: smooth out traffic bursts and avoid instantaneous overload
  • Service decoupling: reduce coupling between components and improve maintainability
  • Better performance: shorten user wait time and raise system throughput
  • Higher reliability: guarantee eventual consistency through retries and dead-letter queues
  • Easy scaling: enable horizontal scaling to match business growth
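As a minimal illustration of peak shaving, a bounded in-memory buffer lets the write path return immediately while a background worker drains events at its own steady pace. This is only a sketch: a production system would use RabbitMQ or Kafka, and all class and method names here are illustrative, not from the original.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of peak shaving with a bounded in-memory buffer (illustrative names).
public class PeakShavingSketch {
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(1000);
    private final AtomicInteger processed = new AtomicInteger();

    // Write path: enqueue and return immediately; reject when the buffer is
    // full instead of letting a burst overload the backend.
    public boolean submit(String event) {
        return buffer.offer(event);
    }

    // Background worker drains the buffer at a steady rate.
    public Thread startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String event = buffer.poll(100, TimeUnit.MILLISECONDS);
                    if (event != null) {
                        processed.incrementAndGet(); // stand-in for the real write
                    }
                }
            } catch (InterruptedException ignored) { }
        });
        worker.setDaemon(true);
        worker.start();
        return worker;
    }

    public int processedCount() { return processed.get(); }

    public static void main(String[] args) throws Exception {
        PeakShavingSketch sketch = new PeakShavingSketch();
        sketch.startWorker();
        for (int i = 0; i < 100; i++) {
            sketch.submit("order-" + i); // a burst of writes returns instantly
        }
        Thread.sleep(500);
        System.out.println(sketch.processedCount());
    }
}
```

The burst of 100 submissions completes almost instantly regardless of how fast the worker writes, which is exactly the decoupling the list above describes.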

The Role of Message Queues in High-Concurrency Writes

A message queue plays several roles in a write-heavy architecture:

  • Traffic peak shaving: smooth sudden traffic bursts, avoid system overload, protect backend services
  • System decoupling: loose coupling between services, independent scaling, fault isolation
  • Asynchronous processing: shorter response times, higher throughput, parallel execution
  • Data buffering: temporary storage, rate matching between producers and consumers, batch processing
  • Reliable delivery: message persistence, retry mechanisms, ordering guarantees
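The buffering and rate-matching roles above can be sketched with `BlockingQueue.drainTo`: producers enqueue one message at a time, while the consumer pulls a fixed-size batch per database round trip. The class and method names are illustrative, not from the original.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of rate matching: fast producers, batch-oriented consumer.
public class BatchDrainSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

    // Producers publish individual messages at their own rate.
    public void publish(String msg) {
        queue.offer(msg);
    }

    // Consumer drains at most `batchSize` messages; one batch maps to
    // one bulk write against the backend.
    public List<String> nextBatch(int batchSize) {
        List<String> batch = new ArrayList<>(batchSize);
        queue.drainTo(batch, batchSize);
        return batch;
    }

    public static void main(String[] args) {
        BatchDrainSketch sketch = new BatchDrainSketch();
        for (int i = 0; i < 250; i++) sketch.publish("msg-" + i);
        int batches = 0;
        List<String> batch;
        while (!(batch = sketch.nextBatch(100)).isEmpty()) {
            batches++; // drains 100 + 100 + 50 messages
        }
        System.out.println(batches); // prints 3
    }
}
```

250 buffered messages become three backend writes instead of 250, which is the rate-matching and batch-processing benefit listed above.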

Design Principles for High-Concurrency Write Architecture

1. Asynchrony Principle

Convert synchronous operations into asynchronous ones to reduce user wait time.

// Synchronous vs. asynchronous write comparison
@Service
public class WritePatternComparison {
    
    // ❌ Synchronous write: the user waits for every step to complete
    public Order createOrderSync(CreateOrderRequest request) {
        // 1. Validate parameters (10ms)
        validateRequest(request);
        
        // 2. Check inventory (50ms)
        checkInventory(request.getItems());
        
        // 3. Create the order (100ms)
        Order order = orderRepository.create(request);
        
        // 4. Deduct inventory (80ms)
        inventoryService.deductInventory(request.getItems());
        
        // 5. Send notification (200ms)
        notificationService.sendOrderNotification(order);
        
        // 6. Update statistics (150ms)
        statisticsService.updateOrderStatistics(order);
        
        // Total: ~590ms
        return order;
    }
    
    // ✅ Asynchronous write: respond quickly, finish the rest later
    public Order createOrderAsync(CreateOrderRequest request) {
        // 1. Validate parameters (10ms)
        validateRequest(request);
        
        // 2. Check inventory (50ms)
        checkInventory(request.getItems());
        
        // 3. Create the order (100ms)
        Order order = orderRepository.create(request);
        
        // 4. Publish an asynchronous message (5ms)
        OrderCreatedEvent event = OrderCreatedEvent.builder()
            .orderId(order.getId())
            .userId(order.getUserId())
            .items(order.getItems())
            .timestamp(System.currentTimeMillis())
            .build();
            
        messageQueueService.sendOrderCreatedMessage(event);
        
        // Total: ~165ms (a 72% reduction)
        return order;
    }
}

2. Decoupling Principle

Use a message queue to achieve loose coupling between services.

// Event-driven architecture implementation
@Component
public class EventDrivenOrderService {
    
    @Autowired
    private MessageQueueService messageQueueService;
    
    @Autowired
    private OrderRepository orderRepository;
    
    /**
     * Handle the order-created event
     */
    @EventListener
    public void handleOrderCreated(OrderCreatedEvent event) {
        log.info("Handling order-created event: orderId={}", event.getOrderId());
        
        try {
            // 1. Deduct inventory
            processInventoryDeduction(event);
            
            // 2. Send notification
            processNotification(event);
            
            // 3. Update statistics
            processStatisticsUpdate(event);
            
            log.info("Order-created event handled: orderId={}", event.getOrderId());
            
        } catch (Exception e) {
            log.error("Order-created event handling failed: orderId={}", event.getOrderId(), e);
            // Send to the retry queue
            messageQueueService.sendToRetryQueue(event);
        }
    }
    
    private void processInventoryDeduction(OrderCreatedEvent event) {
        InventoryDeductionMessage message = InventoryDeductionMessage.builder()
            .orderId(event.getOrderId())
            .items(event.getItems())
            .deductionType(DeductionType.ORDER_CREATE)
            .build();
            
        messageQueueService.sendInventoryMessage(message);
    }
    
    private void processNotification(OrderCreatedEvent event) {
        NotificationMessage message = NotificationMessage.builder()
            .userId(event.getUserId())
            .orderId(event.getOrderId())
            .notificationType(NotificationType.ORDER_CREATED)
            .build();
            
        messageQueueService.sendNotificationMessage(message);
    }
    
    private void processStatisticsUpdate(OrderCreatedEvent event) {
        StatisticsMessage message = StatisticsMessage.builder()
            .orderId(event.getOrderId())
            .userId(event.getUserId())
            .amount(calculateTotalAmount(event.getItems()))
            .timestamp(event.getTimestamp())
            .build();
            
        messageQueueService.sendStatisticsMessage(message);
    }
}

3. Reliability Principle

Ensure reliable delivery and processing of messages.

// Reliable message delivery implementation
@Component
public class ReliableMessageService {
    
    @Autowired
    private MessageQueueService messageQueueService;
    
    @Autowired
    private MessageStore messageStore;
    
    private final ScheduledExecutorService scheduledExecutorService =
        Executors.newSingleThreadScheduledExecutor();
    
    /**
     * Send a message reliably
     */
    public void sendReliableMessage(Message message) {
        // 1. Persist the message first
        String messageId = messageStore.saveMessage(message);
        message.setMessageId(messageId);
        
        try {
            // 2. Send the message to the queue
            messageQueueService.send(message);
            
            // 3. Update the message status
            messageStore.updateMessageStatus(messageId, MessageStatus.SENT);
            
            log.info("Reliable message sent: messageId={}", messageId);
            
        } catch (Exception e) {
            log.error("Reliable message send failed: messageId={}", messageId, e);
            
            // 4. Mark the send as failed and wait for retry
            messageStore.updateMessageStatus(messageId, MessageStatus.SEND_FAILED);
            
            // 5. Schedule a retry
            scheduleRetry(messageId);
        }
    }
    
    /**
     * Acknowledge message consumption
     */
    public void confirmMessageConsumption(String messageId) {
        messageStore.updateMessageStatus(messageId, MessageStatus.CONSUMED);
        log.debug("Message consumption confirmed: messageId={}", messageId);
    }
    
    /**
     * Periodically retry failed messages
     */
    @Scheduled(fixedDelay = 60000) // once per minute
    public void retryFailedMessages() {
        List<String> failedMessageIds = messageStore.getFailedMessages(100);
        
        for (String messageId : failedMessageIds) {
            try {
                Message message = messageStore.getMessage(messageId);
                if (message != null) {
                    messageQueueService.send(message);
                    messageStore.updateMessageStatus(messageId, MessageStatus.SENT);
                    log.info("Message retry succeeded: messageId={}", messageId);
                }
            } catch (Exception e) {
                log.error("Message retry failed: messageId={}", messageId, e);
                // Increment the retry counter
                messageStore.incrementRetryCount(messageId);
            }
        }
    }
    
    private void scheduleRetry(String messageId) {
        // Use an exponential backoff strategy
        int retryCount = messageStore.getRetryCount(messageId);
        long delay = calculateRetryDelay(retryCount);
        
        scheduledExecutorService.schedule(this::retryFailedMessages, delay, TimeUnit.SECONDS);
    }
    
    private long calculateRetryDelay(int retryCount) {
        // Exponential backoff: 1s, 2s, 4s, 8s, 16s, 32s, capped at 60s
        return Math.min((long) Math.pow(2, retryCount), 60);
    }
}

Core Message Queue Techniques

1. Message Queue Architecture Design

A message queue architecture typically consists of five parts:

  • Producer: the business application, message wrapping, send strategies, retry on failure
  • Message queue: message routing, load balancing, message persistence, ordering guarantees
  • Consumer: message listening, concurrent processing, acknowledgment, error handling
  • Storage system: disk storage, in-memory caching, indexing, data backup
  • Monitoring system: performance metrics, health checks, alerting, operations management

2. Message Queue Selection and Configuration

# RabbitMQ high-availability configuration
rabbitmq_config: |
  # Cluster formation
  cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
  cluster_formation.classic_config.nodes.1 = rabbit@node1
  cluster_formation.classic_config.nodes.2 = rabbit@node2
  cluster_formation.classic_config.nodes.3 = rabbit@node3
  
  # Memory thresholds
  vm_memory_high_watermark.relative = 0.6
  vm_memory_high_watermark_paging_ratio = 0.5
  
  # Disk limits
  disk_free_limit.absolute = 2GB
  
  # Queue settings
  queue_master_locator = min-masters
  queue_mode = lazy
  
  # Flow control
  channel_max = 2047
  heartbeat = 60
  
  # Persistence
  queue_index_embed_msgs_below = 4096
  msg_store_index_module = rabbit_msg_store_ets_index

# Apache Kafka high-availability configuration
kafka_config: |
  # Broker settings
  broker.id=1
  listeners=PLAINTEXT://0.0.0.0:9092
  advertised.listeners=PLAINTEXT://kafka1:9092
  log.dirs=/var/lib/kafka/logs
  num.network.threads=8
  num.io.threads=16
  socket.send.buffer.bytes=102400
  socket.receive.buffer.bytes=102400
  socket.request.max.bytes=104857600
  
  # Replication
  default.replication.factor=3
  min.insync.replicas=2
  num.partitions=12
  num.recovery.threads.per.data.dir=2
  
  # Log retention
  log.retention.hours=168
  log.segment.bytes=1073741824
  log.retention.check.interval.ms=300000
  
  # Flush tuning
  log.flush.interval.messages=10000
  log.flush.interval.ms=1000
  log.flush.scheduler.interval.ms=3000
  
  # ZooKeeper settings
  zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
  zookeeper.connection.timeout.ms=18000
  zookeeper.session.timeout.ms=18000
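The broker settings above (replication factor 3, min.insync.replicas=2) only pay off if producers also request strong acknowledgments. A minimal sketch of matching producer-side settings follows; the keys are standard Kafka producer configs, while the bootstrap addresses and exact values are illustrative assumptions.

```java
import java.util.Properties;

// Producer-side settings that complement the broker config above.
// Keys are standard Kafka producer configs; values are illustrative.
public class ReliableProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092,kafka2:9092,kafka3:9092");
        props.put("acks", "all");                // wait for min.insync.replicas acks
        props.put("enable.idempotence", "true"); // no duplicates on producer retry
        props.put("retries", "2147483647");      // retry transient broker failures
        props.put("linger.ms", "5");             // small batching window
        props.put("batch.size", "65536");        // batch writes for throughput
        props.put("compression.type", "lz4");    // cheaper network and disk
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("acks")); // prints all
    }
}
```

With acks=all and min.insync.replicas=2, a write is acknowledged only after at least two replicas have it, so losing a single broker loses no acknowledged messages.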

3. Producer and Consumer Optimization

// High-performance message producer
@Component
public class OptimizedMessageProducer {
    
    @Autowired
    private RabbitTemplate rabbitTemplate;
    
    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;
    
    private final ThreadLocal<MessageBuilder> messageBuilderThreadLocal = 
        ThreadLocal.withInitial(MessageBuilder::new);
    
    /**
     * Batch-send messages (RabbitMQ)
     */
    public void batchSendMessages(List<Message> messages, String exchange, String routingKey) {
        if (messages.isEmpty()) {
            return;
        }
        
        // Wrap the messages into a single batch
        BatchMessage batchMessage = BatchMessage.builder()
            .messages(messages)
            .timestamp(System.currentTimeMillis())
            .build();
            
        rabbitTemplate.convertAndSend(exchange, routingKey, batchMessage, message -> {
            // Set message properties
            message.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
            message.getMessageProperties().setContentType("application/json");
            message.getMessageProperties().setMessageId(UUID.randomUUID().toString());
            return message;
        });
        
        log.info("Batch send completed: count={}", messages.size());
    }
    
    /**
     * Send a message asynchronously (Kafka)
     */
    public CompletableFuture<SendResult<String, Object>> asyncSendMessage(
            String topic, String key, Object message) {
        
        ProducerRecord<String, Object> record = new ProducerRecord<>(topic, key, message);
        
        // Set message headers
        record.headers().add("message-id", UUID.randomUUID().toString().getBytes());
        record.headers().add("timestamp", String.valueOf(System.currentTimeMillis()).getBytes());
        record.headers().add("source", "order-service".getBytes());
        
        return kafkaTemplate.send(record);
    }
    
    /**
     * Send using pooled builder objects
     */
    public void sendWithConnectionPool(String queueName, Object message) {
        // Use a thread-local builder to avoid creating objects repeatedly
        MessageBuilder builder = messageBuilderThreadLocal.get();
        
        Message msg = builder
            .reset()
            .setMessageId(UUID.randomUUID().toString())
            .setTimestamp(System.currentTimeMillis())
            .setData(message)
            .build();
            
        rabbitTemplate.convertAndSend(queueName, msg);
    }
}

// High-performance message consumer
@Component
public class OptimizedMessageConsumer {
    
    private final ExecutorService executorService = Executors.newFixedThreadPool(
        Runtime.getRuntime().availableProcessors() * 2,
        new ThreadFactoryBuilder().setNameFormat("message-consumer-%d").build()
    );
    
    /**
     * Consume messages in batches
     */
    @RabbitListener(queues = "order.processing.queue", containerFactory = "batchContainerFactory")
    public void batchProcessMessages(List<Message> messages) {
        if (messages.isEmpty()) {
            return;
        }
        
        log.info("Processing message batch: count={}", messages.size());
        
        // Group messages by business type
        Map<String, List<Message>> groupedMessages = messages.stream()
            .collect(Collectors.groupingBy(this::getMessageType));
            
        // Process the different groups in parallel
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        
        groupedMessages.forEach((type, msgs) -> {
            CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
                processMessageBatch(type, msgs);
            }, executorService);
            futures.add(future);
        });
        
        // Wait for every batch to finish
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        
        log.info("Message batch processed: count={}", messages.size());
    }
    
    /**
     * Process messages concurrently (Kafka)
     */
    @KafkaListener(topics = "user-activity", groupId = "user-service-group", 
                   containerFactory = "kafkaListenerContainerFactory")
    public void concurrentProcessMessage(ConsumerRecord<String, Object> record) {
        try {
            // Deserialize the message
            UserActivityMessage message = deserializeMessage(record.value());
            
            // Process it
            processUserActivity(message);
            
            // Commit the offset manually
            acknowledgeMessage(record);
            
        } catch (Exception e) {
            log.error("Message processing failed: offset={}, error={}", record.offset(), e.getMessage());
            handleProcessingError(record, e);
        }
    }
    
    private void processMessageBatch(String type, List<Message> messages) {
        switch (type) {
            case "ORDER_CREATED":
                processOrderCreatedBatch(messages);
                break;
            case "INVENTORY_DEDUCTION":
                processInventoryDeductionBatch(messages);
                break;
            case "PAYMENT_COMPLETED":
                processPaymentCompletedBatch(messages);
                break;
            default:
                log.warn("Unknown message type: {}", type);
        }
    }
    
    private void processOrderCreatedBatch(List<Message> messages) {
        // Handle order-created events in bulk
        List<OrderCreatedEvent> events = messages.stream()
            .map(msg -> (OrderCreatedEvent) msg.getData())
            .collect(Collectors.toList());
            
        // Bulk-update the database
        batchUpdateOrders(events);
        
        // Bulk-send notifications
        batchSendNotifications(events);
    }
}

4. Message Ordering and Consistency Guarantees

// Message ordering implementation
@Component
public class MessageOrderingService {
    
    @Autowired
    private MessageQueueService messageQueueService;
    
    @Autowired
    private DistributedLock distributedLock;
    
    /**
     * Send an ordered message
     */
    public void sendOrderedMessage(String partitionKey, Object message) {
        // Use a partition key so related messages land on the same partition
        MessageKey key = MessageKey.builder()
            .partitionKey(partitionKey)
            .messageType(message.getClass().getSimpleName())
            .timestamp(System.currentTimeMillis())
            .build();
            
        messageQueueService.sendOrderedMessage(key, message);
    }
    
    /**
     * Process ordered messages
     */
    @KafkaListener(topics = "ordered-events", groupId = "ordered-consumer-group")
    public void processOrderedMessage(ConsumerRecord<String, Object> record) {
        String partitionKey = record.key();
        Object message = record.value();
        
        try {
            // A distributed lock keeps messages with the same partition key in order
            String lockKey = "ordered_message:" + partitionKey;
            
            if (distributedLock.tryLock(lockKey, 30, TimeUnit.SECONDS)) {
                try {
                    // Verify the message order
                    if (isMessageInOrder(partitionKey, record.offset())) {
                        processMessage(message);
                        updateLastProcessedOffset(partitionKey, record.offset());
                    } else {
                        // Out of order; defer for retry
                        handleOutOfOrderMessage(record);
                    }
                } finally {
                    distributedLock.unlock(lockKey);
                }
            } else {
                // Failed to acquire the lock; requeue the message
                requeueMessage(record);
            }
            
        } catch (Exception e) {
            log.error("Ordered message processing failed: partitionKey={}, offset={}", 
                partitionKey, record.offset(), e);
            handleProcessingError(record, e);
        }
    }
    
    private boolean isMessageInOrder(String partitionKey, long currentOffset) {
        Long lastProcessedOffset = getLastProcessedOffset(partitionKey);
        return lastProcessedOffset == null || currentOffset > lastProcessedOffset;
    }
}

// Distributed transactional message implementation
@Component
public class TransactionalMessageService {
    
    @Autowired
    private MessageStore messageStore;
    
    @Autowired
    private MessageQueueService messageQueueService;
    
    /**
     * Execute the local transaction and send the message
     */
    @Transactional
    public void executeLocalTransactionAndSendMessage(
            Runnable localTransaction, Message message) {
        
        String transactionId = generateTransactionId();
        message.setTransactionId(transactionId);
        
        try {
            // 1. Execute the local transaction
            localTransaction.run();
            
            // 2. Pre-send the message (half message)
            messageQueueService.sendHalfMessage(message);
            
            // 3. Record the transaction status
            messageStore.recordTransactionStatus(transactionId, TransactionStatus.PREPARED);
            
        } catch (Exception e) {
            log.error("Local transaction failed: transactionId={}", transactionId, e);
            
            // Mark the transaction for rollback
            TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
            
            // Delete the half message
            messageQueueService.deleteHalfMessage(message.getMessageId());
            
            throw new TransactionException("Local transaction failed", e);
        }
    }
    
    /**
     * Check the local transaction status
     */
    public TransactionStatus checkLocalTransactionStatus(String transactionId) {
        return messageStore.getTransactionStatus(transactionId);
    }
    
    /**
     * Commit the transactional message
     */
    public void commitTransactionMessage(String transactionId) {
        // Update the transaction status
        messageStore.updateTransactionStatus(transactionId, TransactionStatus.COMMITTED);
        
        // Commit the half message
        Message message = messageStore.getMessageByTransactionId(transactionId);
        if (message != null) {
            messageQueueService.commitHalfMessage(message.getMessageId());
        }
    }
    
    /**
     * Roll back the transactional message
     */
    public void rollbackTransactionMessage(String transactionId) {
        // Update the transaction status
        messageStore.updateTransactionStatus(transactionId, TransactionStatus.ROLLED_BACK);
        
        // Delete the half message
        Message message = messageStore.getMessageByTransactionId(transactionId);
        if (message != null) {
            messageQueueService.deleteHalfMessage(message.getMessageId());
        }
    }
}

High-Concurrency Write Architecture in Practice

1. Asynchronous Processing in an E-Commerce Order System

// Asynchronous order processing in an e-commerce system
@Service
public class AsyncOrderService {
    
    @Autowired
    private OrderRepository orderRepository;
    
    @Autowired
    private MessageQueueService messageQueueService;
    
    @Autowired
    private DistributedCacheService cacheService;
    
    @Autowired
    private InventoryService inventoryService;
    
    @Autowired
    private DistributedLock distributedLock;
    
    /**
     * Create an order asynchronously
     */
    @Transactional
    public OrderCreateResult createOrderAsync(CreateOrderRequest request) {
        // 1. Validate parameters
        validateOrderRequest(request);
        
        // 2. Pre-check inventory (via cache)
        if (!preCheckInventory(request.getItems())) {
            return OrderCreateResult.failed("Insufficient inventory");
        }
        
        // 3. Create the order in PENDING state
        Order order = Order.builder()
            .orderNo(generateOrderNo())
            .userId(request.getUserId())
            .items(request.getItems())
            .totalAmount(calculateTotalAmount(request.getItems()))
            .status(OrderStatus.PENDING)
            .createTime(LocalDateTime.now())
            .build();
            
        order = orderRepository.save(order);
        
        // 4. Publish the order-created event
        OrderCreatedEvent event = OrderCreatedEvent.builder()
            .orderId(order.getId())
            .orderNo(order.getOrderNo())
            .userId(order.getUserId())
            .items(order.getItems())
            .totalAmount(order.getTotalAmount())
            .timestamp(System.currentTimeMillis())
            .build();
            
        messageQueueService.sendOrderCreatedEvent(event);
        
        // 5. Schedule a processing-timeout check
        scheduleOrderTimeoutCheck(order.getId(), 30, TimeUnit.MINUTES);
        
        return OrderCreateResult.success(order.getOrderNo(), "Order created; processing in progress");
    }
    
    /**
     * Handle the order-created event
     */
    @EventListener
    public void handleOrderCreated(OrderCreatedEvent event) {
        log.info("Handling order-created event: orderId={}", event.getOrderId());
        
        try {
            // 1. Deduct inventory
            boolean inventoryDeducted = deductInventory(event);
            if (!inventoryDeducted) {
                handleOrderFailed(event.getOrderId(), "Inventory deduction failed");
                return;
            }
            
            // 2. Update the order status
            updateOrderStatus(event.getOrderId(), OrderStatus.INVENTORY_DEDUCTED);
            
            // 3. Trigger the payment flow
            triggerPaymentProcess(event);
            
            log.info("Order-created event handled: orderId={}", event.getOrderId());
            
        } catch (Exception e) {
            log.error("Order-created event handling failed: orderId={}", event.getOrderId(), e);
            handleOrderFailed(event.getOrderId(), e.getMessage());
        }
    }
    
    /**
     * Inventory deduction
     */
    private boolean deductInventory(OrderCreatedEvent event) {
        try {
            // Use a distributed lock to prevent overselling
            String lockKey = "inventory_deduction:" + event.getOrderId();
            
            if (distributedLock.tryLock(lockKey, 10, TimeUnit.SECONDS)) {
                try {
                    // Deduct inventory in bulk
                    List<InventoryDeduction> deductions = event.getItems().stream()
                        .map(item -> InventoryDeduction.builder()
                            .productId(item.getProductId())
                            .quantity(item.getQuantity())
                            .orderId(event.getOrderId())
                            .build())
                        .collect(Collectors.toList());
                        
                    return inventoryService.batchDeductInventory(deductions);
                    
                } finally {
                    distributedLock.unlock(lockKey);
                }
            } else {
                log.warn("Failed to acquire inventory deduction lock: orderId={}", event.getOrderId());
                return false;
            }
            
        } catch (Exception e) {
            log.error("Inventory deduction error: orderId={}", event.getOrderId(), e);
            return false;
        }
    }
    
    /**
     * Handle order failure
     */
    private void handleOrderFailed(Long orderId, String reason) {
        log.error("Order processing failed: orderId={}, reason={}", orderId, reason);
        
        // 1. Mark the order as FAILED
        updateOrderStatus(orderId, OrderStatus.FAILED);
        
        // 2. Compensate any inventory already deducted
        compensateInventory(orderId);
        
        // 3. Send an order-failure notification
        sendOrderFailureNotification(orderId, reason);
        
        // 4. Record the failure reason
        recordOrderFailure(orderId, reason);
    }
    
    /**
     * Order timeout check
     */
    @Scheduled(fixedDelay = 60000) // once per minute
    public void checkOrderTimeout() {
        List<Order> timeoutOrders = orderRepository.findTimeoutOrders(
            LocalDateTime.now().minusMinutes(30), 
            OrderStatus.PENDING
        );
        
        for (Order order : timeoutOrders) {
            log.warn("Handling timed-out order: orderId={}", order.getId());
            handleOrderFailed(order.getId(), "Order processing timed out");
        }
    }
}

2. Asynchronous Processing in a Payment System

// Asynchronous payment processing
@Service
public class AsyncPaymentService {
    
    @Autowired
    private PaymentRepository paymentRepository;
    
    @Autowired
    private MessageQueueService messageQueueService;
    
    @Autowired
    private RiskControlService riskControlService;
    
    /**
     * Process a payment asynchronously
     */
    public PaymentResult processPaymentAsync(PaymentRequest request) {
        // 1. Validate parameters
        validatePaymentRequest(request);
        
        // 2. Risk control check
        RiskAssessmentResult riskResult = riskControlService.assessRisk(request);
        if (riskResult.isRejected()) {
            return PaymentResult.failed("Rejected by risk control: " + riskResult.getReason());
        }
        
        // 3. Create the payment record
        Payment payment = Payment.builder()
            .paymentNo(generatePaymentNo())
            .orderId(request.getOrderId())
            .userId(request.getUserId())
            .amount(request.getAmount())
            .paymentMethod(request.getPaymentMethod())
            .status(PaymentStatus.PENDING)
            .riskScore(riskResult.getRiskScore())
            .createTime(LocalDateTime.now())
            .build();
            
        payment = paymentRepository.save(payment);
        
        // 4. Publish the payment-processing event
        PaymentProcessingEvent event = PaymentProcessingEvent.builder()
            .paymentId(payment.getId())
            .paymentNo(payment.getPaymentNo())
            .orderId(payment.getOrderId())
            .userId(payment.getUserId())
            .amount(payment.getAmount())
            .paymentMethod(payment.getPaymentMethod())
            .riskScore(payment.getRiskScore())
            .timestamp(System.currentTimeMillis())
            .build();
            
        messageQueueService.sendPaymentProcessingEvent(event);
        
        return PaymentResult.processing(payment.getPaymentNo(), "Payment in progress");
    }
    
    /**
     * Handle the payment event
     */
    @EventListener
    public void handlePaymentProcessing(PaymentProcessingEvent event) {
        log.info("Handling payment event: paymentId={}", event.getPaymentId());
        
        try {
            // 1. Choose a strategy based on the risk score
            if (event.getRiskScore() < 30) {
                // Low risk: fast lane
                processLowRiskPayment(event);
            } else if (event.getRiskScore() < 70) {
                // Medium risk: standard flow
                processMediumRiskPayment(event);
            } else {
                // High risk: manual review
                processHighRiskPayment(event);
            }
            
        } catch (Exception e) {
            log.error("Payment event handling failed: paymentId={}", event.getPaymentId(), e);
            handlePaymentFailed(event.getPaymentId(), e.getMessage());
        }
    }
    
    /**
     * Handle a low-risk payment
     */
    private void processLowRiskPayment(PaymentProcessingEvent event) {
        log.info("Processing low-risk payment: paymentId={}", event.getPaymentId());
        
        // 1. Call the third-party payment API
        PaymentResult externalResult = callExternalPaymentAPI(event);
        
        if (externalResult.isSuccess()) {
            // 2. Update the payment status
            updatePaymentStatus(event.getPaymentId(), PaymentStatus.SUCCESS);
            
            // 3. Publish the payment-success event
            publishPaymentSuccessEvent(event);
            
            log.info("Low-risk payment succeeded: paymentId={}", event.getPaymentId());
        } else {
            handlePaymentFailed(event.getPaymentId(), externalResult.getErrorMessage());
        }
    }
    
    /**
     * Handle a medium-risk payment
     */
    private void processMediumRiskPayment(PaymentProcessingEvent event) {
        log.info("Processing medium-risk payment: paymentId={}", event.getPaymentId());
        
        // 1. Perform an additional risk check
        AdditionalRiskCheckResult additionalCheck = performAdditionalRiskCheck(event);
        
        if (additionalCheck.isPassed()) {
            // 2. Continue with the payment flow
            processLowRiskPayment(event);
        } else {
            // 3. Escalate to manual review
            requestManualReview(event.getPaymentId(), additionalCheck.getReason());
        }
    }
    
    /**
     * Handle a high-risk payment
     */
    private void processHighRiskPayment(PaymentProcessingEvent event) {
        log.info("Processing high-risk payment: paymentId={}", event.getPaymentId());
        
        // 1. Mark the payment as requiring manual review
        updatePaymentStatus(event.getPaymentId(), PaymentStatus.MANUAL_REVIEW_REQUIRED);
        
        // 2. Send a manual-review request
        ManualReviewRequest reviewRequest = ManualReviewRequest.builder()
            .paymentId(event.getPaymentId())
            .riskScore(event.getRiskScore())
            .reason("High-risk payment; manual review required")
            .build();
            
        messageQueueService.sendManualReviewRequest(reviewRequest);
        
        log.info("High-risk payment submitted for manual review: paymentId={}", event.getPaymentId());
    }
    
    /**
     * Payment compensation mechanism
     */
    @EventListener
    public void handlePaymentCompensation(PaymentCompensationEvent event) {
        log.info("Handling payment compensation: paymentId={}", event.getPaymentId());
        
        try {
            // 1. Look up the payment
            Payment payment = paymentRepository.findById(event.getPaymentId());
            
            // 2. Compensate according to its status
            switch (payment.getStatus()) {
                case PENDING:
                    // Timed out without processing: mark as failed
                    updatePaymentStatus(payment.getId(), PaymentStatus.FAILED);
                    break;
                    
                case PROCESSING:
                    // In progress but timed out: query the third-party status
                    checkExternalPaymentStatus(payment);
                    break;
                    
                case SUCCESS:
                    // Succeeded but the caller missed the notification: resend it
                    resendPaymentNotification(payment);
                    break;
                    
                case FAILED:
                    // Failed but the order status was not updated: update it
                    updateOrderStatusAfterPaymentFailure(payment);
                    break;
                    
                default:
                    // Other states (e.g. MANUAL_REVIEW_REQUIRED) need no compensation here
                    break;
            }
            
        } catch (Exception e) {
            log.error("Payment compensation failed: paymentId={}", event.getPaymentId(), e);
        }
    }
}

3. Asynchronous Processing in an Inventory System

// Asynchronous inventory processing
@Service
public class AsyncInventoryService {
    
    @Autowired
    private InventoryRepository inventoryRepository;
    
    @Autowired
    private MessageQueueService messageQueueService;
    
    @Autowired
    private CacheService cacheService;
    
    @Autowired
    private DistributedLock distributedLock;
    
    /**
     * Deduct inventory asynchronously
     */
    public InventoryDeductionResult deductInventoryAsync(InventoryDeductionRequest request) {
        // 1. Pre-check inventory (via cache)
        if (!preCheckInventory(request)) {
            return InventoryDeductionResult.failed("Insufficient inventory");
        }
        
        // 2. Create the deduction record
        InventoryDeduction deduction = InventoryDeduction.builder()
            .deductionNo(generateDeductionNo())
            .productId(request.getProductId())
            .quantity(request.getQuantity())
            .orderId(request.getOrderId())
            .status(DeductionStatus.PENDING)
            .createTime(LocalDateTime.now())
            .build();
            
        deduction = inventoryRepository.saveDeduction(deduction);
        
        // 3. Publish the deduction event
        InventoryDeductionEvent event = InventoryDeductionEvent.builder()
            .deductionId(deduction.getId())
            .productId(deduction.getProductId())
            .quantity(deduction.getQuantity())
            .orderId(deduction.getOrderId())
            .timestamp(System.currentTimeMillis())
            .build();
            
        messageQueueService.sendInventoryDeductionEvent(event);
        
        return InventoryDeductionResult.processing(deduction.getDeductionNo(), "Inventory deduction in progress");
    }
    
    /**
     * Handle the inventory deduction event
     */
    @EventListener
    public void handleInventoryDeduction(InventoryDeductionEvent event) {
        log.info("Handling inventory deduction event: deductionId={}, productId={}", 
            event.getDeductionId(), event.getProductId());
        
        try {
            // 1. Acquire a distributed lock
            String lockKey = "inventory:" + event.getProductId();
            
            if (distributedLock.tryLock(lockKey, 10, TimeUnit.SECONDS)) {
                try {
                    // 2. Check that stock is sufficient
                    Inventory currentInventory = inventoryRepository.findByProductId(event.getProductId());
                    
                    if (currentInventory.getAvailableQuantity() < event.getQuantity()) {
                        handleInventoryDeductionFailed(event.getDeductionId(), "Insufficient inventory");
                        return;
                    }
                    
                    // 3. Perform the deduction
                    boolean deducted = inventoryRepository.deductInventory(
                        event.getProductId(), 
                        event.getQuantity(),
                        event.getDeductionId()
                    );
                    
                    if (deducted) {
                        // 4. Update the deduction record status
                        updateDeductionStatus(event.getDeductionId(), DeductionStatus.SUCCESS);
                        
                        // 5. Update the cache
                        updateInventoryCache(event.getProductId(), -event.getQuantity());
                        
                        // 6. Publish the inventory-changed event
                        publishInventoryChangedEvent(event);
                        
                        log.info("Inventory deducted: deductionId={}, productId={}, quantity={}", 
                            event.getDeductionId(), event.getProductId(), event.getQuantity());
                    } else {
                        handleInventoryDeductionFailed(event.getDeductionId(), "Concurrent deduction failed");
                    }
                    
                } finally {
                    distributedLock.unlock(lockKey);
                }
            } else {
                log.warn("Failed to acquire inventory lock: productId={}", event.getProductId());
                handleInventoryDeductionFailed(event.getDeductionId(), "Failed to acquire lock");
            }
            
        } catch (Exception e) {
            log.error("Inventory deduction error: deductionId={}", event.getDeductionId(), e);
            handleInventoryDeductionFailed(event.getDeductionId(), e.getMessage());
        }
    }
    
    /**
     * Batch inventory deduction (flash-sale scenario)
     */
    public BatchDeductionResult batchDeductInventory(BatchDeductionRequest request) {
        log.info("Batch inventory deduction: productId={}, totalQuantity={}, orderCount={}", 
            request.getProductId(), request.getTotalQuantity(), request.getOrders().size());
        
        // 1. Pre-deduct the total quantity
        Inventory inventory = inventoryRepository.findByProductId(request.getProductId());
        
        if (inventory.getAvailableQuantity() < request.getTotalQuantity()) {
            return BatchDeductionResult.failed("Insufficient total inventory");
        }
        
        // 2. Create the batch deduction task
        BatchDeductionTask task = BatchDeductionTask.builder()
            .taskNo(generateTaskNo())
            .productId(request.getProductId())
            .totalQuantity(request.getTotalQuantity())
            .orderCount(request.getOrders().size())
            .status(TaskStatus.PENDING)
            .createTime(LocalDateTime.now())
            .build();
            
        task = inventoryRepository.saveBatchTask(task);
        
        // 3. 为每个订单创建扣减事件
        List<InventoryDeductionEvent> events = new ArrayList<>();
        
        for (OrderInventory order : request.getOrders()) {
            InventoryDeductionEvent event = InventoryDeductionEvent.builder()
                .deductionId(generateDeductionId())
                .batchTaskId(task.getId())
                .productId(request.getProductId())
                .quantity(order.getQuantity())
                .orderId(order.getOrderId())
                .userId(order.getUserId())
                .timestamp(System.currentTimeMillis())
                .build();
                
            events.add(event);
        }
        
        // 4. 批量发送扣减事件
        messageQueueService.batchSendInventoryDeductionEvents(events);
        
        return BatchDeductionResult.processing(task.getTaskNo(), "批量扣减处理中");
    }
    
    /**
     * 库存回滚机制
     */
    @EventListener
    public void handleInventoryRollback(InventoryRollbackEvent event) {
        log.info("处理库存回滚: deductionId={}, productId={}, quantity={}", 
            event.getDeductionId(), event.getProductId(), event.getQuantity());
        
        try {
            // 1. 查询原扣减记录
            InventoryDeduction originalDeduction = inventoryRepository.findDeductionById(event.getDeductionId());
            
            if (originalDeduction == null || originalDeduction.getStatus() != DeductionStatus.SUCCESS) {
                log.warn("库存回滚失败:原扣减记录不存在或状态不正确: deductionId={}", event.getDeductionId());
                return;
            }
            
            // 2. 执行库存回滚
            boolean rolledBack = inventoryRepository.rollbackInventory(
                event.getProductId(),
                event.getQuantity(),
                event.getDeductionId()
            );
            
            if (rolledBack) {
                // 3. 更新扣减记录状态
                updateDeductionStatus(event.getDeductionId(), DeductionStatus.ROLLED_BACK);
                
                // 4. 更新缓存
                updateInventoryCache(event.getProductId(), event.getQuantity());
                
                // 5. 发送库存回滚完成事件
                publishInventoryRollbackCompletedEvent(event);
                
                log.info("库存回滚成功: deductionId={}, productId={}, quantity={}", 
                    event.getDeductionId(), event.getProductId(), event.getQuantity());
            } else {
                log.error("库存回滚失败: deductionId={}", event.getDeductionId());
            }
            
        } catch (Exception e) {
            log.error("库存回滚处理异常: deductionId={}", event.getDeductionId(), e);
        }
    }
    
    /**
     * 库存预热机制(防止缓存击穿)
     */
    @PostConstruct
    public void preheatInventoryCache() {
        log.info("开始预热库存缓存");
        
        try {
            // 1. 获取热销商品列表
            List<Long> hotProductIds = getHotProductIds(1000);
            
            // 2. 批量加载库存信息
            Map<Long, Inventory> inventoryMap = inventoryRepository.findByProductIds(hotProductIds);
            
            // 3. 预热到缓存
            for (Map.Entry<Long, Inventory> entry : inventoryMap.entrySet()) {
                Long productId = entry.getKey();
                Inventory inventory = entry.getValue();
                
                // 缓存库存信息
                cacheInventory(productId, inventory);
                
                // 缓存库存可用量(用于快速预检查)
                cacheAvailableQuantity(productId, inventory.getAvailableQuantity());
            }
            
            log.info("库存缓存预热完成: {} 个商品", hotProductIds.size());
            
        } catch (Exception e) {
            log.error("库存缓存预热失败", e);
        }
    }
}
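上面 handleInventoryDeduction 依赖仓储层 deductInventory 的"库存充足才扣减、否则返回失败"语义来避免超卖。下面是一个可独立运行的最小化演示(假设性示例,用 AtomicInteger 的CAS循环模拟单个商品的可用库存,真实系统中通常由数据库条件更新或Redis脚本实现),同时演示与 handleInventoryRollback 对应的回滚语义:

```java
import java.util.concurrent.atomic.AtomicInteger;

// 最小化演示(假设性示例):用CAS循环模拟"条件扣减",
// 对应仓储层 deductInventory 的"库存充足才扣减"语义
public class InventoryCasDemo {

    private final AtomicInteger available;

    public InventoryCasDemo(int initial) {
        this.available = new AtomicInteger(initial);
    }

    // 扣减:仅当剩余库存充足时成功,否则返回false(对应"库存不足/并发扣减失败")
    public boolean deduct(int quantity) {
        while (true) {
            int current = available.get();
            if (current < quantity) {
                return false; // 库存不足
            }
            if (available.compareAndSet(current, current - quantity)) {
                return true;  // 扣减成功
            }
            // CAS失败说明有并发修改,重读后重试
        }
    }

    // 回滚:把已扣减的数量加回,对应 handleInventoryRollback
    public void rollback(int quantity) {
        available.addAndGet(quantity);
    }

    public int getAvailable() {
        return available.get();
    }

    public static void main(String[] args) {
        InventoryCasDemo inventory = new InventoryCasDemo(10);
        System.out.println(inventory.deduct(6));      // true
        System.out.println(inventory.deduct(6));      // false,剩余4不足6
        inventory.rollback(6);
        System.out.println(inventory.getAvailable()); // 10
    }
}
```

CAS循环保证"检查库存"与"扣减库存"是一个原子动作,这也是正文中先获取分布式锁再检查、再扣减的根本原因:检查与扣减之间不允许插入其他写入。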

高并发写架构最佳实践

1. 消息队列使用原则

// 消息队列最佳实践
public class MessageQueueBestPractices {
    
    /**
     * 原则1:消息要幂等
     */
    // ❌ 不好的做法:消息没有唯一标识,消费方无法判重
    public static class NonIdempotentMessage {
        private String userId;
        private String action;
        private LocalDateTime timestamp;
    }
    
    // ✅ 好的做法:消息携带唯一ID和重试次数
    public static class IdempotentMessage {
        private String messageId;      // 消息唯一ID
        private String userId;
        private String action;
        private LocalDateTime timestamp;
        private int retryCount;        // 重试次数
        
        public IdempotentMessage() {
            this.messageId = UUID.randomUUID().toString();
            this.timestamp = LocalDateTime.now();
            this.retryCount = 0;
        }
        
        public String getMessageId() {
            return messageId;
        }
    }
    
    // 消费时检查消息是否已处理
    public void consumeMessage(IdempotentMessage message) {
        String messageKey = "processed_message:" + message.getMessageId();
        
        // 使用分布式锁避免同一消息被并发处理
        if (distributedLock.tryLock(messageKey, 10, TimeUnit.SECONDS)) {
            try {
                // 双重检查:锁内再次确认消息未被处理
                if (isMessageProcessed(message.getMessageId())) {
                    log.info("消息已处理,跳过: messageId={}", message.getMessageId());
                    return;
                }
                
                // 处理消息
                processMessage(message);
                
                // 标记消息已处理
                markMessageProcessed(message.getMessageId());
                
            } finally {
                distributedLock.unlock(messageKey);
            }
        }
    }
    
    /**
     * 原则2:消息要可重试
     */
    // 重试策略配置(独立的Spring配置类)
    @Configuration
    public static class RetryConfig {
        
        @Bean
        public RetryTemplate retryTemplate() {
            RetryTemplate template = new RetryTemplate();
            
            // 指数退避策略:1秒、2秒、4秒……最长60秒
            ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
            backOffPolicy.setInitialInterval(1000);    // 初始间隔1秒
            backOffPolicy.setMultiplier(2.0);          // 每次间隔翻倍
            backOffPolicy.setMaxInterval(60000);       // 最大间隔60秒
            template.setBackOffPolicy(backOffPolicy);
            
            // 重试策略:最多尝试5次
            SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
            retryPolicy.setMaxAttempts(5);
            template.setRetryPolicy(retryPolicy);
            
            return template;
        }
    }
    
    // 使用重试模板处理消息
    public void processMessageWithRetry(Message message) {
        retryTemplate.execute(context -> {
            log.info("处理消息: attempt={}", context.getRetryCount() + 1);
            
            try {
                processMessage(message);
                return null;
                
            } catch (Exception e) {
                log.error("消息处理失败: attempt={}", context.getRetryCount() + 1, e);
                
                // 已达最后一次尝试仍失败:转入死信队列,避免消息丢失
                if (context.getRetryCount() >= 4) {
                    sendToDeadLetterQueue(message, e);
                }
                
                throw e; // 抛出异常以触发下一次重试
            }
        });
    }
    
    /**
     * 原则3:消息要可追踪
     */
    // 消息追踪上下文
    public static class MessageTracingContext {
        private String traceId;
        private String spanId;
        private String parentSpanId;
        private Map<String, String> baggage;
        
        public MessageTracingContext() {
            this.traceId = UUID.randomUUID().toString();
            this.spanId = UUID.randomUUID().toString();
            this.baggage = new HashMap<>();
        }
        
        public String getTraceId() { return traceId; }
        public String getSpanId() { return spanId; }
        public String getParentSpanId() { return parentSpanId; }
        public Map<String, String> getBaggage() { return baggage; }
    }
    
    // 发送带追踪信息的消息
    public void sendTraceableMessage(Object payload, MessageTracingContext context) {
        TracedMessage message = TracedMessage.builder()
            .payload(payload)
            .traceId(context.getTraceId())
            .spanId(context.getSpanId())
            .parentSpanId(context.getParentSpanId())
            .timestamp(System.currentTimeMillis())
            .baggage(context.getBaggage())
            .build();
            
        messageQueueService.send(message);
        
        // 记录追踪日志
        log.info("发送追踪消息: traceId={}, spanId={}", 
            context.getTraceId(), context.getSpanId());
    }
}
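原则1中"分布式锁 + 双重检查"的判重效果,可以用一段可独立运行的示例来演示(假设性示例:用 ConcurrentHashMap 代替真实的已处理消息表,用 putIfAbsent 的原子性代替分布式锁;为简洁起见这里把"检查"与"标记"合并为一步,真实实现还需考虑处理失败后回滚标记):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// 最小化演示(假设性示例):用 putIfAbsent 实现消费幂等,
// 单机内等价于原则1中"分布式锁 + 双重检查"的效果
public class IdempotentConsumerDemo {

    // 已处理消息表:真实系统中通常是Redis SETNX或数据库唯一键
    private final ConcurrentHashMap<String, Boolean> processed = new ConcurrentHashMap<>();
    private final AtomicInteger sideEffects = new AtomicInteger();

    // 返回true表示本次真正执行了业务逻辑,false表示重复消息被跳过
    public boolean consume(String messageId) {
        // putIfAbsent 原子地"检查并标记",防止并发重复处理
        if (processed.putIfAbsent(messageId, Boolean.TRUE) != null) {
            return false; // 消息已处理,跳过
        }
        sideEffects.incrementAndGet(); // 模拟真实的业务副作用
        return true;
    }

    public int getSideEffectCount() {
        return sideEffects.get();
    }
}
```

无论同一 messageId 被投递多少次,业务副作用只会发生一次,这正是"至少一次投递 + 幂等消费"组合能达到"恰好一次"效果的原因。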

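原则2中配置的指数退避(初始1秒、倍数2、上限60秒)产生的重试间隔序列可以直接算出来。下面是一个不依赖Spring Retry的小示例(假设性示例),按相同参数计算每次重试前的等待毫秒数:

```java
// 最小化演示(假设性示例):按上文RetryConfig的参数
// (初始1000ms、倍数2.0、上限60000ms)计算指数退避间隔
public class BackoffDemo {

    // 第attempt次重试(从1开始计数)前的等待毫秒数
    public static long backoffMillis(int attempt, long initial, double multiplier, long max) {
        double interval = initial * Math.pow(multiplier, attempt - 1);
        return (long) Math.min(interval, max);
    }

    public static void main(String[] args) {
        // 最多尝试5次 => 4次退避等待:1s, 2s, 4s, 8s
        for (int attempt = 1; attempt <= 7; attempt++) {
            System.out.println("attempt " + attempt + ": "
                + backoffMillis(attempt, 1000, 2.0, 60000) + "ms");
        }
    }
}
```

可以看到第7次重试才会触及60秒上限;指数退避让瞬时故障的重试快速完成,又避免持续故障时重试风暴压垮下游。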
2. 性能调优建议

# 消息队列性能调优配置
performance_tuning:
  # 生产者优化
  producer:
    batch_size: 16384                    # 批处理大小
    linger_ms: 10                        # 延迟发送时间
    compression_type: lz4               # 压缩算法
    acks: 1                              # 确认级别
    retries: 3                           # 重试次数
    max_in_flight_requests_per_connection: 5
    
  # 消费者优化
  consumer:
    fetch_min_bytes: 1024               # 最小拉取字节数
    fetch_max_wait_ms: 500              # 最大等待时间
    max_poll_records: 500               # 最大拉取记录数
    enable_auto_commit: false           # 禁用自动提交
    auto_offset_reset: earliest         # 偏移量重置策略
    
  # Broker优化
  broker:
    num_network_threads: 8              # 网络线程数
    num_io_threads: 16                  # IO线程数
    socket_send_buffer_bytes: 102400    # 发送缓冲区
    socket_receive_buffer_bytes: 102400 # 接收缓冲区
    log_segment_bytes: 1073741824       # 日志段大小
    log_retention_hours: 168            # 日志保留时间
    
  # JVM优化
  jvm:
    heap_size: "4g"                     # 堆内存大小
    gc_type: "G1GC"                     # 垃圾收集器
    gc_max_pause: 200                   # GC最大暂停时间
    heap_regions_size: "16m"            # 堆区域大小

# 数据库连接池优化
database_optimization:
  hikari:
    maximum_pool_size: 50               # 最大连接数
    minimum_idle: 10                    # 最小空闲连接
    connection_timeout: 30000           # 连接超时
    idle_timeout: 600000                # 空闲超时
    max_lifetime: 1800000               # 最大生命周期
    leak_detection_threshold: 60000     # 泄露检测阈值
    
  # 数据库优化
  mysql:
    innodb_buffer_pool_size: "8G"       # InnoDB缓冲池
    innodb_log_file_size: "2G"          # InnoDB日志文件大小
    innodb_flush_log_at_trx_commit: 2   # 事务提交刷新策略(2:每秒刷盘,牺牲少量持久性换取写入吞吐)
    sync_binlog: 0                      # binlog同步策略(0:由操作系统决定刷盘时机,宕机可能丢失binlog)
    query_cache_type: 0                 # 关闭查询缓存(高并发写场景下缓存失效开销大,MySQL 8.0已移除该特性)
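上面YAML中的生产者/消费者参数对应Kafka客户端的标准配置键。下面是一个示意(假设使用Apache Kafka,键名为Kafka官方配置名;这里只构造 java.util.Properties,不实际创建Producer/Consumer,也不连接集群):

```java
import java.util.Properties;

// 示意(假设性示例):把上文YAML中的调优值映射为Kafka客户端的标准配置键
public class KafkaTuningProps {

    public static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("batch.size", "16384");            // 批处理大小
        props.setProperty("linger.ms", "10");                // 延迟发送时间
        props.setProperty("compression.type", "lz4");        // 压缩算法
        props.setProperty("acks", "1");                      // 确认级别
        props.setProperty("retries", "3");                   // 重试次数
        props.setProperty("max.in.flight.requests.per.connection", "5");
        return props;
    }

    public static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("fetch.min.bytes", "1024");        // 最小拉取字节数
        props.setProperty("fetch.max.wait.ms", "500");       // 最大等待时间
        props.setProperty("max.poll.records", "500");        // 最大拉取记录数
        props.setProperty("enable.auto.commit", "false");    // 禁用自动提交,手动控制偏移量
        props.setProperty("auto.offset.reset", "earliest");  // 偏移量重置策略
        return props;
    }
}
```

注意 acks=1 只等待leader落盘,是吞吐与持久性的折中;对订单、支付这类不允许丢消息的场景,应改为 acks=all 并配合 min.insync.replicas 使用。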

3. 监控告警配置

# Prometheus消息队列监控配置
groups:
- name: message_queue_monitoring
  rules:
  
  # 消息积压告警
  - alert: MessageQueueBacklogHigh
    expr: rabbitmq_queue_messages > 10000
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "消息队列积压严重"
      description: "队列 {{ $labels.queue }} 积压消息数 {{ $value }}"
  
  # 消息处理延迟告警
  - alert: MessageProcessingLatencyHigh
    expr: message_processing_duration_seconds{quantile="0.95"} > 10
    for: 3m
    labels:
      severity: warning
    annotations:
      summary: "消息处理延迟过高"
      description: "消息处理95分位延迟 {{ $value }}秒"
  
  # 消息消费失败率告警
  - alert: MessageConsumptionFailureRateHigh
    expr: rate(message_consumption_failures_total[5m]) / rate(message_consumption_total[5m]) > 0.1
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: "消息消费失败率过高"
      description: "消息消费失败率 {{ $value | humanizePercentage }}"
  
  # 死信队列消息数告警
  - alert: DeadLetterQueueMessagesHigh
    expr: rabbitmq_queue_messages{queue=~".*\\.dlq"} > 1000
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "死信队列消息数过多"
      description: "死信队列 {{ $labels.queue }} 消息数 {{ $value }}"
  
  # 消息重试次数告警
  - alert: MessageRetryCountHigh
    expr: message_retry_count > 5
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "消息重试次数过多"
      description: "消息重试次数 {{ $value }}"
  
  # 消息队列连接数告警
  - alert: MessageQueueConnectionsHigh
    expr: rabbitmq_connections > 1000
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "消息队列连接数过高"
      description: "RabbitMQ连接数 {{ $value }}"
  
  # 消息队列内存使用告警
  - alert: MessageQueueMemoryUsageHigh
    expr: rabbitmq_node_mem_used / rabbitmq_node_mem_limit > 0.8
    for: 3m
    labels:
      severity: critical
    annotations:
      summary: "消息队列内存使用率过高"
      description: "RabbitMQ内存使用率 {{ $value | humanizePercentage }}"
  
  # 消息生产速率告警
  - alert: MessageProductionRateHigh
    expr: rate(message_production_total[5m]) > 10000
    for: 2m
    labels:
      severity: info
    annotations:
      summary: "消息生产速率过高"
      description: "消息生产速率 {{ $value }}/秒"
  
  # 消息消费速率告警
  - alert: MessageConsumptionRateLow
    expr: rate(message_consumption_total[5m]) < 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "消息消费速率过低"
      description: "消息消费速率 {{ $value }}/秒"

高并发写架构演进路径

同步写入 → 本地异步 → 消息队列 → 事件驱动 → 自适应架构

  1. 同步写入:简单直接;出现性能瓶颈后,演进为本地异步
  2. 本地异步:响应快速;受到单机容量限制后,演进为消息队列
  3. 消息队列:削峰填谷;系统复杂性增加后,演进为事件驱动
  4. 事件驱动:松耦合;面对智能化需求,演进为自适应架构
  5. 自适应架构:智能调度,根据负载动态调整处理策略

总结

高并发写架构的核心在于通过异步化处理,将同步的写入操作转换为异步的消息处理,从而实现系统的削峰填谷、解耦和性能提升。通过消息队列架构,我们能够:

核心原则

  1. 异步化处理:将同步写入转换为异步消息,减少用户等待时间
  2. 流量削峰:通过消息队列缓冲突发流量,保护后端系统
  3. 系统解耦:降低系统组件间的耦合度,提高可维护性
  4. 可靠性保证:通过消息持久化、重试机制确保数据最终一致性

关键技术

  1. 消息队列:RabbitMQ、Apache Kafka、RocketMQ等,提供可靠的消息传输
  2. 事件驱动:基于事件的架构设计,实现松耦合的系统集成
  3. 幂等性设计:确保消息重复消费不会导致数据不一致
  4. 重试机制:通过指数退避、死信队列等机制处理失败消息
  5. 监控告警:实时监控系统指标,及时发现问题

成功要素

  1. 合理的消息设计:消息要幂等、可重试、可追踪
  2. 适当的队列配置:根据业务特点配置队列参数和集群部署
  3. 完善的错误处理:建立重试机制和死信队列处理失败消息
  4. 持续监控优化:实时监控系统性能,动态调整处理策略
  5. 容量规划:提前规划系统容量,支持业务增长

高并发写架构不是简单的异步化,而是需要根据业务特点、数据一致性要求、系统复杂度等因素,设计出最适合的异步处理架构。
