Cloud-Native Deployment of a Spring Cloud Microservice Scaffold: An Alibaba Cloud ACK Cluster Deployment Plan
1. Cloud-Native Deployment Pain Points and Solutions
Are you facing deployment problems like these: version conflicts between microservice components that break deployments, chaotic configuration management across environments, autoscaling policies that cannot keep up with traffic peaks, or stubbornly high cloud costs? Based on a Spring Cloud microservice development scaffold (integrating Spring Security OAuth2, Nacos, Sentinel and other core components), this article provides a complete deployment plan for Alibaba Cloud ACK (Container Service for Kubernetes). Through 12 implementation steps and 6 best practices, it helps teams automate the whole path from code commit to production readiness.
After reading this article you will have:
- A standardized deployment architecture for an Alibaba Cloud ACK cluster
- The 10 key configurations for containerizing the microservices
- A Nacos-based multi-environment configuration center in practice
- A hands-on guide to service mesh and traffic governance
- The complete procedure for building a monitoring and alerting stack
- 7 practical cost-optimization tips
2. Deployment Architecture Design
2.1 Overall Architecture Diagram
2.2 Core Component Version Matrix
| Component | Version | Corresponding Alibaba Cloud service | Role |
|---|---|---|---|
| Spring Cloud | Greenwich.RELEASE | - | Microservice framework |
| Spring Boot | 2.1.10.RELEASE | - | Application framework |
| Spring Cloud Alibaba | 2.1.0.RELEASE | - | Alibaba Cloud microservice components |
| Kubernetes | 1.24+ | ACK Standard | Container orchestration platform |
| Nacos | 1.4.2 | Alibaba Cloud Nacos Enterprise (MSE) | Service registry and configuration center |
| Sentinel | 1.8.0 | Application High Availability Service (AHAS) | Flow control and circuit breaking |
| Elasticsearch | 7.14.0 | Alibaba Cloud Elasticsearch | Log and search storage |
| SkyWalking | 8.7.0 | Application Real-Time Monitoring Service (ARMS) | Distributed tracing |
| Jenkins | 2.346.3 | Yunxiao (Alibaba Cloud DevOps) | CI/CD pipeline |
2.3 Network Planning
| Network type | CIDR range | Purpose | Security group rules |
|---|---|---|---|
| Node network | 10.0.0.0/16 | Inter-node communication | Cluster-internal traffic only |
| Pod network | 172.20.0.0/16 | Container-to-container communication | Isolated per namespace |
| Service network | 172.21.0.0/16 | Service discovery | Cluster-internal access |
| Ingress network | - | External traffic entry point | Only ports 80/443 exposed |
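The "isolated per namespace" rule for the Pod network can be enforced with Kubernetes NetworkPolicy, which the Terway plugin used below supports. A minimal sketch; the policy name and scope are illustrative, not part of the scaffold:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: prod
spec:
  podSelector: {}            # applies to every Pod in the prod namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # only Pods in this same namespace may connect
Note that the gateway Pods, which receive traffic directly from the ALB (target-type: ip), would additionally need a rule allowing the load balancer's source addresses, for example an ipBlock covering the VPC CIDR.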
3. Deployment Environment Setup
3.1 Creating the ACK Cluster
- Cluster configuration
  - Cluster type: ACK Standard
  - Kubernetes version: 1.24.6-aliyun.1
  - Control plane: 3 × ecs.g6.xlarge (4 vCPU, 16 GiB)
  - Worker nodes: at least 3 × ecs.c6.2xlarge (8 vCPU, 16 GiB)
  - Operating system: CentOS 7.9
  - Container runtime: containerd (dockershim was removed in Kubernetes 1.24, so Docker is no longer available as a runtime there)
  - Network plugin: Terway (supports network policies)
  - Storage plugin: CSI (FlexVolume is deprecated; CSI supports both NAS and cloud disks)
- Initialization commands
# Install the Alibaba Cloud CLI
curl -o aliyun https://aliyuncli.alicdn.com/aliyun-cli-linux-x64/latest/aliyun
chmod +x aliyun
mv aliyun /usr/local/bin/
# Configure the Alibaba Cloud CLI
aliyun configure set --access-key-id <your-access-key-id> --access-key-secret <your-access-key-secret> --region cn-beijing
# Create the ACK cluster (the console is more intuitive; this is only a simplified API-style example, see the ACK OpenAPI reference for the exact request format)
aliyun cs POST /clusters \
--region cn-beijing \
--cluster-spec "ack.pro.small" \
--name "springcloud-prod-cluster" \
--version "1.24.6-aliyun.1" \
--num-of-nodes 3 \
--node-cidr-mask 24 \
--service-cidr "172.21.0.0/20" \
--pod-cidr "172.20.0.0/16"
3.2 Deploying the Foundation Services
Provision the following managed services through the Alibaba Cloud marketplace/console:
- Nacos configuration center
  - Version: 1.4.2
  - Deployment mode: cluster
  - Specification: 3 nodes, 2 vCPU / 4 GiB each
  - Storage: ApsaraDB RDS for MySQL 8.0
- Sentinel dashboard
  - Version: 1.8.0
  - Deployment: Application High Availability Service (AHAS)
  - Rule persistence: enabled, persisted to Nacos
- Elasticsearch cluster
  - Version: 7.14.0
  - Specification: 3 nodes, 4 vCPU / 16 GiB each
  - Storage: 100 GB SSD cloud disk per node
4. Pre-Deployment Preparation
4.1 Getting the Code
Clone the project repository:
git clone https://gitcode.com/gh_mirrors/sp/SpringCloud.git
cd SpringCloud
4.2 Environment Checklist
Before deploying, make sure the following are installed locally:
- JDK 1.8+
- Maven 3.6+
- Docker 19.03+
- kubectl 1.24+ (with ACK cluster credentials configured)
- Helm 3.8+
Verify the environment:
# Verify the Java version
java -version
# Expected output: java version "1.8.0_XXX"
# Verify the Maven version
mvn -v
# Expected output: Apache Maven 3.6.X
# Verify the kubectl context
kubectl config get-contexts
# The configured ACK cluster context should be listed
# Verify Docker
docker --version
# Expected output: Docker version 19.03.X or later
5. Containerizing the Microservices
5.1 Standardizing the Dockerfile
Using the base authorization service base-authorization as an example, create a standard Dockerfile:
# Build stage
FROM maven:3.6.3-jdk-8-slim AS builder
WORKDIR /app
COPY pom.xml .
# Pre-fetch dependencies so they are cached as a layer
RUN mvn dependency:go-offline -B
COPY src ./src
# Build the application with the prod profile
RUN mvn package -DskipTests -Pprod
# Runtime stage
FROM openjdk:8-jre-slim
WORKDIR /app
# Time zone configuration
ENV TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install wget for the HEALTHCHECK below (the slim base image does not ship it)
RUN apt-get update && apt-get install -y --no-install-recommends wget && rm -rf /var/lib/apt/lists/*
# Create an application user
RUN groupadd -r appuser && useradd -r -g appuser appuser
# Copy the jar from the build stage
COPY --from=builder /app/target/*.jar app.jar
# Security hardening: run as a non-root user
USER appuser
# JVM tuning
ENV JAVA_OPTS="-server -Xms512m -Xmx512m -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof"
# Health check (note: Kubernetes ignores Docker HEALTHCHECK and relies on the probes defined in the Deployment)
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
CMD wget -q -O /dev/null http://localhost:8080/actuator/health || exit 1
# Start command
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar app.jar"]
5.2 Application Configuration Changes
5.2.1 bootstrap.yml
spring:
application:
name: base-authorization
profiles:
active: ${SPRING_PROFILES_ACTIVE:prod}
cloud:
nacos:
config:
server-addr: ${NACOS_SERVER_ADDR:xxx.nacos-ans.mse.aliyuncs.com:8848}
namespace: ${NACOS_NAMESPACE:prod}
group: SPRING_CLOUD_GROUP
file-extension: yaml
shared-configs:
- data-id: common.yaml
group: SPRING_CLOUD_GROUP
refresh: true
discovery:
server-addr: ${NACOS_SERVER_ADDR:xxx.nacos-ans.mse.aliyuncs.com:8848}
namespace: ${NACOS_NAMESPACE:prod}
# Expose actuator endpoints
management:
endpoints:
web:
exposure:
include: health,info,metrics,prometheus
metrics:
export:
prometheus:
enabled: true
endpoint:
health:
show-details: always
probes:
enabled: true
5.2.2 Maven Configuration
Modify the project's root pom.xml and add the Docker image build plugin:
<build>
<plugins>
<!-- Docker image build plugin -->
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.4.13</version>
<executions>
<execution>
<id>default</id>
<goals>
<goal>build</goal>
<goal>push</goal>
</goals>
</execution>
</executions>
<configuration>
<repository>registry.cn-beijing.aliyuncs.com/your-namespace/${project.artifactId}</repository>
<tag>${project.version}-${BUILD_NUMBER}</tag>
<buildArgs>
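<!-- JAR_FILE is only used if the Dockerfile declares ARG JAR_FILE; the multi-stage Dockerfile in 5.1 builds the jar inside the image and ignores this build argument -->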
<JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
</buildArgs>
</configuration>
</plugin>
</plugins>
</build>
6. Deployment Steps
6.1 Building the Docker Images
Log in to Alibaba Cloud Container Registry (ACR):
docker login --username=your-username registry.cn-beijing.aliyuncs.com
Build and push the images for all services. BUILD_NUMBER feeds the image tag defined in pom.xml; set it manually when building outside CI:
# Build the base authorization service
cd base-authorization
mvn clean package -DskipTests -Pprod -DBUILD_NUMBER=local dockerfile:build dockerfile:push
# Build the gateway service
cd ../base-gateway
mvn clean package -DskipTests -Pprod -DBUILD_NUMBER=local dockerfile:build dockerfile:push
# Build the remaining services...
6.2 Kubernetes Manifests
6.2.1 Creating the Namespace
Create namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: prod
labels:
name: prod
environment: production
Apply it:
kubectl apply -f namespace.yaml
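The Deployments below read the Nacos address from a ConfigMap named prod-config (applied later as k8s/configmap.yaml). A minimal sketch, with the key name taken from the configMapKeyRef used in the Deployment and the server address left as the same placeholder used in bootstrap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-config
  namespace: prod
data:
  # read by the Deployments via configMapKeyRef
  nacos_server_addr: "xxx.nacos-ans.mse.aliyuncs.com:8848"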
6.2.2 Deploying the Base Authorization Service
Create base-authorization-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: base-authorization
namespace: prod
labels:
app: base-authorization
service: authorization
spec:
replicas: 3
selector:
matchLabels:
app: base-authorization
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: base-authorization
service: authorization
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/actuator/prometheus"
prometheus.io/port: "8080"
sidecar.istio.io/inject: "true"
spec:
containers:
- name: base-authorization
image: registry.cn-beijing.aliyuncs.com/your-namespace/base-authorization:0.0.1-SNAPSHOT-123
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http
protocol: TCP
resources:
limits:
cpu: "1000m"
memory: "1024Mi"
requests:
cpu: "500m"
memory: "512Mi"
env:
- name: SPRING_PROFILES_ACTIVE
value: "prod"
- name: NACOS_SERVER_ADDR
valueFrom:
configMapKeyRef:
name: prod-config
key: nacos_server_addr
- name: NACOS_NAMESPACE
value: "prod"
- name: JAVA_OPTS
value: "-server -Xms512m -Xmx512m -XX:+UseG1GC"
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 3
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 60
periodSeconds: 15
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 3
volumeMounts:
- name: logs
mountPath: /app/logs
volumes:
- name: logs
persistentVolumeClaim:
claimName: prod-logs-pvc
---
apiVersion: v1
kind: Service
metadata:
name: base-authorization
namespace: prod
labels:
app: base-authorization
service: authorization
spec:
ports:
- port: 8080
name: http
targetPort: 8080
selector:
app: base-authorization
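The log volume above refers to a prod-logs-pvc claim (applied later as k8s/pvc.yaml). Because all three replicas mount the same volume, the claim needs a ReadWriteMany-capable backend, which on ACK means NAS rather than a cloud disk. A sketch; the storage class name is an assumption that depends on how NAS is provisioned in the cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prod-logs-pvc
  namespace: prod
spec:
  accessModes:
    - ReadWriteMany                # shared by all replicas of the log-writing services
  storageClassName: alicloud-nas   # assumed NAS-backed storage class
  resources:
    requests:
      storage: 50Gi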
6.2.3 Deploying the Gateway Service
Create base-gateway-deploy.yaml (key parts):
apiVersion: apps/v1
kind: Deployment
metadata:
name: base-gateway
namespace: prod
labels:
app: base-gateway
service: gateway
spec:
replicas: 2
selector:
matchLabels:
app: base-gateway
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: base-gateway
service: gateway
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/actuator/prometheus"
prometheus.io/port: "8080"
spec:
containers:
- name: base-gateway
image: registry.cn-beijing.aliyuncs.com/your-namespace/base-gateway:0.0.1-SNAPSHOT-124
imagePullPolicy: Always
ports:
- containerPort: 8080
name: http
resources:
limits:
cpu: "1000m"
memory: "1024Mi"
requests:
cpu: "500m"
memory: "512Mi"
        # environment variables (same pattern as the authorization service)...
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: base-gateway
namespace: prod
spec:
ports:
- port: 8080
name: http
selector:
app: base-gateway
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: base-gateway-ingress
namespace: prod
annotations:
kubernetes.io/ingress.class: "alb"
alb.ingress.kubernetes.io/scheme: "internet-facing"
alb.ingress.kubernetes.io/target-type: "ip"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/certificate-arn: "acs:alb:cn-beijing:xxxx:certificate/xxxx"
spec:
rules:
- host: api.your-domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: base-gateway
port:
number: 8080
6.3 Running the Deployment
# Create the persistent volume claims
kubectl apply -f k8s/pvc.yaml
# Apply the ConfigMap
kubectl apply -f k8s/configmap.yaml
# Deploy the authorization service
kubectl apply -f k8s/base-authorization-deploy.yaml
# Deploy the gateway service
kubectl apply -f k8s/base-gateway-deploy.yaml
# Deploy the remaining services...
6.4 Verifying the Deployment
# Check Pod status
kubectl get pods -n prod
# Check Services
kubectl get svc -n prod
# Check the Ingress
kubectl get ingress -n prod
# Tail the application logs
kubectl logs -f -n prod $(kubectl get pods -n prod -l app=base-authorization -o jsonpath='{.items[0].metadata.name}')
# Verify the service is reachable
curl -X GET "https://api.your-domain.com/actuator/health" -H "accept: application/json"
7. Traffic Governance
7.1 Sentinel Flow Control
Configure the Sentinel flow rules through Nacos. Create base-gateway-sentinel-rules.json:
[
{
"resource": "GET:/api/v1/users",
"limitApp": "default",
"grade": 1,
"count": 100,
"strategy": 0,
"controlBehavior": 0,
"clusterMode": false
},
{
"resource": "base-authorization",
"limitApp": "default",
"grade": 0,
"count": 200,
"strategy": 0,
"controlBehavior": 0,
"clusterMode": false
}
]
Publish the rules through the Nacos Open API (URL-encode the JSON content so it survives the POST body):
curl -X POST "http://${NACOS_SERVER_ADDR}/nacos/v1/cs/configs" \
-d "dataId=base-gateway-sentinel-rules" \
-d "group=SENTINEL_GROUP" \
-d "namespaceId=prod" \
--data-urlencode "content=$(jq -c . base-gateway-sentinel-rules.json)"
7.2 Dynamic Route Configuration
Configure the Gateway routes through Nacos. Create base-gateway-routes.json:
[
{
"id": "user-service-route",
"order": 1,
"predicates": [
{
"name": "Path",
"args": {
"_genkey_0": "/api/v1/users/**"
}
}
],
"filters": [
{
"name": "StripPrefix",
"args": {
"_genkey_0": "1"
}
},
{
"name": "Sentinel",
"args": {
"resource": "user-service",
"limitApp": "default",
"grade": 1,
"count": 50
}
}
],
"uri": "lb://user-service"
}
]
Publish the routes to Nacos:
curl -X POST "http://${NACOS_SERVER_ADDR}/nacos/v1/cs/configs" \
-d "dataId=base-gateway-routes" \
-d "group=SPRING_CLOUD_GROUP" \
-d "namespaceId=prod" \
--data-urlencode "content=$(jq -c . base-gateway-routes.json)"
8. Monitoring and Alerting
8.1 Hooking into ARMS
Add the following to the application's startup arguments:
-javaagent:/opt/arms/arms-agent/arms-bootstrap-1.7.0-SNAPSHOT.jar
-Darms.licenseKey=your-arms-license-key
-Darms.appName=base-authorization
-Darms.logTransfer=true
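In a containerized deployment these flags have to reach the JVM inside the Pod. Assuming the ARMS agent jar is baked into the image (or mounted from a shared volume) at the path above, one option is to append the flags to the JAVA_OPTS environment variable of the Deployment and pull the license key from a Secret. A fragment to merge into the container spec from section 6.2.2; the Secret name and key are assumptions:
env:
  - name: ARMS_LICENSE_KEY
    valueFrom:
      secretKeyRef:
        name: arms-secret          # assumed Secret holding the ARMS license key
        key: licenseKey
  - name: JAVA_OPTS
    value: >-
      -server -Xms512m -Xmx512m -XX:+UseG1GC
      -javaagent:/opt/arms/arms-agent/arms-bootstrap-1.7.0-SNAPSHOT.jar
      -Darms.licenseKey=$(ARMS_LICENSE_KEY)
      -Darms.appName=base-authorization
      -Darms.logTransfer=true
Kubernetes expands $(ARMS_LICENSE_KEY) because that variable is defined earlier in the same env list.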
8.2 Prometheus + Grafana Monitoring
Create the Prometheus ServiceMonitor, prometheus-serviceMonitor.yaml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: springcloud-services-monitor
namespace: monitoring
labels:
app: springcloud-monitor
spec:
selector:
matchLabels:
monitoring: springcloud
namespaceSelector:
matchNames:
- prod
endpoints:
- port: http
path: /actuator/prometheus
interval: 15s
scrapeTimeout: 5s
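This ServiceMonitor discovers Services by the label monitoring: springcloud, which the Service manifests in section 6.2 do not carry yet. Adding the label to each Service makes them visible to Prometheus; a sketch based on the base-authorization Service:
apiVersion: v1
kind: Service
metadata:
  name: base-authorization
  namespace: prod
  labels:
    app: base-authorization
    service: authorization
    monitoring: springcloud    # matched by the ServiceMonitor's selector
spec:
  ports:
    - port: 8080
      name: http
      targetPort: 8080
  selector:
    app: base-authorization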
8.3 Log Collection
Create log-config.yaml:
apiVersion: logging.kubesphere.io/v1alpha2
kind: Input
metadata:
name: springcloud-log-input
namespace: prod
spec:
tail:
path: /app/logs/*.log
parser:
name: springcloud-log
type: docker
tag: springcloud.*
---
apiVersion: logging.kubesphere.io/v1alpha2
kind: Output
metadata:
name: springcloud-log-output
namespace: prod
spec:
es:
host: ${ES_HOST}
port: ${ES_PORT}
index: springcloud-logs-%Y.%m.%d
logstashFormat: true
logstashPrefix: springcloud-logs
user: ${ES_USER}
password: ${ES_PASSWORD}
---
apiVersion: logging.kubesphere.io/v1alpha2
kind: Flow
metadata:
name: springcloud-log-flow
namespace: prod
spec:
  selectors:
    # A YAML mapping cannot repeat the "app" key, so select the Pods with a single
    # shared label and add that label to both Deployments' Pod templates.
    logging: springcloud
inputRefs:
- springcloud-log-input
outputRefs:
- springcloud-log-output
9. CI/CD Pipeline
9.1 Jenkinsfile
Create a Jenkinsfile to automate build and deployment:
pipeline {
agent any
environment {
PROJECT_NAME = 'SpringCloud'
ACR_REPO = 'registry.cn-beijing.aliyuncs.com/your-namespace'
SERVICE_NAME = 'base-authorization'
NAMESPACE = 'prod'
ACK_MASTER_URL = 'https://xxx.cn-beijing.aliyuncs.com:6443'
}
stages {
stage('Checkout') {
steps {
git url: 'https://gitcode.com/gh_mirrors/sp/SpringCloud.git',
branch: 'master',
credentialsId: 'gitcode-credentials'
}
}
stage('Code Analysis') {
steps {
sh 'mvn clean compile checkstyle:checkstyle'
}
}
stage('Unit Tests') {
steps {
sh 'mvn test'
}
post {
always {
junit '**/target/surefire-reports/*.xml'
}
}
}
stage('Package') {
steps {
sh "cd ${SERVICE_NAME} && mvn clean package -DskipTests -Pprod"
}
}
stage('Build Image') {
steps {
withCredentials([string(credentialsId: 'docker-registry', variable: 'DOCKER_CREDENTIALS')]) {
sh """
echo "${DOCKER_CREDENTIALS}" | docker login --username=your-username --password-stdin registry.cn-beijing.aliyuncs.com
cd ${SERVICE_NAME}
mvn dockerfile:build -DBUILD_NUMBER=${BUILD_NUMBER}
"""
}
}
}
stage('Push Image') {
steps {
sh "cd ${SERVICE_NAME} && mvn dockerfile:push -DBUILD_NUMBER=${BUILD_NUMBER}"
}
}
stage('Deploy to ACK') {
steps {
withKubeConfig([credentialsId: 'ack-credentials', serverUrl: env.ACK_MASTER_URL]) {
sh """
sed -i "s|IMAGE_TAG|${BUILD_NUMBER}|g" k8s/${SERVICE_NAME}-deploy.yaml
kubectl apply -f k8s/${SERVICE_NAME}-deploy.yaml -n ${NAMESPACE}
# 等待部署完成
kubectl rollout status deployment/${SERVICE_NAME} -n ${NAMESPACE}
"""
}
}
}
stage('Verify Deployment') {
steps {
sh """
kubectl get pods -n ${NAMESPACE} -l app=${SERVICE_NAME}
kubectl logs -n ${NAMESPACE} $(kubectl get pods -n ${NAMESPACE} -l app=${SERVICE_NAME} -o jsonpath='{.items[0].metadata.name}') --tail=100
"""
}
}
}
post {
success {
slackSend channel: '#deploy-notifications',
message: "✅ ${env.JOB_NAME} 构建#${env.BUILD_NUMBER} 成功部署到 ${NAMESPACE} 环境"
}
failure {
slackSend channel: '#deploy-notifications',
message: "❌ ${env.JOB_NAME} 构建#${env.BUILD_NUMBER} 部署失败"
}
}
}
10. Best Practices and Optimization
10.1 Resource Sizing
| Service type | CPU request | CPU limit | Memory request | Memory limit | JVM options |
|---|---|---|---|---|---|
| Gateway | 1000m | 2000m | 1Gi | 2Gi | -Xms1g -Xmx1.5g -XX:MetaspaceSize=256m |
| Authorization | 500m | 1000m | 512Mi | 1Gi | -Xms512m -Xmx768m |
| Business services | 200m | 500m | 256Mi | 512Mi | -Xms256m -Xmx384m |
10.2 High Availability
- Multi-zone deployment: spread the ACK nodes across at least 2 availability zones
- PodDisruptionBudget: guarantee a minimum number of available Pods (a sketch follows the HPA example below)
- StatefulSet: deploy stateful services as StatefulSets
- Autoscaling: configure HPA based on CPU, memory, or custom metrics
Example HPA configuration:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: base-authorization-hpa
namespace: prod
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: base-authorization
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
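A PodDisruptionBudget, mentioned in the high-availability list above, keeps a minimum number of replicas running during node drains and cluster upgrades. A sketch for the authorization service:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: base-authorization-pdb
  namespace: prod
spec:
  minAvailable: 2              # keep at least 2 of the 3 replicas during voluntary disruptions
  selector:
    matchLabels:
      app: base-authorization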
10.3 Security Best Practices
- Image security
  - Use a private image registry
  - Enable image vulnerability scanning
  - Run containers as a non-root user
- Network security
  - Use network policies to restrict Pod-to-Pod traffic
  - Encrypt traffic with HTTPS
  - Store sensitive configuration in Secrets (see the sketch below)
- Access control
  - Give each service account the minimum permissions it needs
  - Enable RBAC
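For the Secret point above, sensitive values such as database passwords belong in a Secret rather than a ConfigMap or a plain env value. A sketch; the Secret name and key are illustrative:
apiVersion: v1
kind: Secret
metadata:
  name: base-authorization-secret
  namespace: prod
type: Opaque
stringData:
  datasource_password: "<db-password>"
The Deployment then references it with secretKeyRef in the container env (for example an env entry named DATASOURCE_PASSWORD pointing at the datasource_password key) instead of hard-coding the value in the manifest.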
11. Troubleshooting Common Problems
11.1 Service Registration Issues
Symptom: the service starts but never appears in Nacos.
Troubleshooting steps:
- Check that the Nacos server address is correct
- Check network connectivity:
  kubectl exec -it <pod-name> -n prod -- telnet <nacos-host> 8848
- Look for Nacos connection messages in the application log
- Check that the Nacos namespace exists
11.2 Deployment Failures
Symptom: Pods stay in the Pending state.
Troubleshooting steps:
- Check the events:
  kubectl describe pod <pod-name> -n prod
- Check whether the nodes have enough resources:
  kubectl top nodes
- Check whether the storage is available:
  kubectl get pvc -n prod
11.3 Performance Issues
Symptom: slow responses.
Troubleshooting steps:
- Review the metrics: CPU, memory, GC, thread counts
- Check slow-query logs
- Analyze the SkyWalking traces
- Check whether Sentinel is throttling requests
12. Summary and Outlook
This article walked through deploying the Spring Cloud microservice scaffold on an Alibaba Cloud ACK cluster, from architecture design and environment preparation to deployment and monitoring, as a complete hands-on guide. With containerized, cloud-native deployment, a team can significantly improve elasticity, resource utilization, and operational efficiency.
Directions for further improvement:
- Introduce the Istio service mesh for finer-grained traffic control
- Adopt GitOps and manage deployments with ArgoCD
- Use Alibaba Cloud EDAS for full-link gray (canary) releases
- Explore serverless containers to reduce operational cost
By continuously improving the deployment architecture and operations workflow, an organization can spend more of its energy on business innovation rather than infrastructure management, which is where a microservice architecture really pays off.
13. Appendix: Deployment Checklist
13.1 Manifest Files
| File path | Purpose |
|---|---|
| k8s/namespace.yaml | Namespace definition |
| k8s/configmap.yaml | Application ConfigMap |
| k8s/pvc.yaml | Persistent volume claims |
| k8s/base-authorization-deploy.yaml | Authorization service deployment |
| k8s/base-gateway-deploy.yaml | Gateway service deployment |
| k8s/ingress.yaml | Ingress configuration |
| k8s/hpa.yaml | Autoscaling configuration |
| k8s/servicemonitor.yaml | Monitoring configuration |
13.2 Frequently Used Commands
# List all resources in the namespace
kubectl get all -n prod
# Describe a Pod
kubectl describe pod <pod-name> -n prod
# View service logs (last 100 lines)
kubectl logs --tail=100 <pod-name> -n prod
# Open a shell in a container
kubectl exec -it <pod-name> -n prod -- /bin/bash
# Port forwarding
kubectl port-forward -n prod <pod-name> 8080:8080
# View rollout history
kubectl rollout history deployment <deployment-name> -n prod
# Roll back a deployment
kubectl rollout undo deployment <deployment-name> --to-revision=2 -n prod
13.3 Estimated Alibaba Cloud Costs
| Resource | Specification | Monthly cost (CNY) |
|---|---|---|
| ACK cluster | 3 control-plane nodes (4 vCPU, 16 GiB) | ~2,400 |
| Worker nodes | 6 worker nodes (8 vCPU, 16 GiB) | ~4,800 |
| RDS MySQL | 4 vCPU, 16 GiB, 100 GB SSD | ~1,500 |
| Nacos | 3 nodes (4 vCPU, 16 GiB) | ~2,100 |
| Elasticsearch | 3 nodes (4 vCPU, 16 GiB) | ~3,000 |
| Other (storage/network/SLB) | - | ~1,200 |
| Total | - | ~15,000 |
Disclosure: parts of this article were produced with AI assistance (AIGC) and are provided for reference only.