I. Environment Preparation
Open-source repository:
https://gitee.com/y_project/
1. RuoYi is a permission-management system built on Spring Boot, widely used for the rapid development of enterprise back-office systems.
RuoYi uses a mainstream Java back-end stack and supports both a front/back-end separated mode and a monolithic mode:
Back-end framework: Spring Boot + MyBatis (or MyBatis-Plus)
Security framework: Apache Shiro or Spring Security (varies by edition)
Database: MySQL + Redis (cache)
Front end (monolithic edition): Thymeleaf + Bootstrap
Front end (separated edition): Vue3 + Element Plus
Build tool: Maven/Gradle
Other components: Quartz (scheduled jobs), WebSocket (message push), Druid (database connection pool)
2. Typical Use Cases
Internal enterprise systems such as OA, CRM, and ERP.
Rapid prototyping: scaffold a working baseline quickly with the built-in code generator.
Learning material: a practical project for Java developers studying Spring Boot, permission management, and front/back-end separation.
3. Project Ecosystem
RuoYi offers several derived editions for different needs:
RuoYi-Cloud: microservice edition based on Spring Cloud.
RuoYi-Vue: front/back-end separated edition (Spring Boot back end + Vue front end).
RuoYi-App: mobile solution (Uniapp integration).
Summary
RuoYi is a Java back-office framework centered on permission management and rapid development, suited to teams that need to stand up enterprise applications quickly or to developers learning the mainstream stack. Its code generator and modular design significantly reduce development cost, making it a solid choice for small and mid-sized teams or personal projects. To go deeper, start from the official documentation and the code generator and implement business requirements from there.
## Environment Overview
- Kubernetes cluster:
  - Master node: master231 (10.0.0.231)
  - Worker node: worker232 (10.0.0.232)
  - Worker node: worker233 (10.0.0.233)
- Cluster version: v1.23.17
- Storage: default StorageClass
- Networking: Nginx Ingress Controller
---
II. Cluster Initialization Preparation
2.1 Node Environment Verification
[master231]$ kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE   VERSION    INTERNAL-IP
master231   Ready    master   15d   v1.23.17   10.0.0.231
worker232   Ready    <none>   15d   v1.23.17   10.0.0.232
worker233   Ready    <none>   15d   v1.23.17   10.0.0.233
# Install the build tooling (git/maven/node are only needed on the machine used to build the code and images)
sudo apt-get update && sudo apt-get install -y git maven nodejs npm
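The manifests below also rely on a usable default StorageClass and a running Nginx Ingress Controller (see the notes at the end of this document). A quick pre-flight check, assuming the controller was installed into the conventional ingress-nginx namespace:
[master231]$ kubectl get storageclass                   # one class should be annotated as (default)
[master231]$ kubectl -n ingress-nginx get pods -o wide  # controller pods should be Running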
III. Code Compilation and Image Build
1. Backend Build
git clone https://gitee.com/y_project/RuoYi.git
cd RuoYi
mvn clean package -Dmaven.test.skip=true
# Create the Dockerfile for the backend image (written to the project root here)
cat > Dockerfile <<EOF
FROM openjdk:11-jdk
VOLUME /tmp
# Copy the boot jar produced by mvn package (adjust the path if your build output differs)
COPY ruoyi-admin/target/ruoyi-admin.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
EOF
docker build -t ruoyi-backend:4.7.6 .
2. Frontend Build
git clone https://gitee.com/y_project/RuoYi-Vue.git
cd RuoYi-Vue/ruoyi-ui
npm install
npm run build:prod
# Create the Dockerfile for the frontend image (written to ruoyi-ui here)
cat > Dockerfile <<EOF
FROM nginx:1.21-alpine
COPY dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EOF
# Create the Nginx configuration (nginx.conf)
cat > nginx.conf <<EOF
server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files \$uri \$uri/ /index.html;
    }

    # Proxy API calls to the backend Service; the trailing slash strips the /prod-api prefix
    location /prod-api/ {
        proxy_pass http://ruoyi-backend-service:8080/;
    }
}
EOF
docker build -t ruoyi-frontend:4.7.6 .
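Both images are built locally, so every worker node must be able to obtain them before the Deployments can be scheduled (see the notes at the end of this document). A minimal sketch, assuming a private registry at 10.0.0.250:5000 (a hypothetical address; substitute your own, or docker save/docker load the images onto each node and set imagePullPolicy: Never):
docker tag ruoyi-backend:4.7.6  10.0.0.250:5000/ruoyi-backend:4.7.6
docker tag ruoyi-frontend:4.7.6 10.0.0.250:5000/ruoyi-frontend:4.7.6
docker push 10.0.0.250:5000/ruoyi-backend:4.7.6
docker push 10.0.0.250:5000/ruoyi-frontend:4.7.6
# remember to use the registry-prefixed names in the image: fields of the Deployments below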
IV. Kubernetes Resource Manifests
1. Namespace (00-namespace.yaml)
apiVersion: v1
kind: Namespace
metadata:
  name: ruoyi-system
2. MySQL Deployment (01-mysql.yaml)
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: ruoyi-system
type: Opaque
data:
  root-password: Y2FvZmFjYW4yMDA1  # echo -n "caofacan2005" | base64
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: ruoyi-system
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: root-password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: ruoyi-system
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: ruoyi-system
spec:
  selector:
    app: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
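If you use a different password, regenerate the Secret value; echo must be called with -n so that no trailing newline gets encoded. A quick round-trip check:
echo -n "caofacan2005" | base64        # -> Y2FvZmFjYW4yMDA1
echo Y2FvZmFjYW4yMDA1 | base64 -d      # -> caofacan2005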
3. Redis Deployment (02-redis.yaml)
apiVersion: v1
kind: Secret
metadata:
  name: redis-secret
  namespace: ruoyi-system
type: Opaque
data:
  password: Y2FvZmFjYW4yMDA1  # echo -n "caofacan2005" | base64
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: ruoyi-system
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6.2-alpine
        command: ["redis-server", "--requirepass", "$(REDIS_PASSWORD)"]
        env:
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-secret
              key: password
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: ruoyi-system
spec:
  selector:
    app: redis
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
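Once the Redis pod is up, a quick connectivity check (kubectl exec against the Deployment picks one of its pods; the reply should be PONG, and redis-cli will warn about passing the password on the command line):
kubectl -n ruoyi-system exec deploy/redis -- redis-cli -a caofacan2005 ping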
4. Backend Deployment (03-backend.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-backend
  namespace: ruoyi-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ruoyi-backend
  template:
    metadata:
      labels:
        app: ruoyi-backend
    spec:
      containers:
      - name: backend
        image: ruoyi-backend:4.7.6
        env:
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: root-password
        - name: SPRING_REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-secret
              key: password
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-backend-service
  namespace: ruoyi-system
spec:
  selector:
    app: ruoyi-backend
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
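Note that the two password variables above only override credentials; the stock RuoYi configuration typically points the datasource (and, in the Vue edition, Redis) at localhost. The exact property keys depend on the RuoYi edition and version (for example, the Druid setup reads spring.datasource.druid.master.url), so treat the variable names below as assumptions to verify against your application.yml; baking an adjusted configuration into the image or mounting one via a ConfigMap is the more robust alternative. A sketch using Spring Boot's relaxed binding of environment variables:
kubectl -n ruoyi-system set env deployment/ruoyi-backend \
  SPRING_REDIS_HOST=redis-service \
  SPRING_REDIS_PORT=6379 \
  SPRING_DATASOURCE_DRUID_MASTER_URL='jdbc:mysql://mysql-service:3306/ry?useUnicode=true&characterEncoding=utf8&serverTimezone=Asia/Shanghai'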
5. Frontend Deployment (04-frontend.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-frontend
  namespace: ruoyi-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ruoyi-frontend
  template:
    metadata:
      labels:
        app: ruoyi-frontend
    spec:
      containers:
      - name: frontend
        image: ruoyi-frontend:4.7.6
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-frontend-service
  namespace: ruoyi-system
spec:
  selector:
    app: ruoyi-frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ruoyi-ingress
  namespace: ruoyi-system
spec:
  ingressClassName: nginx  # matches the IngressClass created by ingress-nginx; drop this if your controller is set as the cluster default
  rules:
  - host: ruoyi.local
    http:
      paths:
      # All traffic goes to the frontend; its Nginx forwards /prod-api/ to the backend
      # and strips the prefix. Routing /prod-api/ straight to the backend here would
      # require a rewrite annotation, because the backend does not expect that prefix.
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ruoyi-frontend-service
            port:
              number: 80
V. Deployment Workflow
# Apply the manifests in order
kubectl apply -f 00-namespace.yaml
kubectl apply -f 01-mysql.yaml
kubectl apply -f 02-redis.yaml
# Wait for the database to become ready
kubectl -n ruoyi-system wait --for=condition=ready pod -l app=mysql --timeout=300s
# Initialize the database (run these from the directory containing the RuoYi checkout so the SQL path resolves;
# the database is created with a UTF-8 character set so the Chinese seed data imports cleanly)
kubectl -n ruoyi-system exec -i $(kubectl get pod -l app=mysql -n ruoyi-system -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -pcaofacan2005 -e "CREATE DATABASE IF NOT EXISTS ry DEFAULT CHARACTER SET utf8mb4;"
kubectl -n ruoyi-system exec -i $(kubectl get pod -l app=mysql -n ruoyi-system -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -pcaofacan2005 ry < RuoYi/sql/ry_20230710.sql
# (if your checkout also ships sql/quartz.sql, import it the same way)
# Deploy the applications
kubectl apply -f 03-backend.yaml
kubectl apply -f 04-frontend.yaml
# Verify the deployment
kubectl -n ruoyi-system get all,ingress
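Before moving on, confirm that both Deployments rolled out cleanly and that the backend started without datasource errors:
kubectl -n ruoyi-system rollout status deployment/ruoyi-backend
kubectl -n ruoyi-system rollout status deployment/ruoyi-frontend
kubectl -n ruoyi-system logs deployment/ruoyi-backend --tail=50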
VI. Deployment Verification
1. Status Check
[master231]$ kubectl get pods -n ruoyi-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE
ruoyi-backend-7568d4d857-2jqh7    1/1     Running   0          3m    10.244.1.23   worker232
ruoyi-frontend-7c5b6d984f-kg9wq   1/1     Running   0          2m    10.244.2.45   worker233
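It also helps to confirm that each Service has selected its pods; an empty ENDPOINTS column usually indicates a label/selector mismatch:
[master231]$ kubectl -n ruoyi-system get svc,endpoints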
VII. Access Verification
1. Access Test
# Add a hosts entry on every machine that will access the site (the hostname must match the Ingress rule)
[master231]$ sudo sh -c 'echo "10.0.0.231 ruoyi.local" >> /etc/hosts'
Browse to: http://ruoyi.local:30080
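The port 30080 above is assumed to be the HTTP NodePort of the ingress-nginx controller Service; confirm the actual value on your cluster (the Service name and namespace depend on how the controller was installed):
[master231]$ kubectl -n ingress-nginx get svc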
Log in with the admin account. The stock SQL script seeds admin / admin123; if your seed data was customized to use caofacan2005, log in with that instead.
VIII. Troubleshooting Guide
Database connection errors:
[master231]$ kubectl -n ruoyi-system exec -it $(kubectl get pod -l app=mysql -n ruoyi-system -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -pcaofacan2005
Ingress not taking effect:
[master231]$ kubectl get ingress -n ruoyi-system
[master231]$ kubectl logs -n ingress-nginx ingress-nginx-controller-xxxx   # replace xxxx with the actual controller pod name
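If the backend cannot reach MySQL or Redis, first rule out in-cluster DNS problems; a sketch assuming the backend image is Debian-based, so getent is available:
[master231]$ kubectl -n ruoyi-system exec deploy/ruoyi-backend -- getent hosts mysql-service redis-service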
- Images must be pushed to a private registry in advance or pre-loaded on every node (set imagePullPolicy: Never in that case).
- Persistent storage requires a usable StorageClass.
- The Ingress requires an Ingress Controller to be installed in the cluster.
- All password Secrets use "caofacan2005".
- To adjust resource limits, add a resources block to the Deployments (see the sketch after this list).
- Compatibility with Kubernetes v1.23 has been verified; all API versions used here match that release.
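As a sketch of the resources note above, limits can either be declared under containers[].resources in the Deployment manifests or patched in afterwards, for example:
kubectl -n ruoyi-system set resources deployment ruoyi-backend \
  --requests=cpu=500m,memory=512Mi --limits=cpu=1,memory=1Gi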
This document targets a Kubernetes cluster with the hostnames master231/worker232/worker233; all password Secrets use "caofacan2005", and the deployment procedure has been verified in a real environment.