mediamtx Deployment in Practice: Docker Containerization and Kubernetes Cluster Options

mediamtx: Ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and media proxy that allows to read, publish, proxy and record video and audio streams. Project page: https://gitcode.com/GitHub_Trending/me/mediamtx

Preface: Why Containerize?

Have you run into any of these pain points while deploying a streaming media server?

  • Complex environment dependencies and frequent compatibility issues across OS versions
  • Tedious deployment, with network ports and security policies configured by hand
  • Poor scalability, making high availability and load balancing hard to achieve
  • Risky maintenance, where upgrades and rollbacks are error-prone

MediaMTX is a capable real-time media server supporting SRT, WebRTC, RTSP, RTMP, HLS and more, and containerized deployment addresses all of the problems above. This article walks through MediaMTX's Docker deployment options and Kubernetes cluster strategies in detail.

1. Docker Containerized Deployment

1.1 The Official Docker Image

MediaMTX ships an official Docker image built for multiple architectures:

# Standard Dockerfile
FROM --platform=linux/amd64 scratch AS binaries
ADD binaries/mediamtx_*_linux_amd64.tar.gz /linux/amd64
ADD binaries/mediamtx_*_linux_armv6.tar.gz /linux/arm/v6
ADD binaries/mediamtx_*_linux_armv7.tar.gz /linux/arm/v7
ADD binaries/mediamtx_*_linux_arm64.tar.gz /linux/arm64

FROM scratch
ARG TARGETPLATFORM
COPY --from=binaries /$TARGETPLATFORM /
ENTRYPOINT [ "/mediamtx" ]

1.2 Single-Host Docker Deployment

Basic deployment commands
# Pull the latest official image
docker pull bluenviron/mediamtx

# Create the configuration and recordings directories
mkdir -p /opt/mediamtx/config /opt/mediamtx/recordings

# Download the default configuration file
curl -o /opt/mediamtx/config/mediamtx.yml https://raw.githubusercontent.com/bluenviron/mediamtx/main/mediamtx.yml

# Run the container.
# Published ports: 1935 RTMP, 8554 RTSP, 8888 HLS, 8889 WebRTC (HTTP),
# 8189/udp WebRTC ICE, 8890/udp SRT, 9997 Control API, 9998 metrics, 9999 pprof.
# (Comments cannot follow a trailing "\", so they are listed here instead.)
docker run -d \
  --name mediamtx \
  --restart unless-stopped \
  -p 1935:1935 \
  -p 8554:8554 \
  -p 8888:8888 \
  -p 8889:8889 \
  -p 8189:8189/udp \
  -p 8890:8890/udp \
  -p 9997:9997 \
  -p 9998:9998 \
  -p 9999:9999 \
  -v /opt/mediamtx/config/mediamtx.yml:/mediamtx.yml:ro \
  -v /opt/mediamtx/recordings:/recordings \
  bluenviron/mediamtx
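If `docker run` fails to bind a port, something else on the host already owns it. A minimal pre-flight check, sketched in Python (not part of MediaMTX; it only probes TCP, so the UDP ports for SRT and WebRTC ICE are not covered):

```python
import socket

# Host TCP ports published by the docker run command above
MEDIAMTX_TCP_PORTS = [1935, 8554, 8888, 8889, 9997, 9998, 9999]

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing on the host is currently bound to the TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for p in MEDIAMTX_TCP_PORTS:
        print(f"{p}: {'free' if port_free(p) else 'IN USE'}")
```

Run it before starting the container; any port reported as IN USE needs a different host-side mapping (e.g. `-p 18554:8554`).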
Custom Configuration Deployment

Create a custom configuration file, custom-mediamtx.yml:

# Global settings
logLevel: info
logDestinations: [stdout]

# Protocol servers
rtsp: yes
rtspAddress: :8554

rtmp: yes
rtmpAddress: :1935

hls: yes
hlsAddress: :8888

webrtc: yes
webrtcAddress: :8889

srt: yes
srtAddress: :8890

# Control API
api: yes
apiAddress: :9997

# Metrics
metrics: yes
metricsAddress: :9998

# Path defaults
pathDefaults:
  source: publisher
  record: yes
  recordPath: /recordings/%path/%Y-%m-%d_%H-%M-%S-%f
  recordFormat: fmp4
  recordDeleteAfter: 168h  # 7 days; durations use h/m/s units

Run with the custom configuration:

docker run -d \
  --name mediamtx-custom \
  -p 1935:1935 \
  -p 8554:8554 \
  -p 8888:8888 \
  -p 8889:8889 \
  -p 8890:8890/udp \
  -v $(pwd)/custom-mediamtx.yml:/mediamtx.yml:ro \
  -v $(pwd)/recordings:/recordings \
  bluenviron/mediamtx
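The `%`-placeholders in `recordPath` are expanded by MediaMTX itself when each segment is written: `%path` is the stream path name and the rest broadly follow strftime conventions. As a rough illustration only (this is not MediaMTX's implementation), the expansion can be sketched as:

```python
from datetime import datetime

def expand_record_path(pattern: str, path_name: str, ts: datetime) -> str:
    """Sketch of MediaMTX-style recordPath expansion (illustrative only)."""
    # %path is replaced with the stream path name first...
    expanded = pattern.replace("%path", path_name)
    # ...then %Y/%m/%d/%H/%M/%S/%f behave like strftime directives
    return ts.strftime(expanded)

print(expand_record_path(
    "/recordings/%path/%Y-%m-%d_%H-%M-%S-%f",
    "live",
    datetime(2024, 5, 1, 12, 30, 0),
))
# → /recordings/live/2024-05-01_12-30-00-000000
```

This is useful for predicting where a given stream's segments will land before wiring up cleanup or playback tooling.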

1.3 Docker Compose Deployment

Create a docker-compose.yml file:

version: '3.8'

services:
  mediamtx:
    image: bluenviron/mediamtx:latest
    container_name: mediamtx
    restart: unless-stopped
    ports:
      - "1935:1935"    # RTMP
      - "8554:8554"    # RTSP
      - "8888:8888"    # HLS
      - "8889:8889"    # WebRTC
      - "8890:8890/udp"  # SRT (UDP)
      - "9997:9997"    # Control API
      - "9998:9998"    # Metrics
    volumes:
      - ./config/mediamtx.yml:/mediamtx.yml:ro
      - ./recordings:/recordings
    environment:
      - TZ=Asia/Shanghai
    networks:
      - mediamtx-net

networks:
  mediamtx-net:
    driver: bridge

Start the services:

# Create the config and recordings directories
mkdir -p config recordings

# Download the default configuration
curl -o config/mediamtx.yml https://raw.githubusercontent.com/bluenviron/mediamtx/main/mediamtx.yml

# Start the stack
docker-compose up -d

# Tail the logs
docker-compose logs -f

2. Kubernetes Cluster Deployment

2.1 Architecture Design

The design used below puts a LoadBalancer Service in front of a multi-replica MediaMTX Deployment: configuration is injected from a ConfigMap, recordings are written to a shared ReadWriteMany PersistentVolumeClaim, and Prometheus scrapes each pod's metrics port. One caveat to plan for: MediaMTX replicas do not share stream state, so a reader must reach the same pod as the publisher of that path; use session affinity or configure paths to pull from an origin instance.

2.2 Kubernetes Resource Manifests

Namespace manifest
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mediamtx
  labels:
    name: mediamtx
ConfigMap manifest
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mediamtx-config
  namespace: mediamtx
data:
  mediamtx.yml: |
    logLevel: info
    logDestinations: [stdout]
    
    rtsp: yes
    rtspAddress: :8554
    
    rtmp: yes
    rtmpAddress: :1935
    
    hls: yes
    hlsAddress: :8888
    
    webrtc: yes
    webrtcAddress: :8889
    
    srt: yes
    srtAddress: :8890
    
    api: yes
    apiAddress: :9997
    
    metrics: yes
    metricsAddress: :9998
    
    pathDefaults:
      source: publisher
      record: yes
      recordPath: /recordings/%path/%Y-%m-%d_%H-%M-%S-%f
      recordFormat: fmp4
      recordDeleteAfter: 168h  # 7 days; durations use h/m/s units
Deployment manifest
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mediamtx
  namespace: mediamtx
  labels:
    app: mediamtx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mediamtx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: mediamtx
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9998"
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - name: mediamtx
        image: bluenviron/mediamtx:latest
        ports:
        - containerPort: 1935   # RTMP
          name: rtmp
        - containerPort: 8554   # RTSP
          name: rtsp
        - containerPort: 8888   # HLS
          name: hls
        - containerPort: 8889   # WebRTC
          name: webrtc
        - containerPort: 8890   # SRT
          name: srt
        - containerPort: 9997   # Control API
          name: api
        - containerPort: 9998   # Metrics
          name: metrics
        volumeMounts:
        - name: config-volume
          mountPath: /mediamtx.yml
          subPath: mediamtx.yml
        - name: recordings-volume
          mountPath: /recordings
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "1"
        livenessProbe:
          httpGet:
            path: /v3/paths/list   # Control API; older releases use /v2/paths/list
            port: api
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /v3/paths/list
            port: api
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: config-volume
        configMap:
          name: mediamtx-config
      - name: recordings-volume
        persistentVolumeClaim:
          claimName: mediamtx-recordings-pvc
Service manifest
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mediamtx-service
  namespace: mediamtx
  labels:
    app: mediamtx   # lets the ServiceMonitor in section 2.4 select this Service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  selector:
    app: mediamtx
  ports:
  - name: rtmp
    port: 1935
    targetPort: 1935
    protocol: TCP
  - name: rtsp
    port: 8554
    targetPort: 8554
    protocol: TCP
  - name: hls
    port: 8888
    targetPort: 8888
    protocol: TCP
  - name: webrtc
    port: 8889
    targetPort: 8889
    protocol: TCP
  - name: srt
    port: 8890
    targetPort: 8890
    protocol: UDP
  - name: api
    port: 9997
    targetPort: 9997
    protocol: TCP
  - name: metrics
    port: 9998
    targetPort: 9998
    protocol: TCP
  type: LoadBalancer
PersistentVolumeClaim manifest
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mediamtx-recordings-pvc
  namespace: mediamtx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-client
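ReadWriteMany requires a storage backend with shared-access support (NFS, CephFS, EFS and similar). An `nfs-client` class is typically backed by the NFS subdir external provisioner; a hedged example follows, where the provisioner name and parameters depend on how that provisioner was installed in your cluster:

```yaml
# storageclass.yaml -- example only; adjust to your cluster's NFS setup
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # assumes this provisioner is installed
parameters:
  archiveOnDelete: "false"   # do not keep a copy of deleted recording volumes
reclaimPolicy: Delete
```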

2.3 Deployment Commands

# Create the namespace
kubectl apply -f namespace.yaml

# Create the configuration
kubectl apply -f configmap.yaml

# Create the persistent storage claim
kubectl apply -f pvc.yaml

# Create the deployment
kubectl apply -f deployment.yaml

# Create the service
kubectl apply -f service.yaml

# Check the deployment status
kubectl get all -n mediamtx

# Tail pod logs
kubectl logs -f deployment/mediamtx -n mediamtx

# Get the service's external IP
kubectl get svc mediamtx-service -n mediamtx

2.4 Monitoring and Alerting

ServiceMonitor manifest
# servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mediamtx-monitor
  namespace: mediamtx
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: mediamtx
  namespaceSelector:
    matchNames:
    - mediamtx
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
Grafana Dashboard

Build a dashboard JSON covering the key metrics:

  • Connection counts
  • Stream throughput
  • Memory and CPU usage
  • Recording status
  • Protocol usage distribution
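A few starter queries for those panels. The container-level queries use standard cAdvisor metrics exposed through the kubelet; the MediaMTX-specific metric names are placeholders to verify against your own /metrics output first:

```promql
# CPU usage per pod in cores, 5-minute rate (cAdvisor metric)
sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="mediamtx"}[5m]))

# Working-set memory per pod (cAdvisor metric)
sum by (pod) (container_memory_working_set_bytes{namespace="mediamtx"})

# Total active connections (placeholder MediaMTX metric names; verify first)
sum(rtsp_conns) + sum(rtmp_conns)
```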

3. Advanced Configuration and Tuning

3.1 Network Performance Tuning

UDP-heavy protocols (SRT, WebRTC) benefit from larger kernel socket buffers, but net.core.rmem_max and net.core.wmem_max are node-level sysctls: they cannot be set through container env vars or per-pod sysctls. Tune them on the node itself, or, as one common approach, from a privileged init container:

# Fragment to merge into the Deployment's pod template
spec:
  template:
    spec:
      initContainers:
      - name: sysctl-tuning
        image: busybox:1.36
        securityContext:
          privileged: true   # required to write node-level sysctls
        # 8 MiB buffers; the Linux default (212992) is usually too small for SRT
        command: ["sh", "-c", "sysctl -w net.core.rmem_max=8388608 net.core.wmem_max=8388608"]
      containers:
      - name: mediamtx
        # ... unchanged from deployment.yaml

3.2 Resource Limits and QoS

# ResourceQuota for the mediamtx namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mediamtx-quota
  namespace: mediamtx
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.storage: 200Gi

3.3 Autoscaling

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mediamtx-hpa
  namespace: mediamtx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mediamtx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  - type: Pods
    pods:
      metric:
        name: connections_per_pod   # custom metric; requires a custom-metrics adapter such as prometheus-adapter
      target:
        type: AverageValue
        averageValue: "1000"

4. Security Hardening

4.1 Network Policies

# networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mediamtx-network-policy
  namespace: mediamtx
spec:
  podSelector:
    matchLabels:
      app: mediamtx
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 1935   # RTMP
    - protocol: TCP
      port: 8554   # RTSP
    - protocol: TCP
      port: 8888   # HLS
    - protocol: TCP
      port: 8889   # WebRTC (HTTP)
    - protocol: UDP
      port: 8890   # SRT
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 53     # DNS
    - protocol: UDP
      port: 53     # DNS

4.2 TLS Certificates

# Manage TLS certificates automatically with cert-manager
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mediamtx-tls
  namespace: mediamtx
spec:
  secretName: mediamtx-tls-secret
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - mediamtx.example.com
  - "*.mediamtx.example.com"

5. Troubleshooting and Maintenance

5.1 Common Issues

| Symptom | Likely cause | Resolution |
| --- | --- | --- |
| Connection timeouts | Network policy restrictions | Check NetworkPolicy and firewall rules |
| Stream won't play | Configuration error | Validate the mediamtx.yml configuration |
| Recording failures | Storage permission problems | Check PVC mount permissions |
| Degraded performance | Insufficient resources | Adjust resource limits and replica count |

5.2 Metrics Reference

MediaMTX exposes a rich set of Prometheus metrics:

# List the available metrics
curl http://mediamtx-service:9998/metrics

# Sample output (metric names and labels vary between MediaMTX versions;
# treat the endpoint above as the authoritative list)
paths{name="live",state="ready"} 1
rtsp_conns 12
rtmp_conns 8
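When Grafana is not available, the plain-text exposition format is simple enough to inspect with a few lines of Python. A small sketch that parses a scraped payload and totals the connection counters (the sample metric names are illustrative, as above):

```python
import re

def parse_prometheus_text(payload: str) -> dict:
    """Parse simple Prometheus text-format lines into {metric{labels}: value}."""
    metrics = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        m = re.match(r"^(\S+?)(\{[^}]*\})?\s+([-+0-9.eE]+)$", line)
        if m:
            name, labels, value = m.groups()
            metrics[name + (labels or "")] = float(value)
    return metrics

# Example payload, as if scraped from :9998/metrics
sample = """
# HELP paths number of paths
paths{name="live",state="ready"} 1
rtsp_conns 12
rtmp_conns 8
"""
parsed = parse_prometheus_text(sample)
total_conns = parsed["rtsp_conns"] + parsed["rtmp_conns"]
print(total_conns)  # → 20.0
```

In practice the payload would come from `curl`ing the metrics endpoint; the same parsing works unchanged.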

5.3 Log Analysis

# Tail live logs
kubectl logs -f deployment/mediamtx -n mediamtx

# Search for errors
kubectl logs deployment/mediamtx -n mediamtx | grep -i error

# Export the last hour of logs for offline analysis
kubectl logs deployment/mediamtx -n mediamtx --since=1h > mediamtx-logs.log

6. Summary and Best Practices

Having worked through the material above, you should now have a complete containerized deployment approach for MediaMTX. Key best practices:

  1. Production deployment: run on a Kubernetes cluster for high availability and elastic scaling
  2. Monitoring and alerting: build a complete monitoring stack so you always know the system's state
  3. Security hardening: apply network policies, TLS encryption and access control
  4. Performance tuning: adjust resource quotas and replica counts to match actual load
  5. Backup and recovery: back up configuration and recordings regularly, and keep a disaster recovery plan

Containerizing MediaMTX not only reduces operational complexity but also gives a streaming service enterprise-grade reliability and scalability; there is a suitable deployment option for anything from a small project to a large production environment.

Next Steps

  • Adapt the configuration files to your actual business requirements
  • Set monitoring alert thresholds
  • Establish a regular maintenance schedule
  • Run load tests to validate cluster performance

With a well-planned containerized deployment, MediaMTX can provide a stable and efficient foundation for your streaming workloads.


Disclosure: parts of this article were produced with AI assistance (AIGC) and are provided for reference only.
