How-To Guide: Running a Highly Available MySQL Database on OpenShift

This article walks through deploying a highly available MySQL database on OpenShift using Portworx. First, install the Portworx external volume plugin to enable enterprise-grade storage features. Next, create a Kubernetes StorageClass that configures the replication factor, IO priority, and snapshot interval. Then use the Portworx MySQL template to create a persistent volume, configuring the memory limit and storage parameters. Deploy the pods in OpenShift and verify the database's high availability by cordoning a node and deleting the pod, observing that it is automatically rescheduled. Portworx also provides encryption, snapshots, backups, and other advanced features to keep the database secure and running efficiently.


Portworx is Red Hat technology-certified.

Our articles cover MySQL on Kubernetes across different platforms and scenarios. The related articles are listed below:

OpenShift Container Platform is Red Hat's application deployment platform for private cloud and on-premises environments. Many users run stateless applications on OpenShift, but running stateful applications such as databases on OpenShift remains a significant challenge. Red Hat offers a range of enterprise storage solutions, but neither GlusterFS (which Red Hat calls Container-Native Storage) nor Ceph was designed to run high-performance, low-latency databases. GlusterFS and Ceph are fine projects, yet both have enough problems when hosting databases that many OpenShift users have given up on running data services on OpenShift.

These problems can be solved, however. In this article we use the open-source database MySQL as an example to demonstrate how to run a database on OpenShift.

The key to running databases on OpenShift is a cloud-native storage solution designed specifically for high-performance databases and other stateful applications.

Portworx is built according to DevOps principles, designed for running stateful applications and production systems in containers. With Portworx, users can manage any database or stateful service on any infrastructure, using any container scheduler, including:

OpenShift 3.7 introduced support for external volume plugins, so users can protect critical application databases with Portworx's enterprise-grade storage features: encryption, snapshots, backups, and high availability.

In this article, we demonstrate how to run a highly available MySQL database on OpenShift in five steps:

1. Install the external volume plugin for OpenShift, enabling snapshots, backups, high availability, and encryption.

2. Create a Kubernetes StorageClass with replication factor 2, IO priority "high", and a snapshot interval of 60 minutes. These values can be tuned to your needs.

3. Create a MySQL template in OpenShift: import the JSON and configure the OpenShift MySQL persistent volume, including the memory limit, MySQL parameters, and the StorageClass and volume size.

4. Create a MySQL persistent volume from the template and deploy OpenShift pods that use it.

5. Verify MySQL high availability: cordon a node and delete the pod, then watch MySQL get rescheduled automatically.

If you would like to learn more about running high-performance databases on OpenShift, see the documentation and videos on the Portworx website.

Installing Portworx on OpenShift 3.7

Installing Portworx

Portworx is installed on OpenShift as a DaemonSet. Visit https://install.portworx.com to generate your px-spec.yaml file, then run oc apply -f px-spec.yaml.
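As a sketch, the install flow looks like the following. The spec generator's query parameters (Kubernetes version, cluster name, and so on) depend on your environment; the values shown here are illustrative assumptions, not the exact ones your cluster needs:

```shell
# Generate a Portworx DaemonSet spec. The query parameters below are
# placeholders; use the values the generator at install.portworx.com gives you.
curl -fsSL -o px-spec.yaml "https://install.portworx.com/?kbver=1.7.6&c=px-demo"

# Install Portworx as a DaemonSet and watch the pods come up on each node.
oc apply -f px-spec.yaml
oc get pods -n kube-system -l name=portworx -o wide
```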

Once Portworx is installed, we create a StorageClass to dynamically provision volumes for our MySQL instance.

Below is an example of the StorageClass we will use to create our volumes:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: px-demo-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  priority_io: "high"
  snap_interval: "60"
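To see dynamic provisioning in action, a PersistentVolumeClaim can reference this class. This manifest is a sketch; the claim name and size are hypothetical, and the MySQL template used later in this article creates an equivalent PVC for you:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-mysql-pvc            # hypothetical name
spec:
  storageClassName: px-demo-sc  # the StorageClass defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi              # illustrative size
```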

This example creates a StorageClass named px-demo-sc and configures several Portworx parameters:

Replication – repl: "2"

This controls how many replicas of the volume are kept in the cluster. Portworx supports replication factors of 1, 2, or 3. With a factor of 2 or 3, Portworx synchronously replicates the volume to two or three nodes in the cluster, guaranteeing data durability. If a node dies, Portworx and OpenShift reschedule the pod onto another worker node in the cluster that holds a replica of the Portworx volume.
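You can confirm where the replicas landed using the Portworx CLI on any storage node. The volume ID below is a placeholder, and the exact output format may differ between Portworx versions:

```shell
# List Portworx volumes, then inspect one to see its replication set,
# i.e. which nodes hold copies of the data.
/opt/pwx/bin/pxctl volume list
/opt/pwx/bin/pxctl volume inspect pvc-xxxxxxxx   # replace with your volume ID
```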

IO Priority – priority_io: "high"

Portworx lets you create three storage pools: high, medium, and low. You can use servers with SSD, HDD, and SATA storage: SSDs map to high, HDDs to medium, and SATA to low. In cloud environments the same can be achieved by provisioning different IOPS levels. When a StorageClass requests high priority, Portworx schedules the pod onto servers backed by SSD storage.

Snapshots – snap_interval: "60"

Portworx will take a snapshot every 60 minutes. These snapshots can be used to roll back the database, test upgrades, and support development and testing.

Note: Portworx can also back up your container volumes to cloud or on-premises object storage (https://docs.portworx.com/cloud/backups.html). You can schedule these backups, and they can be encrypted and restored to the same or a different Portworx cluster.
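Besides interval-based snapshots, an on-demand snapshot can be requested declaratively through the Kubernetes snapshot CRD that Portworx supports via STORK. The names below are hypothetical, and the API group is an assumption based on the external-storage snapshot controller of that Kubernetes era; check your Portworx documentation for the version your cluster uses:

```yaml
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot                     # hypothetical snapshot name
spec:
  persistentVolumeClaimName: px-mysql-pvc  # PVC to snapshot (placeholder)
```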

Creating a MySQL Template in OpenShift

In the OpenShift console, choose Import YAML/JSON, copy and paste the Portworx MySQL template, and click Create.

The Portworx MySQL (Persistent) template configuration screen appears. You can set the memory limit and other MySQL parameters, or accept the defaults. You can also set the volume size and the StorageClass to use. Make sure the StorageClass matches the one created earlier.

Go to the project and click Storage to verify that the PVC has been created and bound.

The container takes a minute or two to appear. Once it is running, verify that storage is attached: click Applications, then Pods, select the MySQL pod, and in the terminal run df -H. You should see /var/lib/mysql/data mounted on a Portworx-backed PX volume.

Log into the database and create a table.
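For example, from a shell in the pod (the pod name, credentials, and table are placeholders; use your own template's database name and password):

```shell
# Open a shell in the MySQL pod and create a test table we can check
# again after failover.
oc rsh mysql-1-f4xlw
mysql -u root -p"$MYSQL_ROOT_PASSWORD" <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE pets (name VARCHAR(20), species VARCHAR(20));
INSERT INTO pets VALUES ('fluffy', 'cat');
SELECT * FROM pets;
SQL
```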

Check which node the pod is running on: oc get pods -n mysql-openshift -o wide

NAME            READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-1-f4xlw   1/1       Running   0          1h        10.130.0.34   70-0-107-155.pools.spcsdns.net

Cordon off the node the pod is running on.
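The cordon step can be done with oc adm; the node name here matches the output above:

```shell
# Mark the node unschedulable so the rescheduled pod must land elsewhere.
oc adm cordon 70-0-107-155.pools.spcsdns.net
```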

Verify that scheduling on the node has been disabled: oc get nodes

NAME                             STATUS                     AGE       VERSION
70-0-107-155.pools.spcsdns.net   Ready,SchedulingDisabled   23d       v1.7.6+a08f5eeb62

Delete the MySQL pod: oc delete pod mysql-1-q88qq -n mysql-openshift

pod "mysql-1-q88qq" deleted

Verify that the pod has been rescheduled onto another node in the cluster: oc get pods -n mysql-openshift -o wide

NAME            READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-1-j97tw   1/1       Running   0          1m        10.128.0.63   70-0-40-193.pools.spcsdns.net

Back in the OpenShift console, select your project, go to Applications, then Pods, click the new MySQL pod, open the terminal, and verify that the database table is still there.

To summarize, we ran a highly available MySQL database on OpenShift in five steps:

1. Install the external volume plugin for OpenShift, enabling snapshots, backups, high availability, and encryption.

2. Create a Kubernetes StorageClass with replication factor 2, IO priority "high", and a snapshot interval of 60 minutes. These values can be tuned to your needs.

3. Create a MySQL template in OpenShift: import the JSON and configure the OpenShift MySQL persistent volume, including the memory limit, MySQL parameters, and the StorageClass and volume size.

4. Create a MySQL persistent volume from the template and deploy OpenShift pods that use it.

5. Verify MySQL high availability: cordon a node and delete the pod, then watch MySQL get rescheduled automatically.

