Shell: Fetching the Tomcat Log

This post walks through a bash script that automatically pulls the remote Tomcat log catalina.log. It is aimed at sysadmins and ops engineers who want to simplify a routine maintenance task.

Pulling the remote Tomcat log catalina.log

If anything below is unclear, feel free to leave a comment!
I'll reply the same day or the next, and improve the post accordingly.
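One assumed prerequisite the script below relies on: it is meant to run unattended, so the scp calls must authenticate with SSH keys rather than a password. A minimal setup sketch (the temp-dir key path is mine, to keep the example safe to run; in practice you would use ~/.ssh/id_ed25519):

```shell
# Generate a key pair without a passphrase. Written to a temp dir here
# so the example is harmless; use ~/.ssh in a real deployment.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -f "${keydir}/id_ed25519" -N "" -q

# Then install the public key on each log server (prompts for the
# password once; user and host below are illustrative):
#   ssh-copy-id -i "${keydir}/id_ed25519.pub" root@183.131.13.61
```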


#!/bin/bash

# Destination path for the fetched logs
Dst_Log_Path="/usr/local/logs"
# Tomcat root directory on the remote servers
Tomcat_Path="/usr/local/tomcats"
# Current date, e.g. 20240101
Date=$(date +%Y%m%d)
# Date one week ago; backups older than this are deleted
Date_7=$(date +%Y%m%d --date="-7 day")
# File name of the rotated catalina.out archive
Catalina_Log=catalina.out-${Date}.gz

# Server4
function S_C4_Log () {
S_C4="183.131.13.61"
S_C4_TomcatList=(trade-alive1 tradeservice-alive1 scservice-alive1)

[ ! -d ${Dst_Log_Path}/S_C4/${Date} ] && mkdir -p ${Dst_Log_Path}/S_C4/${Date}
rm -rf ${Dst_Log_Path}/S_C4/${Date_7}

for (( i = 0; i < ${#S_C4_TomcatList[@]}; i++ ))
do
        scp ${S_C4}:${Tomcat_Path}/${S_C4_TomcatList[$i]}/logs/${Catalina_Log} ${Dst_Log_Path}/S_C4/${Date}/${S_C4_TomcatList[$i]}.gz
done
}

#定义Server5
function S_C5_Log () {
S_C5="183.131.13.62"
S_C5_TomcatList=(trade-alive1 tradeservice-alive1 scservice-alive1)

[ ! -d ${Dst_Log_Path}/S_C5/{Date} ] && mkdir -p ${Dst_Log_Path}/S_C5/${Date}
rm -rf ${Dst_Log_Path}/S_C5/${Date_7}

for (( i = 0; i < ${#S_C5_TomcatList[$i]}; i++))
do
        scp  ${S_C5}:${Tomcat_Path}/${S_C5_TomcatList[$i]}/logs/${Catalina_Log} ${Dst_Log_Path}/S_C5/${Date}/${S_C5_TomcatList[$i]}.gz
done

}

# Server8
function S_C8_Log () {
S_C8="183.131.13.59"
S_C8_TomcatList=(consoletemp-alive1)

[ ! -d ${Dst_Log_Path}/S_C8/${Date} ] && mkdir -p ${Dst_Log_Path}/S_C8/${Date}
rm -rf ${Dst_Log_Path}/S_C8/${Date_7}

for (( i = 0; i < ${#S_C8_TomcatList[@]}; i++ ))
do
        scp ${S_C8}:${Tomcat_Path}/${S_C8_TomcatList[$i]}/logs/${Catalina_Log} ${Dst_Log_Path}/S_C8/${Date}/${S_C8_TomcatList[$i]}.gz
done
}

# Server10
function S_C10_Log () {
S_C10="101.71.39.61"
S_C10_TomcatList=(trade-preview1 trade-preview2 tradeservice-preview1 tradeservice-preview2 scservice-preview1 scservice-preview2 consoletemp-preview1)

[ ! -d ${Dst_Log_Path}/S_C10/${Date} ] && mkdir -p ${Dst_Log_Path}/S_C10/${Date}
rm -rf ${Dst_Log_Path}/S_C10/${Date_7}

for (( i = 0; i < ${#S_C10_TomcatList[@]}; i++ ))
do
        # -l 500 caps scp's bandwidth at 500 Kbit/s
        scp -l 500 ${S_C10}:${Tomcat_Path}/${S_C10_TomcatList[$i]}/logs/${Catalina_Log} ${Dst_Log_Path}/S_C10/${Date}/${S_C10_TomcatList[$i]}.gz
done
}


S_C4_Log
S_C5_Log
S_C8_Log
S_C10_Log
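The four server functions differ only in the host address and the list of Tomcat instances, so they can collapse into one parameterized function. A minimal sketch, with my own additions (`fetch_logs`, the `SCP_CMD` override, and a /tmp destination so it can be dry-run without root or network access):

```shell
#!/bin/bash
# Sketch: one generic function replacing the four near-identical S_C*_Log
# functions. The call at the bottom is a dry run that prints the scp
# commands instead of executing them.
Dst_Log_Path="${TMPDIR:-/tmp}/logs"          # /usr/local/logs in the original
Tomcat_Path="/usr/local/tomcats"
Date=$(date +%Y%m%d)
Date_7=$(date +%Y%m%d --date="-7 day")
Catalina_Log="catalina.out-${Date}.gz"

# fetch_logs <label> <host> <tomcat-instance>...
fetch_logs () {
    local label=$1 host=$2 t
    shift 2
    mkdir -p "${Dst_Log_Path}/${label}/${Date}"   # today's target dir
    rm -rf "${Dst_Log_Path}/${label}/${Date_7}"   # drop week-old backups
    for t in "$@"; do
        "${SCP_CMD:-scp}" "${host}:${Tomcat_Path}/${t}/logs/${Catalina_Log}" \
                          "${Dst_Log_Path}/${label}/${Date}/${t}.gz"
    done
}

# Dry run: prints the three scp commands without touching the network.
SCP_CMD=echo fetch_logs S_C4 183.131.13.61 trade-alive1 tradeservice-alive1 scservice-alive1
```

In practice a job like this is usually scheduled from cron shortly after Tomcat's nightly log rotation produces the .gz file, e.g. `10 0 * * * /path/to/get_tomcat_log.sh` (path illustrative).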