Integrating Tomcat with Apache Flume: Deploying a Log Collection System

[Free download] tomcat — Tomcat is an open-source web server mainly used for deploying Java web applications; it is easy to use, stable, and broadly compatible. Project address: https://gitcode.com/gh_mirrors/tom/tomcat

1. Introduction: Log Collection Pain Points and the Solution

In modern Java web deployments, Tomcat is the mainstream Servlet container, and the access logs, error logs, and application logs it produces are key data for monitoring, troubleshooting, and performance tuning. As applications grow and cluster deployments become the norm, however, managing local log files the traditional way runs into several problems:

  • Scattered storage: in multi-node deployments, logs are spread across servers and hard to analyze centrally
  • Capacity pressure: a single Tomcat instance can produce hundreds of MB of logs per day, and long-term storage consumes significant disk space
  • Poor timeliness: there is no real-time view of system health, so problems are discovered late
  • Difficult analysis: there is no unified log format or centralized analysis platform

Apache Flume, a highly available and reliable distributed system for collecting, aggregating, and transporting log data, addresses all of these problems. This article walks through integrating Tomcat with Apache Flume to build an efficient, reliable log collection system.

After reading this article, you will be able to:

  • Understand Tomcat's log architecture and how to configure it
  • Grasp Apache Flume's core components and how they work
  • Stream Tomcat logs to Flume in real time
  • Configure multiple sinks to serve different analysis needs
  • Tune the system for performance and reliability

2. Environment and Components

2.1 Software Versions

| Component | Version | Purpose |
|-----------|---------|---------|
| Apache Tomcat | 9.x or later | Java web application server |
| Apache Flume | 1.9.0 or later | Log collection system |
| Java | JDK 1.8 or later | Runtime environment |
| ZooKeeper | 3.5.x or later | Flume cluster coordination (optional) |
| HDFS | 3.x or later | Persistent log storage (optional) |
| Elasticsearch | 7.x or later | Log search and analysis (optional) |

2.2 System Architecture

The pipeline, end to end: Tomcat log files (access / catalina / application) → Flume TAILDIR Sources → File Channel → HDFS Sink and Elasticsearch Sink.
2.3 Installation

Installing Tomcat

# Clone the Tomcat repository
git clone https://gitcode.com/gh_mirrors/tom/tomcat
cd tomcat

# Build from source; the Tomcat source tree builds with Apache Ant
# (see BUILDING.txt in the repository for prerequisites)
ant

Installing Flume

# Download the Flume binary distribution
wget https://dlcdn.apache.org/flume/1.11.0/apache-flume-1.11.0-bin.tar.gz
tar -zxvf apache-flume-1.11.0-bin.tar.gz
mv apache-flume-1.11.0-bin /usr/local/flume

# Set environment variables
echo "export FLUME_HOME=/usr/local/flume" >> /etc/profile
echo "export PATH=\$FLUME_HOME/bin:\$PATH" >> /etc/profile
source /etc/profile

3. Tomcat Logging Configuration

3.1 Tomcat's Log Types

Tomcat produces the following kinds of logs:

  1. Access logs: details of every HTTP request
  2. Catalina logs: Tomcat's own runtime logs
  3. Localhost logs: logs for a specific virtual host
  4. Application logs: logs produced by the Java applications deployed on Tomcat

3.2 Access Log Configuration

Tomcat's access log is configured by the AccessLogValve in server.xml and is enabled by default:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />

To make the logs easier for Flume to collect and parse, change the pattern to emit JSON:

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;identd&quot;:&quot;%l&quot;,&quot;user&quot;:&quot;%u&quot;,&quot;timestamp&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;referer&quot;:&quot;%{Referer}i&quot;,&quot;useragent&quot;:&quot;%{User-Agent}i&quot;}" 
       rotatable="true" renameOnRotate="true" />

Key parameters

  • rotatable="true": enables log rotation
  • renameOnRotate="true": renames the file on rotation so Flume never reads a partially written log
  • pattern: defines the log format; JSON makes downstream parsing straightforward

Note that this hand-built pattern does not escape double quotes that may appear in the request line or headers, so individual lines can come out as invalid JSON; recent Tomcat 9.0.x releases also ship org.apache.catalina.valves.JsonAccessLogValve, which handles escaping for you.
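A quick way to sanity-check the pattern is to feed one emitted line through a JSON parser. The line below is a hypothetical example of the shape this valve produces (the values are illustrative, not from a real server):

```python
import json

# Hypothetical line in the shape produced by the JSON access-log pattern above
line = ('{"clientip":"203.0.113.7","identd":"-","user":"-",'
        '"timestamp":"[10/Oct/2024:13:55:36 +0800]",'
        '"request":"GET /index.html HTTP/1.1","status":"200","bytes":"1043",'
        '"referer":"-","useragent":"curl/8.0"}')

event = json.loads(line)        # each access-log line is one JSON object
status = int(event["status"])   # note: the pattern emits every field as a string
print(status, event["clientip"])
```

Fields such as %b can also be "-" for zero-byte responses, so cast defensively in a real parser.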

3.3 System Log Configuration

Tomcat's internal logging is configured in logging.properties, located at conf/logging.properties by default:

# Log handlers
handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, 3manager.org.apache.juli.AsyncFileHandler, 4host-manager.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler

# Catalina log handler
1catalina.org.apache.juli.AsyncFileHandler.level = INFO
1catalina.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.AsyncFileHandler.prefix = catalina.
1catalina.org.apache.juli.AsyncFileHandler.maxDays = 90
1catalina.org.apache.juli.AsyncFileHandler.encoding = UTF-8

# Log formatting
java.util.logging.ConsoleHandler.formatter = org.apache.juli.OneLineFormatter

To keep the log format consistent, switch the handler to JSON output (org.apache.juli.JsonFormatter is available in recent Tomcat 9.0.x releases):

# Use the JSON formatter
1catalina.org.apache.juli.AsyncFileHandler.formatter = org.apache.juli.JsonFormatter

# Set log levels
org.apache.catalina.level = INFO
org.apache.coyote.level = INFO

3.4 Application Log Configuration

For Java applications deployed on Tomcat, use SLF4J with Logback or Log4j2 as the logging framework, and configure a file appender that writes to a known directory:

Logback example

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${catalina.base}/logs/application.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${catalina.base}/logs/application.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <includeMdcKeyName>requestId</includeMdcKeyName>
      <includeMdcKeyName>userId</includeMdcKeyName>
      <fieldNames>
        <timestamp>timestamp</timestamp>
        <message>message</message>
        <logger>logger</logger>
        <thread>thread</thread>
        <level>level</level>
      </fieldNames>
    </encoder>
  </appender>
  
  <root level="INFO">
    <appender-ref ref="FILE" />
  </root>
</configuration>

4. Core Flume Configuration

4.1 Flume Architecture and Components

Flume's core components are:

  • Agent: a JVM process that collects data from sources and delivers it to destinations
  • Source: the ingestion component that reads data from a data source
  • Channel: the buffering component that stages events in transit
  • Sink: the output component that writes events to a destination
  • Interceptor: the processing component that filters or transforms events in flight
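How these components fit together can be sketched in a few lines of Python. This is a toy model for intuition only, not Flume's actual implementation; the class and event shapes are made up for the example:

```python
from collections import deque

class Channel:
    """Bounded buffer between Source and Sink (mirrors Flume's capacity limit)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def put(self, event):
        # Flume raises ChannelFullException in the analogous situation
        if len(self.queue) >= self.capacity:
            raise RuntimeError("channel full")
        self.queue.append(event)

    def take(self, batch_size):
        # Sinks drain the channel in batches inside a transaction
        batch = []
        while self.queue and len(batch) < batch_size:
            batch.append(self.queue.popleft())
        return batch

channel = Channel(capacity=1000)

# Source side: read log lines and put tagged events into the channel
for line in ["GET /a 200", "GET /b 404", "POST /c 201"]:
    channel.put({"body": line, "headers": {"logType": "access"}})

# Sink side: drain a batch and ship it to the destination
batch = channel.take(batch_size=2)
print(len(batch), len(channel.queue))
```

The headers dictionary plays the role of Flume event headers, which interceptors populate and sinks use for routing (as the logType header does later in this article).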

4.2 Single-Node Flume Configuration

Create the Flume configuration file tomcat-flume.conf:

# Agent name and components
agent1.sources = tomcat-access-source tomcat-catalina-source tomcat-application-source
agent1.channels = file-channel
agent1.sinks = hdfs-sink es-sink

# Access log source
# (each TAILDIR source needs its own positionFile; sharing one file across
# sources lets them overwrite each other's recorded offsets)
agent1.sources.tomcat-access-source.type = TAILDIR
agent1.sources.tomcat-access-source.positionFile = /var/lib/flume/taildir_position_access.json
agent1.sources.tomcat-access-source.filegroups = f1
agent1.sources.tomcat-access-source.filegroups.f1 = /data/web/disk1/git_repo/gh_mirrors/tom/tomcat/logs/localhost_access_log.*.txt
agent1.sources.tomcat-access-source.fileHeader = true
agent1.sources.tomcat-access-source.fileHeaderKey = file

# Catalina log source
agent1.sources.tomcat-catalina-source.type = TAILDIR
agent1.sources.tomcat-catalina-source.positionFile = /var/lib/flume/taildir_position_catalina.json
agent1.sources.tomcat-catalina-source.filegroups = f2
agent1.sources.tomcat-catalina-source.filegroups.f2 = /data/web/disk1/git_repo/gh_mirrors/tom/tomcat/logs/catalina.*.log
agent1.sources.tomcat-catalina-source.fileHeader = true
agent1.sources.tomcat-catalina-source.fileHeaderKey = file

# Application log source
agent1.sources.tomcat-application-source.type = TAILDIR
agent1.sources.tomcat-application-source.positionFile = /var/lib/flume/taildir_position_application.json
agent1.sources.tomcat-application-source.filegroups = f3
agent1.sources.tomcat-application-source.filegroups.f3 = /data/web/disk1/git_repo/gh_mirrors/tom/tomcat/logs/application.*.log
agent1.sources.tomcat-application-source.fileHeader = true
agent1.sources.tomcat-application-source.fileHeaderKey = file

# Interceptors: tag each event with its log type
agent1.sources.tomcat-access-source.interceptors = i1
agent1.sources.tomcat-access-source.interceptors.i1.type = static
agent1.sources.tomcat-access-source.interceptors.i1.key = logType
agent1.sources.tomcat-access-source.interceptors.i1.value = access

agent1.sources.tomcat-catalina-source.interceptors = i2
agent1.sources.tomcat-catalina-source.interceptors.i2.type = static
agent1.sources.tomcat-catalina-source.interceptors.i2.key = logType
agent1.sources.tomcat-catalina-source.interceptors.i2.value = catalina

agent1.sources.tomcat-application-source.interceptors = i3
agent1.sources.tomcat-application-source.interceptors.i3.type = static
agent1.sources.tomcat-application-source.interceptors.i3.key = logType
agent1.sources.tomcat-application-source.interceptors.i3.value = application

# File channel
agent1.channels.file-channel.type = file
agent1.channels.file-channel.checkpointDir = /var/lib/flume/checkpoint
agent1.channels.file-channel.dataDirs = /var/lib/flume/data
agent1.channels.file-channel.capacity = 1000000
agent1.channels.file-channel.transactionCapacity = 10000

# HDFS sink
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.hdfs.path = hdfs://namenode:9000/flume/tomcat-logs/%{logType}/%Y-%m-%d
agent1.sinks.hdfs-sink.hdfs.filePrefix = tomcat-%{logType}-
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink.hdfs.rollInterval = 3600
agent1.sinks.hdfs-sink.hdfs.rollSize = 134217728
agent1.sinks.hdfs-sink.hdfs.rollCount = 0
agent1.sinks.hdfs-sink.hdfs.batchSize = 1000
agent1.sinks.hdfs-sink.hdfs.useLocalTimeStamp = true

# Elasticsearch sink
# Note: Flume's bundled ElasticSearchSink was written against old
# Elasticsearch client APIs and does not work with ES 7.x out of the box;
# for ES 7.x, route events through Kafka/Logstash or use a custom sink.
agent1.sinks.es-sink.type = elasticsearch
agent1.sinks.es-sink.hostNames = es-node1:9200,es-node2:9200
# the sink appends the current date to indexName automatically
agent1.sinks.es-sink.indexName = tomcat-logs-%{logType}
agent1.sinks.es-sink.indexType = logs
agent1.sinks.es-sink.batchSize = 500
agent1.sinks.es-sink.ttl = 30d
agent1.sinks.es-sink.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer

# Bind sources and sinks to the channel
agent1.sources.tomcat-access-source.channels = file-channel
agent1.sources.tomcat-catalina-source.channels = file-channel
agent1.sources.tomcat-application-source.channels = file-channel
agent1.sinks.hdfs-sink.channel = file-channel
agent1.sinks.es-sink.channel = file-channel

4.3 Starting the Flume Agent

# Create the required directories
mkdir -p /var/lib/flume/checkpoint /var/lib/flume/data

# Start the Flume agent
flume-ng agent -n agent1 -c /usr/local/flume/conf -f /usr/local/flume/conf/tomcat-flume.conf -Dflume.root.logger=INFO,console

5. Advanced Configuration and Tuning

5.1 Coordinating Log Rotation with Flume

Tomcat rotates its logs by default. To make sure Flume handles rotated files correctly, configure both sides:

  1. Tomcat rotation settings (server.xml):
<Valve className="org.apache.catalina.valves.AccessLogValve" 
       ...
       rotatable="true" 
       renameOnRotate="true"
       maxDays="7"
       fileDateFormat="yyyy-MM-dd" />
  2. Flume TAILDIR source settings:
agent1.sources.tomcat-access-source.type = TAILDIR
agent1.sources.tomcat-access-source.positionFile = /var/lib/flume/taildir_position_access.json
agent1.sources.tomcat-access-source.fileHeader = true

The TAILDIR source records a read position for every file, so even after Tomcat rotates a log, Flume tracks both the renamed file and the new file through the positionFile without losing or re-reading data.
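The mechanism can be illustrated with a small Python model of a position file: offsets are keyed by inode, and a rename (which is what renameOnRotate performs) preserves the inode, so bytes already read are never re-read. This is a simplified sketch of TAILDIR's behavior, not its actual code:

```python
import json
import os
import tempfile

workdir = tempfile.mkdtemp()
log = os.path.join(workdir, "access.log")
pos_path = os.path.join(workdir, "taildir_position.json")

def tail_new_lines(path, positions):
    """Read only the bytes appended since the last recorded offset,
    keyed by inode, persisting offsets like TAILDIR's positionFile."""
    inode = str(os.stat(path).st_ino)
    with open(path, "rb") as f:
        f.seek(positions.get(inode, 0))
        data = f.read()
        positions[inode] = f.tell()
    with open(pos_path, "w") as f:
        json.dump(positions, f)          # persist, so restarts resume correctly
    return data.decode().splitlines()

positions = {}
with open(log, "a") as f:
    f.write("line1\n")
first = tail_new_lines(log, positions)    # first pass reads line1

with open(log, "a") as f:
    f.write("line2\n")
second = tail_new_lines(log, positions)   # second pass reads only line2

# Rotation renames the file; the inode is unchanged, so nothing is re-read
os.rename(log, log + ".1")
after_rotate = tail_new_lines(log + ".1", positions)
print(first, second, after_rotate)
```

A newly created file after rotation gets a fresh inode with no recorded offset, so it is read from the beginning, which is exactly the behavior you want.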

5.2 Data Reliability

To avoid losing log data, configure Flume's transactions and persistence:

# Channel capacity and transaction sizing
agent1.channels.file-channel.capacity = 1000000
agent1.channels.file-channel.transactionCapacity = 10000
agent1.channels.file-channel.checkpointInterval = 30000
agent1.channels.file-channel.maxFileSize = 2146435071
agent1.channels.file-channel.minimumRequiredSpace = 524288000

# HDFS sink timeouts and file-close retries
agent1.sinks.hdfs-sink.hdfs.callTimeout = 60000
agent1.sinks.hdfs-sink.hdfs.closeTries = 3
agent1.sinks.hdfs-sink.hdfs.retryInterval = 10

# Sink group with failover
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = hdfs-sink es-sink
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.es-sink = 10
agent1.sinkgroups.g1.processor.priority.hdfs-sink = 5
agent1.sinkgroups.g1.processor.maxpenalty = 30000

Note that a failover group delivers each event to only one sink: the highest-priority healthy one. If both HDFS and Elasticsearch must receive every event, give each sink its own channel and list both channels on each source, so Flume's default replicating channel selector copies every event to both.
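The failover processor's selection logic amounts to "try the highest-priority sink that is not backing off." A minimal Python model makes the behavior concrete; the names are hypothetical and Flume's real processor is more elaborate (it decays penalties incrementally up to maxpenalty):

```python
import time

class FailoverProcessor:
    """Toy model of Flume's failover sink processor."""
    def __init__(self, priorities, max_penalty=30.0):
        self.priorities = priorities          # e.g. {"es-sink": 10, "hdfs-sink": 5}
        self.penalized_until = {}             # sink -> timestamp of backoff end
        self.max_penalty = max_penalty

    def process(self, event, send):
        now = time.monotonic()
        # Try sinks in descending priority order
        for sink in sorted(self.priorities, key=self.priorities.get, reverse=True):
            if self.penalized_until.get(sink, 0) > now:
                continue                      # still backing off after a failure
            try:
                return send(sink, event)
            except Exception:
                # Penalize the failed sink and fall through to the next one
                self.penalized_until[sink] = now + self.max_penalty
        raise RuntimeError("all sinks failed")

calls = []
def send(sink, event):
    calls.append(sink)
    if sink == "es-sink":
        raise ConnectionError("es down")      # simulate the primary failing
    return sink

proc = FailoverProcessor({"es-sink": 10, "hdfs-sink": 5})
result = proc.process("event-1", send)
print(result, calls)
```

After the first failure, subsequent events skip es-sink entirely until its penalty expires, which is why a short maxpenalty keeps failover responsive.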

5.3 Performance Tuning

For high-traffic Tomcat servers, consider the following:

  1. Increase channel capacity:
agent1.channels.file-channel.capacity = 2000000
agent1.channels.file-channel.transactionCapacity = 20000
  2. Use a memory channel for higher throughput (only where losing a small amount of data on a crash is acceptable):
agent1.channels.memory-channel.type = memory
agent1.channels.memory-channel.capacity = 100000
agent1.channels.memory-channel.transactionCapacity = 10000
agent1.channels.memory-channel.keep-alive = 30
  3. Tune batch sizes:
agent1.sinks.es-sink.batchSize = 1000
agent1.sinks.hdfs-sink.hdfs.batchSize = 2000
  4. Enable compression (the HDFS sink compresses via a codec, which requires the CompressedStream file type):
agent1.sinks.hdfs-sink.hdfs.fileType = CompressedStream
agent1.sinks.hdfs-sink.hdfs.codeC = gzip
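The payoff of a larger batchSize is easy to see in a toy model where each flush stands in for one network round-trip or HDFS RPC (illustrative only; real throughput also depends on event size and sink latency):

```python
class BatchingSink:
    """Counts how many flushes (round-trips) a given batch size costs."""
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []
        self.flushes = 0

    def write(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushes += 1    # one round-trip per batch, regardless of size
            self.buffer.clear()

small, large = BatchingSink(1), BatchingSink(1000)
for e in range(10_000):
    small.write(e)
    large.write(e)
small.flush(); large.flush()     # drain whatever remains
print(small.flushes, large.flushes)   # 10000 vs 10
```

The trade-off is latency: a batch waits until it fills (or the sink times out), so very large batches delay delivery of individual events.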

6. Monitoring and Operations

6.1 Flume Monitoring

Flume has built-in monitoring that exposes metrics over JMX or HTTP. It is enabled with system properties on the flume-ng command line, not in the agent's .conf file:

# Expose metrics as JSON over HTTP
flume-ng agent -n agent1 -c $FLUME_HOME/conf -f $FLUME_HOME/conf/tomcat-flume.conf \
  -Dflume.monitoring.type=http \
  -Dflume.monitoring.port=34545

# Or expose metrics over JMX with the standard JVM options
export JAVA_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=5445 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

Key metrics to watch:

  • Source: event accept rate, processing latency
  • Channel: fill percentage, event dwell time
  • Sink: drain rate, success and failure counts
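With HTTP monitoring enabled, the agent serves these counters as JSON from its metrics endpoint. A small poller can watch the channel fill percentage and alert before the channel backs up; the payload below is a hypothetical sample shaped like that endpoint's output (component-type.name keys, string-valued counters):

```python
import json

# Sample payload shaped like Flume's HTTP metrics output; numbers are illustrative
payload = json.loads("""{
  "CHANNEL.file-channel": {
    "ChannelCapacity": "1000000",
    "ChannelSize": "52000",
    "ChannelFillPercentage": "5.2"
  },
  "SINK.hdfs-sink": {
    "EventDrainSuccessCount": "981234",
    "EventDrainAttemptCount": "981240"
  }
}""")

# Metric values arrive as strings and must be cast before comparison
fill = float(payload["CHANNEL.file-channel"]["ChannelFillPercentage"])
if fill > 80:
    print("WARN: channel almost full, sinks are falling behind")
print(f"channel fill: {fill}%")
```

In production you would fetch the payload with an HTTP client on a schedule and feed the numbers into your alerting system.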

6.2 Troubleshooting Common Problems

Problem 1: Flume does not pick up newly created Tomcat log files

Resolution:

  1. Check that the TAILDIR source's positionFile path is correct and writable
  2. Make sure the Tomcat log files are readable by the Flume process
  3. Verify that the filegroups patterns actually match the log file paths

Problem 2: Duplicate log events

Resolution:

  1. Check the channel configuration and make sure the transaction capacity is sufficient
  2. Avoid automatically restarting the agent on sink failure
  3. Make indexing idempotent on the destination side (for example, with a deterministic Elasticsearch _id)
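One way to implement item 3: derive the Elasticsearch _id deterministically from the event content, so a re-delivered duplicate overwrites the same document instead of creating a second copy. The helper below is hypothetical; it would live in a custom sink or a downstream indexer, since the bundled sink does not expose this hook:

```python
import hashlib
import json

def event_id(event: dict) -> str:
    """Deterministic document id from event content (hypothetical helper):
    identical events always hash to the same _id, making indexing idempotent."""
    canonical = json.dumps(event, sort_keys=True).encode()  # key order must not matter
    return hashlib.sha1(canonical).hexdigest()

e = {"clientip": "203.0.113.7",
     "timestamp": "[10/Oct/2024:13:55:36 +0800]",
     "request": "GET /index.html HTTP/1.1"}
dup = dict(e)                          # the same event delivered twice
print(event_id(e) == event_id(dup))    # both map to one _id
```

This turns at-least-once delivery into effectively-once indexing, at the cost of one hash per event.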

Problem 3: Excessive resource usage

Resolution:

  1. Adjust Flume's JVM memory settings:
export JAVA_OPTS="-Xms2048m -Xmx4096m -XX:+UseG1GC"
  2. Reduce the number of files tailed concurrently
  3. Rotate logs more frequently so individual files stay smaller

7. Summary and Outlook

This article covered the end-to-end process of integrating Tomcat with Apache Flume to build a log collection system: environment setup, configuration, performance tuning, and monitoring. The result is real-time collection, centralized storage, and multi-dimensional analysis of Tomcat logs.

Key results

  • A unified log collection pipeline that solves log fragmentation in distributed deployments
  • Reliable transport that protects log data against loss
  • Flexible storage and analysis options for different scenarios
  • Performance headroom for high-traffic Tomcat servers

Future directions

  1. Real-time alerting: detect log anomalies as they happen with Flink or Spark Streaming
  2. Log masking: add a data-masking component to protect sensitive information
  3. Intelligent analysis: apply machine learning for anomaly detection and trend prediction
  4. Containerization: package Tomcat and Flume as Docker containers to simplify deployment

By continuing to refine and extend this pipeline, you can build a smarter, more efficient application monitoring platform that keeps Java web applications running reliably.

8. Appendix: Configuration Templates

8.1 Complete Tomcat server.xml

<?xml version="1.0" encoding="UTF-8"?>
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

    <Engine name="Catalina" defaultHost="localhost">
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;identd&quot;:&quot;%l&quot;,&quot;user&quot;:&quot;%u&quot;,&quot;timestamp&quot;:&quot;%t&quot;,&quot;request&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;bytes&quot;:&quot;%b&quot;,&quot;referer&quot;:&quot;%{Referer}i&quot;,&quot;useragent&quot;:&quot;%{User-Agent}i&quot;}" 
               rotatable="true" renameOnRotate="true" />
      </Host>
    </Engine>
  </Service>
</Server>

8.2 Complete Flume Configuration

agent1.sources = tomcat-access-source tomcat-catalina-source tomcat-application-source
agent1.channels = file-channel
agent1.sinks = hdfs-sink es-sink

# Access Log Source
agent1.sources.tomcat-access-source.type = TAILDIR
agent1.sources.tomcat-access-source.positionFile = /var/lib/flume/taildir_position_access.json
agent1.sources.tomcat-access-source.filegroups = f1
agent1.sources.tomcat-access-source.filegroups.f1 = /data/web/disk1/git_repo/gh_mirrors/tom/tomcat/logs/localhost_access_log.*.txt
agent1.sources.tomcat-access-source.fileHeader = true
agent1.sources.tomcat-access-source.interceptors = i1
agent1.sources.tomcat-access-source.interceptors.i1.type = static
agent1.sources.tomcat-access-source.interceptors.i1.key = logType
agent1.sources.tomcat-access-source.interceptors.i1.value = access

# Catalina Log Source
agent1.sources.tomcat-catalina-source.type = TAILDIR
agent1.sources.tomcat-catalina-source.positionFile = /var/lib/flume/taildir_position_catalina.json
agent1.sources.tomcat-catalina-source.filegroups = f2
agent1.sources.tomcat-catalina-source.filegroups.f2 = /data/web/disk1/git_repo/gh_mirrors/tom/tomcat/logs/catalina.*.log
agent1.sources.tomcat-catalina-source.fileHeader = true
agent1.sources.tomcat-catalina-source.interceptors = i2
agent1.sources.tomcat-catalina-source.interceptors.i2.type = static
agent1.sources.tomcat-catalina-source.interceptors.i2.key = logType
agent1.sources.tomcat-catalina-source.interceptors.i2.value = catalina

# Application Log Source
agent1.sources.tomcat-application-source.type = TAILDIR
agent1.sources.tomcat-application-source.positionFile = /var/lib/flume/taildir_position_application.json
agent1.sources.tomcat-application-source.filegroups = f3
agent1.sources.tomcat-application-source.filegroups.f3 = /data/web/disk1/git_repo/gh_mirrors/tom/tomcat/logs/application.*.log
agent1.sources.tomcat-application-source.fileHeader = true
agent1.sources.tomcat-application-source.interceptors = i3
agent1.sources.tomcat-application-source.interceptors.i3.type = static
agent1.sources.tomcat-application-source.interceptors.i3.key = logType
agent1.sources.tomcat-application-source.interceptors.i3.value = application

# File Channel
agent1.channels.file-channel.type = file
agent1.channels.file-channel.checkpointDir = /var/lib/flume/checkpoint
agent1.channels.file-channel.dataDirs = /var/lib/flume/data
agent1.channels.file-channel.capacity = 2000000
agent1.channels.file-channel.transactionCapacity = 10000

# HDFS Sink
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.hdfs.path = hdfs://namenode:9000/flume/tomcat-logs/%{logType}/%Y-%m-%d
agent1.sinks.hdfs-sink.hdfs.filePrefix = tomcat-%{logType}-
# gzip compression requires the CompressedStream file type plus a codec
agent1.sinks.hdfs-sink.hdfs.fileType = CompressedStream
agent1.sinks.hdfs-sink.hdfs.codeC = gzip
agent1.sinks.hdfs-sink.hdfs.rollInterval = 3600
agent1.sinks.hdfs-sink.hdfs.rollSize = 134217728
agent1.sinks.hdfs-sink.hdfs.rollCount = 0
agent1.sinks.hdfs-sink.hdfs.batchSize = 2000
agent1.sinks.hdfs-sink.hdfs.useLocalTimeStamp = true

# Elasticsearch Sink
# (the bundled sink targets old Elasticsearch client APIs; it does not
# work with ES 7.x out of the box — see the note in section 4.2)
agent1.sinks.es-sink.type = elasticsearch
agent1.sinks.es-sink.hostNames = es-node1:9200,es-node2:9200
# the sink appends the current date to indexName automatically
agent1.sinks.es-sink.indexName = tomcat-logs-%{logType}
agent1.sinks.es-sink.indexType = logs
agent1.sinks.es-sink.batchSize = 500
agent1.sinks.es-sink.ttl = 30d
agent1.sinks.es-sink.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer

# Sink Group
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = hdfs-sink es-sink
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.es-sink = 10
agent1.sinkgroups.g1.processor.priority.hdfs-sink = 5
agent1.sinkgroups.g1.processor.maxpenalty = 30000

# Bindings
agent1.sources.tomcat-access-source.channels = file-channel
agent1.sources.tomcat-catalina-source.channels = file-channel
agent1.sources.tomcat-application-source.channels = file-channel
agent1.sinks.hdfs-sink.channel = file-channel
agent1.sinks.es-sink.channel = file-channel

# Monitoring is enabled on the flume-ng command line, not in this file:
#   -Dflume.monitoring.type=http -Dflume.monitoring.port=34545

8.3 Startup Scripts

Tomcat startup script (start-tomcat.sh):

#!/bin/bash
export CATALINA_HOME=/data/web/disk1/git_repo/gh_mirrors/tom/tomcat
export JAVA_OPTS="-Xms2g -Xmx4g -XX:+UseG1GC"
$CATALINA_HOME/bin/catalina.sh start

Flume startup script (start-flume.sh):

#!/bin/bash
export FLUME_HOME=/usr/local/flume
export JAVA_OPTS="-Xms2048m -Xmx4096m -XX:+UseG1GC"
nohup $FLUME_HOME/bin/flume-ng agent -n agent1 -c $FLUME_HOME/conf -f $FLUME_HOME/conf/tomcat-flume.conf -Dflume.monitoring.type=http -Dflume.monitoring.port=34545 > /var/log/flume/agent1.log 2>&1 &


Disclosure: parts of this article were produced with AI assistance (AIGC) and are provided for reference only.
