Setting Up a Graylog Cluster

This article describes how to deploy a Graylog cluster on CentOS 7.2, covering the MongoDB replica set, the Elasticsearch cluster, the multi-node Graylog configuration, and load balancing.


This multi-node deployment example is based on CentOS 7.2. Because resources are limited, Elasticsearch, Graylog, and MongoDB are deployed together on each server. The following is for reference only; adjust as needed for a production environment.


Prerequisites

Prepare three nodes running CentOS 7.2.

IP             Hostname
192.168.2.121  graylogNode1
192.168.2.122  graylogNode2
192.168.2.123  graylogNode3

You can create the three nodes directly with Vagrant; the Vagrantfile is as follows:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|

  (1..3).each do |i|
    config.vm.define "graylogNode#{i}" do |s|
      s.vm.box = "bento/centos-7.2"
      s.vm.hostname = "graylogNode#{i}"
      n = 120 + i
      # Map the private IP to the hostname in /etc/hosts (replaces the 127.0.0.1 entry Vagrant adds)
      s.vm.provision :shell, inline: "sed -i 's/127\\.0\\.0\\.1.*graylogNode.*/192.168.2.#{n} graylogNode#{i}/' /etc/hosts"
      s.vm.network "private_network", ip: "192.168.2.#{n}"
      s.vm.provider "virtualbox" do |v|
        v.cpus = 1
        v.memory = 2048
        v.name = "graylog-node#{i}"
      end
    end
  end

end

Start the nodes:

$ vagrant up
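The Vagrantfile above derives each node's IP from its index (120 + i). As a quick sanity check, the resulting IP/hostname table can be reproduced with a small shell loop:

```shell
# Print the IP/hostname pairs the Vagrantfile assigns (192.168.2.121-123)
for i in 1 2 3; do
  echo "192.168.2.$((120 + i)) graylogNode$i"
done
```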

Install MongoDB, Elasticsearch, and Graylog on each node; for the detailed installation steps, see the previous article on installing and using Graylog.


Cluster Configuration

The cluster configuration consists of three parts: the MongoDB replica set, the Elasticsearch cluster, and the multi-node Graylog setup.

Configuring the MongoDB Replica Set

(1) Specify the replica set name.
Method 1: via the configuration file
On each node, edit the MongoDB configuration file /etc/mongod.conf: comment out the bindIp line with a leading #, and add replSetName: rs0 under the replication section. The resulting file looks like this:

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1  # Listen to local interface only, comment to listen on all interfaces.


#security:

#operationProfiling:

#replication:
replication:
  replSetName: rs0

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

Restart the service:

$ sudo systemctl restart mongod.service

Method 2: pass --replSet to mongod on the command line
On each node, run:

$ mongod --replSet "rs0"

(2) On one of the cluster nodes, open the mongo shell:

$ mongo

Initialize the replica set, using the node's hostname or IP plus port:

> rs.initiate( {
    _id : "rs0",
    members: [ { _id : 0, host : "192.168.2.121:27017" } ]
})

View the configuration:

rs.conf()

It should look like this:

{
        "_id" : "rs0",
        "version" : 1,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.2.121:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("59ef0832a5da3378b1487f4e")
        }
}

Add the remaining members to the replica set:

rs0:PRIMARY> rs.add("192.168.2.122:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.add("192.168.2.123:27017")
{ "ok" : 1 }

The configuration after adding them:

rs0:PRIMARY> rs.config()
{
        "_id" : "rs0",
        "version" : 4,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.2.121:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "192.168.2.122:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.2.123:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("59ef0832a5da3378b1487f4e")
        }
}

Check the replica set status:

rs0:PRIMARY> rs.status()

Create the graylog database and add a graylog user:

rs0:PRIMARY> use graylog
switched to db graylog
rs0:PRIMARY> db.createUser( {
... user: "graylog",
... pwd: "75PN76Db66En",
... roles: [ { role: "readWrite", db: "graylog" } ]
... });
rs0:PRIMARY> db.grantRolesToUser( "graylog" , [ { role: "dbAdmin", db: "graylog" } ])
rs0:PRIMARY> show users
rs0:PRIMARY> db.auth("graylog","75PN76Db66En")
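The user and password created above are exactly what Graylog needs later for its mongodb_uri setting. As a sketch, the full replica-set connection URI can be assembled from its parts like this (the credentials are the example values from above; substitute your own):

```shell
# Assemble the MongoDB replica-set URI later used as mongodb_uri in server.conf
USER="graylog"
PASS="75PN76Db66En"
HOSTS="192.168.2.121:27017,192.168.2.122:27017,192.168.2.123:27017"
echo "mongodb://${USER}:${PASS}@${HOSTS}/graylog?replicaSet=rs0"
# prints: mongodb://graylog:75PN76Db66En@192.168.2.121:27017,192.168.2.122:27017,192.168.2.123:27017/graylog?replicaSet=rs0
```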

Configuring the Elasticsearch Cluster

(1) Edit the configuration file on each node:

$ sudo vim /etc/elasticsearch/elasticsearch.yml

The settings that need to change:

# Cluster name; must be identical on every node (graylog is the recommended name)
cluster.name: graylog
# Node name; unique per node (es-node-01, es-node-02, es-node-03)
node.name: es-node-01
# IP address of the current node
network.host: 192.168.2.121
# HTTP port
http.port: 9200
# Hosts that form the cluster
discovery.zen.ping.unicast.hosts: ["192.168.2.121", "192.168.2.122", "192.168.2.123"]
# Minimum number of master-eligible nodes needed to elect a master (quorum)
discovery.zen.minimum_master_nodes: 2
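The value of discovery.zen.minimum_master_nodes should be a majority of the master-eligible nodes, i.e. floor(n/2) + 1; with three nodes that is 2, which protects the cluster against split-brain. A one-line sanity check of the arithmetic:

```shell
# Majority quorum for n master-eligible nodes: floor(n/2) + 1
n=3
echo $(( n / 2 + 1 ))   # prints 2 for a 3-node cluster
```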

(2) Restart the service:

$ sudo systemctl restart elasticsearch.service

(3) Check the cluster health:

$ curl 'http://192.168.2.121:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "graylog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 80,
  "active_shards" : 80,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
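If a script only needs the status field, it can be pulled out of the health response with standard tools. A minimal sketch, parsing the sample response above (in practice you would pipe the curl output in, or use jq if it is available):

```shell
# Extract the "status" field from a saved _cluster/health response using only grep/cut
health='{"cluster_name":"graylog","status":"green","timed_out":false}'
echo "$health" | grep -o '"status" *: *"[a-z]*"' | cut -d'"' -f4   # prints: green
```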

(4) List the nodes in the cluster:

$ curl 'http://192.168.2.121:9200/_cat/nodes?v'
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.2.122            4          96   4    0.01    0.04     0.12 mdi       -      es-node-02
192.168.2.121            4          96   5    0.03    0.04     0.06 mdi       *      es-node-01
192.168.2.123            4          97   6    0.04    0.18     0.20 mdi       -      es-node-03

Graylog Multi-Node Configuration

  1. Open the configuration file for editing:

    $ sudo vim /etc/graylog/server/server.conf

    (1) Designate the master node.
    In this environment 192.168.2.121 is the Graylog master node, so in its /etc/graylog/server/server.conf set is_master = true; on the other two nodes set is_master = false.
    (2) Set the MongoDB connection string mongodb_uri:

    mongodb_uri = mongodb://graylog:75PN76Db66En@192.168.2.121:27017,192.168.2.122:27017,192.168.2.123:27017/graylog?replicaSet=rs0

    (3) Set the Elasticsearch connection elasticsearch_hosts:

    elasticsearch_hosts = http://192.168.2.121:9200,http://192.168.2.122:9200,http://192.168.2.123:9200

    (4) Enable the web interface via web_enable:

    web_enable = true

    (5) Set web_listen_uri (each node uses its own IP):

    web_listen_uri = http://192.168.2.121:9000/

    (6) Set rest_listen_uri (each node uses its own IP):

    rest_listen_uri = http://192.168.2.121:9000/api/
  2. Restart the service:

    $ sudo systemctl restart graylog-server.service
  3. Create a load balancer in front of Graylog.
    This guide uses nginx. To install it, first add a yum repository: create a file nginx.repo under /etc/yum.repos.d/ with the following content:

    [nginx]
    name=nginx repo
    baseurl=http://nginx.org/packages/mainline/OS/OSRELEASE/$basearch/
    gpgcheck=0
    enabled=1

    Replace OS with your distribution (e.g. "centos" or "rhel") and OSRELEASE with its release. This environment is CentOS 7, so the file becomes:

    [nginx]
    name=nginx repo
    baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
    gpgcheck=0
    enabled=1

    Install:

    $ sudo yum -y install nginx

    Start the service: $ sudo service nginx start
    Then edit the nginx configuration file:

    upstream graylog-web-cluster {
        server 192.168.2.121:9000 max_fails=3 fail_timeout=30s;
        server 192.168.2.122:9000 max_fails=3 fail_timeout=30s;
        server 192.168.2.123:9000 max_fails=3 fail_timeout=30s;
    }

    server {
        listen       80;
        listen       [::]:80 default_server ipv6only=on;
        server_name  graylog.example.org;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Graylog-Server-URL http://$server_name/api;
            proxy_pass http://graylog-web-cluster;
        }
    }

    Reload the configuration: sudo nginx -s reload
    You can now open http://graylog.example.org/ in a browser and see the nodes of the cluster:

[Figure: Graylog nodes overview in the web interface]
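Putting steps (1) through (6) together, the node-specific part of /etc/graylog/server/server.conf on 192.168.2.121 ends up looking like the sketch below. On the other two nodes, set is_master = false and substitute that node's own IP in the two listen URIs. (password_secret and root_password_sha2, set during the single-node installation, are omitted here.)

```ini
is_master = true
mongodb_uri = mongodb://graylog:75PN76Db66En@192.168.2.121:27017,192.168.2.122:27017,192.168.2.123:27017/graylog?replicaSet=rs0
elasticsearch_hosts = http://192.168.2.121:9200,http://192.168.2.122:9200,http://192.168.2.123:9200
web_enable = true
web_listen_uri = http://192.168.2.121:9000/
rest_listen_uri = http://192.168.2.121:9000/api/
```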


Q&A

  • Do not use localhost when initializing the MongoDB replica set; otherwise adding the other members fails with the following error:

    rs0:PRIMARY> rs.add("192.168.2.122")
    {
        "ok" : 0,
        "errmsg" : "Either all host names in a replica set configuration must be localhost references, or none must be; found 1 out of 2",
        "code" : 103
    }
    

    Solution:

      rs0:PRIMARY> var config=rs.config()
      rs0:PRIMARY> config.members[0].host="192.168.2.121:27017"
      192.168.2.121:27017
      rs0:PRIMARY> rs.reconfig(config)
      { "ok" : 1 }
      rs0:PRIMARY> rs.config()
      {
            "_id" : "rs0",
            "version" : 2,
            "protocolVersion" : NumberLong(1),
            "members" : [
                    {
                            "_id" : 0,
                            "host" : "192.168.2.121:27017",
                            "arbiterOnly" : false,
                            "buildIndexes" : true,
                            "hidden" : false,
                            "priority" : 1,
                            "tags" : {
    
                            },
                            "slaveDelay" : NumberLong(0),
                            "votes" : 1
                    }
            ],
            "settings" : {
                    "chainingAllowed" : true,
                    "heartbeatIntervalMillis" : 2000,
                    "heartbeatTimeoutSecs" : 10,
                    "electionTimeoutMillis" : 10000,
                    "getLastErrorModes" : {
    
                    },
                    "getLastErrorDefaults" : {
                            "w" : 1,
                            "wtimeout" : 0
                    },
                    "replicaSetId" : ObjectId("59ef0832a5da3378b1487f4e")
            }
      }
      rs0:PRIMARY> rs.add("192.168.2.122:27017")
      { "ok" : 1 }
    
