2022-02-17 InfluxDB Default Configuration Notes


Table of Contents

Abstract

InfluxDB default configuration

Command to generate the default configuration file

influxdb.conf file contents


Abstract:

This post records the InfluxDB default configuration: partly to document what each setting means, and partly as a reference to check against when modifying the configuration later.

InfluxDB default configuration:

Command to generate the default configuration file:

influxd config > influxdb.conf

influxdb.conf file contents:

### Welcome to the InfluxDB configuration file.

# The values in this file override the default values used by the system if
# a config option is not specified. The commented out lines are the configuration
# field and the default value used. Uncommenting a line and changing the value
# will change the value used at runtime when the process is restarted.

# Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
# The data includes a random ID, os, arch, version, the number of series and other
# usage data. No data from user databases is ever transmitted.
# Change this option to true to disable reporting.
# reporting-disabled = false

# Bind address to use for the RPC service for backup and restore.
# bind-address = "127.0.0.1:8088"

###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
  # Where the metadata/raft database is stored
  dir = "/var/lib/influxdb/meta"

  # Automatically create a default retention policy when creating a database.
  # retention-autocreate = true

  # If log messages are printed for the meta service
  # logging-enabled = true

###
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
  # The directory where the TSM storage engine stores TSM files.
  dir = "/var/lib/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  wal-dir = "/var/lib/influxdb/wal"

  # The amount of time that a write will wait before fsyncing.  A duration
  # greater than 0 can be used to batch up multiple fsync calls.  This is useful for slower
  # disks or when WAL write contention is seen.  A value of 0s fsyncs every write to the WAL.
  # Values in the range of 0-100ms are recommended for non-SSD disks.
  # wal-fsync-delay = "0s"
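  # Durations in this file ("0s", "100ms", "10m", "4h") use Go's duration
  # syntax. A minimal sketch of a parser for sanity-checking such values
  # before editing the file (a hypothetical helper, not part of InfluxDB):

```python
import re

# Go-style duration units, in seconds (the subset used in influxdb.conf).
_UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}

def parse_duration(text):
    """Parse a Go-style duration like '100ms', '10m', or '1h30m' into seconds."""
    matches = re.findall(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", text)
    # Reject input with leftover characters the pattern did not consume.
    if not matches or "".join(n + u for n, u in matches) != text:
        raise ValueError("bad duration: %r" % text)
    return sum(float(n) * _UNITS[u] for n, u in matches)

print(parse_duration("10m"))    # 600.0
print(parse_duration("1h30m"))  # 5400.0
```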


  # The type of shard index to use for new shards.  The default is an in-memory index that is
  # recreated at startup.  A value of "tsi1" will use a disk based index that supports higher
  # cardinality datasets.
  # index-version = "inmem"

  # Trace logging provides more verbose output around the tsm engine. Turning
  # this on can provide more useful output for debugging tsm engine issues.
  # trace-logging-enabled = false

  # Whether queries should be logged before execution. Very useful for troubleshooting, but will
  # log any sensitive data contained within a query.
  # query-log-enabled = true

  # Provides more error checking. For example, SELECT INTO will err out inserting an +/-Inf value
  # rather than silently failing.
  # strict-error-handling = false

  # Validates incoming writes to ensure keys only have valid unicode characters.
  # This setting will incur a small overhead because every key must be checked.
  # validate-keys = false

  # Settings for the TSM engine

  # CacheMaxMemorySize is the maximum size a shard's cache can
  # reach before it starts rejecting writes.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-max-memory-size = "1g"
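  # The size strings above ("1g", "25m") use 1024-based k/m/g suffixes, and
  # bare numbers are bytes. A small sketch of that rule as a hypothetical
  # helper for converting values before comparing them against system memory:

```python
def parse_size(text):
    """Parse an InfluxDB size string ('1g', '25m', '64k', or bare bytes) into bytes."""
    suffixes = {"k": 1024, "m": 1024**2, "g": 1024**3}
    text = text.strip().lower()
    if text and text[-1] in suffixes:
        return int(text[:-1]) * suffixes[text[-1]]
    return int(text)  # no suffix: the value is already in bytes

print(parse_size("1g"))   # 1073741824
print(parse_size("25m"))  # 26214400
```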

  # CacheSnapshotMemorySize is the size at which the engine will
  # snapshot the cache and write it to a TSM file, freeing up memory
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # cache-snapshot-memory-size = "25m"

  # CacheSnapshotWriteColdDuration is the length of time at
  # which the engine will snapshot the cache and write it to
  # a new TSM file if the shard hasn't received writes or deletes
  # cache-snapshot-write-cold-duration = "10m"

  # CompactFullWriteColdDuration is the duration at which the engine
  # will compact all TSM files in a shard if it hasn't received a
  # write or delete
  # compact-full-write-cold-duration = "4h"

  # The maximum number of concurrent full and level compactions that can run at one time.  A
  # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime.  Any number greater
  # than 0 limits compactions to that value.  This setting does not apply
  # to cache snapshotting.
  # max-concurrent-compactions = 0

  # CompactThroughput is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk. Note that short bursts are allowed
  # to happen at a possibly larger value, set by CompactThroughputBurst
  # compact-throughput = "48m"

  # CompactThroughputBurst is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk.
  # compact-throughput-burst = "48m"

  # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to
  # TSM files. This setting has been found to be problematic on some kernels, and defaults to off.
  # It might help users who have slow disks in some cases.
  # tsm-use-madv-willneed = false

  # Settings for the inmem index

  # The maximum series allowed per database before writes are dropped.  This limit can prevent
  # high cardinality issues at the database level.  This limit can be disabled by setting it to
  # 0.
  # max-series-per-database = 1000000

  # The maximum number of tag values per tag that are allowed before writes are dropped.  This limit
  # can prevent high cardinality tag values from being written to a measurement.  This limit can be
  # disabled by setting it to 0.
  # max-values-per-tag = 100000

  # Settings for the tsi1 index

  # The threshold, in bytes, when an index write-ahead log file will compact
  # into an index file. Lower sizes will cause log files to be compacted more
  # quickly and result in lower heap usage at the expense of write throughput.
  # Higher sizes will be compacted less frequently, store more series in-memory,
  # and provide higher write throughput.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # max-index-log-file-size = "1m"

  # The size of the internal cache used in the TSI index to store previously 
  # calculated series results. Cached results will be returned quickly from the cache rather
  # than needing to be recalculated when a subsequent query with a matching tag key/value 
  # predicate is executed. Setting this value to 0 will disable the cache, which may
  # lead to query performance issues.
  # This value should only be increased if it is known that the set of regularly used 
  # tag key/value predicates across all measurements for a database is larger than 100. An
  # increase in cache size may lead to an increase in heap usage.
  series-id-set-cache-size = 100

###
### [coordinator]
###
### Controls the clustering service configuration.
###

[coordinator]
  # The default time a write request will wait until a "timeout" error is returned to the caller.
  # write-timeout = "10s"

  # The maximum number of concurrent queries allowed to be executing at one time.  If a query is
  # executed and exceeds this limit, an error is returned to the caller.  This limit can be disabled
  # by setting it to 0.
  # max-concurrent-queries = 0

  # The maximum time a query is allowed to execute before being killed by the system.  This limit
  # can help prevent run away queries.  Setting the value to 0 disables the limit.
  # query-timeout = "0s"

  # The time threshold when a query will be logged as a slow query.  This limit can be set to help
  # discover slow or resource intensive queries.  Setting the value to 0 disables the slow query logging.
  # log-queries-after = "0s"

  # The maximum number of points a SELECT can process.  A value of 0 will make
  # the maximum point count unlimited.  This will only be checked every second so queries will not
  # be aborted immediately when hitting the limit.
  # max-select-point = 0

  # The maximum number of series a SELECT can run.  A value of 0 will make the maximum series
  # count unlimited.
  # max-select-series = 0

  # The maximum number of group by time buckets a SELECT can create.  A value of zero will make the
  # maximum number of buckets unlimited.
  # max-select-buckets = 0

###
### [retention]
###
### Controls the enforcement of retention policies for evicting old data.
###

[retention]
  # Determines whether retention policy enforcement is enabled.
  # enabled = true

  # The interval of time when retention policy enforcement checks run.
  # check-interval = "30m"

###
### [shard-precreation]
###
### Controls the precreation of shards, so they are available before data arrives.
### Only shards that, after creation, will have both a start- and end-time in the
### future, will ever be created. Shards are never precreated that would be wholly
### or partially in the past.

[shard-precreation]
  # Determines whether shard pre-creation service is enabled.
  # enabled = true

  # The interval of time when the check to pre-create new shards runs.
  # check-interval = "10m"

  # The default period ahead of the endtime of a shard group that its successor
  # group is created.
  # advance-period = "30m"

###
### [monitor]
###
### Controls the system self-monitoring, statistics and diagnostics.
###
### The internal database for monitoring data is created automatically
### if it does not already exist. The target retention within this database
### is called 'monitor' and is also created with a retention period of 7 days
### and a replication factor of 1, if it does not exist. In all cases
### this retention policy is configured as the default for the database.

[monitor]
  # Whether to record statistics internally.
  # store-enabled = true

  # The destination database for recorded statistics
  # store-database = "_internal"

  # The interval at which to record statistics
  # store-interval = "10s"

###
### [http]
###
### Controls how the HTTP endpoints are configured. These are the primary
### mechanism for getting data into and out of InfluxDB.
###

[http]
  # Determines whether HTTP endpoint is enabled.
  # enabled = true

  # Determines whether the Flux query endpoint is enabled.
  # flux-enabled = false

  # Determines whether the Flux query logging is enabled.
  # flux-log-enabled = false

  # The bind address used by the HTTP service.
  # bind-address = ":8086"

  # Determines whether user authentication is enabled over HTTP/HTTPS.
  # auth-enabled = false

  # The default realm sent back when issuing a basic auth challenge.
  # realm = "InfluxDB"

  # Determines whether HTTP request logging is enabled.
  # log-enabled = true

  # Determines whether the HTTP write request logs should be suppressed when the log is enabled.
  # suppress-write-log = false

  # When HTTP request logging is enabled, this option specifies the path where
  # log entries should be written. If unspecified, the default is to write to stderr, which
  # intermingles HTTP logs with internal InfluxDB logging.
  #
  # If influxd is unable to access the specified path, it will log an error and fall back to writing
  # the request log to stderr.
  # access-log-path = ""

  # Filters which requests should be logged. Each filter is of the pattern NNN, NNX, or NXX where N is
  # a number and X is a wildcard for any number. To filter all 5xx responses, use the string 5xx.
  # If multiple filters are used, then only one has to match. The default is to have no filters which
  # will cause every request to be printed.
  # access-log-status-filters = []
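  # The NNN/NNX/NXX filter rule above can be sketched as follows. This is a
  # hypothetical reimplementation of the matching logic as described, not
  # InfluxDB's own code:

```python
def status_matches(filters, status):
    """Return True if an HTTP status code matches any NNN/NNX/NXX-style filter.
    'x' (or 'X') is a wildcard digit; an empty filter list matches everything."""
    if not filters:
        return True  # no filters configured: log every request
    code = str(status)
    for f in filters:
        if len(f) == len(code) and all(
            fc in "xX" or fc == cc for fc, cc in zip(f, code)
        ):
            return True
    return False

print(status_matches(["5xx"], 503))         # True
print(status_matches(["4xx", "500"], 404))  # True
print(status_matches(["5xx"], 200))         # False
```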

  # Determines whether detailed write logging is enabled.
  # write-tracing = false

  # Determines whether the pprof endpoint is enabled.  This endpoint is used for
  # troubleshooting and monitoring.
  # pprof-enabled = true

  # Enables authentication on pprof endpoints. Users will need admin permissions
  # to access the pprof endpoints when this setting is enabled. This setting has
  # no effect if either auth-enabled or pprof-enabled are set to false.
  # pprof-auth-enabled = false

  # Enables a pprof endpoint that binds to localhost:6060 immediately on startup.
  # This is only needed to debug startup issues.
  # debug-pprof-enabled = false

  # Enables authentication on the /ping, /metrics, and deprecated /status
  # endpoints. This setting has no effect if auth-enabled is set to false.
  # ping-auth-enabled = false

  # Determines whether HTTPS is enabled.
  # https-enabled = false

  # The SSL certificate to use when HTTPS is enabled.
  # https-certificate = "/etc/ssl/influxdb.pem"

  # Use a separate private key location.
  # https-private-key = ""

  # The JWT auth shared secret to validate requests using JSON web tokens.
  # shared-secret = ""
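  # When shared-secret is set (and auth-enabled = true), clients may
  # authenticate with an HS256-signed JWT carrying username and exp claims.
  # A stdlib-only sketch of minting such a token; the user name "admin" and
  # the secret are placeholders, not values from this file:

```python
import base64, hashlib, hmac, json, time

def b64url(data):
    """Base64url-encode without padding, as JWT requires (RFC 7515)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_influxdb_jwt(username, shared_secret, ttl_seconds=3600):
    """Mint an HS256 JWT with the username/exp claims InfluxDB 1.x expects."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(
        {"username": username, "exp": int(time.time()) + ttl_seconds}
    ).encode())
    signing_input = ("%s.%s" % (header, payload)).encode()
    sig = hmac.new(shared_secret.encode(), signing_input, hashlib.sha256).digest()
    return "%s.%s.%s" % (header, payload, b64url(sig))

# Placeholder credentials; send the result as "Authorization: Bearer <token>".
token = make_influxdb_jwt("admin", "my-shared-secret")
print(token.count("."))  # 2
```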

  # The default chunk size for result sets that should be chunked.
  # max-row-limit = 0

  # The maximum number of HTTP connections that may be open at once.  New connections that
  # would exceed this limit are dropped.  Setting this value to 0 disables the limit.
  # max-connection-limit = 0

  # Enable http service over unix domain socket
  # unix-socket-enabled = false

  # The path of the unix domain socket.
  # bind-socket = "/var/run/influxdb.sock"

  # The maximum size of a client request body, in bytes. Setting this value to 0 disables the limit.
  # max-body-size = 25000000

  # The maximum number of writes processed concurrently.
  # Setting this to 0 disables the limit.
  # max-concurrent-write-limit = 0

  # The maximum number of writes queued for processing.
  # Setting this to 0 disables the limit.
  # max-enqueued-write-limit = 0

  # The maximum duration for a write to wait in the queue to be processed.
  # Setting this to 0 or setting max-concurrent-write-limit to 0 disables the limit.
  # enqueued-write-timeout = 0

	# User supplied HTTP response headers
	#
	# [http.headers]
	#   X-Header-1 = "Header Value 1"
	#   X-Header-2 = "Header Value 2"

###
### [logging]
###
### Controls how the logger emits logs to the output.
###

[logging]
  # Determines which log encoder to use for logs. Available options
  # are auto, logfmt, and json. auto will use a more user-friendly
  # output format if the output terminal is a TTY, but the format is not as
  # easily machine-readable. When the output is a non-TTY, auto will use
  # logfmt.
  # format = "auto"

  # Determines which level of logs will be emitted. The available levels
  # are error, warn, info, and debug. Logs that are equal to or above the
  # specified level will be emitted.
  # level = "info"

  # Suppresses the logo output that is printed when the program is started.
  # The logo is always suppressed if STDOUT is not a TTY.
  # suppress-logo = false

###
### [subscriber]
###
### Controls the subscriptions, which can be used to fork a copy of all data
### received by the InfluxDB host.
###

[subscriber]
  # Determines whether the subscriber service is enabled.
  # enabled = true

  # The default timeout for HTTP writes to subscribers.
  # http-timeout = "30s"

  # Allows insecure HTTPS connections to subscribers.  This is useful when testing with self-
  # signed certificates.
  # insecure-skip-verify = false

  # The path to the PEM encoded CA certs file. If the empty string, the default system certs will be used
  # ca-certs = ""

  # The number of writer goroutines processing the write channel.
  # write-concurrency = 40

  # The number of in-flight writes buffered in the write channel.
  # write-buffer-size = 1000


###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###

[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  # enabled = false
  # database = "graphite"
  # retention-policy = ""
  # bind-address = ":2003"
  # protocol = "tcp"
  # consistency-level = "one"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"
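  # The batch-size / batch-pending / batch-timeout knobs describe a buffer
  # that flushes when it fills up or when the timeout elapses. A simplified,
  # single-threaded sketch of that flush-on-size-or-timeout behavior (not
  # InfluxDB's actual implementation):

```python
import time

class PointBatcher:
    """Buffer points; flush when batch-size is reached or batch-timeout elapses."""
    def __init__(self, flush, batch_size=5000, batch_timeout=1.0, clock=time.monotonic):
        self.flush_fn = flush
        self.batch_size = batch_size
        self.batch_timeout = batch_timeout
        self.clock = clock
        self.buffer = []
        self.last_flush = clock()

    def add(self, point):
        self.buffer.append(point)
        if len(self.buffer) >= self.batch_size:
            self._flush()

    def tick(self):
        """Call periodically: flush if the timeout elapsed since the last flush."""
        if self.buffer and self.clock() - self.last_flush >= self.batch_timeout:
            self._flush()

    def _flush(self):
        self.flush_fn(self.buffer)
        self.buffer = []
        self.last_flush = self.clock()

flushed = []
b = PointBatcher(flushed.append, batch_size=3, batch_timeout=1.0)
for p in ("p1", "p2", "p3"):
    b.add(p)
print(flushed)  # [['p1', 'p2', 'p3']]
```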

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # udp-read-buffer = 0

  ### This string joins multiple matching 'measurement' values providing more control over the final measurement name.
  # separator = "."

  ### Default tags that will be added to all metrics.  These can be overridden at the template level
  ### or by tags extracted from metric
  # tags = ["region=us-east", "zone=1c"]

  ### Each template line requires a template pattern.  It can have an optional
  ### filter before the template and separated by spaces.  It can also have optional extra
  ### tags following the template.  Multiple tags should be separated by commas and no spaces
  ### similar to the line protocol format.  There can be only one default template.
  # templates = [
  #   "*.app env.service.resource.measurement",
  #   # Default template
  #   "server.*",
  # ]
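  ### A template such as "env.service.resource.measurement" maps each dotted
  ### segment of an incoming Graphite name to a tag or the measurement. A
  ### simplified sketch of applying one template line (filters, wildcards,
  ### and extra tags omitted):

```python
def apply_template(template, metric):
    """Map a dotted Graphite metric name onto a measurement and tags
    using one template pattern, e.g. 'env.service.resource.measurement'."""
    parts = metric.split(".")
    fields = template.split(".")
    tags, measurement = {}, []
    for name, value in zip(fields, parts):
        if name == "measurement":
            measurement.append(value)
        elif name != "":
            tags[name] = value
    return ".".join(measurement), tags

m, tags = apply_template("env.service.resource.measurement",
                         "prod.api.cpu.usage_idle")
print(m, tags)  # usage_idle {'env': 'prod', 'service': 'api', 'resource': 'cpu'}
```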

###
### [[collectd]]
###
### Controls one or many listeners for collectd data.
###

[[collectd]]
  # enabled = false
  # bind-address = ":25826"
  # database = "collectd"
  # retention-policy = ""
  #
  # The collectd service supports either scanning a directory for multiple types
  # db files, or specifying a single db file.
  # typesdb = "/usr/local/share/collectd"
  #
  # security-level = "none"
  # auth-file = "/etc/collectd/auth_file"

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "10s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0

  # Multi-value plugins can be handled two ways.
  # "split" will parse and store the multi-value plugin data into separate measurements
  # "join" will parse and store the multi-value plugin as a single multi-value measurement.
  # "split" is the default behavior for backward compatibility with previous versions of influxdb.
  # parse-multivalue-plugin = "split"

###
### [[opentsdb]]
###
### Controls one or many listeners for OpenTSDB data.
###

[[opentsdb]]
  # enabled = false
  # bind-address = ":4242"
  # database = "opentsdb"
  # retention-policy = ""
  # consistency-level = "one"
  # tls-enabled = false
  # certificate= "/etc/ssl/influxdb.pem"

  # Log an error for every malformed point.
  # log-point-errors = true

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Only
  # metrics received over the telnet protocol undergo batching.

  # Flush if this many points get buffered
  # batch-size = 1000

  # Number of batches that may be pending in memory
  # batch-pending = 5

  # Flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

###
### [[udp]]
###
### Controls the listeners for InfluxDB line protocol data via UDP.
###

[[udp]]
  # enabled = false
  # bind-address = ":8089"
  # database = "udp"
  # retention-policy = ""

  # InfluxDB precision for timestamps on received points ("" or "n", "u", "ms", "s", "m", "h")
  # precision = ""

  # These next lines control how batching works. You should have this enabled
  # otherwise you could get dropped metrics or poor performance. Batching
  # will buffer points in memory if you have many coming in.

  # Flush if this many points get buffered
  # batch-size = 5000

  # Number of batches that may be pending in memory
  # batch-pending = 10

  # Will flush at least this often even if we haven't hit buffer limit
  # batch-timeout = "1s"

  # UDP Read buffer size, 0 means OS default. UDP listener will fail if set above OS max.
  # read-buffer = 0
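  # With the [[udp]] listener enabled, clients write line-protocol points as
  # UDP datagrams. A sketch of building one point and sending it to the
  # default bind address above (the address/port are assumed from this file;
  # UDP is connectionless, so the send succeeds even with no listener):

```python
import socket, time

def line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one InfluxDB line-protocol point: measurement,tags fields timestamp."""
    tag_str = ",".join("%s=%s" % kv for kv in sorted(tags.items()))
    field_str = ",".join("%s=%s" % kv for kv in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return "%s,%s %s %d" % (measurement, tag_str, field_str, ts)

line = line_protocol("cpu", {"host": "server01"}, {"value": 0.64}, ts_ns=1)
print(line)  # cpu,host=server01 value=0.64 1

# Fire-and-forget write to the UDP listener.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(line.encode(), ("127.0.0.1", 8089))
sock.close()
```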

###
### [continuous_queries]
###
### Controls how continuous queries are run within InfluxDB.
###

[continuous_queries]
  # Determines whether the continuous query service is enabled.
  # enabled = true

  # Controls whether queries are logged when executed by the CQ service.
  # log-enabled = true

  # Controls whether queries are logged to the self-monitoring data store.
  # query-stats-enabled = false

  # interval for how often continuous queries will be checked if they need to run
  # run-interval = "1s"

###
### [tls]
###
### Global configuration settings for TLS in InfluxDB.
###

[tls]
  # Determines the available set of cipher suites. See https://golang.org/pkg/crypto/tls/#pkg-constants
  # for a list of available ciphers, which depends on the version of Go (use the query
  # SHOW DIAGNOSTICS to see the version of Go used to build InfluxDB). If not specified, uses
  # the default settings from Go's crypto/tls package.
  # ciphers = [
  #   "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
  #   "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
  # ]

  # Minimum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # min-version = "tls1.2"

  # Maximum version of the tls protocol that will be negotiated. If not specified, uses the
  # default settings from Go's crypto/tls package.
  # max-version = "tls1.3"
