This article introduces and benchmarks PostgreSQL's data insert speed on mid- to high-end x86 servers, to help enterprise users understand how PostgreSQL performs in pure-insert workloads.
(For example, carrier gateway data and financial-industry data are generated in large volumes and must be inserted into a large database quickly for persistent storage.)
In addition, users who need real-time stream processing can refer to the PostgreSQL-based streaming solution, which shows how real-time processing of 1 trillion records per day is achieved:
https://yq.aliyun.com/articles/166
Test results, summarized up front:
.1. The average record length is about 360 bytes.
.2. Ten rounds were run, each lasting 100 seconds; every round came in above 12 million rows per second, averaging around 12.1 million.
.3. That is roughly 12 million records inserted per second, equivalent to about 4.2 GB of data per second.
.4. Extrapolated to a full day, that is about 1.0368 trillion records, or roughly 363 TB of data.
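To make the per-day conversion explicit, the daily figures follow directly from the per-second ones. A quick sanity check in psql, using only the rounded values already quoted above (this is arithmetic, not an additional measurement):

-- rows per day and data volume per day, derived from the rounded per-second figures
SELECT
    12000000::bigint * 86400        AS rows_per_day,  -- 1,036,800,000,000 ≈ 1.0368 trillion rows
    round(4.2 * 86400 / 1000.0, 1)  AS tb_per_day;    -- 4.2 GB/s × 86,400 s ≈ 362.9 TB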
Test hardware environment
.1. x86 server
.2. 3? cores.
.3. 5?? GB RAM
.4. Several SSDs, 15 TB capacity
Software environment
.1. CentOS 6.x x64
.2. xfs
.3. PostgreSQL 9.5
System configuration reference
https://github.com/digoal/pgsql_admin_script/blob/master/pgsql_perf_tuning.md
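The linked script covers kernel-level tuning. Purely as an illustration of the kind of settings it deals with (the values below are assumptions for a large-memory PostgreSQL host, not a copy of the script; check the script itself for the actual values):

# Illustrative /etc/sysctl.conf entries for a large-memory PostgreSQL host (assumed values)
vm.swappiness = 0                      # keep database memory out of swap
vm.overcommit_memory = 2               # disable memory overcommit
vm.overcommit_ratio = 90
vm.dirty_background_bytes = 102400000  # start background writeback early
vm.dirty_bytes = 1024000000            # cap dirty pages to smooth out I/O spikes
# apply with: sysctl -p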
Database configuration
./configure --prefix=/home/digoal/pgsql9.5.1 --with-blocksize=32 --with-segsize=128 --with-wal-blocksize=32 --with-wal-segsize=64
make && make install
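The article moves straight on to the parameters; for completeness, a typical initialization sequence after the build could look like the following (the data directory and encoding/locale choices are assumptions; the prefix matches the configure line above and the port matches the parameters below):

export PATH=/home/digoal/pgsql9.5.1/bin:$PATH
initdb -D /data01/pgdata -E UTF8 --locale=C -U postgres   # data directory is an assumed path
# edit /data01/pgdata/postgresql.conf with the parameters listed below, then:
pg_ctl -D /data01/pgdata start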
For how to enable huge pages for PostgreSQL, see:
https://yq.aliyun.com/articles/8482
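Because shared_buffers is set to 256GB below with huge_pages = on, the kernel has to reserve enough huge pages before the server starts. A minimal sketch, assuming the default 2MB huge page size (the reserve count is an estimate with some headroom, not a value taken from the test machine):

# 256GB / 2MB = 131072 pages; reserve extra for shared-memory overhead
echo "vm.nr_hugepages = 135000" >> /etc/sysctl.conf
sysctl -p
grep Huge /proc/meminfo   # confirm HugePages_Total is large enough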
Parameters (postgresql.conf)
listen_addresses = '0.0.0.0' # what IP address(es) to listen on;
fsync = on
port = 1921 # (change requires restart)
max_connections = 600 # (change requires restart)
superuser_reserved_connections = 13 # (change requires restart)
unix_socket_directories = '.' # comma-separated list of directories
unix_socket_permissions = 0700 # begin with 0 to use octal notation
tcp_keepalives_idle = 60 # TCP_KEEPIDLE, in seconds;
tcp_keepalives_interval = 10 # TCP_KEEPINTVL, in seconds;
tcp_keepalives_count = 10 # TCP_KEEPCNT;
shared_buffers = 256GB # min 128kB
huge_pages = on # on, off, or try
work_mem = 512MB # min 64kB
maintenance_work_mem = 1GB # min 1MB
autovacuum_work_mem = 1GB # min 1MB, or -1 to use maintenance_work_mem
dynamic_shared_memory_type = posix # the default is the first option
bgwriter_delay = 10ms # 10-10000ms between rounds
bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round
bgwriter_lru_multiplier = 2.0
synchronous_commit = off # synchronization level;
full_page_writes = on # recover from partial page writes
wal_buffers =