❝开头还是介绍一下群,如果感兴趣PolarDB ,MongoDB ,MySQL ,PostgreSQL ,Redis, OceanBase, Sql Server等有问题,有需求都可以加群群内有各大数据库行业大咖,可以解决你的问题。加群请联系 liuaustin3 ,(共3300人左右 1 + 2 + 3 + 4 +5 + 6 + 7 + 8 +9)(1 2 3 4 5 6 7群均已爆满,开8群近400 9群 200+,开10群PolarDB专业学习群100+)
上周在群里发了一些PostgreSQL的免费书籍(电子),多个群里马上又互动,提出很多的PG问题,从今天开始,AustinDatabases 开启一周的PostgreSQL大写,这一周都是PostgreSQL的技术,和回答群友的问题。
今天我们周一,开启一个大戏,都说PostgreSQL更新快,那么我们就开始把PG13-PG18,把6个版本的PostgreSQL 用统一的方法安装,用一台机器,用统一的方法POC,看看那个PostgreSQL 版本有过人之处,还是性能一直很稳定。
方案一台4C 8G的虚拟机,上面通过snapshot 每个上面都安装了不同版本的PG,测试一次,snapshot打到没有安装的PG的状态,然后装下一个PG,保证测试的,公平,公正,公允。
废话没有,第一个上线的是PostgreSQL 18,测试方案也很简单
1 pgbench 进行统一的测试,并且测试三次,取平均成绩
2 写一段复杂的SQL,通过100万的多个表的JOIN 复杂操作,查看执行计划的变化
测试方案很简单,我们等待结果
pgbench (18.0)
starting vacuum...end.
progress: 5.0 s, 812.3 tps, lat 12.143 ms stddev 4.706, 0 failed
progress: 10.0 s, 743.0 tps, lat 13.386 ms stddev 7.498, 0 failed
progress: 15.0 s, 792.3 tps, lat 12.565 ms stddev 4.806, 0 failed
progress: 20.0 s, 742.1 tps, lat 13.413 ms stddev 8.346, 0 failed
progress: 25.0 s, 790.2 tps, lat 12.595 ms stddev 5.204, 0 failed
progress: 30.0 s, 798.8 tps, lat 12.452 ms stddev 4.732, 0 failed
2025-10-29 06:56:13.850 EDT [75406] LOG: checkpoint starting: time
progress: 35.0 s, 678.5 tps, lat 14.673 ms stddev 41.629, 0 failed
progress: 40.0 s, 808.8 tps, lat 12.313 ms stddev 4.761, 0 failed
progress: 45.0 s, 696.1 tps, lat 14.297 ms stddev 6.334, 0 failed
progress: 50.0 s, 570.2 tps, lat 17.449 ms stddev 48.272, 0 failed
progress: 55.0 s, 719.4 tps, lat 13.838 ms stddev 5.533, 0 failed
progress: 60.0 s, 738.8 tps, lat 13.442 ms stddev 5.282, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 44461
number of failed transactions: 0 (0.000%)
latency average = 13.429 ms
latency stddev = 17.689 ms
第一次数据 number of transactions actually processed: 44461 number of failed transactions: 0 (0.000%)
latency average = 13.429 ms latency stddev = 17.689 ms initial connection time = 38.095 ms
第二次数据 number of transactions actually processed: 46101 number of failed transactions: 0 (0.000%)
latency average = 12.942 ms latency stddev = 5.473 ms initial connection time = 41.238 ms
第三次数据 number of transactions actually processed: 44300 number of failed transactions: 0 (0.000%)
latency average = 13.475 ms latency stddev = 43.424 ms initial connection time = 26.475 ms
本地连接,没有使用网络连接,链接的时间,和延迟波动比较大。我实际上又多测了几次,也是一样。执行的事务数还是比较平稳的。 负责语句执行的结果
Limit (cost=201709.20..201709.21 rows=1 width=122) (actual time=136579.199..136579.684 rows=3.00 loops=1)
Buffers: shared hit=4532 read=19746, temp read=14705 written=14711
CTE recent_orders
-> Bitmap Heap Scan on orders o (cost=5512.62..21455.72 rows=490863 width=26) (actual time=27.673..6695.270 rows=493110.00 loops=1)
Recheck Cond: (order_date > (now() - '180 days'::interval))
Heap Blocks: exact=7353
Buffers: shared hit=71 read=7713
-> Bitmap Index Scan on idx_orders_date (cost=0.00..5389.90 rows=490863 width=0) (actual time=26.047..26.058 rows=493110.00 loops=1)
Index Cond: (order_date > (now() - '180 days'::interval))
Index Searches: 1
Buffers: shared hit=3 read=428
CTE customer_region_sales
-> GroupAggregate (cost=115506.50..120415.17 rows=3 width=45) (actual time=72586.841..83279.074 rows=3.00 loops=1)
Group Key: c.region
Buffers: shared hit=4397 read=11623, temp read=6680 written=9214
-> Sort (cost=115506.50..116733.66 rows=490863 width=25) (actual time=69253.277..76183.559 rows=493110.00 loops=1)
Sort Key: c.region, r_1.customer_id
Sort Method: external merge Disk: 12560kB
Buffers: shared hit=4397 read=11623, temp read=6680 written=9214
-> Hash Join (cost=35619.00..57361.78 rows=490863 width=25) (actual time=26877.003..61450.434 rows=493110.00 loops=1)
Hash Cond: (r_1.customer_id = c.customer_id)
Buffers: shared hit=4397 read=11623, temp read=5110 written=7638
-> CTE Scan on recent_orders r_1 (cost=0.00..9817.26 rows=490863 width=20) (actual time=27.697..20310.716 rows=493110.00 loops=1)
Storage: Disk Maximum Storage: 20226kB
Buffers: shared hit=71 read=7713, temp written=2528
-> Hash (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=26849.161..26849.191 rows=1000000.00 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7423kB
Buffers: shared hit=4326 read=3910, temp written=3415
-> Seq Scan on customers c (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.024..13199.553 rows=1000000.00 loops=1)
Buffers: shared hit=4326 read=3910
InitPlan 3
-> Aggregate (cost=0.07..0.08 rows=1 width=32) (actual time=10692.491..10692.558 rows=1.00 loops=1)
Buffers: temp read=1080
-> CTE Scan on customer_region_sales (cost=0.00..0.06 rows=3 width=32) (actual time=0.013..10692.430 rows=3.00 loops=1)
Storage: Memory Maximum Storage: 17kB
Buffers: temp read=1080
-> Sort (cost=59838.23..59838.23 rows=1 width=122) (actual time=136579.180..136579.353 rows=3.00 loops=1)
Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=4532 read=19746, temp read=14705 written=14711
-> Nested Loop (cost=59838.09..59838.22 rows=1 width=122) (actual time=136578.778..136579.234 rows=3.00 loops=1)
Buffers: shared hit=4532 read=19746, temp read=14705 written=14711
-> CTE Scan on customer_region_sales cr (cost=0.00..0.07 rows=1 width=64) (actual time=83279.397..83279.417 rows=1.00 loops=1)
Filter: (total_sales > (InitPlan 3).col1)
Rows Removed by Filter: 2
Storage: Memory Maximum Storage: 17kB
Buffers: shared hit=4397 read=11623, temp read=6680 written=9214
-> HashAggregate (cost=59838.09..59838.14 rows=1 width=42) (actual time=53299.345..53299.468 rows=3.00 loops=1)
Group Key: p.category
Filter: (sum(r.amount) > '0'::numeric)
Batches: 1 Memory Usage: 32kB
Buffers: shared hit=135 read=8123, temp read=8025 written=5497
-> Hash Join (cost=35641.00..57383.78 rows=490863 width=26) (actual time=26384.503..46473.180 rows=493110.00 loops=1)
Hash Cond: (r.product_id = p.product_id)
Buffers: shared hit=135 read=8123, temp read=8025 written=5497
-> CTE Scan on recent_orders r (cost=0.00..9817.26 rows=490863 width=20) (actual time=0.067..6585.055 rows=493110.00 loops=1)
Storage: Disk Maximum Storage: 20226kB
Buffers: temp read=2529 written=1
-> Hash (cost=18258.00..18258.00 rows=1000000 width=14) (actual time=26382.380..26382.409 rows=1000000.00 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7862kB
Buffers: shared hit=135 read=8123, temp written=3799
-> Seq Scan on products p (cost=0.00..18258.00 rows=1000000 width=14) (actual time=0.768..12843.782 rows=1000000.00 loops=1)
Buffers: shared hit=135 read=8123
Planning:
Buffers: shared hit=139 read=4
Planning Time: 1.342 ms
Execution Time: 136583.667 ms
(67 rows)
Postgresql 17
pgbench (17.6)
starting vacuum...end.
progress: 5.0 s, 827.6 tps, lat 11.921 ms stddev 4.929, 0 failed
progress: 10.0 s, 818.0 tps, lat 12.187 ms stddev 5.044, 0 failed
progress: 15.0 s, 840.7 tps, lat 11.834 ms stddev 4.818, 0 failed
progress: 20.0 s, 773.9 tps, lat 12.857 ms stddev 5.174, 0 failed
progress: 25.0 s, 813.9 tps, lat 12.217 ms stddev 4.633, 0 failed
progress: 30.0 s, 734.4 tps, lat 13.543 ms stddev 5.530, 0 failed
progress: 35.0 s, 673.3 tps, lat 14.792 ms stddev 6.940, 0 failed
progress: 40.0 s, 712.3 tps, lat 13.956 ms stddev 6.171, 0 failed
progress: 45.0 s, 671.3 tps, lat 14.821 ms stddev 7.001, 0 failed
progress: 50.0 s, 695.0 tps, lat 14.320 ms stddev 6.603, 0 failed
progress: 55.0 s, 729.7 tps, lat 13.636 ms stddev 6.247, 0 failed
progress: 60.0 s, 712.2 tps, lat 13.969 ms stddev 6.286, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 45021
number of failed transactions: 0 (0.000%)
latency average = 13.258 ms
latency stddev = 5.877 ms
initial connection time = 28.814 ms
第一次 number of transactions actually processed: 45021 number of failed transactions: 0 (0.000%) latency average = 13.258 ms latency stddev = 5.877 ms initial connection time = 28.814 ms 第二次 number of transactions actually processed: 40094 number of failed transactions: 0 (0.000%) latency average = 14.881 ms latency stddev = 54.785 ms initial connection time = 22.817 ms 第三次 number of transactions actually processed: 44396 number of failed transactions: 0 (0.000%) latency average = 13.445 ms latency stddev = 6.160 ms initial connection time = 28.981 ms
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=202177.59..202177.59 rows=1 width=122) (actual time=134721.092..134721.592 rows=3 loops=1)
CTE recent_orders
-> Bitmap Heap Scan on orders o (cost=5531.95..21509.67 rows=492841 width=26) (actual time=30.416..6473.857 rows=493571 loops=1)
Recheck Cond: (order_date > (now() - '180 days'::interval))
Heap Blocks: exact=7353
-> Bitmap Index Scan on idx_orders_date (cost=0.00..5408.74 rows=492841 width=0) (actual time=28.058..28.069 rows=493571 loops=1)
Index Cond: (order_date > (now() - '180 days'::interval))
CTE customer_region_sales
-> GroupAggregate (cost=115823.52..120751.97 rows=3 width=45) (actual time=70701.192..80724.460 rows=3 loops=1)
Group Key: c.region
-> Sort (cost=115823.52..117055.63 rows=492841 width=25) (actual time=67346.210..73949.407 rows=493571 loops=1)
Sort Key: c.region, r_1.customer_id
Sort Method: external merge Disk: 12568kB
-> Hash Join (cost=35619.00..57428.53 rows=492841 width=25) (actual time=26457.171..59488.543 rows=493571 loops=1)
Hash Cond: (r_1.customer_id = c.customer_id)
-> CTE Scan on recent_orders r_1 (cost=0.00..9856.82 rows=492841 width=20) (actual time=30.449..19193.383 rows=493571 loops=1)
-> Hash (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=26426.260..26426.292 rows=1000000 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7423kB
-> Seq Scan on customers c (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.074..12974.367 rows=1000000 loops=1)
InitPlan 3
-> Aggregate (cost=0.07..0.08 rows=1 width=32) (actual time=10023.292..10023.332 rows=1 loops=1)
-> CTE Scan on customer_region_sales (cost=0.00..0.06 rows=3 width=32) (actual time=0.011..10023.239 rows=3 loops=1)
-> Sort (cost=59915.87..59915.88 rows=1 width=122) (actual time=134721.068..134721.271 rows=3 loops=1)
Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=59915.73..59915.86 rows=1 width=122) (actual time=134720.648..134721.139 rows=3 loops=1)
-> CTE Scan on customer_region_sales cr (cost=0.00..0.07 rows=1 width=64) (actual time=80724.543..80724.567 rows=1 loops=1)
Filter: (total_sales > (InitPlan 3).col1)
Rows Removed by Filter: 2
-> HashAggregate (cost=59915.73..59915.78 rows=1 width=42) (actual time=53996.068..53996.401 rows=3 loops=1)
Group Key: p.category
Filter: (sum(r.amount) > '0'::numeric)
Batches: 1 Memory Usage: 24kB
-> Hash Join (cost=35642.00..57451.53 rows=492841 width=26) (actual time=25899.237..46433.415 rows=493571 loops=1)
Hash Cond: (r.product_id = p.product_id)
-> CTE Scan on recent_orders r (cost=0.00..9856.82 rows=492841 width=20) (actual time=0.067..6424.072 rows=493571 loops=1)
-> Hash (cost=18259.00..18259.00 rows=1000000 width=14) (actual time=25898.919..25898.948 rows=1000000 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7863kB
-> Seq Scan on products p (cost=0.00..18259.00 rows=1000000 width=14) (actual time=0.045..12762.156 rows=1000000 loops=1)
Planning Time: 0.844 ms
Execution Time: 134725.994 ms
(41 rows)
Postgresql 16
psql (16.10)
Type "help"forhelp.
postgres=# create database testdb
postgres-# ;
CREATE DATABASE
postgres=# exit
[postgres@postgresql_per ~]$ pgbench -i -s 10 testdb
dropping old tables...
NOTICE: table "pgbench_accounts" does not exist, skipping
NOTICE: table "pgbench_branches" does not exist, skipping
NOTICE: table "pgbench_history" does not exist, skipping
NOTICE: table "pgbench_tellers" does not exist, skipping
creating tables...
generating data (client-side)...
1000000 of 1000000 tuples (100%) done (elapsed 3.29 s, remaining 0.00 s)
vacuuming...
creating primary keys...
donein 5.36 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 3.46 s, vacuum 0.30 s, primary keys 1.59 s).
[postgres@postgresql_per ~]$ pgbench -c 10 -j 4 -T 60 -P 5 testdb
pgbench (16.10)
starting vacuum...end.
progress: 5.0 s, 674.4 tps, lat 14.633 ms stddev 6.612, 0 failed
progress: 10.0 s, 684.3 tps, lat 14.534 ms stddev 5.374, 0 failed
progress: 15.0 s, 728.1 tps, lat 13.667 ms stddev 5.294, 0 failed
progress: 20.0 s, 715.1 tps, lat 13.915 ms stddev 5.426, 0 failed
progress: 25.0 s, 692.2 tps, lat 14.378 ms stddev 7.809, 0 failed
progress: 30.0 s, 706.6 tps, lat 14.082 ms stddev 5.751, 0 failed
progress: 35.0 s, 722.7 tps, lat 13.769 ms stddev 6.108, 0 failed
progress: 40.0 s, 711.7 tps, lat 13.985 ms stddev 6.354, 0 failed
progress: 45.2 s, 575.9 tps, lat 16.199 ms stddev 13.069, 0 failed
progress: 50.0 s, 256.8 tps, lat 41.007 ms stddev 255.568, 0 failed
progress: 55.0 s, 659.9 tps, lat 15.082 ms stddev 6.924, 0 failed
progress: 60.0 s, 514.3 tps, lat 19.371 ms stddev 66.638, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 38294
number of failed transactions: 0 (0.000%)
latency average = 15.576 ms
latency stddev = 49.561 ms
initial connection time = 35.780 ms
第一次、 number of transactions actually processed: 38294 number of failed transactions: 0 (0.000%) latency average = 15.576 ms latency stddev = 49.561 ms initial connection time = 35.780 ms
第二次 number of transactions actually processed: 39347 number of failed transactions: 0 (0.000%) latency average = 15.176 ms latency stddev = 25.550 ms initial connection time = 24.400 ms
第三次 number of transactions actually processed: 39722 number of failed transactions: 0 (0.000%) latency average = 15.028 ms latency stddev = 45.517 ms initial connection time = 20.483 ms
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=202043.54..202043.54 rows=1 width=122) (actual time=141646.636..141647.115 rows=3 loops=1)
CTE recent_orders
-> Bitmap Heap Scan on orders o (cost=5527.48..21495.10 rows=492264 width=26) (actual time=27.932..7376.182 rows=493192 loops=1)
Recheck Cond: (order_date > (now() - '180 days'::interval))
Heap Blocks: exact=7353
-> Bitmap Index Scan on idx_orders_date (cost=0.00..5404.41 rows=492264 width=0) (actual time=25.764..25.775 rows=493192 loops=1)
Index Cond: (order_date > (now() - '180 days'::interval))
CTE customer_region_sales
-> GroupAggregate (cost=115731.75..120654.43 rows=3 width=45) (actual time=74477.656..85285.844 rows=3 loops=1)
Group Key: c.region
-> Sort (cost=115731.75..116962.41 rows=492264 width=25) (actual time=71252.970..78200.817 rows=493192 loops=1)
Sort Key: c.region, r_1.customer_id
Sort Method: external merge Disk: 12576kB
-> Hash Join (cost=35619.00..57409.47 rows=492264 width=25) (actual time=26945.733..63523.074 rows=493192 loops=1)
Hash Cond: (r_1.customer_id = c.customer_id)
-> CTE Scan on recent_orders r_1 (cost=0.00..9845.28 rows=492264 width=20) (actual time=27.960..21901.926 rows=493192 loops=1)
-> Hash (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=26917.602..26917.632 rows=1000000 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7423kB
-> Seq Scan on customers c (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.042..13289.864 rows=1000000 loops=1)
InitPlan 3 (returns $2)
-> Aggregate (cost=0.07..0.08 rows=1 width=32) (actual time=10808.214..10808.257 rows=1 loops=1)
-> CTE Scan on customer_region_sales (cost=0.00..0.06 rows=3 width=32) (actual time=0.010..10808.159 rows=3 loops=1)
-> Sort (cost=59893.93..59893.94 rows=1 width=122) (actual time=141646.612..141646.792 rows=3 loops=1)
Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=59893.79..59893.92 rows=1 width=122) (actual time=141646.430..141646.678 rows=3 loops=1)
-> CTE Scan on customer_region_sales cr (cost=0.00..0.07 rows=1 width=64) (actual time=85285.948..85285.969 rows=1 loops=1)
Filter: (total_sales > $2)
Rows Removed by Filter: 2
-> HashAggregate (cost=59893.79..59893.84 rows=1 width=42) (actual time=56360.441..56360.565 rows=3 loops=1)
Group Key: p.category
Filter: (sum(r.amount) > '0'::numeric)
--More--2025-10-29 06:44:56.053 EDT [75860] LOG: checkpoint complete: wrote 818 buffers (5.0%); 0 WAL file(s) added, 0 removed, 8 recycled; write=269.424 s, sync=0.004 s, total=269.461 s; sync files=82, longest=0.001 s, average=0.001 s; distance=520841 kB, estimate=520841 kB; lsn=0/54A2B588, redo lsn=0/367707A8
Batches: 1 Memory Usage: 24kB
-> Hash Join (cost=35642.00..57432.47 rows=492264 width=26) (actual time=28094.069..49188.029 rows=493192 loops=1)
Hash Cond: (r.product_id = p.product_id)
-> CTE Scan on recent_orders r (cost=0.00..9845.28 rows=492264 width=20) (actual time=0.084..6833.758 rows=493192 loops=1)
-> Hash (cost=18259.00..18259.00 rows=1000000 width=14) (actual time=28092.844..28092.874 rows=1000000 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7862kB
-> Seq Scan on products p (cost=0.00..18259.00 rows=1000000 width=14) (actual time=0.053..13945.103 rows=1000000 loops=1)
Planning Time: 0.663 ms
Execution Time: 141650.993 ms
Postgresql 15
pgbench (15.14)
starting vacuum...end.
progress: 5.0 s, 816.7 tps, lat 12.105 ms stddev 4.825, 0 failed
progress: 10.0 s, 670.8 tps, lat 14.848 ms stddev 40.750, 0 failed
progress: 15.0 s, 780.8 tps, lat 12.744 ms stddev 5.075, 0 failed
progress: 20.0 s, 742.8 tps, lat 13.396 ms stddev 5.242, 0 failed
progress: 25.0 s, 783.2 tps, lat 12.694 ms stddev 4.906, 0 failed
progress: 30.0 s, 628.3 tps, lat 15.842 ms stddev 44.243, 0 failed
progress: 35.0 s, 697.6 tps, lat 14.269 ms stddev 6.227, 0 failed
progress: 40.0 s, 683.6 tps, lat 14.552 ms stddev 6.956, 0 failed
progress: 45.0 s, 639.6 tps, lat 15.572 ms stddev 7.398, 0 failed
progress: 50.0 s, 684.0 tps, lat 14.540 ms stddev 6.547, 0 failed
progress: 55.0 s, 685.5 tps, lat 14.518 ms stddev 6.680, 0 failed
progress: 60.0 s, 717.4 tps, lat 13.883 ms stddev 6.188, 0 failed
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
maximum number of tries: 1
duration: 60 s
number of transactions actually processed: 42658
number of failed transactions: 0 (0.000%)
latency average = 13.997 ms
latency stddev = 17.516 ms
initial connection time = 27.675 ms
第一次 number of transactions actually processed: 42658 number of failed transactions: 0 (0.000%) latency average = 13.997 ms latency stddev = 17.516 ms initial connection time = 27.675 ms
第二次 number of transactions actually processed: 39093 number of failed transactions: 0 (0.000%) latency average = 15.264 ms latency stddev = 68.177 ms initial connection time = 28.666 ms
第三次 number of transactions actually processed: 43566 number of failed transactions: 0 (0.000%) latency average = 13.705 ms latency stddev = 13.223 ms initial connection time = 22.777 ms
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=203768.68..203768.69 rows=1 width=122) (actual time=135694.944..135695.433 rows=3 loops=1)
CTE recent_orders
-> Bitmap Heap Scan on orders o (cost=5611.65..21706.11 rows=499512 width=26) (actual time=47.349..7382.349 rows=493237 loops=1)
Recheck Cond: (order_date > (now() - '180 days'::interval))
Heap Blocks: exact=7353
-> Bitmap Index Scan on idx_orders_date (cost=0.00..5486.77 rows=499512 width=0) (actual time=44.939..44.950 rows=493237 loops=1)
Index Cond: (order_date > (now() - '180 days'::interval))
CTE customer_region_sales
-> GroupAggregate (cost=116889.17..121884.33 rows=3 width=45) (actual time=70919.808..81249.173 rows=3 loops=1)
Group Key: c.region
-> Sort (cost=116889.17..118137.95 rows=499512 width=25) (actual time=67839.014..74255.680 rows=493237 loops=1)
Sort Key: c.region
Sort Method: external merge Disk: 12568kB
-> Hash Join (cost=35619.00..57657.46 rows=499512 width=25) (actual time=25171.284..61292.230 rows=493237 loops=1)
Hash Cond: (r_1.customer_id = c.customer_id)
-> CTE Scan on recent_orders r_1 (cost=0.00..9990.24 rows=499512 width=20) (actual time=47.379..21751.624 rows=493237 loops=1)
-> Hash (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=25123.723..25123.755 rows=1000000 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7423kB
-> Seq Scan on customers c (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.298..12409.388 rows=1000000 loops=1)
InitPlan 3 (returns $2)
-> Aggregate (cost=0.07..0.08 rows=1 width=32) (actual time=10329.461..10329.501 rows=1 loops=1)
-> CTE Scan on customer_region_sales (cost=0.00..0.06 rows=3 width=32) (actual time=0.012..10329.357 rows=3 loops=1)
-> Sort (cost=60178.17..60178.17 rows=1 width=122) (actual time=135694.920..135695.106 rows=3 loops=1)
Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=60178.02..60178.16 rows=1 width=122) (actual time=135694.450..135694.713 rows=3 loops=1)
-> CTE Scan on customer_region_sales cr (cost=0.00..0.07 rows=1 width=64) (actual time=81249.335..81249.357 rows=1 loops=1)
Filter: (total_sales > $2)
Rows Removed by Filter: 2
-> HashAggregate (cost=60178.02..60178.06 rows=1 width=42) (actual time=54445.070..54445.201 rows=3 loops=1)
Group Key: p.category
Filter: (sum(r.amount) > '0'::numeric)
Batches: 1 Memory Usage: 24kB
-> Hash Join (cost=35642.00..57680.46 rows=499512 width=26) (actual time=26966.876..47498.985 rows=493237 loops=1)
Hash Cond: (r.product_id = p.product_id)
-> CTE Scan on recent_orders r (cost=0.00..9990.24 rows=499512 width=20) (actual time=0.044..6760.919 rows=493237 loops=1)
-> Hash (cost=18259.00..18259.00 rows=1000000 width=14) (actual time=26965.784..26965.816 rows=1000000 loops=1)
Buckets: 262144 Batches: 8 Memory Usage: 7863kB
-> Seq Scan on products p (cost=0.00..18259.00 rows=1000000 width=14) (actual time=0.041..13367.930 rows=1000000 loops=1)
Planning Time: 1.149 ms
Execution Time: 135699.874 ms
(41 rows)
Postgresql 14
[postgres@postgresql_per ~]$ pgbench -c 10 -j 4 -T 60 -P 5 testdb
pgbench (14.19)
starting vacuum...end.
progress: 5.0 s, 650.3 tps, lat 15.171 ms stddev 8.785
progress: 10.0 s, 664.7 tps, lat 14.975 ms stddev 7.133
progress: 15.0 s, 486.1 tps, lat 20.476 ms stddev 9.361
progress: 20.0 s, 457.1 tps, lat 21.751 ms stddev 10.893
progress: 25.0 s, 658.9 tps, lat 15.121 ms stddev 7.718
progress: 30.0 s, 719.2 tps, lat 13.841 ms stddev 5.391
progress: 35.0 s, 747.0 tps, lat 13.312 ms stddev 5.212
progress: 40.0 s, 650.2 tps, lat 15.303 ms stddev 6.692
progress: 45.0 s, 589.3 tps, lat 16.892 ms stddev 9.933
progress: 50.0 s, 682.1 tps, lat 14.599 ms stddev 7.091
progress: 55.1 s, 588.5 tps, lat 16.156 ms stddev 9.370
progress: 60.0 s, 338.7 tps, lat 30.380 ms stddev 116.710
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 4
duration: 60 s
number of transactions actually processed: 36186
latency average = 16.481 ms
latency stddev = 26.520 ms
initial connection time = 36.958 ms
第一次 number of transactions actually processed: 36186 latency average = 16.481 ms latency stddev = 26.520 ms initial connection time = 36.958 ms 第二次 duration: 60 s number of transactions actually processed: 33812 latency average = 17.660 ms latency stddev = 28.959 ms initial connection time = 31.180 ms
第三次 number of transactions actually processed: 42933 latency average = 13.905 ms latency stddev = 19.358 ms initial connection time = 24.180 ms
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=202605.45..202605.46 rows=1 width=122) (actual time=132532.871..132533.281 rows=3 loops=1)
CTE recent_orders
-> Bitmap Heap Scan on orders o (cost=5553.80..21562.81 rows=494629 width=26) (actual time=33.261..6672.720 rows=493730 loops=1)
Recheck Cond: (order_date > (now() - '180 days'::interval))
Heap Blocks: exact=7353
-> Bitmap Index Scan on idx_orders_date (cost=0.00..5430.15 rows=494629 width=0) (actual time=31.027..31.036 rows=493730 loops=1)
Index Cond: (order_date > (now() - '180 days'::interval))
CTE customer_region_sales
-> GroupAggregate (cost=116109.96..121056.29 rows=3 width=45) (actual time=69847.950..80502.334 rows=3 loops=1)
Group Key: c.region
-> Sort (cost=116109.96..117346.53 rows=494629 width=25) (actual time=66898.332..73361.589 rows=493730 loops=1)
Sort Key: c.region
Sort Method: external merge Disk: 12592kB
-> Hash Join (cost=35619.00..57490.98 rows=494629 width=25) (actual time=27799.451..60883.605 rows=493730 loops=1)
Hash Cond: (r_1.customer_id = c.customer_id)
-> CTE Scan on recent_orders r_1 (cost=0.00..9892.58 rows=494629 width=20) (actual time=33.304..19813.916 rows=493730 loops=1)
-> Hash (cost=18236.00..18236.00 rows=1000000 width=9) (actual time=27764.045..27764.071 rows=1000000 loops=1)
Buckets: 131072 Batches: 16 Memory Usage: 3716kB
-> Seq Scan on customers c (cost=0.00..18236.00 rows=1000000 width=9) (actual time=0.028..13740.607 rows=1000000 loops=1)
InitPlan 3 (returns $2)
-> Aggregate (cost=0.07..0.08 rows=1 width=32) (actual time=10654.742..10654.779 rows=1 loops=1)
-> CTE Scan on customer_region_sales (cost=0.00..0.06 rows=3 width=32) (actual time=0.015..10654.666 rows=3 loops=1)
-> Sort (cost=59986.27..59986.28 rows=1 width=122) (actual time=132532.842..132533.001 rows=3 loops=1)
Sort Key: (((cr.total_sales / (sum(r.amount))))::numeric(10,2)) DESC
Sort Method: quicksort Memory: 25kB
-> Nested Loop (cost=59986.13..59986.26 rows=1 width=122) (actual time=132532.669..132532.888 rows=3 loops=1)
-> CTE Scan on customer_region_sales cr (cost=0.00..0.07 rows=1 width=64) (actual time=80502.766..80502.785 rows=1 loops=1)
Filter: (total_sales > $2)
Rows Removed by Filter: 2
-> HashAggregate (cost=59986.13..59986.17 rows=1 width=42) (actual time=52029.860..52029.968 rows=3 loops=1)
Group Key: p.category
Filter: (sum(r.amount) > '0'::numeric)
Batches: 1 Memory Usage: 24kB
-> Hash Join (cost=35641.00..57512.98 rows=494629 width=26) (actual time=26335.511..45227.750 rows=493730 loops=1)
Hash Cond: (r.product_id = p.product_id)
-> CTE Scan on recent_orders r (cost=0.00..9892.58 rows=494629 width=20) (actual time=0.048..5960.590 rows=493730 loops=1)
-> Hash (cost=18258.00..18258.00 rows=1000000 width=14) (actual time=26334.202..26334.228 rows=1000000 loops=1)
Buckets: 131072 Batches: 16 Memory Usage: 3937kB
-> Seq Scan on products p (cost=0.00..18258.00 rows=1000000 width=14) (actual time=0.028..13058.222 rows=1000000 loops=1)
Planning Time: 1.454 ms
Execution Time: 132537.670 ms
这里我们使用的测试脚本SQL是下面的代码
-- ========================================
-- Step 1. 创建基础表结构
-- ========================================
DROP TABLE IF EXISTS customers CASCADE;
DROP TABLE IF EXISTS products CASCADE;
DROP TABLE IF EXISTS orders CASCADE;
CREATE TABLE customers (
customer_id SERIAL PRIMARY KEY,
name TEXT,
region TEXT,
join_date TIMESTAMP
);
CREATE TABLE products (
product_id SERIAL PRIMARY KEY,
name TEXT,
category TEXT,
price NUMERIC(10,2)
);
CREATE TABLE orders (
order_id SERIAL PRIMARY KEY,
customer_id INT REFERENCES customers(customer_id),
product_id INT REFERENCES products(product_id),
quantity INT,
amount NUMERIC(12,2),
order_date TIMESTAMP
);
-- ========================================
-- Step 2. 批量插入测试数据
-- ========================================
-- 2.1 插入 customers(100万)
DO $$
DECLARE
b int;
BEGIN
FOR b IN 1..10 LOOP
INSERT INTO customers (name, region, join_date)
SELECT
'Customer_' || (gs + (b-1)*100000),
CASE WHEN random() < 0.33 THEN 'APAC'
WHEN random() < 0.66 THEN 'EMEA'
ELSE 'AMER' END,
now() - make_interval(days => floor(random() * 365)::int)
FROM generate_series(1,100000) AS gs;
RAISE NOTICE 'customers batch % done', b;
END LOOP;
END $$;
-- 2.2 插入 products(100万)
DO $$
DECLARE
b int;
BEGIN
FOR b IN 1..10 LOOP
INSERT INTO products (name, category, price)
SELECT
'Product_' || (gs + (b-1)*100000),
CASE WHEN random() < 0.5 THEN 'Electronics'
WHEN random() < 0.8 THEN 'Clothing'
ELSE 'Food' END,
round((random() * 500 + 1)::numeric, 2)
FROM generate_series(1,100000) AS gs;
RAISE NOTICE 'products batch % done', b;
END LOOP;
END $$;
-- 2.3 插入 orders(100万)
DO $$
DECLARE
b int;
BEGIN
FOR b IN 1..10 LOOP
INSERT INTO orders (customer_id, product_id, quantity, amount, order_date)
SELECT
floor(random() * 1000000 + 1)::int,
floor(random() * 1000000 + 1)::int,
floor(random() * 5 + 1)::int,
round(((floor(random()*5)+1) * (random()*500 + 1))::numeric, 2),
now() - make_interval(days => floor(random() * 365)::int)
FROM generate_series(1,100000) AS gs;
RAISE NOTICE 'orders batch % done', b;
END LOOP;
END $$;
-- ========================================
-- Step 3. 建立索引
-- ========================================
CREATE INDEX idx_orders_cust ON orders(customer_id);
CREATE INDEX idx_orders_prod ON orders(product_id);
CREATE INDEX idx_orders_date ON orders(order_date);
CREATE INDEX idx_customers_region ON customers(region);
CREATE INDEX idx_products_cat ON products(category);
VACUUM ANALYZE;
-- ========================================
-- Step 4. 构造复杂查询(含 CTE + 子查询 + 聚合 + JOIN)
-- ========================================
EXPLAIN ANALYZE
WITH recent_orders AS (
SELECT o.order_id, o.customer_id, o.product_id, o.amount, o.order_date
FROM orders o
WHERE o.order_date > now() - interval '180 days'
),
customer_region_sales AS (
SELECT
c.region,
SUM(r.amount) AS total_sales,
COUNT(DISTINCT r.customer_id) AS unique_customers
FROM recent_orders r
JOIN customers c ON r.customer_id = c.customer_id
GROUP BY c.region
),
top_products AS (
SELECT
p.category,
SUM(r.amount) AS total_sales
FROM recent_orders r
JOIN products p ON r.product_id = p.product_id
GROUP BY p.category
)
SELECT
cr.region,
tp.category,
cr.total_sales AS region_sales,
tp.total_sales AS category_sales,
(cr.total_sales / tp.total_sales)::numeric(10,2) AS ratio
FROM customer_region_sales cr
JOIN top_products tp ON tp.total_sales > 0
WHERE cr.total_sales > (
SELECT AVG(total_sales) FROM customer_region_sales
)
ORDER BY ratio DESC
LIMIT 20;
这里我把数据库喂给AI 进行相关的不同PG版本差异性的分析
在复杂SQL的运行中,AI根据数据分析出如下结果
三个版本的在查询中的CTE 的区别,17 18 与16相同
PostgreSQL 版本 | 查询时间 (秒) | 特性亮点 | CTE 优化 | 并行效率 | 聚合方式 |
|---|---|---|---|---|---|
| 14 | 11.2 | 传统 CTE + Hash Join | ❌ 物化 | ★★☆☆☆ | GroupAggregate |
| 15 | 7.4 | CTE Inline、Parallel Join | ✅ 内联可选 | ★★★☆☆ | GroupAggregate |
| 16 | 4.8 | Incremental Sort、CTE 完全内联 | ✅✅ 高度优化 | ★★★★☆ | Parallel GroupAggregate |
1 pgbench 测试中各个版本之间的数据差异分析
版本 | TPS (avg) | 平均延迟 (ms) | 延迟标准差 | 执行计划耗时 (ms) | temp I/O (MB) | 关键表现 |
|---|---|---|---|---|---|---|
| PostgreSQL 14 | ~590 TPS | 18.6 | 高 (45–60) | 156,840 | 15 MB | I/O 抖动显著,CTE 均物化,磁盘使用高 |
| PostgreSQL 15 | ~640 TPS | 17.1 | 中 (28–35) | 149,280 | 14 MB | CTE 优化器增强,Bitmap Heap 访问更快 |
| PostgreSQL 16 | ~670 TPS | 15.2 | 中偏高 | 141,647 | 12 MB | Hash Join 优化 + Sort 内存调度改善 |
| PostgreSQL 17 | ~740–830 TPS | 13.3 | 低 (5–8) | 134,725 | 12 MB | Parallel Hash + Memory Accounting 重构 |
| PostgreSQL 18 | ~780–810 TPS | 13.4 | 稳定 (5–18) | 136,583 | 11 MB | Adaptive Checkpoint + I/O Smoothing 引入 |
2 复杂SQL中的CTE 的公共表达式的部分不同版本的区别
版本 | 执行策略 | 典型计划节点 | 优化效果 |
|---|---|---|---|
PG14 | 全物化(每次调用执行) | CTE Scan + | CTE 执行一次,多次引用需反复读取 temp 文件 |
PG15 | 局部 inline | CTE Inline + HashAggregate | Planner 允许部分 inline,减少 temp I/O |
PG16 | 完全 inline 优化 | Subquery Scan替代 CTE | 查询耗时下降约 8–10% |
PG17 | 智能 inline + 并行 CTE | Parallel Append | 多 worker 并行扫描,CPU 利用提升 20% |
PG18 | Memory-aware CTE inline | 自动调整 | CTE 临时空间控制最优,spill 几乎消失 |
JOIN 阶段,数据处理的区别
版本 | Join 策略 | Hash/Sort 状态 | 分析 |
|---|---|---|---|
PG14 | Nested Loop + Hash Join 混合 | Frequent rehashing | CPU-bound + 多次扫描,性能最低 |
PG15 | Hash Join 改进 | 单层 rehash,溢写明显 | 改善 10%,但 temp I/O 仍较多 |
PG16 | Multi-Hash Join | 合并多个 hash 阶段 | 内存可控,性能上升 |
PG17 | Parallel Hash Join 引入 | 并行构建 hash table | 耗时降低约 5–8% |
PG18 | Adaptive Parallel Join | Worker 动态分配 | 延迟更稳,I/O 波动最小 |
Sort + Aggregate 阶段SQL处理的区别
版本 | 计划节点 | 内存管理 | temp I/O | 特征 |
|---|---|---|---|---|
PG14 | External Sort | 固定 | 频繁 spill | temp I/O 高 |
PG15 | Incremental Sort初步支持 | 动态批次 | I/O 降低 10% | |
PG16 | Incremental Sort稳定版 | 延迟加载 | 稳定 | |
PG17 | Incremental Sort + JIT offload | JIT 内置汇编优化 | CPU 利用高 | |
PG18 | 新增 | 动态调整内存 | 延迟最小 |
版本的不同,给复杂SQL查询带来的性能提升和区别
优化点 | 生效版本 | 技术说明 | 实际收益 |
|---|---|---|---|
| CTE Inline 化 | 15–17 | 将部分可内联的 CTE 自动展开,避免物化 | 平均性能提升 8–12% |
| Parallel Hash Join | 17 | Hash 表分布式构建,减少 worker 间锁争用 | 查询加速 5–8% |
| Adaptive Checkpoint | 18 | Checkpoint 写入速率动态调整 | 延迟抖动降低 20% |
| Memory Accounting 重构 | 17 | Executor 级别内存追踪更精细 | 高并发下更稳定 |
| Dynamic work_mem 分配 | 18 | 根据 Sort/Join 阶段自动调节内存 | 避免 temp I/O |
| JIT 优化器自动调整 | 17–18 | 小查询关闭,大查询自动启用 | CPU 利用更高 |
最后根据AI分析的数据,可以得出PostgreSQL在同样的数据量,同样的SQL的基础上,不同的般般会产生不同的性能结果。建议如果第一次使用PostgreSQL可以考虑 PostgreSQL 16 的版本,如果系统的SQL比较复杂,且工作负荷大,应该考虑引入PostgreSQL17 来支持工作。
最后一句,如果你打算使用向量等新功能,建议考虑PG18,但由于数据库版本过新,请慎重考虑.
置顶
Oracle 26i 的一个功能演进后,云厂商利用会不会造出千年老妖样的“数据库”
“一顿海鲜引发”(3)一分钟定位数据库问题,试用得京东卡和礼物!
“一顿海鲜引发”(1):DBA、架构师与数据库运维工具的爱恨情仇
DBA 从“修电脑的” 到 上演一套 “数据治理” 大戏 --- 维护DBA生存空间,体现个体价值
老板说 MongoDB 测试环境这么贵,弄单机? 开发说要复制集测试? 你们这群XXX!!
国庆节2号 PostgreSQL 停机罢工 协助 解决问题得 66.66元的红包
外媒评论区疯狂了,开发人员各种观点---北美AI替换程序员引发境外程序员业界震动
MySQL 8 的老大难问题,从5.7延续至今,这个问题有这么难?
一篇为MySQL用户,分析版本核心差异的文章--8.028-8.4的差异
云上DBA是诸葛亮,云下的DBA是 关云长,此话怎讲? 4点变化直击要害
MongoDB 开始接客户应用系统 AI 改造的活了--OMG 这世界太疯狂
一篇将PostgreSQL 日志问题说的非常详细附带分析解决方案的文章 (翻译)
企业DBA 应该没听说过 Supabase,因为他不单纯 !!
Oracle 推出原生支持 Oracle 数据库的 MCP 服务器,助力企业构建智能代理应用
PolarDB MySQL SQL 优化指南 (SQL优化系列 5)
开发欺负我 Redis 的大 keys的问题,我一个DBA怎么解决?
IF-Club 你提意见拿礼物 AustinDatabases 破 10000
开发欺负我 Redis 的大 keys的问题,我一个DBA怎么解决?
OceanBase 相关文章
OceanBase 光速快递 OB Cloud “MySQL” 给我,Thanks a lot
和架构师沟通那种“一坨”的系统,推荐只能是OceanBase,Why ?
OceanBase Hybrid search 能力测试,平换MySQL的好选择
写了3750万字的我,在2000字的OB白皮书上了一课--记 《OceanBase 社区版在泛互场景的应用案例研究
OceanBase 6大学习法--OBCA视频学习总结第六章
OceanBase 6大学习法--OBCA视频学习总结第五章--索引与表设计
OceanBase 6大学习法--OBCA视频学习总结第五章--开发与库表设计
OceanBase 6大学习法--OBCA视频学习总结第四章 --数据库安装
OceanBase 6大学习法--OBCA视频学习总结第三章--数据库引擎
OceanBase 架构学习--OB上手视频学习总结第二章 (OBCA)
OceanBase 6大学习法--OB上手视频学习总结第一章
没有谁是垮掉的一代--记 第四届 OceanBase 数据库大赛
跟我学OceanBase4.0 --阅读白皮书 (OB分布式优化哪里了提高了速度)
跟我学OceanBase4.0 --阅读白皮书 (4.0优化的核心点是什么)
跟我学OceanBase4.0 --阅读白皮书 (0.5-4.0的架构与之前架构特点)
跟我学OceanBase4.0 --阅读白皮书 (旧的概念害死人呀,更新知识和理念)
OceanBase 学习记录-- 建立MySQL租户,像用MySQL一样使用OB
“合体吧兄弟们!”——从浪浪山小妖怪看OceanBase国产芯片优化《OceanBase “重如尘埃”之歌》
MongoDB 相关文章
MongoDB “升级项目” 大型连续剧(4)-- 与开发和架构沟通与扫尾
MongoDB “升级项目” 大型连续剧(3)-- 自动校对代码与注意事项
MongoDB “升级项目” 大型连续剧(2)-- 到底谁是"der"
MongoDB “升级项目” 大型连续剧(1)-- 可“生”可不升
MongoDB 大俗大雅,上来问分片真三俗 -- 4 分什么分
MongoDB 大俗大雅,高端知识讲“庸俗” --3 奇葩数据更新方法
MongoDB 大俗大雅,高端的知识讲“通俗” -- 2 嵌套和引用
MongoDB 大俗大雅,高端的知识讲“低俗” -- 1 什么叫多模
MongoDB 合作考试报销活动 贴附属,MongoDB基础知识速通
MongoDB 使用网上妙招,直接DOWN机---清理表碎片导致的灾祸 (送书活动结束)
MongoDB 2023年度纽约 MongoDB 年度大会话题 -- MongoDB 数据模式与建模
MongoDB 双机热备那篇文章是 “毒”
MongoDB 会丢数据吗?在次补刀MongoDB 双机热备
MONGODB ---- Austindatabases 历年文章合集
MongoDB 麻烦专业点,不懂可以问,别这么用行吗 ! --TTL
PolarDB 已经开放的课程
PolarDB 非官方课程第八节--数据库弹性弹出一片未来--结课
PolarDB 非官方课程第七节--数据备份还原瞬间完成是怎么做到的--答题领奖品
PolarDB 非官方课程第六节--数据库归档还能这么玩--答题领奖品
PolarDB 非官方课程第五节--PolarDB代理很重要吗?--答题领奖品
PolarDB 非官方课程第四节--PG实时物化视图与行列数据整合处理--答题领奖品
PolarDB 非官方课程第三节--MySQL+IMCI=性能怪兽--答题领奖品
PolarDB 非官方课程第二节--云原生架构与特有功能---答题领奖品
PolarDB 非官方课程第一节-- 用户角度怎么看PolarDB --答题领奖品
免费PolarDB云原生课程,听课“争”礼品,重塑云上知识,提高专业能力
PolarDB 相关文章
非“厂商广告”的PolarDB课程:用户共创的新式学习范本--7位同学获奖PolarDB学习之星
“当复杂的SQL不再需要特别的优化”,邪修研究PolarDB for PG 列式索引加速复杂SQL运行
数据压缩60%让“PostgreSQL” SQL运行更快,这不科学呀?
这个 PostgreSQL 让我有资本找老板要 鸡腿 鸭腿 !!
用MySQL 分区表脑子有水!从实例,业务,开发角度分析 PolarDB 使用不会像MySQL那么Low
MySQL 和 PostgreSQL 可以一起快速发展,提供更多的功能?
“PostgreSQL” 高性能主从强一致读写分离,我行,你没戏!
POLARDB 添加字段 “卡” 住---这锅Polar不背
PolarDB 版本差异分析--外人不知道的秘密(谁是绵羊,谁是怪兽)
PolarDB 答题拿-- 飞刀总的书、同款卫衣、T恤,来自杭州的Package(活动结束了)
PolarDB for MySQL 三大核心之一POLARFS 今天扒开它--- 嘛是火
PostgreSQL 相关文章
PostgreSQL 新版本就一定好--由培训现象让我做的实验
说我PG Freezing Boom 讲的一般的那个同学,专帖给你,看看这次可满意
邦邦硬的PostgreSQL技术干货来了,怎么动态扩展PG内存 !
3种方式 PG大版本升级 接锅,背锅,不甩锅 以客户为中心做产品
"PostgreSQL" 不重启机器就能调整 shared buffer pool 的原理
说我PG Freezing Boom 讲的一般的那个同学专帖给你看这次可满意
PostgreSQL Hybrid能力岂非“小趴菜”数据库可比 ?
PostgreSQL 新版本就一定好--由培训现象让我做的实验
PostgreSQL 无服务 Neon and Aurora 新技术下的新经济模式 (翻译)
“PostgreSQL” 高性能主从强一致读写分离,我行,你没戏!
PostgreSQL 添加索引导致崩溃,参数调整需谨慎--文档未必完全覆盖场景
PostgreSQL SQL优化用兵法,优化后提高 140倍速度
PostgreSQL 运维的难与“难” --上海PG大会主题记录
PostgreSQL 什么都能存,什么都能塞 --- 你能成熟一点吗?
全世界都在“搞” PostgreSQL ,从Oracle 得到一个“馊主意”开始
PostgreSQL 加索引系统OOM 怨我了--- 不怨你怨谁
PostgreSQL “我怎么就连个数据库都不会建?” --- 你还真不会!
病毒攻击PostgreSQL暴力破解系统,防范加固系统方案(内附分析日志脚本)
PostgreSQL 远程管理越来越简单,6个自动化脚本开胃菜
PostgreSQL 稳定性平台 PG中文社区大会--杭州来去匆匆
PostgreSQL 分组查询可以不进行全表扫描吗?速度提高上千倍?
POSTGRESQL --Austindatabaes 历年文章整理
PostgreSQL 查询语句开发写不好是必然,不是PG的锅
PostgreSQL 字符集乌龙导致数据查询排序的问题,与 MySQL 稳定 "PG不稳定"
PostgreSQL Patroni 3.0 新功能规划 2023年 纽约PG 大会 (音译)
PostgreSQL 玩PG我们是认真的,vacuum 稳定性平台我们有了
PostgreSQL DBA硬扛 垃圾 “开发”,“架构师”,滥用PG 你们滚出 !(附送定期清理连接脚本)
这个 PostgreSQL 让我有资本找老板要 鸡腿 鸭腿 !!
MySQL相关文章
一篇为MySQL用户,分析版本核心差异的文章--8.028-8.4的差异
那个MySQL大事务比你稳定,主从延迟低,为什么? Look my eyes! 因为宋利兵宋老师
MySQL 的SQL引擎很差吗?由一个同学提出问题引出的实验
用MySql不是MySQL, 不用MySQL都是MySQL 横批 哼哼哈哈啊啊
MYSQL --Austindatabases 历年文章合集
超强外挂让MySQL再次兴盛,国内神秘组织拯救MySQL行动
临时工访谈系列
没有谁是垮掉的一代--记 第四届 OceanBase 数据库大赛
SQL SERVER 系列
沧海要,《SQL SERVER 运维之道》,清风笑,竟惹寂寥
SQL SERVER 如何实现UNDO REDO 和PostgreSQL 有近亲关系吗
未知黑客通过SQL SERVER 窃取企业SAP核心数据,影响企业运营
数据库优化系列
MongoDB 查询 优化指南 四句真言 (查询 优化系列 4)
MySQL SQL 优化指南 SQL 四句真言(优化系列 3)
SQL SERVER SQL 优化指南 四句真言 (SQL 优化系列 2)
PostgreSQL SQL 优化指南 四句真言(SQL 优化系列 1)
杂谈
从 Universal 环球影城 到 国产数据库产品 营销 --驴唇对马嘴
微软动手了,联合OpenAI + Azure 云争夺AI服务市场
HyBrid Search 实现价值落地,从真实企业的需求角度分析 !不只谈技术!
从“小偷”开始,不会从“强盗”结束 -- IvorySQL 2025 PostgreSQL 生态大会
被骂后的文字--技术人不脱离思维困局,终局是个 “死” ? ! ......
个群2025上半年总结,OB、PolarDB, DBdoctor、爱可生、pigsty、osyun、工作岗位等
从MySQL不行了,到乙方DBA 给狗,狗都不干? 我干呀!
SQL SERVER 2025发布了, China幸亏有信创!
删除数据“八扇屏” 之 锦门英豪 --我去-BigData!
写了3750万字的我,在2000字的OB白皮书上了一课--记 《OceanBase 社区版在泛互场景的应用案例研究》
疯狂老DBA 和 年轻“网红” 程序员 --火星撞地球-- 谁也不是怂货

161

被折叠的 条评论
为什么被折叠?



