[GreatSQL Optimizer - 09] make_join_query_block

A detailed look at make_join_query_block in the GreatSQL optimizer

1. Introduction to make_join_query_block

In the GreatSQL optimizer, the join order of a multi-table join has already been determined by the best_access_path function described in an earlier chapter. The next step is to split the WHERE condition and push each part down to the appropriate table. This step is performed by the function make_join_query_block.

The simple examples below show what condition pushdown in a join means.

CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 INT,date1 DATETIME);
INSERT INTO t1 VALUES (1,10,'2021-03-25 16:44:00.123456'),(2,1,'2022-03-26 16:44:00.123456'),(3,4,'2023-03-27 16:44:00.123456'),(5,5,'2024-03-25 16:44:00.123456'),(7,null,'2020-03-25 16:44:00.123456'),(8,10,'2020-10-25 16:44:00.123456'),(11,16,'2023-03-25 16:44:00.123456');
CREATE TABLE t2 (cc1 INT PRIMARY KEY, cc2 INT);
INSERT INTO t2 VALUES (1,3),(2,1),(3,2),(4,3),(5,15);
CREATE TABLE t3 (ccc1 INT, ccc2 varchar(100));
INSERT INTO t3 VALUES (1,'aa1'),(2,'bb1'),(3,'cc1'),(4,'dd1'),(null,'ee');
CREATE INDEX idx1 ON t1(c2);
CREATE INDEX idx2 ON t1(c2,date1);
CREATE INDEX idx2_1 ON t2(cc2);
CREATE INDEX idx3_1 ON t3(ccc1);
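
As an optional extra step (not part of the original script), refreshing table statistics before running the EXPLAIN statements makes the cost and row estimates more reproducible; the exact numbers shown below may still differ between environments:

ANALYZE TABLE t1, t2, t3;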

In the example below, the condition ((t1.c1 = t3.ccc1) or (t3.ccc1 < 3)) is pushed to t1:

greatsql> EXPLAIN FORMAT=TREE SELECT * FROM t1 join t3 ON t1.c1=t3.ccc1 or t3.ccc1<3;
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN                                                                                                                                                                                                                                                         |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Filter: ((t1.c1 = t3.ccc1) or (t3.ccc1 < 3))  (cost=5.26 rows=35)
    -> Inner hash join (no condition)  (cost=5.26 rows=35)
        -> Index scan on t1 using idx2  (cost=0.34 rows=7)
        -> Hash
            -> Table scan on t3  (cost=0.75 rows=5)
 |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

In the example below, the condition (t1.c1 < 3) is pushed to t1, and (ccc1 = t1.c1) is pushed to t3:

greatsql> EXPLAIN FORMAT=TREE SELECT * FROM t1 join t3 ON t1.c1=t3.ccc1 and t3.ccc1<3;
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN                                                                                                                                                                                                                          |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Nested loop inner join  (cost=2.40 rows=2)
    -> Filter: (t1.c1 < 3)  (cost=1.70 rows=2)
        -> Index scan on t1 using idx2  (cost=1.70 rows=7)
    -> Index lookup on t3 using idx3_1 (ccc1=t1.c1)  (cost=0.30 rows=1)
 |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

In the example below, the condition ((t3.ccc1 = t1.c2) or (t3.ccc1 is null) or (t3.ccc2 like 'a%')) is pushed to t3, and (((t3.ccc1 = t1.c2) and (t2.cc1 = t1.c1)) or (t3.ccc1 is null) or (t3.ccc2 like 'a%')) is pushed to t2:

greatsql> EXPLAIN FORMAT=TREE SELECT * FROM t2,t1,t3 WHERE t1.c1=t2.cc1 AND t1.c2=t3.ccc1 OR t3.ccc1 IS NULL OR t3.ccc2 LIKE 'a%';
| -> Filter: (((t3.ccc1 = t1.c2) and (t2.cc1 = t1.c1)) or (t3.ccc1 is null) or (t3.ccc2 like 'a%'))  (cost=14.27 rows=85)
    -> Inner hash join (no condition)  (cost=14.27 rows=85)
        -> Index scan on t2 using idx2_1  (cost=0.09 rows=5)
        -> Hash
            -> Filter: ((t3.ccc1 = t1.c2) or (t3.ccc1 is null) or (t3.ccc2 like 'a%'))  (cost=4.70 rows=17)
                -> Inner hash join (no condition)  (cost=4.70 rows=17)
                    -> Table scan on t3  (cost=0.07 rows=5)
                    -> Hash
                        -> Index scan on t1 using idx2  (cost=0.95 rows=7)

2. Code walkthrough of make_join_query_block

make_join_query_block decides which table each part of cond is attached to, based on the join order, each table's table_map attributes, and the attributes of the condition itself; it may also re-check the table's indexes to find one with a lower cost. The code is outlined below.

// Simplified excerpt: most details and error handling are omitted.
bool JOIN::optimize() {
  ...
  make_join_query_block(this, where_cond);
  ...
}

static bool make_join_query_block(JOIN *join, Item *cond) {
  for (uint i = join->const_tables; i < join->tables; i++) {
    // These four variables are explained in Table 1.
    JOIN_TAB *const tab = join->best_ref[i];
    const plan_idx first_inner = tab->first_inner();  // used later for outer-join handling (not shown)
    const table_map used_tables = tab->prefix_tables();
    const table_map current_map = tab->added_tables();
    if (cond) {
      // The table_map attributes decide whether this part of the condition is
      // attached to the current table; see Tables 2, 4 and 5 below.
      tmp = make_cond_for_table(thd, cond, used_tables, current_map, false);
    }
    // If recheck_reason is set, the index choice is verified once more to find
    // the index with the lowest cost; see Table 6.
    if (recheck_reason != DONT_RECHECK) {
      test_if_order_by_key();
      test_if_cheaper_ordering();
      test_quick_select();
    }
    /* Add conditions added by add_not_null_conds(). */
    if (and_conditions(&tmp, tab->condition())) return true;
    if (join->attach_join_conditions(i)) return true;
  }
  return false;
}

The basic rule is that a condition referencing a table's columns is attached to that table. However, if the attributes of the condition do not match the attributes of the table, it is not attached there and ends up on the last table of the join instead. The worked examples below explain this in detail.

Table 1: the four variables above

Variable | Description | Notes
tab | the table currently being checked for condition attachment | tables are visited in the optimized join order
first_inner | the first inner table, after ordering, of the join nest the current table belongs to | e.g. SELECT * FROM t1 LEFT OUTER JOIN (t2 JOIN t3) ON X
used_tables | map covering the current table and the tables joined before it | a value of 0 stands for all tables; used_tables was filled in earlier by set_prefix_tables()
current_map | map of the table currently being checked | includes the extra bits added for this position

Table 2: what make_cond_for_table() does

Scenario | Behaviour | Return value
AND condition | walk every item in the Item_cond list; items that only reference columns of this table or of the tables joined before it are added to a new Item_cond_and | new Item_cond_and
OR condition | walk every item in the Item_cond list; items that only reference columns of this table or of the tables joined before it are added to a new Item_cond_or | new Item_cond_or
Other Item | the condition does not involve this table or the tables joined before it, or cond->is_expensive() is true | nullptr (no condition)
Other Item | the condition involves this table or the tables joined before it, and the Item's attributes match the table's (see Table 4) | the Item itself

Table 3: the is_expensive_processor() function

Item | is_expensive | Notes
Item_udf_func | true | this Item is never attached to any of the joined tables as a condition
Item_func_sp | true | this Item is never attached to any of the joined tables as a condition
ordinary Item | false | this Item may be attached to a joined table as a condition

Table 4: the table_map attributes of an Item

table_map bit | Items that carry it | Description | Example
INNER_TABLE_BIT | Item_trigger_field, Item_sp_variable, Item_param, Item_func_connection_id, Item_func_get_system_var, Item_func_user, Item_load_file, Item_func_sp, Item_func_get_user_var | a fixed constant, identical for every row of the table | f1(1), @@optimizer_search_depth (f1 is sketched below)
OUTER_REF_TABLE_BIT | Item_field, Item_ref, Item_view_ref, Item_outer_ref | an inner field that needs Item information from the outer query block | in SELECT (SELECT t1.a FROM t1 AS t2 LIMIT 1) FROM t1 GROUP BY t1.pk, the reference t1.a carries OUTER_REF_TABLE_BIT
RAND_TABLE_BIT | Item_func_rand, Item_func_sleep, Item_func_uuid, Item_func_sysdate_local, Item_func_reject_if, Item_func_sp, Item_func_get_user_var | non-deterministic: the value has to be recomputed for every row (unless the table has a single row); the opposite of INNER_TABLE_BIT | f1(t1.c1)
PSEUDO_TABLE_BITS | the union of the three bits above | |
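
The examples f1(1) and f1(t1.c1) in Table 4 assume a stored function f1 that is not part of the test schema above. A hypothetical definition such as the following is enough to try them out; the function body is irrelevant here, because any stored function goes through Item_func_sp:

-- Hypothetical stored function, only so that f1(1) / f1(t1.c1) are concrete.
CREATE FUNCTION f1(p INT) RETURNS INT DETERMINISTIC RETURN p + 1;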

Table 5: bits added for each position in the join

table_map bit | first table | middle tables | last table
INNER_TABLE_BIT | yes | |
OUTER_REF_TABLE_BIT | yes, if allow_outer_refs | |
RAND_TABLE_BIT | | | yes

Table 6: when a table's index choice is re-checked

recheck_reason | Description
DONT_RECHECK | a table without a usable index does not need re-checking
NOT_FIRST_TABLE | a table that is not the first table of the join order is re-checked
LOW_LIMIT | a query with a LIMIT clause is re-checked (see the sketch after this table)
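
As a rough sketch of the LOW_LIMIT case, a query against the same test tables with an ORDER BY and a small LIMIT, such as the one below (output omitted), can cause the index analysis to be repeated when conditions are attached; whether the plan actually changes depends on the statistics:

EXPLAIN FORMAT=TREE SELECT * FROM t1 JOIN t2 ON t1.c1 = t2.cc1 ORDER BY t1.c2 LIMIT 1;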

3. Worked examples

Let us look at a few examples that illustrate the code above.

First, note the final join order: t1, t3, t2. Since the condition contains no Item carrying RAND_TABLE_BIT, each part of the condition is pushed to a table purely according to the columns it references.

Example 1:

greatsql> EXPLAIN SELECT * FROM t2,t1,t3 WHERE t1.c1=t2.cc1 AND t1.c2=t3.ccc1 OR t3.ccc1 IS NULL OR t3.ccc2 LIKE 'a%';
+----+-------------+-------+------------+-------+-------------------+--------+---------+------+------+----------+---------------------------------------------------------+
| id | select_type | table | partitions | type  | possible_keys     | key    | key_len | ref  | rows | filtered | Extra                                                   |
+----+-------------+-------+------------+-------+-------------------+--------+---------+------+------+----------+---------------------------------------------------------+
|  1 | SIMPLE      | t1    | NULL       | index | PRIMARY,idx1,idx2 | idx2   | 11      | NULL |    7 |   100.00 | Using index                                             |
|  1 | SIMPLE      | t3    | NULL       | ALL   | idx3_1            | NULL   | NULL    | NULL |    5 |    48.80 | Using where; Using join buffer (hash join)              |
|  1 | SIMPLE      | t2    | NULL       | index | PRIMARY           | idx2_1 | 5       | NULL |    5 |   100.00 | Using where; Using index; Using join buffer (hash join) |
+----+-------------+-------+------------+-------+-------------------+--------+---------+------+------+----------+---------------------------------------------------------+

Example 1, Table 1: is the condition pushed to this table?

COND | t1 | [t1,] t3 | [t1,t3,] t2
(t1.c1 = t2.cc1) | no | no | yes
(t3.ccc1 = t1.c2) | no | yes | yes
(t3.ccc1 is null) | no | yes | yes
(t3.ccc2 like 'a%') | no | yes | yes

Note: the tables inside the square brackets are the tables already joined before the current table; the table to the right of the brackets is the one currently being checked.

Example 1, Table 2: table_map values of each table

Attribute | t1 | t3 | t2
best_ref->table_ref->map() | 0x010 | 0x100 | 0x001
best_ref->prefix_tables() | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + 0x010 | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + 0x010 + 0x100 | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + RAND_TABLE_BIT + 0x010 + 0x100 + 0x001
best_ref->added_tables() | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + 0x010 | 0x100 | RAND_TABLE_BIT + 0x001

Note: INNER_TABLE_BIT and OUTER_REF_TABLE_BIT are added by default in JOIN::set_prefix_tables().

Now check whether the result matches the expectation; it is indeed as the tables above describe. Note also that test_quick_select() is executed once more here to decide which index to use.
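
The optimizer-trace fragment below shows the attaching_conditions_to_tables step. It can be reproduced with the standard optimizer-trace switches (a sketch; the buffer size is just a generous illustrative value):

SET optimizer_trace = 'enabled=on';
SET optimizer_trace_max_mem_size = 4194304;  -- optional, in case the trace is truncated
EXPLAIN SELECT * FROM t2,t1,t3 WHERE t1.c1=t2.cc1 AND t1.c2=t3.ccc1 OR t3.ccc1 IS NULL OR t3.ccc2 LIKE 'a%';
SELECT TRACE FROM information_schema.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';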

"attaching_conditions_to_tables": {
              "original_condition": "(((`t3`.`ccc1` = `t1`.`c2`) and (`t2`.`cc1` = `t1`.`c1`)) or (`t3`.`ccc1` is null) or (`t3`.`ccc2` like 'a%'))",
              "attached_conditions_computation": [
                {
                  "table": "`t2`",
                  "rechecking_index_usage": { 这里对索引重新做了一次check
                    "recheck_reason": "not_first_table",
                    "range_analysis": {
                      "table_scan": {
                        "rows": 5,
                        "cost": 3.6
                      },
                      "potential_range_indexes": [
                        {
                          "index": "PRIMARY",
                          "usable": true,
                          "key_parts": [
                            "cc1"
                          ]
                        },
                        {
                          "index": "idx2_1",
                          "usable": false,
                          "cause": "not_applicable"
                        }
                      ],
                      "best_covering_index_scan": {
                        "index": "idx2_1",
                        "cost": 0.751098,
                        "chosen": true
                      },
                      "setup_range_conditions": [
                      ],
                      "group_index_range": {
                        "chosen": false,
                        "cause": "not_single_table"
                      },
                      "skip_scan_range": {
                        "chosen": false,
                        "cause": "not_single_table"
                      }
                    }
                  }
                }
              ],
              "attached_conditions_summary": [
                {
                  "table": "`t1`",
                  "attached": null
                },
                {
                  "table": "`t3`",
                  "attached": "((`t3`.`ccc1` = `t1`.`c2`) or (`t3`.`ccc1` is null) or (`t3`.`ccc2` like 'a%'))"
                },
                {
                  "table": "`t2`",
                  "attached": "(((`t3`.`ccc1` = `t1`.`c2`) and (`t2`.`cc1` = `t1`.`c1`)) or (`t3`.`ccc1` is null) or (`t3`.`ccc2` like 'a%'))"
                }
              ]
            }
          },
          {
            "finalizing_table_conditions": [
              {
                "table": "`t3`",
                "original_table_condition": "((`t3`.`ccc1` = `t1`.`c2`) or (`t3`.`ccc1` is null) or (`t3`.`ccc2` like 'a%'))",
                "final_table_condition   ": "((`t3`.`ccc1` = `t1`.`c2`) or (`t3`.`ccc1` is null) or (`t3`.`ccc2` like 'a%'))"
              },
              {
                "table": "`t2`",
                "original_table_condition": "(((`t3`.`ccc1` = `t1`.`c2`) and (`t2`.`cc1` = `t1`.`c1`)) or (`t3`.`ccc1` is null) or (`t3`.`ccc2` like 'a%'))",
                "final_table_condition   ": "(((`t3`.`ccc1` = `t1`.`c2`) and (`t2`.`cc1` = `t1`.`c1`)) or (`t3`.`ccc1` is null) or (`t3`.`ccc2` like 'a%'))"
              }
            ]
          },
          {
            "refine_plan": [
              {
                "table": "`t1`"
              },
              {
                "table": "`t3`"
              },
              {
                "table": "`t2`"
              }
            ]
          }
        ]
      }
    }

If the condition contains an Item that carries RAND_TABLE_BIT, then even though the condition references a table's columns, it is not pushed to that table; it is pushed to the last table instead. See the condition t1.c1 < rand() below.

Example 2:

greatsql> SELECT * FROM t2,t1,t3 WHERE t1.c1=t2.cc1 AND t1.c2=t3.ccc1 OR t3.ccc1 IS NULL AND t1.c1 < rand();
              "attached_conditions_summary": [
                {
                  "table": "`t1`",
                  "attached": null
                },
                {
                  "table": "`t3`",
                  "attached": "((`t3`.`ccc1` = `t1`.`c2`) or (`t3`.`ccc1` is null))"
                },
                {
                  "table": "`t2`",
                  "attached": "(((`t3`.`ccc1` = `t1`.`c2`) and (`t2`.`cc1` = `t1`.`c1`)) or ((`t3`.`ccc1` is null) and (`t1`.`c1` < rand())))"  看到条件t1.c1 < rand()没有推送给t1而是推送到最后一张表t2去了
                }
              ]
            }
          },
          {
            "finalizing_table_conditions": [
              {
                "table": "`t3`",
                "original_table_condition": "((`t3`.`ccc1` = `t1`.`c2`) or (`t3`.`ccc1` is null))",
                "final_table_condition   ": "((`t3`.`ccc1` = `t1`.`c2`) or (`t3`.`ccc1` is null))"
              },
              {
                "table": "`t2`",
                "original_table_condition": "(((`t3`.`ccc1` = `t1`.`c2`) and (`t2`.`cc1` = `t1`.`c1`)) or ((`t3`.`ccc1` is null) and (`t1`.`c1` < rand())))",
                "final_table_condition   ": "(((`t3`.`ccc1` = `t1`.`c2`) and (`t2`.`cc1` = `t1`.`c1`)) or ((`t3`.`ccc1` is null) and (`t1`.`c1` < rand())))"
              }
            ]
          },
          {
            "refine_plan": [
              {
                "table": "`t1`"
              },
              {
                "table": "`t3`"
              },
              {
                "table": "`t2`"
              }

Here are the table_map attributes of each table:

Attribute | t1 | t3 | t2
best_ref->table_ref->map() | 0x010 | 0x100 | 0x001
best_ref->prefix_tables() | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + 0x010 | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + 0x010 + 0x100 | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + RAND_TABLE_BIT + 0x010 + 0x100 + 0x001
best_ref->added_tables() | INNER_TABLE_BIT + OUTER_REF_TABLE_BIT + 0x010 | 0x100 | RAND_TABLE_BIT + 0x001

4. Summary

From the optimizer steps above we have seen what make_join_query_block does: it decides which table each part of cond is attached to based on the join order, each table's table_map attributes, and the attributes of the condition itself, and it may re-check the table's indexes to find one with a lower cost. Note that some conditions referencing a table's columns are still not attached to that table, because the Item's attributes do not match the table's attributes; such conditions end up attached to the last table of the join.
