Table Locking Issues
InnoDB tables use row-level locking so that multiple sessions and applications can read from and write to the same table simultaneously, without making each other wait or producing inconsistent results. For this storage engine, avoid using the LOCK TABLES statement, because it does not offer any extra protection, but instead reduces concurrency. The automatic row-level locking makes these tables suitable for your busiest databases with your most important data, while also simplifying application logic since you do not need to lock and unlock tables. Consequently, the InnoDB storage engine is the default in MySQL 5.5 and higher.
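For example, a table can be placed on the InnoDB storage engine explicitly at creation time. The table and column names below are hypothetical, and the ENGINE clause is redundant on servers where InnoDB is already the default, but it makes the intent explicit:

CREATE TABLE orders (
    id INT NOT NULL PRIMARY KEY,
    status VARCHAR(20)
) ENGINE=InnoDB;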
MySQL uses table locking (instead of page, row, or column locking) for all storage engines except InnoDB and NDBCLUSTER. The locking operations themselves do not have much overhead. But because only one session can write to a table at any one time, for best performance with these other storage engines, use them primarily for tables that are queried often and rarely inserted into or updated.
Performance Considerations Favoring InnoDB
When choosing whether to create a table using InnoDB or a different storage engine, keep in mind the following disadvantages of table locking:
- Table locking enables many sessions to read from a table at the same time, but if a session wants to write to a table, it must first get exclusive access, meaning it might have to wait for other sessions to finish with the table first. During the update, all other sessions that want to access this particular table must wait until the update is done.
- Table locking causes problems when a session is waiting because the disk is full and free space needs to become available before the session can proceed. In this case, all sessions that want to access the problem table are also put in a waiting state until more disk space is made available.
- A SELECT statement that takes a long time to run prevents other sessions from updating the table in the meantime, making the other sessions appear slow or unresponsive. While a session is waiting to get exclusive access to the table for updates, other sessions that issue SELECT statements will queue up behind it, reducing concurrency even for read-only sessions.
Workarounds for Locking Performance Issues
The following items describe some ways to avoid or reduce contention caused by table locking:
- Consider switching the table to the InnoDB storage engine, either using CREATE TABLE ... ENGINE=INNODB during setup, or using ALTER TABLE ... ENGINE=INNODB for an existing table. See Section 14.3, “The InnoDB Storage Engine” for more details about this storage engine.
- Optimize SELECT statements to run faster so that they lock tables for a shorter time. You might have to create some summary tables to do this.
- Start mysqld with --low-priority-updates. For storage engines that use only table-level locking (such as MyISAM, MEMORY, and MERGE), this gives all statements that update (modify) a table lower priority than SELECT statements. In this case, the second SELECT statement in the preceding scenario would execute before the UPDATE statement, and would not wait for the first SELECT to finish.
- To specify that all updates issued in a specific connection should be done with low priority, set the low_priority_updates server system variable equal to 1 (a short example appears after this list).
- To give a specific INSERT, UPDATE, or DELETE statement lower priority, use the LOW_PRIORITY attribute.
- To give a specific SELECT statement higher priority, use the HIGH_PRIORITY attribute. See Section 13.2.9, “SELECT Syntax”.
- Start mysqld with a low value for the max_write_lock_count system variable to force MySQL to temporarily elevate the priority of all SELECT statements that are waiting for a table after a specific number of inserts to the table occur. This permits READ locks after a certain number of WRITE locks.
- If you have problems with INSERT combined with SELECT, consider switching to MyISAM tables, which support concurrent SELECT and INSERT statements. (See Section 8.10.3, “Concurrent Inserts”.)
- If you mix inserts and deletes on the same table, INSERT DELAYED may be of great help. See Section 13.2.5.2, “INSERT DELAYED Syntax”.
- If you have problems with mixed SELECT and DELETE statements, the LIMIT option to DELETE may help. See Section 13.2.2, “DELETE Syntax”.
- Using SQL_BUFFER_RESULT with SELECT statements can help to make the duration of table locks shorter. See Section 13.2.9, “SELECT Syntax”.
- Splitting table contents into separate tables may help, by allowing queries to run against columns in one table, while updates are confined to columns in a different table.
- You could change the locking code in mysys/thr_lock.c to use a single queue. In this case, write locks and read locks would have the same priority, which might help some applications.
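As a brief illustration of the priority-related workarounds above, the following statements show how updates can be deprioritized. The table name t and the column names are hypothetical:

-- Give all updates issued in this connection lower priority than SELECT statements:
SET SESSION low_priority_updates = 1;

-- Or deprioritize a single statement with the LOW_PRIORITY attribute:
UPDATE LOW_PRIORITY t SET status = 'done' WHERE id = 42;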
Concurrent Inserts
The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: If a MyISAM table has no holes in the data file (deleted rows in the middle), an INSERT statement can be executed to add rows to the end of the table at the same time that SELECT statements are reading rows from the table. If there are multiple INSERT statements, they are queued and performed in sequence, concurrently with the SELECT statements. The results of a concurrent INSERT may not be visible immediately.
The concurrent_insert system variable can be set to modify the concurrent-insert processing. By default, the variable is set to AUTO (or 1) and concurrent inserts are handled as just described. If concurrent_insert is set to NEVER (or 0), concurrent inserts are disabled. If the variable is set to ALWAYS (or 2), concurrent inserts at the end of the table are permitted even for tables that have deleted rows. See also the description of the concurrent_insert system variable.
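For example, the current setting can be checked and, if you have the privileges required to change global variables, adjusted at runtime:

SHOW VARIABLES LIKE 'concurrent_insert';
-- Permit concurrent inserts even for tables that have deleted rows:
SET GLOBAL concurrent_insert = 'ALWAYS';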
Under circumstances where concurrent inserts can be used, there is seldom any need to use the DELAYED modifier for INSERT statements. See Section 13.2.5.2, “INSERT DELAYED Syntax”.
If you are using the binary log, concurrent inserts are converted to normal inserts for CREATE ... SELECT or INSERT ... SELECT statements. This is done to ensure that you can re-create an exact copy of your tables by applying the log during a backup operation. See Section 5.2.4, “The Binary Log”. In addition, for those statements a read lock is placed on the selected-from table such that inserts into that table are blocked. The effect is that concurrent inserts for that table must wait as well.
With LOAD DATA INFILE, if you specify CONCURRENT with a MyISAM table that satisfies the condition for concurrent inserts (that is, it contains no free blocks in the middle), other sessions can retrieve data from the table while LOAD DATA is executing. Use of the CONCURRENT option affects the performance of LOAD DATA a bit, even if no other session is using the table at the same time.
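A minimal sketch of that syntax; the file path and table name are placeholders:

LOAD DATA CONCURRENT INFILE '/tmp/new_rows.txt' INTO TABLE tbl_name;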
If you specify HIGH_PRIORITY, it overrides the effect of the --low-priority-updates option if the server was started with that option. It also causes concurrent inserts not to be used.
For LOCK TABLE, the difference between READ LOCAL and READ is that READ LOCAL permits nonconflicting INSERT statements (concurrent inserts) to execute while the lock is held. However, this cannot be used if you are going to manipulate the database using processes external to the server while you hold the lock.
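For example (the table name t is a placeholder), a session could hold a READ LOCAL lock while other sessions keep appending rows:

LOCK TABLES t READ LOCAL;
-- Run SELECT statements against t here; other sessions can still perform
-- concurrent inserts at the end of t while the lock is held.
UNLOCK TABLES;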
Internal Locking Methods
This section discusses internal locking; that is, locking performed within the MySQL server to manage contention for table contents by multiple sessions.
MySQL uses row-level locking for InnoDB tables, and table-level locking for MyISAM, MEMORY, and MERGE tables.
Which lock type works better for your application depends on the application and its workload, especially whether the data is modified frequently and how many concurrent sessions need to read or write the same tables. Different parts of an application may require different lock types.
To decide whether you want to use a storage engine with row-level locking, look at what your application does and what mix of select and update statements it uses. For example, the InnoDB storage engine is targeted towards a wide variety of application workloads, especially those with heavy write activity or high concurrent usage. The MyISAM storage engine is targeted towards Web applications that perform many selects, relatively few deletes, updates based mainly on key values, and inserts into a few specific tables.
Considerations for Row Locking
Advantages of row-level locking:
- Fewer lock conflicts when different sessions access different rows.
- Fewer changes for rollbacks.
- Possible to lock a single row for a long time.
Disadvantages of row-level locking:
- Requires more memory than table-level locks.
- Slower than table-level locks when used on a large part of the table because you must acquire many more locks.
- Slower than other locks if you often do GROUP BY operations on a large part of the data or if you must scan the entire table frequently.
Considerations for Table Locking
Table locking in MySQL is deadlock-free for storage engines that use table-level locking. Deadlock avoidance is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.
MySQL grants table write locks as follows:
- If there are no locks on the table, put a write lock on it.
- Otherwise, put the lock request in the write lock queue.
MySQL grants table read locks as follows:
- If there are no write locks on the table, put a read lock on it.
- Otherwise, put the lock request in the read lock queue.
Table updates are given higher priority than table retrievals. Therefore, when a lock is released, the lock is made available to the requests in the write lock queue and then to the requests in the read lock queue. This ensures that updates to a table are not “starved” even if there is heavy SELECT activity for the table. However, if you have many updates for a table, SELECT statements wait until there are no more updates.
For information on altering the priority of reads and writes, see Section 8.10.2, “Table Locking Issues”.
You can analyze the table lock contention on your system by checking the Table_locks_immediate and Table_locks_waited status variables, which indicate the number of times that requests for table locks could be granted immediately and the number that had to wait, respectively:
mysql> SHOW STATUS LIKE 'Table%';
+-----------------------+---------+
| Variable_name | Value |
+-----------------------+---------+
| Table_locks_immediate | 1151552 |
| Table_locks_waited | 15324 |
+-----------------------+---------+
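In this sample output, for instance, roughly 15,324 of 1,166,876 table lock requests (about 1.3%) had to wait, which would usually indicate only modest table lock contention.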
The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: If a MyISAM table has no free blocks in the middle of the data file, rows are always inserted at the end of the data file. In this case, you can freely mix concurrent INSERT and SELECT statements for a MyISAM table without locks. That is, you can insert rows into a MyISAM table at the same time other clients are reading from it. Holes can result from rows having been deleted from or updated in the middle of the table. If there are holes, concurrent inserts are disabled but are enabled again automatically when all holes have been filled with new data. This behavior is altered by the concurrent_insert system variable. See Section 8.10.3, “Concurrent Inserts”.
If you acquire a table lock explicitly with LOCK TABLES, you can request a READ LOCAL lock rather than a READ lock to enable other sessions to perform concurrent inserts while you have the table locked.
To perform many INSERT and SELECT operations on a table real_table when concurrent inserts are not possible, you can insert rows into a temporary table temp_table and update the real table with the rows from the temporary table periodically. This can be done with the following code:
mysql> LOCK TABLES real_table WRITE, temp_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM temp_table;
mysql> DELETE FROM temp_table;
mysql> UNLOCK TABLES;
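This assumes that temp_table has the same structure as real_table; it could have been created, for example, with a statement such as CREATE TABLE temp_table LIKE real_table.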
InnoDB uses row locks. Deadlocks are possible for InnoDB because it automatically acquires locks during the processing of SQL statements, not at the start of the transaction.
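If deadlocks do occur, the most recently detected one can be inspected with, for example, SHOW ENGINE INNODB STATUS.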
Choosing the Type of Locking
Generally, table locks are superior to row-level locks in the following cases:
- Most statements for the table are reads.
- Statements for the table are a mix of reads and writes, where writes are updates or deletes for a single row that can be fetched with one key read:
UPDATE tbl_name SET column=value WHERE unique_key_col=key_value;
DELETE FROM tbl_name WHERE unique_key_col=key_value;
- SELECT combined with concurrent INSERT statements, and very few UPDATE or DELETE statements.
- Many scans or GROUP BY operations on the entire table without any writers.
With higher-level locks, you can more easily tune applications by supporting locks of different types, because the lock overhead is less than for row-level locks.
Options other than row-level locking:
- Versioning (such as that used in MySQL for concurrent inserts) where it is possible to have one writer at the same time as many readers. This means that the database or table supports different views for the data depending on when access begins. Other common terms for this are “time travel,” “copy on write,” or “copy on demand.”
- Copy on demand is in many cases superior to row-level locking. However, in the worst case, it can use much more memory than using normal locks.
- Instead of using row-level locks, you can employ application-level locks, such as those provided by GET_LOCK() and RELEASE_LOCK() in MySQL. These are advisory locks, so they work only with applications that cooperate with each other. See Section 12.15, “Miscellaneous Functions”. A short example follows this list.
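A minimal sketch of cooperating sessions using an advisory lock; the lock name 'app_lock' and the 10-second timeout are arbitrary:

SELECT GET_LOCK('app_lock', 10);
-- Returns 1 if the lock was acquired within 10 seconds; do the protected work here.
SELECT RELEASE_LOCK('app_lock');
-- Returns 1 if this session held the lock and it has now been released.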