(mysql.info.gz) InnoDB deadlocks
15.11.10 How to Cope with Deadlocks
-----------------------------------
Deadlocks are a classic problem in transactional databases, but they
are not dangerous unless they are so frequent that you cannot run
certain transactions at all. Normally, you must write your applications
so that they are always prepared to re-issue a transaction if it gets
rolled back because of a deadlock.
`InnoDB' uses automatic row-level locking. You can get deadlocks even
in the case of transactions that just insert or delete a single row.
That is because these operations are not really "atomic"; they
automatically set locks on the (possibly several) index records of the
row inserted or deleted.
You can cope with deadlocks and reduce the likelihood of their
occurrence with the following techniques:
* Use `SHOW INNODB STATUS' to determine the cause of the latest
deadlock. That can help you tune your application to avoid
deadlocks. This strategy is available as of MySQL 3.23.52 and
4.0.3, depending on your MySQL series.
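For example, issue the statement from any client session after a
deadlock has occurred; the output contains a `LATEST DETECTED
DEADLOCK' section that shows the two transactions involved and the
locks they held and were requesting:

```sql
-- Requires only server access; safe to run on a live server.
SHOW INNODB STATUS;
```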
* Always be prepared to re-issue a transaction if it fails due to
deadlock. Deadlocks are not dangerous. Just try again.
* Commit your transactions often. Small transactions are less
likely to collide.
* If you are using locking reads (`SELECT ... FOR UPDATE' or `...
LOCK IN SHARE MODE'), try using a lower isolation level such as
`READ COMMITTED'.
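A sketch of this, assuming a table `t1' with an integer primary key
`id' (the names are illustrative):

```sql
-- Lower the isolation level for this session only, then take the
-- row lock; under READ COMMITTED a locking read tends to lock
-- fewer index records than under the default REPEATABLE READ.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET AUTOCOMMIT=0;
SELECT * FROM t1 WHERE id = 10 FOR UPDATE;
-- ... modify the locked row here ...
COMMIT;
```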
* Access your tables and rows in a fixed order. Then transactions
form nice queues and do not deadlock.
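For instance, if every transaction locks the rows of `t1' that it
needs in ascending primary-key order, two transactions can never each
hold a row the other is waiting for; one simply queues behind the
other (table and column names are illustrative):

```sql
SET AUTOCOMMIT=0;
-- Every transaction locks row 3 before row 7, so their lock
-- requests cannot cross.
SELECT * FROM t1 WHERE id IN (3, 7) ORDER BY id FOR UPDATE;
UPDATE t1 SET qty = qty - 1 WHERE id IN (3, 7);
COMMIT;
```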
* Add well-chosen indexes to your tables. Then your queries need to
scan fewer index records and consequently set fewer locks. Use
`EXPLAIN SELECT' to determine which indexes the MySQL server
regards as the most appropriate for your queries.
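For example, assuming a table `orders' with a column `customer_id'
(illustrative names):

```sql
-- If the "key" column of the EXPLAIN output is NULL, the query
-- performs a full scan, and a locking version of the same query
-- would lock every row it examines.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Adding an index lets the query touch, and therefore lock,
-- only the matching index records.
ALTER TABLE orders ADD INDEX (customer_id);
```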
* Use less locking. If you can afford to allow a `SELECT' to return
data from an old snapshot, do not add the clause `FOR UPDATE' or
`LOCK IN SHARE MODE' to it. Using `READ COMMITTED' isolation level
is good here, because each consistent read within the same
transaction reads from its own fresh snapshot.
* If nothing helps, serialize your transactions with table-level
locks. The correct way to use `LOCK TABLES' with transactional
tables, like InnoDB, is to set `AUTOCOMMIT = 0' and not to call
`UNLOCK TABLES' until you commit the transaction explicitly. For
example, if you need to write table `t1' and read table `t2', you
can do this:
SET AUTOCOMMIT=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
[do something with tables t1 and t2 here];
COMMIT;
UNLOCK TABLES;
Table-level locks make your transactions queue nicely, and
deadlocks are avoided.
* Another way to serialize transactions is to create an auxiliary
"semaphore" table that contains just a single row. Have each
transaction update that row before accessing other tables. In that
way, all transactions happen in a serial fashion. Note that the
`InnoDB' instant deadlock detection algorithm also works in this
case, because the serializing lock is a row-level lock. With
MySQL table-level locks, the timeout method must be used to
resolve deadlocks.
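A minimal sketch of such a semaphore table, using the table and
column names `sem' and `i' purely for illustration:

```sql
-- One-time setup: a single-row InnoDB table used only for
-- serialization.
CREATE TABLE sem (i INT NOT NULL PRIMARY KEY) TYPE=InnoDB;
INSERT INTO sem VALUES (1);

-- At the start of each transaction, update the semaphore row.
-- The row-level lock forces such transactions to run one at a
-- time, and InnoDB deadlock detection still applies to it.
SET AUTOCOMMIT=0;
UPDATE sem SET i = 1 WHERE i = 1;
-- ... access the other tables here ...
COMMIT;
```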
* In applications that use `AUTOCOMMIT=1' together with MySQL's
`LOCK TABLES' command, the internal InnoDB table locks present
from 4.0.20 to 4.0.23 can cause deadlocks. Starting from 4.0.22,
you can set `innodb_table_locks=0' in `my.cnf' to restore the old
behavior and avoid the problem. As of 4.0.24, InnoDB does not set
table locks if `AUTOCOMMIT=1'.
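The option belongs in the `[mysqld]' section of `my.cnf', for
example:

```ini
[mysqld]
innodb_table_locks=0
```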