author     Magne Mahre <magne.mahre@sun.com>   2009-10-15 13:07:04 +0200
committer  Magne Mahre <magne.mahre@sun.com>   2009-10-15 13:07:04 +0200
commit     53d549483b9f34c46a26aace15c4db75fe81b9a8 (patch)
tree       21861596a609c2bbc81a41ae1e6dfea0660f2351 /sql/table.cc
parent     ffbe8512f87aa27efd6a22a7db474a1466d2c767 (diff)
download   mariadb-git-53d549483b9f34c46a26aace15c4db75fe81b9a8.tar.gz
Bug #37433  Deadlock between open_table, close_open_tables,
            get_table_share, drop_open_table

In the partition handler code, LOCK_open and share->LOCK_ha_data
are acquired in the wrong order in certain cases. When doing a
multi-row INSERT (i.e. an INSERT..SELECT) into a table with an
auto-increment column, the increments must form a monotonically
increasing, continuous sequence (i.e. it cannot have "holes").
To achieve this, a lock is held for the duration of the operation,
and share->LOCK_ha_data was used for this purpose. Whenever a view
had to be opened _during_ the operation (views are not currently
pre-opened the way tables are) and LOCK_open was grabbed, a deadlock
could occur, because share->LOCK_ha_data is acquired in other places
_while_ holding LOCK_open.

A new mutex was introduced in the HA_DATA_PARTITION structure, for
the exclusive use of the auto-increment data fields, so the use of
LOCK_ha_data no longer needs to be overloaded here.

A test case has not been supplied, since the problem occurs as the
result of a race condition, and testing for it is therefore not
deterministic. It can be reproduced by setting up the scenario
described in the bug report.
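
As a rough illustration of the fix, here is a minimal sketch of a
per-share partition structure carrying its own auto-increment mutex.
Only the HA_DATA_PARTITION name comes from the message above; the
field names (next_auto_inc_val, auto_inc_initialized, LOCK_auto_inc)
and the helper functions are assumptions for illustration, not the
actual patch.

/* Minimal sketch: per-share partition data with a dedicated
   auto-increment mutex, so the handler no longer overloads
   share->LOCK_ha_data (which is elsewhere taken under LOCK_open).
   Field and helper names are illustrative assumptions. */
#include <pthread.h>

typedef unsigned long long ulonglong;   /* stand-in for MySQL's typedef  */

typedef struct st_ha_data_partition
{
  ulonglong next_auto_inc_val;          /* next value to hand out        */
  bool auto_inc_initialized;            /* max value read from parts?    */
  pthread_mutex_t LOCK_auto_inc;        /* guards the two fields above   */
} HA_DATA_PARTITION;

/* Taken only around auto-increment reservation; it is never held by
   code that also needs LOCK_open, which breaks the deadlock cycle. */
static inline void lock_auto_increment(HA_DATA_PARTITION *ha_data)
{
  pthread_mutex_lock(&ha_data->LOCK_auto_inc);
}

static inline void unlock_auto_increment(HA_DATA_PARTITION *ha_data)
{
  pthread_mutex_unlock(&ha_data->LOCK_auto_inc);
}
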
Diffstat (limited to 'sql/table.cc')
-rw-r--r--  sql/table.cc  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/sql/table.cc b/sql/table.cc
index 52e06809d7c..325e84ca55a 100644
--- a/sql/table.cc
+++ b/sql/table.cc
@@ -1601,6 +1601,8 @@ static int open_binary_frm(THD *thd, TABLE_SHARE *share, uchar *head,
delete crypted;
delete handler_file;
my_hash_free(&share->name_hash);
+ if (share->ha_data_destroy)
+ share->ha_data_destroy(share->ha_data);
open_table_error(share, error, share->open_errno, errarg);
DBUG_RETURN(error);
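
The two added lines make the error path of open_binary_frm() release
engine-private share data through the ha_data_destroy hook, so the new
per-share mutex is not leaked when opening the .frm file fails. As a
hedged sketch of that pattern (the callback typedef and the destroy
function shown here are assumptions for illustration; only
share->ha_data_destroy itself appears in the diff):

/* Sketch of the destroy-hook pattern relied on by the error path
   above; the typedef and function are illustrative assumptions. */
#include <pthread.h>

typedef void (*ha_data_destroy_func)(void *ha_data);

/* The partition engine would point share->ha_data_destroy at
   something like this, so the mutex in HA_DATA_PARTITION (see the
   sketch above) is destroyed exactly once, whether the share is
   released normally or through the open_binary_frm() error path. */
static void ha_data_partition_destroy(void *ha_data)
{
  HA_DATA_PARTITION *part_data= (HA_DATA_PARTITION*) ha_data;
  pthread_mutex_destroy(&part_data->LOCK_auto_inc);
}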