path: root/mysql-test/r/flush.result
author    Dmitry Lenev <dlenev@mysql.com>  2010-02-15 13:23:34 +0300
committer Dmitry Lenev <dlenev@mysql.com>  2010-02-15 13:23:34 +0300
commit    eb0f09712e1b09955cc9e60d516ddf19336c9ffc (patch)
tree      34f7770c9a35a8f2e093f9073c804ce696ae9071 /mysql-test/r/flush.result
parent    d5a498abc668763053d46c83e61827f78a4bad0d (diff)
download  mariadb-git-eb0f09712e1b09955cc9e60d516ddf19336c9ffc.tar.gz
Fix for bug #51134 "Crash in MDL_lock::destroy on a concurrent
DDL workload".

When a RENAME TABLE or LOCK TABLE ... WRITE statement which mentioned
the same table several times was aborted during the process of
acquiring metadata locks (due to a deadlock which was discovered, or
because of a KILL statement), the server might have crashed.

When the attempt to acquire all requested locks failed, we went
through the list of requests and released, one by one, the locks we
had managed to acquire by that moment. Since in the scenario described
above the list of requests contained duplicates, this led to releasing
the same ticket twice and a crash as a result.

This patch solves the problem by employing a different approach to
releasing locks when not all requested locks could be acquired: we now
take an MDL savepoint before starting to acquire locks and simply roll
back to it if things go bad.

mysql-test/r/lock_multi.result:
  Updated test results (see lock_multi.test).
mysql-test/t/lock_multi.test:
  Added test case for bug #51134 "Crash in MDL_lock::destroy on a
  concurrent DDL workload".
sql/mdl.cc:
  MDL_context::acquire_locks(): When the attempt to acquire all
  requested locks fails, do not go through the list of requests
  releasing the acquired locks one by one. Since the list of requests
  can contain duplicates, that approach may release the same ticket
  twice and crash. Instead, take an MDL savepoint before starting to
  acquire locks and roll back to it on failure.
Diffstat (limited to 'mysql-test/r/flush.result')
0 files changed, 0 insertions, 0 deletions