path: root/sql/rpl_parallel.h
* Merge 10.6 into 10.8 (Marko Mäkelä, 2023-03-29, 1 file, -1/+2)
|\
| * MDEV-26071: rpl.rpl_perfschema_applier_status_by_worker failed in bb with: Test assertion failed (Andrei, 2023-03-24, 1 file, -1/+2)

    Problem:
    =======
    Assertion text: 'Value returned by SSS and PS table for
    Last_Error_Number should be same.'
    Assertion condition: '"1146" = "0"'
    Assertion condition, interpolated: '"1146" = "0"'
    Assertion result: '0'

    Analysis:
    ========
    In parallel replication, the worker pool gets activated when the slave is
    started, and it gets cleared when the slave stops. Each time the worker
    pool gets activated, a backup worker pool also gets created to store
    worker-specific performance schema information in case of errors. On
    error, all relevant information is copied from rpl_parallel_thread to rli
    and gets cleared from the thread. Then the server waits for all workers
    to complete their work; during this stage the performance schema specific
    worker info is stored into the backup pool, and finally the actual pool
    gets cleared. If users query the performance schema table to know the
    status of workers, the information from the backup pool is used.

    The test simulates an ER_NO_SUCH_TABLE error and verifies the worker
    information in the pfs table. The test works fine if execution occurs in
    the following order:

    Step 1. Error occurred; worker information is copied to the backup pool.
    Step 2. handle_slave_sql invokes 'rpl_parallel_resize_pool_if_no_slaves'
            to deactivate the worker pool; it marks pool->count=0.
    Step 3. The PFS table is queried; since the actual pool is deactivated,
            the backup pool information is read.

    If Step 3 happens prior to Step 2, the pool is yet to be deactivated, so
    the actual pool is read, which doesn't have any error details as they
    were cleared. Hence the test occasionally fails.

    Fix:
    ===
    Upon error, mark the backup pool as active, so that if the PFS table is
    queried while the backup pool is flagged as valid, its information is
    read; when it is not flagged, the regular pool is read.

    This work is one of the last pieces created by the late Sujatha Sivakumar.
* | MDEV-11675 Lag Free Alter On Slave (Sachin, 2022-01-27, 1 file, -1/+25)
|/
    This commit implements two-phase binloggable ALTER. With the new
    @@session.binlog_alter_two_phase = YES, an ALTER query gets logged in two
    parts: the START ALTER and the COMMIT or ROLLBACK ALTER.

    START ALTER is written into the binlog as soon as the necessary locks
    have been acquired for the table. The timing is such that any concurrent
    DMLs that update the same table are either committed, thus logged into
    the binary log having done work on the old version of the table, or will
    be queued for execution on its new version.

    The completing COMMIT or ROLLBACK ALTER is written at the very point
    where a normal "single-piece" ALTER would be, that is, after most of the
    query work is done. When the result is positive, COMMIT ALTER is written;
    otherwise ROLLBACK ALTER is written, with the specific error that
    happened after the START ALTER phase.

    Replication of two-phase binloggable ALTER is cross-version safe.
    Specifically, an old slave simply does not recognize the START ALTER
    part, while still being able to process and memorize its GTID.

    Two-phase logged ALTER is read from the binlog by mysqlbinlog to produce
    BINLOG 'string', where 'string' contains a base64-encoded Query_log_event
    containing either the start part of the ALTER, or a completion part. The
    Query details can be displayed with the `-v` flag, similarly to ROW
    format events. Notice that mysqlbinlog output containing parts of a
    two-phase binloggable ALTER is processable correctly only by a
    binlog_alter_two_phase server.

    @@log_warnings > 2 can reveal details of binlogging and slave-side
    processing of the ALTER parts.

    The current commit also carries fixes to the following list of reported
    bugs: MDEV-27511, MDEV-27471, MDEV-27349, MDEV-27628, MDEV-27528.

    Thanks to all people involved in early discussion of the feature,
    including Kristian Nielsen, and to those who helped to design, implement
    and test: Sergei Golubchik, Andrei Elkin (who took on completing the
    implementation), Sujatha Sivakumar, Brandon Nesterenko, Alice Sherepa,
    Ramesh Sivaraman, Jan Lindstrom.
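As a rough usage sketch of the feature described above (hedged: the table name t1 and the added column are made up for illustration; the variable name and the START/COMMIT/ROLLBACK ALTER parts come from the commit message):

    -- Enable two-phase logging for this session and run an ALTER.
    SET @@session.binlog_alter_two_phase = 1;
    ALTER TABLE t1 ADD COLUMN b INT;
    -- The binlog now carries two event groups for the statement, roughly:
    --   START ALTER  (written once the necessary table locks are acquired)
    --   COMMIT ALTER (or ROLLBACK ALTER with the error, on failure)
    -- mysqlbinlog -v displays the Query details of both parts.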
* MDEV-25502: rpl.rpl_perfschema_applier_status_by_worker failed in bb with: Test assertion failed (Sujatha, 2021-05-13, 1 file, -1/+1)

    Problem:
    =======
    The test fails with 3 different symptoms:

    connection slave;
    Assertion text: 'Last_Seen_Transaction should show .'
    Assertion condition: '"0-1-1" = ""'
    Assertion condition, interpolated: '"0-1-1" = ""'
    Assertion result: '0'

    connection slave;
    Assertion text: 'Value returned by SSS and PS table for
    Last_Error_Number should be same.'
    Assertion condition: '"1146" = "0"'
    Assertion condition, interpolated: '"1146" = "0"'
    Assertion result: '0'

    connection slave;
    Assertion text: 'Value returned by PS table for worker_idle_time should
    be >= 1'
    Assertion condition: '"0" >= "1"'
    Assertion condition, interpolated: '"0" >= "1"'
    Assertion result: '0'

    Fix1:
    ====
    The performance schema table's Last_Seen_Transaction is compared with
    'SELECT gtid_slave_pos'. Since DDLs are not transactional, changes to the
    user table and to the gtid_slave_pos table are not guaranteed to be
    synchronous. To fix the issue, the Gtid_IO_Pos value from the SHOW SLAVE
    STATUS command is used to verify the correctness of the performance
    schema specific Last_Seen_Transaction.

    Fix2:
    ====
    On error, worker thread information is stored as part of the backup pool.
    Access to this backup pool should be protected by the
    'LOCK_rpl_thread_pool' mutex, so that a simultaneous START SLAVE cannot
    destroy the backup pool while it is being queried by performance schema.

    Fix3:
    ====
    When a worker is waiting for events and the performance schema table is
    queried, at present it just returns the difference between current_time
    and start_time. This is incorrect; it should be worker_idle_time +
    (current_time - start_time). For example, a worker thread was idle for 10
    seconds and then got events to process. Upon completion it goes back to
    the idle state; if the pfs table is queried now, it should return the
    current idle time + worker_idle_time.
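A minimal sketch of the comparison described in Fix1, assuming performance_schema is enabled (the PFS table and column names come from the MDEV-20220 commits below):

    SHOW SLAVE STATUS;  -- read the Gtid_IO_Pos column
    SELECT LAST_SEEN_TRANSACTION
      FROM performance_schema.replication_applier_status_by_worker;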
* MDEV-20220: Merge 5.7 P_S replication table 'replication_applier_status_by_worker' (Sujatha, 2021-04-08, 1 file, -0/+35)

    Step 3:
    ======
    Preserve worker pool information on either STOP SLAVE or error. In case
    STOP SLAVE is executed, the worker threads will be gone and hence
    unavailable; querying the table at this stage would give empty rows. To
    address this, when worker threads are about to stop due to an error or a
    forced stop, create a backup pool and preserve the data that is relevant
    for populating the performance schema table. Clear the backup pool upon
    slave start.
* MDEV-20220: Merge 5.7 P_S replication table 'replication_applier_status_by_worker' (Sujatha, 2021-04-08, 1 file, -0/+23)

    Step 2:
    =====
    Add two extra columns mentioned below.

    ---------------------------------------------------------------------------
    |Column Name:           | Description:                                    |
    |-------------------------------------------------------------------------|
    |WORKER_IDLE_TIME       | Total idle time in seconds that the worker      |
    |                       | thread has spent waiting for work from          |
    |                       | co-ordinator thread                             |
    |-------------------------------------------------------------------------|
    |LAST_TRANS_RETRY_COUNT | Total number of retries attempted by last       |
    |                       | transaction                                     |
    ---------------------------------------------------------------------------
* MDEV-20220: Merge 5.7 P_S replication table 'replication_applier_status_by_worker' (Sujatha, 2021-04-08, 1 file, -0/+7)

    Step 1:
    =====
    Backport 'replication_applier_status_by_worker' from upstream.

    Iterate through rpl_parallel_thread_pool and display slave worker thread
    specific information as part of the
    'replication_applier_status_by_worker' table.

    ---------------------------------------------------------------------------
    |Column Name:           | Description:                                    |
    |-------------------------------------------------------------------------|
    |CHANNEL_NAME           | Name of replication channel through which the   |
    |                       | transaction is received.                        |
    |-------------------------------------------------------------------------|
    |THREAD_ID              | Thread_Id as displayed in the                   |
    |                       | 'performance_schema.threads' table for the      |
    |                       | thread with name                                |
    |                       | 'thread/sql/rpl_parallel_thread'.               |
    |                       | THREAD_ID will be NULL when worker threads are  |
    |                       | stopped due to an error/force stop.             |
    |-------------------------------------------------------------------------|
    |SERVICE_STATE          | Whether the thread is running or not.           |
    |-------------------------------------------------------------------------|
    |LAST_SEEN_TRANSACTION  | Last GTID executed by the worker.               |
    |-------------------------------------------------------------------------|
    |LAST_ERROR_NUMBER      | Last error that occurred on a particular        |
    |                       | worker.                                         |
    |-------------------------------------------------------------------------|
    |LAST_ERROR_MESSAGE     | Last error specific message.                    |
    |-------------------------------------------------------------------------|
    |LAST_ERROR_TIMESTAMP   | Timestamp of the last error.                    |
    ---------------------------------------------------------------------------

    CHANNEL_NAME will be empty when the worker has not processed any
    transaction. Channel_name points to a valid source channel_name when the
    worker is processing a transaction/event group.
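Assuming performance_schema is enabled, the new table can be queried like any other PFS table; a minimal sketch using the columns listed above:

    SELECT CHANNEL_NAME, THREAD_ID, SERVICE_STATE, LAST_SEEN_TRANSACTION,
           LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP
      FROM performance_schema.replication_applier_status_by_worker;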
* Merge 10.4 into 10.5 (Marko Mäkelä, 2020-06-18, 1 file, -0/+2)
|\
| * MDEV-22370 safe_mutex: Trying to lock uninitialized mutex at /data/src/10.4-bug/sql/rpl_parallel.cc, line 470 upon shutdown during FTWRL (Sachin, 2020-06-17, 1 file, -0/+2)

    Problem:
    When we issue FTWRL and shutdown in parallel, there is a race between
    FTWRL and shutdown. Shutdown might destroy the mutex
    (pool->LOCK_rpl_thread_pool) before FTWRL can lock it, so we can get a
    crash in the FTWRL thread.

    Solution:
    mysql_mutex_destroy(pool->LOCK_rpl_thread_pool) should wait for the FTWRL
    thread to complete its work, and only then destroy the mutex. So
    slave_prepare_for_shutdown now just deactivates the pool, and the mutex
    is destroyed later in end_slave().
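A sketch of the race described above, assuming two concurrent client sessions (the timing window is inside the server and hard to hit deliberately):

    -- Session 1:
    FLUSH TABLES WITH READ LOCK;
    -- Session 2, issued at the same time:
    SHUTDOWN;
    -- Before the fix, shutdown could destroy pool->LOCK_rpl_thread_pool
    -- while the FTWRL thread was still about to lock it.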
* | MDEV-742 XA PREPAREd transaction survive disconnect/server restart (Andrei Elkin, 2020-03-14, 1 file, -1/+2)
|/
    Lifted the long-standing limitation that an XA transaction was rolled
    back at its connection's close even if the XA was prepared. A prepared
    XA transaction is now made to survive connection close or server restart.

    The patch consists of:

    - A binary logging extension to write the prepared XA part of the
      transaction, signified with its XID, in a new XA_prepare_log_event. The
      conclusion part - with the Commit or Rollback decision - is logged
      separately as a Query_log_event. That is, in the binlog the XA consists
      of two separate groups of events. This means the whole XA can
      interleave in the binlog with other XAs or regular transactions, but
      with no harm to replication and data consistency. Gtid_log_event
      receives two more flags to identify which of the two XA phases of the
      transaction it represents. With either flag set, XID info is also added
      to the event. When binlog is ON on the server, XID::formatID is
      constrained to 4 bytes.
    - Engines are made aware of the server policy to keep up user-prepared
      XAs, so they (InnoDB, RocksDB) no longer roll them back in their
      disconnect methods.
    - The slave applier is refined to cope with two-phase logged XAs,
      including parallel modes of execution.

    This patch does not address crash-safe logging of the new events, which
    is being addressed by MDEV-21469.

    CORNER CASES: read-only, pure MyISAM, binlog-*, @@skip_log_bin, etc. are
    addressed along the following policies:

    1. Read-only at reconnect marks the XID to fail for future completion
       with ER_XA_RBROLLBACK.
    2. A binlog-* filtered XA, when it changes engine data, is regarded as
       loggable even when nothing got cached for the binlog. An empty
       XA-prepare group is recorded. The consequent Commit-or-Rollback
       succeeds in the engine(s) and is recorded into the binlog as well.
    3. The same applies to non-transactional engine XA.
    4. @@skip_log_bin=OFF does not record anything at XA-prepare (obviously),
       but the completion event is recorded into the binlog to admit
       inconsistency with the slave.

    The following actions are taken by the patch:

    At XA-prepare: when the binlog cache is empty, don't do anything to the
    binlog if read-only; otherwise write an empty XA_prepare
    (assert(binlog-filter case)).
    At disconnect: when prepared && read-only (=> no binlogging was done),
    set Xid_cache_element::error := ER_XA_RBROLLBACK, *keep* the XID in the
    cache, and roll back the transaction.
    At XA-"complete": discover the error, if any; don't binlog the
    "complete"; return the error to the user.

    Kudos
    -----
    Alexey Botchkov took to drive this work initially. Sergei Golubchik,
    Sergei Petrunja, Marko Mäkelä provided a number of good recommendations.
    Sergei Voitovich made a magnificent review and improvements to the code.
    They all deserve a bunch of thanks for making this work done!
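A minimal sketch of the newly supported lifecycle (the xid 'x1' and table t1 are made-up names; the XA statements are standard syntax):

    XA START 'x1';
    INSERT INTO t1 VALUES (1);
    XA END 'x1';
    XA PREPARE 'x1';
    -- the client disconnects, or the server restarts; 'x1' now survives
    -- from a new connection:
    XA RECOVER;        -- lists 'x1'
    XA COMMIT 'x1';    -- or XA ROLLBACK 'x1'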
* Change "static int" to enum in classesMichael Widenius2017-04-181-17/+21
| | | | This was done when static int where used as bit fields or enums
* Merge 10.0 into 10.1 (Marko Mäkelä, 2017-03-03, 1 file, -0/+1)
|\
| * MDEV-9573 'Stop slave' hangs on replication slave (Monty, 2017-02-28, 1 file, -0/+1)

    The reason for this is that STOP SLAVE takes LOCK_active_mi over the
    whole operation, while some slave operations also need LOCK_active_mi,
    which causes deadlocks.

    Fixed by introducing object counting for Master_info and not taking
    LOCK_active_mi over STOP SLAVE or even stop_all_slaves().

    Another benefit of this approach is that it allows:
    - Multiple threads to run SHOW SLAVE STATUS at the same time
    - START/STOP/RESET/SLAVE STATUS on a slave without blocking other slaves
    - A simpler interface for handling get_master_info()
    - Added some missing unlocks of 'log_lock' in error conditions
    - Moved rpl_parallel_inactivate_pool(&global_rpl_thread_pool) to the end
      of stop_slave() to not have to use LOCK_active_mi inside
      terminate_slave_threads()
    - Changed the argument of remove_master_info() to Master_info, as we
      always have this available
    - Fixed a core dump when doing FLUSH TABLES WITH READ LOCK and parallel
      replication. The problem was that waiting for pause_for_ftwrl was not
      done when deleting rpt->current_owner after a force_abort.
* | Merge branch '10.0' into 10.1 (Sergei Golubchik, 2016-03-21, 1 file, -1/+1)
|\ \
| |/
| * Fix spelling: occurred, execute, which etc (Otto Kekäläinen, 2016-03-04, 1 file, -1/+1)
| |
* | Merge branch '10.0' into 10.1 (Sergei Golubchik, 2015-12-21, 1 file, -0/+1)
|\ \
| |/
| * Fixed failures in rpl_parallel2 (Monty, 2015-11-23, 1 file, -0/+1)

    The problem was that we used the same condition variable with two
    different mutexes. Fixed by changing to use COND_rpl_thread_stop instead
    of COND_parallel_entry for stopping threads.

    Patch by Kristian Nielsen.
* | Merge branch 'mdev7818-4' into 10.1 (Kristian Nielsen, 2015-11-13, 1 file, -7/+31)
|\ \
| |/
    Conflicts:
        mysql-test/suite/perfschema/r/stage_mdl_global.result
        sql/rpl_rli.cc
        sql/sql_parse.cc
| * MDEV-7818: Deadlock occurring with parallel replication and FTWRL (Kristian Nielsen, 2015-11-13, 1 file, -7/+31)

    The problem is that FLUSH TABLES WITH READ LOCK first blocks threads from
    starting new commits, then waits for running commits to complete. But
    in-order parallel replication needs commits to happen in a particular
    order, so this can easily deadlock.

    To fix this problem, this patch introduces a way to temporarily pause the
    parallel replication worker threads. Before starting FTWRL, we let all
    worker threads complete in-progress transactions, and then wait. Then we
    proceed to take the global read lock. Once the lock is obtained, we
    unpause the worker threads. Now commits are blocked from starting by the
    global read lock, so the deadlock will no longer occur.
* | Merge MDEV-8147 into 10.1 (Kristian Nielsen, 2015-05-26, 1 file, -0/+6)
|\ \
| |/
| * MDEV-8147: Assertion `m_lock_type == 2' failed in handler::ha_close() during parallel replication (Kristian Nielsen, 2015-05-26, 1 file, -0/+6)

    When the slave processes the master restart format_description event,
    parallel replication needs to complete any prior events before processing
    the restart event (which closes temporary tables and such stuff). This
    happens in wait_for_workers_idle(); however, it was not waiting long
    enough. The wait was using wait_for_prior_commit(), but at that point
    tables can still be open. This led to the assertion in this case.

    So change wait_for_workers_idle() to wait until all worker threads have
    reached finish_event_group(), at which point all tables should have been
    closed.
| * Merge MDEV-7847 and MDEV-7882 into 10.0. (Kristian Nielsen, 2015-03-30, 1 file, -1/+1)
| |\
    Conflicts:
        mysql-test/suite/rpl/r/rpl_parallel.result
        mysql-test/suite/rpl/t/rpl_parallel.test
* | \ Merge 10.0 -> 10.1. (Kristian Nielsen, 2015-04-17, 1 file, -4/+2)
|\ \ \
| |/ /
    Conflicts:
        mysql-test/suite/multi_source/multisource.result
        sql/sql_base.cc
| * | MDEV-5289: master server starts slave parallel threads (Kristian Nielsen, 2015-03-11, 1 file, -4/+2)

    Delay spawning parallel replication worker threads until a slave SQL
    thread is running, and de-spawn them when the last SQL thread stops.

    This is especially useful to avoid needless threads on a master in a
    setup where the same my.cnf is used on masters and slaves.
* | | Merge MDEV-7847 and MDEV-7882 into 10.0. (Kristian Nielsen, 2015-03-30, 1 file, -1/+1)
|\ \ \
| | |/
| |/|
    Conflicts:
        mysql-test/suite/rpl/r/rpl_parallel.result
        sql/rpl_parallel.cc
| * | MDEV-7847: "Slave worker thread retried transaction 10 time(s) in vain, giving up", followed by replication hanging (Kristian Nielsen, 2015-03-30, 1 file, -1/+1)

    This patch fixes a bug in the error handling in parallel replication,
    when one worker thread gets a failure and other worker threads processing
    later transactions have to roll back and abort.

    The problem was with the lifetime of group_commit_orderer objects (GCOs).
    A GCO is freed when we register that its last event group has committed.
    This relies on register_wait_for_prior_commit() and
    wait_for_prior_commit() to ensure that the fact that T2 has committed
    implies that any earlier T1 has also committed, and can thus no longer
    execute mark_start_commit().

    However, in the error case, the code was skipping the
    register_wait_for_prior_commit() and wait_for_prior_commit() calls. Thus
    commit ordering was not guaranteed, and a GCO could be freed too early.
    Then a later mark_start_commit() would reference a deallocated GCO, which
    could lead to a lost wakeup (causing slave threads to hang) or other
    corruption.

    This patch makes also the error case respect commit order. This way, also
    the error case gets the GCO lifetime correct, and the hang no longer
    occurs.
* | | MDEV-7825: Parallel replication race condition on gco->flags, possibly resulting in slave hang (Kristian Nielsen, 2015-03-24, 1 file, -3/+4)

    The patch for optimistic parallel replication, as a memory optimisation,
    moved the gco->installed field into a bit in gco->flags. However, that is
    just plain wrong. The gco->flags field is owned by the SQL driver thread,
    but gco->installed is used by the worker threads, so this will cause a
    race condition.

    The user-visible problem might be conflicts between transactions and/or
    slave threads hanging.

    So revert this part of the optimistic parallel replication patch, going
    back to using a separate field gco->installed like in 10.0.
* | | Merge MDEV-6589 and MDEV-6403 into 10.1. (Kristian Nielsen, 2015-03-04, 1 file, -0/+1)
|\ \ \
| | |/
| |/|
    Conflicts:
        sql/log.cc
        sql/rpl_rli.cc
        sql/sql_repl.cc
| * | MDEV-6589: Incorrect relay log start position when restarting SQL thread after error in parallel replication (Kristian Nielsen, 2015-03-04, 1 file, -0/+1)
| |/
    The problem occurs in parallel replication in GTID mode, when we are
    using multiple replication domains. In this case, if the SQL thread
    stops, the slave GTID position may refer to a different point in the
    relay log for each domain.

    The bug was that when the SQL thread was stopped and restarted (but the
    IO thread was kept running), the SQL thread would resume applying the
    relay log from the point of the most advanced replication domain,
    silently skipping all earlier events within other domains. This caused
    replication corruption.

    This patch solves the problem by storing, when the SQL thread stops with
    multiple parallel replication domains active, the current GTID position.
    Additionally, the current position in the relay logs is moved back to a
    point known to be earlier than the current position of any replication
    domain. Then when the SQL thread restarts from the earlier position,
    GTIDs encountered are compared against the stored GTID position. Any GTID
    that was already applied before the stop is skipped to avoid duplicate
    apply.

    This patch should have no effect if multi-domain GTID parallel
    replication is not used. Similarly, if both the SQL and IO threads are
    stopped and restarted, the patch has no effect, as in this case the
    existing relay logs are removed and re-fetched from the master at the
    current global @@gtid_slave_pos.
* | Merge branch '10.0' into merge-wip (Sergei Golubchik, 2015-01-31, 1 file, -4/+18)
|\ \
| |/
| * MDEV-7326: Server deadlock in connection with parallel replication (Kristian Nielsen, 2015-01-07, 1 file, -4/+18)

    The bug occurs when a transaction does a retry after all transactions
    have done mark_start_commit() in a batch of group commit from the master.
    In this case, the retrying transaction can unmark_start_commit() after
    the following batch has already started running and de-allocated the GCO.
    Then, after retry, the transaction will re-do mark_start_commit() on a
    de-allocated GCO, and also the wakeup of later GCOs can be lost.

    This was seen "in the wild" by a user, even though it is not known
    exactly what circumstances can lead to retry of one transaction after all
    transactions in a group have reached the commit phase.

    The lifetime around GCO was somewhat clunky anyway. With this patch, a
    GCO lives until rpl_parallel_entry::last_committed_sub_id has reached the
    last transaction in the GCO. This guarantees that the GCO will still be
    alive when a transaction does mark_start_commit(). Also, we now loop over
    the list of active GCOs for wakeup, to ensure we do not lose a wakeup
    even in the problematic case.
* | MDEV-6676: Optimistic parallel replication (Kristian Nielsen, 2014-12-06, 1 file, -1/+23)
|/
    Implement a new mode for parallel replication. In this mode, all
    transactions are optimistically attempted in parallel. In case of
    conflicts, the offending transaction is rolled back and retried later,
    non-parallel.

    This is an early-release patch to facilitate testing; more changes to the
    user interface / options are to be expected. The new mode is not enabled
    by default.
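A sketch of enabling the mode; the commit notes the user interface was still expected to change, so the variable names below are an assumption based on how later MariaDB releases exposed it:

    STOP SLAVE;
    SET GLOBAL slave_parallel_mode = 'optimistic';
    SET GLOBAL slave_parallel_threads = 4;
    START SLAVE;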
* MDEV-6680: Performance of domain_parallel replication is disappointing (Kristian Nielsen, 2014-11-13, 1 file, -1/+49)

    The code that handles free lists of various objects passed to worker
    threads in parallel replication handles freeing in batches, to avoid
    taking and releasing LOCK_rpl_thread too often. However, it was possible
    for freeing to be delayed to the point where one thread could stall the
    SQL driver thread due to a full queue, while other worker threads might
    be idle. This could significantly degrade possible parallelism and thus
    performance.

    Clean up the batch freeing code so that it is more robust and now able to
    regularly free batches of objects, so that normally the queue will not
    run full unless the SQL driver thread is really far ahead of the worker
    threads.
* MDEV-6321: close_temporary_tables() in format description event not serialised correctly (Kristian Nielsen, 2014-08-20, 1 file, -1/+1)

    After-review fixes. Mainly catching the case where the wait in
    wait_for_workers_idle() is aborted due to a kill. In this case, we should
    return an error and not proceed to execute the format description event,
    as other threads might still be running for a bit until the error is
    caught in all threads.
* MDEV-6321: close_temporary_tables() in format description event not serialised correctly (Kristian Nielsen, 2014-08-19, 1 file, -4/+9)

    Follow-up patch, fixing a possible deadlock issue.

    If the master crashes in the middle of an event group, there can be an
    active transaction in a worker thread when we encounter the following
    master restart format description event. In this case, we need to notify
    that worker thread to abort and roll back the partial event group.
    Otherwise a deadlock occurs: the worker thread waits for the commit that
    never arrives, and the SQL driver thread waits for the worker thread to
    complete its event group, which it never does.
* MDEV-6321: close_temporary_tables() in format description event not serialised correctly (Kristian Nielsen, 2014-07-02, 1 file, -0/+1)

    When a master server starts up, it logs a special format_description
    event at the start of a new binlog to mark that it has restarted. This is
    used by a slave to drop all temporary tables; this is needed in case the
    master crashed and did not have a chance to send explicit DROP TEMPORARY
    TABLE statements to the slave.

    In parallel replication, we need to be careful when dropping the
    temporary tables: we need to be sure that no prior events are still
    executing that might be using the temporary tables to be dropped, _and_
    that no following events have started executing that might have created
    new temporary tables that should not be dropped.

    This was not handled correctly, which could cause errors about access to
    non-existing temporary tables or even crashes. This patch implements that
    such format_description events cause serialisation of event execution:
    all prior events are executed to completion first, then the
    format_description event is executed, dropping temporary tables, then
    following events are queued for execution.

    Master restarts should be sufficiently infrequent that the resulting loss
    of parallelism should be of minimal impact.
* MDEV-6551: Some replication errors are ignored if slave_parallel_threads > 0 (Kristian Nielsen, 2014-08-15, 1 file, -1/+9)

    The problem occurred when using parallel replication, and an error
    occurred that caused the SQL thread to stop when the IO thread had
    already reached a following binlog file from the master (or otherwise
    performed a relay log rotation).

    In this case, the Rotate Event at the end of the relay log file could
    still be executed, even though an earlier event in that relay log file
    had gotten an error. This would cause the position to be incorrectly
    updated, so that upon restart of the SQL thread, the event that had
    failed would be silently skipped and ignored, causing replication
    corruption.

    Fixed by checking, before executing a Rotate Event, whether an earlier
    event has failed. If so, the Rotate Event is not executed, just dequeued,
    the same as for other normal events following a failing event.
* MDEV-5262, MDEV-5914, MDEV-5941, MDEV-6020: Deadlocks during parallel replication causing replication to fail (unknown, 2014-06-10, 1 file, -1/+1)

    Remove the temporary fix for MDEV-5914, which used READ COMMITTED for
    parallel replication worker threads. Replace it with a better, more
    selective solution.

    The issue is with certain edge cases of InnoDB gap locks, for example
    between INSERT and ranged DELETE. It is possible for the gap lock set by
    the DELETE to block the INSERT, if the DELETE runs first, while the
    record lock set by the INSERT does not block the DELETE, if the INSERT
    runs first. This can cause a conflict between the two in parallel
    replication on the slave even though they ran without conflicts on the
    master.

    With this patch, InnoDB will ask the server layer about the two involved
    transactions before blocking on a gap lock. If the server layer tells
    InnoDB that the transactions are already fixed wrt. commit order, as they
    are in parallel replication, InnoDB will ignore the gap lock and allow
    the two transactions to proceed in parallel, avoiding the conflict.

    Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it
    now asks the server layer for any preferences about which transaction to
    roll back. In case of parallel replication with two transactions T1 and
    T2 fixed to commit T1 before T2, the server layer will ask InnoDB to roll
    back T2 as the deadlock victim, not T1. This helps in some cases to avoid
    excessive deadlock rollback, as T2 will in any case need to wait for T1
    to complete before it can itself commit.

    Also some misc. fixes found during development and testing:

    - Remove thd_rpl_is_parallel(); it is not used or needed.
    - Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication
      worker thread is killed to resolve a deadlock with fixed commit
      ordering. There are some cases, eg. in sql/sql_parse.cc, where a
      KILL_QUERY can be ignored if the query otherwise completed
      successfully, and this could cause the deadlock kill to be lost, so
      that the deadlock was not correctly resolved.
    - Fix a random test failure due to missing
      wait_for_binlog_checkpoint.inc.
    - Make sure that deadlock or other temporary errors during parallel
      replication are not printed to the error log; there were some places
      around the replication code with extra error logging. These conditions
      can occur occasionally and are handled automatically without breaking
      replication, so they should not pollute the error log.
    - Fix handling of rgi->gtid_sub_id. We need to be able to access this
      also at the end of a transaction, to be able to detect and resolve
      deadlocks due to commit ordering. But this value was also used as a
      flag to mark whether record_gtid() had been called, by being set to
      zero, losing the value. Now, introduce a separate flag
      rgi->gtid_pending, so rgi->gtid_sub_id remains valid for the entire
      duration of the transaction.
    - Fix one place where the code to handle ignored errors called
      reset_killed() unconditionally, even if no error was caught that should
      be ignored. This could cause loss of a deadlock kill signal, breaking
      deadlock detection and resolution.
    - Fix a couple of missing mysql_reset_thd_for_next_command(). This could
      cause a prior error condition to remain for the next event executed,
      causing assertions about errors already being set and possibly giving
      incorrect error handling for following event executions.
    - Fix code that cleared thd->rgi_slave in the parallel replication worker
      threads after each event execution; this caused the deadlock detection
      and handling code to not be able to correctly process the associated
      transactions as belonging to replication worker threads.
    - Remove a useless error code in slave_background_kill_request().
    - Fix a bug where wfc->wakeup_error was not cleared at
      wait_for_commit::unregister_wait_for_prior_commit(). This could cause
      the error condition to wrongly propagate to a later
      wait_for_prior_commit(), causing spurious ER_PRIOR_COMMIT_FAILED
      errors.
    - Do not put the binlog background thread into the processlist. It causes
      too many result differences in mtr, but also it probably is not useful
      for users to pollute the process list with a system thread that does
      not really perform any user-visible tasks.
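A hypothetical statement pair of the kind described above (table and values made up): the ranged DELETE's gap lock can block the INSERT when the DELETE runs first, while the INSERT's record lock does not block the DELETE in the opposite order.

    -- Transaction T1:
    DELETE FROM t1 WHERE a BETWEEN 10 AND 20;
    -- Transaction T2:
    INSERT INTO t1 (a) VALUES (15);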
* MDEV-5262: Missing retry after temp error in parallel replication (unknown, 2014-05-15, 1 file, -0/+2)

    Handle retry of event groups that span multiple relay log files.

    - If a retry reaches the end of one relay log file, move on to the next.
    - Handle refcounting of relay log files, and avoid purging relay log
      files until all event groups that might have needed them for
      transaction retry have completed.
* MDEV-5262: Missing retry after temp error in parallel replication (unknown, 2014-05-08, 1 file, -1/+5)

    Start implementing that an event group can be re-tried in parallel
    replication if it fails with a temporary error (like deadlock). The patch
    is very incomplete; just some very basic retry works.

    Stuff still missing (not a complete list):
    - Handle moving to the next relay log file, if the event group to be
      retried spans multiple relay log files.
    - Handle refcounting of relay log files, to ensure that we do not purge a
      relay log file and then later attempt to re-execute events out of it.
    - Handle description_event_for_exec - we need to save this somehow for
      the possible retry - and use the correct one in case it differs between
      relay logs.
    - Do another retry attempt in case the first retry also fails.
    - Limit the max number of retries.
    - Lots of testing will be needed for the various edge cases.
* MDEV-6120: When slave stops with error, error message should indicate the failing GTID (Kristian Nielsen, 2014-06-25, 1 file, -1/+1)

    If replication breaks in GTID mode, it is not trivial to determine the
    GTID of the failing event group. This is a problem, as such a GTID is
    needed eg. to explicitly set @@gtid_slave_pos to skip to after that event
    group, or to compare errors on different servers, etc.

    Fix by ensuring that relevant slave errors logged to the error log
    include the GTID of the event group containing the problem event.
* Merge MDEV-5754, MDEV-5769, and MDEV-5764 into 10.0 (unknown, 2014-03-04, 1 file, -3/+3)
|\
| * MDEV-5769: Slave crashes on attempt to do parallel replication from an older master (unknown, 2014-03-04, 1 file, -2/+1)

    An older master has no GTID events, so such events are not available for
    deciding on scheduling of event groups and so on.

    With this patch, we run such events from old masters single-threaded, in
    the SQL driver thread. This seems better than trying to make the parallel
    code handle the data from older masters; while possible, this would
    require a lot of testing (as well as possibly some extra overhead in the
    scheduling of events), which hardly seems worthwhile.
| * MDEV-5764: START SLAVE UNTIL does not work with parallel replication (unknown, 2014-03-03, 1 file, -1/+2)

    With parallel replication, there can be any number of events queued on
    in-memory lists in the worker threads.

    For a normal STOP SLAVE, we want to skip executing any remaining events
    on those lists and stop as quickly as possible. However, for START SLAVE
    UNTIL, when the UNTIL position is reached in the SQL driver thread, we
    must _not_ stop until all already queued events for the workers have been
    executed - otherwise we would stop too early, before the actual UNTIL
    position had been completely reached.

    The code did not handle UNTIL correctly, stopping too early due to not
    executing the queued events to completion. Fix this, and also implement
    that an explicit STOP SLAVE in the middle (when the SQL driver thread has
    reached the UNTIL position but the workers have not) _will_ cause an
    immediate stop.
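A sketch of the UNTIL scenario (the file name and position are made up):

    START SLAVE UNTIL
        MASTER_LOG_FILE = 'master-bin.000002',
        MASTER_LOG_POS = 4711;
    -- With the fix, when the SQL driver thread reaches this position it
    -- still lets all events already queued to workers execute to
    -- completion, while an explicit STOP SLAVE issued in the middle
    -- stops immediately.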
* | Merge MDEV-5657 (parallel replication) to 10.0 (unknown, 2014-02-26, 1 file, -20/+126)
|\ \
| |/
| * MDEV-5657: Parallel replication. (unknown, 2014-02-26, 1 file, -20/+126)
|/
    Clean up and improve the parallel implementation code, mainly related to
    scheduling of work to threads and handling of stop and errors. Fix a lot
    of bugs in various corner cases that could lead to crashes or corruption.

    Fix that a single replication domain could easily grab all worker threads
    and stall all other domains; now a configuration variable
    --slave-domain-parallel-threads allows limiting the number of workers.

    Allow the next event group to start as soon as the previous group begins
    the commit phase (as opposed to when it ends it); this allows multiple
    event groups on the slave to participate in group commit, even when no
    other opportunities for parallelism are available.

    Various fixes:

    - Fix some races in the rpl.rpl_parallel test case.
    - Fix an old incorrect assertion in Log_event iocache read.
    - Fix repeated malloc/free of wait_for_commit and rpl_group_info objects.
    - Simplify wait_for_commit wakeup logic.
    - Fix one case in queue_for_group_commit() where killing one thread would
      fail to correctly signal the error to the next, causing loss of the
      transaction after slave restart.
    - Fix leaking of pthreads (and their allocated stack) due to missing
      PTHREAD_CREATE_DETACHED attribute.
    - Fix how one batch of group-committed transactions waits for the
      previous batch before starting to execute. The old code had a very
      complex scheduling where the first transaction was handled differently,
      with subtle bugs in corner cases. Now each event group is always
      scheduled for a new worker (in a round-robin fashion amongst available
      workers). Keep a count of how many transactions have started to commit,
      and wait for that counter to reach the appropriate value.
    - Fix slave stop to wait for all workers to actually complete processing;
      before, the wait was for the update of last_committed_sub_id, which
      happens a bit earlier, and could leave worker threads potentially
      accessing bits of the replication state that are no longer valid after
      slave stop.
    - Fix a couple of places where the test suite would kill a thread waiting
      inside enter_cond() in connection with debug_sync; debug_sync + kill
      can crash in rare cases due to a race with mysys_var_current_mutex in
      this case.
    - Fix some corner cases where we had enter_cond() but no exit_cond().
    - Fix that we could get a failure in wait_for_prior_commit() but forget
      to flag the error with my_error().
    - Fix slave stop (both for normal stop and stop due to error). Now, at
      stop we pick a specific safe point (in terms of event groups executed)
      and make sure that all event groups before that point are executed to
      completion, and that no event group after it starts executing; this
      ensures a safe place to restart replication, even for non-transactional
      stuff/DDL. In error stop, make sure that all prior event groups are
      allowed to execute to completion, and that any later event groups that
      have started are rolled back, if possible. The old code could leave eg.
      T1 and T3 committed but T2 not, or it could even leave half a
      transaction not rolled back in some random worker, which would cause
      big problems when that worker was later reused after slave restart.
    - Fix the accounting of the amount of events queued for one worker.
      Before, the amount was reduced immediately as soon as the events were
      dequeued (which happens all at once); this allowed twice the amount of
      events to be queued in memory for each single worker, which is not what
      users would expect.
    - Fix that an error set during execution of one event was sometimes not
      cleared before executing the next, causing problems with the error
      reporting.
    - Fix incorrect handling of thd->killed in worker threads.
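A sketch of capping how many workers one replication domain may occupy, using the option introduced above (shown here as the corresponding global variable; the value 4 is arbitrary):

    STOP SLAVE;
    SET GLOBAL slave_domain_parallel_threads = 4;
    START SLAVE;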
* MDEV-5509: Seconds_behind_master incorrect in parallel replication (unknown, 2014-01-08, 1 file, -0/+1)

    The problem was a race between the SQL driver thread and the worker
    threads. The SQL driver thread would set rli->last_master_timestamp to
    zero to mark that it has caught up with the master, while the worker
    threads would set it to the timestamp of the executed event. This can
    happen out-of-order in parallel replication, causing the "caught up"
    status to be overwritten and Seconds_Behind_Master to wrongly grow when
    the slave is idle.

    To fix, introduce a separate flag rli->sql_thread_caught_up to mark that
    the SQL driver thread is caught up. This avoids issues with worker
    threads overwriting the SQL driver thread status. In parallel
    replication, we then make SHOW SLAVE STATUS check in addition that all
    worker threads are idle before showing Seconds_Behind_Master as 0 due to
    the slave being idle.
* MDEV-4506: Parallel replication (unknown, 2013-11-05, 1 file, -0/+7)

    MDEV-5217: SQL thread hangs during stop if an error occurs in the middle
    of an event group

    Normally, when we stop the slave SQL thread in parallel replication, we
    want the worker threads to continue processing events until the end of
    the current event group. But if we stop due to an error that prevents
    further events from being queued, such as an error reading the relay log,
    no more events can be queued for the workers, so they have to abort even
    if they are in the middle of an event group.

    There was a bug where we would deadlock: the workers waiting for more
    events to be queued for the event group, and the stopped SQL thread
    waiting for the workers to complete their current event group before
    exiting.

    Fixed by now signalling from the SQL thread to all workers when it is
    about to exit, and cleaning up in all workers when so signalled. This
    patch fixes one of multiple problems reported in MDEV-5217.
* MDEV-5206: Incorrect slave old-style position in MDEV-4506, parallel replication. (unknown, 2013-10-31, 1 file, -0/+1)

    In parallel replication, there are two kinds of events which are executed
    in different ways. Normal events that are part of event
    groups/transactions are executed asynchronously by being queued for a
    worker thread. Other events, like format description and rotate, are
    executed directly in the driver SQL thread.

    If the direct execution of the other events were to update the old-style
    position, the position would get updated too far ahead, before the normal
    events that have been queued for a worker thread have been executed. So
    this patch adds some special cases to prevent such position updates ahead
    of time, and instead queues dummy events for the worker threads, so that
    they will do the position updates instead, at an appropriate time.

    (Also fix a race in a test case that happened to trigger while running
    tests for this patch).
* MDEV-4506: Parallel replication. (unknown, 2013-10-24, 1 file, -1/+24)

    Implement --slave-parallel-max-queue to limit memory usage of the SQL
    thread's read-ahead in the relay log.
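A sketch of the new option, shown as the corresponding global variable (the value is arbitrary; that the unit is bytes is an assumption here, not stated in the commit):

    SET GLOBAL slave_parallel_max_queue = 1048576;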