After b9d64989 the test for MDEV-16962 is no longer suitable
(and the bug is probably not reproducible).
When you only need the view structure, don't call handle_derived with
DT_CREATE and rely on its internal hackish check to skip DT_CREATE:
handle_derived is called from many different places, and this internal
check is indiscriminate.
Instead, simply don't ask handle_derived to do DT_CREATE
if you don't want it to do DT_CREATE.
Fix a race condition in the test case. The test case assumed that
State='Sending data' means that the thread is already in an
InnoDB lock wait. This is not the case: there is a gap between the
state changing to 'Sending data' and execution reaching the point
where it is waiting for the lock.
Use a more precise check instead, through I_S.INNODB_TRX.
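A minimal sketch of such a wait, in the usual mtr idiom (the exact condition
used in the actual test may differ):

  let $wait_condition=
    SELECT COUNT(*) = 1 FROM information_schema.innodb_trx
    WHERE trx_state = 'LOCK WAIT';
  --source include/wait_condition.inc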
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Problem:
========
180511 11:07:58 [ERROR] Slave I/O: Unexpected master's heartbeat data:
heartbeat is not compatible with local info;the event's data: log_file_name
mysql-bin.000009 log_pos 1054262041, Error_code: 1623
Analysis:
=========
In a replication setup, when the master server doesn't have any events to
send to the slave server, it sends a 'Heartbeat_log_event'. This event
carries the current binary log file name and offset. The offset value is
stored in 4 bytes of the event header. When the size of the binary log
exceeds UINT32_MAX, the log_pos value does not fit in 4 bytes; it overflows
and hence the slave stops with an error.
Fix:
===
Since we cannot extend the common_header of the Log_event class, a
Log_event::log_pos value greater than 4GB is transported in a Heartbeat
event's sub-header. Log_event::log_pos is set to zero in that case to
indicate that the 8-byte sub-header is present in the event.
In case of cross-version replication the following behaviour is expected:
OLD - server without the fix
NEW - server with the fix
OLD<->NEW : works bidirectionally as long as the binlog offset is
(normally) within 4GB.
When log_pos > UINT32_MAX:
OLD->NEW : The 'log_pos' is bound to overflow and the NEW slave may report
an invalid event/incompatible heartbeat event error.
NEW->OLD : Since the patched server sets log_pos=0 on overflow, the OLD
slave will report an invalid event error.
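For illustration only (the real master position is not given in the error
report): a position just past the 32-bit limit wraps around modulo 2^32 when
squeezed into 4 bytes, e.g. a true offset of 4294967296 + 1054262041 =
5349229337 would be reported as log_pos 1054262041.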
pars_retrieve_table_def
Fix a post-push failure of the innodb_fts_misc_1 test case.
pars_retrieve_table_def
InnoDB tries to fetch the deleted doc ids for a discarded
tablespace. In i_s_fts_deleted_generic_fill(), InnoDB needs
to check whether the tablespace is discarded before fetching
the deleted doc ids.
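A rough sketch of the scenario (the table and names here are only
illustrative):

  CREATE TABLE t1 (f1 INT, f2 TEXT, FULLTEXT(f2)) ENGINE=InnoDB;
  ALTER TABLE t1 DISCARD TABLESPACE;
  SET GLOBAL innodb_ft_aux_table = 'test/t1';
  SELECT * FROM information_schema.INNODB_FT_DELETED;
  # must not try to fetch doc ids from the discarded tablespace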
fil_ibd_load(): Remove a message that is basically saying that
everything works as expected. The other "Ignoring data file" message
about the presence of an extraneous file will be retained
(and expected by the test innodb.log_file_name).
The problem is that sharing a default expression among set instructions
leads to an attempt to access the result field of a function created in
another instruction's runtime MEM_ROOT, which has already been freed
(a bit different from the MySQL problem).
The fix is the same as in MySQL (but without the optimisation for
constants): turn
DECLARE a, b, c type DEFAULT expr;
into
DECLARE a type DEFAULT expr, b type DEFAULT a, c type DEFAULT a;
telling, never turns it off
Removed the explicit InnoDB monitor startup and used just the functions
that print the current lock information.
The problem was that strict password validation should be skipped on
applier nodes, similarly to what is done for slave nodes.
Replace unnecessary sleeps with real wait_conditions to make
sure the cluster sizes are correct.
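A sketch of such a wait for a three-node cluster (the expected size is only
an example):

  let $wait_condition=
    SELECT VARIABLE_VALUE = 3 FROM information_schema.global_status
    WHERE VARIABLE_NAME = 'wsrep_cluster_size';
  --source include/wait_condition.inc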
Relax the assert condition. A locked table that existed prior to
CREATE IF NOT EXISTS retains its MDL_SHARED_NO_READ_WRITE metadata lock.
Plugin variables in SET only locked the plugin till the end of the
statement. If a SET with a plugin variable was prepared, it was possible
to uninstall the plugin before EXECUTE. EXECUTE would then crash,
trying to resolve a now-invalid pointer to the disappeared variable.
Fix: keep plugins locked until the prepared statement is closed.
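A hypothetical repro sketch using the EXAMPLE plugin (the plugin and
variable names are just an illustration):

  INSTALL SONAME 'ha_example';
  PREPARE stmt FROM 'SET GLOBAL example_ulong_var = 500';
  UNINSTALL SONAME 'ha_example';
  EXECUTE stmt;              # used to crash; the plugin now stays locked
  DEALLOCATE PREPARE stmt;   # the plugin lock is released here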
InnoDB startup hangs if a DDL transaction needs to be
rolled back and a recovered transaction on the statistics
tables exists. In that case, InnoDB should roll back
the transaction which holds locks on innodb_table_stats
or innodb_index_stats during trx_rollback_or_clean_recovered().
InnoDB fails to fetch the index type when the InnoDB dictionary
doesn't match the .frm file. InnoDB should return 'corrupted' if it
can't find the index in ha_innobase::index_type().
table->move_fields wasn't undone in case of error.
1. move_fields is now unconditionally undone even when an error occurs
2. Cherry-pick an assertion in `ptr_in_record`, which is already in 10.5
The assertion is improved: storage engines like MyISAM always have to store
at least one field, so the assertion does not cover tables with no stored
columns.
There is a race condition between three threads, resulting in a
deadlock backoff in purge, which is unexpected.
More precisely, the following happens:
T1: NOCOPY ALTER TABLE begins, and eventually it holds an
MDL_SHARED_NO_WRITE lock;
T2: FLUSH TABLES begins and sets share->tdc->flushed = true;
T3: purge on a record with a virtual column begins. It is going to open a
table, so an MDL_SHARED_READ lock is acquired.
Since share->tdc->flushed is set, it waits for the TDC purge to end.
T1: is going to elevate its MDL lock to exclusive and therefore has to make
the other waiters back off.
T3: receives VICTIM status, reports a DEADLOCK, and sets OT_BACKOFF_AND_RETRY
in Open_table_context::m_action.
The fix is to allow opening the table in purge while a flush is in progress.
The same is already done in other maintenance facilities such as
REPAIR TABLE.
Another way would be to perform an actual backoff, but Open_table_context
does not allow distinguishing it from other failure types, which would still
be unexpected. That would require hacking into the Open_table_context
interface for no benefit compared to passing MYSQL_OPEN_IGNORE_FLUSH during
table open.
innodb_debug_sync was introduced in commit
b393e2cb0c079b30563dcc87a62002c9c778643c and reverted in
commit fc58c1721631fcc6c9414482b3b7e90cd8e7325d due to a memory leak
reported by Valgrind, see MDEV-21336.
The leak is now fixed by adding `rw_lock_free(&slot->debug_sync_lock)`
after the background thread's working loop is finished, and the patch is
reapplied, incorporating the C++98 fixes by Marko.
The missing DEBUG_SYNC for MDEV-18546 in row0vers.cc is also reapplied.
(trivial backport to 10.2)
Add a testcase
(trivial backport to 10.2)
The optimizer removes redundant GROUP BY operations. If a GROUP BY element
is a subselect, it is "eliminated".
However, one must not eliminate the item if it is used both in the select
list and in the GROUP BY, like so:
select (select ... ) as SUBQ from ... group by SUBQ
Do not eliminate such items.
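A hypothetical complete query of that shape (table and column names are
made up):

  SELECT (SELECT MAX(t2.b) FROM t2 WHERE t2.a = t1.a) AS SUBQ
  FROM t1
  GROUP BY SUBQ;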
The problem was caused by the following scenario:
The subquery's table has two indexes, KEY a(a), KEY a_b(a,b)
- The LATERAL DERIVED optimization decides to use index a.
= The subquery uses ref access over key a.
- test_if_skip_sort_order() sees that KEY a_b satisfies the
subquery's GROUP BY clause, and attempts to switch to it.
= It fails to do so, because the KEYUSE objects for index a_b
are switched off.
Fixed by not allowing a change of the ref access key if it uses KEYUSE
objects injected by the LATERAL DERIVED optimization.
if it starts with an _ and is not backticked
Remove code duplication in Lex_input_stream::scan_ident_middle();
make sure identifiers always use the same code path whether
they start with an underscore or not.
Don't try to lowercase a db name if it's zero-length
(empty_lex_str is not writable; even db.str[0]=0 will fail).
from server
Remove plugin functions via item_create_remove() at deinit time.
Item_func_history (is_history()) is a bool function that checks whether the
row is a history row by checking row_end->is_max(). The argument to
this function must be the row_end system field.
Added the above function in conjunction with the SYSTEM_TIME_BEFORE
versioning condition.
This patch changes statement rollback for streaming replication.
Previously, a statement rollback was turned into full transaction
rollback in the case where the transaction had already replicated a
fragment. This was introduced in the initial implementation of
streaming replication due to the fact that we do not have a mechanism
to perform a statement rollback on the applying side.
This policy is however overly pessimistic, causing full rollbacks even
in cases where a local statement rollback would not require a
statement rollback on the applying side. This happens to be the case when
the statement itself has not replicated any fragments.
So the patch changes the condition that determines if a statement
rollback should be turned into a full rollback accordingly.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
wsrep::client_state::start_transaction
Removed redundant code for BF-aborting a transaction in `thr_lock.cc`.
TOI operations will ignore the provided lock_wait_timeout and use
`LONG_TIMEOUT` until the operation is finished.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Problem:
=======
InnoDB ALTER fails before applying the instant operation, so rollback
assigns the wrong column to the secondary index field. This leads
to an assertion failure in the subsequent ALTER.
Fix:
===
InnoDB shouldn't roll back the instant operation when the ALTER fails
before the instant operation has been applied.
This happens during repair, when a temporary table is opened
with HA_OPEN_COPY, which resets 'share->born_transactional'; the
encryption code did not like that.
Fixed by resetting just share->now_transactional.
foreign key constraint fails
After dfb41fddf6, tables that failed to drop are excluded from the
binlogged DROP TABLE statement. It means that the slave should not
expect any errors when executing DROP TABLE, and the binlog should
report that no error has happened, even if one did.
Do not write the error code into the binlogged DROP TABLE,
and remove all the code that was needed to compute it.
The assert is no longer reproducible in the latest 10.5-10.6.
The patch only adds the test case from MDEV-24382.
row_merge_is_index_usable(): Allow access to any SEQUENCE, even if it was
created after the read view. SQL sequences are no-rollback tables with no
history at all.
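A minimal sketch of the scenario (two connections; names are illustrative):

  # connection 1:
  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  # connection 2:
  CREATE SEQUENCE s ENGINE=InnoDB;
  # connection 1:
  SELECT NEXTVAL(s);   # must work even though s is newer than the read view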
This is a backport of
commit fd9ca2a742abe2e91b2b77e70915dec7bd3cd7e1 (MDEV-23295) and
commit 9a156e1a23046ba3e37bdb1e4e1ad887d3f5829b (MDEV-23345) to 10.3.
An instant ADD/DROP/reorder column could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
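A hedged illustration of the problematic sequence (names and values are only
examples):

  SET GLOBAL innodb_default_row_format = 'compact';
  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  SET GLOBAL innodb_default_row_format = 'dynamic';
  ALTER TABLE t1 ADD COLUMN b INT, ALGORITHM=INSTANT;
  # the ALTER must keep t1's original ROW_FORMAT (COMPACT),
  # not pick up the new default DYNAMIC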
replication
Backport of the upstream fix:
commit 1800b015a1d487330f7b15f2020b887be348a66b
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date: Fri Sep 8 20:29:22 2017 +0530
Bug#26027024 SLAVE_COMPRESSED_PROTOCOL DOESN'T WORK WITH
SEMI-SYNC REPLICATION IN MYSQL-5.7
Analysis: In mysql-5.6, the dump thread (the thread that is created
on the master after the slave requests a binlog dump) is also used
to receive acknowledgements from the slave and act on them accordingly.
For performance reasons, a special thread called the Ack Receiver thread
was added to the mysql-5.7 semi-synchronous replication plugin.
This thread has no special handling to receive acknowledgements
when the slave has enabled compression in the protocol. Hence the master
is unable to handle any slave that has Slave_compressed_protocol enabled.
Fix: Enable the compress flag on the communication channel if the slave
has Slave_compressed_protocol ON.
Add a testcase
The optimizer removes redundant GROUP BY operations. If a GROUP BY element
is a subselect, it is "eliminated".
However, one must not eliminate the item if it is used both in the select
list and in the GROUP BY, like so:
select (select ... ) as SUBQ from ... group by SUBQ
Do not eliminate such items.