| Commit message | Author | Age | Files | Lines |
Add the history row outside of the compare_record() check. For TRX_ID
versioning we have to fail can_compare_record to force the InnoDB update
that adds the history row; ha_innobase::update_row() then has an
additional "row changed" check where we force the history row anyway.
self-reference in system versioned table
First part of the fix (row0mysql.cc) addresses external columns when adding
a history row on a referential action. The full data must be retrieved
before the row is inserted.
Second part of the fix (the rest) avoids a duplicate primary key error
between the history row generated by the referential action and the history
row generated by the SQL command. Both the command and the referential
action can happen on the same table, since a foreign key can be a
self-reference (parent and child tables are the same). Moreover, the
self-reference can refer to multiple rows when the key is non-unique. In
that case the history is generated by the referential action that occurred
on the first row, but it processes all rows matched by the key. The second
round happens when the next row is processed by the command but its history
row already exists. In that case we check the TRX_ID of the existing history
row and, if it is the same, we assume the above situation and neither add
one more history row nor fail the command.
row_build_index_entry_low
First part (row0mysql.cc) fixes the ins_node_set_new_row() usage workflow,
as it is designed to operate on an empty row (see
row_get_prebuilt_insert_row() for example).
Second part (row0ins.cc) fixes a duplicate key error in FTS_DOC_ID_INDEX,
since history rows must not generate entries in that index. We detect
FTS_DOC_ID_INDEX by a number of attributes and skip it if the row is
historical.
Misc fixes:
row_build_index_entry_low() does not accept a non-NULL tuple for an FTS
index (the subject assertion fails); the assertion (index->type !=
DICT_FTS) makes the code easier to understand.
Now that historical_row is copied in row_update_vers_insert() there is
no need to copy the row twice: ROW_COPY_POINTERS is used to build
historical_row initially.
Added dbug_print_rec() debug functions.
partition_info::check_partition_info instead of ER_VERS_WRONG_PARTS
Assign create_info->alias for ALTER TABLE, since it is NULL and is later
accessed when printing the error message.
ip_len has a different meaning on AIX, so we use a
different variable name here to avoid a conflict.
Backport from MDEV-20178 (2f5d372444cff53914cfcd118e92a91f575cec35).
61 is too large for 32-bit type 'int' (on optimized builds)
Use a ulonglong instead of a uint when left-shifting to calculate the
table map for all the tables in a query.
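A minimal sketch of the overflow this avoids (illustrative only; the helper
name is hypothetical, not the server's actual table-map code): shifting a
32-bit int by a table number such as 61 is undefined behaviour, while a
64-bit unsigned shift stays well defined for up to 64 tables.

    #include <cassert>
    #include <cstdint>

    // Hypothetical helper: build the bitmap entry for one table of a query.
    // With a 32-bit int, `1 << table_no` overflows as soon as table_no
    // reaches 31; a 64-bit unsigned shift remains defined up to 63.
    static uint64_t table_map_bit(unsigned table_no)
    {
      assert(table_no < 64);          // a 64-bit map covers at most 64 tables
      return 1ULL << table_no;        // was effectively `1 << table_no` before
    }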
This reverts commit bc2dc83cb56851144a8c15e73a83c7817dc705a2.
The feedback plugin now fakes a SHOW command to force
create_schema_table() to instantiate the table at once,
not lazily.
The test from plugins.feedback_plugin_send applies.
Caused by e64084d5a3a7
Assertion `thd->mdl_context.is_lock_owner(MDL_key::TABLE, table->db.str, table->table_name.str, MDL_SHARED)' failed in mysql_rm_table_no_locks
Report the error early in the case of DROP SEQUENCE <non-sequence>.
Do not use the error variable for any purpose other than the error.
Closes PR #1649
Galera replication does not support XA transactions yet. Reject any
attempt to `XA START` a transaction if Galera is enabled.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
failed on shutdown
When closing the remaining threads in `wsrep_close_client_connections`, also
check `thd_is_connection_alive` for the thd before closing the connection.
The assertion fires when a thread is already shutting down but has not yet
been removed from the threads list.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Write sets that fail certification are handled differently from write sets
that pass certification. When certification fails, applying the write set
can be skipped and the applier only needs to take care of wsrep XID
checkpointing. With the current implementation, this can rush ahead of the
wsrep XID checkpointing of successful write sets.
The fix in this PR registers the wsrep XID checkpointing of certification
failure cases in group commit, which guarantees that the XID checkpointing
order is synchronized with really committing transactions.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
If log_slave_updates==OFF, wsrep applier threads used to be configured
with the option thd->variables.option_bits&= ~(OPTION_BIN_LOG);
(i.e. like sql_log_bin=OFF), regardless of the log-bin configuration.
With this, under a configuration of --log-bin and --log-slave-updates=OFF,
local threads used binlogging but applier threads did not. Further,
local threads went through binlog group commit, while applier threads did
direct commits. This resulted in a situation where applier threads entered
wsrep XID checkpointing earlier and could sync their wsrep XID out of order.
A later local thread commit would then see that a higher seqno was already
checkpointed and fire an assert because of this.
As a fix, applier threads are now forced to enable binlogging regardless of
the log-slave-updates configuration, as illustrated in the sketch below.
This PR comes with a new mtr test, galera.MDEV-24327, which creates a
scenario where an applier transaction is applied and committed while an
earlier local transaction is parked before the commit order monitor enter.
A buggy MariaDB version would fail the assertion because of the wsrep XID
checkpoint order violation.
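A rough sketch of the bit handling quoted above (the constant value and the
helper names are illustrative, not the server's real OPTION_BIN_LOG or
configuration code): the old behaviour cleared the binlog bit for applier
threads, keeping them out of binlog group commit; the fix leaves the bit set
so appliers commit through the same ordered path as local threads.

    #include <cstdint>

    // Illustrative stand-in for the real option bit; the actual value differs.
    static constexpr uint64_t OPTION_BIN_LOG_SKETCH= 1ULL << 9;

    // Old behaviour: applier threads dropped the binlog bit regardless of
    // --log-bin, so they bypassed binlog group commit and could checkpoint
    // their wsrep XID ahead of local transactions.
    inline void applier_disable_binlog(uint64_t &option_bits)
    { option_bits&= ~OPTION_BIN_LOG_SKETCH; }

    // Fixed behaviour: keep the bit set for appliers, independent of
    // --log-slave-updates, so commit ordering is shared with local threads.
    inline void applier_keep_binlog(uint64_t &option_bits)
    { option_bits|= OPTION_BIN_LOG_SKETCH; }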
The main goal of this patch is to prevent MariaDB's native_password_plugin
from "parsing" the hex (or non-hex) authentication_string. Due to how the
current code is written, we convert any string of the appropriate length
(within native_password_get_salt) to a "binary" representation that can
potentially match a real password.
More specifically,
"*THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE" produces the same results as
"*d13c3c78dafa52d9bce09bdd1adcb7befced1ebe".
The length is the main indicator of an invalid password. We use the
same trick with "invalid" to change its internal representation.
The "parsing" mentioned is done by get_salt_from_password down to char_val(),
and because of where it sits, it is effectively a static plugin API that
cannot change.
In supporting this, we also support SHOW CREATE USER output from MySQL that
may contain the hashed password string *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE.
Obviously this isn't a hash, because it contains non-hex characters.
After this patch we do, however, recognise the pattern
[any char, notionally *]{40 chars, not all hex}
as the pattern of an invalid password (see the sketch below). This was
determined to be the general pattern that MySQL uses.
Reviewers: Sergei G, Vicentiu
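A self-contained sketch of the pattern check described above (the function
name is hypothetical and this is not the plugin's actual code): a usable
mysql_native_password hash is a marker character followed by 40 hex digits,
so a string of the right length that contains non-hex characters is flagged
as an invalid password instead of being converted into a fake salt.

    #include <cctype>
    #include <cstring>

    // Hypothetical helper: returns true for strings shaped like
    // "*THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE" -- correct length,
    // but not 40 hex digits after the leading marker character.
    static bool is_invalid_native_hash(const char *s)
    {
      if (strlen(s) != 1 + 40)
        return false;                            // wrong length: handled elsewhere
      for (const char *p= s + 1; *p; p++)
        if (!isxdigit(static_cast<unsigned char>(*p)))
          return true;                           // right length, non-hex content
      return false;                              // looks like a genuine hash
    }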
following the same masquerading logic
Add the new file
Move the test case into a separate file: the embedded server
doesn't have the optimizer trace.
Basic variant of the fix: do not consider conditions of the form
unique_key NOT IN (c1,c2...)
to be sargable. If there are only a few constants, the condition
is not selective. If there are a lot of constants, the overhead of
processing such a huge range list is not worth it.
- There is no reason to collect EITS statistics
- The test is sporadically failing on some platforms. I believe the
issue is in InnoDB. Let's rule out EITS code as a possible source
of the issue.
THD::log_events_and_free_tmp_shares
Post-push fix to address a test failure.
Problem:
=======
rpl.rpl_drop_temp_table_invaid_lex, added as part of the bug fix, has
occasional failures in buildbot.
MTR's internal check of the test case
'rpl.rpl_drop_temp_table_invaid_lex' failed.
Variable_name Value
-Slave_open_temp_tables 0
+Slave_open_temp_tables 1
Analysis:
=========
The reason for the failure is that the DROP TEMPORARY TABLE command which
gets generated on connection disconnect might not have reached the slave,
and hence the temporary table remains on the slave.
Fix:
===
On the master, upon disconnect, wait until the connection is completely
gone. Then ensure that the DROP TEMPORARY TABLE statement is available in
the binary log. Sync the slave with the master and check that the temporary
table count is zero on the slave. Fixed a typo in the test name.
Atomic_relaxed<T>: add fetch_or() and fetch_and()
innodb_init(): rely on zero-initialization of a global variable
monitor_set_tbl: make it an array of Atomic_relaxed<ulint> and use proper
operations for setting, unsetting and reading a bit (see the sketch below)
Reviewed by: Marko Mäkelä
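A rough equivalent of the bit operations described above, written with
std::atomic since Atomic_relaxed is MariaDB's own wrapper; the array size,
word width and helper names are illustrative, not the actual monitor code.

    #include <atomic>
    #include <cstdint>

    // Illustrative stand-in for monitor_set_tbl: one relaxed atomic word
    // per 64 monitor ids.  All accesses use memory_order_relaxed, matching
    // the Atomic_relaxed semantics.
    static std::atomic<uint64_t> monitor_set_sketch[4];

    inline void monitor_set_bit(unsigned i)
    {
      monitor_set_sketch[i / 64].fetch_or(1ULL << (i % 64),
                                          std::memory_order_relaxed);
    }

    inline void monitor_unset_bit(unsigned i)
    {
      monitor_set_sketch[i / 64].fetch_and(~(1ULL << (i % 64)),
                                           std::memory_order_relaxed);
    }

    inline bool monitor_bit_is_set(unsigned i)
    {
      return monitor_set_sketch[i / 64].load(std::memory_order_relaxed) &
             (1ULL << (i % 64));
    }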
os_n_file_reads: make it an Atomic_counter and correct the semantics of an
imprecise counter.
Reviewed by: Marko Mäkelä
The Galera tests were massively failing with debug assertions.
encountered
Post-push fix for Windows compilation errors.
Change thd->mdl_context.release_transactional_locks() to
thd->mdl_release_transactional_locks()
row_undo_ins_parse_undo_rec(): Do not try to read non-existing
virtual column information for the metadata record.
encountered
The new option --log-innodb-page-corruption is introduced.
When this option is set, backup is not interrupted if a corrupted InnoDB
page is detected. Instead, it logs all found corrupted pages in the
innodb_corrupted_pages file in the backup directory and finishes with an
error.
For incremental backup, corrupted pages are also copied to the .delta file,
because we can't do the LSN check for such pages during backup;
innodb_corrupted_pages will also be created in the incremental backup
directory.
During --prepare, the corrupted pages list is read from the file just after
the redo log is applied, and each page from the list is checked whether it
is allocated in its tablespace or not. If it is not allocated, it is zeroed
out, flushed to the tablespace and removed from the list. If all pages are
removed from the list, --prepare finishes successfully and the
innodb_corrupted_pages file is removed from the backup directory. Otherwise
--prepare finishes with an error message and innodb_corrupted_pages contains
the list of pages which were detected as corrupted during backup and are
allocated in their tablespaces, which means the backup directory contains
corrupted InnoDB pages and the backup cannot be considered consistent.
For incremental --prepare, corrupted pages from the .delta files are applied
to the base backup, innodb_corrupted_pages is read from both the base and
incremental directories, and the same processing is done on the corrupted
pages list as for a full --prepare. The innodb_corrupted_pages file is
modified or removed only in the base directory.
If DDL happens during backup, it is also processed at the end of the backup
to have correct tablespace names in innodb_corrupted_pages.
The reason for the failure is that
thd->mdl_context.release_transactional_locks()
was called after commit & rollback even in cases where the current
transaction is still active.
For 10.2, 10.3 and 10.4 the fix is simple:
- Replace all calls to thd->mdl_context.release_transactional_locks() with
  thd->release_transactional_locks(). The THD function will only call
  the mdl_context function if there are no active transactional locks
  (see the sketch after this list).
In 10.6 we will do a better fix, where we will change the return value of
some trans_xxx() functions to indicate whether the transaction was closed
or not. This will avoid the need for the indirect call.
Other things:
- trans_xa_commit() and trans_xa_rollback() will automatically
  call release_transactional_locks() if the transaction is closed.
- We can't do that for the other functions, as the callers of many of these
  do additional work (like close_thread_tables) before calling
  release_transactional_locks().
- Added missing abort_result_set() and missing DBUG_RETURN in
  select_create::send_eof().
- Fixed wrong indentation in injector::transaction::commit().
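A minimal sketch of the indirection, with mock types standing in for THD and
MDL_context (not the actual server code): the THD-level wrapper only forwards
to the MDL context when no transaction is still active, so it is safe to call
on paths that may have left a transaction open.

    // Mock stand-ins for illustration only; the real classes live in
    // sql_class.h and mdl.h and carry far more state.
    struct Mock_mdl_context
    {
      void release_transactional_locks() { /* drop all transactional MDL */ }
    };

    struct Mock_thd
    {
      bool transaction_is_active= false;  // simplified stand-in for server state
      Mock_mdl_context mdl_context;

      void release_transactional_locks()
      {
        if (!transaction_is_active)       // keep locks while a trx is still open
          mdl_context.release_transactional_locks();
      }
    };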
Reviewed by: serg@mariadb.com
This is a fixup patch for MDEV-23991 (afc9d00c66db946c8240fe1fa6b345a3a8b6fec1).
We really should read result.n_leaf_pages, which was set previously.
The analysis and fix were provided by Jukka Santala. Thanks!
Reviewed by: Marko Mäkelä
The nonnull attribute is not applicable to parameters that are
passed by reference, at least not in the Intel compiler.
Let us remove the reference indirection, which was only there
so that the pointer could be assigned to NULL, and let the
callers perform that task.
row_log_allocate(): Fix a bug in out-of-memory error handling
that would leave a pointer to freed memory.
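An illustrative sketch of the reference-indirection change described above
(names and types simplified; this is not the actual row0log.cc code): the
parameter that used to be a pointer reference, so the callee could reset it
to NULL, becomes a plain pointer to which the GCC/clang nonnull attribute can
apply, and the caller resets its own copy instead.

    struct row_log_t { int dummy; };      // simplified placeholder type

    // After the change the parameter is a plain pointer, so nonnull applies.
    __attribute__((nonnull))
    static void row_log_free_sketch(row_log_t *log)  // was: row_log_t *&log
    {
      delete log;                         // callee no longer nulls the pointer
    }

    static void caller(row_log_t *&log)
    {
      row_log_free_sketch(log);
      log= nullptr;                       // the caller performs that task now
    }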
In galera_3nodes.galera_safe_to_bootstrap, a node restart can happen too
soon, while the earlier SST joiner process is still active in the node.
A similar issue may affect other mtr tests as well.
This is the second variant of the fix for this issue. Here we only change
the rsync SST script to wait a little if a lingering SST rsync process is
observed to still be executing.
We assume that the previous mysqld and SST processes have already been
signaled to abort during the earlier startup attempt.
If other SST methods (than rsync) suffer from similar overlapping SST
execution, they should be sorted out separately within each SST method's
handler script.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
Add test case
A bogus error message was issued when a condition was pushed into a
materialized derived table or view specified as a union of selects with
aggregation when the corresponding columns of the selects had different
names. This happened because the expression pushed into the HAVING clauses
of the selects was adjusted for the names of the first select of the union.
The easiest solution was to rename the columns of the other selects to be
name-compatible with the columns of the first select.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
sequence.
Explicitly set the encoding to UTF-8 when writing to the file, and
replace wide characters from MTR_RES_FAILED when writing to the
XML file. Wide characters are not allowed in XML.
FindBlockX::operator(): Return false if an x-latched block is found.
Previously, we were incorrectly returning false if the block was in
the log but only if it was not x-latched.
It is unknown if this mistake had any visible impact. Often,
we would register both MTR_MEMO_BUF_FIX and MTR_MEMO_PAGE_X_FIX
for the same block.
in TABLE_LIST::is_recursive_with_tables
After the patch for MDEV-23619 the code of st_select_lex::cleanup started
using the list st_select_lex::leaf_tables. This list is built for any
query with a FROM clause in the function setup_tables(). If such a query is
used in a stored procedure, it must be ensured that the list is empty
before each new call of the procedure. Otherwise, if the first call of
the procedure is successful while the second call reports an error before
setup_tables() is invoked, the list st_select_lex::leaf_tables would
point to a piece of memory that has already been freed.
Add wait_condition.
Add primary key and wait condition.