| Commit message | Author | Age | Files | Lines |
| |
Background: The encryption key_id in use is stored in the encryption metadata,
i.e. the crypt_data that is written to page 0 of the table's tablespace.
crypt_data is created only if encryption (or no encryption) is explicitly
requested, i.e. the ENCRYPTED=[YES|NO] table option is used; see
fil_create_new_single_table_tablespace in fil0fil.cc.
Later, if encryption is enabled, all tables that use the default encryption
mode (i.e. no encryption table option is set) are encrypted with the default
encryption key_id, which is 1. See fil_crypt_start_encrypting_space in
fil0crypt.cc.
ha_innobase::check_table_options():
If the default encryption mode is used and encryption is disabled, a
non-default encryption_key_id may not be used, as it would not be stored anywhere.
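For illustration, a minimal sketch of the kind of check this implies, with
hypothetical function and parameter names rather than the actual
ha_innobase::check_table_options() code:

    // Hypothetical sketch: reject a non-default ENCRYPTION_KEY_ID when the
    // table relies on the default encryption mode and encryption is disabled,
    // because no crypt_data (and thus no key_id) would ever be written to
    // page 0 of the tablespace.
    enum class encryption_opt { DEFAULT, YES, NO };  // ENCRYPTED table option

    static const unsigned DEFAULT_KEY_ID= 1;

    bool encryption_options_valid(encryption_opt opt,
                                  unsigned key_id,
                                  bool encrypt_tables_enabled)
    {
      if (opt == encryption_opt::DEFAULT &&
          !encrypt_tables_enabled &&
          key_id != DEFAULT_KEY_ID)
        return false;   // the key_id could not be stored anywhere
      return true;
    }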
| |
Add wait on second node.
| |
if thread specific memory is requested and current_thd is NULL.
Leave the DBUG_ASSERT() in place, to keep the check in DBUG builds.
| |
The bug shows up as a slave SQL thread hanging in
rpl_parallel_thread_pool::get_thread() while there are no slave worker
threads left to wake it.
The reason for the hang is that, during parallel slave worker pool
activation, the starting SQL thread could read the worker pool size
concurrently with pool deactivation, without any protection against
the race.
Fixed by making the SQL thread, at pool activation, first grab the same
lock that a potential deactivator takes before it accesses the pool size.
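A minimal sketch of the locking pattern described here, using generic C++
types rather than the real rpl_parallel_thread_pool code:

    #include <mutex>
    #include <vector>

    struct worker_thread;                      // opaque worker handle

    struct thread_pool
    {
      std::mutex lock;                         // also taken by the deactivator
      std::vector<worker_thread*> workers;
      bool active= false;
    };

    // The activating SQL thread grabs the same lock as a potential
    // deactivator before it reads the pool size, so the read can no
    // longer race with deactivation.
    size_t pool_size_for_activation(thread_pool &pool)
    {
      std::lock_guard<std::mutex> guard(pool.lock);
      return pool.active ? pool.workers.size() : 0;
    }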
| |
derived table / view by equality
Currently the rows of a materialized derived table are always put into a
temporary table before the join operation. If BNLH is used to join this
table with the result of a partial join, then both operands of the
join are effectively held in main memory. In most cases this is not
efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation, but this kind of data flow is not supported yet.
Fixed by disallowing the hash join algorithm for joining a materialized
derived table when it is joined by an equality predicate of the form
f=e where f is a field of the derived table.
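As a rough sketch of that rule, with simplified structs rather than the
actual optimizer data structures:

    // Disallow BNLH (block nested loop hash) join when the inner table is a
    // materialized derived table joined through an equality f=e on one of
    // its fields, since both join operands would then sit fully in memory.
    struct join_tab
    {
      bool materialized_derived;   // table is a materialized derived table/view
    };

    struct equality_pred
    {
      const join_tab *field_owner; // table that owns the field f in "f=e"
    };

    bool hash_join_allowed(const join_tab &tab, const equality_pred &pred)
    {
      if (tab.materialized_derived && pred.field_owner == &tab)
        return false;              // fall back to another join algorithm
      return true;
    }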
|
|\
| |
| | |
MDEV-17313 Data race in ib_counter_t
| | |
ib_counter_t: make all reads/writes of m_counter relaxed atomic
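For illustration, the relaxed-atomic pattern this refers to, written with
plain std::atomic rather than the actual ib_counter_t template:

    #include <atomic>
    #include <cstdint>

    // A counter whose increments and reads are relaxed atomics: no ordering
    // guarantees, but no data race and no torn values either.
    struct relaxed_counter
    {
      std::atomic<uint64_t> m_counter{0};

      void add(uint64_t n)
      {
        m_counter.fetch_add(n, std::memory_order_relaxed);
      }

      uint64_t read() const
      {
        return m_counter.load(std::memory_order_relaxed);
      }
    };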
| | |
pthread_detach_this_thread() was intended to be defined to something
meaningful only on some ancient unixes that do not have
pthread_attr_setdetachstate(). On normal unixes, threads are created
detached in the first place.
This was broken in 0f01bf267680244ec488adaf65a42838756ed48e, after which
we started calling pthread_detach() for already detached threads. The
intention was to detach the aria checkpoint thread.
However, in 87007dc2f71634cc460271eb277ad851ec69c04b the aria service
threads were made joinable with appropriate handling, which makes the
breaking revision unnecessary.
Revert the remnants of 0f01bf267680244ec488adaf65a42838756ed48e, so that
pthread_detach_this_thread() is again meaningful only on some ancient
unixes.
| | |
SLES11 currently can't build the latest Galera library version.
| | |
Add a wait until the cluster has the correct number of nodes.
|
|\ \
| | |
| | | |
MDEV-16656: DROP DATABASE crashes the Galera Cluster
| | | |
When converting table identifiers to a new format,
some tables can be renamed twice, which subsequently
leads to the appearance of "false" auxiliary tables
belonging to another main (parent) table (which does
not actually have auxiliary tables).
This happens because the table number is repeatedly added
to the aux_tables_to_rename vector inside the function
fts_check_and_drop_orphaned_tables.
To correct this error, we must check whether the table number
already occurs in the aux_tables_to_rename vector before
adding a new element.
https://jira.mariadb.org/browse/MDEV-16656
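A simplified sketch of that check, using plain std:: containers rather than
the actual InnoDB FTS code:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Append a table id to aux_tables_to_rename only if it is not already
    // present, so the same table cannot be scheduled for renaming twice.
    void remember_aux_table(std::vector<uint64_t> &aux_tables_to_rename,
                            uint64_t table_id)
    {
      if (std::find(aux_tables_to_rename.begin(), aux_tables_to_rename.end(),
                    table_id) == aux_tables_to_rename.end())
        aux_tables_to_rename.push_back(table_id);
    }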
|
| |/
|/| |
|
|\ \ |
|
| | |
| | |
| | |
| | | |
to guarantee that it's destroyed when plugin deinit is called, not after
|
| | | |
|
| |\ \ |
|
| | | | |
|
| | | | |
The unary minus operation on the smallest possible signed long long value
(LONGLONG_MIN) is undefined in C++. Because of this, func_time.test
failed on ppc64 buildbot machines.
Fixing the code to avoid using undefined operations.
This fix is similar to "MDEV-7973 bigint fail with gcc 5.0".
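The usual workaround looks like this; an illustrative example of the
technique, not necessarily the exact change made here:

    #include <climits>

    // -v is undefined behaviour when v == LLONG_MIN (LONGLONG_MIN in the
    // server code), because the result does not fit in long long.
    // Doing the negation in unsigned arithmetic is well defined for every
    // value, including LLONG_MIN.
    unsigned long long unsigned_abs(long long v)
    {
      if (v >= 0)
        return static_cast<unsigned long long>(v);
      return 0ULL - static_cast<unsigned long long>(v);
    }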
|
| |\ \ \
| | |/ / |
|
| | |\ \ |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
alloc_root(): unpoison only the requested number of bytes instead of the
possibly bigger aligned-size buffer.
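Conceptually, with a hypothetical arena allocator used only to illustrate
the point (the real code uses MariaDB's own memory-poisoning macros):

    #include <cstddef>

    #if defined(__SANITIZE_ADDRESS__)
    # include <sanitizer/asan_interface.h>
    # define UNPOISON(p, n) __asan_unpoison_memory_region((p), (n))
    #else
    # define UNPOISON(p, n) ((void) 0)
    #endif

    // A toy arena that hands out 8-byte aligned chunks from a big buffer.
    // Only the bytes the caller asked for are unpoisoned; the padding up to
    // the aligned size stays poisoned, so a sanitizer still catches reads
    // past the requested length.
    void *arena_alloc(unsigned char *&arena_ptr, size_t requested)
    {
      const size_t aligned= (requested + 7) & ~size_t(7);
      unsigned char *p= arena_ptr;
      arena_ptr+= aligned;
      UNPOISON(p, requested);        // not UNPOISON(p, aligned)
      return p;
    }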
|
| | | | | |
For the original test in 10.0 it was not really important whether
find_user_wild() or find_user_exact() is used in sp_grant_privileges();
sp-security.test passed with either of them.
Fixing the test so that it reliably fails with find_user_wild()
and passes with find_user_exact().
|
| |\ \ \ \
| | |/ / / |
|
| | | | | |
|
| | | | | |
|
| | |/ / |
|
| | | |
| | | |
| | | |
| | | | |
multi_delete sets TABLE::no_cache=1 and should set it to 0 when DELETE is done.
|
| | | |
| | | |
| | | |
| | | | |
test case
|
| | | | |
FOR MYSQL.USER TABLE
A test case and a follow-up fix
|
| | | | |
table->pos_in_locked_tables->table == table'
failed in mark_used_tables_as_free_for_reuse
The assertion failure can be triggered by DDL executed under LOCK TABLES
that holds a lock on the DDL target table multiple times (either explicitly
or implicitly).
When closing all table instances for a given table (e.g. when preparing for
table removal during CREATE OR REPLACE), only one instance was removed
from the m_locked_tables list.
Later, in mysql_create_table()/add_back_last_deleted_lock(), we attempt to
re-insert one of the instances that was never actually removed. This
corrupts m_locked_tables, specifically losing all following elements.
UNLOCK TABLES then fails to reset some table instances properly (specifically
pos_in_locked_tables), since they are not present in m_locked_tables.
Eventually such a table instance is released to the table cache and then
re-used by a subsequent statement, which triggers this assertion failure.
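A simplified sketch of the list handling the fix implies, with generic
containers and names rather than the real Locked_tables_list code:

    #include <list>
    #include <string>

    struct table_instance
    {
      std::string table_name;        // table this open instance belongs to
    };

    // When all instances of a table are closed, every matching element has
    // to be removed from the locked-tables list, not just the first one;
    // otherwise a later re-insert corrupts the list.
    void remove_all_instances(std::list<table_instance*> &m_locked_tables,
                              const std::string &table_name)
    {
      for (auto it= m_locked_tables.begin(); it != m_locked_tables.end(); )
      {
        if ((*it)->table_name == table_name)
          it= m_locked_tables.erase(it);
        else
          ++it;
      }
    }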
| | | | |
Reset the query cache after every test case and add a wait after
LOAD DATA INFILE.
| | | | |
Add wsrep_sync_wait as we want INSERT to fail.
|
| |_|/
|/| |
| | |
| | |
| | |
| | | |
'Unknown database'
Wait on the second node until the databases and their tables have been created.
| | | |
Test changes only.
| | | |
Failure is due to missing .rdiff files for debug builds, as the tests have
the @have_debug combination.
| | | |
with "Transport endpoint is not connected" or with a failure to start a node
Add correct suppressions and wait conditions for expected database
contents.
| | | |
buildbot with "Last Applied Action message in non-primary configuration from member 0"
Add suppression.
| | | |
and debug builds. Modified version for 10.1 based on the following commit:
commit 8e68876477eaec7944baa0b63ef26e551693c4f8
Author: Oleksandr Byelkin <sanja@mariadb.com>
Date: Thu Sep 13 15:06:44 2018 +0200
Fix of the test which has debug version
| | | |
Move MW-328[A,B,C] to big tests, as there seems to be a lot of variation
in their execution times and sometimes that could lead to a timeout.
| | | |
The problem occurs in 10.2 and earlier releases of MariaDB Server because the
Partition Engine was not pushing engine conditions down to the underlying
storage engine of each partition. With the data provided by the customer,
this caused Spider to return the first 5 rows in the table; 2 of the 5 rows
did not satisfy the WHERE clause, so they were removed from the result set by
the server.
To fix the problem, I have back-ported support for engine condition pushdown
in the Partition Engine from MariaDB Server 10.3.
Author:
Jacob Mathew.
Reviewer:
Kentoku Shiba.
Cherry-Picked:
Commit eb2ca3d on branch bb-10.2-MDEV-16912
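Schematically, the pushdown works along these lines; hypothetical handler
interfaces, not the back-ported ha_partition code itself:

    #include <vector>

    struct condition;                          // opaque engine condition

    struct handler
    {
      // Returns the part of the condition the engine could not handle
      // (simplified here).
      virtual const condition *cond_push(const condition *cond)= 0;
      virtual ~handler()= default;
    };

    struct partition_handler : handler
    {
      std::vector<handler*> partitions;        // one handler per partition

      const condition *cond_push(const condition *cond) override
      {
        // Forward the condition to every underlying partition handler so
        // that an engine like Spider can filter rows remotely instead of
        // returning them unfiltered.
        for (handler *part : partitions)
          part->cond_push(cond);
        return cond;                           // let the server re-check too
      }
    };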
|