Commit message
COLUMN
get_col_list_to_be_dropped() incorrectly returned an uninteresting instantly
dropped column that was missing from the new dict_index_t.
get_col_list_to_be_dropped(): rename to collect_columns_from_dropped_indexes
and stop returning dropped columns.
Pruning fix for SYSTEM_TIME INTERVAL partitioning.
Allocating one more element in range_int_array for the CURRENT partition
is required for RANGE pruning to work correctly
(get_partition_id_range_for_endpoint()).
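For context, a minimal sketch of the kind of table this pruning concerns
(the table name, column and interval are invented for illustration, not
taken from the commit):

  -- A system-versioned table partitioned BY SYSTEM_TIME with an INTERVAL;
  -- history rows are pruned by RANGE over row_end, and the CURRENT
  -- partition holds the current rows.
  CREATE TABLE t_hist (x INT) WITH SYSTEM VERSIONING
  PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK (
    PARTITION p0 HISTORY,
    PARTITION pcur CURRENT
  );
  -- A point-in-time query can then prune the history partitions:
  SELECT * FROM t_hist FOR SYSTEM_TIME AS OF TIMESTAMP'2019-01-01 00:00:00';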
SYSTEM_TIME partitioning: COLUMN properties removed. Partitioning is
now pure RANGE based on UNIX_TIMESTAMP(row_end).
The DECIMAL type is now allowed for RANGE partitioning, so we can partition by
UNIX_TIMESTAMP() (but not by DATETIME, which depends on the local timezone,
of course).
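As a rough sketch of the underlying mechanism (table name and boundary values
are invented for illustration), plain RANGE partitioning over a
UNIX_TIMESTAMP() expression looks like this:

  CREATE TABLE t_events (
    id INT,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
  )
  PARTITION BY RANGE (UNIX_TIMESTAMP(created)) (
    PARTITION p2018 VALUES LESS THAN (UNIX_TIMESTAMP('2019-01-01 00:00:00')),
    PARTITION p2019 VALUES LESS THAN (UNIX_TIMESTAMP('2020-01-01 00:00:00')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );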
Use EITS statistics to avoid changing query plans
row_upd_build_difference_binary(): Correctly handle the
case where columns (or clustered index fields) have been added
since the 'entry' was originally created. In this case,
the update vector must replace any missing columns with the
default values of the instantly added columns.
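A minimal scenario (assumed for illustration, not taken from the commit) in
which an UPDATE touches a record written before an instant ADD COLUMN:

  CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t VALUES (1);
  ALTER TABLE t ADD COLUMN b INT DEFAULT 5;  -- instant ADD COLUMN (10.3+)
  -- The row for a=1 predates column b, so the update vector must carry
  -- the instant default value of b.
  UPDATE t SET a = 2 WHERE a = 1;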
Adjust the testcase according to the review input
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
(Backported to 10.3, addressed review input)
Sj_materialization_picker::check_qep(): fix error in cost/fanout
calculations:
- for each join prefix, add #prefix_rows / TIME_FOR_COMPARE to the cost,
  like best_extension_by_limited_search does
- Remove the fanout produced by the subquery tables.
- Also take into account join condition selectivity
optimize_wo_join_buffering() (used by LooseScan and FirstMatch):
- also add #prefix_rows / TIME_FOR_COMPARE to the cost of each prefix.
- Also take into account join condition selectivity
|
| |
| |
| |
| |
| |
| | |
The test occasionally fails with a different table reference count
due to purge activity after INSERT operations (MDEV-12288).
Wait for purge before accessing dict_table_t::n_ref_count.
ROLLUP on constant table
Also fixes:
MDEV-20431 GREATEST(int_col,date_col) returns wrong results in a view
table on exec
Analysis:
========
As part of the BUG#28642318 fix, two new test cases were added. The first test
case tests a scenario with two sessions, in which the first session has a
regular table named 't1' and the other session has a temporary table named
't1'. The test executes a DELETE statement on the regular table. These
statements are captured from the binary log and replayed back on a new client
connection to prove that the DELETE statement is applied successfully. Note
that the binlog contains only the CREATE TEMPORARY TABLE part, hence a
temporary table gets created in the new connection. This replaying logic is
implemented using the '--exec $MYSQL' command. If the new connection gets
disconnected within the scope of the first test case, the test passes, i.e.
the temporary table gets dropped as part of thread cleanup. But on slow
platforms the connection gets closed during the execution of test case 2.
When the temporary table is dropped as part of thread cleanup, a
"DROP TEMPORARY TABLE t1" is written into the binary log. In test case two
the same sessions continue to exist and the table names are reused to test a
new bug scenario. The additional "DROP TEMPORARY TABLE" command drops the
tables specific to the second test, which results in an "Unknown table" error.
Fix:
====
Rename the table specific to the second test case to 't2'. Even if the
connection from test case one is closed later, the resulting
'DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t1`' will not result in an error.
my_well_formed_char_length_utf8 on 2nd execution of SP with ALTER trying to add bad CHECK
Perform the automatic name generation during execution (not prepare).
Check the result of the memory allocation operation.
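For illustration (an assumed example, not from the commit), the automatic
name generation referred to here is what happens when a CHECK constraint is
added without an explicit name:

  CREATE TABLE t (a INT);
  -- No constraint name is given, so one is generated automatically
  -- (e.g. CONSTRAINT_1); that generation now happens at execution time.
  ALTER TABLE t ADD CONSTRAINT CHECK (a > 0);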
'varchar(20)'
Cherry picking:
Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95
Description:
============
In row-based replication, when replicating from a table with a field whose
character set is utf8mb3 to the same table with the same field using
character set utf8mb4, a confusing error message is produced.
For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4':
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to
type 'varchar(1)'"
A similar issue exists for the CHAR type.
Issues with respect to BLOB types:
For BLOB: LONGBLOB to TINYBLOB - the error message displays an incorrect blob
type.
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to type
'tinyblob'"
For BINARY to BINARY - the error message displays an incorrect type for the
master side field.
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to type
'binary(10)'"
A similar issue exists for the VARBINARY type; it is displayed as 'VARCHAR'.
Analysis:
=========
In row-based replication, charset information is not sent as part of the
metadata from master to slave.
For a VARCHAR field, its character length is converted into the equivalent
number of octets/bytes and stored internally. At the time of displaying the
data to the user it is converted back to the original character length.
For example:
VARCHAR(2) in utf8mb3 is stored as 2*3 = VARCHAR(6).
At the time of displaying it to the user:
VARCHAR(6) with charset utf8mb3: 6/3 = VARCHAR(2).
At present the internally converted octet length is sent from master to slave
without the charset information. On the slave side, if the type conversion
fails, the 'show_sql_type' function is used to get the type-specific
information from the metadata. Since no charset information is available,
the field type is displayed as VARCHAR(6).
This results in a confusing error message.
For CHAR fields:
CHAR(1) in utf8mb3 - CHAR(3)
CHAR(1) in utf8mb4 - CHAR(4)
The 'show_sql_type' function, which retrieves type information from the
metadata, uses (bytes / local charset length) to get the actual character
length. If the slave's charset is 'utf8mb4' then
CHAR(3/4) --> CHAR(0)
CHAR(4/4) --> CHAR(1).
This results in a confusing error message.
Analysis for the BLOB type issue:
A BLOB's length is represented in two forms.
1. Actual length,
   i.e.
   (length < 256) type= MYSQL_TYPE_TINY_BLOB;
   (length < 65536) type= MYSQL_TYPE_BLOB; ...
2. packlength - the number of bytes used to represent the length of the blob:
   1 - tinyblob
   2 - blob ...
In row-based replication only the packlength is written in the binary log. On
the slave side this packlength is interpreted as the actual length of the
blob. Hence the length is always < 256 and the type is displayed as tinyblob.
Analysis for the BINARY to BINARY type issue:
The character set information is needed to identify a field's type as char or
binary. Since the master side character set information is not available on
the slave side, both binary and char fields are displayed as char.
Fix:
===
For CHAR and VARCHAR fields, display their length in octets for both the
source and the target fields. For the target field, display the charset
information if it is relevant.
For BLOB types, change the code to use the packlength and display the
appropriate blob type in the error message.
For BINARY and VARBINARY fields, use the slave side character set as the
reference to map them to binary or varbinary fields.
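A minimal sketch (assumed table and column definitions, not from the bug
report) of the row-based replication mismatch whose error message is being
corrected:

  -- On the master (utf8, i.e. utf8mb3: one character takes up to 3 bytes):
  CREATE TABLE test.t1 (c VARCHAR(1) CHARACTER SET utf8);
  -- On the slave (utf8mb4: one character takes up to 4 bytes):
  CREATE TABLE test.t1 (c VARCHAR(1) CHARACTER SET utf8mb4);
  -- Row events carry only the octet length (3 vs. 4 bytes), which used to be
  -- reported as the confusing "varchar(3) to varchar(1)" conversion error.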
Depending on version, "handle.exe -?" can output either "Handle v4.0"
or "Nthandle v4.1"
Some bugs are detected only after a table definition has been evicted
and then reloaded to the InnoDB data dictionary cache.
For debug builds, introduce the settable Boolean configuration parameter
innodb_evict_tables_on_commit_debug that can be set to request InnoDB
to attempt to evict table definitions from the data dictionary cache
whenever a transaction is committed.
This has been tested on 10.3 and 10.4 with the following:
./mysql-test-run.pl --mysqld=--loose-innodb-evict-tables-on-commit-debug
You can also use the following:
SET GLOBAL innodb_evict_tables_on_commit_debug=ON;
SET GLOBAL innodb_evict_tables_on_commit_debug=OFF;
The parameter affects the commit (or rollback or abort) of
transactions that have modified persistent InnoDB tables.
This patch corrects the fix of the patch for mdev-19421 that resolved
the problem of parsing some embedded join expressions such as
t1 join t2 left join t3 on t2.a=t3.a on t1.a=t2.a.
Yet that patch contained a bug that prevented proper context analysis
of queries where such expressions were used together with comma-separated
table references in FROM clauses.
MariaDB 10.4 was crashing when thread-handling was set to
pool-of-threads and wsrep was enabled.
There were two apparent reasons for the crash:
- Connection handling in threadpool_common.cc was missing calls to
  control the wsrep client state.
- Thread specific storage which contains thread variables (THR_KEY_mysys)
  was not handled appropriately by the wsrep patch when pool-of-threads
  was configured.
This patch addresses the above issues in the following way:
- Wsrep client state open/close was moved into thd_prepare_connection() and
  end_connection() to have common handling for one-thread-per-connection
  and pool-of-threads.
- Thread local storage handling in the wsrep patch was reworked by
  introducing a set of wsrep_xxx_threadvars() calls which replace the calls
  to THD store_globals()/reset_globals() and deal with thread handling
  specifics internally.
Wsrep-lib was updated to a version which relaxes internal concurrency
related sanity checks.
The rollback code from wsrep_rollback_process() was extracted into separate
calls for better readability.
The post-rollback thread was removed as it was completely unused.
b5615eff0d00cfb4c60b9d1bf67094da7c2258a6
The reason for hitting the assert is that the rec_per_key estimates have some
garbage value. So the fix is, for long unique keys, to use rec_per_key for
only one keypart; that means rec_per_key[0] holds the estimate.
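For reference, a "long unique" key here is the hash-backed UNIQUE over long
values available since 10.4; an assumed minimal example:

  -- UNIQUE over a BLOB without a prefix length is implemented as a
  -- hash-based "long unique" key; only rec_per_key[0] now carries an
  -- estimate for it.
  CREATE TABLE t (b BLOB NOT NULL, UNIQUE KEY (b)) ENGINE=InnoDB;
  INSERT INTO t VALUES (REPEAT('x', 10000));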
VDec::VDec(Item*)
Added:
- "semijoin_strategy_choice" element (actions in advance_sj_state(), name
matches the name in MySQL)
- semijoin_table_pullout element.
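An assumed usage sketch (table names invented): the new elements appear in
the optimizer trace of a semi-join query.

  SET optimizer_trace='enabled=on';
  SELECT * FROM t1 WHERE a IN (SELECT b FROM t2);
  SELECT * FROM information_schema.OPTIMIZER_TRACE;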
The MDEV-20265 commit e746f451d57def4be679caafc29976741b3e89f7
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
The current easy fix is not possible, because SELECT clones ha_partition
and then closes the clone, which leads to an unclosed transaction in
partitions that we forcibly prune out. We could solve this by closing these
partitions (and releasing them from the transaction) in
change_partitions_to_open() at the versioning conditions stage, but this
is problematic because the table lock is acquired for each partition at
the open stage and therefore must be released when we close the partition
handler in change_partitions_to_open(). More details in MDEV-20376.
This should change after MDEV-20250, where the mechanism of opening
partitions will be improved.
This reverts commit cdbac54df0bd857a053decd66b6067abf15a6801.
ha_innobase::open(): Always ignore problems with FOREIGN KEY constraints
(pass DICT_ERR_IGNORE_FK_NOKEY), no matter whether foreign_key_checks
is enabled. Instead, we must report errors when enforcing the FOREIGN KEY
constraints. As a result of ignoring these errors, the tables will be
loaded with dict_foreign_t objects whose foreign_index or referenced_index
will be NULL.
Also, pass DICT_ERR_IGNORE_FK_NOKEY instead of DICT_ERR_IGNORE_NONE
to dict_table_open_on_id_low() in many other cases. Notably, on
CREATE TABLE and ALTER TABLE, we will keep validating the FOREIGN KEY
constraints as before.
dict_table_open_on_name(): If no other flags than
DICT_ERR_IGNORE_FK_NOKEY are set, refuse access to unreadable tables.
Some encryption tests rely on this code path.
For the DML code path, we used to have the problem that when
one of the indexes was missing in dict_foreign_t, we would ignore
the FOREIGN KEY constraint altogether. The following changes
address that.
row_ins_check_foreign_constraints(): Add the parameter pk.
For the primary key, consider also foreign key constraints for which
foreign->foreign_index=NULL (no underlying index is available).
row_ins_check_foreign_constraint(): Report errors also for !check_ref.
Remove a redundant check for srv_read_only_mode.
row_ins_foreign_report_add_err(): Tolerate foreign->foreign_index=NULL.
fkerr_t: Errors for the foreign key checks. Replaces ulint,
which used #define that looked like dberr_t literals.
wsrep_dict_foreign_find_index(): Remove. Use
dict_foreign_find_index() instead, with default parameters.
dict_foreign_push_index_error(): Do not add redundant quotes
around quoted table names.
Add wait conditions, compare cardinality and other information between
nodes, and print something only if they differ.
Let us invoke wait_all_purged.inc right before the workload.
Starting with MDEV-12288 in MariaDB Server 10.3, also INSERT
generates purge workload. If we do not ensure that purge has
run to completion, the results on 10.3 and later could be
nondeterministic.
* Ensure no background purge is done;
* Better result output showing the actual number in case of failure;
* Higher and safer difference threshold.
main.selectivity_innodb: Re-record the result.
MDEV-18863: Make wsrep_sst_mariabackup use Bash due to the += syntax.
For MDEV-15955, the fix in create_tmp_field_from_item() would cause a
compilation error. After a discussion with Alexander Barkov, the fix
was omitted and only the test case was kept.
In 10.3 and later, MDEV-15955 is fixed properly by overriding
create_tmp_field() in Item_func_user_var.
This patch corrects the fix of the patch for mdev-19421 that resolved
the problem of parsing some embedded join expressions such as
t1 join t2 left join t3 on t2.a=t3.a on t1.a=t2.a.
Yet that patch contained a bug that prevented proper context analysis
of queries where such expressions were used together with comma-separated
table references in FROM clauses.
PAD_CHAR_TO_FULL_LENGTH
MYSQL_TYPE_LONGLONG' failed in Protocol_text::store_longlong
Update test results.
There were two problems:
(1) If the user wanted the same time zone information on all nodes in the
Galera cluster, the updates were not replicated, as time zone information
was stored in MyISAM tables. This is fixed on Galera by altering the time
zone tables to InnoDB while they are modified.
(2) If the user wanted different time zone information on the nodes of the
Galera cluster, TRUNCATE TABLE for the time zone tables was replicated by
Galera, destroying the time zone information on the other nodes. This is
fixed on Galera by introducing a new option for the
mysql_tzinfo_to_sql_symlink tool, --skip-write-binlog, which disables Galera
replication while the time zone tables are modified.
Changes to be committed:
modified:   mysql-test/r/mysql_tzinfo_to_sql_symlink.result
modified:   mysql-test/suite/wsrep/r/mysql_tzinfo_to_sql_symlink.result
new file:   mysql-test/suite/wsrep/r/mysql_tzinfo_to_sql_symlink_skip.result
new file:   mysql-test/suite/wsrep/t/mysql_tzinfo_to_sql_symlink_skip.test
modified:   sql/tztime.cc
When discounting selectivity of ref access, don't discount the
selectivity we've already discounted for range access.
The 10.1 version of the fix. Will need to adjust condition filtering
test results in 10.4
Skip the test on big-endian systems.
In MariaDB Server 10.0 and 10.1 (as well as MySQL 5.6),
the implementation of innodb_checksum_algorithm=crc32
wrongly assumes little-endian byte order.
The execution of mtr in the Windows environment fails because the new code
from MDEV-18565 does not take into account the need to add the ".exe"
extension to the names of executable files when searching for the
prerequisites that are needed to run SST scripts (especially when using
mariabackup) and when searching for the paths to some other Galera utilities.
This patch fixes this flaw.
Also, adding paths to the PATH environment variable is now done with the
correct delimiter character.