| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
instant_alter_column_possible
Problem: A primary key on long unique columns is not allowed, but when
`CREATE TABLE t2 (a TEXT, PRIMARY KEY(a(1871))) ENGINE=InnoDB;`
is executed with innodb_page_size=8k or 4k, a primary key on the DB_ROW_HASH_1
column is created anyway. The reason is that mysql_prepare_create_table()
checks the key_part length against max_key_part_length() to see whether it
exceeds the engine-supported key part length; if so, an error is thrown for a
primary key and a long unique index is created for a unique key. But in this
case max_key_part_length() returns 3072, which is more than max_key_length(),
and that later triggers creation of the long unique index.
Solution: max_supported_key_part_length() should return a value that depends
on the InnoDB page size.
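A minimal sketch of that solution, assuming InnoDB's documented per-page-size
index column limits (768/1536/3072 bytes for 4k/8k/16k pages) and simplified
free-standing names rather than the actual member function:

  #include <cstdint>

  // Hedged sketch, not the actual MariaDB patch: report a page-size
  // dependent key part limit so mysql_prepare_create_table() rejects an
  // over-long PRIMARY KEY part instead of silently creating a long
  // unique (DB_ROW_HASH_1) index.
  uint32_t max_supported_key_part_length(uint32_t page_size)
  {
    switch (page_size) {
    case 4096:  return 768;    // 4k pages
    case 8192:  return 1536;   // 8k pages
    default:    return 3072;   // 16k pages and larger
    }
  }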
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The check-testcase recording uses a mysqltest connection
to the database to do the recording. With the server configured
to use an abstract socket, the mysqltest client cannot connect and
fails.
We work around this by starting the server as normal and then
restarting it with an abstract socket and testing that.
This didn't affect Windows, as it just used a TCP connection.
So this did affect all Unix-socket-based systems except Linux,
as Linux was the only one that supported abstract sockets.
|
|
|
|
|
|
|
|
|
| |
Removed Field_map, since it was used in only a single function.
Fixed is_indexed_agg_distinct(), since it relied on the Bitmap being
initialized by its constructor.
Fixes MDEV-25888 in 10.4
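A hedged illustration of the pitfall, with deliberately simplified types
(not server code): code that treats a default-constructed bitmap as empty
breaks as soon as the constructor stops zeroing the storage, so the fix is
to clear it explicitly.

  struct Bitmap {
    unsigned long bits;                 // deliberately uninitialized, like a POD
    void clear_all() { bits = 0; }
    void set_bit(unsigned i) { bits |= 1UL << i; }
    bool is_set(unsigned i) const { return bits & (1UL << i); }
  };

  int main()
  {
    Bitmap fields;
    fields.clear_all();                 // the fix: do not rely on the constructor
    fields.set_bit(3);
    return fields.is_set(3) ? 0 : 1;
  }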
|
|
|
|
|
| |
These are 10.4+ tests only. The 10.2+ tests are pushed into 10.2
and will be merged into 10.4+ independently
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
the idea of main.failed_auth_unixsocket was to have an existing
user account (root) authenticate with unix_socket, then log in with
a non-existent user name. A non-existent user name forces the server
to perform the authentication in the name of some random existing
user. But it must still fail in the end, as the user name is wrong.
In 10.4 a second predefined user was added, mariadb.sys, so root
is no longer the only user in mysql.global_priv, and unix_socket auth
must be forced for all existing user accounts, because we cannot
know which user account the server will randomly pick for the
non-existent-user auth.
|
|
|
|
|
|
| |
RocksDB failed to package on Fedora 32 and 33 with:
*** ERROR: ambiguous python shebang in /usr/bin/myrocks_hotbackup: #!/usr/bin/env python. Change it to python3 (or python2) explicitly.
|
| |
|
|
|
|
|
|
|
| |
Enable AES-GCM for SSL (only).
AES-GCM for encryption plugins remains disabled (the aes-t test fails, due
to some bug in GCM or CTR padding)
|
|
|
|
| |
We're already on 4.4.6
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Occasionally, the test innodb.alter_copy would fail in MariaDB 10.6.1,
reporting DB_MISSING_HISTORY during CHECK TABLE. It started to occur during
the development of MDEV-25180, which introduced purge_sys.stop_SYS().
If we delay purge further during DDL operations, the test would
almost always fail. The reason is that during startup we will restore
a purge view, and CHECK TABLE would still use REPEATABLE READ
even though innodb_read_only is set and isolation levels other
than READ UNCOMMITTED are not guaranteed to work.
ha_innobase::check(): Use the READ UNCOMMITTED isolation level if
innodb_read_only is set or innodb_force_recovery exceeds 3.
dict_set_corrupted(): Do not update the persistent data dictionary
if innodb_force_recovery exceeds 3.
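A hedged sketch of that isolation-level fallback, with simplified
free-standing names rather than the actual member function:

  enum isolation_level { READ_UNCOMMITTED, REPEATABLE_READ };

  // simplified stand-in for the ha_innobase::check() logic above
  isolation_level check_table_isolation(bool innodb_read_only,
                                        unsigned innodb_force_recovery)
  {
    // Without purge, the undo history needed by REPEATABLE READ may be
    // gone (DB_MISSING_HISTORY); READ UNCOMMITTED needs no history.
    if (innodb_read_only || innodb_force_recovery > 3)
      return READ_UNCOMMITTED;
    return REPEATABLE_READ;
  }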
|
| | |
|
|\ \
| |/ |
|
| |\ |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
buf_read_ibuf_merge_pages(): If space->size is 0, invoke
fil_space_get_size() to determine the size of the tablespace
by reading the header page. Only after that proceed to delete
any entries that are beyond the end of the tablespace.
Otherwise, we could be deleting valid entries that actually
need to be applied.
This fixes a regression that had been introduced in
commit b80df9eba23b4eb9694e770a41135127c6dbc5df (MDEV-21069),
which aimed to avoid crashes during DROP TABLE of corrupted tables.
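A hedged sketch of the ordering described above, with stub types and a
hypothetical helper standing in for fil_space_get_size():

  #include <cstdint>

  struct fil_space_t { uint32_t size; };

  // stub standing in for reading the size from the header page
  static uint32_t read_size_from_header(fil_space_t&) { return 100; }

  void merge_buffered_page(fil_space_t& space, uint32_t page_no)
  {
    if (space.size == 0)
      space.size = read_size_from_header(space);  // first, learn the size
    if (page_no >= space.size) {
      // only now is the entry provably beyond the tablespace end
      /* delete the buffered entry */
      return;
    }
    /* apply the buffered change to the page */
  }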
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This partially reverts commit d7321893d8c50071632a102e17a7869da9cb03a5.
The *.jar files are not being built and all Debian builds are failing
as dh_install stops on missing files. To build them we would need to also
add new Java build dependencies.
In a stable release (10.2->10.5) we shouldn't add new files and certainly
not any new build dependencies, so the commit is reverted.
Also, the files are located in a different path, and already included
in the mariadb-test-data package:
/usr/share/mysql/mysql-test/plugin/connect/connect/std_data/JavaWrappers.jar
/usr/share/mysql/mysql-test/plugin/connect/connect/std_data/JdbcMariaDB.jar
/usr/share/mysql/mysql-test/plugin/connect/connect/std_data/Mongo2.jar
/usr/share/mysql/mysql-test/plugin/connect/connect/std_data/Mongo3.jar
This change needs to be redesigned and applied only to 10.6 or newer.
|
| | |
| | |
| | |
| | |
| | | |
The fix is to quote the service name parameter when it is passed to
the mysql_upgrade_service subprocess.
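A minimal sketch of the quoting, with a hypothetical helper (real Windows
command-line quoting would also escape embedded quotes):

  #include <string>

  // wrap the service name so spaces survive command-line splitting
  std::string quote_arg(const std::string& arg)
  {
    return "\"" + arg + "\"";
  }
  // e.g. "mysql_upgrade_service.exe --service=" + quote_arg("My Service")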
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
source configuration and Ninja generator
- As a solution, use an early `PLUGIN_CONNECT=NO` check to disable the plugin
(solution suggested by wlad@mariadb.com)
- `JNI_FOUND` is an internal result variable and should be set with the
cached library and header variables (like `JAVA_INCLUDE_PATH`) defined.
* Note: the wrapper cmake/FindJNI.cmake runs the first time, and CMake's native Find<module> returns only cached variables like `JAVA_INCLUDE_PATH`; result variables are not cached.
Reviewed by: serg@mariadb.com
|
| | |
| | |
| | |
| | |
| | | |
only perform the "correct table name" check for *new* generated columns,
but not for already existing ones - they're guaranteed to be valid
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This bug could manifest itself after pushing a where condition over a
mergeable derived table / view / CTE DT into a grouping view / derived
table / CTE V whose item list contained set functions with constant
arguments, such as MIN(2), SUM(1) etc. In such cases the field references
used in the condition pushed into the view V that correspond to set functions
are wrapped into Item_direct_view_ref wrappers. Due to a wrong implementation
of the virtual method const_item() for the class Item_direct_view_ref, the
wrapped set functions with constant arguments could be erroneously taken
for constant items. This could lead to a wrong result set returned by the
main select query in 10.2. In 10.4, where the possibility of pushing
conditions from HAVING into WHERE had been added, this could cause a crash.
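A hedged sketch of the corrected logic, with heavily simplified class shapes
(not the actual server hierarchy): the wrapper must not report constness
that the wrapped set function does not have.

  struct Item {
    virtual ~Item() = default;
    virtual bool const_item() const = 0;
    virtual bool with_sum_func() const { return false; }
  };

  struct Item_sum_min : Item {                  // e.g. MIN(2)
    bool const_item() const override { return false; }  // per-group value
    bool with_sum_func() const override { return true; }
  };

  struct Item_direct_view_ref : Item {
    Item* ref;
    explicit Item_direct_view_ref(Item* r) : ref(r) {}
    bool const_item() const override {
      // buggy version: treated constant-argument set functions as
      // constants; fixed version defers to the referenced item
      return !ref->with_sum_func() && ref->const_item();
    }
  };

  int main()
  {
    Item_sum_min min2;                          // stands in for MIN(2)
    Item_direct_view_ref ref(&min2);
    return ref.const_item() ? 1 : 0;            // fixed: not constant
  }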
Approved by Sergey Petrunya <sergey.petrunya@mariadb.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
If a select query contained an ORDER BY clause that followed a LIMIT
clause, another ORDER BY clause, or an ORDER BY with LIMIT, the EXPLAIN
output for the query showed an execution plan different from the one that
was actually executed.
Approved by Roman Nozdrin <roman.nozdrin@mariadb.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
ha_heap::external_lock() contains some consistency checks for the table
in a debug compilation.
This code suffers from a lack of synchronization in a rare case
where mysql_lock_tables() fails and unlock is forced, even if the lock was
not previously taken.
As a workaround, require the EXTRA_DEBUG compile definition in order to
activate the consistency checks. The code might still be useful in some
cases - but the audience is developers looking for errors in
single-threaded scenarios, rather than multiuser stress tests.
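A hedged sketch of the gating, with hypothetical names (not the actual
handler code): plain debug builds skip the racy check; only EXTRA_DEBUG
builds run it.

  #include <cassert>

  void heap_external_lock(bool locking, int& lock_count)
  {
    if (!locking) {
  #if defined(EXTRA_DEBUG)
      // valid only in single-threaded runs; racy under stress tests
      assert(lock_count > 0);
  #endif
      if (lock_count > 0) --lock_count;   // tolerate a forced unlock
    } else {
      ++lock_count;
    }
  }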
|
| | |
| | |
| | |
| | | |
No fix for this bug is needed starting from version 10.4.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Commit b5615eff0d00 introduced a comment in the result file during shutdown.
On Windows, for tests involving `file_key_managment.so` in plugin-load-add, the recorded name will be overwritten with the .dll extension.
The same happens with the environment variable `$FILE_KEY_MANAGMENT_SO`.
So the patch removes the extension, to be extension-agnostic.
Reviewed by: wlad@mariadb.com
|
|\ \ \
| |/ / |
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
If a join query uses a derived table (view / CTE) with a GROUP BY clause, then
the execution plan for such a join may employ split optimization. When this
optimization is employed, the derived table is not materialized; rather, only
some partitions of the derived table are subject to grouping. Split
optimization can be applied only if:
- there are some indexes over the tables used in the join specifying the
derived table whose prefixes partially cover the field items used in the
GROUP BY list (such indexes are called splitting indexes);
- the WHERE condition of the join query contains conjunctive equalities
between columns of the derived table that comprise major parts of
splitting indexes and columns of the other join tables.
When the optimizer evaluates extending a partial join by the rows of the
derived table, it always considers the possibility of using split optimization.
Different splitting indexes can be used depending on the extended partial
join. Under some rare conditions, for example when there is a non-splitting
covering index for a table joined in the join specifying the derived table,
using a splitting index to produce the rows needed for grouping may still be
less beneficial than using such a covering index without any splitting
technique. The function JOIN_TAB::choose_best_splitting() must take this
into account.
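A hedged sketch of that cost guard, with hypothetical names and simplified
costs: splitting only wins when it is actually cheaper than the best
non-splitting plan.

  struct split_plan { double cost; bool usable; };

  // pick the cheaper of a covering-index scan and the best split plan
  double best_access_cost(double covering_index_cost, split_plan split)
  {
    if (!split.usable || covering_index_cost <= split.cost)
      return covering_index_cost;  // the fix: splitting is not assumed to win
    return split.cost;
  }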
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
| | |
| | |
| | |
| | | |
output as in 10.3
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
If a join query uses a derived table (view / CTE) with a GROUP BY clause, then
the execution plan for such a join may employ split optimization. When this
optimization is employed, the derived table is not materialized; rather, only
some partitions of the derived table are subject to grouping. Split
optimization can be applied only if:
- there are some indexes over the tables used in the join specifying the
derived table whose prefixes partially cover the field items used in the
GROUP BY list (such indexes are called splitting indexes);
- the WHERE condition of the join query contains conjunctive equalities
between columns of the derived table that comprise major parts of
splitting indexes and columns of the other join tables.
When the optimizer evaluates extending a partial join by the rows of the
derived table, it always considers the possibility of using split optimization.
Different splitting indexes can be used depending on the extended partial
join. Under some rare conditions, for example when there is a non-splitting
covering index for a table joined in the join specifying the derived table,
using a splitting index to produce the rows needed for grouping may still be
less beneficial than using such a covering index without any splitting
technique. The function JOIN_TAB::choose_best_splitting() must take this
into account.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
- Removed TokuDB (no need to test this anymore with valgrind)
- Added __attribute__((unused)) in a few places to be able to compile even
if valgrind/memcheck.h is not installed.
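For illustration, the annotation pattern (GCC/Clang attribute syntax)
looks like this:

  int main()
  {
    // silences -Wunused-variable when the memcheck.h macros compile away
    int vbits __attribute__((unused)) = 0;
    return 0;
  }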
Reviewer: Marko Mäkelä <marko.makela@mariadb.com>
|
|\ \ \
| |/ / |
|
| |\ \
| | |/ |
|
| | |
| | |
| | |
| | |
| | | |
Silence a warning about an uninitialized variable that was
introduced by commit d8fa71a089aa5cbe569e7f8b6fa67326a9ecab2b.
|
| | | |
|
|\ \ \
| |/ / |
|
| |\ \
| | |/ |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This commit reduces the likelihood of getting a busy port on
quick restarts with rsync SST (problem MDEV-25818) and fixes
a number of other flaws in SST scripts, adds new functionality,
and also synchronizes the xtrabackup-v2 script with the
mariabackup script (the latter applies only to the 10.2 branch):
1) SST via rsync: rsync and stunnel do not always get enough
time to complete their correct SIGTERM handling. These utilities
are now given more time to exit normally (via normal SIGTERM
processing) before we move on to using "kill -9";
2) SST via rsync: attempts to terminate an rsync or stunnel process
(via the "kill" utility) are only made if it did not terminate on
its own;
3) SST via rsync: if a combination of stunnel and rsync is used,
then we need to wait for both utilities to finish or stop, not
just one of them;
4) The config file and pid file for stunnel are now deleted after
successful completion of SST on the donor node;
5) The configs and pid files from rsync and stunnel should not be
deleted unless these utilities succeed (or are successfully
terminated) on the joiner node;
6) The configs and pid files are now excluded from transfer via rsync;
7) Spaces in paths are now valid for config files as well (when
used with SST via rsync or mariabackup / xtrabackup[-v2]);
8) SST via mariabackup: added preliminary verification of keys and
certificates that are used when establishing a connection using
SSL (to avoid long timeouts and improve diagnostics) - by analogy
with how it is done for the xtrabackup-v2 (plus check for CA file),
while that check is skipped if the user does not have openssl
installed (or does not have diff utility);
9) Added backup-threads=<n> configuration option which adds
"--parallel=<n>" for mariabackup / xtrabackup at backup and
move-back stages;
10) Added encrypt-threads and encrypt-chunk-size configuration
options for xbcrypt management (when xbcrypt is used);
11) Small optimization: checking the socat version and adding
a file with parameters for 2048-bit Diffie-Hellman (if necessary)
is done only if the user has not specified "dhparam=" in the
"sockopt" option value;
12) SST via rsync now supports "backup-threads" configuration option
(in server-related sections or in the "[sst]");
13) Determining the number of available processors is now supported
for FreeBSD + mariabackup/xtrabackup: before that we might have
problems with "--compact" (rebuild indexes) or qpress on FreeBSD;
14) The check_pid() function should not raise an error state in
the rare cases when the pid file was created, but it is empty,
or if it is deleted right during the check, or when zero is read
from the pid file;
15) Improved the templates that are used to check whether a requested
socket is "listening" when using the ss utility;
16) Shortened some other templates for socket state utilities;
17) Temporary files created by mariabackup / xtrabackup are moved
to a separate subdirectory inside tmpdir (so they don't get
mixed with other temporary files, which can make debugging
more difficult);
18) 10.2 only: the script for SST via xtrabackup-v2 has been brought
in full compliance with all the bugfixes made for mariabackup (as
it previously contained many flaws compared to the updated script
for mariabackup).
|
| | |
| | |
| | |
| | | |
mariabackup --help
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
prepared statement produces different results for UPDATE with subquery
Both EXPLAIN and EXPLAIN EXTENDED produce a different result set
when run in the normal way versus in PS mode for
UPDATE/DELETE statements with a subquery.
The use case below reproduces the issue:
MariaDB [test]> CREATE TABLE t1 (c1 INT KEY) ENGINE=MyISAM;
Query OK, 0 rows affected (0,128 sec)
MariaDB [test]> CREATE TABLE t2 (c2 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,023 sec)
MariaDB [test]> CREATE TABLE t3 (c3 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,021 sec)
MariaDB [test]> EXPLAIN EXTENDED UPDATE t3 SET c3 =
-> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
-> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
-> ON a12.c1 = a11.c1 ) d1 );
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| 1 | PRIMARY | t3 | ALL | NULL | NULL | NULL | NULL | 0 | 100.00 | |
| 2 | SUBQUERY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Impossible WHERE noticed after reading const tables
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,002 sec)
MariaDB [test]> PREPARE stmt FROM
-> EXPLAIN EXTENDED UPDATE t3 SET c3 =
-> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
-> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
-> ON a12.c1 = a11.c1 ) d1 );
Query OK, 0 rows affected (0,000 sec)
Statement prepared
MariaDB [test]> EXECUTE stmt;
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| 1 | PRIMARY | t3 | ALL | NULL | NULL | NULL | NULL | 0 | 100.00 | |
| 2 | SUBQUERY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | no matching row in const table |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,000 sec)
The reason different result sets are produced is that on execution
of the statement 'EXECUTE stmt' the flag SELECT_DESCRIBE is not set
in the data member SELECT_LEX::options for the instances of SELECT_LEX that
correspond to the subqueries used in the UPDATE/DELETE statements.
Initially, these flags were set on parsing the statement
PREPARE stmt FROM "EXPLAIN EXTENDED UPDATE t3 SET ..."
but later they were reset before starting the real execution of
the parsed query while handling the statement 'EXECUTE stmt'.
So, to fix the issue, the functions mysql_update()/mysql_delete()
have been modified to set the flag SELECT_DESCRIBE forcibly
in the data member SELECT_LEX::options for the primary SELECT_LEX
of the UPDATE/DELETE statement.
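A hedged sketch of the fix's shape, with an assumed flag value and
simplified types: re-assert the flag on the primary SELECT_LEX whenever
the statement runs under EXPLAIN.

  #include <cstdint>

  const uint64_t SELECT_DESCRIBE = 1ULL << 1;   // assumed bit value

  struct SELECT_LEX { uint64_t options = 0; };

  // called from mysql_update()/mysql_delete() in this sketch
  void force_describe(SELECT_LEX& primary, bool executing_explain)
  {
    // PS re-execution resets parse-time flags, so set it forcibly
    if (executing_explain)
      primary.options |= SELECT_DESCRIBE;
  }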
|
| | | |
|
| | |
| | |
| | |
| | | |
to print all arguments of _verbose(), not just the number of them
|
| | |
| | |
| | |
| | | |
This reverts commit 17106c984b9649641068ddf108dcffd2c7773212.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
InnoDB should calculate the MBR for the first field of a
spatial index and compare it with the clustered
index field's MBR. Due to the MDEV-25459 refactoring, InnoDB
calculates the length of the first field and fails with a
'too long column' error.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The only call of the virtual member function
handler::update_table_comment() was removed in
commit 82d28fada7dc928564aefac802400c6684c11917 (MySQL 5.5.53)
but the implementation was not removed.
The only non-trivial implementation was for InnoDB. The information
is now returned via handler::get_foreign_key_create_info() and
ha_statistics::delete_length.
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
cherry-pick commit: 1fff2398ef3dda1a7e8404f18e4e165823bd4e0a
MDEV-22530 post-push fixes from 10.6.
Followup: if the KILL happens, report it as a failure;
don't eat it up silently. Note that this has to be done after `table_name`
is populated, so that the error message can show it.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The following features have been added:
1) Automatic addition of the pf = ip6 option for socat
when it can be recognized by the format of the connection
address;
2) Automatically add or remove extra commas at the beginning
and at the end of sockopt, for example, sockopt='pf=ip6'
and sockopt=',pf=ip6' work equally well;
Also, since these changes interfere with the code of the get_transfer()
function, I refactored it as well, and now:
3) encrypt = 4 is supported not only for xtrabackup-v2,
but also for mariabackup - this can help with migration
from Percona;
4) Improved setting of 'commonname' option for encrypt=3
and encrypt=4 modes;
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
mbstream is already supported as a format name after MDEV-24580,
but additional code refactoring has been done to correctly display
the format name in log files and to check if the mbstream utility
is in the path. Also, for xtrabackup-v2 (only available in the 10.2
branch), both utilities - xbstream and mbstream - are supported, since
they are interchangeable in this context. In this case, the original
innobackupex always receives the correct --stream=xbstream option
as input, but the user can actually try to use the mbstream utility
during the transfer (if the user explicitly specifies this in the
configuration file).
|
| | | |
|