| Commit message | Author | Age | Files | Lines |
|
dbug_print_rec() functions used to print data inside GDB.
|
Fix a trivial error in the fix for MDEV-21958: check the key in the right
table.
|
auth_string to change it to authentication_string
cherry-pick from 10.4:
commit b976b9bfc3e
Author: Sergei Golubchik <serg@mariadb.com>
Date: Tue Apr 21 18:40:15 2020 +0200
MDEV-21244 mysql_upgrade creating empty global_priv table
support upgrades from 5.2 privilege tables
|
in particular, it caused escape_item->is_expensive() property
to be lost instead of being properly propagated up.
|
Part II.
It's still possible to bypass Item_func_like::escape
initialization in Item_func_like::fix_fields().
This requires ESCAPE argument being a cacheable subquery
that uses tables and is inside a derived table which
is used in multi-update.
Instead of implementing a complex or expensive fix for
this particular ridiculously artificial case, let's simply disallow it.
|
in queries like
create view v1 as select 2 like 1 escape (3 in (select 0 union select 1));
select 2 union select * from v1;
Item_func_like::escape was left uninitialized, because
Item_in_optimizer is const_during_execution()
but not actually const_item() during execution.
It is not, because constant subquery evaluation was disabled for derived tables.
In practice it only needs to be disabled for multi-update,
which runs fix_fields() before all tables are locked.
|
This happens if Item_func_like is copied (get_copy()):
after one copy gets fixed, the other tries to fix the escape item again.
|
At end_connection, make sure we have wsrep before trying to free
the connection assigned to it.
|
binlog_truncate_innodb and binlog_spurious_ddl_errors, rpl_parallel_retry fails in bb
|
row_upd_clust_step() calls row_upd_del_mark_clust_rec() which would
allocate some memory in row_ins_foreign_fill_virtual(). Then,
row_upd_store_row() would access the allocated memory, but only after
potentially freeing that memory by invoking mem_heap_empty(),
leading to ASAN heap-use-after-free diagnostics.
row_ins_foreign_fill_virtual(): Use a more appropriate memory heap with a
longer lifetime.
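The lifetime hazard can be pictured with a minimal, self-contained C++ sketch; toy_heap, update_heap and row_heap are illustrative stand-ins for InnoDB's mem_heap_t objects, not the actual server code:

  #include <cstddef>
  #include <cstring>
  #include <iostream>
  #include <memory>
  #include <vector>

  // Toy stand-in for mem_heap_t: empty() releases every prior allocation at once.
  struct toy_heap {
    std::vector<std::unique_ptr<char[]>> blocks;
    char *alloc(std::size_t n) { blocks.emplace_back(new char[n]); return blocks.back().get(); }
    void empty() { blocks.clear(); }
  };

  int main() {
    toy_heap update_heap, row_heap;
    // Buggy pattern: the value lives in a heap that is emptied before it is read.
    //   char *v= update_heap.alloc(4); std::memcpy(v, "abc", 4);
    //   update_heap.empty();           // reading v now is a use-after-free
    // Fixed pattern: allocate from a heap with a longer lifetime.
    char *v= row_heap.alloc(4);
    std::memcpy(v, "abc", 4);
    update_heap.empty();               // emptying the short-lived heap is now harmless
    std::cout << v << '\n';            // the value is still valid
  }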
|
Due to this bug the server reported bogus messages about lack of SELECT
privileges for base tables used in the specifications of CTE tables.
It happened only if such a CTE was referred to at least twice.
For any non-recursive reference to a CTE that is not the primary one the
specification of the CTE is cloned, and the function check_table_access()
is called for such a reference to check the privileges of the tables
referenced in the specification. Because no name resolution had been
performed for CTE references whose definitions occurred outside the
specification before this call of check_table_access(), those references
were treated as references to base tables rather than to CTEs. Yet for
CTEs, as for derived tables, no privileges are needed and thus none can
be granted.
The patch ensures proper name resolution of all references to CTEs before
any ACL checks.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
If log_slave_updates==OFF, wsrep applier threads used to be configured
with the option: thd->variables.option_bits&= ~(OPTION_BIN_LOG);
(i.e. like sql_log_bin=OFF), and this regardless of the log-bin configuration.
With the configuration --log-bin && --log-slave-updates=OFF, local threads
therefore used binlogging, but applier threads did not. Further, local
threads went through binlog group commit, while applier threads did direct
commits. This resulted in a situation where applier threads entered wsrep
XID checkpointing earlier and could sync their wsrep XID out of order.
A later local thread commit would then see that a higher seqno was already
checkpointed and fire an assert because of this.
As a fix, applier threads are now forced to enable binlogging regardless of
the log-slave-updates configuration.
This PR comes with a new mtr test, galera.MDEV-24327, which causes a scenario
where an applier transaction is applied and committed while an earlier local
transaction is parked before entering the commit order monitor. A buggy
MariaDB version would fail an assertion because of the wsrep XID checkpoint
order violation.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
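A simplified sketch of the option-bit handling described above (the OPTION_BIN_LOG value and the THD layout are illustrative; only the bit manipulation mirrors the real code):

  #include <cstdint>
  #include <cstdio>

  static const std::uint64_t OPTION_BIN_LOG= 1ULL << 1;   // illustrative value

  struct THD { struct { std::uint64_t option_bits; } variables; };

  // Old behaviour: applier threads always cleared the flag and skipped
  // binlog group commit.
  void configure_applier_old(THD *thd)
  { thd->variables.option_bits&= ~OPTION_BIN_LOG; }

  // Fixed behaviour: applier threads keep binlogging enabled, so they go
  // through the same group commit (and wsrep XID checkpointing order) as
  // local threads.
  void configure_applier_fixed(THD *thd)
  { thd->variables.option_bits|= OPTION_BIN_LOG; }

  int main() {
    THD thd{};
    configure_applier_fixed(&thd);
    std::printf("binlog enabled: %s\n",
                (thd.variables.option_bits & OPTION_BIN_LOG) ? "yes" : "no");
  }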
|
This bug could cause a crash when executing queries that used mutually
recursive CTEs with the system variable big_tables set to 1. It happened
due to several bugs in the code that handled recursive table references
referring to mutually recursive CTEs. For each recursive table reference
a temporary table is created that contains all rows generated for the
corresponding recursive CTE table on the previous step of recursion.
This temporary table should be created in the same way as the temporary
table created for a regular materialized derived table, using the
method select_union::create_result_table(). In this case, when the
temporary table is created, it uses the select_union::TMP_TABLE_PARAM
structure as the parameter for the table construction. However the
code created the temporary table using just the function create_tmp_table()
and passed pointers to certain fields of the TMP_TABLE_PARAM structure
used for accumulation of rows of the recursive CTE table as parameters
for update. This was a mistake because now different temporary tables
cannot share some TMP_TABLE_PARAM fields in the general case. Besides,
depending on how the mutually recursive CTE tables were defined and which
of them were referred to in the executed query, the select_union object
allocated for a recursive table reference could be allocated again after
the temporary table had been created. In this case the TMP_TABLE_PARAM
object associated with the temporary table created for the recursive
table reference contained unassigned fields needed for execution when
the Aria engine is employed as the engine for temporary tables.
This patch ensures that
- the select_union object is created only once for any recursive table
  reference
- any temporary table created for recursive CTEs uses its own
  TMP_TABLE_PARAM structure
The patch also fixes a problem caused by incomplete cleanup of join tables
associated with recursive table references.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
backup prepare
open_files_limit option was processed only for --backup, but not for
--prepare.
|
The last_updated column of innodb_table_stats and innodb_index_stats
hasn't been DATA_FIXBINARY for many years.
InnoDB represents TIMESTAMP as an INT of length 4. Let's test it as such
and stop hiding the result in the mysql_upgrade test.
Reviewer: Marko
|
Basic variant of the fix: do not consider conditions of the form
unique_key NOT IN (c1,c2...)
to be sargable. If there are only a few constants, the condition
is not selective. If there are a lot of constants, the overhead of
processing such a huge range list is not worth it.
(Backport to 10.2)
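A minimal sketch of the heuristic, assuming a simplified condition descriptor (Condition and its fields are hypothetical, not the optimizer's real structures):

  #include <cstddef>

  struct Condition {
    bool is_not_in;          // condition has the form  key NOT IN (c1, c2, ...)
    bool key_is_unique;      // the key on the left-hand side is unique
    std::size_t n_constants; // number of constants in the IN list
  };

  // unique_key NOT IN (...) is never treated as sargable: with few constants it
  // excludes almost nothing, with many constants the range list is too expensive.
  bool is_sargable(const Condition &cond) {
    if (cond.is_not_in && cond.key_is_unique)
      return false;
    return true;             // other conditions keep their usual treatment
  }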
|
The policy is not set for 10.2. If it is set, CMake would complain about
the bundled zlib, for which the policy is not set.
Fix:
- Set the policy for 10.2 for the top-level project.
  For 10.3+ it was already set.
- Clean up zlib to remove unneeded stuff. It is an internal static library;
  it needs none of PROJECT, library versioning, or an RC file on Windows.
  The name of the library on Unix does not make any difference, since it is
  static and compiled in.
|
failed in Diagnostics_area::set_ok_status on INSERT
Analysis: An error is not returned when strict mode is enabled and the value
is truncated because the double is outside the range.
Fix: Return HA_ERR_AUTOINC_ERANGE if the error was reported when the double
is outside the range.
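A rough sketch of the intended control flow, with hypothetical limits and a stand-in error code (the server's real check lives in the auto-increment store path and uses HA_ERR_AUTOINC_ERANGE):

  enum { STORE_OK= 0, STORE_AUTOINC_ERANGE= 1 };   // stand-ins, not real codes

  int store_autoinc_double(double v, double col_min, double col_max,
                           bool strict_mode) {
    if (v < col_min || v > col_max) {              // value would be truncated
      if (strict_mode)
        return STORE_AUTOINC_ERANGE;               // report instead of silently truncating
    }
    return STORE_OK;
  }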
|
Analysis: The error is not returned when the statement can't be used.
And so we still go on to search for keywords.
Fix: Return the error state.
|
Encountered the linker failure on Debug build in 10.4:
[53/585] Linking CXX executable unittest/sql/mf_iocache-t
FAILED: unittest/sql/mf_iocache-t
: && /usr/bin/c++ -pie -fPIC -fstack-protector --param=ssp-buffer-size=4 -fPIC -g -DENABLED_DEBUG_SYNC -ggdb3 -DSAFE_MUTEX -DSAFEMALLOC -DTRASH_FREED_MEMORY -Wall -Wextra -Wno-format-truncation -Wno-init-self -Wno-nonnull-compare -Wno-unused-parameter -Woverloaded-virtual -Wnon-virtual-dtor -Wvla -Wwrite-strings -Werror -Wl,-z,relro,-z,now unittest/sql/CMakeFiles/mf_iocache-t.dir/mf_iocache-t.cc.o unittest/sql/CMakeFiles/mf_iocache-t.dir/__/__/sql/mf_iocache_encr.cc.o -o unittest/sql/mf_iocache-t -lpthread mysys/libmysys.a unittest/mytap/libmytap.a mysys_ssl/libmysys_ssl.a mysys/libmysys.a dbug/libdbug.a mysys/libmysys.a dbug/libdbug.a -lz -lm strings/libstrings.a -lpthread -lssl -lcrypto -ldl && :
/usr/bin/ld: mysys/libmysys.a(my_addr_resolve.c.o):/home/dan/repos/mariadb-server-10.4/mysys/my_addr_resolve.c:173: multiple definition of `info'; unittest/sql/CMakeFiles/mf_iocache-t.dir/mf_iocache-t.cc.o:/home/dan/repos/mariadb-server-10.4/unittest/sql/mf_iocache-t.cc:99: first defined here
We make Dl_info static: in MDEV-21646 moving it out of the function
was the main goal, and having its scope limited by static doesn't affect
the function.
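A minimal illustration of the linkage fix (surrounding code omitted; only the storage-class change matters, since a file-scope definition without static has external linkage and collides with any other global named info):

  // my_addr_resolve.c, before:
  //   Dl_info info;          // external linkage -> "multiple definition of `info'"
  // after:
  #define _GNU_SOURCE          // Dl_info is a glibc extension in <dlfcn.h>
  #include <dlfcn.h>

  static Dl_info info;         // internal linkage: visible only in this file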
|
view with a subquery
Detect derived tables differently: TABLE_LIST::is_derived()
only works after mysql_derived_init().
|
cppcheck warnings
|
fix parsing of "1 IS NULL = 2"
|
don't allow group_concat_max_len values >= 4Gb
(they never worked anyway)
|
Diagnostics_area::sql_errno upon query from I_S with LIMIT ROWS EXAMINED
Make the test case more robust. We only care that the query returns
no rows and that the server doesn't crash.
|
__memcmp_avx2_movbe from native_compare
don't allow too small max_sort_length values
|
|
SIGSEGV in __memcmp_avx2_movbe from native_compare"
This reverts commit 5a0c34e4c2fd951119efb432eedcaa65a1d36606,
but keeps the test case.
|
Cause: the shared federatedx_io cannot store table-specific data.
Fix: move the current row reference `federatedx_io_mysql::current` to
ha_federatedx.
A FederatedX connection (represented by federatedx_io) is stored in
federatedx_txn::txn_list of per-server connections (see
federatedx_txn::acquire()). The federatedx_txn object is stored in the THD
(see ha_federatedx::external_lock()). When multiple handlers acquire a
FederatedX connection they get a single federatedx_io instance. Multiple
handlers do their operations via federatedx_io_mysql::mark_position()
and federatedx_io_mysql::fetch_row() in an arbitrary order. They access
the same federatedx_io_mysql instance and the same MYSQL_ROWS *current
pointer, so one handler disrupts the work of the other.
Related to "MDEV-14551 Can't find record in table on multi-table update
with ORDER BY".
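A toy C++ model of the structural change (shared_io and handler_fx loosely mirror federatedx_io_mysql and ha_federatedx; the classes are illustrative, not the real ones):

  #include <cstddef>
  #include <string>
  #include <vector>

  using ROW= std::vector<std::string>;

  struct shared_io {            // one per server connection, shared by handlers
    std::vector<ROW> result_set;
    // Before the fix the scan position ("current") lived here, so two handlers
    // sharing the connection overwrote each other's position.
  };

  struct handler_fx {           // one per open table
    shared_io *io;
    std::size_t current= 0;     // after the fix: per-handler scan position

    const ROW *fetch_row() {
      if (current >= io->result_set.size())
        return nullptr;
      return &io->result_set[current++];
    }
  };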
|
When the query using a recursive CTE whose definition contained wildcard
symbols in the recursive part was processed at the prepare stage an
assertion was hit if the query was executed without any default database
set. The failure happened when the function insert_fields() tried to check
column privileges for the temporary table created for a recursive
reference to the CTE. No acl checks are needed for any CTE. That's why this
check should be blocked as well. The patch formulates a stricter condition
under which this check is blocked, covering the case when a query
using recursive CTEs is executed with no default database set.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
For table references to CTEs the field TABLE_LIST::db must be set to
an empty string, as is done for table references to derived tables, so
that CTEs are processed similarly to how derived tables are processed.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
truncate check constraints expressions
- Reviewed by: daniel@mariadb.org
|
Introduced with 6b7918d524d5 in `10.2`: just check for the helper and
handle it if it exists.
Reviewed by: cvicentiu@mariadb.org
|
- MDEV-24177: main.sp2 test fails: Result length mismatch
- MDEV-24178: main.upgrade_MDEV-19650 test fails: Result length mismatch
Reviewed by: serg@mariadb.com
|
mergeable derived table
Do not check privileges for derived tables/CTEs and their fields.
Approved by Oleksandr Byelkin <sanja@mariadb.com>
|
innobase_space_shutdown(): Remove. We want this step to be executed
before the message "InnoDB: Shutdown completed; log sequence number "
is output by innodb_shutdown(). It used to be executed after that step.
innodb_shutdown(): Duplicate the code that used to live in
innobase_space_shutdown().
innobase_init_abort(): Merge with innobase_space_shutdown().
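A toy sketch of the reordering (the function bodies are placeholders; only the ordering mirrors the description above):

  #include <cstdio>

  static void space_shutdown_steps()      // code formerly in innobase_space_shutdown()
  { std::puts("freeing tablespace objects"); }

  void innodb_shutdown() {
    space_shutdown_steps();                // now runs first ...
    // ... so the completion message really is the last thing printed
    std::puts("InnoDB: Shutdown completed; log sequence number ...");
  }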
|
events.
The log line should be added behind the filters.
|
Remove the unused dbug_print_rec() functions because they break
the clang build due to -Wreturn-stack-address
|
Add the history row outside of the compare_record() check. For TRX_ID
versioning we have to make can_compare_record fail to force the InnoDB
update which adds the history row; and there, in ha_innobase::update_row(),
an additional "row changed" check is where we force the history row anyway.
|
self-reference in system versioned table
The first part of the fix (row0mysql.cc) addresses external columns when
adding a history row on a referential action. The full data must be
retrieved before the row is inserted.
The second part of the fix (the rest) avoids a duplicate primary key error
between the history row generated on the referential action and the history
row generated by the SQL command. Both the command and the referential
action can happen on the same table, since the foreign key can be a
self-reference (parent and child tables are the same). Moreover, the
self-reference can refer to multiple rows when the key is non-unique. In
such a case the history is generated by the referential action that
occurred on the first row, but it processed all rows matched by the key.
The second round is when the next row is processed by the command but its
history already exists. In that case we check the TRX_ID of the existing
history row and, if it is the same, we assume the above situation and skip
adding one more history row or failing the command.
|