commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
Date: Tue Nov 14 06:34:19 2017 +0200
Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from flushing table definition cache)
Make ANALYZE TABLE stop flushing affected tables from the table
definition cache, which has the effect of not blocking any subsequent
new queries involving the table if there's a parallel long-running
query:
- new table flag HA_ONLINE_ANALYZE, return it for InnoDB and TokuDB
tables;
- in mysql_admin_table, if we are performing ANALYZE TABLE, and the
table flag is set, do not remove the table from the table
definition cache, do not invalidate query cache;
- in partitioning handler, refresh the query optimizer statistics
after ANALYZE if the underlying handler supports HA_ONLINE_ANALYZE;
- new test cases main.percona_nonflushing_analyze_debug and
parts.percona_nonflushing_analyze_debug, and a supporting debug sync
point.
For TokuDB, this change exposes bug TDB-83 (Index cardinality stats
updated for handler::info(HA_STATUS_CONST), not often enough for
tokudb_cardinality_scale_percent). TokuDB may return different
rec_per_key values depending on dynamic variable
tokudb_cardinality_scale_percent value. The server does not have a way
of knowing that changing this variable invalidates the previous
rec_per_key values in any opened table shares, and so does not call
info(HA_STATUS_CONST) again. Fix by updating rec_per_key for both
HA_STATUS_CONST and HA_STATUS_VARIABLE. This also forces a re-record
of tokudb.bugs.db756_card_part_hash_1_pick, with the new output
seeming to be more correct.
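For illustration, the intended behavior can be seen with a session
pattern like this (table and column names are hypothetical):

    -- session 1: a long-running query keeps the table open
    SELECT COUNT(*) FROM t1 WHERE slow_predicate(c1);
    -- session 2: no longer flushes t1 from the table definition cache
    ANALYZE TABLE t1;
    -- session 3: not blocked behind session 1 anymore
    SELECT * FROM t1 LIMIT 1;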
On large hard disks (> 2TB), the plugin won't function correctly, always
showing 2 TB of available space due to integer overflow. Upgrade table
fields to bigint to resolve this problem.
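A hedged sketch of the kind of schema change meant here (the actual
plugin table and column names are not given in this message; these are
placeholders):

    ALTER TABLE disk_space_stats
      MODIFY total_bytes BIGINT UNSIGNED NOT NULL,
      MODIFY free_bytes  BIGINT UNSIGNED NOT NULL;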
done by Slave
The prefix of the error log message produced when a BINLOG statement
fails to apply is corrected to be the SQL command name.
A CHECK constraint is checked by check_expression(), which walks its
items and reaches Item_field::check_vcol_func_processor() to check
conformity with the foreign key list.
WITHOUT OVERLAPS is checked for the same conformity in
mysql_prepare_create_table().
Long uniques are already impossible with InnoDB foreign keys. See
ER_CANT_CREATE_TABLE in the test case.
Two accompanying bugs fixed (test main.constraints failed):
1. check->name.str lived on the SP execute mem_root, while the "check"
object itself lives on the SP main mem_root. On the second SP execution
check->name.str contained garbage data. Fixed by allocating from
thd->stmt_arena->mem_root, which is the SP main mem_root.
2. The CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed up with
VCOL_FIELD_REF: VCOL_FIELD_REF is assigned in check_expression() and
then misdetected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
handle_if_exists_options().
Existing cases for MDEV-16932 in main.constraints cover both fixes.
Made the test stable by adding more rows, so that the range scan is cheaper than a table scan.
fil_space_t::freed_ranges: Store ranges of freed page numbers.
fil_space_t::last_freed_lsn: Store the most recent LSN of
freeing a page.
fil_space_t::freed_mutex: Protects freed_ranges, last_freed_lsn.
fil_space_create(): Initialize the freed_range mutex.
fil_space_free_low(): Frees the freed_range mutex.
range_set: Ranges of page numbers.
buf_page_create(): Removes the page from freed_ranges when page
is being reused.
btr_free_root(): Remove the PAGE_INDEX_ID invalidation. Because
btr_free_root() and dict_drop_index_tree() are executed in
the same atomic mini-transaction, there is no need to
invalidate the root page.
buf_release_freed_page(): Split from buf_flush_freed_page().
Skips any I/O.
buf_flush_freed_pages(): Get the freed ranges from the tablespace and
write punch holes or zeroes for the freed ranges.
buf_flush_try_neighbors(): Handles the flushing of freed ranges.
mtr_t::freed_pages: Variable to store the list of freed pages.
mtr_t::add_freed_pages(): To add freed pages.
mtr_t::clear_freed_pages(): To clear the freed pages.
mtr_t::m_freed_in_system_tablespace: Variable to indicate whether a page
has been freed in the system tablespace.
mtr_t::m_trim_pages: Variable to indicate whether the space has been trimmed.
mtr_t::commit(): Add the freed page and update the last freed lsn
in the tablespace and clear the tablespace freed range if space is
trimmed.
file_name_t::freed_pages: Store the freed pages during recovery.
file_name_t::add_freed_page(), file_name_t::remove_freed_page(): To
add and remove freed page during recovery.
store_freed_or_init_rec(): Store or remove the freed pages while
encountering FREE_PAGE or INIT_PAGE redo log record.
recv_init_crash_recovery_spaces(): Add the freed pages encountered
during recovery to the respective tablespace.
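The range_set idea, ranges of page numbers that merge when adjacent,
can be sketched generically as follows; this is an illustrative data
structure, not the server's actual implementation:

    #include <cstdint>
    #include <iterator>
    #include <map>

    // Maps the first page number of a range to one-past-its-last page.
    class range_set_sketch {
      std::map<uint32_t, uint32_t> ranges;
    public:
      void add(uint32_t page) {
        auto it = ranges.upper_bound(page);
        if (it != ranges.begin()) {
          auto prev = std::prev(it);
          if (page < prev->second) return;      // already stored
          if (page == prev->second) {           // extend the previous range
            prev->second = page + 1;
            if (it != ranges.end() && it->first == prev->second) {
              prev->second = it->second;        // merge with the next range
              ranges.erase(it);
            }
            return;
          }
        }
        if (it != ranges.end() && it->first == page + 1) {
          uint32_t end = it->second;            // prepend to the next range
          ranges.erase(it);
          ranges.emplace(page, end);
        } else {
          ranges.emplace(page, page + 1);       // new single-page range
        }
      }
      bool contains(uint32_t page) const {
        auto it = ranges.upper_bound(page);
        return it != ranges.begin() && page < std::prev(it)->second;
      }
    };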
As the cmake manual says:
If a sequential execution of multiple commands is required, use multiple
``execute_process()`` calls with a single ``COMMAND`` argument.
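A minimal sketch of the recommended pattern (the commands here are
placeholders using cmake -E subcommands):

    execute_process(COMMAND ${CMAKE_COMMAND} -E make_directory gen)
    execute_process(COMMAND ${CMAKE_COMMAND} -E touch gen/stamp)

rather than passing both steps as two COMMAND arguments of a single
execute_process() call, where the commands run concurrently as a
pipeline with their output piped together.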
Field_inet6::sort_string upon INSERT into RocksDB table
For INET6 columns the values are stored as BINARY and returned to the
client in TEXT format. For RocksDB, indexes store mem-comparable images
of columns, so use pack_length() to store the mem-comparable form for
INET6 columns. This also remains consistent with CHAR columns.
For reads, the buf_pool.page_hash is protected by buf_pool.mutex or
by the hash_lock. There is no need to compute or acquire hash_lock
if we are not modifying the buf_pool.page_hash.
However, the buf_pool.page_hash latch must be held exclusively
when changing buf_page_t::in_file(), or if we desire to prevent
buf_page_t::can_relocate() or buf_page_t::buf_fix_count()
from changing.
rw_lock_lock_word_decr(): Add a comment that explains the polling logic.
buf_page_t::set_state(): When in_file() is to be changed, assert that
an exclusive buf_pool.page_hash latch is being held. Unfortunately
we cannot assert this for set_state(BUF_BLOCK_REMOVE_HASH) because
set_corrupt_id() may already have been called.
buf_LRU_free_page(): Check buf_page_t::can_relocate() before
acquiring the hash_lock.
buf_block_t::initialise(): Also initialize page.buf_fix_count().
buf_page_create(): Initialize buf_fix_count while not holding
any mutex or hash_lock. Acquire the hash_lock only for the
duration of inserting the block to the buf_pool.page_hash.
buf_LRU_old_init(), buf_LRU_add_block(),
buf_page_t::belongs_to_unzip_LRU(): Do not assert buf_page_t::in_file(),
because buf_page_create() will invoke buf_LRU_add_block()
before acquiring hash_lock and buf_page_t::set_state().
buf_pool_t::validate(): Rely on the buf_pool.mutex and do not
unnecessarily acquire any buf_pool.page_hash latches.
buf_page_init_for_read(): Clarify that we must acquire the hash_lock
upfront in order to prevent a race with buf_pool_t::watch_remove().
wrappers of Spider
This regression was introduced in
commit dd77f072f9338f784d052ae6f46a04c55eabe3fd (MDEV-22841).
ut_filename_hash(): Add better casts to please the compiler:
warning C4307: '*': integral constant overflow
This regression was introduced in
commit dd77f072f9338f784d052ae6f46a04c55eabe3fd (MDEV-22841).
That doesn't support STRING(APPEND ..)
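For reference, string(APPEND) was only added in CMake 3.4; the portable
spelling on older CMake is plain re-assignment:

    # string(APPEND var " tail") requires CMake >= 3.4
    set(var "${var} tail")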
select with GROUP BY and GROUP_CONCAT
In the merge_buffers phase of sorting, the sort buffer size is divided
between the number of chunks. Each chunk has a start and end position
(m_buffer_start and m_buffer_end). We then read as many records as fit
in this buffer from a chunk of the file.
The issue was that we reset the end of the buffer (m_buffer_end) to the
number of bytes read; with dynamically sized sort keys it was then
possible that not even one key could be accommodated for a chunk of the
file.
The fix is to not reset the end of the buffer for a chunk of the file.
slave during UPDATE
Add the missing call to handler->prepare_for_insert() in Rows_log_event::do_apply_event().
MONITOR_SRV_MEM_VALIDATE_MICROSECOND, MEM_PERIODIC_CHECK,
SRV_MASTER_MEM_VALIDATE_INTERVAL: Remove. These were unused
ever since UNIV_MEM_DEBUG was removed.
MONITOR_SRV_PURGE_MICROSECOND: Remove. This was always unused.
CREATE PROCEDURE did not detect unknown SP variables in assignments like this:
SET var=a_long_var_name_with_a_typo;
The error happened only at SP execution time, and only if the control
flow reached the erroneous statement.
Fixing most expressions to detect unknown identifiers.
This includes simple subqueries without tables:
- Query specification: SELECT list, WHERE,
HAVING (inside aggregate functions) clauses, e.g.
SET var= (SELECT unknown_ident+1);
SET var= (SELECT 1 WHERE unknown_identifier);
SET var= (SELECT 1 HAVING SUM(unknown_identifier));
- Table value constructor: VALUES clause, e.g.:
SET var= (VALUES(unknown_ident));
Note, in some more complex subquery cases unknown variables are still not detected
(this will be fixed separately):
- Derived tables:
SET a=(SELECT unknown_ident FROM (SELECT 1 AS alias) t1);
SET res=(SELECT * FROM t1 LEFT OUTER JOIN (SELECT unknown_ident) t2 USING (c1));
- CTE:
SET a=(WITH cte1 (a) AS (SELECT unknown_ident) SELECT * FROM cte1);
SET a=(WITH cte1 (a,b) AS (VALUES (unknown,2),(3,4)) SELECT * FROM cte1);
SET a=(WITH cte1 (a,b) AS (VALUES (1,2),(3,4)) SELECT unknown_ident FROM cte1);
- SELECT .. GROUP BY unknown_identifier
- SELECT .. ORDER BY unknown_identifier
- HAVING with an unknown identifier outside of any aggregate functions:
SELECT .. HAVING unknown_identifier;
The problematic mutex is dict_sys.mutex.
The idea of the patch: unlink() the file under that mutex while its
descriptor is still open. This way unlink() will be fast, and the
actual file removal will happen on close().
And close() will be called outside of dict_sys.mutex.
This should be safe against a crash that may happen between unlink()
and close(): the file will be removed by the OS anyway.
The same applies to both *nix and Windows.
I created and removed a 4G file on some NVMe SSD on ext4:
write(3, "\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1"..., 1048576) = 1048576 <0.000519>
fdatasync(3) = 0 <3.533763>
close(3) = 0 <0.000011>
unlink("file") = 0 <0.411563>
write(3, "\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1"..., 1048576) = 1048576 <0.000520>
fdatasync(3) = 0 <3.544938>
unlink("file") = 0 <0.000029>
close(3) = 0 <0.407057>
Such systems can benefit from this patch.
fil_node_t::detach(): closes fil_node_t but not file handle,
returns that file handle
fil_node_t::prepare_to_close_or_detach(): 'closes' fil_node_t
fil_node_t::close_to_free(): new argument detach_handle
fil_system_t::detach(): now can detach file handles
fil_delete_tablespace(): now can detach file handles
row_drop_table_for_mysql(): performs actual file removal
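The pattern itself is plain POSIX and can be sketched like this (the
mutex and function names are illustrative, not the server's):

    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t dict_mutex = PTHREAD_MUTEX_INITIALIZER;

    void drop_file(const char *path, int fd)
    {
      pthread_mutex_lock(&dict_mutex);
      unlink(path);                 /* fast: the name goes away, but the
                                       blocks stay allocated while fd is
                                       still open */
      pthread_mutex_unlock(&dict_mutex);
      close(fd);                    /* slow: actual space reclamation,
                                       now outside the mutex */
    }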
To install Spider one can simply drop a /etc/mysql/conf.d/spider.cnf like
[mariadb]
plugin-load-add=ha_spider.so
This is automatically generated and installed when the plugin is
correctly registered in plugin.cmake with its own component name. Many
other plugins, such as Connect and RocksDB, install in the same way.
This solves MDEV-19917, as merely adding or removing spider.cnf
automatically installs or uninstalls the plugin.
Remove the overly complex and unnecessary install.sql from Spider;
it should not be needed in modern times anymore.
With this change there is no need for an uninstall.sql either.
- Recommend max_allowed_packet=1G, which is the same as the default
client value.
- Remove thread_concurrency, which was removed in 10.5.
- Remove the query cache; it is not recommended practice anymore.
- Remove binlog_* settings; those should not be recommended too easily,
but rather require the database administrator to read up on them.
- Remove the chroot setting; not relevant in the modern container era.
- Show an explicit innodb_buffer_pool_size example, as the most likely
thing a database administrator should change.
- Don't recommend rate limiting in the slow log; logging only one query
in a thousand is not useful in the basic case, hence a bad example.
- Install the example configs in /usr/share/mysql.
- Use the correct path /run/ instead of /var/run/.
Split the big my.cnf into multiple smaller files with the same filenames
and contents as official Debian/Ubuntu packaging has.
The config contents stay the same, apart from the following additions
which the original MariaDB upstream configs had and which probably
need to be kept:
- lc-messages=en_US and skip-external-locking in the server config
Settings the original MariaDB upstream had that are seemingly
unnecessary and thus removed:
- port=3306 removed from the client config
- log_warnings=2 removed from the server config
Also adopt the update-alternatives system using
mysql-common/configure-symlinks. This aligns with downstream
Debian/Ubuntu packaging.
Analysis:
========
Lists of values provided for "replicate_ignore_table" and
"replicate_do_table" are stored in a HASH. When an empty list is
provided, the HASH structure doesn't get initialized. The existing code
treats an empty element list as an error and tries to clean up the
uninitialized HASH, which results in the above MSAN issue.
Fix:
===
The cleanup should be initiated only when there is an error while
parsing the 'replicate_do_table' or 'replicate_ignore_table' list and
the HASH is in an initialized state. Otherwise, for an empty list,
simply return success.
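A minimal sketch of the guard, assuming MariaDB's my_hash_inited()/
my_hash_free() helpers and a hypothetical surrounding parse function:

    if (parse_error)
    {
      if (my_hash_inited(&do_table_hash))
        my_hash_free(&do_table_hash); /* free only what was initialized */
      return 1;
    }
    return 0;                         /* empty list: success, nothing to free */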
This corrects build failures on ppc64{,le} with the
WITH_EMBEDDED_SERVER option enabled.
MDEV-22641 added an unusual case in which the same object file was
included twice with a different function definition. The original
cmake/merge_archives_unix.cmake did not tolerate such eventualities.
So we move to the highest-voted answer on Stack Overflow for the
merging of static libraries:
https://stackoverflow.com/questions/3821916/how-to-merge-two-ar-static-libraries-into-one
Thin archives generated compile failures; the libtool mechanism would
have been another dependency, using .la files that are not part of
normal cmake output; and the straight Apple mechanism of libtool with
static archives also failed on Linux.
This leaves the MRI script mechanism, which is what this change
implements.
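For reference, an ar MRI script for merging static archives looks like
this (library names here are illustrative):

    create libmerged.a
    addlib libserver.a
    addlib libembedded.a
    save
    end

fed to the archiver as: ar -M < merge.mri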
Change how lookup for the "auto" PSI_memory_keys is done.
Look up filename hashes (integers) instead of C strings.
Generate these hashes at compile time with constexpr,
rather than at runtime.
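A compile-time filename hash can be sketched like this; the hash
function below (FNV-1a) is illustrative, not necessarily the one the
server uses:

    #include <cstdint>

    // FNV-1a over a NUL-terminated string, usable in constant expressions.
    static constexpr uint64_t fnv1a(const char *s,
                                    uint64_t h = 1469598103934665603ULL)
    {
      return *s ? fnv1a(s + 1, (h ^ uint64_t(uint8_t(*s))) * 1099511628211ULL)
                : h;
    }

    // The lookup key becomes an integral constant computed by the compiler,
    // so runtime lookups compare integers instead of C strings.
    static_assert(fnv1a("buf0buf.cc") != fnv1a("ha_innodb.cc"),
                  "distinct names should hash differently here");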
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Let us invoke the debug member functions of mtr_t directly.
mtr_t::memo_contains(): Change the parameter type to
const rw_lock_t&. This function cannot be invoked on
buf_block_t::lock.
The function mtr_t::memo_contains_flagged() is intended to be invoked
on buf_block_t* or rw_lock_t*, and it, together with
mtr_t::memo_contains_page_flagged(), is the way to check whether
a buffer pool page has been latched within a mini-transaction.
xdes_get_state(), fseg_get_nth_frag_page_no(),
fseg_find_free_frag_page_slot(), fseg_find_last_used_frag_page_slot(),
fseg_get_n_frag_pages(), fseg_n_reserved_pages_low(),
fseg_print_low(): Remove the unused parameter mtr, and add
a const qualifier to the pointer to the buffer pool page frame.
This should have been part of
commit 70d4e55db94b62aa6cfcecdc25054554e7d44d18.
debug_sync_set_action(): Declare the dummy function inline,
to silence a warning about declared-but-unused static function.
This amends commit 3ccd6766d0e47b4331505789eb114c5cf1534ecf.
enabled
perform it after the plan refinement phase is done
Introduce a function to enable key reads for indexes, and use this
function once all decisions of the plan refinement phase are done.
srv_n_page_hash_locks: Increase from 16 to 64. Before MDEV-15058,
we used to have the buf_pool.page_hash partitioned to each instance.
rw_lock_lock_word_decr(): Sleep a little in the spinloop.
rw_lock_s_lock_low(): Correct a comment. The function does perform
spinning.
This improves scalability in read-only workloads on a 32-CPU system
when the number of concurrent connections exceeds the CPU core count.
Thanks to Axel Schwenke for running benchmarks.
The issue here is that the character set for Sort_param::tmp_buffer is
cleared when bzero() is done on Sort_param. Make sure to set the
character set explicitly in the constructor for tmp_buffer.
reduce the amount of engine-specific code in the server,
particularly as it does not serve any purpose now.
It may be needed for the VP engine,
to be reconsidered in MDEV-7795.
Disallow BIT_AND(), BIT_OR(), BIT_XOR() for data types GEOMETRY and INET6,
as they cannot return any useful integer values.
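For illustration (hypothetical table; the exact error text is not
quoted here):

    CREATE TABLE t1 (a INET6);
    SELECT BIT_AND(a) FROM t1;  -- now rejected with an error instead of
                                -- returning a meaningless integer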
size of ib_logfile0 should be bigger than 200 kB * innodb_thread_concurrency.
Correct the log message. IMO we shouldn't be very precise in that
message, as the formula behind it is not trivial.
Also performed a little cleanup.
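For example, with innodb_thread_concurrency=32 the threshold works out
to 200 kB * 32 = 6400 kB, i.e. the message would ask for a redo log
larger than roughly 6.4 MB.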
buf_LRU_make_block_young(): Merge with buf_page_make_young().
buf_pool_check_no_pending_io(): Remove. Replaced with
buf_pool.any_io_pending() and buf_pool.io_pending(),
which do not unnecessarily acquire buf_pool.mutex.
buf_pool_t::init_flush[]: Use atomic access, so that
buf_flush_wait_LRU_batch_end() can avoid acquiring buf_pool.mutex.
buf_pool_t::try_LRU_scan: Declare as bool.
When MDEV-22769 introduced srv_shutdown_state=SRV_SHUTDOWN_INITIATED in
commit efc70da5fd0459ff44153529d13651741cc32bc4
we forgot to adjust a few checks for SRV_SHUTDOWN_NONE.
In the initial shutdown step, we are waiting for the background
DROP TABLE queue to be processed or discarded. At that time,
some background tasks (such as buffer pool resizing or dumping
or encryption key rotation) may be terminated, but others must
remain running normally.
srv_purge_coordinator_suspend(), srv_purge_coordinator_thread(),
srv_start_wait_for_purge_to_start(): Treat SRV_SHUTDOWN_NONE
and SRV_SHUTDOWN_INITIATED equally.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This bug is the same as bug MDEV-17024. The crashes caused by these
bugs were due to premature cleanups of the unit specifying a recursive
CTE, which happened in some cases when there were several outer
references to the same recursive CTE.
The problem of premature cleanups for recursive CTEs could already be
resolved by the correction in TABLE_LIST::set_as_with_table()
introduced in this patch. All other changes introduced by the patches
for MDEV-17024 and MDEV-22748 guarantee that these cleanups are
performed as soon as possible: when the select containing the last
outer reference to a recursive CTE is being cleaned up, the
specification of the recursive CTE is cleaned up as well.