joins
Part#2: take into account that the join nest that we are marking as constant
might already have constant tables in it. Don't count these tables twice.
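For illustration only (hypothetical tables, not taken from the patch), the
affected shape is an outer join whose inner nest contains tables the optimizer
already treats as constant, e.g. empty tables:

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);   -- left empty, so it is detected as a constant table
CREATE TABLE t3 (a INT);   -- left empty, so it is detected as a constant table
INSERT INTO t1 VALUES (1),(2);
EXPLAIN
SELECT * FROM t1 LEFT JOIN (t2 JOIN t3 ON t2.a = t3.a) ON t1.a = t2.a;
-- when the nest (t2,t3) is marked constant, its already-constant tables
-- must not be counted a second time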
|
joins
Update .result files after the previous patch
|
joins
Continuation of the fix: make the condition selectivity estimation use the
right estimate, too.
|
Reuse the fix for MDEV-17518 here, too.
|
joins
|
Added only test case because the bug was fixed by the patch for mdev-17382.
|
The syntax error happened because we had not implemented a different print for
percentile functions. The syntax is a bit different when we use percentile functions
as window functions in comparison to normal window functions.
Implemented a separate print function for percentile functions.
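A short sketch of the syntax difference (hypothetical table and columns):
percentile functions take a WITHIN GROUP clause in addition to the OVER
clause, so they need their own print code.

-- ordinary window function
SELECT name, AVG(score) OVER (PARTITION BY class_nr) FROM t1;
-- percentile function used as a window function
SELECT name,
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY score)
                            OVER (PARTITION BY class_nr)
FROM t1;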
|
in Field_iterator_table::create_item
When an IN predicate is converted to an IN subquery we have to ensure that
every item from the select list of the subquery has a name and that this name
is unique across the select list.
This was not guaranteed by the code before the patch for MDEV-17222.
If the name of an item of the select list was not set, which happened
for binary constants, then the server crashed. If the first row in the IN
list contained the same constant in two different positions then the server
returned an error message.
This was fixed by providing all constants in the first row of the IN list
with generated names.
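A hedged sketch of the kind of statement involved (hypothetical table and
values; the IN-to-subquery conversion is governed by the
in_predicate_conversion_threshold variable, which another commit in this log
notes is only settable in debug builds at this point):

SET in_predicate_conversion_threshold = 2;
CREATE TABLE t1 (a VARBINARY(10), b VARBINARY(10));
SELECT * FROM t1
WHERE (a, b) IN ((0x61, 0x61), (0x62, 0x63), (0x64, 0x65));
-- 0x61 etc. are unnamed binary constants, and the first row repeats
-- the same constant in two positions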
|
utf8mb4 by default"
|
and join_cache_level=6
This bug was fixed by the patch for mdev-17382 applied to 5.5.
|
derived table / view by equality
Now rows of a materialized derived table are always put into a
temporary table before the join operation. If BNLH is used to join this
table with the result of a partial join then both operands of the
join are actually put into main memory. In most cases this is not
efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation. However this kind of data flow is not supported
yet.
Fixed by not allowing usage of the hash join algorithm to join a materialized
derived table if it is joined by an equality predicate of the form
f=e where f is a field of the derived table.
Change for the test case in 10.3: splitting must be turned off to preserve
the EXPLAIN output.
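A rough sketch of the affected query shape (hypothetical tables; assumes
join_cache_level is set high enough to allow block hash join):

SET join_cache_level = 4;
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (a INT, c INT);
SELECT *
FROM t1, (SELECT a, SUM(c) AS s FROM t2 GROUP BY a) AS dt
WHERE dt.a = t1.a;   -- dt is joined by an equality on its field,
                     -- so hash join is not used for it any more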
|
truncating a temporary table
TRUNCATE expects only one TABLE instance (which is used by TRUNCATE
itself) to be open. However this requirement wasn't enforced after
"MDEV-5535: Cannot reopen temporary table".
Fixed by closing unused table instances before performing TRUNCATE.
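A hypothetical sequence of the kind affected (not taken from the commit): the
temporary table has been opened more than once in the connection before it is
truncated.

CREATE TEMPORARY TABLE t1 (a INT);
INSERT INTO t1 VALUES (1),(2);
SELECT * FROM t1 AS x JOIN t1 AS y;  -- reopens the temporary table, so several TABLE instances exist
TRUNCATE TABLE t1;                   -- the unused instances have to be closed first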
|
bitmap_is_set(table->read_set, field_index))' fails upon attempt to update virtual column on partitioned versioned table
When using buffered sort in `UPDATE`, keyread is used. In this case,
`TABLE::update_virtual_field` should be aborted, but it actually isn't,
because it is called not with the top-level handler, but with the one that
is actually going to access the disk. Here the problem shows up with
partitioning, so the solution is to recursively mark all the
underlying partition handlers for keyread.
* ha_partition: update keyread state for child partitions
Closes #800
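A rough sketch of the scenario (hypothetical table; the exact DDL and the
plan that triggers the buffered sort may differ):

CREATE TABLE t1 (
  pk INT PRIMARY KEY,
  a INT,
  v INT AS (a + 1) VIRTUAL
) WITH SYSTEM VERSIONING
  PARTITION BY SYSTEM_TIME (
    PARTITION p_hist HISTORY,
    PARTITION p_cur CURRENT
  );
INSERT INTO t1 (pk, a) VALUES (1, 10), (2, 20);
UPDATE t1 SET a = a + 1 ORDER BY a LIMIT 10;  -- ordering by the updated column forces buffering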
|
The function JOIN_TAB::choose_best_splitting() did not take into account
that for some tables whose fields were used in the GROUP BY list of
the specification of a splittable materialized derived table there might exist
no elements in the array ext_keyuses_for_splitting.
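For context, a hedged sketch of a query where the split-materialized
optimization applies (hypothetical tables): the derived table is grouped and
is joined on its GROUP BY column.

CREATE TABLE t1 (a INT, KEY (a));
CREATE TABLE t2 (a INT, b INT, KEY (a));
SELECT *
FROM t1
JOIN (SELECT a, COUNT(*) AS cnt FROM t2 GROUP BY a) AS dt
  ON dt.a = t1.a;   -- dt is a splittable materialized derived table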
|
The optimizer erroneously allowed the use of join cache when joining a
splittable materialized table while also using the splitting optimization.
As a consequence, in some rare cases the server returned wrong result
sets for queries with materialized derived tables.
This patch allows either using join cache without the splitting
technique when materializing a splittable derived table, or splitting
without join cache when joining such a table. The costs of these
alternatives are compared and the best variant is chosen.
|
Field_iterator_table_ref::set_field_iterator
Several functions that processed different prepared statements missed
the DT_INIT flag in the last parameter of their open_normal_and_derived_tables()
calls. This made context analysis of derived tables dependent on the order in
which the derived tables were processed by mysql_handle_derived(). This
order was induced by the order of SELECTs in all_select_list.
In 10.4 the order of SELECTs in all_select_list became different and the lack
of the DT_INIT flag in some open_normal_and_derived_tables() calls became
critical, as some derived tables were not identified as such.
|
a table value constructor shows wrong number of rows
This is another attempt to fix this bug. The previous patch did not take
into account that a transformation for ALL/ANY subqueries could be applied
to the materialized table that wrapped the table value constructor used as
the specification of the subselect in an ALL/ANY subquery. In this case
the result of the derived table used a sink of the class select_subselect
rather than of the class select_unit. Thus the previous fix could cause
memory overwrites when running EXPLAIN for queries with table value
constructors in ALL/ANY subselects.
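If I read the commit correctly, the affected queries look roughly like this
(hypothetical table):

CREATE TABLE t1 (a INT);
EXPLAIN
SELECT * FROM t1 WHERE a > ALL (VALUES (10), (20), (30));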
|
Implemented according to the SQL:2008 standard specification.
The check_constraints table is used for fetching metadata about
the constraints defined for tables in all databases.
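A small usage sketch (hypothetical table and constraint name):

CREATE TABLE t1 (a INT, CONSTRAINT a_positive CHECK (a > 0));
SELECT * FROM information_schema.check_constraints
WHERE constraint_schema = DATABASE();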
|
The testcase needs to set in_predicate_conversion_threshold which
is only available in debug builds (this is subject to further discussion).
|
This patch always provides the columns of the temporary table used for
materialization of a table value constructor with names.
Before this patch these names were always borrowed from the items
of the first row of the table value constructor. When this row
contained expressions and the expressions were not named, this could cause
different kinds of problems. In particular, if the TVC is used as the
specification of a derived table this could cause a crash.
The names given to the expressions used in a TVC are the same as those
given to the columns of the result set from the corresponding SELECT.
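A minimal sketch of the problematic shape (hypothetical values): a TVC with
unnamed expressions in its first row, used as the specification of a derived
table.

SELECT * FROM (VALUES (1 + 1, CONCAT('a', 'b')), (3, 'c')) AS dt;
-- the columns of dt now get generated names, as a plain SELECT would name them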
|
value constructor shows wrong number of rows
If the specification of a derived table contained a table value constructor
then the optimizer incorrectly estimated the number of rows in the derived
table. This happened because the optimizer did not take into account the
number of rows in the constructor. The wrong estimate could lead to choosing
inefficient execution plans.
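For illustration (hypothetical values), the estimate now reflects the number
of rows in the constructor:

EXPLAIN
SELECT * FROM (VALUES (1), (2), (3)) AS dt;
-- the derived table should be estimated at 3 rows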
|
Problem:
If we run this query on a server compiled with -DWITH_ASAN=ON
CREATE TABLE t1 (i INT);
SET debug_dbug="+d,test_completely_invisible,test_invisible_index";
CREATE TABLE t2 LIKE t1;
it generates a stack-buffer-overflow error.
==8922==ERROR: AddressSanitizer: stack-buffer-overflow on address #ADDR
Analysis:
The error is generated on this line:
if (((*last)=new list_node(info, &end_of_list)))
Here info is our Key*, &end_of_list is a global variable and last == #ADDR,
so last is the suspicious variable. last is the member of alter_info
->key_list. Now the question is how this key_list->last ends up pointing to
a wrong/different stack variable. In the backtrace we can see that key_list
is generated in mysql_create_table_like_table by calling the
mysql_prepare_alter_table function, and a dummy key_list is created by
mysql_create_like_table. At the end of mysql_prepare_alter_table we call
alter_info->key_list.swap(new_key_list);
So there are two options: either key_list is empty or it is not. If it is
not empty there is no issue, since the last pointer is replaced by a pointer
allocated on thd->mem_root. The problem arises when key_list is empty: the
swap replaces the dummy last pointer with a pointer declared inside
mysql_prepare_alter_table, which is wrong.
Solution:
Do not swap when the list does not have any elements.
|
The bug was in the code of JOIN::check_for_splittable_materialized()
where the structures describing the fields of a materialized derived
table that potentially could be used in split optimization were built.
As a result of this bug some fields that were not usable for splitting
were detected as usable. This could trigger crashes further in
st_join_table::choose_best_splitting().
|
This problem was earlier fixed by the patch for MDEV-15340.
Adding tests only.
|
the equal expression optimizer
|
character sets
Field_varstring::sql_type() did not calculate character length correctly.
Using char_length() instead of the bad code.
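A hedged example of where the reported type matters (hypothetical table):
with a multi-byte character set the length must be counted in characters,
not bytes.

CREATE TABLE t1 (v VARCHAR(10) CHARACTER SET utf8mb4);
SHOW CREATE TABLE t1;  -- the column should be reported as varchar(10), not varchar(40)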
|
In InnoDB, an INSERT will not create an explicit lock object. Instead,
the inserted record is initially implicitly locked by the transaction
that wrote its trx_t::id to the hidden system column DB_TRX_ID.
(Other transactions would check if DB_TRX_ID is referring to a
transaction that has not been committed.)
If a record was inserted in the current transaction, it would be
implicitly locked by that transaction. Only if some other transaction
is requesting access to the record, the implicit lock should be
converted to an explicit one, so that the waits-for graph can be
constructed for detecting deadlocks and lock wait timeouts.
Before this fix, InnoDB would convert implicit locks to
explicit ones, even if no conflict exists.
lock_rec_convert_impl_to_expl(): Return whether caller_trx
already holds an explicit lock that covers the record.
row_vers_impl_x_locked_low(): Avoid a lookup if the record matches
caller_trx->id.
lock_trx_has_expl_x_lock(): Renamed from lock_trx_has_rec_x_lock().
row_upd_clust_step(): In a debug assertion, check for implicit lock
before invoking lock_trx_has_expl_x_lock().
rw_trx_hash_t::find(): Make do_ref_count a mandatory parameter.
Assert that trx_id is not 0 (the caller should check it).
trx_sys_t::is_registered(): Only invoke find() if id != 0.
trx_sys_t::find(): Add the optional parameter do_ref_count.
lock_rec_queue_validate(): Avoid lookup for trx_id == 0.
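A hedged two-session sketch of when the conversion matters (hypothetical
table):

-- session 1
CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
BEGIN;
INSERT INTO t1 VALUES (1, 1);     -- no lock object; the row is implicitly locked via DB_TRX_ID
UPDATE t1 SET b = 2 WHERE a = 1;  -- same transaction: no conversion to an explicit lock is needed

-- session 2, while session 1 is still open
BEGIN;
UPDATE t1 SET b = 3 WHERE a = 1;  -- conflicting access: the implicit lock is converted
                                  -- to an explicit one and this statement waits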
|
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.
TODO:
MDEV-16301 IB: use real table name for error messages on ALTER
Closes tempesta-tech/mariadb#491
Closes #748
|
Problem:
push_handler() created sp_handler_entry instances on THD::main_mem_root,
which is freed only after the SP instructions execution.
So in case of a CONTINUE HANDLER inside a loop (e.g. WHILE) this approach
leaked thread memory on every loop iteration.
Changes:
- Removing sp_handler_entry declaration, it's not really needed.
- Fixing the data type of sp_rcontext::m_handlers from
Dynamic_array<sp_handler_entry*> to Dynamic_array<sp_instr_hpush_jump*>
- Fixing sp_rcontext::push_handler() to push the pointer to
an sp_instr_hpush_jump instance to the handler stack.
This instance contains everything we need.
There is no need to allocate anything else.
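A sketch of the pattern that leaked before this change (hypothetical
procedure):

CREATE PROCEDURE p1()
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < 100000 DO
    BEGIN
      -- before the fix, a handler entry was allocated on THD::main_mem_root
      -- on every iteration and freed only when the procedure finished
      DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET @err = 1;
      SET i = i + 1;
    END;
  END WHILE;
END;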
|
Problem:
push_cursor() created sp_cursor instances on THD::main_mem_root,
which is freed only after the SP instructions loop.
Changes:
- Moving sp_cursor declaration from sp_rcontext.h to sql_class.h
- Deriving sp_instr_cpush from sp_cursor. So now sp_cursor is created
only once (at the SP parse time) and then reused on all loop iterations
- Adding a new method reset() into sp_cursor (and its parent classes)
to reset an sp_cursor instance before reuse.
- Moving former sp_cursor members m_fetch_count, m_row_count, m_found
into a separate class sp_cursor_statistics. This helps to reuse
the code in sp_cursor constructors, and in sp_cursor::reset()
- Adding a helper method sp_rcontext::pop_cursor().
- Adding "THD*" parameter to so_rcontext::pop_cursors() and pop_all_cursors()
- Removing "new" and "delete" from sp_rcontext::push_cursor() and
sp_rconext::pop_cursor().
- Fixing sp_cursor not to derive from Sql_alloc, as it's now allocated
only as a part of sp_instr_cpush (and not allocated separately).
- Moving lex_keeper->disable_query_cache() from sp_cursor::sp_cursor()
to sp_instr_cpush::execute().
- Adding tests
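A sketch of the pattern this change optimizes (hypothetical procedure): a
cursor declared inside a loop body.

CREATE PROCEDURE p2()
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < 100000 DO
    BEGIN
      DECLARE v INT;
      DECLARE c CURSOR FOR SELECT i;
      OPEN c;
      FETCH c INTO v;
      CLOSE c;
      SET i = i + 1;
    END;
  END WHILE;
END;
-- previously a new sp_cursor was allocated on THD::main_mem_root on every
-- iteration; now the sp_instr_cpush created at parse time is reused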
|
NULL values when there is no DEFAULT
The copy and inplace algorithms work similarly for
NULL to NOT NULL conversion in the following cases:
(1) strict sql mode - should give an error.
(2) non-strict sql mode - should give warnings only.
(3) ALTER IGNORE TABLE command - should give warnings only.
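A short sketch of the three cases (hypothetical table):

CREATE TABLE t1 (a INT NULL);
INSERT INTO t1 VALUES (NULL);
SET sql_mode = 'STRICT_TRANS_TABLES';
ALTER TABLE t1 MODIFY a INT NOT NULL;          -- (1) strict mode: error
SET sql_mode = '';
ALTER TABLE t1 MODIFY a INT NOT NULL;          -- (2) non-strict mode: warning, NULL becomes 0
ALTER IGNORE TABLE t1 MODIFY a INT NOT NULL;   -- (3) IGNORE: warnings only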
|