Issue:
------
When a subquery contains UNION, the number of subquery columns
is calculated incorrectly: only the first query block of the
subquery's UNION is considered, array indexing goes out of
bounds, and this is caught by an assert.
Solution:
---------
Sum up the columns from all query blocks of the query
expression.
Change specific to 5.6/5.5:
---------------------------
The "child" points to the last query block of the UNION
(as opposed to 5.7+, where it points to the first member of
the UNION), so "child->master_unit()->first_select()" is used
to reach the first query block of the UNION.
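A minimal sketch of the idea, with simplified stand-in types (not the server's actual st_select_lex): walk every query block of the query expression and sum the column counts, instead of reading only the first block.

    // Simplified stand-in for a query block in a UNION chain.
    struct Query_block
    {
      unsigned column_count;
      Query_block *next;          // next block of the UNION, or nullptr
    };

    // Sum columns over the whole query expression, not just the first block.
    unsigned union_column_count(const Query_block *first)
    {
      unsigned total= 0;
      for (const Query_block *qb= first; qb; qb= qb->next)
        total+= qb->column_count;
      return total;
    }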
PROBLEM
-------
Memory sanitizer reports uninitialized comparisons in
log_in_use(), because strings are compared with memcmp()
instead of strncmp().
FIX
---
Use strncmp() to compare the strings.
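A hedged illustration of the pattern (log_in_use() itself takes different arguments): the buffer holding the second name may contain uninitialized bytes past its terminator, so a bounded string compare must be used rather than a raw memory compare.

    #include <cstring>

    // 'log_name' is NUL-terminated; 'linfo_name' lives in a buffer whose
    // bytes past the terminator may be uninitialized. memcmp() of the
    // full length reads those bytes; strncmp() stops at the first NUL.
    bool same_log_name(const char *log_name, const char *linfo_name,
                       size_t log_name_len)
    {
      return strncmp(log_name, linfo_name, log_name_len) == 0;
    }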
The problem was in calculating the mask used to clear unused null bits
in the case where a full byte is used.
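Sketch of the corner case, with an assumed helper name: when the number of null bits is an exact multiple of 8, the usual ((1 << (bits % 8)) - 1) mask evaluates to 0 and would clear the whole last byte instead of keeping it.

    #include <cstdint>

    // Mask for the last null-bits byte: keep only the used bits.
    // For a fully used byte (bits % 8 == 0) the mask must be 0xFF, not 0.
    uint8_t last_null_byte_mask(unsigned null_bits)
    {
      unsigned rest= null_bits % 8;
      return rest ? (uint8_t) ((1U << rest) - 1) : 0xFF;
    }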
mysql_install_db.exe should not remove the datadir if it was not
created by it.
Remove attempts to track "candidate keys"; use what was already
decided in create_table_impl().
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
Call st_select_lex::update_used_tables in
JOIN::optimize_unflattened_subqueries only when we are sure that the
join has not been cleaned up. The cleanup can happen when we have a
non-merged semi-join and an impossible WHERE, which leads to the
cleanup of the join that holds the non-merged semi-join.
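As a sketch with assumed member names (the real check is against the JOIN's cleanup state):

    // Simplified stand-ins; 'cleaned' marks a JOIN that was already
    // cleaned up (e.g. impossible WHERE with a non-merged semi-join).
    struct JOIN { bool cleaned; };
    struct Select_lex
    {
      JOIN *join;
      void update_used_tables() { /* recompute used tables */ }
    };

    static void maybe_update_used_tables(Select_lex *sel)
    {
      if (sel->join && !sel->join->cleaned)  // only while the JOIN is alive
        sel->update_used_tables();
    }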
nominated as PK
Problem:
========
Server fails to notify the engine, by not setting ADD_PK_INDEX and
DROP_PK_INDEX, when there is
i) a change in the candidate for primary key, or
ii) a new candidate for primary key.
Fix:
====
Server now sets ADD_PK_INDEX and DROP_PK_INDEX while doing the ALTER
for the problematic cases above.
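A sketch of the intended flag setting, with assumed flag constants modeled on Alter_inplace_info (values are illustrative only):

    enum alter_flags
    {
      ADD_PK_INDEX=  1U << 0,
      DROP_PK_INDEX= 1U << 1
    };

    // When the candidate for the primary key changes, or a new candidate
    // appears, both the drop of the old PK and the add of the new one
    // must be signalled to the engine.
    unsigned pk_notification_flags(bool had_pk_candidate,
                                   bool candidate_changed)
    {
      unsigned flags= 0;
      if (candidate_changed)
      {
        if (had_pk_candidate)
          flags|= DROP_PK_INDEX;
        flags|= ADD_PK_INDEX;
      }
      return flags;
    }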
32 bit int
The row-based slave applier could not parse the table id correctly
when the value exceeded the maximum of a 32-bit unsigned int.
The reason turned out to be that the placeholder for the parsed value
was sized as 4 bytes.
The type is fixed to ulonglong.
Additionally, the patch works around Rows_log_event::m_table_id being
4 bytes on 32-bit platforms: when last_table_id overflows the 4-byte
maximum, the zero value for m_table_id is never generated and the
first wrapped-around value is one, thanks to excluding UINT_MAX32 + 1
from TABLE_SHARE::table_map_id.
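A sketch of the wrap-around guard (assumed function shape, not the server's code): the 64-bit counter skips any value whose low 32 bits are zero, so a 4-byte m_table_id never sees the reserved value 0 and the first wrapped-around id is 1.

    #include <cstdint>

    static const uint64_t UINT_MAX32= 0xFFFFFFFFULL;

    uint64_t next_table_map_id(uint64_t &last_table_id)
    {
      ++last_table_id;
      if ((last_table_id & UINT_MAX32) == 0)  // would truncate to 0 on 32 bit
        ++last_table_id;                      // skip it: wrapped ids start at 1
      return last_table_id;
    }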
increase to 1M
In the function QUICK_RANGE_SELECT::init_ror_merged_scan we create a
separate handler if the handler in head->file cannot be reused. The
flag free_file tells us whether we have a separate handler or not.
There are cases where we create a handler and then hit a failure
(e.g. a concurrent ALTER), after which we have to revert to the
original handler. The code does that, but it does not reset the flag
free_file in this case.
Also backported f2c418079def.
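Sketch of the missing reset, with simplified members standing in for QUICK_RANGE_SELECT's:

    class handler;  // opaque here

    struct Quick_scan
    {
      handler *file;      // handler currently in use
      handler *org_file;  // the original head->file
      bool free_file;     // true only while 'file' is separately created
    };

    // On failure, revert to the original handler *and* clear the flag,
    // otherwise later cleanup believes it still owns a separate handler.
    void revert_handler(Quick_scan *q)
    {
      q->file= q->org_file;
      q->free_file= false;   // the reset that was missing
    }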
match table_open_cache
Allow the table definition cache to be bigger than the open table
cache (due to a problem with VIEWs and prepared statements).
When we have a nested subquery, a subquery that was dependent may
become independent once we optimize the inner subqueries. This is
handled in st_select_lex::optimize_unflattened_subqueries.
Currently a subquery that changed from dependent to independent after
the optimization phase is still shown as dependent in the EXPLAIN
output; this happens because we don't update used_tables for the
WHERE clause, ON clause, etc. after the optimization phase.
Forbid ALTER DATABASE under read_only.
Relevant IF NOT EXISTS / IF EXISTS flags are added for CREATE DATABASE
and DROP DATABASE.
Create a new constant MAX_DATA_LENGTH_FOR_KEY.
It replaces MAX_KEY_LENGTH where the limit must also include the
LENGTH and NULL bytes of a field.
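Illustrative only (the numbers are placeholders, not the server's actual values): the new constant budgets length and NULL bytes on top of the raw key bytes.

    // Placeholder values for illustration.
    static const unsigned MAX_KEY_LENGTH= 3072;  // raw key bytes
    static const unsigned MAX_KEY_PARTS=  16;    // key parts per key
    static const unsigned LENGTH_BYTES=   2;     // length prefix per part
    static const unsigned NULL_BYTE=      1;     // nullability marker per part

    static const unsigned MAX_DATA_LENGTH_FOR_KEY=
        MAX_KEY_LENGTH + MAX_KEY_PARTS * (LENGTH_BYTES + NULL_BYTE);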
handler::ha_rnd_init(bool)
with InnoDB, joins, AND/OR conditions
The handler's inited member is not initialised when we do a quick
select after a table scan.
This is a backport of a part of
commit 18455ec3f1a9c22977f0ed87233852813b53eb49
from 10.1.
@@use_stat_tables= PREFERABLY
The problem here is that EITS statistics are not calculated for the
partitions of a table. So a temporary solution is to not read EITS
statistics for partitioned tables.
Also disable reading EITS for columns that participate in the
partition list of a table.
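A sketch of the temporary rule, with assumed member names (the server's actual structures differ):

    struct Table_share_sketch { bool is_partitioned; };
    struct Column_sketch      { bool in_partition_list; };

    // Temporary rule: no EITS for partitioned tables, and none for
    // columns used by the partitioning of the table.
    bool read_eits_for_column(const Table_share_sketch &share,
                              const Column_sketch &col)
    {
      return !share.is_partitioned && !col.in_partition_list;
    }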
merge_role_db_privileges() was remembering pointers into the
Dynamic_array acl_dbs and using them later, while pushing more
elements into the array. But pushing can cause a realloc, which can
invalidate all pointers.
Fix: remember and use indexes of elements, not pointers.
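The same hazard shown with std::vector standing in for Dynamic_array: push_back() may reallocate and silently invalidate every earlier pointer, while an index stays valid.

    #include <vector>
    #include <cstddef>

    struct ACL_DB { long privs; };

    void merge_example(std::vector<ACL_DB> &acl_dbs, size_t src)
    {
      // BAD: ACL_DB *p= &acl_dbs[src];       // dangles after a realloc
      acl_dbs.push_back(ACL_DB{0});           // may reallocate the array
      acl_dbs.back().privs= acl_dbs[src].privs;  // index remains valid
    }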
create_key_parts_for_pseudo_indexes
In this case we were trying to access key_parts memory that we had
not assigned for a field, because the field did not have any EITS
statistics. The check whether EITS statistics are available for a
column was missing.
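The missing guard, sketched with assumed member names:

    struct Field_sketch
    {
      const void *read_stats;  // EITS column statistics, nullptr if absent
    };

    // Skip fields without EITS statistics instead of touching key_parts
    // memory that was never assigned for them.
    bool field_has_eits(const Field_sketch *field)
    {
      return field->read_stats != nullptr;
    }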
Also, backporting a part of:
MDEV-11485 Split Item_func_between::val_int() into virtual methods in Type_handler
for easier merge to 10.3.
results in assertion failure or "Can't find record" error
Fix the ha_rnd_init() argument (we are not doing a scan but using
rnd_pos).
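Sketched against a simplified handler interface: the initializer is told no sequential scan will follow, because rows are fetched by position with rnd_pos().

    // Simplified stand-in for the storage engine handler interface.
    struct handler_sketch
    {
      bool inited;
      bool scan;
      int ha_rnd_init(bool scan_flag)
      { inited= true; scan= scan_flag; return 0; }
    };

    // Rows are fetched by position, not sequentially, so the init call
    // must pass scan=false (it passed true before).
    int init_for_rnd_pos(handler_sketch *file)
    {
      return file->ha_rnd_init(false);
    }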
Reset lex->many_values for LOAD DATA, as it's used for auto-inc
range size estimation.
using index_merge union
For an index merge union (or sort union), the estimates are not taken
into account while calculating the selectivity of a condition.
Instead of using the estimates of the index merge union (or sort
union), the selectivity is computed as if all the records of the
table matched.
The fix is to include the selectivity of the index merge union (or
sort union) while calculating the selectivity of a condition.
PARTITION
Issue:
------
ALTER TABLE ... REORGANIZE PARTITION ... can result in incorrect
behavior if any partition other than the last one misses the
"VALUES LESS THAN ..." part of the syntax.
Root cause:
-----------
Currently ALTER TABLE with changes to partitions is handled
incorrectly by the parser.
Fix:
----
Remove the if condition that handles partition management differently
for ALTER TABLE, and change the code to handle the case in the parser.
LOSS
ANALYSIS:
=========
When converting from a BLOB/TEXT type to a smaller BLOB/TEXT type,
no warning/error is reported to the user about the truncation/data
loss.
FIX:
====
We now report a warning in non-strict mode and an appropriate error
in strict mode.
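A sketch of the reporting rule (stderr stands in for the server's warning machinery):

    #include <cstdio>

    void report_blob_narrowing(bool strict_mode, const char *column)
    {
      if (strict_mode)
        fprintf(stderr, "ERROR: data too long for column '%s'\n", column);
      else
        fprintf(stderr, "Warning: data truncated for column '%s'\n", column);
    }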
Upgrade the zlib library to 1.2.11.
This reverts commit dc3a20b191362227a6f85fadfc3a8b6c76f69ec6.
It requires GET_BIT in include/my_getopt.h, which is available
only in 10.3+.
consistently) on Replication Slave
lower_case_table_names 0 -> 1 replication works; it's safe as long as
the mapping of mixed-case names to the lower-case ones is one-to-one.
derived table / view by equality
Now rows of a materialized derived table are always put into a
temporary table before the join operation. If BNLH is used to join
this table with the result of a partial join, then both operands of
the join are actually put into main memory. In most cases this is
not efficient.
We could avoid this by sending the rows of the derived table directly
to the join operation. However, this kind of data flow is not
supported yet.
Fixed by not allowing usage of the hash join algorithm to join a
materialized derived table if it is joined by an equality predicate
of the form f=e, where f is a field of the derived table.
and follow-up fixes
in thr_lock / has_old_lock upon FLUSH TABLES
Explicit partition access of a partitioned MEMORY table under LOCK
TABLES may cause subsequent statements to crash the server, deadlock,
or trigger valgrind warnings or ASAN errors. Freed memory was being
used due to incorrect cleanup.
At least MyISAM and InnoDB don't seem to be affected, since their
THR_LOCK structures don't survive FLUSH TABLES. MEMORY keeps table
shared data (including THR_LOCK) even if there are no open instances.
There is the partition_info::lock_partitions bitmap, which holds the
bits of partitions allowed to be accessed after pruning. This bitmap
is updated for each individual statement.
This bitmap was abused in ha_partition::store_lock() such that, when
we needed to unlock a table locked by LOCK TABLES, only locks for
partitions that were accessed by the previous statement were
released.
Eventually FLUSH TABLES frees THR_LOCK_DATA objects that are still
linked into THR_LOCK lists. When such a THR_LOCK gets reused, we end
up with a freed memory access.
Fixed by using the ha_partition::m_locked_partitions bitmap similarly
to ha_partition::external_lock().
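Sketch of the fix, with simplified types: release locks for the partitions recorded when the table was locked (m_locked_partitions), not for the per-statement pruning bitmap (lock_partitions).

    #include <bitset>
    #include <cstddef>

    static const size_t MAX_PARTS= 64;  // illustration only

    void release_partition_locks(
        const std::bitset<MAX_PARTS> &m_locked_partitions)
    {
      for (size_t p= 0; p < MAX_PARTS; p++)
        if (m_locked_partitions.test(p))
        {
          // thr_unlock() the partition's THR_LOCK_DATA here (omitted).
        }
    }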
get_datetime_value on SELECT with YEAR field and IN
Locked_tables_list::unlock_locked_tables
Similarly to regular DROP TABLE, don't leave locked tables mode if
CREATE OR REPLACE dropped a temporary table but failed to create the
new one.
The problem is that there is no record of which temporary table was
"locked" by LOCK TABLES.