| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
|
|
|
|
| |
ha_innobase::commit_inplace_alter_table(): Do not crash if
innobase_update_foreign_cache() returns an error. It can return
an error on ALTER TABLE if an inconsistent FOREIGN KEY constraint
was created earlier when SET foreign_key_checks=0 was in effect.
Instead, report a warning to the client that constraints cannot
be loaded.
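A minimal sketch of this degrade-to-warning pattern; all names here are illustrative stand-ins, not the actual server API:

```cpp
#include <cstdio>

enum fk_status { FK_OK, FK_INCONSISTENT };

// Hypothetical stand-in for innobase_update_foreign_cache().
fk_status update_foreign_cache() { return FK_INCONSISTENT; }

void commit_alter()
{
  if (update_foreign_cache() != FK_OK)
  {
    // Before the fix this path crashed the server; now it only warns,
    // since the inconsistency may stem from an earlier
    // SET foreign_key_checks=0.
    std::fprintf(stderr,
                 "Warning: FOREIGN KEY constraints could not be loaded\n");
  }
}

int main() { commit_alter(); }
```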
|
|
|
|
|
| |
ha_innobase::prepare_inplace_alter_table(): Filter out duplicates
from ha_alter_info->alter_info->drop_list.elements.
|
|
|
|
|
| |
innobase_rename_column_try(): Declare fk_evict as std::set
instead of std::list, in order to filter out duplicates.
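A short sketch of why the container change removes the duplicates (standard C++, independent of the server code):

```cpp
#include <iostream>
#include <list>
#include <set>
#include <string>

int main()
{
  // The same foreign key queued for eviction twice:
  std::list<std::string> fk_list{"fk_a", "fk_b", "fk_a"};

  // Building a std::set from the same values keeps each name once.
  std::set<std::string> fk_evict(fk_list.begin(), fk_list.end());

  std::cout << fk_list.size() << " vs " << fk_evict.size() << '\n'; // 3 vs 2
}
```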
|
|\ |
|
| |
| |
| |
| |
| | |
@@open_files_limit now behaves differently and cannot
be used to skip the test anymore.
|
| |
| |
| |
| |
| |
| |
| | |
Disable LOAD DATA LOCAL INFILE support by default and
auto-enable it for the duration of one query, if the query
string starts with the word "load". In all other cases the application
should enable LOAD DATA LOCAL INFILE support explicitly.
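A sketch of the auto-enable heuristic; whether the real check skips leading whitespace is an assumption here:

```cpp
#include <cctype>
#include <cstdio>
#include <strings.h>  // strncasecmp (POSIX)

// Returns true if LOCAL INFILE support should be enabled for this query.
bool query_allows_local_infile(const char *query)
{
  while (*query && std::isspace((unsigned char) *query))
    query++;                                 // assumed: tolerate whitespace
  return strncasecmp(query, "load", 4) == 0;
}

int main()
{
  std::printf("%d\n", query_allows_local_infile("LOAD DATA LOCAL ...")); // 1
  std::printf("%d\n", query_allows_local_infile("SELECT 1"));            // 0
}
```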
|
|\ \ |
|
| | | |
|
| | | |
|
|\ \ \
| | |/
| |/| |
|
| | |
| | |
| | |
| | | |
test case
|
| | |
| | |
| | |
| | | |
The problem was in calculating the mask used to clear unused null bits in the case where a full byte is used.
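A sketch of this class of bug (not the exact server code): when the null bitmap ends exactly on a byte boundary, a naive modulo-based mask collapses to zero:

```cpp
#include <cstdint>
#include <cstdio>

// Buggy shape: (1 << (bits % 8)) - 1 is 0 when bits % 8 == 0,
// so the mask wipes every null bit in a fully used byte.
uint8_t last_byte_mask_buggy(unsigned null_bits)
{
  return (uint8_t) ((1U << (null_bits % 8)) - 1);
}

// Fixed shape: a fully used byte keeps all of its bits.
uint8_t last_byte_mask_fixed(unsigned null_bits)
{
  unsigned rest = null_bits % 8;
  return (uint8_t) (rest ? (1U << rest) - 1 : 0xFF);
}

int main()
{
  std::printf("%02X %02X\n",
              last_byte_mask_buggy(8), last_byte_mask_fixed(8)); // 00 FF
}
```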
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Call st_select_lex::update_used_tables in JOIN::optimize_unflattened_subqueries
only when we are sure that the join has not been cleaned up.
The cleanup can happen when we have a non-merged semi-join and an impossible
WHERE, which leads to the cleanup of the join that contains the non-merged semi-join.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
nominated as PK
Problem:
========
The server fails to notify the engine because it does not set the
ADD_PK_INDEX and DROP_PK_INDEX flags when there is
i) a change in the candidate for the primary key, or
ii) a new candidate for the primary key.
Fix:
====
The server now sets ADD_PK_INDEX and DROP_PK_INDEX while doing the ALTER
for the problematic cases above.
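The shape of the fix, sketched with the flag names from this message and otherwise illustrative plumbing:

```cpp
#include <cstdio>

enum { ADD_PK_INDEX = 1 << 0, DROP_PK_INDEX = 1 << 1 };

// Hypothetical helper: compute the flags the engine must see.
unsigned pk_alter_flags(bool pk_candidate_changed_or_new)
{
  unsigned flags = 0;
  if (pk_candidate_changed_or_new)
    flags |= ADD_PK_INDEX | DROP_PK_INDEX;  // notify the engine of both
  return flags;
}

int main()
{
  std::printf("%u\n", pk_alter_flags(true)); // 3: both flags set
}
```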
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
32 bit int
The row-based slave applier could not correctly parse the table id when
the value exceeded the maximum of a 32-bit unsigned int.
The reason was that the placeholder for the parsed value
was sized as 4 bytes.
The type is fixed to ulonglong.
Additionally, the patch works around the 4-byte size of
Rows_log_event::m_table_id on 32-bit platforms. In case the last_table_id
value overflows the 4-byte maximum, the zero value for m_table_id is never
generated and the first wrapped-around value is one, thanks to excluding
UINT_MAX32 + 1 from TABLE_SHARE::table_map_id.
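A sketch of that wrap-around rule (illustrative, not the server's exact code): the value whose low 32 bits are zero is skipped, so a wrapped id can never be mistaken for zero:

```cpp
#include <cstdio>

typedef unsigned long long ulonglong;
static const ulonglong UINT_MAX32 = 0xFFFFFFFFULL;

ulonglong next_table_map_id(ulonglong &last_table_id)
{
  ulonglong id = ++last_table_id;
  // On 32-bit platforms m_table_id is effectively 4 bytes wide, so an id
  // whose low 32 bits are zero would be read back as 0; skip it, making
  // the first wrapped-around value 1.
  if ((id & UINT_MAX32) == 0)
    id = ++last_table_id;
  return id;
}

int main()
{
  ulonglong last = UINT_MAX32 - 1;
  for (int i = 0; i < 3; i++)
    std::printf("%llu\n", next_table_map_id(last) & UINT_MAX32);
  // prints 4294967295, then 1 (zero is skipped), then 2
}
```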
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
On startup, InnoDB checks whether the files "ib_logfileN"
(for N from 1 to 100) exist and whether they are readable.
A non-existent file aborted the scan.
A directory in place of a file made InnoDB fail.
Now it treats "directory exists" as "file doesn't exist".
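A POSIX sketch of the adjusted scan rule (illustrative, not the InnoDB source):

```cpp
#include <sys/stat.h>
#include <cstdio>

// "Directory exists" is now treated as "file doesn't exist".
bool log_file_exists(const char *path)
{
  struct stat st;
  if (stat(path, &st) != 0)
    return false;
  return !S_ISDIR(st.st_mode);
}

int main()
{
  char name[32];
  for (int n = 1; n <= 100; n++)
  {
    std::snprintf(name, sizeof name, "ib_logfile%d", n);
    if (!log_file_exists(name))
      break;  // a missing file (or a directory) ends the scan
  }
}
```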
|
| | |
| | |
| | |
| | | |
increase to 1M
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
In the function QUICK_RANGE_SELECT::init_ror_merged_scan we create a separate handler
if the handler in head->file cannot be reused. The flag free_file tells us whether we
have a separate handler or not.
There are cases where we create a handler and then a failure occurs (e.g. a concurrent
ALTER), and we have to revert the handler back to the original one. The code does that,
but it did not reset the flag 'free_file' in this case.
Also backported f2c418079def.
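The invariant, reduced to a sketch with illustrative types:

```cpp
struct Handler {};

struct RorMergedScan
{
  Handler *file;       // handler actually used for the merged scan
  bool     free_file;  // true if `file` is a separate handler we own
};

// The failure path: revert to the original handler AND reset the flag,
// which is the reset the original code was missing.
void revert_to_original(RorMergedScan &scan, Handler *original)
{
  scan.file      = original;
  scan.free_file = false;
}

int main()
{
  Handler original, separate;
  RorMergedScan scan{&separate, true};
  revert_to_original(scan, &original);
  return scan.free_file ? 1 : 0;  // 0: flag correctly cleared
}
```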
|
| | |
| | |
| | |
| | |
| | |
| | | |
match table_open_cache
Allow the table definition cache to be bigger than the open table cache (due to a problem with VIEWs and prepared statements).
|
|\ \ \
| | | |
| | | | |
MDEV-16499 ER_NO_SUCH_TABLE_IN_ENGINE followed by "Please drop the table and recreate" upon adding FULLTEXT key to table with virtual column
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
table and recreate" upon adding FULLTEXT key to table with virtual column
There was an incorrect check of the MariaDB and InnoDB
table field counts: corruption was reported when there was no corruption.
Also, the warning message printed incorrect field numbers for both the
MariaDB and the InnoDB table.
ha_innobase::open(): fixed the check and the message
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
When we have a nested subquery, a subquery that was a dependent subquery
may change to an independent one when we optimize the inner subqueries.
This is handled in st_select_lex::optimize_unflattened_subqueries.
Currently a subquery that changed from dependent to independent after the
optimization phase incorrectly shows as dependent in the output of EXPLAIN;
this happens because we don't update used_tables for the WHERE clause, ON
clause, etc. after the optimization phase.
|
|\ \ \ \
| | |/ /
| |/| | |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This is a regression after MDEV-13671.
The bug is related to key part prefix lengths, which are stored in SYS_FIELDS.
The storage format is not obvious and was handled incorrectly, which led to
data dictionary corruption.
SYS_FIELDS.POS actually contains the prefix length too, in case any key part
has a prefix length.
innobase_rename_column_try(): fixed prefix handling
Tests for prefixed indexes added too.
Closes #1063
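A sketch of the POS encoding as described above; the exact bit layout (position in the high bits, prefix length in the low 16 bits) is an assumption for illustration:

```cpp
#include <cstdint>
#include <cstdio>

// If any key part of the index has a prefix length, SYS_FIELDS.POS packs
// both values; otherwise it is the plain field position.
uint32_t encode_pos(uint32_t field_no, uint32_t prefix_len, bool any_prefix)
{
  return any_prefix ? (field_no << 16) | prefix_len : field_no;
}

void decode_pos(uint32_t pos, bool any_prefix,
                uint32_t *field_no, uint32_t *prefix_len)
{
  if (any_prefix)
  {
    *field_no   = pos >> 16;     // treating this as a plain position
    *prefix_len = pos & 0xFFFF;  // is what corrupted the dictionary
  }
  else
  {
    *field_no   = pos;
    *prefix_len = 0;
  }
}

int main()
{
  uint32_t f, p;
  decode_pos(encode_pos(2, 10, true), true, &f, &p);
  std::printf("field=%u prefix=%u\n", f, p);  // field=2 prefix=10
}
```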
|
|\ \ \ \
| |/ / / |
|
| | | |
| | | |
| | | |
| | | | |
Forbid ALTER DATABASE under read_only.
|
| | | |
| | | |
| | | |
| | | | |
Relevant IF [NOT] EXISTS flags are added for CREATE DATABASE and DROP DATABASE.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Create a new constant MAX_DATA_LENGTH_FOR_KEY.
Replace the value of MAX_KEY_LENGTH to also include the LENGTH and NULL BYTES
of a field.
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
handler::ha_rnd_init(bool)
with InnoDB, joins, AND/OR conditions
The inited parameter of the handler is not initialised when we do a quick select after a table scan.
|
| | | |
| | | |
| | | |
| | | |
| | | | |
ha_innobase::prepare_inplace_alter_table(): check the maximum column length for
every index in the table, not just the ones added by this particular
ALTER TABLE ... ADD INDEX.
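The shape of the widened check, sketched with illustrative types:

```cpp
#include <vector>

struct Index { unsigned max_col_len; };

// Validate every index the table will end up with, not only the ones
// being added by this ALTER TABLE.
bool check_all_indexes(const std::vector<Index> &existing,
                       const std::vector<Index> &added,
                       unsigned limit)
{
  for (const Index &ix : existing)  // previously skipped
    if (ix.max_col_len > limit)
      return false;
  for (const Index &ix : added)
    if (ix.max_col_len > limit)
      return false;
  return true;
}

int main()
{
  return check_all_indexes({{767}}, {{300}}, 767) ? 0 : 1;
}
```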
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
@@use_stat_tables= PREFERABLY
The problem here is that EITS does not calculate statistics for the partitions
of a table. So a temporary solution is to not read EITS statistics for
partitioned tables.
Also disabled reading of EITS for columns that participate in the partition
list of a table.
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
merge_role_db_privileges() was remembering pointers into the Dynamic_array
acl_dbs, and was later using them while pushing more elements into the
array. But a push can cause a realloc, and that can invalidate all pointers.
Fix: remember and use indexes of elements, not pointers.
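The general hazard, shown with std::vector (a Dynamic_array reallocates the same way):

```cpp
#include <cstdio>
#include <string>
#include <vector>

int main()
{
  std::vector<std::string> acl_dbs{"db1"};

  // Buggy pattern: std::string *p = &acl_dbs[0];  // dangles after realloc
  size_t idx = 0;                                  // fixed pattern: an index

  for (int i = 0; i < 1000; i++)
    acl_dbs.push_back("dbN");  // growth may reallocate the whole array

  std::printf("%s\n", acl_dbs[idx].c_str());  // the index stays valid: db1
}
```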
|
| |/ /
|/| | |
|
|\ \ \
| |/ / |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The fix for "MDEV-17698 MEMORY engine performance regression"
had already fixed this problem.
- Adding the test for MDEV-17724
- Re-recording wrong results for the tests:
  * engines/iuds/r/insert_number
  * engines/iuds/r/update_delete_number
  which had started to fail since MDEV-17698
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | | |
results in assertion failure or "Can't find record" error
Fix the ha_rnd_init() argument (we are not doing a scan but using rnd_pos).
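A sketch of the argument's meaning, loosely mirroring the handler API:

```cpp
#include <cstdio>

struct handler_sketch
{
  bool scan = false;

  // scan == true prepares a sequential scan; scan == false prepares
  // positioned reads via rnd_pos().
  int ha_rnd_init(bool scan_arg) { scan = scan_arg; return 0; }
};

int main()
{
  handler_sketch h;
  h.ha_rnd_init(false);  // the fix: rows are fetched with rnd_pos(), not scanned
  std::printf("scan=%d\n", h.scan);  // scan=0
}
```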
|
|\ \ \
| |/ / |
|
| | |
| | |
| | |
| | | |
followup for c32f7ed235f
|
| | |
| | |
| | |
| | |
| | | |
reset lex->many_values for LOAD DATA, as it's used for
auto-inc range size estimation.
|
| | | |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
the rest of the server
The problem affects renaming columns with INPLACE ALTER.
innobase_rename_column_try(): some strcmp() calls were replaced with
my_strcasecmp(), and the queries that update the data dictionary were changed
to not match on column name case.
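Why strcmp() was the wrong tool here, sketched with POSIX strcasecmp() standing in for the charset-aware my_strcasecmp():

```cpp
#include <cstdio>
#include <cstring>
#include <strings.h>  // strcasecmp (POSIX)

int main()
{
  const char *dict_name = "Col1", *request = "col1";

  // Case-sensitive comparison misses the match:
  std::printf("strcmp:     %d\n", std::strcmp(dict_name, request) == 0); // 0
  // Case-insensitive comparison finds it:
  std::printf("strcasecmp: %d\n", strcasecmp(dict_name, request) == 0);  // 1
}
```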
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
using index_merge union
For an index merge union (or sort union), the estimates were not taken into
account while calculating the selectivity of a condition. So instead of using
the estimates of the index merge union (or sort union), the selectivity was
estimated as all the records of the table.
The fix is to include the selectivity of the index merge union (or sort
union) while calculating the selectivity of a condition.
|
| | |
| | |
| | |
| | | |
INSERT .. SELECT
|
|\ \ \
| |/ / |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
--gdb now accepts an argument; it will be passed to gdb as a command.
Multiple commands can be separated by a (non-standard and non-escapable)
delimiter: a semicolon (;).
Old usage with a bare --gdb continues to work too, of course.
Cherry-picked c47c0ca50c4 5441bbd3b1f 339b9055791
|
| | |
| | |
| | |
| | |
| | |
| | | |
We should clear trailing zeroes in the fractional part. Otherwise
that tail grows quickly and forces unnecessary truncation of
arguments.
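A sketch of the normalization, with the decimal represented as a plain string for illustration:

```cpp
#include <iostream>
#include <string>

// Strip trailing zeroes from the fractional part so repeated operations
// don't keep growing the stored scale.
std::string trim_frac(std::string d)
{
  if (d.find('.') == std::string::npos)
    return d;                              // no fractional part
  while (d.back() == '0') d.pop_back();    // drop trailing zeroes
  if (d.back() == '.')    d.pop_back();    // drop a bare decimal point
  return d;
}

int main()
{
  std::cout << trim_frac("3.1400000") << '\n';  // 3.14
  std::cout << trim_frac("42.000")    << '\n';  // 42
}
```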
|