draft to check with client
This test and result had been modified in WL#6204, but I missed it,
because the test had been renamed from
innodb_fts.innodb_fts_plugin to innodb_fts.plugin.
Thanks to Zhangyuan from Alibaba for pointing out this bug.

btr_page_empty(): When a clustered index root page is emptied,
preserve PAGE_ROOT_AUTO_INC. This would occur during a page split.

page_create_empty(): Preserve PAGE_ROOT_AUTO_INC when a clustered
index root page becomes empty. Use a faster method for writing
the field.

page_zip_copy_recs(): Reset PAGE_MAX_TRX_ID when copying
clustered index pages. We must clear the field when the root page
was a leaf page and is being split, so that PAGE_MAX_TRX_ID
will continue to be 0 in clustered index non-root pages.

page_create_zip(): Add debug assertions for validating
PAGE_MAX_TRX_ID and PAGE_ROOT_AUTO_INC.
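
As a rough illustration of the preserve-on-empty pattern these functions
share, here is a minimal, self-contained sketch. The Page struct and
page_empty() below are illustrative stand-ins, not InnoDB's actual page
layout or functions.

    #include <cstdint>
    #include <cstring>

    // Toy page: one 64-bit header field that serves as PAGE_MAX_TRX_ID on
    // non-root pages and as PAGE_ROOT_AUTO_INC on a clustered index root.
    struct Page {
        std::uint64_t root_auto_inc;
        unsigned char records[4096];
    };

    // Empty the page; keep PAGE_ROOT_AUTO_INC only on a clustered index
    // root, and leave the field 0 everywhere else.
    void page_empty(Page* page, bool is_clust_root)
    {
        const std::uint64_t saved = is_clust_root ? page->root_auto_inc : 0;
        std::memset(page, 0, sizeof *page);  // wipe records and header
        page->root_auto_inc = saved;         // restore (or keep 0)
    }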
Remove unnecessary restarts by testing multiple tables across a restart.
This change almost halves the execution time.
Some further restarts could be removed with additional effort.
This should be functionally equivalent to WL#6204 in MySQL 8.0.0, with
the notable difference that the file format changes are limited to
repurposing a previously unused data field in B-tree pages.

For persistent InnoDB tables, write the last used AUTO_INCREMENT
value to the root page of the clustered index, in the previously
unused (0) PAGE_MAX_TRX_ID field, now aliased as PAGE_ROOT_AUTO_INC.
Unlike some other previously unused InnoDB data fields, this one was
actually always zero-initialized, at least since MySQL 3.23.49.

The writes to PAGE_ROOT_AUTO_INC are protected by SX or X latch on the
root page. The SX latch will allow concurrent read access to the root
page. (The field PAGE_ROOT_AUTO_INC will only be read on the
first-time call to ha_innobase::open() from the SQL layer. The
PAGE_ROOT_AUTO_INC can only be updated when executing SQL, so
read/write races are not possible.)

During INSERT, the PAGE_ROOT_AUTO_INC is updated by the low-level
function btr_cur_search_to_nth_level(), adding no extra page
access. [Adaptive hash index lookup will be disabled during INSERT.]

If some rare UPDATE modifies an AUTO_INCREMENT column, the
PAGE_ROOT_AUTO_INC will be adjusted in a separate mini-transaction in
ha_innobase::update_row().

When a page is reorganized, we have to preserve the PAGE_ROOT_AUTO_INC
field.

During ALTER TABLE, the initial AUTO_INCREMENT value will be copied
from the table. ALGORITHM=COPY and online log apply in LOCK=NONE will
update PAGE_ROOT_AUTO_INC in real time.

innodb_col_no(): Determine the dict_table_t::cols[] element index
corresponding to a Field of a non-virtual column.
(The MySQL 5.7 implementation of virtual columns breaks the 1:1
relationship between Field::field_index and dict_table_t::cols[].
Virtual columns are omitted from dict_table_t::cols[]. Therefore,
we must translate the field_index of AUTO_INCREMENT columns into
an index of dict_table_t::cols[].)

Upgrade from old data files:

By default, the AUTO_INCREMENT sequence in old data files would appear
to be reset, because PAGE_MAX_TRX_ID or PAGE_ROOT_AUTO_INC would contain
the value 0 in each clustered index page. In new data files,
PAGE_ROOT_AUTO_INC can only be 0 if the table is empty or does not contain
any AUTO_INCREMENT column.

For backward compatibility, we use the old method of
SELECT MAX(auto_increment_column) for initializing the sequence.

btr_read_autoinc(): Read the AUTO_INCREMENT sequence from a new-format
data file.

btr_read_autoinc_with_fallback(): A variant of btr_read_autoinc()
that will resort to reading MAX(auto_increment_column) for data files
that did not use AUTO_INCREMENT yet. It was manually tested that during
the execution of innodb.autoinc_persist the compatibility logic is
not activated (for new files, PAGE_ROOT_AUTO_INC is never 0 in nonempty
clustered index root pages).

initialize_auto_increment(): Replaces
ha_innobase::innobase_initialize_autoinc(). This initializes
the AUTO_INCREMENT metadata. Only called from ha_innobase::open().

ha_innobase::info_low(): Do not try to lazily initialize
dict_table_t::autoinc. It must already have been initialized by
ha_innobase::open() or ha_innobase::create().

Note: The adjustments to class ha_innopart were not tested, because
the source code (native InnoDB partitioning) is not being compiled.
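
The upgrade fallback described above lends itself to a tiny decision
sketch. This is a hedged illustration only: the real
btr_read_autoinc_with_fallback() operates on InnoDB index pages, and the
callback and parameter names here are invented for the example.

    #include <cstdint>

    // Decide the initial AUTO_INCREMENT value when opening a table.
    std::uint64_t read_autoinc_with_fallback(
        std::uint64_t page_root_auto_inc,
        bool table_is_empty,
        std::uint64_t (*select_max_autoinc)())
    {
        // New-format files: PAGE_ROOT_AUTO_INC is 0 only for an empty
        // table or a table with no AUTO_INCREMENT column.
        if (page_root_auto_inc != 0 || table_is_empty)
            return page_root_auto_inc;

        // Old data file: the field was never written, so fall back to the
        // legacy SELECT MAX(auto_increment_column) initialization.
        return select_max_autoinc();
    }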
* some of these tests run just fine with InnoDB:
-> s/have_xtradb/have_innodb/
* sys_var tests did basic tests for xtradb only variables
-> remove them, they're useless anyway (sysvar_innodb does it better)
* multi_update had innodb specific tests
-> move to multi_update_innodb.test
Updated sysvars_wsrep.result file.
`0' failed
In sql/opt_range.cc, when calculate_cond_selectivity_for_table() is
called with optimizer_use_condition_selectivity=4:
- thd->no_errors is set to 1
- the original value of thd->no_errors is never restored
- this causes the assertion to fail in subsequent queries
Fixed by restoring the original value of thd->no_errors.
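
A minimal sketch of the save/restore pattern the fix applies, assuming a
stripped-down THD; the real change edits sql/opt_range.cc rather than
this toy function.

    // Toy stand-in for the server's THD.
    struct THD { bool no_errors = false; };

    void calculate_cond_selectivity_for_table(THD* thd)
    {
        const bool saved_no_errors = thd->no_errors;
        thd->no_errors = true;  // suppress errors during estimation
        // ... condition selectivity estimation ...
        thd->no_errors = saved_no_errors;  // the previously missing restore
    }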
Tasks:
Changes to the wsrep_dirty_reads variable:
1.) Global + Session scope (currently session-only)
2.) Can be set from the command line.
3.) Allow all commands that do not change data (besides SELECT).
4.) Allow prepared statements that do not change data.
5.) Works with wsrep_sync_wait enabled.
This reverts commit 7ed5563bbee301bf8217080dc78ea6a3e78e23a8.
Role names with trailing whitespace are truncated in length as of
956e92d90873532fee95581c702f7b76643969ea to fix MDEV-8609. The problem
is that the code that creates role mappings expects the string to be
null-terminated.
Add the null terminator to account for that as well. In the future
the rest of the code can be cleaned up to never assume C-style strings
but only LEX_STRINGs.
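
A self-contained sketch of the idea, with invented names (LexString,
trim_role_name): after trimming, the length-delimited string also gets
an explicit terminator for the C-string consumers.

    #include <cstddef>

    // LEX_STRING-style pair: pointer plus explicit length.
    struct LexString { char* str; std::size_t length; };

    void trim_role_name(LexString* name)
    {
        // MDEV-8609: trailing whitespace does not count toward the length.
        while (name->length > 0 && name->str[name->length - 1] == ' ')
            name->length--;
        // The fix: terminate, since role-mapping code reads this as a
        // C string. Assumes the buffer has room for the terminator.
        name->str[name->length] = '\0';
    }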
handler::increment_statistics or in make_select or assertion failure pfs_thread == ((PFS_thread*) pthread_getspecific((THR_PFS)))
Different fix. Don't allow Item_func_sp to be evaluated unless
all tables are prelocked.
Extend the test case to make sure Item_func_sp::val_str is called
(the table must have at least one row for that).
Rpl_filter::parse_filter_rule() made NULL-safe.
check for VIEW/DERIVED fields
handler::increment_statistics or in make_select or assertion failure pfs_thread == ((PFS_thread*) pthread_getspecific((THR_PFS)))
Move expression execution out of Item constructor.
Exclude subqueries untouched in the prepare phase from the select/unit
tree, because they have become unreachable by execution.
Item::send(Protocol*, String*)
The problem was that null_value was not set to "false" for a well-formed
row. If an ill-formed row was followed by a well-formed row, null_value
remained "true" in the call of Item::send() for the well-formed row.
Full-Text searches
Don't assume that a word of n bytes can match a word of at most
n * charset->mbmaxlen bytes; always assume the worst case.
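
One hedged reading of that sizing rule, as a sketch: a buffer for a
potentially matching index word should be derived from the full-text
worst case, not from the query word's byte length. The constant and
names here are assumptions for illustration, not the server's actual
limits.

    #include <cstddef>

    // Assumed worst-case word length in characters (illustrative only).
    const std::size_t FT_MAX_WORD_CHARS = 84;

    std::size_t match_buffer_bytes(std::size_t /*query_word_bytes*/,
                                   std::size_t mbmaxlen)
    {
        // Not query_word_bytes * mbmaxlen: under some collations a short
        // word can match a longer one, so size for the longest legal word.
        return FT_MAX_WORD_CHARS * mbmaxlen;
    }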
failed in Lex_input_stream::body_utf8_append(const char*, const char*)
The flag TABLE_LIST::fill_me must be reset to false at the prepare
phase for any materialized derived table used in the executed query.
Otherwise, if the optimizer decides to generate a key for such a table,
it is generated only for the first execution of the query.
Filesort_buffer::alloc_sort_buffer(uint, uint)
Updated the result for group_by_innodb.test.
Filesort_buffer::alloc_sort_buffer(uint, uint)
When JOIN::destroy() is called for a JOIN object that has
- join->tmp_join != NULL
- and also join->table[0]->sort
then the latter was not cleaned up.
This could cause a memory leak and/or asserts in subsequent queries.
Fixed by adding a cleanup call.
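
A simplified sketch of that cleanup path, using toy stand-ins for
JOIN/TABLE/sort rather than the server's classes:

    // Destroying the outer JOIN must also release the sort buffer
    // reachable through the tmp_join's table.
    struct Sort {
        char* buffer = nullptr;
        void cleanup() { delete[] buffer; buffer = nullptr; }
    };
    struct Table { Sort sort; };
    struct Join {
        Join*  tmp_join = nullptr;
        Table* table    = nullptr;

        void destroy()
        {
            if (tmp_join != nullptr && table != nullptr)
                table->sort.cleanup();  // the added cleanup call
            // ... remaining destruction ...
        }
    };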
date_to_datetime(MYSQL_TIME*)
Be consistent and don't include the table name in the error message;
no other CREATE TABLE error does that.
(The crash happened because thd->lex->query_tables was NULL.)
Due to the collation used on the roles_mapping_hash, key comparison
worked in a case-insensitive manner. This is incorrect from the
roles-mapping perspective. Use a case-sensitive collation for that
hash, the same one used for the acl_roles hash.
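
To show what the collation choice changes, here is a toy comparison;
the server uses charset-aware hash callbacks, not these helpers.

    #include <string>
    #include <strings.h>  // strcasecmp (POSIX)

    // Case-insensitive equality: 'ROLE_A' and 'role_a' collide.
    // Wrong for role mappings.
    bool keys_equal_ci(const std::string& a, const std::string& b)
    {
        return a.size() == b.size() &&
               strcasecmp(a.c_str(), b.c_str()) == 0;
    }

    // Case-sensitive equality, matching the acl_roles behavior.
    bool keys_equal_cs(const std::string& a, const std::string& b)
    {
        return a == b;
    }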
The function Item_func_isnull::update_used_tables() must
handle the case when the predicate is over a non-nullable
column in a special way.
This is actually a bug in MariaDB 5.3/5.5, but it's probably
hard to demonstrate that it can cause problems there.
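
One plausible reading of that special case, as a hedged sketch: IS NULL
over a column declared NOT NULL is effectively constant, so it should
not be treated as depending on the column's table. The types and the
simplification are mine; the server handles further edge cases (e.g.
special NULL-returning date semantics) that this omits.

    // Toy model of used_tables() for an IS NULL predicate.
    struct Column { bool maybe_null; };

    typedef unsigned long long table_map;

    table_map isnull_used_tables(const Column& col, table_map col_tables)
    {
        if (!col.maybe_null)
            return 0;         // constant predicate: no table dependency
        return col_tables;    // depends on the column's table(s)
    }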
The test intentionally crashes the server, so corrupted pages are possible.
followed by a multi-byte character
Partially backporting MDEV-9874 from 10.2 to 10.0

READ_INFO::read_field() raised the ER_INVALID_CHARACTER_STRING error
when reading an escape character followed by a multi-byte character.

Raising well-formedness errors in READ_INFO::read_field() was wrong,
because the main goal of READ_INFO::read_field() is to *unescape* the
data, which was presumably escaped using mysql_real_escape_string(),
using the same character set as the one specified in
"LOAD DATA INFILE ... CHARACTER SET ..." (or assumed by default).

During LOAD DATA, multi-byte characters are not always scanned as a
single entity! In case of escaped data, parts of a multi-byte character
can be scanned on different loop iterations. So the old code erroneously
tested well-formedness in the middle of a multi-byte character.

Moreover, the data after unescaping can go into a BLOB field, not a text
field. Well-formedness tests are meaningless in this case.

After this patch, well-formedness is only checked later, during
Field::store(str,length,cs) time. The loop that scans bytes only
makes sure to revert the changes made by mysql_real_escape_string().

Note: in some cases users can supply data which did not really go
through mysql_real_escape_string() and was escaped by some other means,
or was not escaped at all. The file reported in this MDEV contains
the string "\ä", which is an example of such improperly escaped data, as
- either there should be two backslashes: "\\ä"
- or there should be no backslashes at all: "ä"
mysql_real_escape_string() could not have generated "\ä".
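
As a rough sketch of the corrected division of labor, here is a
simplified unescape loop that only reverses
mysql_real_escape_string()-style sequences byte by byte and leaves
well-formedness checking to Field::store(). The function and its escape
table are simplified assumptions, not the server's READ_INFO code.

    #include <cstddef>
    #include <string>

    std::string unescape_field(const std::string& in, char escape = '\\')
    {
        std::string out;
        for (std::size_t i = 0; i < in.size(); i++) {
            char c = in[i];
            if (c == escape && i + 1 < in.size()) {
                c = in[++i];
                switch (c) {            // sequences the escaper produces
                case 'n': c = '\n';   break;
                case 't': c = '\t';   break;
                case 'r': c = '\r';   break;
                case '0': c = '\0';   break;
                case 'Z': c = '\032'; break;
                default:  break;        // "\ä": keep the byte unchanged
                }
            }
            out += c;  // no well-formedness check; Field::store() does it
        }
        return out;
    }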
Fixing the "connect" command to use "localhost" instead of "127.0.0.1"
to make it work with both "mtr" and "mtr --embedded".
MDEV-10780 Server crashes in create_tmp_table
MDEV-11265 Access denied when CREATE VIEW v1 AS SELECT DEFAULT(column) FROM t1

Item_default_value and Item_insert_value erroneously derive from Item_field
but forgot to override some methods that apply only to true fields,
so the server code mixes Item_{default|insert}_value instances with real
table fields (i.e. true Item_field) in some cases.
Override a few methods to avoid this.

TODO: we should eventually derive Item_default_value (and Item_insert_value)
directly from Item, as they don't really need the entire Item_field,
Item_ident and Item_result_field functionality.
Only the functionality related to the member "Field *field" is actually
needed, like val_xxx(), is_null(), get_geometry_type(),
charset_for_protocol(), etc.
Functions from sql/statistics.cc don't seem to expect statistics
tables to fail to open or to have an inadequate structure.
Table open errors are now suppressed and some validity checks
have been added. Invalid tables are reported to the server log.
The race condition happened when mark_xid_done was considerably delayed
and an extra Binlog_checkpoint event was written into the binary log,
which later showed up in an error message. Fixed by ensuring
that the event is written before the binary log is rotated to the one
used in the output.
sporadically in buildbot
The reason is a simple race condition. Initially the test was meant to
synchronize with the master before showing tables, but it turned out that
the slave IO thread should fail by this point, and the synchronization was
removed along with a server bugfix. Now an intermediate sync has been added
instead, to make sure that the slave has replicated the events before the
point of failure.