Notable failure:
main.plugin_vars w7 [ fail ]
Test ended at 2021-10-28 17:43:48
CURRENT_TEST: main.plugin_vars
mysqltest: At line 70: query 'install plugin query_response_time soname 'query_response_time'' failed: 1126: Can't open shared library '/buildbot/aarch64-rhel-8/build/mysql-test/var/plugins/query_response_time.so' (errno: 11, undefined symbol: __aarch64_ldadd4_relax)
Fixes: f502ccbcb5dfce29067434885a23db8d1bd5f134
A long UNIQUE HASH index silently creates an index on a virtual column,
which should be impossible for base columns with AUTO_INCREMENT.
Fix: add the relevant check; add a new vcol type for a prettier error
message.
The InnoDB changes in MySQL 5.7.36 that were applicable to MariaDB
were covered by MDEV-26864, MDEV-26865, MDEV-26866.
The initial test case for MySQL Bug #33053297 is based on
mysql/mysql-server@27130e25078864b010d81266f9613d389d4a229b.
innobase_get_field_from_update_vector() is not a suitable function to
fetch updated row info, and the parent table's update vector is not
always suitable either: in the case of DELETE it contains undefined data.
cascade->update seems to be good enough to fetch all base column
update data, and it is also faster and less error-prone.
ha_rocksdb.h:459:15: warning: 'table_type' overrides a member
function but is not marked 'override' [-Winconsistent-missing-override]
The assert inside String::copy() prevents copying from "str"
if the String's own Ptr also points to the same memory.
The idea of the assert is that copy() performs memory reallocation,
and this reallocation can free (and thus invalidate) the memory pointed
to by Ptr, which can lead to copying from freed memory.
The assert was incomplete: copy() can free the memory pointed to by its
Ptr only if String::alloced is true!
If the String is not alloced, it is still safe to copy even from
the location pointed to by Ptr.
This scenario demonstrates a safe copy():
  const char *tmp= "123";
  String str1(tmp, 3);
  String str2(tmp, 3);
  // This statement is safe:
  str2.copy(str1.ptr(), str1.length(), str1.charset(), cs_to, &errors);
Inside copy() the parameter "str" is equal to String::Ptr in this example.
But it is still OK to reallocate the memory for str2, because str2
was a constant (not alloced) before the copy() call. Thus the
reallocation does not invalidate the memory pointed to by str1.ptr().
The assert condition is adjusted to allow copying for constant strings.
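
A minimal sketch of the adjusted condition, assuming the assertion
guards the source pointer against Ptr and that String exposes
is_alloced(); the exact original form may differ:

  // Sketch, not the verbatim patch: copying from our own buffer is
  // dangerous only when this String owns (alloced) that buffer, since
  // only then can the coming reallocation free the memory behind "str".
  DBUG_ASSERT(!str || str != Ptr || !is_alloced());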
Take into account client capabilities.
Sometimes, although not often, it would time out after 3 minutes.
Happens with the InnoDB engine.
Move unlock_locked_table() past drop_open_table(), and roll back the
current statement, so that we can actually unlock the table.
Anything else results in assertion failures in drop, unlock, or
close_table.
MDEV-25702 (commit 696de6d06c0eeaf7b20d5f89278ed7d62a9f204f) should
have closed the fts table as well. This patch closes the table after
the bulk insert operation finishes.
The DBUG_ASSERT is removed, as AUTO_INCREMENT can actually be 0 after
SET insert_id= 0; has been executed.
Get rid of the global big_buffer.
SysTablespace::file_not_found(): If the system tablespace cannot be
found and innodb_force_recovery has been specified, refuse to start up.
The system tablespace is necessary for accessing any InnoDB tables,
because it contains the TRX_SYS page (the state of transactions)
and the InnoDB data dictionary.
This is similar to our handling of innodb_read_only, except that
we will happily create the InnoDB temporary tablespace even if
innodb_force_recovery is set.
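
A sketch of the shape of such a check, assuming the existing
srv_force_recovery global and the dberr_t return convention of
SysTablespace::file_not_found(); the actual diagnostics differ:

  // Sketch: without the system tablespace no InnoDB table is
  // accessible, so forced recovery cannot proceed either.
  if (srv_force_recovery) {
    sql_print_error("InnoDB: Cannot start up with innodb_force_recovery"
                    " when the system tablespace cannot be found.");
    return(DB_ERROR);
  }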
In the replication code, in the exec_relay_log_event() function
(slave.cc), the "data_lock" mutex is acquired but not released on one
of the early-return branches within a WSREP-specific insert, namely
under the branch "if (wsrep_before_statement(thd))". As a result the
mutex remains held, leading to errors or hangs.
This commit fixes the issue, which was showing up as intermittent
failures in mtr tests of the galera and galera_sr suites.
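
The shape of the fix, sketched with assumed surrounding code (only the
added unlock is the point; the rest is illustrative):

  mysql_mutex_lock(&rli->data_lock);
  /* ... */
  if (wsrep_before_statement(thd))
  {
    /* Previously missing: without this unlock the mutex stayed held
       after the early return, hanging later lock attempts. */
    mysql_mutex_unlock(&rli->data_lock);
    delete ev;
    return 1;
  }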
with optimizer_trace enabled.
Adding 10.4-specific tests.
Also fixes:
MDEV-25399 Assertion `name.length == strlen(name.str)' failed in Item_func_sp::make_send_field
Also fixes a problem in this scenario:
  SET NAMES binary;
  SELECT 'some not well-formed utf8 string';
where the auto-generated column name copied the binary string value
directly into the Item name, without checking utf8 well-formedness.
After this change auto-generated column names work as follows:
- Zero bytes 0x00 are copied to the name using HEX notation.
- In case of "SET NAMES binary", all byte sequences that do not form
  well-formed utf8 characters are copied to the name using HEX notation.
ALTER TABLE IMPORT doesn't properly handle instant alter metadata.
This patch makes IMPORT read, parse and apply instant alter metadata at
the very beginning of the operation, so cases where the source table
has some metadata and the destination table doesn't now work fine.
DISCARD already removes instant metadata, so importing a normal table
into an instant table worked fine even before this patch.
decrypt_decompress(): decrypts and decompresses a page if needed.
handle_instant_metadata(): should be the first thing to read the source
table. Basically, it applies instant metadata to the destination
dict_table_t object. It is also the first thing to read FSP flags, so
all checks of them were moved into this function.
PageConverter::update_index_page(): no longer reads instant metadata;
that logic was moved into handle_instant_metadata().
row_import::match_flags(): the first part of row_import::match_schema(),
split out as a separate function so that handle_instant_metadata() can
use it.
fil_space_t::is_full_crc32_compressed(): added as a convenience function.
ha_innobase::discard_or_import_tablespace(): do not reload the table
definition to read instant metadata, because handle_instant_metadata()
does it better.
The reverted code was originally added in
4e7ee166a9c76eb3546356aabfd2dbc759671cd0.
ANONYMOUS_VAR: a handy thing to use along with make_scope_exit()
(see the example below).
full_crc32_import.test shows different results, because dict_table_close()
and dict_table_open_on_id() no longer happen.
Thus, SHOW CREATE TABLE shows a slightly older table definition.
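
For illustration, a hedged example of make_scope_exit() (MariaDB's
helper in include/scope.h); the file handle and function here are made
up for the example:

  #include <cstdio>
  #include "scope.h"   // make_scope_exit()

  static void read_meta(FILE *f)
  {
    // The guard runs the lambda on every exit path from this scope;
    // ANONYMOUS_VAR makes it convenient to declare such guards inline.
    auto guard= make_scope_exit([f] { fclose(f); });
    // ... read and parse metadata; early returns stay safe ...
  }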
$opt_vs_config renamed to $multiconfig, for use with other cmake
multi-config generators.
Based on mysql/mysql-server@bc9c46bf2894673d0df17cd0ee872d0d99663121
but without sleeps.
The test was verified to hit the debug assertion if the change to
fts_add_doc_by_id() in commit 2d98b967e31623d9027c0db55330dde2c9d1d99a
was reverted.
fts_cache_t::total_size_at_sync: New field, to sample total_size.
fts_add_doc_by_id(): Invoke sync if total_size has grown too much
since the previous sync request. (Maintain cache->total_size_at_sync.)
ib_wqueue_t::length: Caches ib_list_len(*items).
ib_wqueue_len(): Removed. We will refer to fts_optimize_wq->length
directly.
Based on mysql/mysql-server@bc9c46bf2894673d0df17cd0ee872d0d99663121
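
A sketch of the growth check described above; the threshold expression
is an assumption, while total_size and total_size_at_sync are the
fields named in this message:

  /* In fts_add_doc_by_id(), after adding the document to the cache: */
  if (cache->total_size - cache->total_size_at_sync
      > fts_max_cache_size / 10)  /* assumed threshold */
  {
    cache->total_size_at_sync = cache->total_size;
    need_sync = true;             /* request a sync for this table */
  }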
trx_commit_in_memory(): Do not release the rseg reference before
trx_undo_commit_cleanup() has been invoked and the current transaction
is truly done with the rollback segment. The purpose of the reference
count is to prevent data races with trx_purge_truncate_history().
This is based on
mysql/mysql-server@ac79aa1522f33e6eb912133a81fa2614db764c9c.
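
The corrected ordering, sketched with assumed member names:

  /* trx_commit_in_memory(), sketch: finish undo log cleanup first... */
  trx_undo_commit_cleanup(undo);
  /* ...and only then drop the reference that keeps
     trx_purge_truncate_history() from freeing the rollback segment. */
  trx->rsegs.m_redo.rseg->release();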
InnoDB commit fails when a consecutive FTS_DOC_ID value
is greater than 4294967295.
The fix is for InnoDB to remove the delta FTS_DOC_ID value limitation
and for fts to encode 8-byte values; the FTS_DOC_ID_MAX_STEP variable
is removed. Replaced the fts0vlc.ic file with fts0vlc.h.
fts_encode_int(): should be able to encode values of up to 10 bytes.
fts_get_encoded_len(): should return encoded lengths of up to 10 bytes.
fts_decode_vlc(): add a debug assertion to verify that the maximum
length allowed is 10.
mach_read_uint64_little_endian(): reads a 64-bit value stored in
little-endian format.
Added a unit test case which checks the minimum and maximum
values of the fts encoding.
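
For intuition, a self-contained sketch of a 7-bits-per-byte
variable-length encoding: a 64-bit value needs at most
ceil(64/7) = 10 bytes. Illustrative only; the byte layout in
fts0vlc.h (e.g. which byte carries the terminator bit) may differ:

  #include <cstdint>
  #include <cstddef>

  // Encode v into buf using 7 payload bits per byte; the high bit
  // marks "more bytes follow". Returns the encoded length (1..10).
  static size_t vlc_encode_u64(uint64_t v, unsigned char *buf)
  {
    size_t len = 0;
    while (v >= 0x80) {
      buf[len++] = static_cast<unsigned char>(v & 0x7F) | 0x80;
      v >>= 7;
    }
    buf[len++] = static_cast<unsigned char>(v);
    return len;
  }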
In commit 1811fd51fbae9e6c1f06ce93faef2bf1279cd3b6 the assertion
should have said error_reported instead of !error_reported.
But that revised assertion would still fail in main.defaults,
where ER_BAD_DATA is reported during CREATE TABLE.
create_table_info_t::innobase_table_flags(): Refuse to create
a PAGE_COMPRESSED table with PAGE_COMPRESSION_LEVEL=0 if also
innodb_compression_level=0.
The parameter value innodb_compression_level=0 was only somewhat
meaningful for testing or debugging ROW_FORMAT=COMPRESSED tables.
For the page_compressed format, it never made any sense, and the
check in dict_tf_is_valid_not_redundant() that was added in
72378a25830184f91005be7e80cfb28381c79f23 (MDEV-12873) would cause
the server to crash.
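
A sketch of the refusal, with assumed option and global names
(page_zip_level is the internal variable behind
innodb_compression_level):

  /* In create_table_info_t::innobase_table_flags(), sketch: */
  if (options->page_compressed
      && options->page_compression_level == 0
      && page_zip_level == 0) {
    push_warning(m_thd, Sql_condition::WARN_LEVEL_WARN,
                 ER_ILLEGAL_HA_CREATE_OPTION,
                 "PAGE_COMPRESSED requires a nonzero compression level");
    return false;
  }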
The assertion is absolutely correct, since no data access is possible
after XA PREPARE.
The check is added in mysql_ha_read().
This is a duplicate of MDEV-18278 89936f11e965, but I will add an
additional assertion.
Description:
The frm corruption should not be reported during CREATE TABLE. Normally
it is not, and the data to fill TABLE is taken by the
open_table_from_share call. However, the vcol data is stored as an SQL
string in table->s->vcol_defs.str and is parsed again on each table
open. This is impossible [or hard] to avoid, because it is hard to
clone the expression tree in general (it is easier to parse it again).
Normally parse_vcol_defs should only fail on semantic errors, in which
case error_reported is set to true. Any other failure is not expected
during table creation: there is either an unhandled/unacknowledged
error, or something went really wrong, like a failed memory allocation.
All of this should be asserted anyway.
Solution (see the sketch below):
* Set *error_reported= true for the forward references check;
* Assert on every unacknowledged error during table creation.
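
A sketch of the two solution points, with assumed surrounding code in
parse_vcol_defs():

  /* Forward references are a semantic error: acknowledge it. */
  if (check_vcol_forward_refs(field, vcol, ...))
  {
    *error_reported= true;
    goto end;
  }
  /* ... */
  /* During CREATE TABLE, any failure must have been reported by now. */
  DBUG_ASSERT(*error_reported);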
We should set the charset in
Item_func_json_format::fix_length_and_dec().
Let us use a minimal-size buffer pool to ensure that page flushing
will be slow enough so that LRU eviction cannot be avoided.