Commit message log
|
Added summary/description per package.
|
This reverts commit 5d6f3cebca77ee650e6cde3bd738d1fac0a8110c.
|
When restoring lastinx, last_key.keyinfo must be updated as well. A
good example is in _ma_check_index().
The point of failure is extra(HA_EXTRA_NO_KEYREAD) in
ha_maria::get_auto_increment():
1. extra(HA_EXTRA_KEYREAD) saves lastinx;
2. maria_rkey() changes the index, and with it lastinx and last_key.keyinfo;
3. extra(HA_EXTRA_NO_KEYREAD) restores lastinx but not
last_key.keyinfo.
So after step 3 there is a discrepancy between lastinx and last_key.keyinfo.
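As a hedged illustration (simplified stand-in types, not the actual storage/maria structures), the point is that the restore path must bring last_key.keyinfo back in sync with the restored lastinx:

    /* Hypothetical, simplified stand-ins for the Maria handler state. */
    struct KeyInfo { int id; };
    struct Share   { KeyInfo keyinfo[8]; };
    struct Handler
    {
      Share *s;
      int lastinx;                       /* index currently in use */
      struct { KeyInfo *keyinfo; } last_key;
    };

    /* Restoring only lastinx (the old bug) leaves last_key.keyinfo
       pointing at whatever index maria_rkey() last used.  The fix,
       as in _ma_check_index(), re-derives keyinfo from the index. */
    void restore_index(Handler *info, int saved_lastinx)
    {
      info->lastinx= saved_lastinx;
      info->last_key.keyinfo= &info->s->keyinfo[saved_lastinx];
    }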
|
my_copy_fix_mb() passed MIN(src_length,dst_length) to
my_append_fix_badly_formed_tail(). This could break a multi-byte
character in the middle, which put a question mark into the
destination.
Fixed the code to pass the true src_length to
my_append_fix_badly_formed_tail().
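A minimal sketch of the failure mode (hand-rolled UTF-8 handling for illustration, not the actual charset code):

    #include <cstddef>
    #include <cstring>
    #include <cstdio>

    /* Byte length of the UTF-8 character starting at *s (simplified). */
    static size_t char_len(const unsigned char *s)
    {
      if (*s < 0x80) return 1;
      if (*s < 0xE0) return 2;
      if (*s < 0xF0) return 3;
      return 4;
    }

    /* Copy at most dst_len bytes without splitting a character.  The
       tail-fixing logic needs the true src_len: clamping it to dst_len
       up front (the old bug) hides the real character boundaries. */
    static size_t copy_fix(char *dst, size_t dst_len,
                           const char *src, size_t src_len)
    {
      size_t used= 0;
      while (used < src_len)
      {
        size_t cl= char_len((const unsigned char *) src + used);
        if (used + cl > dst_len || used + cl > src_len)
          break;                        /* stop at a character boundary */
        memcpy(dst + used, src + used, cl);
        used+= cl;
      }
      return used;
    }

    int main()
    {
      char dst[4];
      /* "a" plus a 2-byte character; a 2-byte budget must copy only "a". */
      size_t n= copy_fix(dst, 2, "a\xC3\xA9", 3);
      printf("%zu\n", n);               /* prints 1: no broken character */
    }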
|
There is a server startup option --gdb a.k.a. --debug-gdb that requests
signal handling to be set up for more convenient debugging. Most notably,
SIGINT (Ctrl-C) will not be ignored, and you will be able to interrupt the
execution of the server while GDB is attached to it.
When we are debugging, the signal handlers that would normally display
a terse stack trace are useless.
When we are debugging with rr, the signal handlers may interfere with
a SIGKILL that could be sent to the process by the environment, and ruin
the rr replay trace, due to a Linux kernel bug
https://lkml.org/lkml/2021/10/31/311
To be able to diagnose bugs in kill+restart tests, we may really need
both a trace before the SIGKILL and a trace of the failure after a
subsequent server startup. So, we had better avoid hitting the problem
by simply not installing those signal handlers.
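A hedged sketch of the idea (the flag and handler names are illustrative, not the server's actual ones):

    #include <csignal>
    #include <cstdio>
    #include <cstdlib>

    static bool opt_debug_gdb= false;   /* set from --gdb / --debug-gdb */

    /* Stand-in for the handler that prints a terse stack trace. */
    extern "C" void fatal_signal_handler(int sig)
    {
      fprintf(stderr, "got fatal signal %d\n", sig);
      _Exit(1);
    }

    void init_signals()
    {
      if (opt_debug_gdb)
        return;        /* keep default dispositions for the debugger/rr */
      signal(SIGSEGV, fatal_signal_handler);
      signal(SIGABRT, fatal_signal_handler);
    }

    int main() { init_signals(); }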
|
in row_merge_fts_doc_tokenize, stack smashing
strmake() puts one extra 0x00 byte at the end of the string.
The code in my_strnxfrm_tis620[_nopad] did not take this into
account, so in the reported scenario the 0x00 byte was put outside
of a stack variable, which made ASAN crash.
This problem is already fixed in MySQL:
commit 19bd66fe43c41f0bde5f36bc6b455a46693069fb
Author: bin.x.su@oracle.com <>
Date: Fri Apr 4 11:35:27 2014 +0800
But that fix does not seem to be correct, as it breaks when it finds a
zero byte in the source string.
Using memcpy() instead of strmake():
- Unlike strmake(), memcpy() does not write beyond the destination
size passed.
- Unlike the MySQL fix, memcpy() does not break on the first 0x00 byte found
in the source string.
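A minimal illustration of the difference (my_strmake() is a simplified model of strmake(), defined here only for the example):

    #include <cstring>
    #include <cstddef>

    /* Simplified strmake(): copies at most len bytes, stops at the first
       0x00, and ALWAYS writes a terminating 0x00: i.e. it may touch
       len+1 bytes of dst. */
    static char *my_strmake(char *dst, const char *src, size_t len)
    {
      size_t i;
      for (i= 0; i < len && src[i]; i++)
        dst[i]= src[i];
      dst[i]= 0;        /* the extra byte that smashed the stack buffer */
      return dst + i;
    }

    int main()
    {
      const char src[4]= {'a', 0, 'b', 'c'};  /* embedded 0x00 is valid data */
      char big[5], dst[4];

      my_strmake(big, src, 4);  /* stops at the embedded zero: big[1]= 0 */
      memcpy(dst, src, 4);      /* copies all 4 bytes, touches nothing more */
      return dst[0] == big[0] ? 0 : 1;
    }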
|
temporary table
When a transaction creates or drops temporary tables and a subsequent
statement of it then fails, even the failed statement's cached ROW-format
events for transactional tables end up in the binlog and are visible after
the transaction's commit.
Fixed by properly analyzing whether the errored-out statement needs
to be rolled back in the binlog.
For instance, the mere fact that previous statements have already cached
a CREATE or DROP of a temporary table does not by itself require
retaining the errored-out statement's events in the cache.
Conversely, if the statement itself creates or drops a temporary table,
it cannot be rolled back; that rule remains.
|
let's not run them under valgrind
|
Those messages don't indicate errors; they should be normal warnings.
|
The InnoDB changes in MySQL 5.7.36 that were applicable to MariaDB
were covered by MDEV-26864, MDEV-26865, MDEV-26866.
|
The initial test case for MySQL Bug #33053297 is based on
mysql/mysql-server@27130e25078864b010d81266f9613d389d4a229b.
innobase_get_field_from_update_vector is not a suitable function to fetch
updated row info, and the parent table's update vector is not always
suitable either. For instance, in the case of DELETE it contains undefined
data. The cascade->update vector seems to be good enough to fetch all
base-column update data, and is besides faster and less error-prone.
|
ha_rocksdb.h:459:15: warning: 'table_type' overrides a member
function but is not marked 'override' [-Winconsistent-missing-override]
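The fix pattern for this warning (illustrative class names, not the actual ha_rocksdb declaration):

    struct handler
    {
      virtual const char *table_type() const { return "base"; }
      virtual ~handler()= default;
    };

    struct ha_rocksdb_like : handler
    {
      /* Adding 'override' silences -Winconsistent-missing-override and
         makes the compiler verify the signature really overrides. */
      const char *table_type() const override { return "ROCKSDB"; }
    };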
|
The assert inside String::copy() prevents copying from "str"
if its own String::Ptr also points to the same memory.
The idea of the assert is that copy() performs memory reallocation,
and this reallocation can free (and thus invalidate) the memory pointed
to by Ptr, which can lead to further copying from freed memory.
The assert was incomplete: copy() can free the memory pointed to by its
Ptr only if String::alloced is true!
If the String is not alloced, it is still safe to copy even from
the location pointed to by Ptr.
This scenario demonstrates a safe copy():
  const char *tmp= "123";
  String str1(tmp, 3);
  String str2(tmp, 3);
  // This statement is safe:
  str2.copy(str1.ptr(), str1.length(), str1.charset(), cs_to, &errors);
Inside copy() the parameter "str" is equal to String::Ptr in this example.
But it is still OK to reallocate the memory for str2, because str2
was a constant before the copy() call. Thus the reallocation does not
make the memory pointed to by str1.ptr() invalid.
Adjusted the assert condition to allow copying for constant strings.
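A hedged sketch of the adjusted condition (a simplified String, not the real sql_string.h):

    #include <cassert>
    #include <cstring>
    #include <cstdlib>

    /* Simplified model: Ptr may reference external constant memory
       (alloced == false) or memory owned by the object. */
    struct String
    {
      char  *Ptr;
      size_t Len;
      bool   alloced;

      String(const char *p, size_t l)
        : Ptr(const_cast<char*>(p)), Len(l), alloced(false) {}

      void copy(const char *str, size_t len)
      {
        /* Old assert: str must never alias Ptr.  Adjusted: aliasing is
           only dangerous when we own (and may free) the memory. */
        assert(!alloced || !(str >= Ptr && str <= Ptr + Len));
        char *fresh= (char *) malloc(len + 1);
        memcpy(fresh, str, len);  /* source is still valid: nothing freed */
        fresh[len]= 0;
        if (alloced)
          free(Ptr);
        Ptr= fresh; Len= len; alloced= true;
      }
    };

    int main()
    {
      const char *tmp= "123";
      String str1(tmp, 3), str2(tmp, 3);
      str2.copy(str1.Ptr, str1.Len);  /* safe: str2 was not alloced */
    }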
|
Take into account client capabilities.
|
Sometimes, although not often, it would time out after 3 minutes.
|
Happens with the InnoDB engine.
Move unlock_locked_table() past drop_open_table(), and roll back the
current statement, so that we can actually unlock the table.
Anything else results in assertions in drop, unlock, or close_table.
|
MDEV-25702 (commit 696de6d06c0eeaf7b20d5f89278ed7d62a9f204f) should have
also closed the fts table. This patch closes the table after
finishing the bulk insert operation.
|
The DBUG_ASSERT is removed, as the AUTO_INCREMENT value can actually be 0
when SET insert_id= 0; was done.
|
Get rid of the global big_buffer.
|
SysTablespace::file_not_found(): If the system tablespace cannot be
found and innodb_force_recovery has been specified, refuse to start up.
The system tablespace is necessary for accessing any InnoDB tables,
because it contains the TRX_SYS page (the state of transactions)
and the InnoDB data dictionary.
This is similar to our handling of innodb_read_only, except that
we will happily create the InnoDB temporary tablespace even if
innodb_force_recovery is set.
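A hedged sketch of the startup check (simplified names; the real handling is in SysTablespace::file_not_found()):

    #include <cstdio>

    static unsigned long srv_force_recovery= 0;  /* innodb_force_recovery */

    /* Hypothetical, simplified reaction to a missing system tablespace:
       without it there is no TRX_SYS page and no data dictionary, so a
       forced-recovery startup cannot do anything useful. */
    static int system_tablespace_not_found(const char *name)
    {
      if (srv_force_recovery)
      {
        fprintf(stderr, "InnoDB: Cannot create %s"
                " with innodb_force_recovery=%lu\n",
                name, srv_force_recovery);
        return -1;                      /* refuse to start up */
      }
      fprintf(stderr, "InnoDB: Creating new system tablespace %s\n", name);
      return 0;
    }

    int main() { return system_tablespace_not_found("ibdata1") ? 1 : 0; }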
|
$opt_vs_config to $multiconfig to use with other cmake multiconfig generators
|
Based on mysql/mysql-server@bc9c46bf2894673d0df17cd0ee872d0d99663121
but without sleeps.
The test was verified to hit the debug assertion if the change to
fts_add_doc_by_id() in commit 2d98b967e31623d9027c0db55330dde2c9d1d99a
was reverted.
|
fts_cache_t::total_size_at_sync: New field, to sample total_size.
fts_add_doc_by_id(): Invoke sync if total_size has grown too much
since the previous sync request. (Maintain cache->total_size_at_sync.)
ib_wqueue_t::length: Caches ib_list_len(*items).
ib_wqueue_len(): Removed. We will refer to fts_optimize_wq->length
directly.
Based on mysql/mysql-server@bc9c46bf2894673d0df17cd0ee872d0d99663121
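A hedged sketch of the triggering logic (the threshold and field handling are illustrative; the real code is in fts0fts.cc):

    #include <cstddef>

    /* Simplified cache bookkeeping: request a sync once the cache has
       grown by more than half of its size limit since the last request,
       and remember the size at which that request was made. */
    struct fts_cache
    {
      size_t total_size;
      size_t total_size_at_sync;
    };

    static bool need_sync_request(fts_cache *cache, size_t size_limit)
    {
      if (cache->total_size - cache->total_size_at_sync > size_limit / 2)
      {
        cache->total_size_at_sync= cache->total_size;
        return true;                    /* enqueue a sync message */
      }
      return false;
    }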
|
trx_commit_in_memory(): Do not release the rseg reference before
trx_undo_commit_cleanup() has been invoked and the current transaction
is truly done with the rollback segment. The purpose of the reference
count is to prevent data races with trx_purge_truncate_history().
This is based on
mysql/mysql-server@ac79aa1522f33e6eb912133a81fa2614db764c9c.
|
InnoDB commit fails when the consecutive FTS_DOC_ID value
is greater than 4294967295.
The fix is to remove the delta FTS_DOC_ID value limitation and have
fts encode 8-byte values; the FTS_DOC_ID_MAX_STEP variable is removed.
Replaced the fts0vlc.ic file with fts0vlc.h.
fts_encode_int(): Should be able to encode a value of up to 10 bytes.
fts_get_encoded_len(): Should return the length of a value
which takes 10 bytes.
fts_decode_vlc(): Add a debug assertion to verify that the maximum
length allowed is 10.
mach_read_uint64_little_endian(): Reads a 64-bit value stored in
little-endian format.
Added a unit test case which checks the minimum and the maximum
value of the fts encoding.
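A hedged sketch of such a variable-length coding (7 payload bits per byte with the high bit marking the final byte, so a 64-bit value needs up to 10 bytes; the real routines live in fts0vlc.h):

    #include <cstdint>
    #include <cassert>

    /* Encode val into buf; returns the number of bytes written (1..10). */
    static int vlc_encode(uint64_t val, unsigned char *buf)
    {
      unsigned char tmp[10];
      int n= 0;
      do { tmp[n++]= val & 0x7F; val>>= 7; } while (val);
      for (int i= 0; i < n; i++)        /* most significant group first */
        buf[i]= tmp[n - 1 - i];
      buf[n - 1]|= 0x80;                /* flag the last byte */
      return n;
    }

    static uint64_t vlc_decode(const unsigned char *buf, int *len)
    {
      uint64_t val= 0;
      int i= 0;
      for (;;)
      {
        assert(i < 10);                 /* max length for 64-bit values */
        val= (val << 7) | (buf[i] & 0x7F);
        if (buf[i++] & 0x80)
          break;
      }
      *len= i;
      return val;
    }

    int main()
    {
      unsigned char buf[10];
      int elen= vlc_encode(UINT64_MAX, buf), dlen;
      assert(elen == 10);
      assert(vlc_decode(buf, &dlen) == UINT64_MAX && dlen == 10);
    }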
|
In commit 1811fd51fbae9e6c1f06ce93faef2bf1279cd3b6 the assertion
should have said error_reported instead of !error_reported.
But that revised assertion would still fail in main.defaults,
where ER_BAD_DATA is reported during CREATE TABLE.
|
create_table_info_t::innobase_table_flags(): Refuse to create
a PAGE_COMPRESSED table with PAGE_COMPRESSION_LEVEL=0 if also
innodb_compression_level=0.
The parameter value innodb_compression_level=0 was only somewhat
meaningful for testing or debugging ROW_FORMAT=COMPRESSED tables.
For the page_compressed format, it never made any sense, and the
check in dict_tf_is_valid_not_redundant() that was added in
72378a25830184f91005be7e80cfb28381c79f23 (MDEV-12873) would cause
the server to crash.
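A hedged sketch of the refusal (simplified; the real check sits in create_table_info_t::innobase_table_flags()):

    #include <cstdio>

    static unsigned innodb_compression_level= 0;  /* server default is 6 */

    /* Hypothetical simplified check: PAGE_COMPRESSION_LEVEL=0 means
       "use innodb_compression_level", and an effective level of 0 for a
       page_compressed table is meaningless and used to crash later in
       dict_tf_is_valid_not_redundant(). */
    static bool table_flags_ok(bool page_compressed,
                               unsigned page_compression_level)
    {
      unsigned effective= page_compression_level
                          ? page_compression_level
                          : innodb_compression_level;
      if (page_compressed && effective == 0)
      {
        fprintf(stderr, "InnoDB: PAGE_COMPRESSED requires a nonzero"
                        " compression level\n");
        return false;                   /* refuse to create the table */
      }
      return true;
    }

    int main() { return table_flags_ok(true, 0) ? 1 : 0; }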
|
The assertion is absolutely correct since no data access is possible after
XA PREPARE.
The check is added in mysql_ha_read.
|
This is a duplicate of MDEV-18278 89936f11e965, but I will add an
additional assertion.
Description:
The frm corruption should not be reported during CREATE TABLE. Normally
it isn't, and the data to fill TABLE is taken by the open_table_from_share
call. However, the vcol data is stored as an SQL string in
table->s->vcol_defs.str and is parsed anew on each table open.
This is impossible [or hard] to avoid, because it is hard to clone the
expression tree in general (it is easier to parse it again).
Normally parse_vcol_defs should only fail on semantic errors. If so,
error_reported is set to true. Any other failure is not expected during
table creation: there is either an unhandled/unacknowledged error, or
something went really wrong, like a failed memory allocation. All of this
should be asserted anyway.
Solution:
* Set *error_reported=true for the forward-references check;
* Assert for every unacknowledged error during table creation.
|
We should set the charset in
Item_func_json_format::fix_length_and_dec().
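A hedged sketch of the fix shape (simplified Item classes, not the actual sql/item_jsonfunc.h):

    /* Simplified model: the function item must announce the charset of
       its result, otherwise downstream code sees the default one. */
    struct CharsetInfo { const char *name; };

    struct Item
    {
      const CharsetInfo *collation= nullptr;
      virtual void fix_length_and_dec()= 0;
      virtual ~Item()= default;
    };

    struct Item_func_json_format_like : Item
    {
      Item *json_arg;
      /* The fix: derive the result charset from the argument here. */
      void fix_length_and_dec() override
      {
        collation= json_arg->collation;
      }
    };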
|
Let us use a minimal-size buffer pool to ensure that page flushing
will be slow enough so that LRU eviction cannot be avoided.
|
for their definitions
Do not print illegal table field names for a non-top-level SELECT list;
they will not be referred to in any case, but they create a problem for
parsing the printed result.
|
WRITE_CACHE' failed
Problem:
========
This patch addresses two issues.
First, if a CHANGE MASTER command is issued and an error happens
while locating the replica’s relay logs, the logs can be put into an
invalid state where future updates fail and future CHANGE MASTER
calls crash the server. More specifically, right before a replica
purges the relay logs (part of the `CHANGE MASTER TO` logic), the
relay log is temporarily closed with state LOG_TO_BE_OPENED. If the
server errors in-between the temporary log closure and purge, i.e.
during the function find_log_pos, the log should be closed.
MDEV-25284 reveals the log is not properly closed.
Second, upon issuing a RESET SLAVE ALL command, a slave’s GTID
filters are not cleared (DO_DOMAIN_IDS, IGNORE_DOMAIN_IDS,
IGNORE_SERVER_IDS). MySQL had a similar bug report, Bug #18816897,
which fixed this issue to clear IGNORE_SERVER_IDS after issuing
RESET SLAVE ALL in version 5.7.
Solution:
=========
To fix the first problem, the CHANGE MASTER error handling logic was
extended to transition the relay log state to LOG_CLOSED from
LOG_TO_BE_OPENED.
To fix the second problem, the RESET SLAVE ALL logic is extended to
clear the domain_id filter and ignore_server_ids.
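A hedged sketch of the second fix (illustrative structures; the real logic lives in the RESET SLAVE handling):

    #include <set>

    /* Simplified model of a replica's per-master filters. */
    struct Master_info
    {
      std::set<unsigned long> do_domain_ids;
      std::set<unsigned long> ignore_domain_ids;
      std::set<unsigned long> ignore_server_ids;
    };

    /* RESET SLAVE ALL should forget everything about the old master,
       including these filters (the old code left them in place). */
    static void reset_slave_all(Master_info *mi)
    {
      mi->do_domain_ids.clear();
      mi->ignore_domain_ids.clear();
      mi->ignore_server_ids.clear();
    }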
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
|