| Commit message | Author | Age | Files | Lines |
buf_dblwr_process()
There were several problems. Firstly, the page decompression code did not handle
possible decompression errors correctly. Secondly, not all compression methods
tolerate corrupted input (e.g. lz4 did not tolerate input that was compressed
using the snappy method). Finally, if the page is actually also encrypted, we
cannot decompress it. Solutions: add proper error handling to the decompression
code and add a post compression checksum to the page. As the whole page, including
the page checksum, is compressed, we can reuse the original checksum field for the
post compression checksum. With a post compression checksum we can detect most
corruptions. If no corruption is detected, hopefully the decompression code can
detect the remaining problems.
Doublewrite buffer page recovery for page compressed pages requires
that the post compression checksum matches. For pages from old releases
supporting page compression, the checksum must be BUF_NO_CHECKSUM_MAGIC.
Upgrade from older versions is supported, as the post compression
checksum check accepts the BUF_NO_CHECKSUM_MAGIC that those releases stored
in the checksum field.
Downgrade to older versions is not supported (assuming that there
were some changes to compressed tables), as the page compression code
would not tolerate any checksum other than BUF_NO_CHECKSUM_MAGIC.
innochecksum.cc is_page_corrupted:
If the page is compressed, verify the post compression checksum.
buf_page_decrypt_after_read
Return DB_PAGE_CORRUPTED if the page is found to be corrupted
by the post compression checksum check.
buf_page_io_complete
If the page is found corrupted after buf_page_decrypt_after_read,
there is no need to continue the page check.
buf_page_decrypt_after_read
Verify the post compression checksum before decompression and,
if it does not match, mark the page corrupted. Note that old
compressed pages do not really have a post compression
checksum, so they are treated as not corrupted, and then
we need to hope that the decompression code can handle the
possible corruptions by returning an error.
buf_calc_compressed_crc32
New function to calculate the post compression checksum
so that the necessary compression metadata fields are
included.
buf_dblwr_decompress
New function that handles the post compression checksum check
and page decompression if the check passes.
buf_dblwr_process
Verify the post compression checksum before trying to decompress
a page.
fil_space_verify_crypt_checksum
Remove incorrect code, as compressed and encrypted pages
do have a post encryption checksum.
fil_compress_page
Calculate and store the post compression checksum in the FIL_SPACE_OR_CHKSUM
field, as the original value is stored in the compressed image.
fil_decompress_page
Add error handling for the case where decompression fails.
fil_verify_compression_checksum
New function to verify the post compression checksum.
Compressed tablespaces from before this change have BUF_NO_CHECKSUM_MAGIC
in the checksum field and must be treated as not corrupted.
convert_error_code_to_mysql
Also handle page corruption (DB_PAGE_CORRUPTED) as HA_ERR_CRASHED.
Note that there are cases when we do not know for certain
whether a page is corrupted, corrupted and compressed, or still encrypted
after a failed decrypt; thus the tablespace may be marked as just corrupted.
Tests modified:
innodb-page_compression_[zip, lz4, lzma, lzo, bzip2, snappy]
modified to use innodb-page-compression.inc.
innodb-page-compression.inc: add innochecksum and intentional tablespace
corruption tests.
innodb-force-corrupt, innodb_bug14147491: add new error
messages to the mtr suppressions and new error codes.
New tests:
encryption/innodb-corrupted.test: test intentionally corrupted
tablespaces containing encryption and compression.
doublewrite-compressed: test doublewrite recovery for page
compressed tables.
innodb-import-101: import files from both big-endian and little-endian
machines.
This is the 10.1 version; use a null merge to 10.2, as 10.2 has its own version.
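The verify-then-decompress idea above can be sketched in a few lines. This is a minimal illustration, not the server code: the bitwise CRC-32, the magic constant value, and the function names below are stand-ins for InnoDB's buf_calc_compressed_crc32() and fil_verify_compression_checksum(), which use a hardware-accelerated CRC-32.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-in for the magic stored by old releases that had
// no post compression checksum.
static const uint32_t BUF_NO_CHECKSUM_MAGIC = 0xDEADBEEFU;

// Minimal bitwise CRC-32 (IEEE polynomial), a stand-in for ut_crc32().
uint32_t crc32_sketch(const uint8_t* data, size_t len) {
    uint32_t crc = 0xFFFFFFFFU;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320U & (0U - (crc & 1U)));
    }
    return ~crc;
}

// The checksum covers the compressed image (which already contains the
// original pre-compression checksum), so the on-page checksum field can
// be reused for the post compression checksum.
uint32_t calc_post_compression_checksum(const std::vector<uint8_t>& image) {
    return crc32_sketch(image.data(), image.size());
}

// Accept either a matching post compression checksum, or the magic that
// releases predating this change stored in the checksum field.
bool verify_compression_checksum(uint32_t stored,
                                 const std::vector<uint8_t>& image) {
    return stored == BUF_NO_CHECKSUM_MAGIC
        || stored == calc_post_compression_checksum(image);
}
```

Accepting the magic is what makes upgrades work: old pages pass the check unverified, and corruption in them can only be caught later by the decompression error handling.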
stay open)
on Appveyor
MDEV-15045 - wsrep_sst_mysqldump: enforce a minimum version only
grooverdan/10.1-MDEV-13789-freebsd-wsrep-sst-xtrabackup
MDEV-13789: FreeBSD wsrep_sst_xtrabackup-v2 - find compatibility +lsof
Based on #451 by angeloudy <angeloudy@yahoo.com> and
Tao Zhou <taozhou@ip-179.ish.com.au>
Check the availability of the rsync utility early, when starting a backup with
--rsync. Fail if it is not present.
list"
The warning was originally added in
commit c67663054a7db1c77dd1dbc85b63595667a53500
(MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that
was analyzed in https://lists.mysql.com/mysql/176250
on November 9, 2004.
Originally, the limit was 20,000 undo log headers or transactions,
but in commit 9d6d1902e091c868bb288e0ccf9f975ccb474db9
in MySQL 5.5.11 it was increased to 2,000,000.
The message can be triggered when the progress of purge is prevented
by a long-running transaction (or just an idle transaction whose
read view was started a long time ago), by running many transactions
that UPDATE or DELETE some records, then starting another transaction
with a read view, and finally by executing more than 2,000,000
transactions that UPDATE or DELETE records in InnoDB tables. Finally,
when the oldest long-running transaction is completed, purge would
run up to the next-oldest transaction, and there would still be more
than 2,000,000 transactions to purge.
Because the message can be triggered when the database is obviously
not corrupted, it should be removed. Heavy users of InnoDB should be
monitoring the "History list length" in SHOW ENGINE INNODB STATUS;
there is no need to spam the error log.
Roll back to the most general duplicate-removing strategy in case of different strategies for one position.
Backport the fix from 10.0.33 to 5.5, in case someone compiles XtraDB
with -DUNIV_LOG_ARCHIVE
The XtraDB option innodb_track_changed_pages causes
the function log_group_read_log_seg() to be invoked
even when recv_sys==NULL, leading to the SIGSEGV.
This regression was caused by
MDEV-11027 InnoDB log recovery is too noisy
error logs
innodb/buf_LRU_get_free_block
Add debug instrumentation to produce an error message about
no free pages. Print the error message only once and do not
enable the InnoDB monitor.
xtradb/buf_LRU_get_free_block
Add debug instrumentation to produce an error message about
no free pages. Print the error message only once and do not
enable the InnoDB monitor. Remove code that does not seem to
be used.
innodb-lru-force-no-free-page.test
New test case to force producing the desired error message.
ALTER TABLE
dict_foreign_find_index(): Ignore incompletely created indexes.
After a failed ADD UNIQUE INDEX, an incompletely created index
could be left behind until the next ALTER TABLE statement.
boundary
This bug affects both writing and reading encrypted redo log in
MariaDB 10.1, starting from version 10.1.3 which added support for
innodb_encrypt_log. That is, InnoDB crash recovery and Mariabackup
will sometimes fail when innodb_encrypt_log is used.
MariaDB 10.2 or Mariabackup 10.2 or later versions are not affected.
log_block_get_start_lsn(): Remove. This function would cause trouble if
a log segment that is being read crosses a 32-bit boundary of the LSN,
because the function does not allow the most significant 32 bits of the
LSN to change.
log_blocks_crypt(), log_encrypt_before_write(), log_decrypt_after_read():
Add the parameter "lsn" for the start LSN of the block.
log_blocks_encrypt(): Remove (unused function).
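The 32-bit boundary failure can be illustrated with a small sketch. The reconstruction function below is hypothetical (the real log_block_get_start_lsn() is more involved); it shows why keeping the most significant 32 bits of a reference LSN fixed goes wrong when the segment crosses a 2^32 boundary, and why passing the true 64-bit "lsn" as a parameter sidesteps the problem.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical reconstruction of what log_block_get_start_lsn()
// effectively did: rebuild a block's start LSN from a reference LSN's
// high 32 bits and the block's position within the low 32 bits.
uint64_t start_lsn_from_low32(uint64_t ref_lsn, uint32_t block_low32) {
    // The most significant 32 bits of ref_lsn are kept fixed, which is
    // wrong whenever the segment being read crosses a 2^32 LSN boundary.
    return (ref_lsn & 0xFFFFFFFF00000000ULL) | block_low32;
}

// The fix avoids reconstruction entirely: the caller supplies the true
// 64-bit start LSN of the block (the added "lsn" parameter).
uint64_t start_lsn_from_param(uint64_t lsn) {
    return lsn;
}
```

With a reference LSN just above 2^32 and a block that actually starts just below it, the reconstructed value is off by exactly 2^32, so the derived encryption tweak (and thus decryption) is wrong for those blocks.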
Commit 2cd316911309e3db4007aa224f7874bfbeb14032 broke conf_to_src,
because the strings library now depends on mysys (my_alloc etc. are
now used directly in the string library).
Fix by adding the appropriate dependency.
Also exclude conf_to_src from VS IDE builds; EXCLUDE_FROM_ALL
is not enough for that.
debug_key_management
encrypt_and_grep
innodb_encryption
If the real table count differs from what the test expects, the test
just hangs waiting for the hardcoded number to be reached, and then exits
with **failed** after 10 minutes of waiting: quite unfriendly, and it is
hard to figure out what is going on.
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a
column prefix of an externally stored column, the already parsed
part of the undo log record may contain a reference to
an off-page column. This is the case in the bug58912 test in
innodb.innodb.
from secondary indexes
This is a regression caused by MDEV-14051 'Undo log record is too big.'
Purge in the secondary index is wrongly skipped in
row_purge_upd_exist_or_extern() because node->row does not contain all
indexed columns.
trx_undo_rec_get_partial_row(): Add the parameter for node->update
so that the updated columns will be copied from the initial part
of the undo log record.
installation
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
If a translation table is present when we materialize the derived table,
then change it to point to the materialized table.
Added debug info to see what really happens with which derived table.
galera
cherry-pick e6ce97a5928
Coverage for temporary table modifications in read-only transactions.
Introduced in 5.7 by 325cdf426
A suggestion from serg@mariadb.org to make role propagation simpler.
Instead of gathering the leaf roles in an array, which for very wide
graphs could potentially mean a big part of the whole roles schema, keep
the previous logic. When finally merging a role, set its counter
to something positive.
This effectively marks the role as having been merged, so a random pass
through the roles hash that touches a previously merged role will no longer
cause the problem described in MDEV-12366, as propagate_role_grants_action
will stop attempting to merge from that role.
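A minimal sketch of the counter idea, with illustrative data structures only (the server's role graph and merge logic are far more involved than this):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Hypothetical role record: counter > 0 means "already merged".
struct Role {
    int counter = 0;
    std::set<std::string> grants;
};

// Merge a role's grants into the effective set once; a later pass that
// touches the same role sees the positive counter and stops there,
// instead of re-merging and re-propagating.
void merge_role(std::map<std::string, Role>& roles, const std::string& name,
                std::set<std::string>& effective) {
    Role& r = roles[name];
    if (r.counter > 0)
        return;                      // previously merged: stop propagation
    effective.insert(r.grants.begin(), r.grants.end());
    r.counter = 1;                   // mark merged
}
```

The positive counter plays the role of the "visited" mark, so no array of leaf roles needs to be gathered up front.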
partitioned MyISAM table with DATA DIRECTORY/INDEX DIRECTORY options
set data_file_name and index_file_name in HA_CREATE_INFO
before calling check_if_incompatible_data()
don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call,
do it once in open() (because they don't change) on TABLE::mem_root
(so they stay valid until the table is closed)
Problem: GTIDs are not transferred in Galera Cluster.
Solution: we need to transfer the GTID when the cluster is either
slave or master in async replication. In normal GTID replication, the GTID
is generated on the receiving node itself, and it is always in sync with the
other nodes, because Galera keeps the nodes in sync, so all nodes get the same
number of event groups. The issue arises when, say, Galera is a slave in async
replication.
A
| (Async replication)
D <-> E <-> F {Galera replication}
What should happen is that all nodes apply the master's GTID, but this does
not happen, because nodes E and F do not receive the GTID from D in the write
set. What E (or F) does instead is apply wsrep_gtid_domain_id, D's server-id,
and E's next GTID sequence number. This generated GTID does not always work,
for example when A has a different domain id.
So in this commit, when a Galera node sees that an event was received from the
master, we simply write a Gtid_Log_Event into the write set and send it to the
other nodes.
Sometimes, the test would fail with a result difference for
the READ UNCOMMITTED read, because the incremental backup
would finish before redo log was written for all the rows
that were inserted in the second batch.
To fix that, cause a redo log write by creating another
transaction. The transaction rollback (which internally does commit)
will be flushed to the redo log, and before that, all the preceding
changes will be flushed to the redo log as well.
table rebuild
row_log_table_apply_insert_low(), row_log_table_apply_update():
When reporting the error_key_num, only count the clustered index
if it corresponds to a key in the SQL layer.
The assertion failure was probably introduced by the (incomplete)
MySQL 5.6.28 bug fix
Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL
WITH INCORRECT KEY NAME
which we are improving.
Side note: the fix was incorrectly merged to MySQL 5.7.10;
incorrect key names will continue to be reported in MySQL 5.7.
These assertions were disabled in MariaDB 10.1.1 in
commit df4dd593f29aec8e2116aec1775ad4b8833d8c93
with a bogus comment referring to the function wsrep_fake_trx_id()
that was introduced in the very same commit.