path: root/storage/innobase/row/row0ftsort.cc
author     Marko Mäkelä <marko.makela@mariadb.com>  2018-07-25 17:05:47 +0300
committer  Marko Mäkelä <marko.makela@mariadb.com>  2018-07-26 08:44:42 +0300
commit     0f90728bc0f8bc946a61500801b23f8a316e73d5 (patch)
tree       735b003cdecab7e6a866e49dba9fd0a32e4e8c36  /storage/innobase/row/row0ftsort.cc
parent     32eb5823e42d488c2c8eb85fdf822733e0a482a3 (diff)
download   mariadb-git-0f90728bc0f8bc946a61500801b23f8a316e73d5.tar.gz
MDEV-16809 Allow full redo logging for ALTER TABLE
Introduce the configuration option innodb_log_optimize_ddl for controlling whether native index creation or table-rebuild in InnoDB should keep optimizing the redo log (and writing MLOG_INDEX_LOAD records to ensure that concurrent backup would fail). By default, we have innodb_log_optimize_ddl=ON, that is, the default behaviour that was introduced in MariaDB 10.2.2 (with the merge of InnoDB from MySQL 5.7) will be unchanged.

BtrBulk::m_trx: Replaces m_trx_id. We must be able to check for KILL QUERY even if !m_flush_observer (innodb_log_optimize_ddl=OFF).

page_cur_insert_rec_write_log(): Declare globally, so that this can be called from PageBulk::insert().

row_merge_insert_index_tuples(): Remove the unused parameter trx_id.

row_merge_build_indexes(): Enable or disable redo logging based on the innodb_log_optimize_ddl parameter.

PageBulk::init(), PageBulk::insert(), PageBulk::finish(): Write redo log records if needed. For ROW_FORMAT=COMPRESSED, redo log will be written in PageBulk::compress() unless we called m_mtr.set_log_mode(MTR_LOG_NO_REDO).
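Below is a minimal sketch, not code from this patch, of the page-level decision described above for PageBulk::init(): redo logging is skipped only while a FlushObserver is attached, which after this change corresponds to innodb_log_optimize_ddl=ON. The fragment assumes it lives inside btr0bulk.cc with the usual InnoDB headers available; the exact call sequence is an assumption, while m_mtr, m_flush_observer and MTR_LOG_NO_REDO come from the commit message or existing InnoDB code.

	/* Illustrative sketch only (assumed placement: PageBulk::init()
	in btr0bulk.cc). Disable redo logging for the bulk load only
	when a FlushObserver is attached, that is, when
	innodb_log_optimize_ddl=ON; otherwise keep full redo logging so
	that a concurrent backup does not have to fail. */
	if (m_flush_observer != NULL) {
		m_mtr.set_log_mode(MTR_LOG_NO_REDO);
		m_mtr.set_flush_observer(m_flush_observer);
	}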
Diffstat (limited to 'storage/innobase/row/row0ftsort.cc')
-rw-r--r--	storage/innobase/row/row0ftsort.cc	7
1 file changed, 3 insertions, 4 deletions
diff --git a/storage/innobase/row/row0ftsort.cc b/storage/innobase/row/row0ftsort.cc
index c59d3a95e3e..6abc84ddee7 100644
--- a/storage/innobase/row/row0ftsort.cc
+++ b/storage/innobase/row/row0ftsort.cc
@@ -1684,11 +1684,10 @@ row_fts_merge_insert(
 	dict_table_close(aux_table, FALSE, FALSE);
 	aux_index = dict_table_get_first_index(aux_table);
 
-	FlushObserver*	observer;
-	observer = psort_info[0].psort_common->trx->flush_observer;
-
 	/* Create bulk load instance */
-	ins_ctx.btr_bulk = UT_NEW_NOKEY(BtrBulk(aux_index, trx->id, observer));
+	ins_ctx.btr_bulk = UT_NEW_NOKEY(
+		BtrBulk(aux_index, trx, psort_info[0].psort_common->trx
+			->flush_observer));
 
 	/* Create tuple for insert */
 	ins_ctx.tuple = dtuple_create(heap, dict_index_get_n_fields(aux_index));
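Following the hunk above, here is a minimal sketch of why BtrBulk now receives the trx pointer instead of trx->id: with innodb_log_optimize_ddl=OFF there is no FlushObserver, so the bulk loader must be able to look at the transaction itself to honour KILL QUERY. Only the member name m_trx and that motivation come from the commit message; the check below is illustrative, not the actual BtrBulk code.

	/* Illustrative sketch only (assumed placement: inside a BtrBulk
	member function in btr0bulk.cc). Check for KILL QUERY directly
	on the transaction, without relying on a FlushObserver that may
	not exist when innodb_log_optimize_ddl=OFF. */
	if (trx_is_interrupted(m_trx)) {
		return(DB_INTERRUPTED);
	}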