| author | Thirunarayanan Balathandayuthapani <thiru@mariadb.com> | 2020-12-21 14:02:46 +0530 |
|---|---|---|
| committer | Thirunarayanan Balathandayuthapani <thiru@mariadb.com> | 2020-12-21 14:02:46 +0530 |
| commit | 4ad11a6d7fa4a269cc8c63b8031d1ff74201ebad (patch) | |
| tree | 613d856a1e140def3a7f34146d2b62bfe4569223 /sql | |
| parent | 4e0004ea13a9cf2883afe525b947dbefb18f32f3 (diff) | |
| download | mariadb-git-10.6-MDEV-515.tar.gz | |
MDEV-515 InnoDB bulk insert (10.6-MDEV-515)
dict_table_t::bulk_trx_id: Stores the bulk insert transaction id.
Protected by an exclusive lock on the table. Read operations from
other connections can use it to decide whether they may read the
table.
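A minimal standalone sketch of the new field, assuming invented names (`ToyTable`, `set_bulk_trx`); it is not InnoDB's real data structure. It also models the `remove_bulk_trx()` reset described a few items further down.

```cpp
#include <cstdint>

struct ToyTable {
  uint64_t bulk_trx_id = 0;   // 0 means "no bulk load pending"

  // Caller must hold the table's exclusive lock (asserted in the real code,
  // simply assumed here).
  void set_bulk_trx(uint64_t trx_id) { bulk_trx_id = trx_id; }

  // Analogue of dict_table_t::remove_bulk_trx(): clear the marker once the
  // bulk load has been committed or rolled back.
  void remove_bulk_trx() { bulk_trx_id = 0; }
};
```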
ins_node_t::bulk_insert: Flag that indicates a bulk insert into the
table.
trx_mod_table_time_t::bulk_insert_undo: Enables undo logging for
subsequent inserts during a bulk insert operation.
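A toy illustration of how these two flags could interact when deciding whether a row insert needs a per-row undo record. All names (`ToyInsertNode`, `needs_row_undo`, ...) are hypothetical stand-ins, not InnoDB's API.

```cpp
#include <cstdio>

struct ToyInsertNode {
  bool bulk_insert = false;       // this statement is treated as a bulk insert
};

struct ToyModTableTime {
  bool bulk_insert_undo = false;  // undo logging re-enabled for later inserts
};

// In a bulk insert, individual rows are normally not undo-logged (rollback
// just empties the table); once bulk_insert_undo is set, rows are undo-logged
// again.
bool needs_row_undo(const ToyInsertNode& node, const ToyModTableTime& mod) {
  if (!node.bulk_insert)
    return true;                  // ordinary insert: always undo-log the row
  return mod.bulk_insert_undo;    // bulk insert: only if undo was re-enabled
}

int main() {
  ToyInsertNode node{true};
  ToyModTableTime mod{false};
  std::printf("row undo needed: %d\n", needs_row_undo(node, mod)); // 0
  mod.bulk_insert_undo = true;    // e.g. an IGNORE statement follows
  std::printf("row undo needed: %d\n", needs_row_undo(node, mod)); // 1
}
```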
Introduced the new undo log record TRX_UNDO_EMPTY. It must be the
first undo log record written during a bulk insert operation.
During rollback, when InnoDB encounters this record it empties the
table.
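A rough standalone model of that rollback rule: undo records are undone newest-first, and reaching the (first-written) EMPTY record empties the table. Record layout and names are invented for illustration only.

```cpp
#include <vector>

enum class ToyUndoType { INSERT_REC, EMPTY };   // EMPTY stands in for TRX_UNDO_EMPTY

struct ToyUndoRec { ToyUndoType type; int table_id; };

struct ToyTable {
  int id;
  std::vector<int> rows;
  void empty_table() { rows.clear(); }          // analogue of dict_table_t::empty_table()
};

void rollback(std::vector<ToyUndoRec>& undo_log, ToyTable& table) {
  // Undo records are applied in reverse order of writing.
  for (auto it = undo_log.rbegin(); it != undo_log.rend(); ++it) {
    if (it->type == ToyUndoType::EMPTY && it->table_id == table.id)
      table.empty_table();                      // bulk insert rollback: drop everything
    else if (it->type == ToyUndoType::INSERT_REC && !table.rows.empty())
      table.rows.pop_back();                    // undo a single row insert
  }
  undo_log.clear();
}
```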
dict_table_t::empty_table(): Empties all indexes associated with the
table. This is the undo operation of a bulk insert. The metadata
record is retained during this rollback operation.
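A simplified sketch of the "empty every index but keep the metadata record" idea; the types and fields below are hypothetical and do not reflect InnoDB's real page or record layout.

```cpp
#include <string>
#include <vector>

struct ToyRecord { bool is_metadata = false; std::string payload; };

struct ToyIndex {
  bool is_clustered = false;
  std::vector<ToyRecord> records;
};

struct ToyTable {
  std::vector<ToyIndex> indexes;

  void empty_table() {
    for (ToyIndex& index : indexes) {
      ToyRecord metadata;
      bool had_metadata = false;
      if (index.is_clustered) {
        // Preserve the metadata record, if the clustered index carries one.
        for (const ToyRecord& r : index.records)
          if (r.is_metadata) { metadata = r; had_metadata = true; break; }
      }
      index.records.clear();          // analogue of re-initialising the index root
      if (had_metadata)
        index.records.push_back(metadata);
    }
  }
};
```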
dict_table_t::remove_bulk_trx(): Resets the bulk_trx_id
row_log_table_empty(): If the table is being rebuilt while
empty_table() runs, log the EMPTY operation in the online log.
row_log_online_op(): If a secondary index is being created while
empty_table() runs, log a ROW_OP_EMPTY operation in the online
log.
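A standalone sketch of this producer side: while the toy empty_table() runs, every index (or rebuilt table) still being built gets an "empty" marker appended to its online log so the builder can replay it later. Names and the log format are illustrative only.

```cpp
#include <vector>

enum class ToyOnlineOp { INSERT, DELETE, EMPTY };   // EMPTY ~ ROW_OP_EMPTY / ROW_T_EMPTY

struct ToyOnlineLog { std::vector<ToyOnlineOp> ops; };

struct ToyIndex {
  bool online_creation = false;     // index creation or table rebuild in progress
  ToyOnlineLog* online_log = nullptr;
};

// Record the EMPTY operation for every index whose online log is still being
// accumulated by a concurrent DDL operation.
void log_empty_if_online(std::vector<ToyIndex>& indexes) {
  for (ToyIndex& index : indexes)
    if (index.online_creation && index.online_log)
      index.online_log->ops.push_back(ToyOnlineOp::EMPTY);
}
```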
row_log_table_apply_empty(): Applies ROW_T_EMPTY to the table that
was being rebuilt, calling empty_table() to empty it.
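The matching consumer side, sketched with the same invented toy types: when the online log is replayed onto the table being rebuilt, an EMPTY entry discards everything applied so far.

```cpp
#include <utility>
#include <vector>

enum class ToyOnlineOp { INSERT, DELETE, EMPTY };

struct ToyNewTable {
  std::vector<int> rows;
  void empty_table() { rows.clear(); }
};

void apply_online_log(const std::vector<std::pair<ToyOnlineOp, int>>& log,
                      ToyNewTable& t) {
  for (const auto& [op, row] : log) {
    switch (op) {
    case ToyOnlineOp::INSERT: t.rows.push_back(row); break;
    case ToyOnlineOp::DELETE: if (!t.rows.empty()) t.rows.pop_back(); break;
    case ToyOnlineOp::EMPTY:  t.empty_table(); break;   // replay of the bulk-insert rollback
    }
  }
}
```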
btr_root_page_init(): Initializes the root page of the B-tree.
It is used during dict_index_t::empty() and btr_create().
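A very rough sketch of what "initialise the root page" can mean for a toy B-tree page: wipe the user records and reset the page header fields. The layout below is invented and does not reflect InnoDB's real page format.

```cpp
#include <cstdint>
#include <vector>

struct ToyPage {
  uint64_t index_id = 0;
  uint32_t level = 0;                 // root of an empty tree is also a leaf: level 0
  std::vector<std::vector<uint8_t>> user_records;
};

void toy_root_page_init(ToyPage& page, uint64_t index_id) {
  page.user_records.clear();          // drop everything previously stored
  page.level = 0;                     // tree shrinks back to a single leaf/root page
  page.index_id = index_id;           // stamp the page with the owning index id
}
```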
btr_cur_ins_lock_and_undo(): During the first insert of a bulk
insert operation, write the TRX_UNDO_EMPTY undo log record for it
and reset DB_TRX_ID and DB_ROLL_PTR.
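A toy model of that first-insert path: write the table-level empty undo record once, and store the row with reset system columns instead of a per-row undo pointer. All names and fields here are hypothetical.

```cpp
#include <cstdint>
#include <vector>

struct ToyRow { uint64_t db_trx_id; uint64_t db_roll_ptr; int value; };

struct ToyTrx {
  uint64_t id = 0;
  bool wrote_empty_undo = false;      // has a TRX_UNDO_EMPTY-style record been written?
  std::vector<int> undo_log;          // table ids covered by an "empty" undo record
};

void bulk_insert_row(ToyTrx& trx, int table_id, std::vector<ToyRow>& rows, int value) {
  if (!trx.wrote_empty_undo) {
    trx.undo_log.push_back(table_id); // one table-level undo record covers the whole load
    trx.wrote_empty_undo = true;
  }
  // No per-row undo: the row carries reset DB_TRX_ID / DB_ROLL_PTR analogues.
  rows.push_back(ToyRow{/*db_trx_id=*/0, /*db_roll_ptr=*/0, value});
}
```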
trx_mark_sql_stat_end(): Sets bulk_insert_undo for the bulk
operation on the table, so that subsequent inserts do allow undo
logging.
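A sketch of the statement-boundary idea, under the assumption that the flag is flipped once per modified table when a statement ends; the transaction and bookkeeping types are invented.

```cpp
#include <map>

struct ToyTableMod {
  bool started_bulk_insert = false;
  bool bulk_insert_undo = false;
};

struct ToyTrx {
  std::map<int, ToyTableMod> modified_tables;   // keyed by table id

  // Analogue of trx_mark_sql_stat_end(): called when a SQL statement finishes.
  void mark_sql_stat_end() {
    for (auto& entry : modified_tables)
      if (entry.second.started_bulk_insert)
        entry.second.bulk_insert_undo = true;   // later inserts are undo-logged again
  }
};
```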
row_search_mvcc(): Checks whether the current transaction can view
the table records, based on bulk_trx_id.
row_merge_read_clustered_index(): Avoids reading the table if its
bulk transaction id is not visible within the current transaction's
read view.
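A sketch of the read-time gate shared by both paths: if the bulk load's transaction id is not visible to the reader's view, the table is treated as having no visible rows for that reader. The read-view structure and visibility rule below are simplified stand-ins, not InnoDB's MVCC code.

```cpp
#include <cstdint>
#include <set>

struct ToyReadView {
  uint64_t low_limit_id;            // ids >= this had not started when the view was made
  std::set<uint64_t> active_ids;    // transactions active at view creation
  bool sees(uint64_t id) const { return id < low_limit_id && !active_ids.count(id); }
};

struct ToyTable { uint64_t bulk_trx_id = 0; };

enum class ReadDecision { READ_ROWS, SKIP_TABLE };

ReadDecision check_bulk_visibility(const ToyTable& t, const ToyReadView& view,
                                   uint64_t own_trx_id) {
  if (t.bulk_trx_id == 0 || t.bulk_trx_id == own_trx_id || view.sees(t.bulk_trx_id))
    return ReadDecision::READ_ROWS; // bulk load is ours or committed before our view
  return ReadDecision::SKIP_TABLE;  // pretend the table has no visible rows
}
```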
- The HA_EXTRA_IGNORE_INSERT flag in ha_innobase::extra() indicates
  an IGNORE statement. It sets the bulk insert undo for the
  transaction (see the sketch after this list).
- Add an extra insert to many test cases to avoid hangs: MDEV-515
  takes an X-lock on the table for the first insert, so a concurrent
  insert would wait for the X-lock to be released.
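A toy sketch of how an engine could react to that IGNORE hint: when the SQL layer signals "this insert runs with IGNORE", re-enable undo logging for the ongoing bulk insert so individual rows can still be rolled back. The handler class, flag value, and transaction object here are invented stand-ins, not the real handler API.

```cpp
enum toy_extra_function { TOY_EXTRA_IGNORE_INSERT, TOY_EXTRA_OTHER };

struct ToyTrx { bool bulk_insert_undo = false; };

class ToyHandler {
public:
  explicit ToyHandler(ToyTrx* trx) : trx_(trx) {}

  int extra(toy_extra_function operation) {
    if (operation == TOY_EXTRA_IGNORE_INSERT && trx_)
      trx_->bulk_insert_undo = true;   // IGNORE may need per-row rollback
    return 0;
  }

private:
  ToyTrx* trx_;
};
```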
Diffstat (limited to 'sql')
| -rw-r--r-- | sql/ha_partition.cc | 2 |
| -rw-r--r-- | sql/sql_insert.cc | 2 |
| -rw-r--r-- | sql/sql_table.cc | 2 |
3 files changed, 6 insertions, 0 deletions
diff --git a/sql/ha_partition.cc b/sql/ha_partition.cc
index 470f59fe15f..291d8f92cb8 100644
--- a/sql/ha_partition.cc
+++ b/sql/ha_partition.cc
@@ -9146,6 +9146,8 @@ int ha_partition::extra(enum ha_extra_function operation)
   case HA_EXTRA_END_ALTER_COPY:
   case HA_EXTRA_FAKE_START_STMT:
     DBUG_RETURN(loop_partitions(extra_cb, &operation));
+  case HA_EXTRA_IGNORE_INSERT:
+    DBUG_RETURN(loop_partitions(extra_cb, &operation));
   default:
   {
     /* Temporary crash to discover what is wrong */
diff --git a/sql/sql_insert.cc b/sql/sql_insert.cc
index f26ee27df42..697ac87bd53 100644
--- a/sql/sql_insert.cc
+++ b/sql/sql_insert.cc
@@ -2114,6 +2114,8 @@ int write_record(THD *thd, TABLE *table, COPY_INFO *info, select_result *sink)
     goto after_trg_or_ignored_err;
   }

+  if (info->handle_duplicates == DUP_ERROR && info->ignore)
+    table->file->extra(HA_EXTRA_IGNORE_INSERT);
 after_trg_n_copied_inc:
   info->copied++;
   thd->record_first_successful_insert_id_in_cur_stmt(table->file->insert_id_for_cur_row);
diff --git a/sql/sql_table.cc b/sql/sql_table.cc
index 318c01bb1dc..f1f57471253 100644
--- a/sql/sql_table.cc
+++ b/sql/sql_table.cc
@@ -11499,6 +11499,8 @@ copy_data_between_tables(THD *thd, TABLE *from, TABLE *to,
     }
     else
     {
+      if (ignore)
+        to->file->extra(HA_EXTRA_IGNORE_INSERT);
       DEBUG_SYNC(thd, "copy_data_between_tables_before");
       found_count++;
       mysql_stage_set_work_completed(thd->m_stage_progress_psi, found_count);