| author | Sachin <sachin.setiya@mariadb.com> | 2019-10-09 21:16:31 +0530 |
|---|---|---|
| committer | Sachin <sachin.setiya@mariadb.com> | 2020-02-03 12:44:31 +0530 |
| commit | eed6d215f13cae8b84d9381918a3bd56dcf16188 (patch) | |
| tree | eda4beb45c9f485f04662e4801011ddc1c728a51 /storage/myisam/ha_myisam.cc | |
| parent | b615d275b8c26ecec943003e1275ee19f94d9887 (diff) | |
| download | mariadb-git-eed6d215f13cae8b84d9381918a3bd56dcf16188.tar.gz | |
MDEV-20001 Potential dangerous regression: INSERT INTO >=100 rows fail for myisam table with HASH indexes
Problem:
When we do a bulk insert of more than MI_MIN_ROWS_TO_DISABLE_INDEXES (100)
rows, we disable the indexes to speed up the insert. But the current logic
also disables the long unique indexes.
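For context, here is a sketch of the pre-patch mi_disable_indexes_for_rebuild(), reconstructed from the inlined body in the diff below (not a verbatim copy of the old source). Note that nothing in its condition exempts long unique hash keys, which is exactly the bug:

```c
#include "myisamdef.h"  /* MI_INFO, MYISAM_SHARE, MI_KEYDEF, key flags */

/* Sketch reconstructed from the inlined body in the diff below; the real
   pre-patch function may differ in detail. */
static void mi_disable_indexes_for_rebuild(MI_INFO *info, ha_rows rows,
                                           my_bool all_keys)
{
  MYISAM_SHARE *share= info->s;
  MI_KEYDEF *key= share->keyinfo;
  uint i;

  DBUG_ASSERT(info->state->records == 0 &&
              (!rows || rows >= MI_MIN_ROWS_TO_DISABLE_INDEXES));
  for (i= 0; i < share->base.keys; i++, key++)
  {
    /* Skip spatial and auto-increment keys, keys too big to sort, and
       (unless all_keys) unique keys -- but there is no
       HA_KEY_ALG_LONG_HASH check, so long unique hash keys were
       deactivated along with the rest. */
    if (!(key->flag & (HA_SPATIAL | HA_AUTO_KEY)) &&
        !mi_too_big_key_for_sort(key, rows) &&
        share->base.auto_key != i + 1 &&
        (all_keys || !(key->flag & HA_NOSAME)))
    {
      mi_clear_key_active(share->state.key_map, i);
      info->update|= HA_STATE_CHANGED;
      info->create_unique_index_by_sort= all_keys;
    }
  }
}
```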
Solution:
In ha_myisam::start_bulk_insert, if we find a long hash index
(HA_KEY_ALG_LONG_HASH), we do not disable it.
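Why must the index stay active? The uniqueness of a long unique key is checked per row at write time (check_duplicate_long_entries, called from ha_write_row), and that check has to probe the hash index. The toy model below illustrates the failure mode; all names in it are invented for illustration and do not come from the MariaDB source:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model only: a long unique key is enforced by probing its hash
   index on every row write, which requires the index to be active. */
typedef struct {
  bool active;      /* is the hash index being maintained? */
  /* ... hash entries would live here ... */
} toy_long_hash_index;

/* Mimics the per-row uniqueness check done at write time. */
static int toy_write_row(toy_long_hash_index *idx)
{
  if (!idx->active)
    return 1;       /* probe impossible: the row write fails */
  /* ... probe index, reject duplicates, store hash of new row ... */
  return 0;
}

int main(void)
{
  toy_long_hash_index idx= { .active= false };  /* old bulk-insert path */
  if (toy_write_row(&idx))
    printf("write fails: long unique check needs an active index\n");
  return 0;
}
```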
This commit also refactors mi_disable_indexes_for_rebuild: since the
function is called in only one place, it is inlined into
start_bulk_insert. mi_clear_key_active is added to myisamdef.h because it
is now also used in ha_myisam.cc.
(The same is done for the Aria storage engine.)
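For reference, key_map is a bitmap with one bit per index, and mi_clear_key_active() clears the bit of the index being deactivated. A conceptual sketch of the semantics, assuming the common 64-bit key_map (the real definition in myisamdef.h is a macro, not this function):

```c
#include <stdint.h>

/* Conceptual sketch only: assumes key_map is a 64-bit bitmap with one
   bit per index, as in MyISAM builds with up to 64 indexes. */
static inline void clear_key_active(uint64_t *key_map, unsigned keyno)
{
  *key_map&= ~(1ULL << keyno);   /* mark index 'keyno' inactive */
}
```

In the diff below this is what mi_clear_key_active(share->state.key_map, i) does for every key the loop decides to deactivate.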
Diffstat (limited to 'storage/myisam/ha_myisam.cc')
-rw-r--r-- | storage/myisam/ha_myisam.cc | 30 |
1 file changed, 29 insertions, 1 deletion
```diff
diff --git a/storage/myisam/ha_myisam.cc b/storage/myisam/ha_myisam.cc
index c046572f0c8..f5f36266347 100644
--- a/storage/myisam/ha_myisam.cc
+++ b/storage/myisam/ha_myisam.cc
@@ -1750,7 +1750,35 @@ void ha_myisam::start_bulk_insert(ha_rows rows, uint flags)
     else
     {
       my_bool all_keys= MY_TEST(flags & HA_CREATE_UNIQUE_INDEX_BY_SORT);
-      mi_disable_indexes_for_rebuild(file, rows, all_keys);
+      MYISAM_SHARE *share=file->s;
+      MI_KEYDEF *key=share->keyinfo;
+      uint i;
+      /*
+        Deactivate all indexes that can be recreated fast.
+        These include packed keys on which sorting will use more temporary
+        space than the max allowed file length or for which the unpacked keys
+        will take much more space than packed keys.
+        Note that 'rows' may be zero for the case when we don't know how many
+        rows we will put into the file.
+        Long unique indexes (HA_KEY_ALG_LONG_HASH) will not be disabled
+        because their unique property is enforced at the time of ha_write_row
+        (check_duplicate_long_entries), so we need an active index at the
+        time of insert.
+      */
+      DBUG_ASSERT(file->state->records == 0 &&
+                  (!rows || rows >= MI_MIN_ROWS_TO_DISABLE_INDEXES));
+      for (i=0 ; i < share->base.keys ; i++,key++)
+      {
+        if (!(key->flag & (HA_SPATIAL | HA_AUTO_KEY)) &&
+            ! mi_too_big_key_for_sort(key,rows) && file->s->base.auto_key != i+1 &&
+            (all_keys || !(key->flag & HA_NOSAME)) &&
+            table->key_info[i].algorithm != HA_KEY_ALG_LONG_HASH)
+        {
+          mi_clear_key_active(share->state.key_map, i);
+          file->update|= HA_STATE_CHANGED;
+          file->create_unique_index_by_sort= all_keys;
+        }
+      }
     }
   }
   else
```