From eed6d215f13cae8b84d9381918a3bd56dcf16188 Mon Sep 17 00:00:00 2001
From: Sachin
Date: Wed, 9 Oct 2019 21:16:31 +0530
Subject: MDEV-20001 Potentially dangerous regression: INSERT INTO >= 100 rows
 fails for MyISAM table with HASH indexes

Problem:
When we do a bulk insert with more than MI_MIN_ROWS_TO_DISABLE_INDEXES (100)
rows, we disable the indexes to speed up the insert. However, the current
logic also disables the long unique indexes, which is what makes such
inserts fail.

Solution:
In ha_myisam::start_bulk_insert(), if we find a long hash index
(HA_KEY_ALG_LONG_HASH) we do not disable it.

This commit also refactors mi_disable_indexes_for_rebuild(): since the
function is called from only one place, it is inlined into
start_bulk_insert(). mi_clear_key_active() is added to myisamdef.h because
it is now also used in ha_myisam.cc.

(The same is done for the Aria storage engine.)
---
 include/myisam.h | 1 +
 1 file changed, 1 insertion(+)

(limited to 'include/myisam.h')

diff --git a/include/myisam.h b/include/myisam.h
index 216f041c8a9..f2e31bb9f60 100644
--- a/include/myisam.h
+++ b/include/myisam.h
@@ -430,6 +430,7 @@ int sort_ft_buf_flush(MI_SORT_PARAM *sort_param);
 int thr_write_keys(MI_SORT_PARAM *sort_param);
 int sort_write_record(MI_SORT_PARAM *sort_param);
 int _create_index_by_sort(MI_SORT_PARAM *info,my_bool no_messages, ulonglong);
+my_bool mi_too_big_key_for_sort(MI_KEYDEF *key, ha_rows rows);
 #ifdef __cplusplus
 }
--
cgit v1.2.1
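
For context, a minimal, self-contained C sketch of the idea described in the
commit message: when indexes are disabled for a bulk-insert rebuild, keys
using the long unique hash algorithm must stay active. This is not the actual
ha_myisam.cc change (which is not part of this filtered diff view); the
demo_keydef type, the demo_disable_indexes_for_rebuild() helper and the
constant value of HA_KEY_ALG_LONG_HASH are simplified stand-ins, and only the
bitmap manipulation mirrors the mi_clear_key_active() macro mentioned above.

#include <stdio.h>
#include <stdint.h>

#define HA_KEY_ALG_LONG_HASH 6                 /* stand-in value, for illustration only */
#define mi_clear_key_active(map, keynr) ((map)&= ~((uint64_t) 1 << (keynr)))

typedef struct { int algorithm; } demo_keydef;  /* simplified stand-in for MI_KEYDEF */

/* Clear the active bit of every key except long unique hash keys, which
   must stay active so uniqueness is still enforced during the bulk insert. */
static void demo_disable_indexes_for_rebuild(uint64_t *key_map,
                                             const demo_keydef *keyinfo,
                                             int keys)
{
  for (int i= 0; i < keys; i++)
  {
    if (keyinfo[i].algorithm != HA_KEY_ALG_LONG_HASH)
      mi_clear_key_active(*key_map, i);
  }
}

int main(void)
{
  demo_keydef keys[3]= {{0}, {HA_KEY_ALG_LONG_HASH}, {0}};
  uint64_t key_map= (1ULL << 3) - 1;            /* all three keys active */

  demo_disable_indexes_for_rebuild(&key_map, keys, 3);

  /* Only key 1 (the long hash key) remains active: prints 0x2 */
  printf("active key map: 0x%llx\n", (unsigned long long) key_map);
  return 0;
}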