path: root/mysql-test/main/long_unique.result
Commit message                                                    Author             Date        Files  Lines (-/+)
*   Merge 10.4 into 10.5                                          Marko Mäkelä       2021-12-03  1      -44/+2
|\
| * MDEV-27160 Out of memory in main.long_unique  [st-10.4-marko] Marko Mäkelä       2021-12-03  1      -43/+0
| |     A part of the test main.long_unique attempts to insert records with two
| |     60,000,001-byte columns. Let us move that test into a separate file
| |     main.long_unique_big, declared as a big test, so that it can be skipped
| |     in environments with limited memory.
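    A minimal sketch of how such a test is declared "big" in mysql-test (the
    file name comes from the commit message; the table definition and any
    required settings such as max_allowed_packet are assumptions, not the
    actual test content):

        --source include/big_test.inc
        # Skipped unless mysql-test-run.pl is invoked with --big-test.
        CREATE TABLE t1 (a LONGBLOB, b LONGBLOB, UNIQUE (a, b));
        # Two ~60 MB values; this is what exhausted memory on small machines.
        INSERT INTO t1 VALUES (REPEAT('a', 60000001), REPEAT('b', 60000001));
        DROP TABLE t1;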
* | Merge 10.4 into 10.5                                          Marko Mäkelä       2021-11-09  1      -0/+1
|\ \
| |/
| * Merge 10.3 into 10.4                                          Marko Mäkelä       2021-11-09  1      -3/+4
| |
* | MDEV-25555 Server crashes in tree_record_pos after INPLACE-recreating index on HEAP table    Aleksey Midenkov    2021-11-03  1  -3/+3
| |     Dropping and adding the same key is considered a rename (see
| |     ALTER_RENAME_INDEX in fill_alter_inplace_info()). But in this case the
| |     order of keys may be changed, because mysql_prepare_alter_table() does
| |     not yet know about the rename and treats it as two operations: drop and
| |     add. In that case we disable the inplace algorithm for engines such as
| |     Memory, MyISAM and Aria via the ALTER_INDEX_ORDER flag. These engines
| |     have no specialized check_if_supported_inplace_alter(), and the default
| |     handler::check_if_supported_inplace_alter() sees an unknown flag and
| |     returns HA_ALTER_INPLACE_NOT_SUPPORTED.
| |     ha_innobase::check_if_supported_inplace_alter() works differently, and
| |     there inplace is not disabled (with the help of the modified
| |     INNOBASE_INPLACE_IGNORE).
| |
| |     The add_drop_v_cols fork was also tweaked, as it wrongly failed with
| |     MSG_UNSUPPORTED_ALTER_ONLINE_ON_VIRTUAL_COLUMN when it saw
| |     ALTER_INDEX_ORDER. A no-op operation must still be a no-op regardless
| |     of the presence of ALTER_INDEX_ORDER, so we tweak its condition as
| |     well.
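    An assumed repro shape for the scenario described above (not the actual
    MTR test case): drop and re-add a key with the same name on a HEAP table,
    which is internally treated as a rename with a possibly changed key order.

        CREATE TABLE t1 (a INT, b INT, KEY k1 (a), KEY k2 (b)) ENGINE=MEMORY;
        -- Before the fix this could run INPLACE and later crash in
        -- tree_record_pos; after the fix an explicit INPLACE request is
        -- expected to be refused for Memory/MyISAM/Aria (and the default
        -- algorithm to fall back to COPY).
        ALTER TABLE t1 DROP KEY k1, ADD KEY k1 (a), ALGORITHM=INPLACE;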
* | Merge branch 'bb-10.4-release' into bb-10.5-release           Sergei Golubchik   2021-02-15  1      -1/+1
|\ \
| |/
| * Merge branch 'bb-10.3-release' into bb-10.4-release           Sergei Golubchik   2021-02-12  1      -1/+1
| |     Note: the fix for "MDEV-23328 Server hang due to Galera lock conflict
| |     resolution" was null-merged. The 10.4 version of the fix is coming up
| |     separately.
* | Merge 10.4 into 10.5                                          Marko Mäkelä       2020-04-25  1      -5/+5
|\ \
| |/
| |     The functional changes of commit 5836191c8f0658d5d75484766fdcc3d838b0a5c1
| |     (MDEV-21168) are omitted due to MDEV-742 having addressed the issue.
| * Merge 10.3 into 10.4                                          Marko Mäkelä       2020-04-16  1      -5/+5
| |     In main.index_merge_myisam we remove the test that was added in commit
| |     a2d24def8cc42d27c72d833abfb39ef24a2b96ba because it duplicates the test
| |     case that was added in commit 5af12e463549e4bbc2ce6ab720d78937d5e5db4e.
* | cleanup: prepare "update_handler" for WITHOUT OVERLAPS        Sergei Golubchik   2020-03-31  1      -9/+0
| |     * rename to a generic name
| |     * move remaining initializations from query exec to prepare time
| |     * simplify/unify key handling in open_table_from_share and delayed
| |     * remove dead code
| |     * move tests where they belong
* | Improve update handler (long unique keys on blobs)            Monty              2020-03-24  1      -0/+24
|/
|       MDEV-21606 Improve update handler (long unique keys on blobs)
|       MDEV-21470 MyISAM and Aria start_bulk_insert doesn't work with long unique
|       MDEV-21606 Bug fix for previous version of this code
|       MDEV-21819 2 Assertion `inited == NONE || update_handler != this'
|
|       - Move update_handler from TABLE to handler
|       - Move out initialization of update handler from ha_write_row() to
|         prepare_for_insert()
|       - Fixed that INSERT DELAYED works with update handler
|       - Give an error if using long unique with an autoincrement column
|         (sketched below)
|       - Added handler function to check if table has long unique hash indexes
|       - Disable write cache in MyISAM and Aria when using update_handler, as
|         when the cache is used the row will not be inserted until the end of
|         the statement and update_handler would not find conflicting rows
|       - Removed unused handler argument from
|         check_duplicate_long_entries_update()
|       - Syntax cleanups
|       - Indentation fixes
|       - Don't use single-character identifiers for arguments
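    An assumed illustration of the new autoincrement restriction (table and
    column names are hypothetical, and the exact error is not quoted here): a
    hash-based long UNIQUE key that includes an AUTO_INCREMENT column is now
    rejected at CREATE TABLE time.

        CREATE TABLE t1 (
          a INT AUTO_INCREMENT,
          b BLOB,
          UNIQUE (a, b),   -- the blob makes this a long (hash) unique key
          KEY (a)          -- ordinary key to satisfy AUTO_INCREMENT
        );
        -- expected: an error instead of accepting the definition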
* MDEV-19705: Assertion `tmp >= 0' failed in best_access_path     Varun Gupta        2019-08-27  1      -0/+15
|       The reason for hitting the assert is that the rec_per_key estimates
|       contain a garbage value. The fix is for long unique keys to use
|       rec_per_key for only one keypart, so that rec_per_key[0] holds the
|       estimate.
* MDEV-19049 Server crash in check_duplicate_long_entry_key, ASAN stack-buffer-overflow in Field_blob::get_key_image    Sachin    2019-06-26  1  -32/+32
|       Long unique keys should always be the last unique key.
* MDEV-18707 Server crash in my_hash_sort_bin, ASAN heap-use-after-free in Field::is_null, server hang, corrupted double-linked list    Sergei Golubchik    2019-02-27  1  -4/+4
|       Adjust share->stored_rec_length for LONG_UNIQUE_HASH_FIELD, just like
|       it's done for normal virtual fields.
* Long Index is only allowed for unique keys, not normal indexes. sachin             2019-02-27  1      -0/+57
|
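    An assumed illustration of this restriction (hypothetical tables, not
    quoted from the test file): a blob can carry a hash-based long UNIQUE key,
    but a plain non-unique index on a blob still requires an explicit prefix
    length.

        CREATE TABLE t1 (a BLOB, UNIQUE (a));    -- ok: hash-based long unique
        CREATE TABLE t2 (a BLOB, KEY (a));       -- expected: error, a key
                                                 -- length is required
        CREATE TABLE t3 (a BLOB, KEY (a(100)));  -- ok: classic prefix index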
* fix embedded server test  [tag: mariadb-10.4.3]                 Oleksandr Byelkin  2019-02-24  1      -20/+20
|
* MDEV-371 Unique Index for long columns                          Sergei Golubchik   2019-02-22  1      -2/+2
|       post-merge fixes
* MDEV-371 Unique Index for long columns                          Sachin             2019-02-22  1      -0/+1408
  This patch implements an engine-independent unique hash index.

  Usage:
  A unique HASH index can be created automatically for a blob/varchar/text
  column whose key length > handler->max_key_length(), or it can be
  explicitly specified.

    Automatic creation:  CREATE TABLE t1 (a BLOB UNIQUE);
    Explicit creation:   CREATE TABLE t1 (a INT, UNIQUE (a) USING HASH);

  Internal KEY_PART representations:
  A long unique key_info has two representations (let's understand this with
  an example: CREATE TABLE t1 (a BLOB, b BLOB, UNIQUE (a, b));):

  1. User-given representation: the key_info->key_part array will be similar
     to what the user has defined. So in the case of the example it will have
     2 key_parts (a, b).
  2. Storage engine representation: in this case there will be only one
     key_part and it will point to the HASH_FIELD. This key_part is always
     placed after the user-defined key_parts.

  So:

    User-given representation
      [a] [b] [hash_key_part]
       ^
       key_info->key_part

    Storage engine representation
      [a] [b] [hash_key_part]
               ^
               key_info->key_part

  table->s->key_info holds the user-given representation, while
  table->key_info holds the storage engine representation. The representations
  can be converted into each other by calling the re/setup_keyinfo_hash
  functions.

  Working:
  1. When the user specifies a HASH index or the key length is
     > handler->max_key_length(), one extra vfield is added in
     mysql_prepare_create_table (for each long unique key), and
     key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
  2. In init_from_binary_frm_image the values for the hash key_part are set
     (fieldnr, field and flags).
  3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is
     used with a list of Item_fields; when an explicit length is given by the
     user, Item_left is used to cut the Item_field values to that length.
  4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called,
     which creates the hash key from table->record[0] and then calls
     ha_index_read_map; if a duplicated hash is found, we compare the result
     field by field.
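    A usage sketch of the behaviour described above (the duplicate value and
    table names are assumptions for illustration; the exact error message is
    not quoted):

        CREATE TABLE t1 (a BLOB UNIQUE);                 -- automatic long unique
        CREATE TABLE t2 (a INT, UNIQUE (a) USING HASH);  -- explicit hash unique
        INSERT INTO t1 VALUES (REPEAT('x', 100000));
        -- The second insert hashes the value, probes the hidden hash key via
        -- ha_index_read_map, then compares field by field:
        INSERT INTO t1 VALUES (REPEAT('x', 100000));     -- expected: duplicate
                                                         -- key error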