path: root/sql
Commit message (Author, Date; files changed, lines -removed/+added)
* MDEV-22844 JSON_ARRAYAGG is limited by group_concat_max_len. (Alexey Botchkov, 2020-06-15; 5 files, -21/+57)
      Warning message and function result fixed.
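      [Example] A minimal SQL sketch of the limit being hit (table and data
      are illustrative, not from the commit):

          SET SESSION group_concat_max_len = 8;      -- very small limit
          CREATE TABLE t1 (a VARCHAR(10));
          INSERT INTO t1 VALUES ('one'), ('two'), ('three');
          SELECT JSON_ARRAYAGG(a) FROM t1;           -- result cut at 8 bytes
          SHOW WARNINGS;                             -- truncation warning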
* MDEV-22881 Unexpected errors, corrupt output, Valgrind / ASAN errors in Item_ident::print or append_identifier (Aleksey Midenkov, 2020-06-15; 1 file, -1/+2)
      After this code:

          end_inplace:
          if (thd->locked_tables_list.reopen_tables(thd, false))
            goto err_with_mdl_after_alter;

      the table is not reopened (need_reopen is false), but
      some_table_marked_for_reopen is reset to false. The Item_field is
      allocated on table lock and assigned a new name on the first ALTER,
      which is then freed at the end of the command. The second ALTER
      accesses this Item_field and gets a garbage value.
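      [Example] One possible shape of the scenario, as a hedged sketch
      (statements are illustrative; the actual test case may differ):

          CREATE TABLE t1 (a INT, CONSTRAINT c CHECK (a > 0));
          LOCK TABLES t1 WRITE;             -- Item_field allocated on lock
          ALTER TABLE t1 CHANGE a b INT;    -- first ALTER assigns a new name
          ALTER TABLE t1 CHANGE b a INT;    -- second ALTER read the freed name
          UNLOCK TABLES;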
* MDEV-22891: Optimizer trace: const tables are not clearly visible (Sergei Petrunia, 2020-06-15; 1 file, -0/+6)
      Make mark_join_nest_as_const() print its action into the trace.
* Cleaned up compile failure if DEBUG_SYNC is disabled in DEBUG builds (Monty, 2020-06-14; 1 file, -2/+2)
|
* Fixed bug in trans_check() where on error we gave the wrong return value (Monty, 2020-06-14; 1 file, -2/+12)
|
* Fix for crash in Aria LOCK TABLES + CREATE TRIGGER (Monty, 2020-06-14; 1 file, -1/+5)
      MDEV-22829 SIGSEGV in _ma_reset_history on LOCK
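      [Example] A hedged sketch of the crashing sequence (table and trigger
      names are illustrative):

          CREATE TABLE t1 (a INT) ENGINE=Aria;
          LOCK TABLES t1 WRITE;
          CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 1;
          UNLOCK TABLES;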
* BINLOG with LOCK TABLES and SAVEPOINT could cause a crash in a debug binary (Monty, 2020-06-14; 1 file, -0/+1)
      MDEV-22048 Assertion `binlog_table_maps == 0 || locked_tables_mode ==
      LTM_LOCK_TABLES' failed in THD::reset_for_next_command
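      [Example] A hedged sketch of the statement mix behind the assertion
      (the base64 payload is deliberately elided; names are illustrative):

          CREATE TABLE t1 (a INT);
          LOCK TABLES t1 WRITE;
          SAVEPOINT sp1;
          BINLOG '<base64-encoded event>';   -- crashed debug builds here
          UNLOCK TABLES;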
* MDEV-19745 BACKUP STAGE BLOCK_DDL hangs on flush sequence table (Monty, 2020-06-14; 4 files, -11/+22)
      The problem was that FLUSH TABLES was trying to read the latest
      sequence state, which conflicted with a running ALTER SEQUENCE.
      Removed the reading of the state when opening a table for FLUSH, as it
      is not needed in this case.

      Other thing:
      - Fixed a potential issue with concurrently running ALTER SEQUENCE,
        where the later ALTER could potentially read old data.
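      [Example] A two-session sketch of the conflict (sequence name
      illustrative):

          -- session 1
          CREATE SEQUENCE s1;
          ALTER SEQUENCE s1 MAXVALUE 1000;   -- still running

          -- session 2, concurrently
          FLUSH TABLES;   -- used to read the sequence state and hang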
* Fixed core dump in "echo shutdown | mysqld --bootstrap" (Monty, 2020-06-14; 1 file, -0/+1)
|
* Updated code comments (Monty, 2020-06-14; 1 file, -1/+1)
|
* Fixed crash in failing instant alter table with partitioned table (Monty, 2020-06-14; 1 file, -1/+3)
      MDEV-22649 SIGSEGV in ha_partition::create_partitioning_metadata on ALTER
      MDEV-22804 SIGSEGV in ha_partition::create_partitioning_metadata
* Changes needed for ColumnStore and insert cache (Monty, 2020-06-14; 1 file, -1/+1)
      MCOL-3875 Columnstore write cache

      The main change is to make the thr_lock function get_status return a
      value that indicates we have to abort the lock.

      Other thing:
      - Made start_bulk_insert() and end_bulk_insert() protected so that the
        insert cache can use them.
* Changed some DBUG_PRINT that used "error:" (Monty, 2020-06-14; 1 file, -3/+3)
      The reason for the change was to make it easier to find true errors
      when searching in trace logs. "error:" should mainly be used when we
      have a real error.
* Make error messages from DROP TABLE and DROP TABLE IF EXISTS consistent (Monty, 2020-06-14; 1 file, -75/+88)
      - IF EXISTS ends with a list of all non-existing objects, instead of a
        separate note for every non-existing object.
      - Produce a "Note" for all wrongly dropped objects (like trying to do
        DROP SEQUENCE on a normal table).
      - Do not write existing tables that could not be dropped to the binlog.

      Other things:
      MDEV-22820 Bogus "Unknown table" warnings produced upon attempt to
      drop a parent table referenced by an FK. This was caused by an older
      version of this commit's patch and was later fixed.
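      [Example] A sketch of the new note format (names illustrative; the
      exact note text may differ):

          CREATE TABLE t1 (a INT);
          DROP TABLE IF EXISTS t1, t2, t3;
          SHOW WARNINGS;   -- one note listing both missing tables,
                           -- e.g. Note 1051: Unknown table 'test.t2,test.t3'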
* Fixed error messages from DROP VIEW to align with DROP TABLE (Monty, 2020-06-14; 1 file, -29/+18)
      - Produce a "Note" for all wrongly dropped objects (like doing DROP
        VIEW on a table).
      - IF EXISTS ends with a list of all non-existing objects, instead of a
        separate note for every non-existing object.

      Other things:
      - Fixed a bug where one could do CREATE TEMPORARY SEQUENCE multiple
        times and create multiple temporary sequences with the same name.
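      [Example] A sketch of both behaviours (names illustrative):

          CREATE TABLE t1 (a INT);
          DROP VIEW IF EXISTS t1, v1;    -- notes: t1 is not a VIEW, v1 unknown
          SHOW WARNINGS;

          CREATE TEMPORARY SEQUENCE s1;
          CREATE TEMPORARY SEQUENCE s1;  -- now fails instead of silently
                                         -- creating a duplicate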
* MDEV-11412 Ensure that table is truly dropped when using DROP TABLE (Monty, 2020-06-14; 8 files, -173/+421)
      The used code is largely based on code from Tencent.

      The problem is that in some rare cases there may be a conflict between
      .frm files and the files in the storage engine. In this case DROP
      TABLE was not able to properly drop the table. Some MariaDB/MySQL
      forks have solved this by adding a FORCE option to DROP TABLE. After
      some discussion among MariaDB developers, we concluded that users
      expect DROP TABLE to always work, even if the table is not consistent.
      There should be no need for a separate keyword to ensure that the
      table is really deleted.

      The used solution is:
      - If a .frm file doesn't exist, try dropping the table from all
        storage engines.
      - If the .frm file exists but the table does not exist in the engine,
        try dropping the table from all storage engines.
      - Update storage engines that use many table files (CSV, MyISAM,
        Aria) to succeed with the drop even if some of the files are
        missing.
      - Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
        is not needed and always succeeds. This is used by
        ha_delete_table_force() to know which handlers to ignore when trying
        to drop a table without a .frm file.

      The disadvantage of this solution is that a DROP TABLE on a
      non-existing table will be a bit slower, as we have to ask all active
      storage engines whether they know anything about the table.

      Other things:
      - Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an
        error if the file doesn't exist. This simplifies some of the code.
      - Don't clear thd->error in ha_delete_table() if there was an active
        error. This is a bug fix.
      - handler::delete_table() will not abort if the first file doesn't
        exist. This is a bug fix to handle the case when a drop table was
        aborted in the middle.
      - Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses
        the same code path as when it's not used.
      - Use non_existing_table_error() to detect if the table didn't exist.
        Old code used different error tests in different places.
      - Table_triggers_list::drop_all_triggers() now drops the trigger file
        if it can't be parsed, instead of leaving it hanging around (bug
        fix).
      - InnoDB no longer prints an error about the .frm file being out of
        sync with the InnoDB dictionary if the .frm file does not exist.
        This change was required to be able to try to drop an InnoDB table
        when the .frm doesn't exist.
      - Fixed a bug in mi_delete_table() where the .MYD file would not be
        dropped if the .MYI file didn't exist.
      - Fixed a memory leak in Mroonga when deleting a non-existing table.
      - Fixed a memory leak in Connect when deleting a non-existing table.

      Bugs fixed that were introduced by the original version of this
      commit:
      MDEV-22826 Presence of Spider prevents tables from being force-deleted
      from other engines
* Merge 10.4 into 10.5 (Marko Mäkelä, 2020-06-14; 11 files, -33/+87)
|\
| * MDEV-21560 Assertion `grant_table || grant_table_role' failed in check_grant_all_columns (Sergei Golubchik, 2020-06-13; 1 file, -1/+2)
      With RETURNING it can happen that the user has some privileges on the
      table (namely, DELETE) but later needs different privileges on
      individual columns (namely, SELECT). Do the same as in
      check_grant_column(): return ER_COLUMNACCESS_DENIED_ERROR, not an
      assert.
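      [Example] A hedged sketch of the privilege mix (user, database and
      column names are illustrative):

          CREATE USER u1@localhost;
          GRANT DELETE ON db1.t1 TO u1@localhost;   -- DELETE, but no SELECT

          -- as u1: column privileges are needed for the RETURNING list
          DELETE FROM db1.t1 RETURNING a;   -- ER_COLUMNACCESS_DENIED_ERROR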
| * Merge 10.3 into 10.4 (Marko Mäkelä, 2020-06-13; 10 files, -46/+101)
| |\
| | * Merge 10.2 into 10.3 (Marko Mäkelä, 2020-06-13; 6 files, -22/+66)
| | |\
| | | * Fix wrong merge of commit d218d1aa49e848cef2bdbe83bbaf08e474d5209c (Vicențiu Ciorbaru, 2020-06-12; 1 file, -0/+3)
| | | |
| | | * Merge branch '10.1' into 10.2 (Vicențiu Ciorbaru, 2020-06-11; 4 files, -22/+56)
| | | |\
| | | | * MDEV-22755 CREATE USER leads to indirect SIGABRT in __stack_chk_fail () from fill_schema_user_privileges + *** stack smashing detected *** (on optimized builds) (Alexander Barkov, 2020-06-11; 1 file, -16/+32)
      The code erroneously used buff[100] in a few places to make a GRANTEE
      value in the form:

          'user'@'host'

      Fix:
      - Fixing the code to use (USER_HOST_BUFF_SIZE + 6) instead of 100.
      - Adding a DBUG_ASSERT to make sure the buffer is large enough.
      - Wrapping the code into a class Grantee_str, to make it easier to
        reuse in 4 places.
| | | | * MDEV-5924: MariaDB could crash after changing the query_cache size (Oleksandr Byelkin, 2020-06-10; 1 file, -3/+13)
      The real problem was that an attempt to roll back changes after
      running out of memory in the query cache was made incorrectly and led
      to using uninitialized memory. (The bug has nothing to do with the
      resize operation; it is just a lack-of-resources error processed
      incorrectly.)
| | | | * Revert "MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL" (Oleksandr Byelkin, 2020-06-10; 1 file, -1/+1)
      This reverts commit 443391236d20cd0303fcc9957eb49a6aaf28316e.
| | | | * MDEV-22830: SQL_CALC_FOUND_ROWS not working properly for single SELECT for DUAL (rucha174, 2020-06-09; 1 file, -1/+1)
      In case of a SELECT without tables which returns either 0 or 1 rows,
      JOIN::exec_inner() did not check whether the flag representing
      SQL_CALC_FOUND_ROWS was set, and send_records was directly assigned
      0, so SELECT FOUND_ROWS() was giving 0 in the output. Now it checks
      the flag: if it is set, send_records is 1, else 0. 1 is the number of
      rows that could have been sent to the client if the SELECT query had
      SQL_CALC_FOUND_ROWS; it is 0 when no rows were sent because the
      SELECT query did not have SQL_CALC_FOUND_ROWS.
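      [Example] The behaviour in minimal form, per the description above:

          SELECT SQL_CALC_FOUND_ROWS 1;
          SELECT FOUND_ROWS();   -- returned 0 before the fix; 1 after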
| | | | * MDEV-22717: Conditional jump or move depends on uninitialised value(s) in find_uniq_filename(char*, unsigned long) (Sujatha, 2020-06-08; 1 file, -2/+2)
      Fix:
      ===
      Initialize the 'number' variable to '0'.
| | | | * MDEV-22728: SIGFPE in Unique::get_cost_calc_buff_size from prepare_search_best_index_intersect on optimized builds (Varun Gupta, 2020-06-07; 2 files, -1/+13)
      For a low sort_buffer_size, in the cost calculation of using the
      Unique object, the number of elements in the tree evaluated to 0;
      make sure to have at least 1 element in the Unique tree. Also make
      Unique::get() allocate memory for at least MERGEBUFF2 + 1 keys.
| | | * | MDEV-21619 Server crash or assertion failures in my_datetime_to_str (Alexander Barkov, 2020-06-11; 1 file, -0/+7)
      Item_cache_datetime::decimals was always copied from
      example->decimals without limiting to 6 (the maximum possible number
      of fractional digits), so val_str() later crashed on asserts inside
      my_time_to_str() and my_datetime_to_str().
| | * | | MDEV-22268 virtual longlong Item_func_div::int_op(): Assertion `0' failed in Item_func_div::int_op (Alexander Barkov, 2020-06-13; 2 files, -7/+8)
      Item_func_div::fix_length_and_dec_temporal() set the return data type
      to integer in case of @div_precision_increment==0 for temporal input
      with FSP=0. This caused Item_func_div to call int_op(), which is not
      implemented, so a crash on DBUG_ASSERT(0) happened. Fixed
      fix_length_and_dec_temporal() to set the result type to DECIMAL.
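      [Example] A hedged sketch of the crashing combination (table and data
      illustrative):

          SET @@div_precision_increment = 0;
          CREATE TABLE t1 (a TIME(0));          -- temporal input with FSP=0
          INSERT INTO t1 VALUES ('10:20:30');
          SELECT a / 1 FROM t1;   -- hit DBUG_ASSERT(0); now returns DECIMAL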
| | * | | MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list (Varun Gupta, 2020-06-09; 2 files, -16/+27)
      Backported from MySQL Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN
      GROUP_CONCAT.

      Issue:
      ------
      The problem occurs when:
      1) GROUP_CONCAT (DISTINCT ....) is used in the query.
      2) The data size is greater than the value of the system variable
         tmp_table_size.
      The result would contain values that are non-unique.

      Root cause:
      -----------
      An in-memory structure is used to filter out non-unique values. When
      the data size exceeds tmp_table_size, the overflow is written to disk
      as a separate file. The expectation here is that when all such files
      are merged, the full set of unique values can be obtained.

      But the Item_func_group_concat::add function is in a bit of a hurry.
      Even as it is adding values to the tree, it wants to decide if a
      value is unique and write it to the result buffer. This works fine if
      the configured maximum size is greater than the size of the data. But
      since tmp_table_size is set to a low value, the size of the tree is
      smaller and hence requires the creation of multiple copies on disk.
      Item_func_group_concat currently has no mechanism to merge all the
      copies on disk and then generate the result. This results in
      duplicate values.

      Solution:
      ---------
      In case of the DISTINCT clause, don't write to the result buffer
      immediately. Do the merge and only then put the unique values in the
      result buffer. This is done in Item_func_group_concat::val_str.

      Note regarding result file changes:
      -----------------------------------
      Earlier, when a unique value was seen in
      Item_func_group_concat::add, it was dumped to the output, so the
      result is in the order stored in the SE. With this fix, we wait until
      all the data is read and the final set of unique values is written to
      the output buffer, so the data appears in sorted order. This only
      fixes the cases where we have DISTINCT without an ORDER BY clause in
      GROUP_CONCAT.
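      [Example] A hedged sketch of the triggering conditions (assumes a
      table t1 holding more data than tmp_table_size, so the tree spills to
      disk):

          SET SESSION tmp_table_size = 1024;          -- very low limit
          SELECT GROUP_CONCAT(DISTINCT c1) FROM t1;   -- could repeat values
                                                      -- before the fix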
| * | | | MDEV-22854 Garbage returned with SELECT CASE..DEFAULT(timestamp_field_with_now_as_default) (Alexander Barkov, 2020-06-10; 2 files, -0/+7)
      Item_default_value did not override val_native(), so the inherited
      Item_field::val_native() was called. As a result,
      Item_default_value::calculate() was not called and
      Item_field::val_native() was called on a Field with a non-initialized
      ptr. Implemented Item_default_value::val_native() properly.
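      [Example] A hedged sketch following the commit subject (table layout
      illustrative; the actual test case may differ):

          CREATE TABLE t1 (a INT, ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP);
          INSERT INTO t1 (a) VALUES (1);
          SELECT CASE WHEN a > 0 THEN DEFAULT(ts) ELSE NULL END FROM t1;
          -- returned garbage before the fix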
| * | | | MDEV-22734 Assertion `mon > 0 && mon < 13' failed in sec_since_epoch (Alexander Barkov, 2020-06-08; 1 file, -1/+3)
      When processing a condition like:

          WHERE timestamp_column = '2010-00-01 00:00:00'

      don't replace the constant with Item_datetime_literal if the constant
      has zeros (in the month or in the day).
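      [Example] The condition from the description, in runnable form (table
      name illustrative):

          CREATE TABLE t1 (ts TIMESTAMP);
          SELECT * FROM t1 WHERE ts = '2010-00-01 00:00:00';
          -- the zero month must not become a DATETIME literal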
* | | | | MDEV-22884 Assertion `grant_table || grant_table_role' failed on perfschema (Sergei Golubchik, 2020-06-13; 1 file, -0/+2)
      When allowing access via perfschema callbacks, update the cached
      GRANT_INFO to match.
* | | | | When printing Item_in_optimizer, use precedence of wrapped Item [bb-10.5-pr1588] (Sidney Cammeresi, 2020-06-12; 1 file, -0/+1)
      When Item::print() is called with the QT_PARSABLE flag,
      WHERE i NOT IN (SELECT ...) gets printed as
      WHERE !i IN (SELECT ...) instead of WHERE !(i IN (SELECT ...)),
      because Item_in_optimizer returns DEFAULT_PRECEDENCE. It should
      return the precedence of the inner operation.
* | | | | MDEV-22840: JSON_ARRAYAGG gives wrong results with NULL values and ORDER BY clause (Varun Gupta, 2020-06-12; 2 files, -5/+88)
      The problem here is similar to the case with DISTINCT: the tree used
      for ORDER BY needs to also hold the null bytes of the record. This
      was not done for GROUP_CONCAT, as NULLs are rejected by GROUP_CONCAT.

      Also introduced a comparator function for the ORDER BY tree to handle
      null values with JSON_ARRAYAGG.
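      [Example] A minimal sketch of NULL handling with ORDER BY (data
      illustrative):

          CREATE TABLE t1 (a INT);
          INSERT INTO t1 VALUES (2), (NULL), (1);
          SELECT JSON_ARRAYAGG(a ORDER BY a) FROM t1;  -- expect [null,1,2]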
* | | | | MDEV-22011: DISTINCT with JSON_ARRAYAGG gives wrong results (Varun Gupta, 2020-06-12; 3 files, -8/+130)
      For DISTINCT to be handled with JSON_ARRAYAGG, we need to make sure
      that the Unique tree also holds the NULL bytes of a table record
      inside the nodes of the tree. This behaviour for JSON_ARRAYAGG is
      different from GROUP_CONCAT, because in GROUP_CONCAT we just reject
      NULL values for columns.

      Also introduced a comparator function for the Unique tree to handle
      null values for DISTINCT inside JSON_ARRAYAGG.
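      [Example] A minimal sketch of DISTINCT with NULLs (data illustrative):

          CREATE TABLE t1 (a INT);
          INSERT INTO t1 VALUES (1), (NULL), (1), (NULL);
          SELECT JSON_ARRAYAGG(DISTINCT a) FROM t1;   -- expect [null,1]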
* | | | | MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list (Varun Gupta, 2020-06-12; 2 files, -16/+27)
      Backported from MySQL Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN
      GROUP_CONCAT. (Same commit message, verbatim, as the 10.3 commit of
      the same name above.)
* | | | | MDEV-15101: Stop ANALYZE TABLE from flushing table definition cache (Sergei Petrunia, 2020-06-12; 4 files, -3/+31)
      Apply this patch from Percona Server (amended for 10.5):

      commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
      Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
      Date:   Tue Nov 14 06:34:19 2017 +0200

          Fix bug 1704195 / 87065 / TDB-83
          (Stop ANALYZE TABLE from flushing table definition cache)

          Make ANALYZE TABLE stop flushing affected tables from the table
          definition cache, which has the effect of not blocking any
          subsequent new queries involving the table if there's a parallel
          long-running query:

          - new table flag HA_ONLINE_ANALYZE, returned for InnoDB and
            TokuDB tables;
          - in mysql_admin_table, if we are performing ANALYZE TABLE and
            the table flag is set, do not remove the table from the table
            definition cache and do not invalidate the query cache;
          - in the partitioning handler, refresh the query optimizer
            statistics after ANALYZE if the underlying handler supports
            HA_ONLINE_ANALYZE;
          - new testcases main.percona_nonflushing_analyze_debug,
            parts.percona_nonflushing_analyze_debug and a supporting debug
            sync point.

          For TokuDB, this change exposes bug TDB-83 (Index cardinality
          stats updated for handler::info(HA_STATUS_CONST), not often
          enough for tokudb_cardinality_scale_percent). TokuDB may return
          different rec_per_key values depending on the dynamic variable
          tokudb_cardinality_scale_percent value. The server does not have
          a way of knowing that changing this variable invalidates the
          previous rec_per_key values in any opened table shares, and so
          does not call info(HA_STATUS_CONST) again. Fix by updating
          rec_per_key for both HA_STATUS_CONST and HA_STATUS_VARIABLE.
          This also forces a re-record of
          tokudb.bugs.db756_card_part_hash_1_pick, with the new output
          seeming to be more correct.
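      [Example] A two-session sketch of the behaviour change (table name
      illustrative):

          -- session 1: a long-running query keeps t1 open
          SELECT COUNT(*) FROM t1 JOIN t1 AS t2;

          -- session 2:
          ANALYZE TABLE t1;   -- no longer flushes t1's table definition,
          SELECT * FROM t1;   -- so this no longer blocks behind session 1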
* | | | | more mysql_create_view link/unlink woes (Sergei Golubchik, 2020-06-12; 1 file, -3/+3)
| | | | |
* | | | | MDEV-22878 galera.wsrep_strict_ddl hangs in 10.5 after merge (Sergei Golubchik, 2020-06-12; 1 file, -1/+2)
      If mysql_create_view is aborted when `view` isn't unlinked, it should
      not be linked back on cleanup.
* | | | | MDEV-16470: switch off user variables (and fixes of its support) [bb-10.5-MDEV-22550] (Oleksandr Byelkin, 2020-06-12; 6 files, -0/+26)
| | | | |
* | | | | MDEV-21851: Error in BINLOG_BASE64_EVENT is always error-logged as if it is done by Slave (Andrei Elkin, 2020-06-12; 4 files, -6/+8)
      The prefix of the error log message produced by a failed BINLOG
      statement is corrected to be the SQL command name.
* | | | | MDEV-22602 Disable UPDATE CASCADE for SQL constraints (Aleksey Midenkov, 2020-06-12; 8 files, -24/+69)
      CHECK constraints are checked by check_expression(), which walks
      their items and gets into Item_field::check_vcol_func_processor() to
      check for conformity with the foreign key list. WITHOUT OVERLAPS is
      checked for the same conformity in mysql_prepare_create_table().
      Long uniques are already impossible with InnoDB foreign keys. See
      ER_CANT_CREATE_TABLE in the test case.

      Two accompanying bugs fixed (test main.constraints failed):

      1. check->name.str lived on the SP execute mem_root, while the
         "check" object itself lives on the SP main mem_root. On the second
         SP execute, check->name.str held garbage data. Fixed by allocating
         from thd->stmt_arena->mem_root, which is the SP main mem_root.

      2. The CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed with
         VCOL_FIELD_REF. VCOL_FIELD_REF is assigned in check_expression()
         and then detected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
         handle_if_exists_options().

      Existing cases for MDEV-16932 in main.constraints cover both fixes.
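      [Example] A hedged sketch of a now-rejected definition (names
      illustrative):

          CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
          CREATE TABLE child (
            id INT,
            FOREIGN KEY (id) REFERENCES parent (id) ON UPDATE CASCADE,
            CONSTRAINT ck CHECK (id > 0)
          ) ENGINE=InnoDB;   -- now fails with ER_CANT_CREATE_TABLE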
* | | | | MDEV-22819: Wrong result or Assertion `ix > 0' failed in read_to_buffer upon select with GROUP BY and GROUP_CONCAT (Varun Gupta, 2020-06-11; 2 files, -6/+12)
      In the merge_buffers phase of sorting, the sort buffer size is
      divided between the number of chunks. The chunks have a start and end
      position (m_buffer_start and m_buffer_end). Then we read as many
      records as fit in this buffer for a chunk of the file.

      The issue was that we were resetting the end of the buffer
      (m_buffer_end) to the number of bytes read. This caused a problem
      because, with dynamically sized sort keys, it is possible that we
      would later not be able to accommodate even one key inside a chunk of
      the file. The fix is to not reset the end of the buffer for a chunk
      of the file.
* | | | | Fix typo (Sachin, 2020-06-11; 1 file, -1/+1)
| | | | |
* | | | | MDEV-22722 Assertion "inited==NONE" failed in handler::ha_index_init on the slave during UPDATE (Sachin, 2020-06-11; 1 file, -0/+2)
      Add a missing call to handler->prepare_for_insert() in
      Rows_log_event::do_apply_event.
* | | | | MDEV-14347 CREATE PROCEDURE returns no error when using an unknown variable (Alexander Barkov, 2020-06-10; 7 files, -0/+98)
      CREATE PROCEDURE did not detect unknown SP variables in assignments
      like this:

          SET var = a_long_var_name_with_a_typo;

      The error happened only at SP execution time, and only if the control
      flow reached the erroneous statement.

      Fixing most expressions to detect unknown identifiers. This includes
      simple subqueries without tables:

      - Query specification: SELECT list, WHERE, HAVING (inside aggregate
        functions) clauses, e.g.
            SET var= (SELECT unknown_ident+1);
            SET var= (SELECT 1 WHERE unknown_identifier);
            SET var= (SELECT 1 HAVING SUM(unknown_identifier));
      - Table value constructor: VALUES clause, e.g.:
            SET var= (VALUES(unknown_ident));

      Note, in some more complex subquery cases unknown variables are
      still not detected (this will be fixed separately):

      - Derived tables:
            SET a=(SELECT unknown_ident FROM (SELECT 1 AS alias) t1);
            SET res=(SELECT * FROM t1 LEFT OUTER JOIN
                     (SELECT unknown_ident) t2 USING (c1));
      - CTE:
            SET a=(WITH cte1 (a) AS (SELECT unknown_ident)
                   SELECT * FROM cte1);
            SET a=(WITH cte1 (a,b) AS (VALUES (unknown,2),(3,4))
                   SELECT * FROM cte1);
            SET a=(WITH cte1 (a,b) AS (VALUES (1,2),(3,4))
                   SELECT unknown_ident FROM cte1);
      - SELECT .. GROUP BY unknown_identifier
      - SELECT .. ORDER BY unknown_identifier
      - HAVING with an unknown identifier outside of any aggregate
        functions:
            SELECT .. HAVING unknown_identifier;
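      [Example] The first case from the description, as a runnable sketch
      (procedure and variable names illustrative):

          DELIMITER $$
          CREATE PROCEDURE p1()
          BEGIN
            DECLARE var INT DEFAULT 0;
            SET var = a_long_var_name_with_a_typo;  -- now an error at
                                                    -- CREATE time
          END$$
          DELIMITER ;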
* | | | | MDEV-22059: MSAN report at replicate_ignore_table_grant (Sujatha, 2020-06-10; 1 file, -8/+20)
      Analysis:
      ========
      Lists of values provided for "replicate_ignore_table" and
      "replicate_do_table" are stored in a HASH. When an empty list is
      provided, the HASH structure doesn't get initialized. Existing code
      treats an empty element list as an error and tries to clean the
      uninitialized HASH. This results in the above MSAN issue.

      Fix:
      ===
      The cleanup should be initiated only when there is an error while
      parsing the 'replicate_do_table' or 'replicate_ignore_table' list
      and the HASH is in an initialized state. Otherwise, for an empty
      list, it should simply return success.
* | | | | Fix GCC -Wunused-function (Marko Mäkelä, 2020-06-10; 1 file, -1/+2)
      debug_sync_set_action(): Declare the dummy function inline, to
      silence a warning about a declared-but-unused static function.

      This amends commit 3ccd6766d0e47b4331505789eb114c5cf1534ecf.