path: root/mysql-test
Commit message | Author | Age | Files | Lines
* Update row and key fetch cost models to take into account data copy costs  [bb-10.7-selectivity]  (Monty, 2022-03-02; 195 files, -6145/+7520)
Before this patch, when calculating the cost of fetching and using a row/key from the engine, we took into account the cost of finding the row or key in the engine, but did not consistently take index-only accesses, clustered keys or covering keys into account for all access paths.

The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently considered in best_access_path(). TIME_FOR_COMPARE was used in calculations in other places, like greedy_search(), but was in some cases (like scans) applied to a different number of rows than was accessed. The cost calculation of row and index scans didn't take into account the number of rows that were accessed, only the number of accepted rows. When using a filter, the cost of index-only reads and the cost of accessing and disregarding 'filtered rows' were not taken into account, which made filters cost less than they actually do.

To remedy the above, the following key & row fetch related costs have been added:
- The cost of fetching and using a row is now split into different costs:
  - key + row fetch cost (as before), but multiplied with the variable 'optimizer_cache_cost' (default 0.5). This allows the user to tell the optimizer the likelihood of finding the key and row in the engine cache.
- RECORD_COPY_COST, the cost of copying a row from the engine to the SQL layer or creating a row from the join_cache in the record buffer. Mostly affects table scan costs.
- INDEX_COPY_COST, the cost of finding the next key and copying it from the engine to the SQL layer. This is used when we calculate the cost of index-only reads. It makes index scans more expensive than before if they cover a lot of rows. (main.index_merge_myisam)
- INDEX_LOOKUP_COST, the cost of finding the first key in a range. This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
- INDEX_NEXT_FIND_COST, the cost of finding the next key (and rowid) when doing an index scan and comparing the rowid to the filter. Before, this cost was assumed to be 0.
- ROW_LOOKUP_COST, the cost of fetching a row by rowid.

All of the above constants/variables are now tuned to be somewhat in proportion to each other's execution complexity. They will need further tuning in the future, but that can wait until they are made user variables, as that will make tuning much easier.

To make the usage of the above easy, there are new (not virtual) cost calculation functions in handler:
- ha_read_time(), like read_time(), but takes optimizer_cache_cost into account.
- ha_read_and_copy_time(), like ha_read_time(), but takes RECORD_COPY_COST into account.
- ha_read_and_compare_time(), like ha_read_and_copy_time(), but takes TIME_FOR_COMPARE into account.
- ha_read_with_rowid(), reads a row by rowid, taking RECORD_COPY_COST into account. This is used with filesort, where we don't need to execute the WHERE clause again.
- ha_keyread_time(), like keyread_time(), but takes optimizer_cache_cost into account.
- ha_keyread_and_copy_time(), like ha_keyread_time(), but adds INDEX_COPY_COST.
- ha_key_scan_time(), like key_scan_time(), but takes optimizer_cache_cost into account.
- ha_key_scan_and_compare_time(), like ha_key_scan_time(), but adds INDEX_COPY_COST & TIME_FOR_COMPARE.

I also added some setup costs for doing different types of scans and for creating temporary tables (on disk and in memory). This encourages the optimizer not to use these for simple 'a few rows' lookups if there are adequate key lookup strategies:
- TABLE_SCAN_SETUP_COST, cost of starting a table scan.
- INDEX_SCAN_SETUP_COST, cost of starting an index scan.
- HEAP_TEMPTABLE_CREATE_COST, cost of creating an in-memory temporary table.
- DISK_TEMPTABLE_CREATE_COST, cost of creating an on-disk temporary table.

When calculating the cost of fetching ranges, we had a cost of IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is now replaced with 'io_cost * INDEX_LOOKUP_COST (1.0) * optimizer_cache_cost', which matches the cost we use for 'ref' and other key lookups. The effect is that the cost is now a bit higher when we have many ranges for a key.

Almost all calculation with TIME_FOR_COMPARE is now done in best_access_path(). 'JOIN::read_time' now includes the full cost of finding the rows in the table.

In the result files, many of the changes are now again close to what they were before the "Update cost for hash and cached joins" commit, as that commit didn't fix the filter cost (too complex to do everything in one commit).

The above changes exposed a lot of inconsistencies in optimizer cost calculation. The main objective of the other changes was to make the cost calculations as similar (and accurate) as possible, to make different plans more comparable.

Detailed list of changes:
- Calculate index_only_cost consistently and correctly for all scan and ref accesses. The row fetch_cost and index_only_cost now take clustered keys, covering keys and index-only accesses into account.
- cost_for_index_read() now returns both the full cost and index_only_cost.
- Fixed the cost calculation of get_sweep_read_cost() to match other similar costs. This is based on the assumption that data is more often stored on SSD than on a hard disk.
- Replaced the constant 2.0 with the new define TABLE_SCAN_SETUP_COST.
- Some scan cost estimates did not take TIME_FOR_COMPARE into account. Now all scan costs do. (main.show_explain)
- Added session variable optimizer_cache_hit_ratio (default 50%). By adjusting this, one can reduce or increase the cost of index or direct record lookups. The effect of the default is that key lookups are now a bit cheaper than before. See the usage of 'optimizer_cache_cost' in handler.h.
- JOIN_TAB::scan_time() did not take index-only scans into account, which produced a wrong cost when an index scan was used. Changed JOIN_TAB::scan_time() to take clustered and covering keys into consideration. The values are now cached and we only have to call this function once; other calls use the cached values. Function renamed to JOIN_TAB::estimate_scan_time().
- Fixed that most index cost calculations are done the same way and closer to the 'range' calculations. The cost is now lower than before for small data sets and higher for large data sets, as we take into account how many keys are read (main.opt_trace_selectivity, main.limit_rows_examined).
- Ensured that index_scan_cost() == range(scan_of_all_rows_in_table_using_one_range) + MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there is a choice between a full index scan and a range-index scan over almost the whole table, the index scan will be preferred (no range-read setup cost). (innodb.innodb, main.show_explain, main.range)
- Fixed that EQ_REF and REF take clustered and covering keys into account. This changes some plans to use covering or clustered indexes, as these are much cheaper. (main.subselect_mat_cost, main.state_tables_innodb, main.limit_rows_examined)
- Rowid filter setup cost and filter compare cost now take fetching and checking the rowid into account (INDEX_NEXT_FIND_COST). (main.partition_pruning, heap.heap_btree, main.log_state)
- Added INDEX_NEXT_FIND_COST to Range_rowid_filter_cost_info::lookup_cost, to account for the time to find and check the next key value against the container.
- Introduced ha_keyread_time(rows), which takes into account finding the next row and copying the key value to 'record' (INDEX_COPY_COST).
- Introduced ha_key_scan_time() for calculating an index scan over all rows.
- Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
- Added index_only_fetch_cost() as a convenience function to OPT_RANGE.
- keyread_time() cost is slightly reduced to prefer shorter keys. (main.index_merge_myisam)
- All of the above caused some index_merge combinations to be rejected because of cost (main.index_intersect). In some cases 'ref' was replaced with index_merge because of the low cost calculation of get_sweep_read_cost().
- Some index usage moved from PRIMARY to a covering index. (main.subselect_innodb)
- Changed the cost calculation of filters to take INDEX_LOOKUP_COST and TIME_FOR_COMPARE into account. See sql_select.cc::apply_filter(). Filter parameters and costs are now written to optimizer_trace.
- Don't use matching_records_in_range() to try to estimate the number of filtered rows for ranges. The reason is that we want to ensure that 'range' is calculated similarly to 'ref'. There is also more work needed to calculate the selectivity when using ranges and filtering. This causes the filtered column in EXPLAIN EXTENDED to be 100.00 for some cases where range cannot use filtering. (main.rowid_filter)
- Introduced ha_scan_time(), which takes into account the CPU cost of finding the next row and copying the row from the engine to 'record'. This causes table scan costs to slightly increase, and some tests changed their plan from ALL to RANGE or from ALL to ref. (innodb.innodb_mysql, main.select_pkeycache) In a few cases where the scan time of very small tables has a lower cost than a ref or range, things changed from ref/range to ALL. (main.myisam, main.func_group, main.limit_rows_examined, main.subselect2)
- Introduced ha_scan_and_compare_time(), which is like ha_scan_time() but also adds the cost of the WHERE clause (TIME_FOR_COMPARE).
- Added a small cost for creating a temporary table for materialization. This causes some very small tables to use scan instead of materialization.
- Added checking of the WHERE clause (TIME_FOR_COMPARE) of the accepted rows to ROR costs in get_best_ror_intersect().
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join() to ensure that the 'Last_query_cost' status variable contains the same value as the one that was calculated by the optimizer.
- Take avg_io_cost() into account in handler::keyread_time() and handler::read_time(). This should have no effect, as it is 1.0 by default, except for heap, which overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost adjustments. (main.order_by)
- Added scan type "scan_with_join_cache" to optimizer_trace. This is just to show in the trace what kind of scan was used.
- When using 'scan_with_join_cache', take into account the number of preceding tables (as we have to restore all fields for all previous table combinations when checking the WHERE clause). The new cost added is: (row_combinations * RECORD_COPY_COST * number_of_cached_tables). This increases the cost of join buffering in proportion to the number of tables in the join buffer. One effect is that full scans are now done earlier, as the cost is then smaller. (main.join_outer_innodb, main.greedy_optimizer)
- Removed the usage of 'worst_seeks' in cost_for_index_read(), as it caused wrong plans to be created; it preferred JT_EQ_REF even if it would be much more expensive than a full table scan. A related issue was that worst_seeks only applied to full lookups, not to clustered or index-only lookups, which is not consistent. This caused some plans to use an index scan instead of eq_ref. (main.union)
- Changed the federated block size from 4096 to 1500, which is the typical size of an IO packet.
- Added costs for reading rows to Federated. Needed as there is no caching of rows in the federated engine.
- Added the ha_innobase::read_with_rowid() cost function.
- A lot of extra things added to optimizer trace:
  - More costs, especially for materialization and index_merge.
  - Made labels more uniform.
  - Fixed a lot of minor bugs.
  - Added 'trace_started()' around a lot of trace blocks.
- When calculating the ORDER BY with LIMIT cost for using an index, the cost did not take into account the number of row retrievals that have to be done or the cost of comparing the rows with the WHERE clause. The calculated cost would be just a fraction of the real cost. Now we calculate the cost as we do for ranges and 'ref'.
- The use of 'Using index for group-by' changed a bit, as we now take the WHERE clause cost into account when comparing with 'ref' and prefer the method with fewer row combinations. (main.group_min_max)

Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans, like in optimize_straight_join() and greedy_search().
- Fixed a bug in save_explain_data() where we could test the wrong index when displaying 'Using index'. This caused some old plans to show 'Using index'. (main.subselect_innodb, main.subselect2)
- Fixed a bug in get_best_ror_intersect() where 'min_cost' was not updated, and the cost we compared with was not the one that was used.
- Fixed a very wrong cost calculation for priority queues in check_if_pq_applicable(). (main.order_by now correctly uses a priority queue)
- When calculating the cost of EQ_REF or REF, we added the cost of comparing the WHERE clause with the found rows, not with all row combinations. This made ref and eq_ref be regarded as way too cheap compared to other access methods.
- FORCE INDEX cost calculation didn't take clustered or covering indexes into account.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the cost of a JT_REF key. This may be true for an InnoDB primary key, but not for other unique keys or other engines. Now we use a handler function to calculate the cost, which allows us to handle clustered keys, covering keys and non-covering keys consistently.
- ha_start_keyread() didn't call extra_opt() if keyread was already enabled, but still changed the 'keyread' variable (which is wrong). Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take io_cost into account when calculating the cost of ranges.
- fix_semijoin_strategies_for_picked_join_order() used the wrong record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH and SJ_OPT_LOOSE_SCAN.
- Hash joins didn't provide the correct best_cost to the upper level, which meant that hash joins were more expensive than calculated in best_access_path() (a difference of 10x * TIME_FOR_COMPARE). This is fixed in the new code because we now include the TIME_FOR_COMPARE cost in 'read_time'.

Other things:
- Added some 'if (thd->trace_started())' to speed up the code.
- Removed the unused function Cost_estimate::is_zero().
- Simplified the testing of HA_POS_ERROR in get_best_ror_intersect(). (No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const() to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h.

Notes:
- In main.rowid_filter the join order of order and lineitem is swapped. This is because the cost of doing a range fetch of lineitem (98 rows) is almost as big as the whole join of order, lineitem. The filtering will also ensure that we only have to do very small key fetches of the rows in lineitem.
- main.index_merge_myisam had a few changes where we are now using fewer keys for index_merge. This is because index scans are now more expensive than before.
- handler->optimizer_cache_cost is updated in ha_external_lock(). This ensures that it is up to date per statement. Not an optimal solution (for locked tables), but should be ok for now.
- 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take the cost of filesort into consideration when a table scan is chosen. (main.myisam_explain_non_select_all)
- perfschema.table_aggregate_global_* has changed because an update on a table with 1 row will now use a table scan instead of a key lookup.

TODO in upcoming commits:
- Fix the selectivity calculation for ranges with and without filtering, and when there is a ref access but a scan is chosen. For this we have to store the lowest known value for 'accepted_records' in the OPT_RANGE structure.
- Change records_read to not include filtered rows.
- test_if_cheaper_ordering() needs to be updated to properly calculate costs. This will fix tests like main.order_by_innodb, main.single_delete_update.
- Extend get_range_limit_read_cost() to take cost_for_index_read() into consideration if there were no quick keys. This will reduce the computed cost for ORDER BY with LIMIT in some cases. (main.innodb_ext_key)
- Change all optimizer cost constants to user variables. This will make it possible for users to tune the cost model if needed, instead of having to ask for a new server build.
- Fix that we take selectivity into account when counting the number of rows we have to read when considering using an index table scan to resolve ORDER BY.
- Add a new calculation for read_with_rowid() where we take into account the benefit of reading multiple rows from the same page.
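For illustration, here is a minimal, self-contained C++ sketch of how the new handler cost functions might compose, based only on the names and formulas in this message; the constant values are assumptions, not the server's tuned defaults:

    #include <cstdio>

    // Assumed illustrative values; the real constants live in the server source.
    static const double RECORD_COPY_COST     = 0.5;  // copy row engine -> SQL layer
    static const double TIME_FOR_COMPARE     = 5.0;  // rows checked per cost unit
    static const double optimizer_cache_cost = 0.5;  // default cache hit ratio

    // ha_read_time(): engine fetch cost discounted by the expected cache hit ratio
    double ha_read_time(double engine_read_cost)
    { return engine_read_cost * optimizer_cache_cost; }

    // ha_read_and_copy_time(): add RECORD_COPY_COST per row
    double ha_read_and_copy_time(double engine_read_cost, double rows)
    { return ha_read_time(engine_read_cost) + rows * RECORD_COPY_COST; }

    // ha_read_and_compare_time(): add the WHERE clause check (TIME_FOR_COMPARE)
    double ha_read_and_compare_time(double engine_read_cost, double rows)
    { return ha_read_and_copy_time(engine_read_cost, rows) + rows / TIME_FOR_COMPARE; }

    int main()
    {
      std::printf("cost: %g\n", ha_read_and_compare_time(100.0, 1000.0));
      return 0;
    }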
* Make trace.add() usage uniform  (Monty, 2022-01-21; 3 files, -4/+24)
- Before any multiple add() calls, always use 'if (trace_started())'.
- Add unlikely() around all tests of trace_started().
- Change 'trace.add(); trace.add();' to 'trace.add().add();'
- When trace.add() goes over several lines, use the following formatting:
    trace.
      add(xxx).
      add(yyy).
      add(zzz);
  This format was chosen after a discussion between Sergei Petrunia and me, as it looks similar independently of whether 'trace' is an object or a pointer. It is also more suitable for an editor's auto-indentation.
Other things:
- Added DBUG_ASSERT(thd->trace_started()) to a few functions that should only be called if trace is enabled.
- "use_roworder_index_merge: true" changed to "use_sort_index_merge: false", as the original output was often not correct. Also fixed the related 'cause' to be correct.
- In best_access_path(), print the cost (and number of rows) before checking whether the plan should be used. This removes the need to print the cost in two places.
- Changed a few "read_time" tags to "cost".
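As a self-contained illustration of the chained formatting convention above (Trace here is a stand-in class, not the server's trace API):

    #include <iostream>
    #include <string>

    class Trace
    {
    public:
      // Returning *this is what makes the chained add().add() style possible.
      Trace &add(const std::string &key, double value)
      {
        std::cout << key << ": " << value << '\n';
        return *this;
      }
    };

    int main()
    {
      Trace trace;
      trace.
        add("rows", 10).
        add("read_time", 1.5).
        add("cost", 2.5);
      return 0;
    }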
* Update cost for hash and cached joins  (Monty, 2022-01-21; 63 files, -939/+995)
The old code didn't correctly add TIME_FOR_COMPARE to rows that are part of the scan that will be compared with the attached WHERE clause.
Now the cost calculations for hash join and full join-cache join are identical except for HASH_FANOUT (10%).
The cost for a join with keys is now also uniform. The total cost for using a key for lookup is calculated in one place as:
  (cost_of_finding_rows_through_key(records) + records/TIME_FOR_COMPARE) * record_count_of_previous_row_combinations + startup_cost
where startup_cost is the cost of creating a temporary table (if needed).
best_cost now includes the cost of comparing all WHERE clauses and also the cost of joining with previous row combinations.
Other things:
- Optimizer trace now prints the total costs, including testing the WHERE clause (TIME_FOR_COMPARE) and comparing with all previous rows.
- In optimizer trace, also include the total cost of the query together with the final join order. This makes it easier to find out where the cost was calculated.
- Old code used a filter even if the cost for it was higher than not using a filter. This is not corrected here.
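The lookup-cost formula quoted above can be written as a small sketch (names are taken from the message; TIME_FOR_COMPARE's value is an assumption):

    // (cost_of_finding_rows_through_key(records) + records/TIME_FOR_COMPARE) *
    //   record_count_of_previous_row_combinations + startup_cost
    static const double TIME_FOR_COMPARE = 5.0;           // assumed value

    double key_lookup_join_cost(double key_fetch_cost,    // finding rows through key
                                double records,           // rows per row combination
                                double record_count,      // previous row combinations
                                double startup_cost)      // e.g. temporary table
    {
      return (key_fetch_cost + records / TIME_FOR_COMPARE) * record_count
             + startup_cost;
    }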
* Adjust costs for doing index scan in cost_group_min_max()  (Monty, 2022-01-21; 17 files, -94/+175)
The idea is that when doing a tree dive (once per group), we need to compare key values, which is fast. For each new group, we have to compare the full WHERE clause for the row.
Compared to the original code, the cost of group_min_max() has slightly increased, which affects some tests with only a few rows. main.group_min_max and main.distinct have been modified to show the effect of the change.
The patch also adjusts the number of groups in case of quick selects (see the sketch after this entry):
- For simple WHERE clauses, ensure that we have at least as many groups as we have conditions on the used group-by key parts. The assumption is that each condition will create at least one group.
- Ensure that there are no more groups than rows found by quick_select.
Test changes:
- For some small tables there has been a change of
    Using index for group-by -> Using index for group-by (scanning)
    Range -> Index
  and Using index for group-by -> Using index
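A sketch of the two clamping rules above (illustrative names, not the server's code):

    #include <algorithm>

    double adjust_group_count(double groups,
                              double conditions_on_group_by_key_parts,
                              double quick_select_rows)
    {
      // Each condition on a used group-by key part creates at least one group.
      groups = std::max(groups, conditions_on_group_by_key_parts);
      // There cannot be more groups than rows found by the quick select.
      return std::min(groups, quick_select_rows);
    }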
* Return >= 1 from matching_candidates_in_table if records > 0.0  (Monty, 2022-01-21; 12 files, -34/+35)
Having rows >= 1.0 helps ensure that when we calculate the total rows of joins, the number of resulting rows will not be less after the join.
Changes in test cases:
- Join order changed for some tables with few records.
- 'filtered' is much higher for tables with few rows, as 1 row is a high percentage of a table with few rows.
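The rule is equivalent to this small sketch (illustrative, not the server's function signature):

    // If any rows can match, report at least one candidate row, so that a join
    // never yields a smaller row estimate than its inputs.
    double matching_candidates(double records, double filter_fraction)
    {
      double rows = records * filter_fraction;
      if (records > 0.0 && rows < 1.0)
        rows = 1.0;
      return rows;
    }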
* Update matching_candidates_in_table() to treat all conditions similarly  (Monty, 2022-01-21; 36 files, -207/+205)
Also fixed that the 'with_found_constraint' parameter to matching_candidates_in_table() works as documented: it is now true only if there is a reference to a previous table in the WHERE condition for the currently examined table (as it was originally documented).
Changes in test results:
- 'filtered' was 25% smaller for some queries (expected).
- Some join orders changed (probably because the tables had very few rows).
- Some more table scans, probably because there would be fewer returned rows.
- Some tests expose a bug: if there are more filtered rows, then the cost for a table scan will be higher. This will be fixed in a later commit.
* Fix calculation of selectivity  (Monty, 2022-01-21; 6 files, -12/+436)
calculate_cond_selectivity_for_table() is largely rewritten:
- Process keys in the order of rows found, smaller ranges first. If two ranges have an equal number of rows, use the one with more key parts. This helps us mark more used fields as not to be used for further selectivity calculations. See cmp_quick_ranges().
- Ignore keys with fields that were used by previous keys.
- Don't use rec_per_key[] to calculate selectivity for smaller secondary key parts. This does not work, as the rec_per_key[] value is calculated in the context of the previous key parts, not for the key part itself. The one exception is if the previous key parts are constants.
Other things:
- Ensure that select->cond_selectivity is always between 0 and 1.
- Ensure that select->opt_range_condition_rows is never updated to a higher value. It is initially set to the number of rows in the table.
- We now store in table->opt_range_condition_rows the lowest number of rows that any row-read method has found so far. Before, this was only done for QUICK_SELECT_I::QS_TYPE_ROR_UNION and QUICK_SELECT_I::QS_TYPE_INDEX_MERGE. Now it is done for a lot more methods. See calculate_cond_selectivity_for_table() for details.
- Calculate and use selectivity for the first key part of a multi-part key if the first key part is a constant: WHERE key1_part1=5 AND key2_part1=5. If key1 is used, then we can still use selectivity for key2.
Changes in test results:
- 'filtered' is slightly changed, usually to something slightly smaller.
- A few cases where the table order changed for group by queries. This was because the number of resulting rows from a group by query with MIN/MAX is now set to be smaller.
- A few indexes changed, as we now prefer indexes with more key parts if the number of resulting rows is the same.
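The ordering described for cmp_quick_ranges() amounts to a comparator like this sketch (struct and names are illustrative):

    struct RangeEstimate
    {
      double   found_rows;     // estimated rows matched by this range
      unsigned used_key_parts; // key parts used by the range
    };

    // Process smaller ranges first; on a tie, prefer more key parts, so more
    // fields get marked as already handled for selectivity purposes.
    bool process_before(const RangeEstimate &a, const RangeEstimate &b)
    {
      if (a.found_rows != b.found_rows)
        return a.found_rows < b.found_rows;
      return a.used_key_parts > b.used_key_parts;
    }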
* Fixed bug in SQL_SELECT_LIMIT  (Monty, 2022-01-21; 4 files, -9/+7)
We were comparing costs when we should have been comparing the number of rows that will be examined.
* Limit calculated rows to the number of rows in the table  (Monty, 2021-12-27; 11 files, -35/+35)
The result file changes are mainly that the number of rows is one smaller for some queries with DISTINCT or GROUP BY.
* Ensure that test_quick_select doesn't return more rows than in the table  (Monty, 2021-12-27; 10 files, -21/+21)
Other changes:
- In test_quick_select(), assume that if table->used_stats_records is 0 then the table has 0 rows.
- Fixed prepare_simple_select() to populate table->used_stat_records.
- Ensure that set_statistics_for_tables() doesn't cause used_stats_records to be 0 when using stat_tables.
- To get blackhole to work with replication, set stats.records to 2 so that test_quick_select() doesn't assume the table is empty.
* MDEV-14907 FEDERATEDX doesn't respect DISTINCT  (Monty, 2021-12-27; 2 files, -0/+91)
Federated and FederatedX cannot be used with ROR scans.
Federated::position() and FederatedX::position() store in 'ref' a pointer into a local result set buffer. This means that one cannot compare 'ref' values from different handler instances to see if they point to the same physical record.
This bug caused federated.federatedx to return wrong results when the optimizer tried to use index_merge to resolve some queries.
Fixed by introducing the table flag HA_NON_COMPARABLE_ROWID and using it with the above handlers.
Todo:
- Fix multi_delete(), multi_update() and read_records() to use the primary key instead of 'ref' if HA_NON_COMPARABLE_ROWID is set. The current code only works if we have only one range (like a table scan) for the tables that will be updated in the second pass.
- Enable DBUG_ASSERT() in ha_federated::cmp_ref() and ha_federatedx::cmp_ref().
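Why such refs cannot be compared can be shown with a tiny standalone sketch: two handler instances buffer the same row at different addresses, so refs holding buffer pointers differ even for the same physical record.

    #include <cstdio>
    #include <cstring>

    int main()
    {
      // Each "handler instance" has its own local result buffer.
      char buffer_a[16] = "row-1";
      char buffer_b[16] = "row-1";
      const char *ref_a = buffer_a;   // position() stores a buffer pointer
      const char *ref_b = buffer_b;
      // Same physical row, but the refs compare unequal:
      std::printf("refs equal: %d\n",
                  std::memcmp(&ref_a, &ref_b, sizeof ref_a) == 0); // prints 0
      return 0;
    }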
* Merge 10.6 into 10.7  (Marko Mäkelä, 2021-11-09; 34 files, -24/+1551)
| * Merge 10.5 into 10.6  (Marko Mäkelä, 2021-11-09; 34 files, -23/+1550)
| | * Merge 10.4 into 10.5  (Marko Mäkelä, 2021-11-09; 30 files, -20/+1399)
| | | * Merge 10.3 into 10.4  (Marko Mäkelä, 2021-11-09; 26 files, -19/+1349)
| | | | * Merge 10.2 into 10.3  (Marko Mäkelä, 2021-11-09; 17 files, -11/+1241)
| | | | | * Remove restarts from encrypt_and_grep test  [st-10.2-merge]  (Marko Mäkelä, 2021-11-09; 2 files, -10/+12)
| | | | | * Merge mariadb-10.2.41 into 10.2  (Marko Mäkelä, 2021-11-09; 10 files, -35/+135)
| | | | | * MDEV-25610 Assertion `escape != -1' failed in Item_func_like::val_int  [bb-10.2-bar-MDEV-25610]  (Alexander Barkov, 2021-11-08; 4 files, -0/+1108)
| | | | | * MDEV-22284 Aria table key read crash because wrong index used  [bb-10.2-midenok2]  (Aleksey Midenkov, 2021-11-02; 2 files, -0/+24)
When restoring lastinx, last_key.keyinfo must be updated as well. A good example is in _ma_check_index(). The point of failure is extra(HA_EXTRA_NO_KEYREAD) in ha_maria::get_auto_increment():
1. extra(HA_EXTRA_KEYREAD) saves lastinx;
2. maria_rkey() changes the index, and thus lastinx and last_key.keyinfo;
3. extra(HA_EXTRA_NO_KEYREAD) restores lastinx but not last_key.keyinfo.
So after step 3 we have a discrepancy between lastinx and last_key.keyinfo.
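The shape of the fix, as a sketch with illustrative stand-in types (the real fields are handler state inside Aria):

    struct KeyreadState
    {
      int         lastinx;   // active index number
      const void *keyinfo;   // stands in for last_key.keyinfo
    };

    // On HA_EXTRA_NO_KEYREAD, restore *both* fields that HA_EXTRA_KEYREAD saved;
    // restoring only lastinx leaves keyinfo pointing at the wrong index.
    void restore_keyread_state(KeyreadState &current, const KeyreadState &saved)
    {
      current.lastinx = saved.lastinx;
      current.keyinfo = saved.keyinfo;
    }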
| | | | | * MDEV-24335 Unexpected question mark in the end of a TINYTEXT column  (Alexander Barkov, 2021-11-02; 2 files, -0/+20)
my_copy_fix_mb() passed MIN(src_length, dst_length) to my_append_fix_badly_formed_tail(). It could break a multi-byte character in the middle, which put a question mark into the destination.
Fixed the code to pass the true src_length to my_append_fix_badly_formed_tail().
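The underlying hazard can be sketched standalone: truncating a multi-byte (here UTF-8) string at a raw byte boundary can split a character, so a correct copy must know where the source characters actually end. This is only an illustration of the idea, not the my_copy_fix_mb() code:

    #include <cstring>

    // Copy at most dst_len bytes of UTF-8 without ending inside a multi-byte
    // character: if the copy truncates, back up past continuation bytes (10xxxxxx).
    size_t copy_utf8_no_split(char *dst, size_t dst_len,
                              const char *src, size_t src_len)
    {
      size_t n = src_len < dst_len ? src_len : dst_len;
      if (n < src_len)  // truncated: the byte at src[n] may continue a character
        while (n > 0 && (static_cast<unsigned char>(src[n]) & 0xC0) == 0x80)
          n--;
      std::memcpy(dst, src, n);
      return n;
    }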
| | | | | * MDEV-26949 --debug-gdb installs redundant signal handlers  (Marko Mäkelä, 2021-11-01; 1 file, -1/+2)
There is a server startup option --gdb a.k.a. --debug-gdb that requests signal handlers to be set up for more convenient debugging. Most notably, SIGINT (ctrl-c) will not be ignored, and you will be able to interrupt the execution of the server while GDB is attached to it.
When we are debugging, the signal handlers that would normally display a terse stack trace are useless.
When we are debugging with rr, the signal handlers may interfere with a SIGKILL that could be sent to the process by the environment, and ruin the rr replay trace, due to a Linux kernel bug (https://lkml.org/lkml/2021/10/31/311).
To be able to diagnose bugs in kill+restart tests, we may really need both a trace before the SIGKILL and a trace of the failure after a subsequent server startup. So, we had better avoid hitting the problem by simply not installing those signal handlers.
| | | | | * MDEV-24901 SIGSEGV in fts_get_table_name, SIGSEGV in ib_vector_size, SIGSEGV in row_merge_fts_doc_tokenize, stack smashing  [bb-10.2-bar-MDEV-24901]  (Alexander Barkov, 2021-10-29; 4 files, -0/+77)
strmake() puts one extra 0x00 byte at the end of the string. The code in my_strnxfrm_tis620[_nopad] did not take this into account, so in the reported scenario the 0x00 byte was put outside of a stack variable, which made ASAN crash.
This problem is already fixed in MySQL:
  commit 19bd66fe43c41f0bde5f36bc6b455a46693069fb
  Author: bin.x.su@oracle.com <>
  Date: Fri Apr 4 11:35:27 2014 +0800
But that fix does not seem to be correct, as it breaks when it finds a zero byte in the source string.
Using memcpy() instead of strmake():
- Unlike strmake(), memcpy() does not write beyond the destination size passed.
- Unlike the MySQL fix, memcpy() does not break on the first 0x00 byte found in the source string.
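The difference between the two calls, sketched (fill_buffer() is a hypothetical caller; strmake() is the MySQL/MariaDB bounded string copy described above):

    #include <cstring>

    // strmake(dst, src, n) behaves like a bounded strcpy: it stops at the first
    // 0x00 byte in src and always appends a terminating 0x00 -- i.e. it may
    // write n+1 bytes in total.  memcpy() writes exactly n bytes and is
    // unaffected by embedded 0x00 bytes.
    void fill_buffer(char *dst, const char *src, size_t n)
    {
      // before: strmake(dst, src, n);  // could write dst[n] = 0x00 past the end
      std::memcpy(dst, src, n);         // writes exactly n bytes
    }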
| | | | | * MDEV-26833 Missed statement rollback in case transaction drops or creates temporary table  [bb-10.2-andrei]  (Andrei Elkin, 2021-10-28; 5 files, -21/+1179)
When a transaction creates or drops temporary tables and afterwards one of its statements faces an error, even the transactional table statement's cached ROW-format events get written to the binlog and are visible after the transaction's commit.
Fixed with a proper analysis of whether the errored-out statement needs to be rolled back in the binlog. For instance, the mere fact that CREATE or DROP for temporary tables was already cached by previous statements does not cause the events of the errored-out statement to be retained in the cache. Conversely, if the statement creates or drops a temporary table itself, it can't be rolled back - this rule remains.
| | | | | * rpl_get_master_version_and_clock and rpl_row_big_table_id tests are slow, so let's not run them under valgrind  (Alice Sherepa, 2021-10-28; 2 files, -0/+2)
| | | | * Merge mariadb-10.3.32 into 10.3  (Marko Mäkelä, 2021-11-09; 19 files, -93/+1365)
| | | | * MDEV-26928 Column-inclusive WITH SYSTEM VERSIONING doesn't work with explicit system fields  (Aleksey Midenkov, 2021-11-02; 2 files, -0/+42)
The versioning_fields flag indicates that some columns were explicitly specified WITH SYSTEM VERSIONING. In that case we imply WITH SYSTEM VERSIONING for the whole table and WITHOUT SYSTEM VERSIONING for the other columns.
| | | | * MDEV-25552 System-versioned tables partitioned by LIMIT break CHECK TABLE  (Aleksey Midenkov, 2021-11-02; 1 file, -2/+1)
Replaced the HA_ADMIN_NOT_IMPLEMENTED error code with HA_ADMIN_OK. Now CHECK TABLE does not fail because of the unsupported check_misplaced_rows(). The admin message is not needed either.
The test case is the same as for MDEV-21011 (a7cf0db3d86); the result has been changed.
| | | | * MDEV-25803 Inplace ALTER breaks MyISAM/Aria table when order of keys is changed  (Aleksey Midenkov, 2021-11-02; 2 files, -0/+57)
mysql_prepare_create_table() does my_qsort(sort_keys) on the key info. This sorting is non-deterministic: a table is created with one order, and an inplace ALTER may overwrite the frm with another order. Since inplace ALTER does nothing about the key info for the MyISAM/Aria storage engines, this results in a discrepancy between the frm and the storage engine key definitions.
The fix avoids the sorting of keys when no new keys are added by ALTER (and this is ok for MyISAM/Aria, since they cannot add new keys inplace).
Notes:
- mi_keydef_write()/mi_keyseg_write() are used only in mi_create(). They should be used in ha_inplace_alter_table() as well.
- Aria corruption detection is unimplemented: maria_check_definition() is never used!
- MySQL 8.0 has this bug as well, as of 8.0.26.
This breaks main.long_unique in 10.4. The new result is correct and should be applied, as it is just a different (original) order of keys.
| | | * Merge branch '10.4' into bb-10.4-release  (Oleksandr Byelkin, 2021-11-08; 6 files, -4/+53)
| | | | * MDEV-26929: Make the main testsuite runnable with optimizer trace enabled  [bb-10.4-mdev26929-part2]  (Sergei Petrunia, 2021-10-29; 2 files, -0/+19)
Part#2: check that the compile-time default of @@optimizer_trace is correct.
| | | | * MDEV-26453 Assertion `0' failed in row_upd_sec_index_entry & corruption  (Nikita Malyavin, 2021-10-29; 2 files, -0/+30)
A long UNIQUE HASH index silently creates a virtual column index, which should be impossible for base columns featuring AUTO_INCREMENT.
Fix: add a relevant check; add a new vcol type for a prettier error message.
| | | | * MDEV-26929: fixed opt_trace test for --mysqld=--optimizer_trace=enabled=on  [bb-10.4-MDEV-26929]  (Sergei Krivonos, 2021-10-28; 2 files, -4/+4)
| | * Merge branch '10.5' into bb-10.5-release  (Oleksandr Byelkin, 2021-11-08; 7 files, -3/+151)
| | | * MDEV-24584 Selecting INT column with COLLATE utf8mb4_general_ci throws an error  [bb-10.5-bar-MDEV-24584]  (Alexander Barkov, 2021-11-08; 4 files, -0/+69)
| | | * MDEV-25555 Server crashes in tree_record_pos after INPLACE-recreating index on HEAP table  (Aleksey Midenkov, 2021-11-03; 3 files, -3/+25)
Dropping and adding the same key is considered a rename (see ALTER_RENAME_INDEX in fill_alter_inplace_info()). But in this case the order of keys may be changed, because mysql_prepare_alter_table() does not yet know about the rename and treats it as 2 operations: drop and add.
In that case we disable the inplace algorithm for engines such as Memory, MyISAM and Aria with the ALTER_INDEX_ORDER flag. These engines have no specialized check_if_supported_inplace_alter(), and the default handler::check_if_supported_inplace_alter() sees an unknown flag and returns HA_ALTER_INPLACE_NOT_SUPPORTED. ha_innobase::check_if_supported_inplace_alter() works differently, and inplace is not disabled (with the help of the modified INNOBASE_INPLACE_IGNORE).
The add_drop_v_cols fork was also tweaked, as it wrongly failed with MSG_UNSUPPORTED_ALTER_ONLINE_ON_VIRTUAL_COLUMN when it saw ALTER_INDEX_ORDER. A no-op operation must still be a no-op regardless of the presence of ALTER_INDEX_ORDER, so we tweak its condition as well.
| | | * MDEV-25803 Inplace ALTER breaks MyISAM/Aria table when order of keys is changed  (Aleksey Midenkov, 2021-11-03; 2 files, -0/+57)
mysql_prepare_create_table() does my_qsort(sort_keys) on the key info. This sorting is non-deterministic: a table is created with one order, and an inplace ALTER may overwrite the frm with another order. Since inplace ALTER does nothing about the key info for the MyISAM/Aria storage engines, this results in a discrepancy between the frm and the storage engine key definitions.
The fix avoids the sorting of keys when no new keys are added by ALTER (and this is ok for MyISAM/Aria, since they cannot add new keys inplace). There is a case where the implicit primary key may change when removing NOT NULL from part of a unique key. In that case we update modified_primary_key, which is then used to not skip key sorting. According to is_candidate_key(), there are no other cases when the primary key may change implicitly.
Notes:
- mi_keydef_write()/mi_keyseg_write() are used only in mi_create(). They should be used in ha_inplace_alter_table() as well.
- Aria corruption detection is unimplemented: maria_check_definition() is never used!
- MySQL 8.0 has this bug as well, as of 8.0.26.
| * Merge branch '10.6' into bb-10.6-release  (Oleksandr Byelkin, 2021-11-08; 7 files, -177/+17)
* Merge branch '10.7' into bb-10.7-release  (Oleksandr Byelkin, 2021-11-08; 9 files, -177/+97)
| * MDEV-26956 LeakSanitizer/Valgrind errors in trx_mod_table_time_t::start_bulk_insert upon adding system versioning  [bb-10.7-MDEV-26956]  (Thirunarayanan Balathandayuthapani, 2021-11-07; 2 files, -0/+23)
InnoDB fails to apply the buffered insert operation for the 'mysql/transaction_registry' table during system versioning DDL. To avoid this, the DDL calls extra(HA_EXTRA_IGNORE_INSERT) to tell InnoDB to apply the buffered insert operation.
| * Merge branch '10.6' into 10.7  (Vladislav Vaintroub, 2021-11-04; 10 files, -199/+17)
| | * MDEV-26966: Remove innodb_force_load_corrupted  (Marko Mäkelä, 2021-11-04; 3 files, -171/+0)
MySQL 5.5, in commit 177d8b0c125b841c0650d27d735e3b87509dc286, introduced a configuration parameter innodb_force_load_corrupted whose purpose was to allow a corrupted table to be dropped. Given that MDEV-11412 in MariaDB 10.5.4 aims to allow any metadata for a missing or corrupted table to be dropped, and given that MDEV-17567, MDEV-25506 and related tasks made DDL operations crash-safe, the parameter no longer serves any purpose. Because this obscure parameter was read-only (not settable by a client), it seems that we can simply declare it with MARIADB_REMOVED_OPTION (commit 1bc9cce70273c7f4705fb193abd2c938e3f3402a) without breaking any upgrades.
DICT_ERR_IGNORE_INDEX: Replaces DICT_ERR_IGNORE_INDEX_ROOT and DICT_ERR_IGNORE_CORRUPT, which were always set equally.
dict_load_indexes(): Report "No indexes found for table" in a uniform way, and only when the DICT_ERR_IGNORE_INDEX flag is not set. If the clustered index is marked corrupted and the operation is DICT_ERR_IGNORE_DROP (we are about to drop the table), we will load the metadata; else, we will return DB_INDEX_CORRUPT. If SYS_INDEXES.PAGE is FIL_NULL, report an error or warning unless we are about to drop the table.
dict_load_table_one(): Simplify the logic.
| | * MDEV-26933 InnoDB fails to detect page number mismatch  [bb-10.6-MDEV-26933]  (Marko Mäkelä, 2021-11-01; 4 files, -6/+17)
mtr_t::page_lock(): Validate the page number.
ibuf_tree_root_get(): Remove assertions that became redundant.
The assertions in btr_validate_level() are somewhat redundant as well, but because they are ut_a(), they are also present in release builds, while the ones in mtr_t::page_lock() are only present in debug builds.
btr_cur_position(): Do not duplicate an assertion that is part of page_cur_position().
dict_load_tablespace(): Introduce a new option DICT_ERR_IGNORE_TABLESPACE that will suppress loading a tablespace when a table is going to be dropped.
| * MDEV-26947 UNIQUE column checks fail in InnoDB resulting in table corruption  (Thirunarayanan Balathandayuthapani, 2021-11-01; 2 files, -0/+57)
InnoDB fails to roll back the buffered bulk insert operation when trx_mark_sql_stat_end() encounters the DB_DUPLICATE_KEY error. In this case, CHECK TABLE fails with a secondary index row count mismatch. InnoDB gives the error ER_ERROR_DURING_COMMIT when encountering DB_DUPLICATE_KEY in trx_mark_sql_stat_end().
* Merge branch '10.6' into 10.7  [mariadb-10.7.1]  (Oleksandr Byelkin, 2021-11-05; 5 files, -21/+1179)
| * Merge branch '10.5' into 10.6  [mariadb-10.6.5]  (Oleksandr Byelkin, 2021-11-05; 5 files, -21/+1179)
| | * Merge branch '10.4' into 10.5  [mariadb-10.5.13]  (Oleksandr Byelkin, 2021-11-05; 5 files, -21/+1179)
| | | * Merge branch '10.3' into 10.4  [mariadb-10.4.22]  (Oleksandr Byelkin, 2021-11-05; 5 files, -21/+1179)
| | | | * Merge branch '10.2' into 10.3  [mariadb-10.3.32]  (Oleksandr Byelkin, 2021-11-05; 5 files, -21/+1179)
| | | | | * MDEV-26833 Missed statement rollback in case transaction drops or creates temporary table  [mariadb-10.2.41]  (Andrei Elkin, 2021-11-05; 5 files, -21/+1179)
When a transaction creates or drops temporary tables and afterwards one of its statements faces an error, even the transactional table statement's cached ROW-format events get written to the binlog and are visible after the transaction's commit.
Fixed with a proper analysis of whether the errored-out statement needs to be rolled back in the binlog. For instance, the mere fact that CREATE or DROP for temporary tables was already cached by previous statements does not cause the events of the errored-out statement to be retained in the cache. Conversely, if the statement creates or drops a temporary table itself, it can't be rolled back - this rule remains.