calculate_cond_selectivity_for_table() is largely rewritten:
- Process keys in the order of rows found, smaller ranges first. If two
  ranges have an equal number of rows, use the one with more key parts.
  This helps us mark more used fields as not to be used for further
  selectivity calculations. See cmp_quick_ranges().
- Ignore keys with fields that were used by previous keys.
- Don't use rec_per_key[] to calculate selectivity for smaller
  secondary key parts. This does not work, as the rec_per_key[] value
  is calculated in the context of the previous key parts, not for the
  key part itself. The one exception is when the previous key parts
  are constants.
Other things:
- Ensure that select->cond_selectivity is always between 0 and 1.
- Ensure that select->opt_range_condition_rows is never updated to
  a higher value. It is initially set to the number of rows in the table.
- We now store in table->opt_range_condition_rows the lowest number of
  rows that any row-read method has found so far. Before, this was only
  done for QUICK_SELECT_I::QS_TYPE_ROR_UNION and
  QUICK_SELECT_I::QS_TYPE_INDEX_MERGE.
  Now it is done for many more methods. See
  calculate_cond_selectivity_for_table() for details.
- Calculate and use the selectivity of the first key part of a multi-part
  key if the first key part is a constant. For example, with
  WHERE key1_part1=5 AND key2_part1=5: if key1 is used, we can still
  use the selectivity of key2.
Changes in test results:
- 'filtered' is slightly changed, usually to something slightly smaller.
- In a few cases the table order changed for GROUP BY queries, because
  the number of resulting rows from a GROUP BY query with MIN/MAX is now
  estimated to be smaller.
- A few chosen indexes changed, as we now prefer indexes with more key
  parts when the number of resulting rows is the same.
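The ordering rule and the [0, 1] clamp above can be pictured with a small standalone sketch. The types here are invented for illustration; cmp_quick_ranges() and the real structures in opt_range.cc are far richer.

    #include <algorithm>

    // Hypothetical summary of one candidate range scan.
    struct RangeScan
    {
      double   rows;      // estimated rows the range reads
      unsigned key_parts; // key parts used by the range
    };

    // Smaller ranges first; on equal rows, prefer more key parts, so
    // that more fields get marked as used early on.
    // Usage: std::sort(scans.begin(), scans.end(), scan_order);
    static bool scan_order(const RangeScan &a, const RangeScan &b)
    {
      if (a.rows != b.rows)
        return a.rows < b.rows;
      return a.key_parts > b.key_parts;
    }

    // Keep any selectivity estimate inside the valid [0, 1] interval.
    static double clamp_selectivity(double s)
    {
      return std::min(1.0, std::max(0.0, s));
    }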
The problem was in and_all_keys(), the code of MDEV-9759 which calculates
the new tree weight:
First, it didn't take into account the case when
(next->next_key_part=tmp) == NULL
and dereferenced a NULL pointer when getting tmp->weight.
Second, "if (param->alloced_sel_args > SEL_ARG::MAX_SEL_ARGS) break"
could leave the loop with an incorrect value of the weight.
Fixed by introducing SEL_ARG::update_weight_locally() and calling it
at the end of the function. This lets us avoid handling all of the
above cases one by one.
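The shape of the fix is easiest to see in a toy version, with a hypothetical node type standing in for SEL_ARG: instead of patching the weight incrementally inside the loop, recompute it once at the end, so a NULL next_key_part or an early break cannot leave it stale.

    // Toy stand-in for a SEL_ARG-like node.
    struct Node
    {
      Node *left, *right, *next_key_part;
      int weight;
    };

    // Recompute the weight from the structure itself, in the spirit of
    // SEL_ARG::update_weight_locally(): correct no matter how the
    // preceding transformation loop terminated.
    static int recompute_weight(Node *n)
    {
      if (!n)
        return 0;            // a NULL subtree simply contributes nothing
      n->weight= 1 + recompute_weight(n->left) +
                     recompute_weight(n->right) +
                     recompute_weight(n->next_key_part);
      return n->weight;
    }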
(Variant #5, full patch, for 10.5)
Do not produce SEL_ARG graphs that would yield huge numbers of ranges.
Introduce the concept of a SEL_ARG graph's "weight". If we are about to
produce a graph whose weight exceeds the limit, remove the parts of the
SEL_ARG graph that represent the biggest key parts. Do so until the
graph's weight is within the limit.
Includes:
- debug code to verify the SEL_ARG graph weight
- a user-visible @@optimizer_max_sel_arg_weight to control the optimization
- logging of the optimization into the optimizer trace
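As a rough sketch of the trimming policy (illustrative only; the real code walks the SEL_ARG graph itself rather than a per-keypart weight vector): drop the biggest key parts until the total weight fits.

    #include <vector>

    // keypart_weight[i] is the weight contributed by key part i of a
    // hypothetical SEL_ARG graph.
    static void prune_to_weight_limit(std::vector<long> &keypart_weight,
                                      long limit)
    {
      long total= 0;
      for (long w : keypart_weight)
        total+= w;
      // Remove the highest key parts first, until within the limit;
      // always keep at least the first key part.
      while (total > limit && keypart_weight.size() > 1)
      {
        total-= keypart_weight.back();
        keypart_weight.pop_back();
      }
    }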
available free memory to use completely
let thd->killed abort the range optimizer

[Warning] InnoDB: Using a partial-field key prefix in search
For a key with keyparts (k1,k2,k3), when building a range over the
keyparts we should make sure that if the min_value/max_value for a
keypart is not added to the key buffer, then the following keyparts are
not added either.
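A minimal sketch of the rule, with made-up types: once a keypart has no min/max value to append, the loop must stop so that no later keypart lands in the buffer after the gap.

    #include <cstring>

    struct KeypartBound
    {
      const unsigned char *value; // NULL when there is no usable bound
      size_t length;
    };

    // Append keypart bounds left to right; stop at the first keypart
    // that has no value, so the prefix in the buffer never has a hole.
    static size_t build_key_prefix(const KeypartBound *part, size_t n_parts,
                                   unsigned char *buf, size_t buf_size)
    {
      size_t used= 0;
      for (size_t i= 0; i < n_parts; i++)
      {
        if (!part[i].value || used + part[i].length > buf_size)
          break;               // later keyparts must not follow a gap
        memcpy(buf + used, part[i].value, part[i].length);
        used+= part[i].length;
      }
      return used;
    }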
Changed the function append_range_all_keyparts to use
sel_arg_range_seq_init / sel_arg_range_seq_next to produce ranges.
Also adjusted the print format for the ranges; now the ranges are printed as:
(keypart1_min, keypart2_min, ..) OP (keypart1_name, keypart2_name, ..) OP (keypart1_max, keypart2_max, ..)
Also added more tests for range and index merge access for the optimizer trace.

* Update wrong zip-code

When the chosen execution plan accesses a join table employing a range
rowid filter, a quick select to scan this range has to be built. This
quick select is built by a call of SQL_SELECT::test_quick_select().
For this call the function should evaluate only single-index range
scans. To make this possible, a new parameter was added to the function.
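The intent of the new parameter can be sketched like this; the names are invented for illustration and are not the actual SQL_SELECT::test_quick_select() signature.

    // Kinds of "quick select" plans the range optimizer can produce.
    enum scan_kind
    { SINGLE_INDEX_RANGE, INDEX_MERGE, ROR_INTERSECT, ROR_UNION };

    // When building a quick select for a range rowid filter, composite
    // plans are of no use: only a single-index range scan can feed the
    // filter, so everything else is rejected up front.
    static bool plan_allowed(scan_kind kind, bool only_single_index_scans)
    {
      return !only_single_index_scans || kind == SINGLE_INDEX_RANGE;
    }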
|
|/ / / /
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This task involves the implementation of the optimizer trace.
This feature produces a trace for any SELECT/UPDATE/DELETE,
which contains information about decisions taken by the optimizer during
the optimization phase (choice of table access method, various costs,
transformations, etc). This feature helps explain why some decisions were
taken by the optimizer and why some were rejected.
The trace is session-local, controlled by the @@optimizer_trace variable.
To enable the optimizer trace, run:
set @@optimizer_trace='enabled=on';
To display the trace, run:
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
This task also involves:
MDEV-18489: Limit the memory used by the optimizer trace.
This introduces a switch, optimizer_trace_max_mem_size, which limits
the memory used by the optimizer trace. It was implemented by
Sergei Petrunia.

introduced in the patch for MDEV-16934.

This patch introduces support for the system variable
eq_range_index_dive_limit that has existed in MySQL since 5.6. The
variable sets a limit on index dives into equality ranges. Index dives
are performed by the optimizer to estimate the number of rows in range
scans. Index dives usually provide a good estimate, but they are pretty
expensive. To estimate the number of rows in equality ranges, statistical
data on indexes can be employed instead. Its use gives less accurate
estimates, but it is cheap. So if the number of equality dives required
by an index scan exceeds the set limit, no dives for equality ranges are
performed by the optimizer for this index.
As the new system variable is introduced in a stable version, its default
is set to a special value meaning there is no limit on the number of
index dives performed by the optimizer.
The patch partially uses the MySQL code for WL#5957
'Statistics-based Range optimization for many ranges'.
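The decision rule reduces to a comparison like the following sketch; representing the "no limit" special value as 0 is an assumption made here for illustration.

    // Return true when the optimizer should pay for real index dives,
    // false when it should fall back to index statistics.
    static bool use_index_dives(unsigned n_equality_ranges,
                                unsigned eq_range_index_dive_limit)
    {
      if (eq_range_index_dive_limit == 0)  // "no limit": always dive
        return true;
      return n_equality_ranges < eq_range_index_dive_limit;
    }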
Merge branch '10.3' into trunk

Other things:
- Cleanup of allocated bitmaps done in open(), which
  simplifies init_partition_bitmaps()
- Add needed defines in ha_spider.cc to enable new spider code
- Fixed some DBUG_PRINT() to be consistent with normal code
- Removed end space
- The changes in the test cases partition_innodb, partition_range,
  partition_pruning etc are because partitions can now calculate the
  number of rows in a range more exactly.
Contains spider patches:
014,015,023,033,035,037,040,042,044,045,049,050,051,053,059

TRUE)

Move TABLE::key_read into handler, because in index merge and DS-MRR
there can be many handlers per table, and some of them use key read
while others don't. "keyread" is really a per-handler property,
not a per-TABLE one.
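A condensed sketch of the design point, with simplified stand-ins rather than the server's actual handler class:

    // Each handler owns its own keyread state, so two handlers opened
    // on the same TABLE (as in index merge or DS-MRR) can disagree.
    class handler_sketch
    {
      static const unsigned KEYREAD_OFF= ~0u; // "keyread disabled"
      unsigned keyread;                       // index used for keyread
    public:
      handler_sketch() : keyread(KEYREAD_OFF) {}
      void start_keyread(unsigned index) { keyread= index; }
      void end_keyread()                 { keyread= KEYREAD_OFF; }
      bool keyread_enabled() const       { return keyread != KEYREAD_OFF; }
    };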
bitmap_is_set(table->read_set, field_index))' failed
Found and fixed 2 problems:
- Filesort addon fields didn't mark virtual columns properly.
- Multi-range read calculated the vcol bitmap but was not using it.
  This caused the wrong vcol field to be calculated on read, which
  triggered the assert.

filesort and init_read_record() for the same table.
This will simplify code for WINDOW FUNCTIONS (MDEV-6115).
- Filesort_info renamed to SORT_INFO and moved to filesort.h
- filesort() now returns SORT_INFO
- init_read_record() now takes a SORT_INFO parameter
- unique declaration is moved to uniques.h
- subselect caching of buffers is now more explicit than before
- filesort_buffer is now reusable even if rec_length has changed
- filesort_free_buffers() and free_io_cache() calls are removed
- Remove one malloc() when using get_addon_fields()
Other things:
- Added a --debug-assert-on-not-freed-memory option to make it easier to
  debug some not-freed-memory issues.
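The new ownership flow can be sketched as below, with simplified hypothetical types; the real SORT_INFO carries sort buffers, addon fields, and an IO_CACHE.

    // filesort() produces one object that owns all sort state ...
    struct SortResult
    {
      unsigned long long found_rows;
      // ... buffers, addon fields, io cache ...
    };

    static SortResult *do_filesort()
    {
      static SortResult result;
      result.found_rows= 0;   // stand-in for the actual sort
      return &result;
    }

    // ... and the read-record setup consumes that same object, instead
    // of both sides reaching into a shared Filesort_info inside TABLE.
    static void setup_read_record(SortResult *sort) { (void) sort; }

    void read_sorted_rows()
    {
      SortResult *sort= do_filesort();
      setup_read_record(sort);
    }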
The crash was caused by the range optimizer using RANGE_OPT_PARAM::min_key
(and max_key) to store keys. The buffer size was a good upper bound for
range analysis and partition pruning, but not for EITS selectivity
calculations.
Fixed by making these buffers variable-size. The sizes are calculated
from the [pseudo]indexes used for range analysis.
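In sketch form (invented names), the fix sizes the buffers from the keys actually handed to range analysis instead of from a fixed bound:

    #include <cstddef>

    struct KeyDef
    {
      const size_t *part_store_length; // stored length of each key part
      unsigned parts;
    };

    // The min_key/max_key buffers only need to hold the longest key
    // that range analysis (or an EITS pseudo-index) will actually build.
    static size_t needed_key_buffer_size(const KeyDef *keys,
                                         unsigned n_keys)
    {
      size_t need= 0;
      for (unsigned k= 0; k < n_keys; k++)
      {
        size_t len= 0;
        for (unsigned p= 0; p < keys[k].parts; p++)
          len+= keys[k].part_store_length[p];
        if (len > need)
          need= len;
      }
      return need;
    }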
can compile with -Wshadow and get far fewer warnings)
- Changed ER(ER_...) to ER_THD(thd, ER_...) when thd was known or when
  there were many calls to current_thd in the same function.
- Changed ER(ER_..) to ER_THD_OR_DEFAULT(current_thd, ER...) in some
  places where current_thd is not necessarily defined.
- Removed calls to current_thd when we have access to thd.
Part of this is optimization (not calling current_thd when not needed),
but part is bug fixing for error conditions when current_thd is not
defined (for example at startup and at the end of mysqld).
Notable renames done, as otherwise a lot of functions would have had to
be changed:
- In the JOIN structure renamed:
  examined_rows -> join_examined_rows
  record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just
  to be safe.
- Removed the old 'tab' prefix in JOIN_TAB::save_explain_data() and use
  members directly.
- Added 'thd' as an argument to a few functions to avoid calling
  current_thd.

sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't
  be inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called dozens of times implicitly, at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced lots of implicit sql_alloc() calls with direct mem_root
allocation, passing the THD pointer through whenever needed.
Number of sql_alloc() calls reduced 345 -> 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped 0.76 -> 0.59;
sql_alloc() overhead dropped 0.25 -> 0.06.
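The cost difference is the classic TLS-lookup-versus-explicit-argument one. A minimal sketch, with a hypothetical arena type and malloc() standing in for the actual mem_root allocation:

    #include <pthread.h>
    #include <cstdlib>

    struct Arena { /* bump-allocator state */ };
    static pthread_key_t current_arena_key;

    // sql_alloc() style: every call pays a pthread_getspecific() lookup
    // (plus an out-of-line function call) just to find the arena.
    void *slow_alloc(size_t size)
    {
      Arena *a= (Arena *) pthread_getspecific(current_arena_key);
      (void) a;
      return malloc(size);
    }

    // Direct mem_root style: the caller already has the THD, so the
    // arena arrives as an argument and the lookup disappears.
    void *fast_alloc(Arena *arena, size_t size)
    {
      (void) arena;
      return malloc(size);
    }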
InnoDB
- Filesort has an optimization where it reads only the columns that are
  needed before the sorting is done.
- When ref(_or_null) is picked by the join optimizer, it may remove parts
  of the WHERE clause that are guaranteed to be true.
- However, if we use a quick select, we must put all of the range columns
  into the read set. Not doing so may cause us to fail to detect the end
  of the range.
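The invariant behind the fix, as a small sketch with a plain bitset standing in for the server's bitmap type:

    #include <bitset>

    // Every column that the quick select compares against its range
    // bounds must be read, or the end-of-range test can look at a
    // column that was never fetched and miss the end of the range.
    static void add_range_columns_to_read_set(std::bitset<64> &read_set,
                                              const unsigned *range_columns,
                                              unsigned n_columns)
    {
      for (unsigned i= 0; i < n_columns; i++)
        read_set.set(range_columns[i]);
    }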
|
|/ /
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
SEL_ARG::IMPOSSIBLE
Let the range optimizer remove parts of OR-clauses for which range
analysis produced SEL_TREE(IMPOSSIBLE).
There is no need to remove parts of AND-clauses: either they are inside
an OR (and the whole AND-clause will be removed), or the AND-clause is
at the top level, in which case the whole WHERE (or ON) is always FALSE
and this is a degenerate case which receives special treatment.
The removal process takes care not to produce 1-way ORs (in that case
we replace the OR with its remaining member).
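The pruning rule, reduced to a toy condition tree; this is illustrative only, as the server works on Item trees rather than a structure like this:

    #include <vector>

    struct Cond
    {
      enum Kind { LEAF, OR_NODE } kind;
      bool impossible;              // range analysis proved: always false
      std::vector<Cond*> children;  // only used for OR_NODE
    };

    // Drop impossible OR branches; if exactly one branch survives,
    // return it directly so we never hand back a 1-way OR.
    static Cond *prune_impossible(Cond *c)
    {
      if (c->kind == Cond::LEAF)
        return c->impossible ? nullptr : c;
      std::vector<Cond*> kept;
      for (Cond *child : c->children)
        if (Cond *k= prune_impossible(child))
          kept.push_back(k);
      if (kept.empty())
        return nullptr;             // the whole OR is impossible
      if (kept.size() == 1)
        return kept[0];             // replace the OR with its one member
      c->children= kept;
      return c;
    }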
This is a port of the fix for MySQL BUG#17647863.
revno: 5572
revision-id: jon.hauglid@oracle.com-20131030232243-b0pw98oy72uka2sj
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
timestamp: Thu 2013-10-31 00:22:43 +0100
message:
Bug#17647863: MYSQL DOES NOT COMPILE ON OSX 10.9 GM
Rename the test() macro to MY_TEST() to avoid a conflict with libc++.

- Update test results after the last few csets.
- Generate a correct value for the `possible_keys` column for
  single-table UPDATE/DELETE.

- Better EXPLAIN-saving methods for quick selects.

- Address review feedback: more renames.

- Address review feedback: rename nearly every name used by the new
  EXPLAIN code.

- Query plan footprint (in new terms, the "EXPLAIN structure") should
  always keep a copy of key_name. This is because the table might be a
  temporary table which may already be freed by the time we use the
  query plan footprint.

- Make QPF structures store data members, not strings.
  This is not fully possible, because table names (and hence key names,
  etc.) can be deleted, and we have to store strings.
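The resulting rule of thumb looks like this in miniature (a hypothetical struct, not the real QPF classes): plain data members wherever the value is stable, an owned string copy wherever the source may be freed first.

    #include <string>

    struct QPF_range_sketch
    {
      unsigned index_no;     // plain data member: always safe to keep
      double   rows, cost;   // plain data members
      std::string key_name;  // owned copy: the (possibly temporary)
                             // TABLE may be freed before EXPLAIN prints
    };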
- Merge with 10.0-base

- Let the Query Plan Footprint store the join buffer type
  in binary form, not as a string.
- Same for the LooseScan type.

includes:
* remove some remnants of "Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING"
* introduce LOCK_share; now LOCK_ha_data is strictly for engines
* rea_create_table() always creates the .par file (even in "frm-only" mode)
* fix a 5.6 bug, a temp file leak on dummy ALTER TABLE