author | Igor Babaev <igor@askmonty.org> | 2014-04-22 14:39:57 -0700 |
---|---|---|
committer | Igor Babaev <igor@askmonty.org> | 2014-04-22 14:39:57 -0700 |
commit | 3e0f63c18fdfae9280fb406c4677c613d1838a64 (patch) | |
tree | 3210f3b7436ff1d36a059bf3ec967515d9faa959 /mysql-test/t/group_by.test | |
parent | bd44c086b33a7b324ac41f8eb0826c31da1c4103 (diff) | |
download | mariadb-git-3e0f63c18fdfae9280fb406c4677c613d1838a64.tar.gz |
Fixed the problem of mdev-5947.
Back-ported from the mysql 5.6 code line the patch with
the following comment:
Fix for Bug#11757108 CHANGE IN EXECUTION PLAN FOR COUNT_DISTINCT_GROUP_ON_KEY
CAUSES PEFORMANCE REGRESSION
The cause of the performance regression is that the access strategy for the
GROUP BY query changed from "index scan" in mysql-5.1 to "loose index scan"
in mysql-5.5. The index used for GROUP BY is unique, so each "loose scan"
group contains only one record. Since loose scan needs to re-position on each
"loose scan" group, this query does a re-position for each index entry.
Compared to just reading the next index entry, as a normal index scan does,
using loose scan therefore makes this query more expensive.
Loose scan is selected for this query because, in the current code, when the
size of the "loose scan" group is one, the formula for calculating the cost
estimate becomes almost identical to the cost of a normal index scan.
Differences between integer and floating-point arithmetic can then cause
either access strategy to be selected.
The main issue with the formula for estimating the cost of using loose scan is
that it does not take into account that it is more costly to do a re-position
for each "loose scan" group compared to just reading the next index entry.
Both index scan and loose scan estimate the cpu cost as:
"number of entries needed to read/scan" * ROW_EVALUATE_COST
The results from testing with the query in this bug indicate that the real
cost of doing a re-position is four to eight times higher than just reading
the next index entry. Thus, the cpu cost estimate for loose scan should be increased.
To account for the extra work to re-position in the index we increase the
cost for loose index scan to include the cost of navigating the index.
This is modelled as a function of the height of the b-tree:
navigation cost= ceil(log(records in table)/log(indexes per block))
* ROWID_COMPARE_COST;
This will avoid loose index scan being used for indexes where the "loose scan"
group contains very few index entries.
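The navigation-cost formula above can be sketched as follows. This is an
illustrative sketch only: the function name, parameter names, and the value
of ROWID_COMPARE_COST are placeholders I have assumed for the example, not
the server's actual symbols; the real logic lives in the optimizer's cost
model inside the server source.

```cpp
#include <cassert>
#include <cmath>

// Placeholder value; the server defines its own ROWID_COMPARE_COST
// constant in the optimizer cost model, and the real value may differ.
static const double ROWID_COMPARE_COST = 0.05;

// Sketch of the navigation-cost term described above: re-positioning on
// each "loose scan" group means descending the B-tree from the root, so
// the extra cost is modelled on the tree height,
//   ceil(log(records in table) / log(keys per block)).
double loose_scan_navigation_cost(double table_records, double keys_per_block)
{
  double btree_height =
      std::ceil(std::log(table_records) / std::log(keys_per_block));
  return btree_height * ROWID_COMPARE_COST;
}
```

For example, with 500,000 rows and roughly 50 keys per index block the
estimated height is 4 levels, so each re-position adds
4 * ROWID_COMPARE_COST to the loose scan estimate. When every "loose scan"
group holds only one entry, this per-group penalty applies to every index
entry, which is exactly what steers the optimizer back to a plain index scan.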
Diffstat (limited to 'mysql-test/t/group_by.test')
-rw-r--r-- | mysql-test/t/group_by.test | 5 |
1 files changed, 2 insertions, 3 deletions
diff --git a/mysql-test/t/group_by.test b/mysql-test/t/group_by.test
index 8ee17d2b2d3..e92780f0523 100644
--- a/mysql-test/t/group_by.test
+++ b/mysql-test/t/group_by.test
@@ -1334,7 +1334,7 @@
 INSERT INTO t1 VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
 (11),(12),(13),(14),(15),(16),(17),(18),(19),(20);
 let $query0=SELECT col1 AS field1, col1 AS field2
-  FROM t1 GROUP BY field1, field2+0;
+  FROM t1 GROUP BY field1, field2;
 # Needs to be range to exercise bug
 --eval EXPLAIN $query0;
@@ -1496,8 +1496,7 @@
 CREATE TABLE t1 (
   b varchar(1),
   KEY (b,a)
 );
-
-INSERT INTO t1 VALUES (1,NULL),(0,'a');
+INSERT INTO t1 VALUES (1,NULL),(0,'a'),(1,NULL),(0,'a');
 let $query= SELECT SQL_BUFFER_RESULT MIN(a), b FROM t1
 WHERE t1.b = 'a' GROUP BY b;