| Commit message | Author | Age | Files | Lines |
|
The bitmap implementation defines two template Bitmap classes: one is
optimized for 64-bit-wide bitmaps (the default), while the other is used
for all other widths.
To optimize the computations, the Bitmap<64> class defines its own
member functions for bitmap operations; the other class relies on the
mysys bitmap implementation (mysys/my_bitmap.c).
Issue 1:
In the non-64-bit Bitmap class, intersect() wrongly reset the received
bitmap while initialising a new local bitmap structure (bitmap_init()
clears the bitmap buffer), so the received bitmap was getting cleared.
Fixed by initializing the local bitmap structure with a temporary
buffer and later copying the received bitmap into the initialised bitmap
structure.
Issue 2:
The non-64-bit Bitmap class was missing the Iterator, which caused a
compilation failure.
Also added a cmake variable to hold the MAX_INDEXES value when supplied
on the command line (e.g. cmake .. -DMAX_INDEXES=128U). Checks have
been put in place to trigger a build failure if the MAX_INDEXES value is
greater than 128.
Test modifications:
* Introduced include/have_max_indexes_[64|128].inc to facilitate
skipping tests whose output differs for different MAX_INDEXES values;
see the sketch below.
* Introduced include/max_indexes.inc, which gets modified by cmake
to reflect the MAX_INDEXES value used to build the server. This file
simply sets an mtr variable '$max_indexes' to the MAX_INDEXES
value, which is then consumed by the include files introduced above.
* Some tests (or portions of them) that depend on the MAX_INDEXES value
have been moved to separate test files.
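For illustration, a guard include might look roughly like this (a
sketch only; the exact contents of the generated files are not shown
in this log):

    # include/have_max_indexes_128.inc (illustrative)
    --source include/max_indexes.inc
    if ($max_indexes != 128)
    {
      --skip Test requires MAX_INDEXES = 128
    }

A test whose output depends on the index limit would then start with
--source include/have_max_indexes_128.inc.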
|
The earlier fix for MDEV-8918 also fixed the problem reported in MDEV-7195.
Adding a test case from MDEV-7195 only.
|
Problem:
A procedure that uses a stack of views is first executed while the
deepest view is missing. The call fails, but one view is cached (as is
the whole procedure). Then, simultaneously, the missing second view is
created and the procedure is executed again. At the start of procedure
execution the view has not yet been created, so the procedure is used
as cached (the cache was not invalidated), but by the time we try to
use the deepest view, it has already been created.
The problem is that thd->select_number is not advanced (the first view
was never parsed), so the second view gets the same number.
The fix keeps thd->select_number correct even when cached views are used.
In the proposed solution (kept simple) the counter can become bigger than
strictly necessary, but that is not a problem: the numbers are still
unique, and the situation is very rare.
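A rough repro of the scenario (a sketch with hypothetical object names,
not the exact test case):

    CREATE VIEW v2 AS SELECT 1 AS a;     -- the deepest view
    CREATE VIEW v1 AS SELECT a FROM v2;
    CREATE PROCEDURE p() SELECT a FROM v1;
    DROP VIEW v2;
    CALL p();                            -- fails, but p() and v1 are now cached
    CREATE VIEW v2 AS SELECT 1 AS a;     -- recreated concurrently
    CALL p();                            -- cached v1 is reused; v2 must not get a duplicate select number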
|
* update *.result files
* fix XtraDB for Windows (again)
|
PROBLEMS
Description:- The server variable "--lower_case_table_names",
when set to "0" on Windows, which does not support case-sensitive
file operations, leads to problems. A warning message is printed
in the error log when the server is started with
"--lower_case_table_names=0". Also, according to the documentation,
setting "lower_case_table_names" to "0" on a case-insensitive
filesystem might lead to index corruption.
Analysis:- The problem reported in the bug is:
Creating an InnoDB table 'a' and executing the query "INSERT
INTO a SELECT a FROM A;" on a server started with
"--lower_case_table_names=0" and running on a case-insensitive
filesystem sends InnoDB into a flat spin. The optimizer thinks
that "a" and "A" are two different tables because the variable
"lower_case_table_names" is set to "0". As a result, the optimizer
comes up with a plan that does not need a temporary table; when the
same table is used in both the SELECT and the INSERT, a temporary
table is needed. This incorrect optimizer plan leads to infinite
insertions.
Fix:- If the server is started with "--lower_case_table_names"
set to 0 on a case-insensitive filesystem, the error "The server
option 'lower_case_table_names' is configured to use case sensitive
table names but the data directory is on a case-insensitive
file system which is an unsupported combination. Please
consider either using a case sensitive file system for your
data directory or switching to a case-insensitive table name
mode." is printed in the server error log and the server
exits.
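The runaway statement from the report, for reference (hypothetical
table; requires a case-insensitive filesystem and
--lower_case_table_names=0):

    CREATE TABLE a (a INT) ENGINE=InnoDB;
    INSERT INTO a VALUES (1);
    INSERT INTO a SELECT a FROM A;   -- 'A' resolves to the same file as 'a';
                                     -- without a temporary table the insert feeds itself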
|
DESCRIPTION
===========
The MySQL LOAD XML command is unable to handle empty XML
tags, i.e. <row><tag/></row>. The behaviour is also wrong
(and different from the above) when there is a space in the
empty tag, i.e. <row><tag /></row>.
ANALYSIS
========
In read_xml(), in the case where we encounter a close tag ('/'),
we decrease 'level' blindly, which is wrong. When it is an
empty tag without a space (the succeeding char is '>'), we need
to skip the decrement. In other words, whenever we hit a close
tag ('/'), decrease 'level' only when (i) it is not an empty tag
without a space, i.e. <tag/>, or (ii) it is of the format
<row col="val" .../>.
FIX
===
The switch case for '/' is modified. We removed the blind
decrement of 'level' and do it only when it is not an empty tag
without a space. We also set 'in_tag' to false to let the program
know that we are done reading the current tag (required in the
case of the format <row col="val" .../>).
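A minimal sketch of the affected statement (hypothetical file and
table names):

    CREATE TABLE t1 (tag VARCHAR(10));
    LOAD XML INFILE 'rows.xml' INTO TABLE t1 ROWS IDENTIFIED BY '<row>';
    -- where rows.xml mixes the two empty-tag forms:
    --   <row><tag/></row>
    --   <row><tag /></row>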
|
SERVER FAILURES.
Analysis :
==========
During JOIN::prepare of a sub-query that creates the derived
tables, we call setup_procedure. Here we call fix_fields for the
parameters of the PROCEDURE clause. Calling setup_procedure at
this point can cause problems: if a sub-query is one of the
parameters being fixed, it may lead to complicated dependencies
on the derived tables being prepared.
SOLUTION :
==========
In 5.6, with WL#6242, PROCEDURE clause parameters can only be
NUM, so sub-queries are not allowed as parameters. Therefore in
5.5 we can block sub-queries in PROCEDURE clause parameters.
This eliminates the conflicting dependencies above.
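The now-blocked construct looks roughly like this (an illustrative
sketch with hypothetical tables; PROCEDURE ANALYSE normally takes
numeric literals):

    SELECT * FROM t1 PROCEDURE ANALYSE((SELECT MAX(a) FROM t2), 256);  -- rejected after the fix
    SELECT * FROM t1 PROCEDURE ANALYSE(10, 256);                       -- still accepted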
|
IS REJECTED.
Analysis
========
View creation with named columns over UNION is rejected.
Consider the following view definition:
CREATE VIEW v1 (fld1, fld2) AS SELECT 1 AS a, 2 AS b
UNION ALL SELECT 1 AS a, 1 AS a;
A 'duplicate column' error was reported because of the duplicate
alias name in the secondary SELECT. The VIEW column names
are either explicitly specified or determined from the
first SELECT (whose names can be auto-generated if not specified).
Since the duplicate column name check was performed even
for the secondary SELECTs, the statement was rejected.
Fix
====
Check for duplicate column names only among the explicitly named
columns, if specified, or otherwise only in the first SELECT.
|
INCORRECT RESULTS
Issue:
-----
Updating varchar and text fields in the same update
statement can produce incorrect results. When a varchar
field is assigned to the text field and the varchar field
is then set to a different value, the text field's result
contains the varchar field's new value.
SOLUTION:
---------
Currently the blob type does not allocate space for the string
to be stored; instead it holds a pointer to the varchar string.
So when the varchar field is changed as part of the update
statement, the value contained in the blob changes too.
The fix is to actually store the value by allocating space for
the blob's string. We can avoid allocating this space when the
varchar field is not being written to.
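A rough repro of the wrong result (hypothetical schema; update
targets are assigned left to right):

    CREATE TABLE t1 (v VARCHAR(20), t TEXT);
    INSERT INTO t1 VALUES ('old', NULL);
    UPDATE t1 SET t = v, v = 'new';
    SELECT t FROM t1;   -- returned 'new' before the fix; 'old' is the correct result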
|
MULTIPLE THREADS
Description:- The utility "mysqlimport" does not use multiple
threads for execution with the option "--use-threads". While
importing multiple files and multiple tables, "mysqlimport" uses
a single thread even if the number of threads is specified with
the "--use-threads" option.
Analysis:- This utility uses #ifdef HAVE_LIBPTHREAD to check
for the libpthread library and, if defined, uses it for
multithreading. Since HAVE_LIBPTHREAD is not defined anywhere
in the source, the "--use-threads" option is silently ignored.
Fix:- "-DTHREADS" is added to COMPILE_FLAGS, which enables
pthreads. The HAVE_LIBPTHREAD macro is removed.
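For reference, the invocation that was silently running
single-threaded (illustrative database and file names; mysqlimport
derives each table name from the file's basename):

    mysqlimport --use-threads=4 test /tmp/t1.txt /tmp/t2.txt /tmp/t3.txt /tmp/t4.txt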
|
Problem :
---------
Issue-1: The root cause of the issues is that (col1 > 1) is not a
valid partition function, and we should have thrown an error at a much
earlier stage [partition_info::check_partition_info]. We were not
checking the sub-partition expression when the partition expression
was NULL.
Issue-2: A potential future issue: if any partition function needs to
change the item tree during open/fix_fields, we should release the
changed items, if any, before doing closefrm when we open the
partitioned table during creation in create_table_impl.
Solution :
----------
1. check_partition_info() - Check the sub-partition expression even if
there is no partition expression.
[partition by ... columns(...) subpartition by hash(<expr>)]
2. create_table_impl() - Assert that the change list is empty before
doing closefrm for a partitioned table. Currently no supported
partition function seems to change the item tree during open.
Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
RB: 9345
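The shape of the construct now caught early, following the bracketed
syntax above (a sketch with hypothetical names):

    CREATE TABLE t1 (col1 INT, col2 INT)
    PARTITION BY LIST COLUMNS (col2)
    SUBPARTITION BY HASH ((col1 > 1))    -- invalid partition function, now rejected in check_partition_info
    (PARTITION p0 VALUES IN (1) (SUBPARTITION sp0, SUBPARTITION sp1));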
|
many rows
make_cond_for_info_schema() now preserves outer fields.
|
stack overrun
Substitute the original left expression into transformed subselects,
and then register the change if it was substituted.
|
MDEV-7565: Server crash with Signal 6 (part 2)
Follow-up test suite and its fix.
|
The problem was in rewriting a left expression that had two references
to it. Solved by making the subselect reference the main one.
Item_in_optimizer can have a non-Item_in_subselect reference in its
left part, so type casting without a check is dangerous.
Item::cols() should be checked after Item::fix_fields().
|
Preparation of the subselect moved earlier (before the checks that
need it prepared).
|
Made no_rows_in_result()/restore_to_before_no_rows_in_result() avoid
walking unnecessarily deep with the walk() method.
|
`!null_value' failed in Item::send(Protocol*, String*))
Post-review addons by Bar.
Fix: keep the contract: a NULL value means a NULL pointer in val_str
and val_decimal.
|
EXPLAIN INSERT ... SELECT tried to use SELECT's execution path. This
caused a collection of problems:
- SELECT_DESCRIBE flag was not put into select_lex->options, which
means it was not in JOIN::select_options either (except for the first
member of the UNION).
- This caused UNION members to be executed. They would attempt to write
join output rows to the output.
- (Actual cause of the crash) second join sibling would call
result->send_eof() when finished execution. Then,
Explain_query::print_explain would attempt to write to query output
again, and cause an assertion due to non-empty query output.
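The crashing statement shape, roughly (hypothetical tables):

    EXPLAIN INSERT INTO t1 SELECT * FROM t2 UNION SELECT * FROM t3;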
|
[EXPLAIN] INSERT .. SELECT creates a select_insert object.
select_insert calls handler->start_bulk_insert() during
initialization.
For MyISAM/Aria this requires that a matching
handler->end_bulk_insert() call is made.
Regular INSERT .. SELECT accomplishes this by calling either
select_result->send_eof() or select_result->abort_result_set().
EXPLAIN INSERT ... SELECT called neither, which resulted in
improper de-initialization of the handler object. Make it call
abort_result_set(), which invokes handler->end_bulk_insert().
|
sql/opt_range_mrr.cc:100(step_down_to)
The crash was caused by the range optimizer using
RANGE_OPT_PARAM::min_key (and max_key) to store keys. The buffer size
was a good upper bound for range analysis and partition pruning, but
not for EITS selectivity calculations.
Fixed by making these buffers variable-size. The sizes are calculated
from the [pseudo-]indexes used for range analysis.
|
partition ... pMAX values less than MAXVALUE
Made a deeper copy of the lists, so now one execution has no influence
on the other.
|
- Added a --start option to mysqld which doesn't print notes to the log on startup.
This makes it easier to find errors in configure options.
- Don't write [Note] entries to the log after we have aborted the server.
This makes it easier to find what went wrong.
- Don't, by default, write out changed limits for max_open_files, as this didn't really change from anything the user gave us.
- Don't write warnings about not using --explicit_defaults_for_timestamp (we have no plans to deprecate the old behaviour).
|
- Moving Item_xxx_field declarations after Item_sum_xxx declarations,
so Item_xxx_field constructors can be defined directly in item_sum.h
rather than item_sum.cc. This removes some duplicate code, e.g.
initialization of the following members at constructor time:
name, decimals, max_length, unsigned_flag, field, maybe_null.
- Adding Item_sum_field as a common parent for Item_avg_field and
Item_variance_field
- Deriving Item_sum_field directly from Item rather than Item_result_field,
as Item_sum_field descendants do not need anything from Item_result_field.
- Removing hybrid infrastructure from Item_avg_field,
adding Item_avg_field_decimal and Item_avg_field_double instead,
as desired result type is already known at constructor time
(not only at fix_fields time). This simplifies the code.
- Changing Item_avg_field_decimal::val_int() to call val_int_from_decimal()
instead of doing { return (longlong) rint(val_real()); }
This is the fix itself.
|
backport mysql parser fixes
0034963fbf199696792491bcb79d5f0731c98804
5948561812bc691bd0c13cf518a3fe77d9daf920
|
The problem was that GROUP BY code created Item_field objects
that referred to fields in the temp. tables used for GROUP BY.
Item_ref objects and set_items_ref_array() calls caused pointers to
temp. table fields to occur in many places.
This patch introduces Item_temptable_field, which can handle
item->print() calls made after the underlying table is freed.
|
into binlog if nothing to do.
Just log the ALTER statement even if there's nothing to do.
|
argument types.
The patch for MDEV-8871 also fixed the problem reported in MDEV-657.
Adding the test case from the bug report.
|
MDEV-8873 Wrong field type or metadata for LEAST(int_column,string_column)
|
DATETIME) FROM t1)
MDEV-8875 Wrong metadata for MAX(CAST(time_column AS DATETIME))
|
unsigned_int_column)
Item_func_hybrid_field_type did not return the correct field_type(),
cmp_type() and result_type() in some cases, because cached_result_type
and cached_field_type were set in independent pieces of the code and
did not properly match each other.
Fix:
- Removing Item_func_hybrid_result_type
- Deriving Item_func_hybrid_field_type directly from Item_func
- Introducing a new class Type_handler which guarantees that
field_type(), cmp_type() and result_type() are always properly synchronized
and using the new class in Item_func_hybrid_field_type.
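An example of the kind of expression whose metadata must stay
consistent (hypothetical columns):

    CREATE TABLE t1 (a INT UNSIGNED, b INT UNSIGNED);
    CREATE TABLE t2 AS SELECT LEAST(a, b) AS c FROM t1;
    SHOW COLUMNS FROM t2;   -- c should be reported as an unsigned integer type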
|
LEAST(unsigned_column,unsigned_column)
|
does not produce warnings
|
functions
|
Merge downstream Debian packaging (MDEV-6284)
|
The earlier fix for MDEV-8466 also fixed MDEV-8635. Adding a test only.
|