we don't test for a non-existing gis extra in the FRM.
|
|
|
|
Part 2: removed a hack workaround for a bug we do not have.
|
|
|
|
Small fixes of used_tables() usage.
|
THD is already available in Protocol
|
|
|
|
and ErrConvString::ErrConvString(String *).
|
It was used only temporarily, during udf_handler::fix_fields(),
and then copied to the owner Item_func_or_sum object.
Changed the code to use the Used_tables_and_const_cache part
of the owner Item_func_or_sum object directly.
|
After review/QA fixes.
|
Problem:
A procedure which uses a stack of views is first executed without the deepest view.
It fails, but one view is cached (as well as the whole procedure).
Then, simultaneously, the missing view is created and the procedure is executed.
At the beginning of procedure execution the view does not yet exist, so the
procedure runs as it was cached (the cache was not invalidated).
But by the time we try to use the deepest view, it has already been created.
The problem is that thd->select_number is left behind (the first view was not
parsed), so the second view gets the same number.
The fix keeps thd->select_number correct even when cached views are used.
In the proposed solution (kept simple) the counter can become bigger than
strictly necessary, but this creates no problem because the numbers are still
unique and the situation is very rare.
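A minimal sketch of the counting idea, with hypothetical names (the real
bookkeeping lives in the server's view-open path):

    // Hypothetical sketch: advance the per-statement SELECT counter when a
    // view is taken from the cache, as if it had been parsed again, so any
    // view parsed later in the same statement gets a fresh number.
    struct Statement
    {
      unsigned select_number;          // next SELECT gets this number
    };

    static void on_view_reused_from_cache(Statement *stmt,
                                          unsigned selects_in_view)
    {
      // May overshoot what strict parsing would produce; that is accepted,
      // since only uniqueness of the numbers matters.
      stmt->select_number += selects_in_view;
    }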
|
FIELD_ITERATOR_TABLE::END_OF_FIELDS
Note: This is a backport of the patch for bug#19894987
to MySQL-5.5
|
PROBLEMS
Description:- The server variable "--lower_case_table_names",
when set to "0" on a Windows platform (which does not support
case-sensitive file operations), leads to problems. A warning
message is printed in the error log when starting the
server with "--lower_case_table_names=0". Also, according to
the documentation, setting "lower_case_table_names" to "0"
on a case-insensitive filesystem might lead to index
corruption.
Analysis:- The problem reported in the bug is:
Creating an InnoDB table 'a' and executing the query "INSERT
INTO a SELECT a FROM A;" on a server started with
"--lower_case_table_names=0" and running on a
case-insensitive filesystem sends InnoDB into an endless spin.
The optimizer thinks that "a" and "A" are two different tables
because the variable "lower_case_table_names" is set to "0". As a
result, the optimizer comes up with a plan which does not need a
temporary table. If the same table is used in the select and
the insert, a temporary table is needed. This incorrect
optimizer plan leads to infinite insertions.
Fix:- If the server is started with
"--lower_case_table_names" set to 0 on a case-insensitive
filesystem, the error "The server option
'lower_case_table_names' is configured to use case sensitive
table names but the data directory is on a case-insensitive
file system which is an unsupported combination. Please
consider either using a case sensitive file system for your
data directory or switching to a case-insensitive table name
mode." is printed in the server error log and the server
exits.
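A hedged sketch of such a startup check; the probe and the names are
illustrative, not the server's actual code:

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    // Crude probe: create a lowercase file in the data directory and see
    // whether it can be opened under an uppercase name.
    static bool datadir_is_case_insensitive(const std::string &datadir)
    {
      std::string lower = datadir + "/lctn_probe.tmp";
      std::string upper = datadir + "/LCTN_PROBE.TMP";
      FILE *f = fopen(lower.c_str(), "w");
      if (!f)
        return false;                  // cannot probe; assume sensitive
      fclose(f);
      FILE *g = fopen(upper.c_str(), "r");
      bool insensitive = (g != nullptr);
      if (g)
        fclose(g);
      remove(lower.c_str());
      return insensitive;
    }

    static void check_lower_case_table_names(unsigned long mode,
                                             const std::string &datadir)
    {
      if (mode == 0 && datadir_is_case_insensitive(datadir))
      {
        fprintf(stderr, "The server option 'lower_case_table_names' is "
                        "configured to use case sensitive table names but "
                        "the data directory is on a case-insensitive file "
                        "system which is an unsupported combination.\n");
        exit(1);                       // refuse to start
      }
    }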
|
DESCRIPTION
===========
The mysql LOAD XML command cannot handle empty XML
tags, i.e. <row><tag/></row>. The behaviour is also wrong,
and different from the above, when there is a space in the
empty tag, i.e. <row><tag /></row>.
ANALYSIS
========
In read_xml(), in the case where we encounter a close tag
('/'), we decrease the 'level' blindly, which is wrong.
When it is an empty tag without a space (the succeeding
char is '>'), we need to skip the decrement. In other words,
whenever we hit a close tag ('/'), decrease the 'level'
only when (i) it is not a no-space empty tag, i.e.
<tag/>, or (ii) it is of the format <row col="val" .../>.
FIX
===
The switch case for '/' is modified. We removed the
blind decrement of 'level' and do it only when it is not a
no-space empty tag. We also set 'in_tag' to false to let
the program know that we are done reading the current
tag (required in the case of the format <row col="val" .../>).
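A self-contained toy of that rule (not the server's read_xml(); it only
classifies whether a given '/' should close a nesting level):

    #include <cassert>
    #include <cstdio>

    // Given a '/' at index i inside buf, decide whether it closes a level.
    static bool slash_closes_level(const char *buf, int i)
    {
      if (buf[i + 1] != '>')
        return true;                 // "</tag>": an ordinary closing tag
      // "/>" ends a self-closing tag: skip the decrement only for the bare
      // no-space form "<tag/>", where '/' is still part of the tag token.
      for (int j = i - 1; j >= 0 && buf[j] != '<'; j--)
        if (buf[j] == ' ')
          return true;               // "<tag />" or "<row col=\"val\" .../>"
      return false;                  // "<tag/>"
    }

    int main()
    {
      assert( slash_closes_level("</tag>", 1));            // normal close
      assert(!slash_closes_level("<tag/>", 4));            // no-space empty tag
      assert( slash_closes_level("<tag />", 5));           // empty tag, space
      assert( slash_closes_level("<row col=\"v\"/>", 12)); // with attributes
      puts("ok");
      return 0;
    }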
|
SERVER FAILURES.
Analysis :
==========
During JOIN::prepare of a sub-query which creates the
derived tables, we call setup_procedure. Here we call
fix_fields for the parameters of the procedure clause.
Calling setup_procedure at this point may cause issues: if
a sub-query is one of the parameters being fixed, it might
lead to complicated dependencies on the derived tables
being prepared.
SOLUTION :
==========
In 5.6, with WL#6242, procedure clause parameters were
restricted to NUM (numeric literals), so sub-queries are
not allowed as parameters. So in 5.5 we can block
sub-queries in procedure clause parameters.
This eliminates the conflicting dependencies above.
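A sketch of that restriction with stand-in types (not the server's Item
hierarchy or error machinery):

    #include <cstdio>

    // Illustrative stand-ins for the server's item types.
    enum ItemType { INT_ITEM, REAL_ITEM, DECIMAL_ITEM, SUBSELECT_ITEM };
    struct Item { ItemType type; };

    // PROCEDURE clause parameters may only be numeric literals, which
    // rejects sub-queries before fix_fields() ever runs on them.
    static bool check_procedure_params(const Item *params, int count)
    {
      for (int i = 0; i < count; i++)
      {
        ItemType t = params[i].type;
        if (t != INT_ITEM && t != REAL_ITEM && t != DECIMAL_ITEM)
        {
          fprintf(stderr, "PROCEDURE parameter %d is not a number\n", i);
          return true;   // error: the conflicting dependency never forms
        }
      }
      return false;
    }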
|
send_result_set_metadata
Analysis
--------
A cursor inside a trigger accessing the NEW/OLD row leads to a server exit.
The reason for the bug was that the implementation of the function
create_tmp_table() did not consider Item::TRIGGER_FIELD_ITEM
as a possible alternative for the type of class being instantiated.
This resulted in a mismatch between the number of columns
in the result list and the temp table definition. This mismatch leads
to the failure of the assertion
DBUG_ASSERT(send_result_set_metadata.elements == item_list.elements)
in the method Materialized_cursor::send_result_set_metadata
in debug mode.
Fix:
---
Added code to consider Item::TRIGGER_FIELD_ITEM as a valid
type while creating fields.
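A sketch of the shape of the fix, with stand-in types (the real dispatch
sits inside create_tmp_table()):

    // Illustrative stand-ins; not the server's actual types.
    enum ItemType { FIELD_ITEM, TRIGGER_FIELD_ITEM, CONST_ITEM };
    struct Field { const char *origin; };

    // One temp-table column must be produced per result-list item; before
    // the fix TRIGGER_FIELD_ITEM was not recognized as column-backed, so
    // the column counts diverged and the assertion fired.
    static Field make_tmp_field(ItemType t)
    {
      switch (t)
      {
      case FIELD_ITEM:
      case TRIGGER_FIELD_ITEM:       // the added alternative (NEW/OLD rows)
        return Field{"cloned from the underlying table column"};
      default:
        return Field{"derived from the item's value type"};
      }
    }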
|
VIEW IS NOT ASSIGNED!
Issue: A SELECT ... FOR UPDATE subquery in a HAVING clause
resulted in a deadlock, and its transaction was rolled back
by InnoDB. The val_XXX interfaces do not handle errors and
do not propagate them to their callers. sub_select
did not see this error when it called
evaluate_join_record and later made a call into InnoDB.
As the transaction had been rolled back, InnoDB asserted.
Fix: Now evaluate_join_record checks whether any error has
been reported and, if so, returns it to its caller.
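A minimal sketch of the control-flow change, with stand-in names: the
val_XXX() calls cannot report failure through their return value, so the
thread's error status is consulted and propagated instead:

    enum nested_loop_state { NESTED_LOOP_OK, NESTED_LOOP_ERROR };

    static nested_loop_state
    evaluate_join_record_sketch(bool thd_has_error, bool cond_matched)
    {
      if (thd_has_error)             // e.g. InnoDB rolled the trx back
        return NESTED_LOOP_ERROR;    // sub_select stops instead of
                                     // calling into the engine again
      (void) cond_matched;           // normal row processing continues
      return NESTED_LOOP_OK;
    }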
|
FIND_USED_PARTITIONS | SQL/OPT_RANGE.CC:3884
Post-push fix.
|
FIND_USED_PARTITIONS | SQL/OPT_RANGE.CC:3884
Issue:
-----
During partition pruning, we first identify the partition
in which the row can reside and then identify the subpartition.
If we find a partition but not the subpartition, we hit
a debug assert. While finding the subpartition we check
the current thread's error status in the part_val_int()
function after some operation. In this case the thread's
error status is already set to an error (multiple rows
returned), so the function reports no partition found, which
results in incorrect behavior.
SOLUTION:
---------
Currently any error encountered in part_val_int is
considered a "partition not found" type error. Instead of
an assert, a check needs to be done and a valid error
returned.
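A sketch of the fix's shape with stand-in names: an error raised while
evaluating the partition expression must surface as a real error rather
than be folded into "no partition found":

    enum prune_result { PRUNE_OK, PRUNE_NO_PARTITION, PRUNE_ERROR };

    static prune_result find_subpartition(bool expr_raised_error,
                                          bool value_maps_to_subpart)
    {
      if (expr_raised_error)         // e.g. "subquery returns more than
        return PRUNE_ERROR;          // one row": propagate, don't assert
      return value_maps_to_subpart ? PRUNE_OK : PRUNE_NO_PARTITION;
    }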
|
IS REJECTED.
Analysis
========
View creation with named columns over a UNION is rejected.
Consider the following view definition:
CREATE VIEW v1 (fld1, fld2) AS SELECT 1 AS a, 2 AS b
UNION ALL SELECT 1 AS a, 1 AS a;
A 'duplicate column' error was reported due to the duplicate
alias name in the secondary SELECT. The VIEW column names
are either explicitly specified or determined from the
first SELECT (and can be auto-generated if not specified).
Since the duplicate column name check was performed even
for the secondary SELECTs, an error was reported.
Fix
===
Check for duplicate column names only in the named
columns if specified, or else only in the first SELECT.
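A small self-contained sketch of that rule (helper names are hypothetical):

    #include <set>
    #include <string>
    #include <vector>

    static bool has_duplicates(const std::vector<std::string> &names)
    {
      std::set<std::string> seen;
      for (const std::string &n : names)
        if (!seen.insert(n).second)
          return true;               // would raise 'duplicate column'
      return false;
    }

    // Only the list that actually becomes the view's column names is
    // checked: the explicit (fld1, fld2) list if given, otherwise the
    // first SELECT's aliases. Secondary UNION branches are left alone.
    static bool view_columns_invalid(
        const std::vector<std::string> &named_cols,
        const std::vector<std::string> &first_select_aliases)
    {
      return has_duplicates(named_cols.empty() ? first_select_aliases
                                               : named_cols);
    }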
|
INCORRECT RESULTS
Issue:
-----
Updating varchar and text fields in the same update
statement can produce incorrect results. When a varchar
field is assigned to the text field and the varchar field
is then set to a different value, the text field's result
contains the varchar field's new value.
SOLUTION:
---------
Currently the blob type does not allocate space for the
string to be stored. Instead it holds a pointer to the
varchar string. So when the varchar field is changed as
part of the update statement, the value contained in the
blob also changes.
The fix is to actually store the value by allocating
space for the blob's string. We can avoid allocating this
space when the varchar field is not being written into.
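A simplified model of the aliasing bug and the fix; the types are
stand-ins for the server's Field classes:

    #include <string>

    struct VarcharField { std::string buf; };

    struct BlobField
    {
      const std::string *borrowed = nullptr; // old behaviour: alias only
      std::string owned;                     // fix: private copy
      bool has_copy = false;

      void assign_from(const VarcharField &v, bool source_also_updated)
      {
        if (source_also_updated)
        {
          owned = v.buf;                     // allocate and store the value
          has_copy = true;
        }
        else
          borrowed = &v.buf;                 // aliasing stays safe and cheap
      }

      const std::string &value() const
      {
        return has_copy ? owned : *borrowed;
      }
    };

With the old pointer-only behaviour, a later write to v.buf would be
visible through the blob; the copy taken at assignment time is not.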
|
DATABASE WHEN USING TABLE ALIASES
Issue:
-----
When using table aliases for deleting, MySQL checks
privileges against the current database and not against the
actual table or the database in which the table resides.
SOLUTION:
---------
While checking privileges for multi-deletes,
correspondent_table should be used, since it points to the
correct table and database.
|
WARNINGS
Backporting to 5.1 and 5.5
|
Problem :
---------
The specific issue reported in this bug is with a range/list column
value that is allocated and initialized by evaluating the partition
expression (item tree) during execution. After evaluation the range
list value is marked fixed [part_column_list_val]. During the next
execution, we don't re-evaluate the expression and use the old value,
since it is marked fixed.
Solution :
----------
One way to solve the issue is to mark all column values as not fixed
during clone, so that the expression is always re-evaluated once we
attempt partition_info::fix_column_value_functions() after cloning
the part_info object during execution of DDL on a partitioned table.
Reviewed-by: Jimmy Yang <Jimmy.Yang@oracle.com>
Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
RB: 9424
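A sketch of the idea with a stand-in type (the real flag lives in
part_column_list_val):

    struct column_value { bool fixed; long long value; };

    // After cloning part_info for the next execution, clear each column
    // value's "fixed" flag so fix_column_value_functions() re-evaluates
    // the partition expression instead of reusing the stale value.
    static void unfix_column_values(column_value *vals, int n)
    {
      for (int i = 0; i < n; i++)
        vals[i].fixed = false;
    }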
|
QUERY_CACHE::FIND_BIN
The valid minimum value for query_cache_min_res_unit is 512. But an
attempt to set a value greater than or equal to ULONG_MAX (the max
value) results in a query_cache_min_res_unit value of 0. This results
in a crash while searching for a memory block smaller than the valid
minimum value to store query results.
Free memory blocks in the query cache are stored in bins according
to their size. The bins are stored in descending size order.
For a memory block request, the appropriate bin is searched using a
binary search algorithm. The minimum free memory block request
expected is 512 bytes, and the appropriate bin is searched for a block
greater than or equal to 512 bytes.
Because of the bug, query_cache_min_res_unit is set to 0. Due to
this, there is a chance of requests for memory blocks smaller
than the minimum size in the free memory block bins. The search for
a bin for this invalid input size fails and returns a garbage index.
Accessing the bins array element with this index causes the issue
reported.
The valid value range for query_cache_min_res_unit is
512 to ULONG_MAX (when the value is greater than the max allowed
value, the max allowed value is used, i.e. ULONG_MAX). While setting
the result unit block size (query_cache_min_res_unit), the size is
memory-aligned using the macro ALIGN_SIZE. The ALIGN_SIZE logic is:
(input_size + sizeof(double) - 1) & ~(sizeof(double) - 1)
For an unsigned long type variable, when input_size is greater than
or equal to ULONG_MAX - (sizeof(double) - 1), the above expression
results in the value 0.
Fix:
-----
Compare the value set for query_cache_min_res_unit with the maximum
aligned value which can be stored in a ulong type variable.
If it is greater, set it to the maximum aligned value for the
ulong type variable.
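The arithmetic is easy to check in isolation; this self-contained snippet
reproduces the wrap-around and the clamp the fix describes (ALIGN_SIZE is
quoted from the message, the rest is illustrative):

    #include <cassert>
    #include <climits>

    typedef unsigned long ulong;

    #define ALIGN_SIZE(A) (((A) + sizeof(double) - 1) & ~(sizeof(double) - 1))

    static ulong set_min_res_unit(ulong requested)
    {
      // Largest ulong that still aligns without wrapping to 0.
      const ulong max_aligned = ULONG_MAX & ~((ulong) sizeof(double) - 1);
      if (requested > max_aligned)
        requested = max_aligned;     // the fix: clamp before aligning
      return (ulong) ALIGN_SIZE(requested);
    }

    int main()
    {
      // Unpatched: values near ULONG_MAX align (wrap) to 0 ...
      assert((ulong) ALIGN_SIZE(ULONG_MAX) == 0);
      // ... clamped: the maximum aligned value survives.
      assert(set_min_res_unit(ULONG_MAX) ==
             (ULONG_MAX & ~((ulong) sizeof(double) - 1)));
      return 0;
    }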
|
Problem :
---------
Issue-1: The root cause of the issues is that (col1 > 1) is not a
valid partition function and we should have thrown an error at a much
earlier stage [partition_info::check_partition_info]. We are not
checking the sub-partition expression when the partition expression
is NULL.
Issue-2: A potential future issue if any partition function needs to
change the item tree during open/fix_fields. We should release changed
items, if any, before doing closefrm when we open the partitioned table
during creation in create_table_impl.
Solution :
----------
1. check_partition_info() - Check the sub-partition expression even if
there is no partition expression.
[partition by ... columns(...) subpartition by hash(<expr>)]
2. create_table_impl() - Assert that the change list is empty before
doing closefrm for a partitioned table. Currently no supported partition
function seems to change the item tree during open.
Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
RB: 9345
|
many rows
make_cond_for_info_schema() now preserves outer fields.
|
stack overrun
Substitute the original left expression into transformed subselects, and then register the change in case it was substituted.
|
MDEV-7565: Server crash with Signal 6 (part 2)
Follow-up: test suite and its fix.
|
The problem was in rewriting a left expression which had 2 references to it. Solved by making the subselect reference the main one.
Item_in_optimizer can have something other than an Item_in_subselect reference in its left part, so type casting without a check is dangerous.
Item::cols() should be checked after Item::fix_fields().
|
Preparation of the subselect moved earlier (before the checks which need it prepared).
|
Made no_rows_in_result()/restore_to_before_no_rows_in_result() not look
unnecessarily deep via the walk() method.
|
Backport of upstream patch. revno: 5696
|
`!null_value' failed in Item::send(Protocol*, String*))
Post-review addons by Bar.
Fix: keep the contract: a NULL value means a NULL pointer in val_str and val_decimal.
|
Just "Master" could be understood as the master IP or hostname and thus can
cause confusion for DB admins. "Master connection name" clearly states that
the log line contains the connection name in a (possibly) multi-master setup.
|
MDEV-8685 MariaDB fails to decode Anonymous_GTID entries
MDEV-5705 Replication testing: 5.6->10.0
- Ignore GTID events from MySQL 5.6+ (allows replication from MySQL 5.6+ with GTID enabled)
- Added ignorable events from MySQL 5.6
- mysqlbinlog now writes information about GTID and ignorable events.
- Added more information to the error message when replication stops because of wrong information in the binary log.
- Fixed a wrong test of when write_on_release() should flush the cache.
|