| Commit message | Author | Age | Files | Lines |
| |
table causes unpredictable behavior
Cherry-pick: f4a0af070ce49abae60040f6f32e1074309c27fb
Author: Dmitry Lenev <dmitry.lenev@oracle.com>
Date: Mon Jul 25 16:06:52 2016 +0300
Fix for bug #16672723 "CAN'T FIND TEMPORARY TABLE".
An attempt to execute a prepared CREATE TABLE SELECT statement which used
a temporary table in a subquery in the FROM clause together with a stored
function failed with an unwarranted ER_NO_SUCH_TABLE error. The same
happened when such a statement was used in a stored procedure and the
procedure was re-executed.

The problem occurred because execution of such a prepared statement (or
its re-execution as part of a stored procedure) incorrectly set the
Query_table_list::query_tables_own_last marker, which indicates the last
table directly used by the statement. As a result, the temporary table
used in the subquery was treated as indirectly used (belonging to the
prelocking list) and was not pre-opened by the open_temporary_tables()
call before statement execution. This caused ER_NO_SUCH_TABLE errors,
since our code assumes that temporary tables have been pre-opened before
statement execution.

This problem became visible only in version 5.6, after the patches for
bug 11746602/27480 "EXTEND CREATE TEMPORARY TABLES PRIVILEGE TO ALLOW
TEMP TABLE OPERATIONS" introduced pre-opening of temporary tables for
statements.

The incorrect setting of Query_table_list::query_tables_own_last happened
in the LEX::first_lists_tables_same() method, which is called by the
CREATE TABLE SELECT implementation as part of LEX::unlink_first_table();
the latter temporarily excludes the table list element for the table
being created from the query table list before handling the SELECT part.

LEX::first_lists_tables_same() tries to ensure that the global table list
of the statement starts with the first table list element from the first
statement select. To do this it moves that table list element to the head
of the global table list. If this table happens to be the last directly
used table of the statement, the query_tables_own_last marker points to
it. Since the marker was not updated when the table list element was
moved, we ended up with all tables except the first one separated by it
as if they were not directly used by the statement (i.e. belonged to the
prelocked tables list).

This fix changes LEX::first_lists_tables_same() to update the
query_tables_own_last marker when it points to the table being moved; in
that case it is set to the table which precedes the moved table.
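For illustration, a minimal SQL sketch of the affected statement pattern;
the table and function names below are made up, not taken from the
original bug report:

    CREATE TEMPORARY TABLE tmp1 (a INT);
    CREATE FUNCTION f1() RETURNS INT RETURN 1;

    -- Before the fix, executing this prepared statement (or re-executing
    -- it from a stored procedure) could fail with ER_NO_SUCH_TABLE:
    -- tmp1 ended up beyond the query_tables_own_last marker and was not
    -- pre-opened by open_temporary_tables().
    PREPARE stmt FROM
      'CREATE TABLE t1 AS SELECT f1() AS c1, d.a FROM (SELECT a FROM tmp1) AS d';
    EXECUTE stmt;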
| |
MTR raises the default wait_for_pos_timeout from 300 to 1500 when tests
are run with valgrind. The same needs to be done for other
replication-related waits.
The change should fix one of the failures mentioned in MDEV-10653
(rpl.rpl_parallel fails in buildbot with timeout), the one on the
valgrind builder, but not all of them.
| |
- fix the test to avoid false negatives before the MDEV-5114 patch;
- fix the race condition which made the test fail on slow builders.
| |
Added the ability to print the lock type.
Added restoring of the correct lock type for CREATE VIEW.
| |
The patch fixes two test failures:
- on slow builders, a connection attempt which should fail because
thread_pool_max_threads is exceeded sometimes actually succeeds;
- on even slower builders, MTR sometimes cannot establish the
initial connection, and check-testcase fails before the test starts.
The problem with check-testcase was caused by connect-timeout=2,
which was set for all clients in the test config file. On slow
builders it might not be enough.
There is no way to override it for the pre-test check, so it needed
to be substantially increased or removed.
The other problem was caused by a race condition between sleeps
that the test performs in existing connections and the connect
timeout for the connection attempt which was expected to fail.
If sleeps finished before the connect-timeout was exceeded, it
would allow the connection to succeed.
To solve each problem without making the other one worse,
connect-timeout should be configured dynamically during the test.
Due to the nature of the test (all connections must be busy
at the moment when we need to change the timeout, and cannot execute
SET GLOBAL ...), it needs to be done independently from the server.
The solution:
- recognize 'connect_timeout' as a connection option in mysqltest's
"connect" command;
- remove connect-timeout from the test configuration file;
- use the new connect_timeout option for those connections which
are expected to fail;
- re-arrange the test flow to allow running a huge SLEEP
without affecting the test execution time (because it would be
interrupted after the main test flow is finished).
The test is still subject to false negatives, e.g. if the connection
fails due to a timeout rather than due to the exceeded number of
allowed threads, or if the connection on the extra port succeeds due
to a race condition and not because of the special logic for the extra
port. But such false negatives have always been possible there on slow
builders; they should not be critical, because faster builders should
catch such failures if they appear.
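A hedged mysqltest sketch of how a test might use the new per-connection
option; only the connect_timeout option itself comes from this patch, the
exact argument layout of the connect command (name, host, user, password,
db, port, socket, options) is an assumption:

    # give only the connection that is expected to fail a short timeout,
    # instead of setting connect-timeout globally in the test's .cnf file
    connect (con_fail,localhost,root,,test,,,connect_timeout=2);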
| |
Problem: In replication, if the slave has extra persistent columns, these
columns are not computed while applying write-sets from the master.
Solution: While applying row events from the master, we now generate
values for the extra persistent columns.
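A minimal sketch of the setup described above, with illustrative table
and column names; PERSISTENT is the MariaDB keyword for stored computed
columns:

    -- on the master:
    CREATE TABLE t1 (a INT, b INT);

    -- on the slave, the same table carries an extra persistent computed column:
    CREATE TABLE t1 (a INT, b INT, c INT AS (a + b) PERSISTENT);

    -- before the fix, c was not computed when the slave applied the
    -- master's row events for INSERT/UPDATE on t1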
| |
The test is very slow with valgrind, and pointless there because it was
originally written for a race condition which is hardly achievable
under valgrind.
| |
Role names with trailing whitespace are truncated in length as of
956e92d90873532fee95581c702f7b76643969ea to fix MDEV-8609. The problem
is that the code that creates role mappings expects the string to be
null-terminated.
Add the null terminator to account for that as well. In the future
the rest of the code can be cleaned up to never assume C-style strings
but only LEX_STRINGs.
| | |
handler::increment_statistics or in make_select or assertion failure pfs_thread == ((PFS_thread*) pthread_getspecific((THR_PFS)))
Different fix. Don't allow Item_func_sp to be evaluated unless
all tables are prelocked.
Extend the test case to make sure Item_func_sp::val_str is called
(the table must have at least one row for that).
| | |
Rpl_filter::parse_filter_rule() made NULL-safe.
| | |
check for VIEW/DERIVED fields
| | |
handler::increment_statistics or in make_select or assertion failure pfs_thread == ((PFS_thread*) pthread_getspecific((THR_PFS)))
Move expression execution out of Item constructor.
| | |
Exclude subqueries not touched in the prepare phase from the select/unit
tree, because they have become unreachable for execution.
| | |
Item::send(Protocol*, String*)
The problem was that null_value was not set to "false" on a well-formed row.
If an ill-formed row was followed by a well-formed row, null_value remained
"true" in the call of Item::send() for the well-formed row.
| | |
Full-Text searches
Don't assume that a word of n bytes can match a word of
at most n * charset->mbmaxlen bytes, always go for the worst.
| | |
failed in Lex_input_stream::body_utf8_append(const char*, const char*)
| | |
The flag TABLE_LIST::fill_me must be reset to false at the prepare
phase for any materialized derived table used in the executed query.
Otherwise, if the optimizer decides to generate a key for such a table,
the key is generated only for the first execution of the query.
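A sketch of the kind of query shape this concerns (illustrative names);
the GROUP BY derived table dt is materialized, and the optimizer may
choose to build a key over it for the join:

    PREPARE stmt FROM
      'SELECT *
         FROM t1
         JOIN (SELECT a, COUNT(*) AS cnt FROM t2 GROUP BY a) AS dt
           ON dt.a = t1.a';
    EXECUTE stmt;   -- a key over the materialized dt can be generated here
    EXECUTE stmt;   -- before the fix it was not regenerated on re-execution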
| | |
Filesort_buffer::alloc_sort_buffer(uint, uint)
Updating the result for group_by_innodb.test
| | |
Filesort_buffer::alloc_sort_buffer(uint, uint)
When JOIN::destroy() is called for a JOIN object that has
- join->tmp_join != NULL
- also has join->table[0]->sort
then the latter was not cleaned up.
This could cause a memory leak and/or asserts in subsequent queries.
Fixed by adding a cleanup call.
| | |
date_to_datetime(MYSQL_TIME*)
| | |
be consistent and don't include the table name in the error message;
no other CREATE TABLE error does that.
(The crash happened because thd->lex->query_tables was NULL.)
| | |
Due to the collation used on the roles_mapping_hash, key comparison
would work in a case-insensitive manner. This is incorrect from the
roles mapping perspective. Make use of a case-sensitive collation for that hash,
the same one used for the acl_roles hash.
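A small illustration of why case sensitivity matters here; role and user
names are made up:

    CREATE ROLE devs;
    CREATE ROLE DEVS;            -- a distinct role; role names are case-sensitive
    CREATE USER alice@'%';
    GRANT devs TO alice@'%';
    -- with a case-insensitive collation on roles_mapping_hash, the mapping
    -- created by the GRANT above could also be found when looking up DEVS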
| | |
The function Item_func_isnull::update_used_tables() must
handle the case when the predicate is over a non-nullable
column in a special way.
This is actually a bug in MariaDB 5.3/5.5, but it's probably
hard to demonstrate that it can cause problems there.
| | |
The test intentionally crashes the server, so corrupted pages are possible.
| | |
followed by a multi-byte character
Partially backporting MDEV-9874 from 10.2 to 10.0
READ_INFO::read_field() raised the ER_INVALID_CHARACTER_STRING error
when reading an escape character followed by a multi-byte character.
Raising well-formedness errors in READ_INFO::read_field() was wrong,
because the main goal of READ_INFO::read_field() is to *unescape* the
data, which was presumably escaped using mysql_real_escape_string(),
using the same character set as the one specified in
"LOAD DATA INFILE ... CHARACTER SET ..." (or assumed by default).
During LOAD DATA, multi-byte characters are not always scanned as a
single entity! In the case of escaped data, parts of a multi-byte
character can be scanned on different loop iterations. So the old code
erroneously tested well-formedness in the middle of a multi-byte
character.
Moreover, the data after unescaping can go into a BLOB field, not a
text field. Well-formedness tests are meaningless in this case.
After this patch, well-formedness is only checked later, at
Field::store(str,length,cs) time. The loop that scans bytes only
makes sure to revert the changes made by mysql_real_escape_string().
Note, in some cases users can supply data which did not really go
through mysql_real_escape_string() and was escaped by some other means,
or was not escaped at all. The file reported in this MDEV contains
the string "\ä", which is an example of such improperly escaped data:
- either there should be two backslashes: "\\ä"
- or there should be no backslashes at all: "ä"
mysql_real_escape_string() could not generate "\ä".
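A minimal sketch of the statement involved; the file name, table and
character set are illustrative:

    CREATE TABLE t1 (a TEXT CHARACTER SET utf8);
    LOAD DATA INFILE 'data.txt' INTO TABLE t1
      CHARACTER SET utf8
      FIELDS TERMINATED BY '\t' ESCAPED BY '\\'
      LINES TERMINATED BY '\n';
    -- READ_INFO::read_field() now only reverses the escaping; any
    -- well-formedness check happens later, in Field::store(), and only
    -- if the target column is a character (not BLOB) column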
| | |
Fixing the "connect" command to use "localhost" instead of "127.0.0.1"
to make it work with both "mtr" and "mtr --embedded".
| | | |
This occurred when the SQL thread (but not the IO thread) stops while
GTID and parallel replication are used with multiple domain ids in the
GTID position, and is then restarted.
In this case, the SQL thread needs to start some way back in the relay
log, applying or skipping events within each replication domain as
appropriate.
The SQL thread starts at the beginning of an old relay log file, and
this position may be in the middle of an event group. The bug was that
such a partial event group could be re-applied, causing replication
corruption.
This patch fixes the issue by making sure to skip any initial events
that were part of an earlier (already applied) event group.
| | |
MDEV-10780 Server crashes in create_tmp_table
MDEV-11265 Access denied when CREATE VIEW v1 AS SELECT DEFAULT(column) FROM t1
Item_default_value and Item_insert_value erroneously derive from Item_field
but do not override some methods that apply only to true fields,
so the server code mixes Item_{default|insert}_value instances with real
table fields (i.e. true Item_field) in some cases.
Overriding a few methods to avoid this.
TODO: we should eventually derive Item_default_value (and Item_insert_value)
directly from Item, as they don't really need the entire Item_field,
Item_ident and Item_result_field functionality.
Only the member "Field *field" related functionality is actually needed,
like val_xxx(), is_null(), get_geometry_type(), charset_for_protocol(), etc.
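A sketch of the MDEV-11265 statement shape; the column definitions are
illustrative:

    CREATE TABLE t1 (a INT DEFAULT 10);
    CREATE VIEW v1 AS SELECT DEFAULT(a) AS da FROM t1;
    INSERT INTO t1 VALUES (1);
    SELECT * FROM v1;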
| | |
doesn't exist.
Update test results after 26b87c3
| | |
In the function create_key_parts_for_pseudo_indexes()
the key part structures of pseudo-indexes created for
BLOB fields were set incorrectly.
Also the key parts for long fields must be 'truncated'
up to the maximum length acceptable for key parts.
| | |
1. When a min/max value is provided, the null flag for it must be set to 0
in the bitmap Column_statistics::column_stat_nulls.
2. When the calculation of the selectivity of a range condition
over a column requires the min and max values for the column, we
have to check that these values have actually been provided.
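For context, a sketch of the engine-independent statistics setup that
this selectivity code runs under; the values are illustrative, while
use_stat_tables and optimizer_use_condition_selectivity are existing
MariaDB settings:

    SET SESSION use_stat_tables = 'preferably';
    SET SESSION optimizer_use_condition_selectivity = 4;
    CREATE TABLE t1 (a INT, b BLOB);
    ANALYZE TABLE t1 PERSISTENT FOR ALL;   -- collects min/max values etc.
    -- range conditions like the one below are then estimated from the
    -- collected column statistics, which requires the min/max values
    -- to be present
    SELECT * FROM t1 WHERE a BETWEEN 10 AND 20;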
| | |
'mysql.proc' doesn't exist.
mysql_rm_db() doesn't seem to expect the 'mysql' database
to be deleted. Checks for that have been added.
Also fixed the bug MDEV-11105 (Table named 'db'
has a weird side effect).
The db.opt file is now removed separately.
| | |
cherry-pick from 5.7:
commit 6b24763
Author: Manish Kumar <manish.4.kumar@oracle.com>
Date: Tue Mar 27 13:10:42 2012 +0530
BUG#12977988 - ON STOP SLAVE: ERROR READING PACKET FROM SERVER: LOST CONNECTION
TO MYSQL SERVER
BUG#11761457 - ERROR 2013 + "ERROR READING RELAY LOG EVENT" ON STOP SLAVE
| | |
The problem was that if the old virtual column is computed and stored,
there was no check whether the new column is really a virtual column.
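A hedged sketch of the general ALTER shape involved; the names are
illustrative and the exact failing statement may differ:

    CREATE TABLE t1 (a INT, b INT AS (a * 2) PERSISTENT);
    -- the old column b is a computed-and-stored column; the new definition
    -- is not a virtual column at all, which the code did not check for
    ALTER TABLE t1 MODIFY b INT;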
| | |
Code flow hit an incorrect branch while closing table instances before
removal. This branch expects the thread to hold an open table instance,
whereas CREATE OR REPLACE doesn't actually hold one.
Before CREATE OR REPLACE TABLE existed, it was impossible to hit this
condition in LTM_PRELOCKED mode, so the problem didn't expose itself
during DROP TABLE or DROP DATABASE.
Fixed by adjusting the condition to take into account LTM_PRELOCKED mode,
which can be set during CREATE OR REPLACE TABLE.
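A sketch of how CREATE OR REPLACE TABLE can end up running in
LTM_PRELOCKED mode, e.g. when the statement uses a stored function;
the names are illustrative:

    CREATE FUNCTION f1() RETURNS INT RETURN 1;
    CREATE TABLE t1 (a INT);
    -- the SELECT part uses a stored function, so the statement is prelocked;
    -- replacing t1 then closes its open instances in LTM_PRELOCKED mode
    CREATE OR REPLACE TABLE t1 AS SELECT f1() AS a;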
| | |
Patch provided by Honza Horak
| | |
don't let identifiers with newlines break a comment
| | |
pass them through as is