| Commit message | Author | Age | Files | Lines |
| |
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=838557
MIPS has a different errno for "directory not empty".
|
OpenSSL problems, part II
|
Ensure atomic appends to the error log by using CreateFile with the
FILE_APPEND_DATA flag to open the error log file (both MTR and server).
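As an illustration of the mechanism described above (a hedged sketch, not the
server's actual code; the helper name and path handling are hypothetical),
opening a file with FILE_APPEND_DATA access but without FILE_WRITE_DATA makes
every WriteFile() an atomic append on Windows:

    #include <windows.h>

    // Hypothetical helper: open a log so that concurrent writers
    // (e.g. MTR and the server) cannot interleave their records.
    HANDLE open_log_for_atomic_append(const wchar_t *path)
    {
      return CreateFileW(path,
                         FILE_APPEND_DATA,                   /* append-only access */
                         FILE_SHARE_READ | FILE_SHARE_WRITE, /* other writers allowed */
                         NULL,
                         OPEN_ALWAYS,                        /* create if missing */
                         FILE_ATTRIBUTE_NORMAL,
                         NULL);
    }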
|
fix PLUGIN_VAR_NOSYSVAR | PLUGIN_VAR_NOCMDOPT plugin thdvars to work.
use that for server_audit_loc_info
|
different fix for 07a33cdcef:
Bug #23296299 : HANDLE_FATAL_SIGNAL (SIG=11) IN MY_TOSORT_UTF32
|
80% reverted
|
FROM I_S
Issue:
------
There is a difference in the field type created when the
following DDLs are used:
1) CREATE TABLE t0 AS SELECT NULL;
2) CREATE TABLE t0 AS SELECT GREATEST(NULL,NULL);
The first statement creates a field of type Field_string and
the second one creates a field of type Field_null.
This creates a problem when the query mentioned in this bug
is used, since the null_ptr is calculated differently for
Field_null.
Solution:
---------
When there is a function returning null in the select list
as mentioned above, the field should be of type
Field_string.
This was fixed in 5.6+ as part of Bug#14021323. This is a
backport to mysql-5.5.
An incorrect comment in innodb_bug54044.test has been
corrected in all versions.
|
Permanently removed test case perfschema.aggregate.
The Performance Schema is generally lock-free, allowing for
race conditions under multi-threaded operation that occasionally
result in temporary and/or minor variances when aggregating
statistics. This test needs to be redesigned to accommodate
such variances.
|
Corrected the context chain to allow pullout of outer fields.
|
The issue was that in some extreme cases when doing GROUP BY,
buffers for temporary blobs were not properly cleared.
|
Re-execution of a prepared "ANALYZE TABLE merge_table, table" may fail to
reinitialize "table" for the subsequent execution and trigger an assertion failure.
This happens because the MERGE engine may adjust the table->next_global chain, which
gets cleared by close_thread_tables()/ha_myisammrg::detach_children() later.
Since reinitialization iterates the next_global chain, it won't see tables following
the merge table.
Fixed by appending the saved next_global chain after the merge children.
|
when opening a system table for a SELECT-like read, pretend
(for the sake of engines) it's SQLCOM_SELECT
|
ENOTEMPTY is 247 on hppa, not 39 as on Linux, according to this report:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=837369
So add another replacement for this to the rpl.rpl_drop_db and
binlog.binlog_database tests (there were already a couple of similar
replacements for other platforms).
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
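For illustration only (a small standalone sketch, not part of the test suite;
the directory path is hypothetical), the numeric value behind ENOTEMPTY is what
differs between platforms, which is why the test output needs a replacement:

    #include <cerrno>
    #include <cstdio>
    #include <unistd.h>

    int main()
    {
      // rmdir() on a non-empty directory sets errno to ENOTEMPTY;
      // the symbolic name is portable, the number (39 on Linux,
      // 247 on hppa per the report above) is not.
      if (rmdir("/tmp/some_nonempty_dir") != 0 && errno == ENOTEMPTY)
        std::printf("ENOTEMPTY = %d\n", errno);
      return 0;
    }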
|
The motivation for this is that Perl is moving towards not having
current directory ./ in @INC by default. This is causing
mysql-test-run.pl to fail in latest Debian Unstable:
https://lists.debian.org/debian-devel-announce/2016/08/msg00013.html
However, since we already `use lib`, there is no need for the current
directory in @INC, except for a gross hack. In mtr_cases.pm, there is a
`require "mtr_misc.pl"`, which hides mtr_misc.pl away in the mtr_cases
namespace. Things only work because mysql-test-run.pl loads it
with a different name, `require "lib/mtr_misc.pl"`! (Perl will
`require` only once per unique filename.)
Fix this by using `require` only in the main program, and referencing
functions from other namespaces with :: scope. For use from multiple
namespaces, proper `use` modules should be used.
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
|
Attempt to stabilize the testcase.
|
distribution builds
- mysql-test/unstable-tests list is created; it includes
= tests identified as unstable by Debian;
= tests which failed in buildbot on 10.0 over the last ~6 months and were not fixed;
= tests which have been recently modified or newly added
- '*' wildcard is now supported in skip lists
|
The issue was that when running with valgrind the wait for master_pos_wait()
was not long enough.
This patch also fixes two other failures that could affect rpl_mdev6020:
- check_if_conflicting_replication_locks() didn't properly check domains
- 'did_mark_start_commit' was set after signals were sent to other threads, which
could cause the variable to be read too early.
|
compiled for debugging, when the server goes down
This happens in the following scenario:
- Server gets a shutdown message
- Server sends error ER_CONNECTION_KILLED to the client's connection
- The client sends a query to the server, before the server has time to
close the connection to the client
- Client reads the ER_CONNECTION_KILLED error message
In the above case, the packet number for the reply is one less than
what the client expected and the client prints "Packets out of order".
Fixed the following way:
- The client now accepts error packets with a packet number
one less than expected.
- To ensure that this issue can be detected early in my_real_read(), error
messages sent to the client are not compressed, even when the compressed protocol is used.
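A minimal sketch of the relaxed check described above (hypothetical names, not
the actual my_real_read() code): an error packet is also accepted when its
sequence number is one less than what the client expected:

    #include <cstdint>

    // Accept the reply if its packet number matches, or if it is an
    // error packet sent one sequence number early, as happens when the
    // server kills the connection while the client is sending a query.
    bool reply_seq_acceptable(uint8_t got_seq, uint8_t expected_seq,
                              bool is_error_packet)
    {
      if (got_seq == expected_seq)
        return true;
      return is_error_packet &&
             got_seq == static_cast<uint8_t>(expected_seq - 1);
    }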
|
- Handler_read_retry
- Update_scan
- Delete_scan
|
- Fixed typos
- Added --core-on-failure to mysql-test-run
- More DBUG_PRINT in viosocket.c
- Don't forget CLIENT_REMEMBER_OPTIONS for compressed slave protocol
- Removed unused stage variables
|
Windows!
|
a correct fix:
* store properly quoted table names in tables4repair/etc lists
* tell handle_request_for_tables whether the name is already properly quoted
* test cases for all uses of fix_table_name()
|
followup
|
result values to the next column
We assume throughout the code that null_value==true is in sync
with a NULL value returned by val_str()/val_decimal().
Item_sum_sum::val_decimal() erroneously returned a non-NULL value together
with null_value set to true. Fixed to return NULL instead.
|
thd->clear_error() destroyed already existing error status
|
Make the testcase stable by adding FORCE INDEX
|
This issue was discovered by
Dawid Golunski (http://legalhackers.com)
|
removed
|
wait until the failed connection thread completely dies
before uninstalling pam plugin
|
without a fix for Bug#12818255 (MDEV-6581)
|
MYSQL-5.5
The bug asks for a backport of bug#1463594 and bug#20682959. This
is required because, with replication enabled, the master transaction
can commit whereas the slave cannot, since the 'environment' is not
exactly the same. This manifestation is seen in bug#22024200.
|
NAME_CONST QUERY
ISSUE:
------
Using NAME_CONST with a non-constant negated expression as
value can result in incorrect behavior.
SOLUTION:
---------
The problem can be avoided by checking whether the argument
is a constant value.
The fix is a backport of Bug#12735545.
|
THAT ACTUALLY EXISTS
ANALYSIS:
=========
Stored functions updating a view, where the view table has a
trigger defined that updates another table, fail with an error
reporting that the table doesn't exist.
If there is a trigger defined on a table, a variable
'trg_event_map' will be set to a non-zero value after the
parsed tree creation. This indicates what triggers we need to
pre-load for the TABLE_LIST when opening an associated table.
During the prelocking phase, the variable 'trg_event_map'
will not be set for the view table. This value will be set
after the processing of triggers defined on the table. During
the processing of sub-statements, 'locked_tables_mode' will be
set to 'LTM_PRELOCKED' which denotes that further locking
of tables/functions cannot be done. This results in the other
table not being locked and thus further processing results in
an error getting reported.
FIX:
====
During the prelocking of the view, the value of 'trg_event_map'
of the view is copied to the 'trg_event_map' of the next table
in the TABLE_LIST. This results in the locking of the tables
associated with the trigger as well.
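A hedged sketch of the idea behind the fix, with hypothetical types (the real
code works on TABLE_LIST during prelocking): the view's trigger event map is
propagated to the next table in the list so that the tables used by its
triggers get prelocked too:

    // Hypothetical stand-in for TABLE_LIST; only the fields needed to
    // illustrate the propagation are shown.
    struct TableRef
    {
      TableRef *next;          // next table in the prelocking list
      unsigned trg_event_map;  // which trigger events must be preloaded
    };

    void propagate_trigger_map(TableRef *view)
    {
      if (view && view->next)
        view->next->trg_event_map |= view->trg_event_map;
    }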
|
Revert the following bug fix:
Bug#20685029: SLAVE IO THREAD SHOULD STOP WHEN DISK IS
FULL
Bug#21753696: MAKE SHOW SLAVE STATUS NON BLOCKING IF IO
THREAD WAITS FOR DISK SPACE
The reverted fix results in a deadlock between the slave IO thread
and the SQL thread.
|
INSERTS/UPDATES ON TEMPORARY TABLES
Bug#14294223: CHANGES NOT ALLOWED TO TEMPORARY TABLES ON
READ-ONLY SERVERS
Problem:
========
Running 5.5.14 in read-only mode we can create temporary tables
but cannot insert or update records in them. When we
try, we get Error 1290: The MySQL server is running with the
--read-only option so it cannot execute this statement.
Analysis:
=========
This bug is specific to the binlog being enabled and
binlog-format being stmt/mixed. A standalone server without
the binlog enabled, or with row-based binlog format, works fine.
How a standalone server and row-based replication work:
========================================================
A standalone server and row-based replication mark a
transaction as read_write only when it modifies
non-temporary tables as part of the current transaction.
Because of this, when the code enters the commit phase it
checks whether the transaction is read_write or not. If the
transaction is read_write and global read-only mode is enabled,
the transaction will fail with a 'server is in read-only mode'
error.
In statement-based mode, at the time of writing to the binary
log a binlog handler is created and it is always marked as
read_write. For temporary tables, even though the engine did
not mark the transaction as read_write, the new transaction
that is started by the binlog handler is considered read_write.
Hence, in this case, when the code enters the commit phase it
finds one handler which has a read_write transaction even when
we are only modifying a temporary table. This causes the server
to throw an error when global read-only mode is enabled.
Fix:
====
At commit time in "ha_commit_trans", if a read_write
transaction is found, we should check whether this transaction
comes from a handler other than the binlog_handler. This
ensures that the commit is blocked only when there is a genuine
read_write transaction from an engine other than the
binlog_handler.
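A hedged sketch of the described check with hypothetical types (not the real
ha_commit_trans() code): under global read-only, the commit is rejected only
if a read_write transaction was registered by some engine other than the
binlog handler:

    #include <vector>

    struct HandlerTrx
    {
      bool is_read_write;      // engine marked the transaction read_write
      bool is_binlog_handler;  // the internal binlog "engine"
    };

    bool block_commit_for_read_only(const std::vector<HandlerTrx> &trx,
                                    bool global_read_only)
    {
      if (!global_read_only)
        return false;
      for (const HandlerTrx &h : trx)
        if (h.is_read_write && !h.is_binlog_handler)
          return true;  // a genuine engine change, not just the binlog handler
      return false;
    }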
|
Backporting MDEV-5781 from 10.0.
|
Use FILE_FLAG_FIRST_PIPE_INSTANCE with the first CreateNamedPipe()
call to make sure the pipe does not already exist.
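A hedged sketch of the flag's effect (the pipe name and buffer/timeout values
are placeholders, not the server's actual parameters): with
FILE_FLAG_FIRST_PIPE_INSTANCE, CreateNamedPipe() fails with
ERROR_ACCESS_DENIED if the pipe name is already taken, instead of silently
creating another instance:

    #include <windows.h>

    HANDLE create_pipe_unless_it_exists(const wchar_t *name)
    {
      HANDLE h = CreateNamedPipeW(
          name,
          PIPE_ACCESS_DUPLEX | FILE_FLAG_FIRST_PIPE_INSTANCE,
          PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
          PIPE_UNLIMITED_INSTANCES,
          16384, 16384,          /* out/in buffer sizes */
          0,                     /* default timeout */
          NULL);
      if (h == INVALID_HANDLE_VALUE && GetLastError() == ERROR_ACCESS_DENIED)
      {
        /* Another process already owns a pipe with this name. */
      }
      return h;
    }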
|