Commit messages
Bug#57995: Compiler flag change build error on OSX 10.4: my_getncpus.c
Bug#57996: Compiler flag change build error on OSX 10.5 : bind.c
Bug#57994: Compiler flag change build error : my_redel.c
Bug#57993: Compiler flag change build error on FreeBsd 7.0 : regexec.c
Bug#57992: Compiler flag change build error on FreeBsd : mf_keycache.c
Bug#57997: Compiler flag change build error on OSX 10.6: debug_sync.cc
Fix assorted compiler-generated warnings.
with on duplicate key update
There was a missed corner case in the partitioning
handler, which caused next_insert_id to be changed
in the second-level handlers (i.e. the handler of a
partition), which triggered this debug assertion.
The solution is to ensure that only the partitioning
level generates auto_increment values, since a value
generated within a partition may fail to match the
partition function.
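A minimal sketch of the statement shape involved (table and
column names are invented for illustration):

CREATE TABLE t1 (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY, b INT)
PARTITION BY KEY (a) PARTITIONS 2;
INSERT INTO t1 (a, b) VALUES (1, 1)
ON DUPLICATE KEY UPDATE b = b + 1; # must not generate auto_increment values inside a partition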
in different default schema.
In strict mode, when data truncation or conversion happens,
THD::killed is set to THD::KILL_BAD_DATA.
This is an abuse of the KILL mechanism, used to guarantee
that execution of the statement is aborted.
Stored procedure execution, on the other hand, upon
detecting that the connection was killed, would terminate
immediately, without trying to restore the caller's
context, in particular without restoring the caller's
current schema.
The fix is, when terminating stored procedure execution,
to bypass cleanup only if the entire connection was
killed, not for other forms of KILL.
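A hypothetical repro sketch (schema, table, and procedure
names invented), assuming strict mode is enabled:

CREATE DATABASE db1;
CREATE DATABASE db2;
CREATE TABLE db1.t1 (c CHAR(1));
SET sql_mode = 'STRICT_ALL_TABLES';
CREATE PROCEDURE db1.p1() INSERT INTO db1.t1 VALUES ('too long');
USE db2;
CALL db1.p1();     # fails with a truncation error in strict mode
SELECT DATABASE(); # before the fix, this could report db1 instead of db2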
ALTER TABLE RENAME, DISABLE KEYS.
The code of ALTER TABLE RENAME, DISABLE KEYS could
issue a commit while holding the LOCK_open mutex.
This is a regression introduced by the fix for
Bug 54453.
It failed an assert guarding us against a potential
deadlock with connections trying to execute
FLUSH TABLES WITH READ LOCK.
The fix is to move acquisition of LOCK_open outside
the section that issues ha_autocommit_or_rollback().
LOCK_open is taken to protect against concurrent
operations on .frms and the table definition
cache, and does not need to cover the call to commit.
A test case was added to innodb_mysql.test.
The patch is to be null-merged to 5.5, which
already has 54453 null-merged to it.
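A sketch of the affected statement shape (names invented):

CREATE TABLE t1 (a INT, KEY (a)) ENGINE = InnoDB;
ALTER TABLE t1 RENAME TO t2, DISABLE KEYS; # formerly committed while holding LOCK_open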
Quoting from the bug report:
The pstack library has been included in MySQL since version
4.0.0. It's useless and should be removed.
Details: According to its own documentation, pstack only works
on Linux on x86 in 32 bit mode and requires LinuxThreads and a
statically linked binary. It doesn't really support any Linux
from 2003 or later and doesn't work on any other OS.
The --enable-pstack option is thus deprecated and has no effect.
sporadically.
The cause of the sporadic timeout was a leaking protection
against the global read lock, taken by the RENAME statement
and not released in case an error occurred during RENAME.
The leaked protection counter would keep the value of
protect_against_global_read from ever dropping to 0.
Consequently, FLUSH TABLES in all connections, including
the one that leaked the protection, could not proceed.
The fix is to ensure that all branches in the RENAME code
properly release the GRL protection.
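One hypothetical way to observe the leak (table names
invented):

CREATE TABLE t1 (a INT);
RENAME TABLE t1 TO t2, no_such_table TO t3; # fails; the GRL protection used to leak here
FLUSH TABLES WITH READ LOCK;                # could then time out in every connection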
The crash happens because the original join table is replaced
with a temporary table at the execution stage, and later we
attempt to use this temporary table in select_describe.
It might happen that the Item_subselect::update_used_tables()
method, which sets the const_item flag, is not called for some
reason (for example, there is no WHERE/HAVING condition in the
subquery). This prevents creation of JOIN::join_tmp and breaks
the original join.
The fix is to call ::update_used_tables() before the
::const_item() check.
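A sketch of the affected query shape (names invented); the
subquery deliberately has no WHERE or HAVING clause:

CREATE TABLE t1 (a INT);
EXPLAIN SELECT (SELECT a FROM t1 LIMIT 1) FROM t1; # formerly could crash in select_describe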
In case of an outer join and an empty WHERE condition,
an 'always true' condition is created for the WHERE clause.
Later, in mysql_select(), the original SELECT_LEX WHERE
condition is overwritten with the created condition.
However, the SELECT_LEX condition is also used as the
initial condition in mysql_select()->JOIN::prepare().
On the second execution of the PS the modified SELECT_LEX
condition is taken, and this leads to a crash.
The fix is to restore the original SELECT_LEX condition
(set to NULL if the original condition is NULL) in
reinit_stmt_before_use().
The HAVING clause is fixed too, for safety reasons
(no test case, as I did not manage to think of an
appropriate example).
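A sketch of the scenario (names invented): an outer join with
no WHERE clause, executed twice as a prepared statement:

CREATE TABLE t1 (a INT);
CREATE TABLE t2 (a INT);
PREPARE stmt FROM 'SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a';
EXECUTE stmt; # overwrites the SELECT_LEX WHERE condition
EXECUTE stmt; # formerly crashed on re-execution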
in sql_show.cc, find_files()
Removed the extra allocation.
by a function and column
The bug report reveals two different bugs about grouping
on a function:
1) grouping by the TIME_TO_SEC function result caused
a server crash or wrong results, and
2) grouping by a function returning a blob caused
an unexpected "Duplicate entry" error and wrong
results.
Details for the 1st bug:
TIME_TO_SEC() returns NULL if its argument is invalid (an
empty string, for example). Thus its nullability depends not
only on the nullability of its arguments but also on their
values. Fixed by (overoptimistically) setting TIME_TO_SEC()
to be nullable regardless of the nullability of its arguments.
Details for the 2nd bug:
The server is unable to create indexes on blobs without an
explicit blob key part length. However, this fact was
ignored for blob function result fields of GROUP BY
intermediate tables.
Fixed by disabling GROUP BY index creation for blob
function result fields, as for regular blob fields.
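A sketch of the 1st bug's query shape (names and values
invented):

CREATE TABLE t1 (c VARCHAR(10));
INSERT INTO t1 VALUES ('invalid'), ('00:00:01'); # TIME_TO_SEC('invalid') yields NULL
SELECT TIME_TO_SEC(c), COUNT(*) FROM t1
GROUP BY TIME_TO_SEC(c); # formerly crashed or returned wrong results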
"Grantor" columns' data is lost when replicating mysql.tables_priv.
Slave SQL thread used its default user ''@'' as the grantor of GRANT|REVOKE
statements executing on it.
In this patch, current user is put in query log event for all GRANT and REVOKE
statement, SQL thread uses the user in query log event as grantor.
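A sketch of a statement whose grantor is affected (names
invented):

GRANT SELECT ON db1.t1 TO 'u1'@'localhost'; # the slave formerly recorded ''@'' as Grantor in mysql.tables_priv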
Row events were wrongly applied to a temporary table with the
same name. But row events are generated only for base tables,
since a temporary table's data is never binlogged in row mode.
Normally, a base table cannot be updated if a temporary table
with the same name exists. But there are two cases which can
generate row events on the base table of the same name.
Case 1: a 'CREATE TABLE ... SELECT' statement.
In mixed format, it will generate row events if it is unsafe.
Case 2: dropping a transactional temporary table in a
transaction (happens only on 5.5+).
BEGIN;
DROP TEMPORARY TABLE t1; # t1 is an InnoDB table
INSERT INTO t1 VALUES(rand()); # t1 is a MyISAM table
COMMIT;
'DROP TEMPORARY TABLE' will be put in the transaction cache and
binlogged after the row events generated by the 'INSERT'
statement.
After this patch, the slave opens only the base table when
applying a row event.
Fix assorted warnings that are generated in optimized builds.
Most of it is silencing variables that are set but unused.
This patch also introduces the MY_ASSERT_UNREACHABLE macro
which helps the compiler to deduce that a certain piece of
code is unreachable.
Tag or remove unused arguments and variables.
data dictionary confusion
On file systems with case-insensitive file names, and
lower_case_table_names set to '2', the server could crash
due to a table definition cache inconsistency. This is
the default setting on Mac OS X, but may also be set and
used on MS Windows.
The bug is caused by using two different strategies for
creating the hash key for the table definition cache,
resulting in a failure to look up an entry which is present
in the cache, or a failure to delete an existing entry. One
strategy was to use the real table name (with case
preserved), the other to use a normalized table name (i.e.
a lower-case version).
This manifested in two cases. One is during DROP DATABASE,
where all known files are removed. The removal from
the table definition cache is done via a generated list of
TABLE_LIST with keys (wrongly) created using the case-preserved
name. The other is during CREATE TABLE, where the cache lookup
is also (wrongly) based on the case-preserved name.
The fix is to use only the normalized table name when
creating hash keys.
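A hypothetical sequence exercising the mismatch (names
invented), assuming lower_case_table_names=2 on a
case-insensitive file system:

CREATE DATABASE db1;
CREATE TABLE db1.MyTable (a INT); # cached under one key derivation
DROP DATABASE db1; # removal formerly built keys from the case-preserved name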
core files
For crash testing: kill the server without generating a core
file.
include/my_dbug.h
  Use kill(getpid(), SIGKILL), which cannot be caught by
  signal handlers. All DBUG_XXX macros should be no-ops in
  optimized mode; do that for DBUG_ABORT as well.
sql/handler.cc
  Kill the server without generating a core.
sql/log.cc
  Kill the server without generating a core.
replication aborts
When receiving a 'STOP SLAVE' command, the slave SQL thread
rolls back the transaction and stops immediately if only
transactional tables were updated, even though 'CREATE|DROP
TEMPORARY TABLE' statements are in it. But these statements
can never be rolled back: the mapping of temporary tables to
the user session remains until 'RESET SLAVE'. Therefore, when
the SQL thread restarts and executes the whole transaction
again, it aborts with an error that the table already exists
or does not exist.
After this patch, the SQL thread always waits until the
transaction ends and then stops, if 'CREATE|DROP TEMPORARY
TABLE' statements are in it.
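A sketch of a transaction shape that used to trigger the
problem (names invented):

BEGIN;
CREATE TEMPORARY TABLE tmp1 (a INT) ENGINE = InnoDB;
INSERT INTO t1 SELECT * FROM tmp1; # t1 is a transactional base table
COMMIT;
# STOP SLAVE issued mid-transaction rolled back the row changes,
# but the CREATE TEMPORARY TABLE cannot be rolled back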
latest mysql-5.1-bugteam.
The procedure for setting up a fake binary log, by changing the
relay log files manually, is run twice when we issue mtr with
--repeat=2. However, when setting it up for the second time,
neither is the SQL thread reset nor the server restarted. This
means that stale relay log IO_CACHE metadata - from the
first run - is left around. As a consequence, when the test is
run for the second time, the IO_CACHE for the relay log has a
position in the file different from zero, triggering the
assertion.
We fix this by deploying a call to my_b_seek before calling
check_binlog_magic in next_event. This prevents the server
from asserting in the case that the SQL thread reads
from a hot log (relay.NNNNN), then is stopped, then is restarted
from a previous cold log (relay.MMMMM), and then reaches
the hot log relay.NNNNN again, in which case the read
coordinates are not set to zero but to the old values.
Additionally, some comments were added to the source code.
latest mysql-5.1-bugteam.
Surprisingly, a Slave_log_event would show up in the binary
log. This event is never used and should not appear in the
logs. As such, when the slave (or the mysqlbinlog tool) reads
the event, it will hit an invalid pointer (the reference to the
descriptor event when deserializing the Slave_log_event was
purposely set to NULL).
The presence of the Slave_log_event denotes a corrupted log, but
we cannot tell how the log got corrupted in the first
place. However, we can make the server cope with such events
when it reads them - in case of log corruption - and fail
gracefully.
This patch makes the server/mysqlbinlog report that it has
found an invalid log event when a Slave_log_event is read.
After an ALTER TABLE which changed only the table's metadata,
the row-based binlog sometimes got corrupted, since the table
map was unexpectedly set to 0 for subsequent updates to the
same table.
An ALTER TABLE which changed only the table's metadata always
reset table_map_id for the table share to 0. Despite the fact
that 0 is a valid value for table_map_id, this step caused
problems as it could create a situation in which we had more
than one table share with table_map_id equal to 0. If more
than one table with table_map_id equal to 0 was updated in the
same statement, updates to these different tables were written
into the same rows event. This caused the slave server to
crash.
This bug happens only on 5.1. It doesn't affect 5.5+.
This patch solves the problem by ensuring that ALTER TABLE
statements which change metadata only never reset table_map_id
to 0. To do this it changes reopen_table() to correctly use
the refreshed table_map_id value instead of using the old one
or resetting it.
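A sketch of a metadata-only change of the kind described
(names invented; ALTER ... SET DEFAULT is assumed here to be
a metadata-only operation):

CREATE TABLE t1 (a INT) ENGINE = InnoDB;
ALTER TABLE t1 ALTER COLUMN a SET DEFAULT 5; # formerly reset the share's table_map_id to 0
INSERT INTO t1 VALUES (1); # row events for colliding table_map_ids could share one event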
Backported the patch for BUG#55452.
When a slave executes a transaction bigger than the slave's
max_binlog_cache_size, the slave will crash. It is caused by
an assertion that the server should roll back only the
statement, not the whole transaction, if the error
ER_TRANS_CACHE_FULL happens. But the slave SQL thread always
rolls back the whole transaction when an error happens.
After this patch, we always clear any error set in the SQL
thread (it is different from the error in 'SHOW SLAVE
STATUS'), and it is cleared before rolling back the
transaction.
This is a regression from the fix for bug no 38999. A storage
engine capable of reading only a subset of a table's columns
updates the corresponding bits in the read buffer to signal
that it has read NULL values for the corresponding columns. It
cannot, and should not, update any other bits. Bug no 38999
occurred because the implementation of UPDATE statements
compared the NULL bits using memcmp, inadvertently comparing
bits that were never requested from the storage engine. The
regression was caused by the storage engine trying to
alleviate the situation by writing to all NULL bits, even
those that it had no knowledge of. This has devastating
effects for the index merge algorithm, which relies on all
NULL bits, except those explicitly requested, being left
unchanged.
The fix reverts the fix for bug no 38999 in both InnoDB and
the InnoDB plugin, and changes the server's method of
comparing records. For engines that always read entire rows,
we proceed as usual. For engines capable of reading only
selected columns, the record buffers are now compared on a
column-by-column basis. An assertion was also added so that
non-comparable buffers are never read. Some relevant
copy-pasted code was also consolidated in a new function.
Item_func_spatial_collection::fix_length_and_dec()
was changed to use the argument's print() method to print
the ER_ILLEGAL_VALUE_FOR_TYPE error.
called twice in a row
Queries with nested joins could cause an infinite loop in the
server when used from SP/PS.
When flattening nested joins, simplify_joins() tracks whether
the name resolution list needs to be updated, by setting
fix_name_res to TRUE if the current loop iteration has done
any transformations to the join table list. The problem was
that the flag was not reset before the next loop iteration,
leading to unnecessary "fixing" of the name resolution list,
which in turn could lead to a loop (i.e. a circularly-linked
part) in that list. This was causing problems on subsequent
executions when used together with stored procedures or
prepared statements.
Fixed by making sure fix_name_res is reset on every loop
iteration.
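A sketch of a nested-join query of the affected shape (names
invented), run as a prepared statement:

CREATE TABLE t1 (a INT);
PREPARE stmt FROM
'SELECT * FROM t1
 JOIN (t1 AS t2 JOIN t1 AS t3 ON t2.a = t3.a) ON t1.a = t2.a';
EXECUTE stmt;
EXECUTE stmt; # re-execution formerly looped forever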
Conversion from a floating point number to a string caused
a crash.
Under rare circumstances a String object could crash when
it was requested to allocate new memory.
A crash could also occur in Field_double::val_str() because
of a pointer referencing memory inside a String object of
unknown size.
And finally, the geometry collection should not accept
arguments which are non-geometric.
within query
The server could crash after materializing a derived table
which requires a temporary table for grouping.
When destroying the temporary table used to execute a query
for a derived table, JOIN::destroy() did not clean up
Item_fields pointing to fields in the temporary table. This
led to dereferencing a dangling pointer when printing out the
item tree later in the outer SELECT.
The solution is an addendum to the patch for bug37362: in
addition to cleaning up items in tmp_all_fields3, do the same
for items in tmp_all_fields1, since now we have an example
where this is necessary.
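A sketch of the query shape (names invented): a derived table
that needs a grouping temporary table, with the item tree
printed afterwards:

CREATE TABLE t1 (a INT);
EXPLAIN EXTENDED
SELECT * FROM (SELECT a, COUNT(*) FROM t1 GROUP BY a) AS d;
# printing the plan formerly dereferenced a dangling pointer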
KILL_BAD_DATA is returned
Two problems were discovered with the LEAST()/GREATEST()
functions:
1. The check for a null value should happen even
after the second call to val_str() on the arguments. This is
important because two subsequent calls to the same
Item::val_str() may yield different results.
Fixed by checking for a NULL value before dereferencing
the string result.
2. While looping over the arguments and evaluating them,
the loop should stop if there was an evaluation error
or the statement was killed. Fixed by checking for errors
and bailing out.
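A sketch of the NULL case (names invented):

CREATE TABLE t1 (c VARCHAR(10));
INSERT INTO t1 VALUES (NULL), ('a');
SELECT GREATEST(c, 'b') FROM t1; # a NULL from any val_str() call must be caught before dereferencing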
A user variable assignment expression that is
evaluated in a logical expression context
(Item::val_bool()) can be pre-calculated in a
temporary table for GROUP BY.
However, when the expression value was used after the
temp table creation, it was re-evaluated instead of
being read from the temp table, due to a missing
val_bool_result() method.
Fixed by implementing the method.
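A sketch of the expression shape (names invented): a user
variable assignment consumed as a boolean and grouped on:

CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (0), (1);
SELECT (@v := a) OR a AS b, COUNT(*) FROM t1 GROUP BY b;
# the assignment was formerly re-evaluated instead of read back from the temp table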
The CONVERT_TZ function crashes the server when the
timezone argument is an empty SET field value.
1) CONVERT_TZ may look up a timezone string in the
tz_names hash.
2) The string representation of an empty SET is a
String of zero length with a NULL pointer.
3) If the key argument length is zero, the hash functions
do the comparison using the length of the record being
compared against.
I.e. a zero-length String buffer is an invalid
argument for the hash search functions, and if the String
points to a NULL buffer, hashcmp() fails with a SEGV
accessing that memory.
The my_tz_find function has been modified to
treat empty Strings as invalid timezone values
and skip the unnecessary hash search.
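A sketch of the crashing call (names invented):

CREATE TABLE t1 (tz SET('a','b'));
INSERT INTO t1 VALUES (''); # the empty SET value
SELECT CONVERT_TZ('2010-01-01 12:00:00', tz, 'UTC') FROM t1; # formerly crashed in the hash lookup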
Bug#57113: ha_partition::extra(ha_extra_function):
Assertion `m_extra_cache' failed
The fix for bug#55458 included DBUG_ASSERTs causing
debug builds of the server to crash on
another multi-table update.
Removed the asserts since they were wrong.
(Updated after testing the patch in 5.5.)
LOAD DATA into partitioned MyISAM table
The problem was that both partitioning and MyISAM
used the same table_share->mutex for different protections
(auto-increment and repair).
Solved by adding a specific mutex for the partitioning
auto_increment.
Also added destruction of the ha_data structure in
free_table_share (which is to be propagated
into 5.5).
This is a 5.1-only patch; already fixed in 5.5+.
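A sketch of the statement class involved (names and file path
invented):

CREATE TABLE t1 (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY)
ENGINE = MyISAM
PARTITION BY KEY (a) PARTITIONS 2;
LOAD DATA INFILE 'data.txt' INTO TABLE t1; # auto-increment and repair contended on one mutex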
Fixed the maximum length of system variables to be 2^24 - 1,
as documented for MEDIUMBLOB, instead of 2^24.