This is an old legacy performance bug.
When a very selective range scan existed for the second table in a join,
and, at the same time, there was another range condition depending on the
fields of the first table, the optimizer chose a plan with
'Range checked for each record'. This plan was extremely inefficient in
comparison with the regular selective range scan.
As a matter of fact, the range scan chosen for each record was the same as
that selective range scan.
Changed the test case for bug 24776 to preserve the old output for EXPLAIN.
not_enough_points() introduced to check whether the spatial object is invalid.
10.x server
Extend the 5.1 client library to read 4-byte capabilities from the first handshake packet.
The two higher bytes are always zero for 5.1 servers.
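
A minimal sketch of how a client might combine the two capability halves, assuming the
little-endian 2-byte layout of the handshake fields; the names are illustrative and not
the actual client library code:

    #include <stdint.h>
    #include <stdio.h>

    /* Combine the two little-endian 16-bit halves of the server capability
       flags found in the initial handshake packet.  For a 5.1 server the
       upper half is always zero, so the result equals the old 2-byte value;
       a 10.x server can advertise bits in the upper half as well. */
    static uint32_t read_capabilities(const unsigned char *low2,
                                      const unsigned char *high2)
    {
        uint32_t caps = (uint32_t)low2[0] | ((uint32_t)low2[1] << 8);
        caps |= ((uint32_t)high2[0] | ((uint32_t)high2[1] << 8)) << 16;
        return caps;
    }

    int main(void)
    {
        const unsigned char low[2]  = {0xff, 0xf7};  /* example lower half   */
        const unsigned char high[2] = {0x00, 0x00};  /* zero for 5.1 servers */
        printf("capabilities = 0x%08x\n", (unsigned)read_capabilities(low, high));
        return 0;
    }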
ma_blockrec.c or ER_NOT_KEYFILE on query with DISTINCT and GROUP BY
This could happen when using Aria for internal temporary files (the default case) and using DISTINCT.
_ma_scan_restore_block_record() didn't work correctly if rows were inserted, updated or deleted on the handler
between calls to _ma_scan_remember_block_record() and _ma_scan_restore_block_record().
The effect was that some DISTINCT queries that used remove_dup_with_compare() could fail.
.bzrignore:
  Ignore sql_yacc.hh
mysql-test/suite/maria/r/distinct.result:
  Test case for MDEV-4280
mysql-test/suite/maria/t/distinct.test:
  Test case for MDEV-4280
mysql-test/t/mysql.test:
  Fixed test suite (we could get error -1 in some cases)
sql/sql_select.cc:
  Break the loop if restart_rnd_next() returns an error
storage/maria/ha_maria.cc:
  scan_restore_pos() can return a disk fault error.
storage/maria/ma_blockrec.c:
  _ma_scan_remember_block_record() incorrectly updated scan.dir instead of scan_save.dir.
  _ma_scan_restore_block_record() didn't work correctly if rows were inserted, updated or deleted on the handler
  between calls to _ma_scan_remember_block_record() and _ma_scan_restore_block_record().
  Fixed by adding counters for row changes and re-reading the current scan page if changes had been made
  (see the sketch below).
storage/maria/ma_blockrec.h:
  scan_restore_pos() can return a disk fault error.
storage/maria/ma_delete.c:
  Increment row_changes
storage/maria/ma_scan.c:
  scan_restore_pos() can return a disk fault error.
storage/maria/ma_update.c:
  Increment row_changes
storage/maria/ma_write.c:
  Increment row_changes
storage/maria/maria_def.h:
  scan_restore_pos() can return a disk fault error.
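
A minimal sketch of the remember/restore pattern with a row-change counter described for
ma_blockrec.c above; the structures and names are invented for illustration and are not
the Aria implementation:

    #include <stdio.h>

    /* The saved scan position is only trusted if no row changes happened in
       between; otherwise the current page has to be re-read. */
    struct scan_state {
        long page;           /* page the sequential scan is positioned on */
        int  dir_offset;     /* offset in the page directory              */
        long row_changes;    /* bumped by every insert/update/delete      */
    };

    struct saved_pos {
        long page;
        int  dir_offset;
        long row_changes_at_save;
    };

    static void scan_remember(const struct scan_state *s, struct saved_pos *save)
    {
        save->page = s->page;
        save->dir_offset = s->dir_offset;
        save->row_changes_at_save = s->row_changes;
    }

    static int scan_restore(struct scan_state *s, const struct saved_pos *save)
    {
        s->page = save->page;
        s->dir_offset = save->dir_offset;
        if (s->row_changes != save->row_changes_at_save) {
            /* Rows changed since the position was saved: the cached page
               image is stale, so re-read it (which may fail with an error). */
            printf("re-reading page %ld\n", s->page);
        }
        return 0;
    }

    int main(void)
    {
        struct scan_state s = {42, 3, 0};
        struct saved_pos save;
        scan_remember(&s, &save);
        s.row_changes++;             /* a row was deleted in the meantime */
        return scan_restore(&s, &save);
    }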
Removed an "optimization" which caused problems on the second execution of a prepared statement with a string parameter in the LIMIT clause.
Fixed test_bug43560 so that it can be skipped if the connection uses a UNIX socket.
fixes for gcc 4.8 - compilation warnings and -fsanitize=address
Update 5.1 to replicate from 10.0 and
to show the server version (as of 10.0) correctly.
sql-common/client.c:
  MDEV-4088
sql/slave.cc:
  Use the version number, not just the first character of the version string
  (we want 10 > 4, not "10" < "4").
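
A minimal sketch of comparing server versions by their numeric major part instead of by
the first character of the version string; the names are illustrative, not the slave.cc
code:

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse the leading number of a version string, so that "10.0.1"
       compares greater than "4.1.25" (comparing first characters would get
       this wrong, since '1' < '4'). */
    static long major_version(const char *version)
    {
        return strtol(version, NULL, 10);
    }

    int main(void)
    {
        const char *master = "10.0.1-MariaDB";
        const char *min_required = "4.0.0";
        if (major_version(master) >= major_version(min_required))
            printf("version check passed: %ld >= %ld\n",
                   major_version(master), major_version(min_required));
        return 0;
    }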
./mtr --suite=main,plugins
will work on all branches.
MultiPoint.
Need to check whether the number of points is 0 for the polygon.
A forgotten DBUG_ASSERT should be replaced with returning an error.
Item_default_value inherits from Item_field, so it should create its temporary table field similarly.
Additional fixes for possible overflows in length-related
calculations in the 'spatial' implementations.
Checks added to the ::get_data_size() methods.
max_n_points decreased so that an object occupies less than 2GB; an
object of that size is practically inoperable anyway.
GROUP BY
Item_func_make_set wasn't taking into account the first argument when
calculating maybe_null.
sql/item_strfunc.cc:
  Rewrite Item_func_make_set, removing separate storage of the first argument.
sql/item_strfunc.h:
  Rewrite Item_func_make_set, removing separate storage of the first argument.
The bug was found by Alyssa Milburn.
If the number of points of a geometry feature read from the
binary representation is greater than 0x10000000, then
the (uint32) (num_points * 16) computation overflows and drops the high bits,
which leads to various errors.
Fixed by an additional check: if (num_points > max_n_points).
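
A minimal sketch of the unsigned wrap-around and the guarding check; MAX_N_POINTS here is
a hypothetical limit, not the real value:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_N_POINTS 0x01000000u   /* hypothetical upper bound */

    /* With more than 0x10000000 points, num_points * 16 no longer fits in 32
       bits and wraps to a small number.  Checking the count against an upper
       bound before multiplying avoids the wrap. */
    static int data_size(uint32_t num_points, uint32_t *size_out)
    {
        if (num_points > MAX_N_POINTS)
            return -1;                   /* reject oversized objects */
        *size_out = num_points * 16;     /* cannot wrap now */
        return 0;
    }

    int main(void)
    {
        uint32_t n = 0x10000001u;
        uint32_t wrapped = n * 16;       /* wraps around to 16 */
        uint32_t size;
        printf("unchecked size: %u\n", (unsigned)wrapped);
        printf("checked: %s\n", data_size(n, &size) ? "rejected" : "ok");
        return 0;
    }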
MySQL Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME USER VARIABLE = CRASH
and
MySQL Bug#14664077 SEVERE PERFORMANCE DEGRADATION IN SOME CASES WHEN USER VARIABLES ARE USED
sql/item_func.cc:
  Don't use anything from Item_func_set_user_var::fix_fields()
  in Item_func_set_user_var::save_item_result().
sql/sql_class.cc:
  Call suv->save_item_result(item) *before* doing suv->fix_fields(), because
  the former evaluates the item (and caches its value), while the latter marks
  the user variable as non-const. The problem is that the item was fix_field'ed
  when the user variable was const, and it doesn't expect it to change to non-const
  in the middle of the execution.
mysys/errors.c:
  Revert upstream's fix; use a much simpler one.
mysys/my_write.c:
  Revert upstream's fix; use a simpler one.
sql/item_xmlfunc.cc:
  Useless, but ok.
sql/mysqld.cc:
  Simplify upstream's fix.
storage/heap/hp_delete.c:
  Remove upstream's fix.
  We'll use a much less expensive approach.
DIAGNOSTICS_AREA::SET_OK_STATUS
Use DBUG_RETURN() instead of a plain return if DBUG_ENTER() is
used in the function. This patch fixes the Windows
pb2 failure on mysql-5.1.
Approved by Marko. rb#1792
I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1
Part 2: Fix for test failures on Windows.
DIAGNOSTICS_AREA::SET_OK_STATUS
The test fails on the 5.1 valgrind build. This is because of a close(-1)
system call.
Fixed by adding extra checks for a valid file descriptor.
Approved by Vasil (Calvin). rb#1792
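
A minimal sketch of guarding close() with a file-descriptor validity check, in the spirit
of the fix; the helper is illustrative and not the server code:

    #include <stdio.h>
    #include <unistd.h>

    /* Only close a descriptor that was actually opened.  close(-1) merely
       fails with EBADF, but valgrind reports it as an invalid file
       descriptor being passed to a syscall. */
    static void close_if_open(int *fd)
    {
        if (*fd >= 0) {
            close(*fd);
            *fd = -1;       /* mark as closed so a second call is a no-op */
        }
    }

    int main(void)
    {
        int fd = -1;         /* e.g. a temporary file that was never created */
        close_if_open(&fd);  /* no syscall is issued */
        puts("done");
        return 0;
    }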
I_MAIN.CTYPE_UTF8 FOR MACOSX10.6 FOR 5.1
While converting a directory name to a file name, a
file separator (FN_LIBCHAR) might get appended
to the resulting file name. This can result in an
off-by-one error when the length of the input string
is equal to FN_REFLEN. In this case, the terminating
'\0' gets written beyond the buffer allocated to store
the result.
Fixed by incrementing the dst buffer size by 1. As
extra safety, switched to strnmov() and added a debug
assert to check the length of the input file name.
No test case added, as the scenario is already
covered by the test cases added for the bugs in
the description.
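
A minimal sketch of a bounded directory-to-filename conversion, using a generic buffer
size in place of FN_REFLEN and snprintf() in place of strnmov(); this is not the mysys
implementation:

    #include <stdio.h>
    #include <string.h>

    #define REFLEN 8   /* stand-in for FN_REFLEN */

    /* Appending a path separator to an input that already fills the buffer
       leaves no room for the terminating '\0'.  Sizing the destination one
       byte larger and using a bounded copy avoids the overrun. */
    static void dir_to_filename(const char *dir, char *dst, size_t dst_size)
    {
        size_t len = strlen(dir);
        snprintf(dst, dst_size, "%s%s",
                 dir, (len > 0 && dir[len - 1] != '/') ? "/" : "");
    }

    int main(void)
    {
        char dst[REFLEN + 1];     /* one extra byte for the appended separator */
        dir_to_filename("abcdefg", dst, sizeof(dst));  /* 7 chars + '/' + '\0' */
        printf("%s\n", dst);
        return 0;
    }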
Problem: If the disk becomes full while writing to the binlog,
the server instance hangs until someone frees up space.
After the user frees up the disk space, the server crashes
with an assert (m_status != DA_EMPTY).
Analysis: wait_for_free_space is called in an
infinite loop, i.e., the server instance will hang until
someone frees up the space, so there is no need to
set the status bit in the diagnostics area.
Fix: Replace my_error/my_printf_error with
sql_print_warning(), which prints the warning to the error log.
include/my_sys.h:
  Provision to call sql_print_warning from mysys files.
mysys/errors.c:
  Replace my_error/my_printf_error with
  sql_print_warning(), which prints the warning to the error log.
mysys/my_error.c:
  Implementation of my_printf_warning.
mysys/my_write.c:
  Add logic to break the infinite loop in the simulation.
sql/mysqld.cc:
  Provision to call sql_print_warning from mysys files.
Details of BUG#11746142: CALLING MYSQLD WHILE ANOTHER
INSTANCE IS RUNNING, REMOVES PID FILE
Fix: Before removing the pid file, ensure it was created
by the same process; leave it intact otherwise.
sql/mysqld.cc:
  delete_pid_file() introduced, which checks that the pid file
  belongs to the process before removing it.
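
A minimal sketch of the ownership check before removing a pid file; the function and the
path are hypothetical and this is not the server's delete_pid_file():

    #include <stdio.h>
    #include <unistd.h>

    /* Only unlink the pid file if the pid stored in it is our own, so a
       second server instance cannot remove the first instance's file. */
    static void delete_pid_file_if_ours(const char *path)
    {
        FILE *f = fopen(path, "r");
        long pid_in_file = 0;

        if (!f)
            return;
        if (fscanf(f, "%ld", &pid_in_file) != 1)
            pid_in_file = 0;
        fclose(f);

        if (pid_in_file == (long)getpid())
            unlink(path);    /* it is ours: safe to remove */
        /* otherwise leave it intact: it belongs to another running instance */
    }

    int main(void)
    {
        delete_pid_file_if_ours("/tmp/example_mysqld.pid");  /* hypothetical path */
        return 0;
    }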
Some shell interpreters do not support the '-e' test
primary for constructing conditions.
From man test(1) (on S10):
...skip...
-e file True if file exists. (Not available in sh.)
...skip...
Hence, checking for the existence of a file using
'-e' might result in a syntax error in such
shells.
Fixed by replacing it with '-f'.
DOS ATTACKS
Problem:
For a detailed description, see Bug#42502. This bug is a duplicate
of Bug#42502. The complete fix for Bug#42502 was not made as
proposed, hence the bug still persists.
Fix:
Make the changes as originally proposed for the fix of Bug#42502,
which is to remove the memory allocation done before we actually
check for any errors.
sql/tztime.cc:
  Remove the double allocation for tz_info.
machine and pb2 machines in the explain output.
TO SIGNED
Problem:
When merging field types in a UNION, we usually
upgrade the datatypes to the largest one present in the query.
In the case of MEDIUMINT, this was not happening.
Analysis:
When joined with the types LONG and LONGLONG, MEDIUMINT should get
upgraded to LONG and LONGLONG respectively.
In the given query, the constant '1' is created internally as a LONGLONG
with the SIGNED flag enabled. As a result, while combining
types for the field, LONGLONG together with MEDIUMINT gets converted
to LONG first. LONG with MEDIUMINT (of the third SELECT) gets converted
to MEDIUMINT. The SIGNED flag is taken from the first field.
As a result, the final type would be SIGNED MEDIUMINT.
Fix:
While joining types, MEDIUMINT with LONGLONG and MEDIUMINT with LONG
are now converted to LONGLONG and LONG respectively. Also made some
changes for FLOAT and DOUBLE.
sql/field.cc:
  Changed merge types for MEDIUMINT.
DBUG_ENTER and DBUG_LEAVE must *always* match,
otherwise all subsequent DBUG_ENTER calls will
be poking into undefined stack frames.
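
A minimal sketch of why the pairing matters, using an invented trace-depth counter rather
than the real dbug library:

    #include <stdio.h>

    /* The trace keeps a per-function stack; a plain "return" that skips the
       matching RETURN macro leaves a stale frame behind, so later frames line
       up with the wrong function names. */
    #define MAX_DEPTH 64

    static const char *frames[MAX_DEPTH];
    static int depth = 0;

    #define TRACE_ENTER(name) \
        do { if (depth < MAX_DEPTH) frames[depth++] = (name); } while (0)
    #define TRACE_RETURN(val) \
        do { if (depth > 0) depth--; return (val); } while (0)

    static int good(int x)
    {
        TRACE_ENTER("good");
        TRACE_RETURN(x + 1);    /* pops the frame it pushed */
    }

    static int bad(int x)
    {
        TRACE_ENTER("bad");
        return x + 1;           /* BUG: the "bad" frame is never popped */
    }

    int main(void)
    {
        good(1);
        bad(1);
        printf("frames left on the trace stack: %d\n", depth);  /* prints 1 */
        return 0;
    }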
Analysis:
When the thread cache is enabled, thd->start_utime is not properly
initialized when a thread is picked from the thread cache.
This breaks the quota management mechanism:
THD::time_out_user_resource_limits() resets
m_user_connect->conn_per_hour to 0 based on thd->start_utime.
Fix:
Initialize start_utime when a cached thread is reused.
Notes:
Re-enabled tests which were disabled because of this issue.
This is a followup to the fix of
Bug#14628410 ASSERTION `! IS_SET()' FAILED IN DIAGNOSTICS_AREA::SET_OK_STATUS
(satya.bodapati@oracle.com-20121213132316-5joz4phltx9yhjs7)
In innobase_mysql_tmpfile(): allocate/open the file after
the return(-1); statement.
IN QUERY CACHE CODE
DESCRIPTION:
MySQL Server crashes sporadically when query caching is on and
the server has high contention among clients.
ANALYSIS:
Scenario 1:
In Query_cache::move_by_type(), when handling a RESULT or related block,
a write lock is acquired on its parent Query block. However, the next and prev
pointers are cached in local variables before lock acquisition. In an extremely
high contention scenario there is a possibility that
Query_cache::append_result_data() is operating on the same query block
and, as a consequence, might append a new Result block to the end of the
Query's linked list of Result blocks. This manipulates the next and prev pointers
of the block being processed in move_by_type(), but the local pointers
still point to the previous nodes, thereby causing data corruption leading to a crash.
FIX:
Scenario 1:
The next and prev pointers are now accessed only after lock acquisition in
Query_cache::move_by_type().
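
A minimal sketch of the locking rule from the fix, using a plain pthread mutex; the
structures are illustrative, not the query cache blocks:

    #include <pthread.h>
    #include <stdio.h>

    /* If next/prev are read into locals before taking the lock, another
       thread may relink the node in between and the locals go stale.
       Reading them only after the lock is held gives a consistent view. */
    struct node {
        struct node *next;
        struct node *prev;
        pthread_mutex_t lock;
    };

    static void relocate(struct node *n)
    {
        pthread_mutex_lock(&n->lock);
        struct node *next = n->next;   /* read neighbours under the lock */
        struct node *prev = n->prev;
        printf("next=%p prev=%p\n", (void *)next, (void *)prev);
        /* ... move the block using next/prev ... */
        pthread_mutex_unlock(&n->lock);
    }

    int main(void)
    {
        struct node a = {0};
        pthread_mutex_init(&a.lock, NULL);
        relocate(&a);
        pthread_mutex_destroy(&a.lock);
        return 0;
    }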
SAME VERSION NUMBER 1.0.17
Now that InnoDB/InnoDB Plugin is no longer developed and distributed
separately from the MySQL server, it does not need its own version number.
Thus use the MySQL version instead.
"Removing" the version altogether is not feasible because the config
variable 'innodb_version' cannot be removed in GA branches.
Reviewed by: Marko (rb#1751)
Problem: overflowing the tag buffer leads to a server crash.
Fix: a bound check was added.
sql/item_xmlfunc.cc:
  Fix for BUG#15948580 UPDATE_XML() CRASHES THE SERVER.
  - The XML tag/attribute level shouldn't exceed MAX_LEVEL, as we use a
    static buffer to store them in MY_XML_USER_DATA.
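
A minimal sketch of a depth bound check against a fixed-size level buffer; MAX_LEVEL and
the structures here are invented for illustration, not the item_xmlfunc.cc code:

    #include <stdio.h>

    #define MAX_LEVEL 8   /* hypothetical nesting limit */

    /* Nesting levels are pushed into a fixed-size array, so the depth must be
       checked before writing, otherwise deeply nested input overflows the
       static buffer. */
    struct parse_state {
        int levels[MAX_LEVEL];
        int depth;
    };

    static int enter_tag(struct parse_state *st, int tag_id)
    {
        if (st->depth >= MAX_LEVEL)
            return -1;                    /* reject instead of overflowing */
        st->levels[st->depth++] = tag_id;
        return 0;
    }

    int main(void)
    {
        struct parse_state st = {{0}, 0};
        int i, rc = 0;
        for (i = 0; i < 20 && rc == 0; i++)   /* pretend the input nests 20 deep */
            rc = enter_tag(&st, i);
        printf("stopped at depth %d (rc=%d)\n", st.depth, rc);
        return 0;
    }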
BUF_PAGE_GET_GEN REDUNDANT?
rb://1711
approved by: Marko Makela
When decompressing a compressed page that had already been accessed
in the buffer pool, do not attempt to merge buffered changes.
File names with a colon are being disallowed because of the Alternate Data
Stream (ADS) feature of NTFS that could be misused. ADS allows data to be
written to alternate streams of a normal file. The data in alternate
streams cannot be seen by normal tools on Windows (explorer, cmd.exe). As
a result, someone can use this feature to hide large amounts of data in
alternate streams, and admins will have no easy way of figuring out which
files are using that disk space. The fix also disallows ADS in the
scenarios where the file name is passed through some dynamic variable.
An important thing about the fix is that it DOES NOT disallow ADS file
names if they are not dynamic (i.e. if the file is created by using some
option that needs local access to the MySQL server, for example the error log
file). The reasoning is that if some MySQL option related to files
requires access to the local machine (it is not dynamic), then the user can very
well create data in ADS by some other means. This fixes only those scenarios
which could allow users to create data in ADS over the wire.
File names with a colon are being disallowed only on Windows. UNIX
(Linux in particular) supports NTFS, but it would not be a common
scenario for someone to configure an NTFS file system to store MySQL
data on Linux.
Changes in file bug11761752-master.opt are needed due to
bug number 15937938.
The error codes returned from the merge file / temp file creation functions are
ignored.
Use the return codes of row_merge_file_create() and innobase_mysql_tmpfile()
to return the error to the caller if file creation fails.
Approved by Marko. rb#1618
DOPROCESSREPLY()
Description: The function DoProcessReply() calls
decrypt_message() in a while loop without
checking the available buffer
space. This can cause a buffer overflow and
crash the server. This patch is a fix provided
by Sawtooth to resolve the issue.
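
A minimal sketch of checking the remaining output space on every iteration of a
decrypt-style loop; the code is generic and not the actual DoProcessReply():

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for decrypting one record into a scratch buffer. */
    static size_t next_chunk(char *chunk, size_t max)
    {
        const char msg[] = "record";
        size_t n = sizeof(msg) - 1 < max ? sizeof(msg) - 1 : max;
        memcpy(chunk, msg, n);
        return n;
    }

    int main(void)
    {
        char out[32];
        size_t used = 0;
        int i;

        for (i = 0; i < 10; i++) {              /* the "while more records" loop */
            char chunk[16];
            size_t n = next_chunk(chunk, sizeof(chunk));
            if (used + n > sizeof(out)) {       /* the check that was missing */
                fprintf(stderr, "output buffer full, stopping\n");
                break;
            }
            memcpy(out + used, chunk, n);
            used += n;
        }
        printf("copied %zu bytes\n", used);
        return 0;
    }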
ROBUST AGAINST BUGS IN CALLERS".
Both the MDL subsystem and the Table Definition Cache code assume
that callers ensure that names of objects passed to them are
not longer than NAME_LEN bytes. Unfortunately, due to bugs in
callers this assumption might be broken in some cases. As a
result we get nasty bugs causing buffer overruns when we
construct an MDL key or TDC key from object names.
This patch makes the TDC code more robust against such bugs by
ensuring that we always check the size of the result buffer when
constructing TDC keys. This doesn't free callers from
ensuring that both db and table names are shorter than
NAME_LEN bytes. But at least this step prevents buffer
overruns in case of a bug in a caller, replacing them with less
harmful behavior.
This is the 5.1-only version of the patch.
This patch introduces a new version of the create_table_def_key()
helper function which constructs the TDC key without risk of
result buffer overrun. Places in the code that construct TDC keys
were changed to use this function.
Also changed the rm_temporary_table() and open_new_frm() functions
to avoid use of the "unsafe" strmov() and strxmov() functions and
use the safer strnxmov() instead.
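
A minimal sketch of building a db/table cache key with explicit bounds, assuming a
"<db>\0<table>\0" layout; this is illustrative and not the real create_table_def_key():

    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 64                      /* stand-in for the real limit */
    #define KEY_SIZE (NAME_LEN * 2 + 2)

    /* Write "<db>\0<table>\0" into key, truncating over-long names instead of
       overrunning the key buffer. */
    static size_t make_table_key(char *key, size_t key_size,
                                 const char *db, const char *table)
    {
        size_t db_len = strnlen(db, key_size - 2);
        size_t tbl_len;

        memcpy(key, db, db_len);
        key[db_len] = '\0';
        tbl_len = strnlen(table, key_size - db_len - 2);
        memcpy(key + db_len + 1, table, tbl_len);
        key[db_len + 1 + tbl_len] = '\0';
        return db_len + tbl_len + 2;         /* total key length */
    }

    int main(void)
    {
        char key[KEY_SIZE];
        size_t len = make_table_key(key, sizeof(key), "test", "t1");
        printf("key length: %zu\n", len);    /* 4 + 2 + 2 = 8 */
        return 0;
    }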
Problem:
Before the ALTER TABLE statement, the array
dict_index_t::stat_n_diff_key_vals had proper values calculated
and updated. But after the ALTER TABLE statement, all the values
of this array are 0.
Because of this, the statistics returned by innodb_rec_per_key() are
different before and after the ALTER TABLE statement. Running the
ANALYZE TABLE command populates the statistics correctly.
Solution:
After the ALTER TABLE statement, set the flag dict_table_t::stat_initialized
correctly so that the table statistics will be recalculated properly when
the table is next loaded. But note that we still don't choose loose
index scans. This fix only ensures that an ALTER TABLE does not change
the optimizer plan.
rb://1639 approved by Marko and Jimmy.