role
Reviewed-by: serg@mariadb.com
There are two issues here:
Issue #1: memory allocation.
An IO_CACHE that uses encryption uses a larger buffer (it needs space for the encrypted data,
decrypted data, an IO_CACHE_CRYPT struct to describe encryption parameters, etc.).
Issue #2: IO_CACHE::seek_not_done
When IO_CACHE objects are cloned, they still share the file descriptor.
This means an operation on one IO_CACHE may change the file read position,
which will confuse the other IO_CACHEs using it.
The fix for these issues:
Allocate the buffer to also include the extra size needed for encryption.
Perform the seek again after one IO_CACHE reads the file.
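
A standalone sketch (plain POSIX C++, not the server's IO_CACHE code) of
issue #2: two buffered readers that share one file descriptor also share the
kernel's file position, so each reader must re-seek to its own logical offset
before reading. The Reader struct and names below are illustrative only.

#include <unistd.h>
#include <cstddef>

// Each reader keeps its own logical position; seek_not_done is set
// whenever the other reader may have moved the shared descriptor.
struct Reader {
  int fd;             // shared file descriptor
  off_t pos;          // this reader's own logical position
  bool seek_not_done; // true if the fd position may not match `pos`
};

ssize_t reader_read(Reader &r, Reader &other, char *buf, size_t len) {
  if (r.seek_not_done && lseek(r.fd, r.pos, SEEK_SET) < 0)
    return -1;                // restore our position on the shared fd
  r.seek_not_done = false;
  ssize_t n = read(r.fd, buf, len);
  if (n > 0) r.pos += n;
  other.seek_not_done = true; // fd position no longer matches the other reader
  return n;
}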
fix printing precedence for BETWEEN, LIKE/ESCAPE, REGEXP, IN
don't use precedence for printing CASE/WHEN/THEN/ELSE/END
fix parsing precedence of BETWEEN, LIKE/ESCAPE, REGEXP, IN
support predicate arguments for IN, BETWEEN, SOUNDS LIKE, LIKE/ESCAPE,
REGEXP
use %nonassoc for unary operators
fix parsing of IS TRUE/FALSE/UNKNOWN/NULL
remove parser_precedence test as superseded by the precedence test
prefix unary operators don't need to have different precedence,
the syntax unambiguously specifies in what order they apply
the expression between INTERVAL and the unit does not need any
precedence rules; there is no ambiguity there
Item_ref should have the precedence of the item it's referencing
when enabling performance_schema,
the max allowed value limit should be larger than any auto-sized value
When the first argument to JSON_MERGE_PATCH was NULL and the second was
invalid JSON, the error code was garbage, so it should be set to 0
initially.
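
A minimal sketch of the fix pattern (parse_json is a toy stand-in, not the
real parser API): when the first document is NULL the parser never runs, so
the error code must start from 0 instead of stack garbage.

// Toy stand-in for the real JSON parser: returns 0 on success.
static int parse_json(const char *doc) { return doc[0] == '{' ? 0 : 1; }

int merge_patch_error(const char *doc1, const char *doc2) {
  int err = 0;              // the fix: initialize instead of leaving garbage
  if (doc1)
    err = parse_json(doc1); // only sets a real code when it actually runs
  if (doc2 && err == 0)
    err = parse_json(doc2);
  return err;               // was uninitialized here when doc1 == NULL
}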
is_bulk_op())' fails on UPDATE on a partitioned table with subquery
(MySQL:71630)
Analysis: The error is not checked, so the correct error state is not returned.
Fix: Check for the error and return the error state.
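
The shape of the fix as a hedged sketch (Table and fill_rows are hypothetical
stand-ins): the callee's return value must be tested and propagated instead
of silently discarded.

struct Table { int rows; };                               // toy stand-in
static bool fill_rows(Table *t) { return t == nullptr; }  // a call that can fail

static bool update_partitioned(Table *t) {
  if (fill_rows(t))  // before the fix this result was ignored
    return true;     // the fix: return the error state to the caller
  // ... perform the update ...
  return false;
}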
Diagnostics_area::set_error_status
Analysis: When strict mode is enabled, all warnings are converted to errors,
including those which are not caused by bad data.
Fix: The query should not be aborted on the warning that the limit on
examined rows was reached, because that warning is not caused by bad data.
So thd->abort_on_warning should be false.
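
A sketch of the decision, with illustrative warning kinds (not the server's
actual enum): in strict mode only data-quality warnings should escalate to
errors; hitting the examined-rows limit is administrative, not bad data.

enum WarnKind { WARN_BAD_DATA, WARN_ROWS_EXAMINED_LIMIT };

bool should_abort_on_warning(bool strict_mode, WarnKind w) {
  if (!strict_mode)
    return false;
  // the fix: never abort on the rows-examined-limit warning
  return w != WARN_ROWS_EXAMINED_LIMIT;
}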
- row_search_mvcc() should return DB_INTERRUPTED when it gets killed.
- Add a syncpoint for the ICP check.
- Add test coverage for the killed-during-ICP-check scenario.
Backport of MDEV-22761 fixes for ICP from 10.4 commits:
* a6f956488c712bef3b13660584d1b905e0c676cc
* c03885cd9ceb1ede7f49a9e218022b401b3a1e28
XtraDB was fixed in deb3b9a17498
Reviewer: Daniel Black
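
An illustrative loop in plain C++ (not row_search_mvcc itself) showing the
kill-check pattern: a long scan polls the session's kill flag and returns a
distinct interrupted status instead of running to completion.

#include <atomic>

enum Status { SCAN_END, SCAN_INTERRUPTED };

Status scan(const std::atomic<bool> &killed, int nrows) {
  for (int i = 0; i < nrows; i++) {
    if (killed.load(std::memory_order_relaxed))
      return SCAN_INTERRUPTED;  // analogous to returning DB_INTERRUPTED
    // ... evaluate the ICP condition for row i here ...
  }
  return SCAN_END;
}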
EVEN IF I LOG TO FILE.
Analysis:
----------
MYSQL_UPGRADE of the master breaks replication when
query logging is enabled with the FILE/NONE 'log-output'
option on the slave.
mysql_upgrade modifies the 'general_log' and 'slow_log'
tables after the logging is disabled as below:
SET @old_log_state = @@global.general_log;
SET GLOBAL general_log = 'OFF';
ALTER TABLE general_log
MODIFY event_time TIMESTAMP NOT NULL,
( .... );
SET GLOBAL general_log = @old_log_state;
and
SET @old_log_state = @@global.slow_query_log;
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE slow_log
MODIFY start_time TIMESTAMP NOT NULL,
( .... );
SET GLOBAL slow_query_log = @old_log_state;
In the binary log, only the ALTER statements are logged,
but not the SET statements which turn the logging ON/OFF.
So when the slave replays the binary log, the ALTER of the log
tables throws an error since the logging is enabled. Also,
the 'log-output' option is not checked to determine
whether to allow/disallow the ALTER operation.
Fix:
----
The 'log-output' option is included in the check while
determining whether the query logging happens using the
log tables.
Picked from the mysql repository at 0daaf8aecd8f84ff1fb400029139222ea1f0d812
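
A hedged sketch of the fixed check, with illustrative flag values rather than
the server's actual constants: whether a statement writes to the log tables
depends on both the logging switch and the log-output destination, so ALTER
on the log tables is legal whenever table output is not in effect.

enum { LOG_NONE = 1, LOG_FILE = 2, LOG_TABLE = 4 }; // illustrative values

bool logging_to_table(bool general_log_on, unsigned log_output) {
  // the fix: consult log_output too, not just the general_log switch
  return general_log_on && (log_output & LOG_TABLE);
}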
The crash was caused by improper raising of an error on replication checksum
verification at the time of server initialization. As there is no THD object
associated with the main initializing thread yet, the error text should be
assigned by calling a respective macro that is aware of that possibility.
Fixed accordingly.
[At merging to 10.4 the new test result file needs
+# restart: --master_verify_checksum=ON --debug_dbug=+d,corrupt_read_log_event_char
that mtr run will hint on.]
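
The pattern as an illustrative sketch (the THD accessor and error push are
toy stand-ins, not the actual macro): an error raised during early startup
cannot assume a per-thread THD exists, so the reporting path needs a
fallback.

#include <cstdio>

struct THD { int dummy; };                          // stand-in session type
static THD *current_session() { return nullptr; }   // null during startup
static void push_thd_error(THD *, const char *) {}  // per-connection path

void report_error(const char *msg) {
  if (THD *thd = current_session())
    push_thd_error(thd, msg);                 // normal: attach to the session
  else
    std::fprintf(stderr, "ERROR: %s\n", msg); // startup: plain error log
}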
fix an error with locked tables
Cleaned up the DROP (UDF) FUNCTION procedure and also made it check mysql.func (not only the loaded UDFs).
This bug could manifest itself for a query with WHERE condition containing
top level OR formula such that each conjunct contained a single-range
condition supported by the same index. One of these range conditions must
be fully covered by another range condition that is used later in the OR
formula. Additionally, at least one of these conditions should be ANDed with
a sargable range condition supported by a different index.
There were several attempts to fix related problems for OR conditions after
the backport of the range optimizer code from MySQL (commit
0e19f3e36f7842583feb6bead2c2600cd620bced). Unfortunately, the first of these
fixes contained a typo that remained unnoticed until recently. This typo
led to the rejection of valid range accesses. This patch fixes the typo.
The fix revealed another two bugs: one in a constructor for SEL_ARG,
the other in the function tree_or(). Both are fixed in this patch.
!= 2' failed in handler::ha_rnd_next(); / Assertion `table_list->table' failed in find_field_in_table_ref / ERROR 1901 (on optimized builds)
Add the same check for altering a DEFAULT as is used for CREATE TABLE.
st_mysql_show_var*, char*) through pointer to incorrect function type 'int (*)(THD *, st_mysql_show_var *, void *, system_status_var *, enum_var_type) errors
../sql/item_cmpfunc.cc:3650:14
Amend check for unread client data in threadpool.
THD::NET will have unread data when the client uses compression and
wraps multiple commands into a single compressed packet.
MariaDB C/C sends COM_STMT_RESET+COM_STMT_EXECUTE wrapped into
a single compressed packet when compression is on; thus, before this
patch, trying to use compression and prepared statements against a
threadpool-enabled server would result in a hang.
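
An illustrative shape of the amended check (the buffer bookkeeping is
simplified, not the real THD::NET layout): before parking the connection
back on the poller, the worker must drain any commands that are already
sitting decompressed in the buffer.

#include <cstddef>

struct Net {
  const char *buf;          // decompressed input buffer
  size_t read_pos, buf_len; // simplified bookkeeping
};

bool has_unread_data(const Net &net) {
  return net.read_pos < net.buf_len; // another command is already buffered
}

// Worker loop sketch:
//   do { process_one_command(net); } while (has_unread_data(net));
//   hand the socket back to the poll loop only after the buffer is drained.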
type: 255 meta: 4 (0004)
Analysis:
========
"mysqlbinlog -v" option will reconstruct row events and display them as
commented SQL statements. If this option is given twice, the output includes
comments to indicate column data types and some metadata.
`log_event_print_value` is the function responsible for printing values and
their types. This function doesn't handle the GEOMETRY type, hence the above
error gets printed.
Fix:
===
Add support for the GEOMETRY datatype.
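
A sketch of the shape of the fix, with illustrative type codes and printers:
the value printer dispatches on the column type, an unhandled type used to
surface as the bogus "type: 255 meta: 4" comment, and the fix adds a
GEOMETRY branch (255 is the protocol code for MYSQL_TYPE_GEOMETRY).

#include <cstddef>
#include <cstdio>
#include <cstring>

enum ColType { TYPE_LONG = 3, TYPE_GEOMETRY = 255 };

void print_value(ColType type, const unsigned char *val, size_t len) {
  switch (type) {
  case TYPE_LONG: {
    int v;
    std::memcpy(&v, val, sizeof v); // avoid an unaligned pointer cast
    std::printf("%d", v);
    break;
  }
  case TYPE_GEOMETRY:               // the added branch: dump as hex
    for (size_t i = 0; i < len; i++) std::printf("%02x", val[i]);
    break;
  default:
    std::printf("!! unknown type %d", (int) type); // what users used to see
  }
}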
Apologies, had a dirty branch before pushing:
This reverts commit 053653a23cac6f3f2e5288979438de27c9d0100a.
This reverts commit 0ff897807fc2f4a32e1ba1ae148005930ea604b5.
This reverts commit 85b085972b729f6c049050f851692c9a5b86f3d5.
This reverts commit f3f45e46b614bddcef0a37f4352c5909ca565d1d.
This reverts commit a470b3570a7ce2534c9021f3b84d7457a3ba08e1.
This reverts commit f8b8d202bc83d3de46c89ef86333fe602e711265.
This reverts commit 6b6f066fdd9f5f64813ded550e7dbda176ee3c82.
This reverts commit a701e9e6c390c3cbac69872e95b1aec565341d30.
This reverts commit c169838611e13c9f0559b2f49ba8c36aec11a78b.
* Maintain coding style in sql_yacc.yy in regards to optional clauses.
* Remove unused variable from sql_acl.cc.
* Update test case
Implemented the alter user syntax. Also tested that create user
creates users accordingly.
Extend the syntax accepted by the grammar to account for the new create user
and alter user syntax.
Passing a null pointer to a nonnull argument is not only undefined
behaviour, but it also grants the compiler the permission to optimize
away further checks whether the pointer is null. GCC -O2 at least
starting with version 8 may do that, potentially causing SIGSEGV.
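
A self-contained illustration of the hazard: strlen's parameter is declared
nonnull in glibc, so after the call GCC may assume the pointer cannot be
null and delete the later check.

#include <cstring>

int name_length(const char *p) {
  size_t n = strlen(p);   // argument is attribute((nonnull)) in glibc
  if (p == nullptr)       // GCC 8+ at -O2 may optimize this branch away,
    return -1;            // so the guard must come before the call
  return (int) n;
}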
For some reason, adding -fsanitize=undefined (cmake -DWITH_UBSAN=ON)
to the compilation flags will cause even more warnings to be emitted.
The warnings do look bogus, but the code can be simplified.
Problem:
=======
SHOW BINLOG EVENTS FROM <"random"-pos> caused a variety of failures, as
reported in MDEV-18046. They were fixed, but that approach is not
future-proof, nor is it optimal to add extra checks for the parameters of
each event being constructed.
Analysis:
=========
"show binlog events from <pos>" code considers the user given position as a
valid event start position. The code starts reading data from this event start
position onwards and tries to map it to a set of known events. Each event has
a specific event structure, and asserts have been added to ensure that the
read event data satisfies the event-specific requirements. When a random
position is supplied to the "show binlog events" command, these
event-specific checks fail and result in an assert.
For example: https://jira.mariadb.org/browse/MDEV-18046
In the bug description user executes CREATE TABLE/INSERT and ALTER SQL
commands.
When a crazy offset like "SHOW BINLOG EVENTS FROM 365" is provided, the code
assumes offset 365 is a valid event beginning, proceeds to EVENT_LEN_OFFSET,
reads some random length, and comes up with a crazy event which doesn't exist
in the binary log. In this quoted example scenario, the event read at offset
365 is considered an "Update_rows_log_event", which is not present in the
binary log. Since this is a random event, its validation fails and the code
results in an assert/segmentation fault, as shown below.
mysqld: /data/src/10.4/sql/log_event.cc:10863: Rows_log_event::Rows_log_event(
const char*, uint, const Format_description_log_event*):
Assertion `var_header_len >= 2' failed.
181220 15:27:02 [ERROR] mysqld got signal 6 ;
#7 0x00007fa0d96abee2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#8 0x000055e744ef82de in Rows_log_event::Rows_log_event (this=0x7fa05800d390,
buf=0x7fa05800d080 "", event_len=254, description_event=0x7fa058006d60) at
/data/src/10.4/sql/log_event.cc:10863
#9 0x000055e744f00cf8 in Update_rows_log_event::Update_rows_log_event
Since we are reading random data, repeating the same command SHOW BINLOG
EVENTS FROM 365 produces different types of crashes with different events.
MDEV-18046 reported 10 such crashes.
In order to avoid such scenarios, the user-provided starting offset needs to
be validated for correctness. The best way of doing this is to make use of
checksums if they are available. The MDEV-18046 fix introduced the
checksum-based validation.
The issue still remains in cases where binlog checksums are disabled. Please
find the following bug reports.
MDEV-22473: binlog.binlog_show_binlog_event_random_pos failed in buildbot,
server crashed in read_log_event
MDEV-22455: Server crashes in Table_map_log_event,
binlog.binlog_invalid_read_in_rotate failed in buildbot
Fix:
====
When the binlog checksum is disabled, perform a scan (reading event by event)
to validate the requested FROM <pos> offset. Starting from offset 4, read the
event_length of the next event in the binary log, and use that length to
advance the current offset to the next event. Repeat this process while the
current offset is less than the requested offset. If the current offset ends
up higher than the requested offset, report an appropriate invalid-offset
error.
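
A hedged sketch of the validation scan (the byte-layout constants follow the
binlog format described above; the in-memory log vector is a toy stand-in
for the real file reader, assuming a little-endian host):

#include <cstdint>
#include <cstring>
#include <vector>

const uint64_t BIN_LOG_HEADER_SIZE = 4; // events start after the magic bytes
const uint64_t EVENT_LEN_OFFSET    = 9; // length field inside an event header

static uint32_t event_len_at(const std::vector<unsigned char> &log,
                             uint64_t pos) {
  uint32_t len = 0;
  if (pos + EVENT_LEN_OFFSET + 4 <= log.size())
    std::memcpy(&len, &log[pos + EVENT_LEN_OFFSET], 4);
  return len;
}

bool is_valid_event_offset(const std::vector<unsigned char> &log,
                           uint64_t wanted) {
  uint64_t pos = BIN_LOG_HEADER_SIZE;
  while (pos < wanted && pos < log.size()) {
    uint32_t len = event_len_at(log, pos);
    if (len == 0) return false; // corrupt or truncated header: stop
    pos += len;                 // hop to the next event boundary
  }
  return pos == wanted;         // overshooting means `wanted` is mid-event
}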
Item_func_set_collation (on optimized builds)
This piece of the code in Item_func_or_sum::agg_item_set_converter:
if (!conv && ((*arg)->collation.repertoire == MY_REPERTOIRE_ASCII))
conv= new (thd->mem_root) Item_func_conv_charset(thd, *arg, coll.collation, 1);
was wrong because:
1. It could change Item_cache to Item_func_conv_charset
(with the old Item_cache in args[0]).
Such an Item type change is not always supported, e.g.
the code in Item_singlerow_subselect::reset() expects only Item_cache,
to be able to call Item_cache::set_null(). So it erroneously
reinterpreted Item_func_conv_charset as Item_cache and called
a non-existing method set_null(), which crashed the server.
2. The 1 in the last parameter to Item_func_conv_charset() was also
a problem. In MariaDB versions where the reported query did not crash,
it erroneously returned "empty set" instead of one row, because
the 1 made subselects execute too early and return NULL.
Fix:
1. Removing the above two lines from Item_func_or_sum::agg_item_set_converter()
2. Adding the repertoire test inside the constructor of Item_func_conv_charset,
so it now detects itself as "safe" in more cases than before.
This is needed to avoid new "Illegal mix of collations" errors after
removing the wrong code in various scenarios when character set
conversion from pure ASCII happens, including the reported scenario.
So now this sequence:
Item_cache -> Item_func_concat
is replaced with this compatible sequence (the top Item is still Item_cache):
new Item_cache -> Item_func_conv_charset -> Item_func_concat
Before the fix it was replaced with this incompatible sequence:
Item_func_conv_charset -> old Item_cache -> Item_func_concat
Remove incorrect BF (brute force) handling from lock_rec_has_to_wait_in_queue
and move the condition to the correct callers. Add a function to report
BF lock waits and assert if an incorrect BF-BF lock wait happens.
wsrep_report_bf_lock_wait
Add a new function to report a BF lock wait.
wsrep_assert_no_bf_bf_wait
Add a new function to check whether we have a
BF-BF wait; if we do, report the case and
assert, as it is a bug.
lock_rec_has_to_wait
Use the new wsrep_assert_no_bf_bf_wait to check for a BF-BF wait.
lock_rec_create_low
lock_table_create
Use the new function to report BF lock waits.
lock_rec_insert_by_trx_age
lock_grant_and_move_on_page
lock_grant_and_move_on_rec
Assert that the trx is not a Galera transaction, as VATS is
not compatible with Galera.
lock_rec_add_to_queue
If there is a conflicting lock in the queue, make sure that
the transaction is BF.
lock_rec_has_to_wait_in_queue
Remove the incorrect BF handling. If there are conflicting
locks in the queue, all transactions must wait.
lock_rec_dequeue_from_page
lock_rec_unlock
If there is a conflicting lock, make sure it is not
a BF-BF case.
lock_rec_queue_validate
Add a comment on the Galera record locking rules and use the
new function to report BF lock waits.
All attempts to reproduce the original assertion have
failed. Therefore, there is no test case in this commit.
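
An illustrative shape of the new assertion helper (simplified transaction
type, not the real lock_sys/wsrep code): if both the waiting and the blocking
transaction are BF, report both sides and assert, since BF transactions are
ordered by the cluster and must never wait on one another.

#include <cassert>
#include <cstdio>

struct Trx { unsigned long id; bool is_bf; }; // simplified transaction

void report_bf_lock_wait(const Trx &t) {
  std::fprintf(stderr, "BF lock wait: trx %lu (BF=%d)\n", t.id, (int) t.is_bf);
}

void assert_no_bf_bf_wait(const Trx &waiter, const Trx &holder) {
  if (waiter.is_bf && holder.is_bf) {
    report_bf_lock_wait(waiter);
    report_bf_lock_wait(holder);
    assert(!"BF-BF lock wait");  // such a wait is a bug
  }
}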
Passing a null pointer to a nonnull argument is not only undefined
behaviour, but it also grants the compiler the permission to optimize
away further checks whether the pointer is null. GCC -O2 at least
starting with version 8 may do that, potentially causing SIGSEGV.
These problems were caught in a WITH_UBSAN=ON build with the
Bug#7024 test in main.view.
Passing a null pointer to the "%s" argument of a printf-like
function is undefined behaviour. In the GNU libc implementation
of the printf() family of functions, it happens to work.
GCC 10.2.0 would diagnose this with -Wformat-overflow -Og.
In -fsanitize=undefined (WITH_UBSAN=ON) builds, a runtime error
would be generated. In some other builds, GCC 8 or later might infer
that the parameter is nonnull and optimize away further checks whether
the parameter is null, leading to SIGSEGV.
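
A portable guard for the case described, in plain C++: never hand a null
pointer to a "%s" conversion, even where glibc happens to print "(null)".

#include <cstdio>

void print_name(const char *name) {
  // glibc tolerates printf("%s", NULL); the C standard does not
  std::printf("name=%s\n", name ? name : "(null)");
}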
REPLICATE_DO_TABLE
Backporting fixes for:
MDEV-22317: SIGSEGV in my_free/delete_dynamic in optimized builds (ARIA)
Backported following commits from: 10.5.3
commit 77e1b0c39771f47bb2602530b8d1e584ac1d3731 -- Post push fix.
commit 2e6b21be4a8d0bf094da288cadff866f1bb38062
MDEV-22059: MSAN report at replicate_ignore_table_grant
Backported following commits from: 10.5.4
commit 840fb495ce2c0c00b20f2a9ba44b6fcc20c56118
This also fixes MDEV-20464.
--verbose
(This commit is exclusively for 10.1 branch, do not merge it to upper ones)
In case of a pattern of non-STMT_END-marked Rows-log-event (A) followed by
a STMT_END marked one (B) mysqlbinlog mixes up the base64 encoded rows events
with their pseudo sql representation produced by the verbose option:
BINLOG '
base64 encoded data for A
### verbose section for A
base64 encoded data for B
### verbose section for B
'/*!*/;
In effect, the produced BINLOG '...' query is not valid and is rejected with an error.
Examples of BINLOG statements malformed this way could be found in binlog_row_annotate.result,
which gets corrected with the patch.
The issue is fixed with the introduction of an auxiliary IO_CACHE that holds the verbose
comments until the terminal STMT_END event is found. The new cache is emptied
out after the two pre-existing ones are done at that point.
The correctly produced output for the above case is now as follows:
BINLOG '
base64 encoded data for A
base64 encoded data for B
'/*!*/;
### verbose section for A
### verbose section for B
Thanks to Alexey Midenkov for recognizing the problem and attempting to tackle it,
to Venkatesh Duggirala, who produced a patch for the upstream whose
idea is exploited here, as well as to the MDEV-23077 reporter LukeXwang, who
also contributed a piece of a patch aiming at this issue.
Extra: mysqlbinlog_row_minimal was refined to not emit mutable numeric values into the result file.
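
A sketch of the buffering idea using plain strings in place of the IO_CACHEs:
the "###" verbose comments are parked in a side buffer until the STMT_END
event arrives, and are emitted only after the BINLOG '...' statement is
closed.

#include <iostream>
#include <string>

struct RowsPrinter {
  std::string body, verbose; // stand-ins for the caches, incl. the new one

  void add_event(const std::string &base64, const std::string &comments,
                 bool stmt_end) {
    body    += base64 + "\n";
    verbose += comments;     // deferred instead of interleaved
    if (stmt_end) {
      std::cout << "BINLOG '\n" << body << "'/*!*/;\n" << verbose;
      body.clear();
      verbose.clear();
    }
  }
};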
The issue here was that the query was using the ORDER BY LIMIT optimization, where
the access method was changed from EQ_REF access to an index scan (an index that
would resolve the ORDER BY clause).
But the parameter READ_RECORD::unlock_row was not reset to rr_unlock_row, which is
used when the access method is not EQ_REF access.
Add proper error handling of innobase_get_computed_value() results in
row_upd_store_row/row_upd_store_v_row.
Also add an assertion in row_vers_build_clust_v_col to fail during row
purge.
Add one more assertion in row_sel_sec_rec_is_for_clust_rec for possible
future catches.