Fixing ROUND(date,0), TRUNCATE(date,x), FLOOR(date), CEILING(date)
to return the `int(8) unsigned` data type.
Details:
1. Cleanup: moving the virtual implementations of
   - Type_handler_temporal_result::Item_func_int_val_fix_length_and_dec()
   - Type_handler_temporal_result::Item_func_round_fix_length_and_dec()
   to Type_handler_date_common. Other temporal data type handlers
   override these methods anyway, so they were effectively DATE specific.
   This change makes the code clearer.
2. Backporting DTCollation_numeric from 10.5, to make the code easier to reuse.
3. Adding a `preferred_attrs` argument to Item_func_round::fix_arg_int(). Now
   Type_handler_xxx::Item_func_round_fix_length_and_dec() works as follows:
   - The INT-alike and YEAR handlers copy preferred_attrs from args[0].
   - The DATE handler passes explicit attributes, to get `int(8) unsigned`.
   - The hex hybrid handler passes NULL, so fix_arg_int() calculates the attributes.
4. Type_handler_date_common::Item_func_int_val_fix_length_and_dec()
   now sets the type handler and attributes to get `int(8) unsigned`.
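
As a rough illustration of the intended behavior (the table name and the
date literal are invented for this sketch, not taken from the patch):

  -- ROUND()/FLOOR()/CEILING() on a DATE should now be typed as
  -- int(8) unsigned, holding the numeric form of the date.
  CREATE OR REPLACE TABLE t1 AS
    SELECT ROUND(DATE'2001-02-03') AS r,
           FLOOR(DATE'2001-02-03') AS f;
  SHOW CREATE TABLE t1;
  -- Expected per this commit: both r and f are int(8) unsigned,
  -- containing 20010203.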

don't GRANT UPDATE ON mysql.global_priv TO mariadb.sys@localhost;

Classes that handle privilege tables (like Tables_priv_table)
may read some columns conditionally, but they expect a certain
minimal number of columns to always exist.
Add a check for the minimal required number of columns in privilege tables;
don't use a table that has fewer columns than required.
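
A loose illustration of what the guard protects against (this query only
inspects the column count; the actual required minimum lives in the server
code and is not shown here):

  -- Privilege-table classes now refuse a table with fewer columns
  -- than they require, instead of reading past the end.
  SELECT COUNT(*) AS cols
  FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'tables_priv';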

1. Fixing ROUND(x) and TRUNCATE(x,0) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
   input to preserve the exact data type of the argument when possible.
2. Fixing FLOOR(x) and CEILING(x) with TINYINT, SMALLINT, MEDIUMINT, BIGINT
   to preserve the exact data type of the argument.
3. Adding a dedicated Type_handler_year::Item_func_round_fix_length_and_dec()
   to handle ROUND(x) and TRUNCATE(x,y) for YEAR(2) and YEAR(4) input more
   easily. They still return INT(2) UNSIGNED and INT(4) UNSIGNED respectively,
   as before.
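
A hedged sketch of the type preservation (table names invented):

  CREATE OR REPLACE TABLE t2 (a TINYINT, b BIGINT);
  CREATE OR REPLACE TABLE t3 AS
    SELECT FLOOR(a) AS fa, CEILING(b) AS cb FROM t2;
  SHOW CREATE TABLE t3;
  -- Expected per this commit: fa stays a tinyint and cb stays a
  -- bigint, instead of both being widened to a common larger type.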

argument
Implementing dedicated fixing methods:
- Type_handler_bit::Item_func_round_fix_length_and_dec()
- Type_handler_bit::Item_func_int_val_fix_length_and_dec()
- Type_handler_typelib::Item_func_round_fix_length_and_dec()
because the inherited methods did not work well.
Fixing:
- Type_handler_typelib::Item_func_int_val_fix_length_and_dec()
It did not work well because it used args[0]->max_length to
calculate the result data type. In the case of ENUM and SET this was
not correct, because in FLOOR() and CEILING() context
ENUM and SET return at most 5 digits (65535 is the biggest
possible value).
Misc:
- Changing the API of
  Type_handler_bit::Bit_decimal_notation_int_digits(const Item *item)
  to a more generic form:
  Type_handler_bit::Bit_decimal_notation_int_digits_by_nbits(uint nbits)
- Fixing Type_handler_bit::Bit_decimal_notation_int_digits_by_nbits() to
  return the exact number of decimal digits for all nbits in 1..64.
  The old implementation was approximate.
  This change gives better (more precise) data types.
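
An illustrative sketch of the new typing (table and column names invented):

  -- In FLOOR()/CEILING() context, ENUM and SET values are at most
  -- 65535, so 5 decimal digits suffice; BIT(10) can hold at most
  -- 1023, which needs exactly 4 digits.
  CREATE OR REPLACE TABLE t4 (e ENUM('a','b','c'), b BIT(10));
  CREATE OR REPLACE TABLE t5 AS
    SELECT FLOOR(e) AS fe, ROUND(b) AS rb FROM t4;
  SHOW CREATE TABLE t5;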

- Type_handler_hex_hybrid did not override
  Type_handler_string_result::Item_func_round_fix_length_and_dec(),
  so the result type of ROUND(0xFFFFFFFFFFFFFFFF) was erroneously
  calculated as DOUBLE with a wrong length.
  Overriding Item_func_round_fix_length_and_dec() to calculate
  the result type as INT/BIGINT.
  Also, fixing Item_func_round::fix_arg_int() to use
  args[0]->decimal_precision() instead of args[0]->max_length
  when calculating this->max_length, to get a correct result
  for hex hybrids.
- Type_handler_hex_hybrid::Item_func_int_val_fix_length_and_dec()
  called item->fix_length_and_dec_int_or_decimal(), which did not
  produce a correct result data type for hex hybrids.
  Implementing dedicated code instead, to return INT UNSIGNED or
  BIGINT UNSIGNED depending on the number of digits in the argument.
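
A small sketch of the corrected result types (table name invented):

  CREATE OR REPLACE TABLE t6 AS
    SELECT ROUND(0xFFFFFFFFFFFFFFFF) AS r, FLOOR(0xFF) AS f;
  SHOW CREATE TABLE t6;
  -- Expected per this commit: r is a bigint unsigned holding
  -- 18446744073709551615, not a double; f is a small unsigned
  -- integer type sized from the number of digits.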

line 67
Don't allow instant ALTER of a field which is a prefix part of some key.
is_part_of_a_key_prefix(): I hope the name of the function is clear.
ha_innobase::can_convert_string()
ha_innobase::can_convert_varstring()
ha_innobase::can_convert_blob(): it can't when the field is_part_of_a_key_prefix()
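
A hedged sketch of the restriction (table, column and sizes invented; the
exact set of refused conversions is defined by the can_convert_*() checks
above):

  CREATE OR REPLACE TABLE t7 (a VARCHAR(100), KEY k (a(10))) ENGINE=InnoDB;
  -- a is only covered by a key *prefix*, so per this commit such a
  -- field must not be converted instantly:
  ALTER TABLE t7 MODIFY a VARCHAR(200), ALGORITHM=INSTANT;
  -- Expected: refused; a rebuilding algorithm has to be used instead.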

ROUND() and TRUNCATE()
Fixing functions CEILING and FLOOR to return
- TIME for TIME input
- DATETIME for DATETIME and TIMESTAMP input
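
An illustration of the intended return types (table name invented):

  CREATE OR REPLACE TABLE t8 AS
    SELECT CEILING(TIME'10:20:30.5') AS ct,
           FLOOR(TIMESTAMP'2001-02-03 10:20:30.5') AS fd;
  SHOW CREATE TABLE t8;
  -- Expected per this commit: ct is a TIME column and fd is a
  -- DATETIME column, rather than both decaying to numeric types.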

KILL and sequences
Continue to support the hack where the current select equals the builtin
select if the select stack is empty, even after subselects.

An instant ADD/DROP/reorder column could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
dict_table_t::prepare_instant(): Add debug assertions to catch
ROW_FORMAT mismatch.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
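
A sketch of the scenario being fixed (table name invented; changing the
global default requires SUPER):

  SET GLOBAL innodb_default_row_format = COMPACT;
  CREATE OR REPLACE TABLE t9 (a INT) ENGINE=InnoDB;
  SET GLOBAL innodb_default_row_format = DYNAMIC;
  -- Before the fix, the dummy table object built for this instant
  -- ALTER could pick up the new default (DYNAMIC) instead of the
  -- table's actual ROW_FORMAT (COMPACT):
  ALTER TABLE t9 ADD COLUMN b INT, ALGORITHM=INSTANT;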

In commit 0e5a4ac2532c64a545796c787354dc41d61d0e62 (MDEV-15562)
we introduced a bogus debug assertion, demanding that dict_col_t::len
never exceed some maximum value. The intention was to match what
dict_index_add_col() is doing. But the assignment field->fixed_len = 0
applies to dict_field_t::fixed_len, not dict_col_t::len!

An assertion
`server_state_.rollback_mode() == wsrep::server_state::rm_async`
fired in before_command() when
- thread-handling was set to pool-of-threads, and
- a BF abort happened between the client session's calls to
  wait_rollback_complete_and_acquire_ownership() and before_command().
This commit introduces a test case to reproduce the crash and
updates the wsrep-lib submodule to a fixed version.

The sporadic test hangs happen because of a mutex deadlock between InnoDB
background threads and the two test connection executions.
The test sets the variable innodb_disallow_writes, which blocks all writes
to the filesystem. The test logic is to execute an INSERT, which should hang
because filesystem writes are blocked, and to verify through another session,
by SELECT, that this hang happens. The SELECT session will then release the
innodb_disallow_writes blocking.
However, the filesystem write blocking also affects InnoDB background threads,
and they may hang while keeping some other resources locked.
As an example, in one test hang situation, buffer pool access was blocked.
And if the buffer pool is blocked, the test connections will be blocked as
well, and the SELECT session will not be able to continue and release
innodb_disallow_writes.
The fix in this commit is a refactoring of the test logic.
The test will now first set the innodb_disallow_writes blocking, and then
record a hash of the data directory's filesystem contents. This works as a
checksum of the state of the data in the datadir.
Then some SQL load is tried on both nodes; these sessions will block
due to the frozen filesystem state. The test will sleep briefly to allow
the InnoDB background threads to loop and possibly encounter the
innodb_disallow_writes blocking as well.
After the sleep, the test will record the filesystem checksum a second time,
and then release the innodb_disallow_writes blocking.
Finally, the two checksums are compared; they should be identical, verifying
that nothing was written to the datadir during the test execution.
The checksum is implemented as an md5sum hash over all files found in the
datadir by the find command; all these file hashes are hashed together by one
more md5sum.
The test therefore depends on md5sum and find. find may work differently in
some OS distributions; e.g. FreeBSD may be problematic.

'LOCK_thd_data' and 'share->intern_lock' / 'lock->mutex'
Add `find_thread_by_id_with_thd_data_lock`, which will be used only when
killing a thread. This version needs to take the `thd->LOCK_thd_data` lock.

Fixed the wsrep_notify.sh script so it only reports status changes on
'joined', 'synced', 'donor'.

identifier
If there is no current_select and the variable is not found among SP
variables, it can only be an error.

The problem was that the code didn't handle a transaction created in InnoDB
as part of a failed mysql_lock_tables().
|
| |
| |
| |
| |
| |
| |
| | |
Fix stale virtual field value in 4 cases: when virtual field depends
on row_start/row_end in timestamp/trx_id versioned table. row_start
dep is recalculated in vers_update_fields() (SQL and InnoDB
layer). row_end dep is recalculated on history row insert.
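
A rough sketch of one affected setup (names invented; a virtual column reads
the row_start generated column of a versioned table):

  CREATE OR REPLACE TABLE t10 (
    a INT,
    rs TIMESTAMP(6) GENERATED ALWAYS AS ROW START,
    re TIMESTAMP(6) GENERATED ALWAYS AS ROW END,
    v TIMESTAMP(6) AS (rs) VIRTUAL,
    PERIOD FOR SYSTEM_TIME(rs, re)
  ) WITH SYSTEM VERSIONING ENGINE=InnoDB;
  INSERT INTO t10 (a) VALUES (1);
  -- Before the fix, v could keep a stale value after the UPDATE
  -- changed row_start; it is now recalculated in vers_update_fields().
  UPDATE t10 SET a = 2;
  SELECT a, rs, v FROM t10;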

REPLACE on a system-versioned table
make_versioned_helper() appended a new update field unconditionally,
while it should check whether this field already exists in the update vector.
Misc renames to conform to the versioning prefix. The vers_update_fields()
name conforms with the SQL-layer TABLE::vers_update_fields().
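
A minimal sketch of the failing pattern (names invented):

  CREATE OR REPLACE TABLE t11 (id INT PRIMARY KEY, a INT)
    WITH SYSTEM VERSIONING ENGINE=InnoDB;
  INSERT INTO t11 VALUES (1, 10);
  -- REPLACE of an existing row runs an internal update;
  -- make_versioned_helper() must not append the versioning field
  -- to the update vector a second time here.
  REPLACE INTO t11 VALUES (1, 20);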

The issue was:
T1, a parallel slave worker thread, is waiting for another worker thread to
commit. While waiting, it holds the MDL_BACKUP_COMMIT lock.
T2, working for mariabackup, is doing BACKUP STAGE BLOCK_COMMIT and blocks
all commits.
This causes a deadlock, as the thread T1 is waiting for can't commit.
Fixed by moving the locking of MDL_BACKUP_COMMIT from ha_commit_trans() to
commit_one_phase_2().
Other things:
- Added a new argument to ha_commit_one_phase() to signal whether the
  transaction was a write transaction.
- Ensured that ha_maria::implicit_commit() is always called under
  MDL_BACKUP_COMMIT. This code is not needed in 10.5.
- Ensured that the MDL_Request members 'type' and 'ticket' are always
  initialized. This makes it easier to check the state of the MDL_Request.
- Moved thd->store_globals() earlier in handle_rpl_parallel_thread(), as
  thd->init_for_queries() could use an MDL that could crash if store_globals
  were not called.
- Don't call ha_enable_transactions() in THD::init_for_queries(), as this
  is both slow (uses MDL locks) and not needed.
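
A hedged sketch of the conflicting pattern, with the two sessions shown as
comments (not a runnable single-session script):

  -- session 1 (mariabackup, or any client driving a backup):
  BACKUP STAGE START;
  BACKUP STAGE BLOCK_COMMIT;  -- blocks all commits server-wide
  -- session 2 (conceptually, the parallel slave worker T1): waits
  -- for a previous transaction to commit while already holding
  -- MDL_BACKUP_COMMIT, the combination that used to deadlock
  -- before the lock was moved to commit_one_phase_2().
  BACKUP STAGE END;           -- session 1 releases the block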

field->col->mbmaxlen == 0' failed in dict_index_add_to_cache
is_part_of_a_key(): detect whether a TEXT field is part of some key.
ha_innobase::can_convert_blob(): now correctly detects whether our blob
is part of some key. Previously the check didn't work in some cases.

tables on Windows
An overflow was happening on Windows because sizeof(ulong) is 4 bytes there,
while it is 8 bytes on Linux.
Switched avg_frequency and avg_length for column statistics to ulonglong.
Switched avg_frequency for index statistics to ulonglong.

clause.
m_file[0] is not always a good sample.

commit 854c219a7f0e1878517d5a821992f650342380dd (MDEV-17301)
broke a constraint: Fixed-length columns cannot be extended in InnoDB
without rebuilding the table.
ha_innobase::can_convert_string(): Correct the condition. We must
not allow any instantaneous change to the length of CHAR columns
measured in characters. For any format other than ROW_FORMAT=REDUNDANT,
we can allow the length in bytes to be extended if mbminlen<mbmaxlen held
before the change of the character set.
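
A sketch of the corrected rule (table name invented): the character-set
change below keeps the CHAR length in characters (10) while extending the
byte length, and utf8mb3 had mbminlen < mbmaxlen before the change:

  CREATE OR REPLACE TABLE t12 (c CHAR(10) CHARACTER SET utf8mb3)
    ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
  ALTER TABLE t12 MODIFY c CHAR(10) CHARACTER SET utf8mb4,
    ALGORITHM=INSTANT;
  -- Expected per this commit: allowed here, because the length in
  -- characters does not change; an instant change to the length in
  -- characters is never allowed.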

When a high-priority replication slave applier encounters a lock conflict in
InnoDB, it will force the conflicting lock holder transaction (the victim) to
roll back. This is a must in the multi-master synchronous replication model
to avoid cluster lock-up.
This high-priority victim abort (aka "brute force" (BF) abort) is started
from the InnoDB lock manager while holding the victim transaction's (trx)
mutex. Depending on the execution state of the victim transaction, it may
happen that the BF abort calls THD::awake() to wake up the victim transaction
for the rollback.
Now, if the BF abort requires THD::awake() to be called, the applier thread
executes the locking protocol: victim trx mutex -> victim THD::LOCK_thd_data.
If, at the same time, another DBMS superuser issues a KILL command to abort
the same victim, it executes the locking protocol:
victim THD::LOCK_thd_data -> victim trx mutex.
These two locking protocols acquire the mutexes in opposite order, hence an
unresolvable mutex locking deadlock may occur.
The fix in this commit adds a THD::wsrep_aborter flag to synchronize who can
kill the victim. This flag is set both when BF abort is called for from
InnoDB and by the KILL command.
Either path of victim killing will bail out if the victim's wsrep_aborter is
already set, to avoid mutex conflicts with the other aborter's execution.
THD::wsrep_aborter records the aborter THD's ID. This is needed to preserve
the right to kill the victim from different locations for the same aborter
thread. It is also good for error logging, to see who is responsible for the
abort.
A new test case was added in galera.galera_bf_kill_debug.test for the
scenario where the wsrep applier thread and a manual KILL command try to kill
the same idle victim.

The only InnoDB changes between Percona XtraDB Server 5.6.47-87.0
and 5.6.48-88.0 are related to InnoDB changes between MySQL 5.6.47
and MySQL 5.6.48, which we had already applied.

There were no InnoDB changes between MySQL 5.6.48 and MySQL 5.6.49.

number is in the future"
- The problem is that the test case creates iblogfile* files, so existing
  ibdata pages could point to a future LSN. The fix is to take a backup of
  the data before the iblogfile* creation and apply it before exiting the
  test case.

failed in Diagnostics_area::set_ok_status on FUNCTION replace
When there is REPLACE in the statement, sp_drop_routine_internal() returns
0 (SP_OK) on success, which is then assigned to ret. So ret becomes false
and the error state is lost. The expression inside DBUG_ASSERT()
evaluates to false, and thus the assertion fails.

Filesort_buffer::spaceleft | SIGSEGV in __memmove_avx_unaligned_erms from my_b_write
Make sure that the allocated sort buffer has space for at least MERGEBUFF2
keys. The issue here was that the record length was quite high and the sort
buffer size very small, due to which we ended up with zero keys in the sort
buffer. Sort_param::max_keys_per_buffer was zero in such a case, due to which
we were flushing an empty sort buffer to disk.
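
A rough reproduction sketch (sizes invented; uses the Sequence engine for
row generation):

  SET SESSION sort_buffer_size = 1024;   -- deliberately tiny
  CREATE OR REPLACE TABLE t13 (a INT, b VARCHAR(5000));
  INSERT INTO t13 SELECT seq, REPEAT('x', 5000) FROM seq_1_to_100;
  -- Wide rows plus a tiny buffer used to give zero keys per buffer
  -- and an empty flush; the fix guarantees room for MERGEBUFF2 keys.
  SELECT a FROM t13 ORDER BY b LIMIT 5;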

https://github.com/codership/mariadb-server into 10.2-MDEV-18838

[Element_type = Item *]: Assertion `n < m_size' failed.
Allocate space for fields inside the window function (arguments, PARTITION BY
and ORDER BY clauses) in the ref pointer array. All fields inside the window
function are part of the temporary table that is required for the window
function computation.
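
A minimal sketch of the affected construct (names invented): every field
used in the window function's argument, PARTITION BY and ORDER BY needs a
slot in the ref pointer array:

  CREATE OR REPLACE TABLE t14 (grp INT, val INT);
  INSERT INTO t14 VALUES (1, 10), (1, 20), (2, 30);
  SELECT grp, val,
         SUM(val) OVER (PARTITION BY grp ORDER BY val) AS running
  FROM t14;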

is_bulk_op())' failed in Diagnostics_area::set_ok_status
The error state is not stored in check_and_do_in_subquery_rewrites() when
there is an illegal combination of optimizer switches, so all the functions
eventually return false. Thus the assertion failure.

The client can only find out that the server has disconnected when it tries
to read or send something. If the server gets disconnected before
send_client_reply_packet(), the client will try to send authentication
information, but it will fail. However, if the client is fast enough to send
the authentication information before the disconnect, it will notice the
disconnect when reading the OK packet. So the client can fail on read or on
write. It is unpredictable, because the processes are unsynchronized and this
could happen in either order.