* on startup
  Services which are registered with 2 arguments ("C:\path\to\mysqld.exe
  service_name") would pass a non-null-terminated command line to
  win_main(), and this would crash in defaults handling.
  Make sure win_main() gets a null-terminated argv.
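A minimal standalone sketch of the argv fix described above, assuming the command line arrives as a pointer/length pair without a guaranteed terminating NUL (the helper name and the splitting logic are hypothetical, not the actual Windows service code):

```cpp
#include <cstddef>
#include <string>
#include <vector>

std::vector<char*> build_argv(const char* cmdline, std::size_t len,
                              std::vector<std::string>& storage) {
  std::string line(cmdline, len);   // copies exactly len bytes; safe even
                                    // if the source lacks a terminating NUL
  for (std::size_t pos= 0; pos < line.size();) {
    std::size_t end= line.find(' ', pos);
    if (end == std::string::npos)
      end= line.size();
    if (end > pos)
      storage.emplace_back(line.substr(pos, end - pos));
    pos= end + 1;
  }
  std::vector<char*> argv;
  for (std::string& arg : storage)
    argv.push_back(&arg[0]);        // std::string data is NUL-terminated
  argv.push_back(nullptr);          // argv[argc] must be a null pointer
  return argv;
}
```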
* Quick grouping is not supported for JSON_OBJECTAGG (the same holds for
  GROUP_CONCAT), so make sure that Item::quick_group is set to FALSE. We
  must make sure that in the case of JSON_OBJECTAGG we don't create an
  index over the grouping fields of the temp table and update the result
  after each iteration. Instead we should first sort the result according
  to the GROUP BY fields, then perform the grouping and write the result
  to the temp table.
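In miniature, the sort-first strategy described above, using plain C++ containers instead of the server's Item and temp-table machinery (the row type and the concatenation are illustrative, not the actual JSON_OBJECTAGG code):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Row { std::string group_key; std::string value; };

int main() {
  std::vector<Row> rows= {{"b", "2"}, {"a", "1"}, {"a", "3"}};
  // 1) Sort by the GROUP BY fields...
  std::sort(rows.begin(), rows.end(),
            [](const Row& x, const Row& y) { return x.group_key < y.group_key; });
  // 2) ...then aggregate each group in a single pass, writing the finished
  // result once per group (no per-row updates through a temp-table index).
  for (std::size_t i= 0; i < rows.size();) {
    std::string agg;
    std::size_t j= i;
    for (; j < rows.size() && rows[j].group_key == rows[i].group_key; j++)
      agg+= (j > i ? "," : "") + rows[j].value;
    std::cout << rows[i].group_key << ": " << agg << "\n";
    i= j;
  }
}
```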
* Failure to do this causes crashes if query output is piped to an output
  that requires a [pseudo] temporary table.
* Only affects DBUG builds.
* MDL_lock::Ticket_list::remove_ticket(): reduce algorithmic complexity
  from O(N) to O(1).
  MDL_lock::Ticket_list::clear_bit_if_not_in_list(): removed.
  MDL_lock::Ticket_list::m_type_counters: a map from ticket type to count.
  Initialization is a memset(0), which takes time.
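A standalone sketch of the counter idea with hypothetical, simplified types (the real code uses an intrusive list whose unlink is O(1); std::list::remove here is only a stand-in):

```cpp
#include <array>
#include <cassert>
#include <list>

enum enum_mdl_type { MDL_SHARED, MDL_SHARED_WRITE, MDL_EXCLUSIVE, MDL_TYPE_END };

struct MDL_ticket { enum_mdl_type type; };

class Ticket_list {
  std::list<MDL_ticket*> m_list;
  std::array<unsigned, MDL_TYPE_END> m_type_counters{};  // zero-initialized
public:
  void add_ticket(MDL_ticket *t) {
    m_list.push_back(t);
    m_type_counters[t->type]++;
  }
  void remove_ticket(MDL_ticket *t) {
    m_list.remove(t);               // the real intrusive list unlinks in O(1)
    assert(m_type_counters[t->type] > 0);
    m_type_counters[t->type]--;     // bookkeeping stays O(1)
  }
  // Answers "is any ticket of this type present?" without an O(N) scan,
  // which is what made a clear_bit_if_not_in_list()-style helper unnecessary.
  bool has_ticket_of_type(enum_mdl_type type) const {
    return m_type_counters[type] != 0;
  }
};
```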
* Use ilist instead of I_P_List because it is generally slightly faster
  at inserting, removing and iterating.
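For illustration, the intrusive-list idea behind ilist in self-contained form (simplified; the real template lives in include/ilist.h): the element embeds its link node, so insertion and removal need no allocator, and removal is O(1) given only a pointer to the element.

```cpp
#include <cstdio>

struct ilist_node {
  ilist_node *next= nullptr, *prev= nullptr;
};

// An element embeds its link node instead of being wrapped by one.
struct waiter : ilist_node {
  int id;
  explicit waiter(int i) : id(i) {}
};

struct ilist {
  ilist_node head;                  // sentinel node
  ilist() { head.next= head.prev= &head; }
  void push_back(ilist_node *n) {
    n->prev= head.prev; n->next= &head;
    head.prev->next= n; head.prev= n;
  }
  static void remove(ilist_node *n) {   // O(1), no allocator involved
    n->prev->next= n->next; n->next->prev= n->prev;
  }
};

int main() {
  waiter a(1), b(2);
  ilist l;
  l.push_back(&a); l.push_back(&b);
  ilist::remove(&a);
  for (ilist_node *n= l.head.next; n != &l.head; n= n->next)
    std::printf("%d\n", static_cast<waiter*>(n)->id);  // prints 2
}
```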
* The parser must reject DDL operations on temporary objects (including
  temporary tables and sequences) when they may modify or alter such an
  object. The rejection applies regardless of the binlogging capability of
  the server or connection (it has already been in place for binloggable
  DMLs). The patch implements this requirement. A binlog test is added.
* Backport this fix:
  CLX-105: UPDATE query causes crash
  LEX::print() should not crash when the UPDATE has no WHERE clause.
  Fixes CLX-373.
* When converting a table (test.s3_table) from S3 to another engine, the
  following is logged to the binary log:
    DROP TABLE IF EXISTS test.t1;
    CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine
    INSERT rows to test.t1 in binary-row-log-format
  The bug is that the above statements are logged one by one to the binary
  log. This means that a fast slave, configured to use the same S3 storage
  as the master, could execute the DROP and CREATE from the binary log
  before the master has finished the ALTER TABLE.
  In this case the slave would ignore the DROP (as it's on an S3 table),
  but it would stop on the CREATE of the local table, as the table still
  exists in S3. The REPLACE part would be ignored by the slave, as it
  can't touch the S3 table.
  The fix is to ensure that all the above statements are written to the
  binary log AFTER the table has been deleted from S3.
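A hypothetical sketch of that ordering (the queue type and helper names are illustrative, not the actual ALTER TABLE code): collect the statements and write them only once the S3 table is gone, so a fast slave cannot replay them too early.

```cpp
#include <string>
#include <vector>

struct Binlog_queue {
  std::vector<std::string> stmts;
  void queue(std::string s) { stmts.push_back(std::move(s)); }
  void flush() { /* write all queued statements to the binary log here */ }
};

void convert_from_s3(Binlog_queue &log) {
  log.queue("DROP TABLE IF EXISTS test.t1");
  log.queue("CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine");
  // ...queue the row events for the copied data...
  // delete_table_from_s3("test", "t1");  // hypothetical; must finish first
  log.flush();   // only now do the statements become visible to slaves
}
```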
* - Added missing test for binlog_filter to ALTER TABLE.
* - Rewrote bool Query_compressed_log_event::write() to make it more
    readable (no logic changes).
  - Changed the DBUG_PRINT of 'is_error:' to 'is_error():' to make it
    easier to find real errors when searching for "error:" in traces.
  - Ensure that 'db' is never null in Query_log_event (simplified code).
|
|\ |
|
| | |
|
| |
| |
| |
| |
| |
| | |
* plugin are enabled
  Make sure to initialize SSL early enough, when encryption plugins are
  loaded.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* /data/src/10.4-bug/sql/rpl_parallel.cc, line 470 upon shutdown during FTWRL
  Problem: when we issue FTWRL in parallel with shutdown, there is a race
  between FTWRL and shutdown. Shutdown might destroy the mutex
  (pool->LOCK_rpl_thread_pool) before FTWRL can lock it, so the FTWRL
  thread can crash.
  Solution: mysql_mutex_destroy(pool->LOCK_rpl_thread_pool) should wait
  for the FTWRL thread to complete its work, and only then destroy the
  mutex. So slave_prepare_for_shutdown() just deactivates the pool, and
  the mutex is destroyed later in end_slave().
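A standalone sketch of the two-phase teardown, using std:: primitives in place of the real rpl_parallel types:

```cpp
#include <mutex>

struct rpl_thread_pool {
  std::mutex LOCK_rpl_thread_pool;   // mysql_mutex_t in the real code
  bool inited= true;
};

void slave_prepare_for_shutdown(rpl_thread_pool *pool) {
  std::lock_guard<std::mutex> guard(pool->LOCK_rpl_thread_pool);
  pool->inited= false;               // deactivate only; do NOT destroy here
}

void end_slave(rpl_thread_pool *pool) {
  // All users of the pool (including FTWRL) are finished by now, so
  // destroying the mutex (the destructor here; mysql_mutex_destroy() in
  // the server) can no longer race with a concurrent lock attempt.
  delete pool;
}
```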
* and put them all together in mysqld.cc
|
| |
| |
| |
| | |
* Item_field::is_json_value() implemented.
|
| |
| |
| |
| | |
* Warning message and function result fixed.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* Item_ident::print or append_identifier
  After this code:
    end_inplace:
      if (thd->locked_tables_list.reopen_tables(thd, false))
        goto err_with_mdl_after_alter;
  the table is not reopened (need_reopen is false), but
  some_table_marked_for_reopen is reset to false.
  An Item_field is allocated on table lock and assigned a new name on the
  first ALTER, which is then freed at the end of the command. The second
  ALTER accesses this Item_field and gets a garbage value.
* Make mark_join_nest_as_const() print its action into the trace.
* MDEV-22829 SIGSEGV in _ma_reset_history on LOCK
* MDEV-22048 Assertion `binlog_table_maps == 0 ||
  locked_tables_mode == LTM_LOCK_TABLES' failed in
  THD::reset_for_next_command
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* The problem was that FLUSH TABLES was trying to read the latest sequence
  state, which conflicted with a running ALTER SEQUENCE. Removed the
  reading of the state when opening a table for FLUSH, as it's not needed
  in this case.
  Other thing:
  - Fixed a potential issue with concurrently running ALTER SEQUENCE where
    the later ALTER could potentially read old data.
* MDEV-22649 SIGSEGV in ha_partition::create_partitioning_metadata on ALTER
  MDEV-22804 SIGSEGV in ha_partition::create_partitioning_metadata
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* MCOL-3875 Columnstore write cache
  The main change is to make the thr_lock function get_status return a
  value that indicates whether we have to abort the lock.
  Other thing:
  - Made start_bulk_insert() and end_bulk_insert() protected so that the
    insert cache can use them.
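A hedged sketch of the callback change (simplified types; the real hook is the get_status member of THR_LOCK in mysys): returning a status lets the engine tell the lock layer to abort.

```cpp
typedef char my_bool;

struct THR_LOCK {
  // Return non-zero to abort the lock request, e.g. when the engine's
  // write cache cannot be prepared for the new lock state; previously
  // this callback returned void and could not signal failure.
  my_bool (*get_status)(void *param, my_bool concurrent_insert);
};

static my_bool write_cache_get_status(void *cache, my_bool /*ci*/) {
  // ...flush or validate the engine's write cache here...
  (void) cache;
  return 0;   // 0 = proceed with the lock; non-zero = abort it
}
```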
|
| |
| |
| |
| |
| |
| | |
* The reason for the change was to make it easier to find true errors
  when searching in trace logs.
  "error:" should mainly be used when we have a real error.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* - IF EXISTS ends with a list of all non-existing objects, instead of a
    separate note for every non-existing object.
  - Produce a "Note" for all wrongly dropped objects
    (like trying to do DROP SEQUENCE on a normal table).
  - Do not write existing tables that could not be dropped to the binlog.
  Other things:
  MDEV-22820 Bogus "Unknown table" warnings produced upon attempt to drop
  parent table referenced by FK
  This was caused by an older version of this commit and was later fixed.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* - Produce a "Note" for all wrongly dropped objects
    (like doing DROP VIEW on a table).
  - IF EXISTS ends with a list of all non-existing objects, instead of a
    separate note for every non-existing object.
  Other things:
  - Fixed a bug where one could do CREATE TEMPORARY SEQUENCE multiple
    times and create multiple temporary sequences with the same name.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
* The used code is largely based on code from Tencent.
  The problem is that in some rare cases there may be a conflict between
  .frm files and the files in the storage engine. In this case DROP TABLE
  was not able to properly drop the table.
  Some MariaDB/MySQL forks have solved this by adding a FORCE option to
  DROP TABLE. After some discussion among MariaDB developers, we concluded
  that users expect DROP TABLE to always work, even if the table is not
  consistent. There should be no need for a separate keyword to ensure
  that the table is really deleted.
  The used solution is:
  - If the .frm file doesn't exist, try dropping the table from all
    storage engines.
  - If the .frm file exists but the table does not exist in the engine,
    try dropping the table from all storage engines.
  - Update storage engines using many table files (.CSV, MyISAM, Aria) to
    succeed with the drop even if some of the files are missing.
  - Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
    is not needed and always succeeds. This is used by
    ha_delete_table_force() to know which handlers to ignore when trying
    to drop a table without a .frm file (see the sketch below).
  The disadvantage of this solution is that DROP TABLE on a non-existing
  table will be a bit slower, as we have to ask all active storage engines
  whether they know anything about the table.
  Other things:
  - Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error
    if the file doesn't exist. This simplifies some of the code.
  - Don't clear thd->error in ha_delete_table() if there was an active
    error. This is a bug fix.
  - handler::delete_table() will not abort if the first file doesn't
    exist. This is a bug fix to handle the case when a drop table was
    aborted in the middle.
  - Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
    same code path as when it's not used.
  - Use non_existing_table_error() to detect if the table didn't exist.
    Old code used different error checks in different places.
  - Table_triggers_list::drop_all_triggers() now drops the trigger file if
    it can't be parsed, instead of leaving it hanging around (bug fix).
  - InnoDB no longer prints an error about the .frm file being out of sync
    with the InnoDB dictionary if the .frm file does not exist. This
    change was required to be able to try to drop an InnoDB table when the
    .frm doesn't exist.
  - Fixed a bug in mi_delete_table() where the .MYD file would not be
    dropped if the .MYI file didn't exist.
  - Fixed a memory leak in Mroonga when deleting a non-existing table.
  - Fixed a memory leak in Connect when deleting a non-existing table.
  Bugs introduced by the original version of this commit and fixed here:
  MDEV-22826 Presence of Spider prevents tables from being force-deleted
  from other engines
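A hedged sketch of the "ask every engine" fallback described above, with simplified signatures (the real ha_delete_table_force() lives in sql/handler.cc):

```cpp
#include <vector>

struct handlerton {
  unsigned flags;
  int (*delete_table)(handlerton *, const char *path);
};
static const unsigned HTON_AUTOMATIC_DELETE_TABLE= 1U << 0;

// Returns true if at least one engine knew about the table and dropped it.
bool ha_delete_table_force(const std::vector<handlerton*> &engines,
                           const char *path) {
  bool dropped= false;
  for (handlerton *hton : engines) {
    if (hton->flags & HTON_AUTOMATIC_DELETE_TABLE)
      continue;                 // delete_table() always "succeeds" here,
                                // so there is nothing to ask this engine
    if (hton->delete_table && hton->delete_table(hton, path) == 0)
      dropped= true;            // keep going: leftover files may exist in
                                // several engines when .frm/engine conflict
  }
  return dropped;
}
```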
* check_grant_all_columns
  With RETURNING it can happen that the user has some privileges on the
  table (namely, DELETE), but later needs different privileges on
  individual columns (namely, SELECT).
  Do the same as in check_grant_column(): return
  ER_COLUMNACCESS_DENIED_ERROR, not an assert.
* fill_schema_user_privileges + *** stack smashing detected *** (on
  optimized builds)
  The code erroneously used buff[100] in a few places to build a GRANTEE
  value in the form:
    'user'@'host'
  Fix:
  - Fixing the code to use (USER_HOST_BUFF_SIZE + 6) instead of 100.
  - Adding a DBUG_ASSERT to make sure the buffer is large enough.
  - Wrapping the code into a class Grantee_str, to make it easier to reuse
    in 4 places.
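A sketch of such a Grantee_str wrapper (the class name and the +6 margin follow the commit message; the length constants here are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>

static const std::size_t USERNAME_LENGTH= 384;   // illustrative value
static const std::size_t HOSTNAME_LENGTH= 255;   // illustrative value
static const std::size_t USER_HOST_BUFF_SIZE=
    USERNAME_LENGTH + HOSTNAME_LENGTH + 2;

class Grantee_str {
  // +6: two pairs of quotes around user and host, the '@', and the NUL
  char m_buff[USER_HOST_BUFF_SIZE + 6];
public:
  Grantee_str(const char *user, const char *host) {
    int n= std::snprintf(m_buff, sizeof(m_buff), "'%s'@'%s'", user, host);
    // DBUG_ASSERT in the real code: the buffer must never be too small
    assert(n >= 0 && (std::size_t) n < sizeof(m_buff));
  }
  const char *ptr() const { return m_buff; }
};
```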
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* The real problem was that the attempt to roll back changes after running
  out of memory in the query cache was made incorrectly and led to using
  uninitialized memory.
  (The bug has nothing to do with the resize operation; it is just a
  lack-of-resources error processed incorrectly.)
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* SELECT for DUAL"
  This reverts commit 443391236d20cd0303fcc9957eb49a6aaf28316e.
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* In the case of a SELECT without tables which returns either 0 or 1 rows,
  JOIN::exec_inner() did not check whether the flag representing
  SQL_CALC_FOUND_ROWS was set, and send_records was directly assigned 0,
  so SELECT FOUND_ROWS() returned 0. Now it checks the flag: if it is set,
  send_records= 1, else 0. 1 is the number of rows that could have been
  sent to the client if the SELECT query had SQL_CALC_FOUND_ROWS; it is 0
  when no rows were sent because the SELECT query did not have
  SQL_CALC_FOUND_ROWS.
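The described check, in miniature (OPTION_FOUND_ROWS is the server's flag for SQL_CALC_FOUND_ROWS, but its value and the surrounding types here are simplified):

```cpp
struct JOIN {
  unsigned long long select_options;
  unsigned long long send_records;   // ha_rows in the real code
};
static const unsigned long long OPTION_FOUND_ROWS= 1ULL << 10; // illustrative

void set_send_records_for_tableless_select(JOIN *join) {
  // 1 row *could* have been sent if SQL_CALC_FOUND_ROWS was requested;
  // otherwise FOUND_ROWS() should report 0.
  join->send_records= (join->select_options & OPTION_FOUND_ROWS) ? 1 : 0;
}
```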
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* find_uniq_filename(char*, unsigned long)
  Fix: initialize the 'number' variable to 0.
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* prepare_search_best_index_intersect on optimized builds
  For a low sort_buffer_size, in the cost calculation of using the Unique
  object, the number of elements in the tree could evaluate to 0; make
  sure there is at least 1 element in the Unique tree.
  Also, for the function Unique::get(), allocate memory for at least
  MERGEBUFF2+1 keys.
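Both clamps, sketched standalone (MERGEBUFF2 exists in the server sources, but its value here and the function shapes are illustrative):

```cpp
#include <algorithm>
#include <cstddef>

static const std::size_t MERGEBUFF2= 15;   // illustrative value

std::size_t elements_in_tree(std::size_t buffer_size, std::size_t row_size) {
  // never let the cost model work with 0 elements for a tiny sort buffer
  return std::max<std::size_t>(1, buffer_size / row_size);
}

std::size_t keys_to_allocate(std::size_t nkeys) {
  // Unique::get() must have room for the merge buffers plus one extra key
  return std::max(nkeys, MERGEBUFF2 + 1);
}
```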
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* Item_cache_datetime::decimals was always copied from example->decimals
  without limiting it to 6 (the maximum possible number of fractional
  digits), so val_str() later crashed on asserts inside my_time_to_str()
  and my_datetime_to_str().
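The implied clamp, sketched with simplified types (DATETIME_MAX_DECIMALS is 6 in the server; the Item structure here is reduced to the one field that matters):

```cpp
#include <algorithm>

static const unsigned DATETIME_MAX_DECIMALS= 6;

struct Item { unsigned decimals; };

void cache_decimals_from_example(Item *cache, const Item *example) {
  // never store more fractional digits than temporal types can represent
  cache->decimals= std::min(example->decimals, DATETIME_MAX_DECIMALS);
}
```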
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
* Item_func_div::int_op
  Item_func_div::fix_length_and_dec_temporal() set the return data type to
  integer in the case of @div_precision_increment==0 for temporal input
  with FSP=0. This caused Item_func_div to call int_op(), which is not
  implemented, so a crash on DBUG_ASSERT(0) happened.
  Fix fix_length_and_dec_temporal() to set the result type to DECIMAL.
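A sketch of the fix with hypothetical enum and member names; only the "temporal division must yield DECIMAL, not INT" decision mirrors the commit message:

```cpp
enum Item_result { INT_RESULT, DECIMAL_RESULT, REAL_RESULT };

struct Item_func_div {
  Item_result cached_result_type;
  void fix_length_and_dec_temporal(unsigned div_precision_increment) {
    // Division of temporal values must produce a DECIMAL even when
    // @div_precision_increment == 0 and the inputs have FSP == 0;
    // an INT_RESULT here would route evaluation to the unimplemented
    // int_op() and hit DBUG_ASSERT(0).
    (void) div_precision_increment;
    cached_result_type= DECIMAL_RESULT;
  }
};
```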