Commit message

When altering from DECIMAL to *INT UNSIGNED or to BIT, go through
val_decimal() to avoid truncation to the biggest possible signed
integer (0x7FFFFFFFFFFFFFFF / 9223372036854775807).

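A minimal SQL sketch of the affected conversion, assuming a
hypothetical table t1:

    -- The stored value exceeds the signed BIGINT range.
    CREATE TABLE t1 (a DECIMAL(30,0));
    INSERT INTO t1 VALUES (18446744073709551615);
    -- Without going through val_decimal(), the conversion could
    -- truncate the value to 9223372036854775807:
    ALTER TABLE t1 MODIFY a BIGINT UNSIGNED;
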
materialized derived table/view that uses aliases is done
The problem appears when a column alias inside the definition of the
materialized derived table/view t1 coincides with the column name used
in the GROUP BY clause of t1. If a condition that can be pushed into
t1 uses that ambiguous column name, the column is resolved as the one
used in the GROUP BY clause instead of the alias used in the
projection list of t1. That causes a wrong result.
To prevent this, resolve_ref_in_select_and_group() was changed.

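A hedged sketch of the ambiguity, with illustrative table, view and
column names:

    CREATE TABLE t1 (a INT, b INT);
    CREATE VIEW v1 AS SELECT b AS a FROM t1 GROUP BY a;
    -- When 'a > 1' is pushed into v1, 'a' must resolve to the alias
    -- 'b AS a' from the projection list, not to the GROUP BY column:
    SELECT * FROM v1 WHERE a > 1;
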
number

Problem:
The problem was most likely introduced by the fix for MDEV-11597
(commit 5f0c31f928338e8a6ffde098b7ffd3d1a8b02903), which removed
the assignment "killed= KILL_BAD_DATA" from THD::raise_condition().
Before MDEV-11597, sp_head::execute() tested thd->killed after
looping through the SP instructions and exited with an error
if thd->killed was set. After MDEV-11597, sp_head::execute()
stopped noticing errors and set the OK status on top of the
error status, which crashed on an assert.
Fix:
Make sp_cursor::fetch() return -1 if server_side_cursor->fetch(1)
left an error in the diagnostics area. This makes the statement
"err_status= i->execute(thd, &ip)" in sp_head::execute() set the
error code, correctly break the SP instruction loop, and
return on error without setting the OK status.

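A hedged sketch of the kind of stored routine affected; the table,
procedure and variable names are illustrative:

    CREATE TABLE t1 (a INT);
    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE v INT;
      DECLARE c CURSOR FOR SELECT a FROM t1;
      OPEN c;
      -- Fetching from an empty result set leaves an error in the
      -- diagnostics area; the instruction loop must break here
      -- instead of setting the OK status on top of the error.
      FETCH c INTO v;
      CLOSE c;
    END//
    DELIMITER ;
    CALL p1();
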
crash the server

This bug caused crashes for queries with unreferenced non-recursive
CTEs specified by unions. It happened because the function
st_select_lex_unit::prepare() tried to use the value of the field
'derived' that could not be set for unreferenced CTEs, as there was
no derived table associated with an unreferenced CTE.

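A minimal sketch of a crashing query shape: a union CTE that is never
referenced in the main query:

    WITH cte AS (SELECT 1 UNION SELECT 2)
    SELECT 3;
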
Crash happened because, during table discovery, table->work_part_info
was not properly reset before execution.
Fixed by resetting it before executing ALTER TABLE, CREATE TABLE, or
mysql_create_frm_image.

Problem was that blob memory allocated in Table_trigger_list was not
properly freed.

Problem was that max_row_length() used a different bitmap
than pack_row().

Don't check global_memory_used if
debug_assert_on_not_freed_memory is not set.

MDEV-16123 ASAN heap-use-after-free handler::ha_index_or_rnd_end
MDEV-13828 Segmentation fault on RENAME TABLE
Problem was that the destructor called methods on a closed table.
Fixed by removing that code from the destructor.

Problem was that handle_if_exists_options() didn't correct
alter_info->flags when things were removed from the list.

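A hedged sketch (hypothetical names) of an IF EXISTS clause that
removes an entry from the alter list, after which alter_info->flags
must be corrected:

    CREATE TABLE t1 (a INT);
    -- The non-existing column is silently removed from the alter
    -- list; the remaining flags must reflect what is left to do.
    ALTER TABLE t1 DROP COLUMN IF EXISTS no_such_column;
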
RENAME
Problem was that detection of temporary tables was all wrong for
RENAME TABLE.
(Temporary tables were opened by the top-level call to
open_temporary_tables(), which can't detect if a temporary table
was renamed to something and then reused.)
Fixed by adding proper parsing of the rename list to check against
the current name of a table at each rename stage.
Also changed do_rename_temporary() to check against the current
state of temporary tables, not the state at the start
of RENAME TABLE.

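A hedged sketch of a rename chain where the current name of the
temporary table matters at each stage (table names are illustrative):

    CREATE TEMPORARY TABLE t1 (a INT);
    -- After the first step the temporary table is named t2; the
    -- second step must detect t2 as temporary based on the current
    -- state, not the state at the start of the statement:
    RENAME TABLE t1 TO t2, t2 TO t3;
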
MDEV-10130 Assertion `share->in_trans == 0' failed in storage/maria/ma_close.c
MDEV-10378 Assertion `trn' failed in virtual int ha_maria::start_stmt
The problem was that maria_handler->trn was not properly reset
at commit/rollback, and ha_maria::external_lock() could get confused
because of that.
There was some old code in ha_maria::implicit_commit() that tried
to take care of this, but it was not bulletproof.
Fixed by adding a list of all tables that are part of the maria
transaction to TRN.
A nice side effect of the fix is that the loops in
ha_maria::implicit_commit() became much simpler.
Other things:
- Fixed a bug in mysql_admin_table() where the argument open_for_modify
  was wrongly reset for the next table in the chain
- Roll back the admin command also in case of a fatal error.
- Split _ma_set_trn_for_table() into three versions to simplify code
  and debugging.
- Several new asserts to detect the original problem (that a file was
  not properly removed from the trn before calling ma_close())

order with Galera and encrypt-tmp-files=1
Problem: if trans_cache (IO_CACHE) uses an encrypted tmp file,
then the next DML crashes the server.
Case: take a table t1 and do 2 inserts into it:
1. A really long insert, so that trans_cache has to use a temp file
2. Just a small insert
Analysis: the server actually crashes from inside the galera
library:
/lib64/libc.so.6(abort+0x175)[0x7fb5ba779dc5]
/usr/lib64/galera/libgalera_smm.so(_ZN6galera3FSMINS_9TrxHandle5State...
mysys/stacktrace.c:247(my_print_stacktrace)[0x7fb5a714940e]
sql/signal_handler.cc:160(handle_fatal_signal)[0x7fb5a715c1bd]
sql/wsrep_hton.cc:257(wsrep_rollback)[0x7fb5bcce923a]
sql/wsrep_hton.cc:268(wsrep_rollback)[0x7fb5bcce9368]
sql/handler.cc:1658(ha_rollback_trans(THD*, bool))[0x7fb5bcd4f41a]
sql/handler.cc:1483(ha_commit_trans(THD*, bool))[0x7fb5bcd4f804]
But the actual issue is not in galera but in mariadb, because for the
2nd insert we should never call rollback. We call rollback because
log_and_order fails; it fails because write_cache fails; that fails
because after reinit_io_cache(trans_cache), my_b_bytes_in_cache says 0,
so we look into the tmp file for data, which is obviously wrong since
the temp file was used for the previous insert and no longer exists.
wsrep_write_cache_inc() reads the IO_CACHE in a loop, filling it with
my_b_fill() until it returns "0 bytes read". Later
MYSQL_BIN_LOG::write_cache() does the same. wsrep_write_cache_inc()
assumes that reading zero bytes past EOF leaves the old data in the
cache.
Solution: there are two issues in my_b_encr_read().
First, we should never set read_end equal to info->buffer; this does
not make sense, as read_end should always point to the end of the
buffer.
Second, in most cases (apart from an async IO_CACHE) info->pos_in_file
should be equal to the position of info->buffer in the temp file;
since in this case we are not changing info->buffer, it should remain
unchanged.

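A hedged sketch of the two-insert case above, assuming a Galera node
started with --encrypt-tmp-files=1 (table and sizes are illustrative):

    CREATE TABLE t1 (a LONGTEXT) ENGINE=InnoDB;
    -- 1. A really long insert that spills trans_cache to an
    --    encrypted temp file:
    INSERT INTO t1 VALUES (REPEAT('x', 50000000));
    -- 2. Just a small insert; before the fix this crashed the server:
    INSERT INTO t1 VALUES ('y');
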
The cause of this was several different bugs:
- When using binary logging with binlog_row_image=FULL,
  all bits in read_set were set, which caused a
  different (wrong) pattern for marking vcol_set.
- TABLE::mark_virtual_columns_for_write() didn't in all
  cases mark vcol_set with the vcol field.
- TABLE::update_virtual_fields() has to update all
  vcol fields on REPLACE if binary logging with FULL
  is used.
- VCOL_UPDATE_INDEXED should update all vcol fields that are
  part of an index and were not updated by VCOL_UPDATE_FOR_READ.
- max_row_length() calculated the length of NULL and unused
  fields. This didn't cause any crash, but used
  more memory than needed.

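A hedged sketch of the scenario these fixes cover (hypothetical table
with an indexed virtual column):

    SET SESSION binlog_row_image = 'FULL';
    CREATE TABLE t1 (a INT PRIMARY KEY, b INT,
                     v INT AS (a + b) VIRTUAL, KEY (v));
    -- REPLACE under a FULL row image must update all virtual
    -- columns, including indexed ones:
    REPLACE INTO t1 (a, b) VALUES (1, 2);
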
Crash happened when deleting all columns that were part of a check
constraint.
The bug was that the read map of the source table was used when
checking the CHECK constraint and was not properly reset
in copy_data_between_tables().

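A minimal sketch of the statement shape that crashed (hypothetical
table and constraint):

    CREATE TABLE t1 (a INT, b INT, CONSTRAINT c CHECK (a > b));
    ALTER TABLE t1 DROP COLUMN a, DROP COLUMN b;
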
When Item_insert_value needs a dummy field,
use a zero-length Field_string, not Field_null.
The latter isn't compatible with CREATE ... SELECT.

references t1
Fix a typo that broke the main.view test.
Follow-up for ef295c31e3d.

references t1
Fixed by extending unique_table() with a flag to disallow usage of
the replaced table.
Also cleaned up find_dup_table() to not use "goto next" and added
more comments to its code.

Problem was that copy_data_between_tables() didn't do proper
cleanup in case of failures:
- the copy object was not properly freed
- end_bulk_insert() was not called
- mysql_trans_prepare_alter_copy_data() set THD->transaction.on to
  false, which was not properly restored
The last part caused a crash in Aria, as Aria depends on THD
being correct.
Other things:
- Reset info->switched_transactional after usage (safety)
- Reset bulk_insert_single_undo (safety)

information_schema
Make each lex point to the statement lex instead of the global
pointer in THD (no need to store and restore the global pointer
and put it on the SP stack).

in Explain_query::~Explain_query upon/after EXECUTE IMMEDIATE
Explain_query must be created in the execution arena.
But JOIN::optimize_inner temporarily switches to the statement arena
under `if (sel->first_cond_optimization)`. This might cause
Explain_query to be allocated in the statement arena. Usually it is
harmless (although technically incorrect and a waste of memory), but
in the case of EXECUTE IMMEDIATE, the Prepared_statement object and
its statement arena are destroyed before the log_slow_statement()
call, which uses Explain_query.
Fix:
1. Create Explain_query before switching arenas.
2. Before filling the earlier-created Explain_query with data, set
   thd->mem_root from Explain_query::mem_root.

for RocksDB table
Apply patch by Oleksandr Byelkin:
Do not use "only index read" when analyzing indexes if there is a
field that is present in the index only partially.

The current code does not support recursive CTEs whose specifications
contain a mix of UNION ALL and UNION DISTINCT operations.
This patch catches such specifications and reports errors for them.

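A minimal sketch of a specification that is now rejected: the
recursive CTE mixes UNION DISTINCT and UNION ALL:

    WITH RECURSIVE cte AS
      (SELECT 1 AS n
       UNION DISTINCT
       SELECT n + 1 FROM cte WHERE n < 5
       UNION ALL
       SELECT n + 2 FROM cte WHERE n < 5)
    SELECT * FROM cte;
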
with recursive subquery
There were two problems:
1. The code did not report that usage of global ORDER BY / LIMIT
   clauses was not supported yet.
2. The code just reset fake_select_lex of the unit specifying
   a recursive CTE to NULL, and that caused memory leaks in some
   cases.

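A hedged sketch of a specification with a global ORDER BY / LIMIT
over the whole union, which must now be reported as unsupported:

    WITH RECURSIVE cte AS
      (SELECT 1 AS n
       UNION
       SELECT n + 1 FROM cte
       ORDER BY n LIMIT 10)
    SELECT * FROM cte;
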
Bounds_checked_array<Element_type>::operator
In this issue we hit the assert because we are adding additional
fields to the JOIN::all_fields list. This is done because HEAP tables
can't index BIT fields, so we need an additional hidden field for
grouping, which will later be converted to a LONG field; the original
field will remain of the BIT type and will be returned. This happens
when we convert DISTINCT to GROUP BY.
The solution is to take into account the number of such hidden fields
that would be added to the JOIN::all_fields list while calculating
the size of the ref_pointer_array.

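A minimal sketch of the query shape that hit the assert (hypothetical
table): DISTINCT over a BIT column is converted to GROUP BY, and the
HEAP temporary table needs a hidden non-BIT grouping field:

    CREATE TABLE t1 (b BIT(7));
    SELECT DISTINCT b FROM t1;
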
Problem was that verify_constraints() didn't check if
there was an error as part of evaluating constraints
(which can happen in strict mode).
In a one-row insert the error was ignored when using
binary logging, as binary logging clears errors if the insert
succeeded. In a multi-row insert the error was noticed
for the second row.
After this fix one gets an error for both one-row and
multi-row inserts if a constraint generates a warning
in strict mode.

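A hedged sketch (hypothetical table): a constraint whose evaluation
raises a warning in strict mode now makes one-row inserts fail too:

    SET sql_mode = 'STRICT_ALL_TABLES';
    CREATE TABLE t1 (a VARCHAR(10), CHECK (CAST(a AS SIGNED) >= 0));
    -- Evaluating the constraint for 'foo' warns about a truncated
    -- incorrect INTEGER value; in strict mode this is now an error
    -- for the single-row insert as well as the multi-row one:
    INSERT INTO t1 VALUES ('foo');
    INSERT INTO t1 VALUES ('foo'), ('bar');
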
columns
Part one, non-temporary tables.
Renaming a column can make destructive changes
to the TABLE. This TABLE cannot be used anymore
and needs to be reopened even if ALTER TABLE
was aborted with an error.

Item_ident::print upon SHOW CREATE on partitioned table
Items in the partitioning function were taking
the table name from the table's field
(in set_field(from_field) in Item_field::fix_fields),
and a field's table_name is TABLE::alias.
But the alias is changed for every statement and
can be realloced if the next statement uses a longer
alias, while partitioning items are fixed once
and live as long as the TABLE does. So if
an alias is realloced, pointers to the old
alias string become invalid.
Fix partitioning item table_name to point to
the actual table name instead.

Rename to post_fix_fields_part_expr_processor(),
because it's only used after fix_fields in
fix_fields_part_func() and can be used for
various post-fix_fields fixups.

Set the pointer to NULL to avoid a double free
when the item is cleaned up many times
(once in JOIN_TAB::cleanup(): tmp->jtbm_subselect->cleanup(),
and once at the end of the query, with all other items).

more than 32 list values.
This problem occurred because the reorganization of the list of
values when the number of elements exceeds 32 was not handled
correctly. I have fixed the problem by fixing the way that the list
values are reorganized when the number of list values exceeds 32.
Author:
Jacob Mathew.
Reviewer:
Alexey Botchkov.
Merged From:
Commit 8e01598 on branch bb-10.3-MDEV-16101

failure upon SELECT with impossible condition
The problem appears because of a wrong implementation of the
Item_func_in::build_clone() method. It didn't clone the 'array' and
'cmp_fields' fields for the cloned IN predicate, and this could
cause crashes.
The Item_func_in::fix_length_and_dec() method was refactored, and a
new method named Item_func_in::create_array() was created. It allows
creating 'array' for cloned IN predicates in a proper way.

work in the IN subqueries
The pushdown into the materialized derived table/view wasn't done
because optimize() for the derived table was called before any
conditions that can be pushed down were extracted. So the optimize()
call in the convert_join_subqueries_to_semijoins() method is made
too early and is unnecessary. The second optimize() call in
mysql_handle_single_derived() is enough.

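A hedged sketch of a query shape this enables (illustrative names): a
condition inside an IN subquery that can be pushed into the grouped
derived table:

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    -- The condition dt.b > 10 should be pushed into the materialized
    -- derived table rather than applied only after materialization:
    SELECT * FROM t1
    WHERE t1.a IN (SELECT dt.b
                   FROM (SELECT b FROM t2 GROUP BY b) AS dt
                   WHERE dt.b > 10);
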
The issue here is that window function execution is not called for
the correct join tab: when we have GROUP BY, where we create extra
temporary tables, we need to call window function execution for the
last join tab, and the current code does not take JOIN::aggr_tables
into account when determining it.
Fixed by introducing a new function, JOIN::total_join_tab_cnt(),
that takes the temporary tables into account as well.

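A minimal sketch of the affected query shape (hypothetical table):
GROUP BY adds extra temporary tables, and the window function must be
evaluated at the last join tab:

    CREATE TABLE t1 (a INT, b INT);
    SELECT a, SUM(b), ROW_NUMBER() OVER (ORDER BY a) AS rn
    FROM t1 GROUP BY a;
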
Diagnostics_area::set_error_status upon operation inside XA
Don't implicitly commit or roll back in mysql_admin_table()
unless the statement has the CF_IMPLICIT_COMMIT_END flag.

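A hedged sketch of the affected pattern (hypothetical table): an
admin statement inside an active XA transaction must not trigger an
implicit commit or rollback:

    CREATE TABLE t1 (a INT);
    XA START 'xid1';
    INSERT INTO t1 VALUES (1);
    ANALYZE TABLE t1;   -- must not implicitly end the XA transaction
    XA END 'xid1';
    XA ROLLBACK 'xid1';
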