| Commit message | Author | Age | Files | Lines |
fails in ONLY_FULL_GROUP_BY mode
The issue here is that a query with an aggregate function and a non-aggregated field
in the SELECT list was not disallowed in ONLY_FULL_GROUP_BY mode.
In ONLY_FULL_GROUP_BY mode, non-aggregated fields are only allowed inside
aggregate functions or as part of the GROUP BY clause.
In the query for the failing assert, the non-aggregated field was inside
a window function, and the window function was treated as an aggregate function,
so no error was thrown.
The fix is to mark when a non-aggregated field is used inside a window function
rather than an aggregate function, and to raise the error in that case.
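A query of roughly this shape (illustrative only, not the original test case) should now be rejected in ONLY_FULL_GROUP_BY mode, since b is neither aggregated nor part of a GROUP BY clause even though it appears only inside the window function:
  SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
  CREATE TABLE t1 (a INT, b INT);
  SELECT MAX(a), ROW_NUMBER() OVER (ORDER BY b) FROM t1;  -- expected to raise an error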
With implicit grouping and window functions, we need to make sure that all the
fields inside the window functions are nullable, as any non-aggregated field can
produce a NULL value.
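A sketch of why this matters (hypothetical table, default sql_mode): with implicit grouping the query returns exactly one row even when no rows match, so a field referenced inside a window function can only yield NULL there, even if the column is declared NOT NULL:
  CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL);
  SELECT COUNT(*), SUM(b) OVER () FROM t1 WHERE 1 = 0;
  -- one row is returned; the window function value for it can only be NULL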
hang.
only 1 timezone will be loaded.
Move the ALTER to InnoDB earlier, to a more correct place, so that the case
where only a single timezone file is loaded is also handled.
fix_fields for the arguments of the NTH_VALUE function was updating the same
reference for every argument, so arguments after the first were not resolved
to their corresponding fields from the view; they kept updating the reference
belonging to the first argument instead.
DELETE with subquery with ROLLUP
The issue here is with records being read from the temporary file
(the filesort result in this case) via a cache (rr_from_cache).
The cache is initialized with init_rr_cache.
For a correlated subquery the cache allocation happens on each execution
of the subquery, but the deallocation happens only once, when the whole
query execution is done.
Generally, for subqueries we do two types of cleanup:
1) Full cleanup: free all resources of the query (like temporary tables).
This is done when query execution is complete or when re-execution of the
subquery is not needed (the case of an uncorrelated subquery).
2) Partial cleanup: minor cleanup that is required if the subquery needs
re-evaluation. This is done for all structures that have to be allocated
for each execution (for example, SORT_INFO for filesort is allocated for
each execution of a correlated subquery).
The fix is to free the cache used by rr_from_cache in the partial
cleanup phase.
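The affected statements have roughly this shape (illustrative only, not guaranteed to reproduce the original case): a DELETE whose correlated subquery uses GROUP BY ... WITH ROLLUP, so the subquery's filesort and its read-record cache are set up again on every execution:
  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, c INT);
  DELETE FROM t1
  WHERE t1.b IN (SELECT SUM(t2.c)
                 FROM t2
                 WHERE t2.a = t1.a
                 GROUP BY t2.c WITH ROLLUP);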
lock_rec_has_to_wait_in_queue(): Remove an obviously redundant assertion
that was added in commit a8ec45863b958757da61af3b2ce0a38b0a79d92c
and also enclose a Galera-specific condition in #ifdef WITH_WSREP.
and inaccurately
Analysis: The list of all privileges is 118 characters wide. However, the
format of the error message was "%-.32s command denied to user...", and
get_length() sets the maximum width to 32 characters. As a result, only the
first 32 characters of the privilege list were stored.
Fix: Change the format to "%-.100T..." so that get_length() sets the width to
100. The first 100 characters of the privilege list are then stored, and the
type specifier 'T' appends '...' so that the truncation is visible.
Diagnostics_area::sql_errno upon query from I_S with LIMIT ROWS EXAMINED
open_normal_and_derived_table() fails because the query was already killed,
since the rows examined by the query exceed the limit. However, this is not a
real error.
Fix: Check whether there actually is an error before calling thd->sql_errno(),
and later send a warning in handle_select() if there is no real error.
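A query of this general shape (illustrative) exercises the path: the examined-rows limit is hit while the I_S table is being filled, so the open fails without a real error and only a warning should be reported:
  SELECT * FROM information_schema.tables LIMIT ROWS EXAMINED 10;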
fixed.
lock_rec_has_to_wait
wsrep_kill_victim
lock_rec_create_low
lock_rec_add_to_queue
DeadlockChecker::select_victim()
THD can't change from a normal transaction to a BF (brute force) transaction
here, thus there is no need to synchronize access in the wsrep_thd_is_BF
function.
lock_rec_has_to_wait_in_queue
Add a condition that the lock is not NULL and add assertions if we are in
a strong state.
Make sure system tables aren't open, as the test kills the server
modified: storage/connect/filamdbf.cpp
modified: storage/connect/filamdbf.h
modified: storage/connect/filamzip.cpp
modified: storage/connect/filamzip.h
modified: storage/connect/ha_connect.cc
modified: storage/connect/plgxml.cpp
modified: storage/connect/tabdos.cpp
modified: storage/connect/tabdos.h
modified: storage/connect/tabfix.h
- Add/Init Level class member
modified: storage/connect/mongo.cpp
modified: storage/connect/mongo.h
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabjson.h
- Typo
modified: storage/connect/connect.cc
and enable using special column in them.
modified: storage/connect/tabzip.cpp
modified: storage/connect/tabzip.h
- Fix some compiler errors
modified: storage/connect/tabcmg.cpp
fails in traverse_role_graph_impl
The only change between Percona XtraDB Server 5.6.48-88.0
and 5.6.49-89.0 (apart from the version number change) was
percona/percona-server@25ec24092064c2ab95752705e592e0c038ec1111
which we had already addressed in
commit 7c03edf2fe66855a8ce8f2575c3aaf66af975377 and
commit c0fca2863bcbd7cd231f1aa747b4f8d999e3a00e.
all rows
Do not collect EITS statistics for this statement:
ALTER TABLE t ANALYZE PARTITION p
EITS stats are currently global, not per-partition.
Collecting global stats when we are asked to process just one partition
causes issues for DBAs.
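If table-wide EITS statistics are wanted, they can still be requested explicitly (illustrative):
  ALTER TABLE t ANALYZE PARTITION p;    -- no longer collects global EITS statistics
  ANALYZE TABLE t PERSISTENT FOR ALL;   -- collects EITS statistics for the whole table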
SELECT command denied to user
check both column- and table-level grants when looking for SELECT
privilege on UPDATE statement.
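A scenario of roughly this shape (hypothetical grants and table) is the kind of case affected:
  CREATE TABLE db1.t1 (a INT, b INT);
  CREATE USER u1@localhost;
  GRANT UPDATE ON db1.t1 TO u1@localhost;
  GRANT SELECT (a) ON db1.t1 TO u1@localhost;
  -- connected as u1:
  UPDATE db1.t1 SET b = 1 WHERE a = 2;  -- needs SELECT on column a; must not be denied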
COMPOSITE PREFIX INDEX
Fix prefix key comparison in partitioning. Comparisons must
take into account no more than prefix_len characters.
It used to compare prefix_len*mbmaxlen bytes.
if mysql_create_view() is aborted while the view is linked into the lex
(when WSREP_TO_ISOLATION_BEGIN fails),
it should not be linked there again on the err: path.
fix uninitialized struct member
wait_while_table_is_used() should return an error if handler::extra() fails
Largely based on MySQL commit
https://github.com/mysql/mysql-server/commit/75271e51d60bce8683423b208cbb43b11ca6060e
MySQL Ref:
BUG#24566529: BACKPORT BUG#23575445 TO 5.6
(cut)
Also, the PTR_SANE macro, which tries to check whether a pointer
is invalid (used when printing pointer values in stack traces),
gave false negatives on OSX/FreeBSD. On these platforms we
now simply check if the pointer is non-null. This also removes
an sbrk() deprecation warning when building on OS X (previously it
was only disabled when building with Xcode).
Removed the execinfo path of the MySQL patch that was already included.
sbrk doesn't exist on FreeBSD aarch64.
Removed the HAVE_BSS_START based detection and replaced it with __linux__,
as __bss_start doesn't exist on OSX, Solaris or Windows, but does exist
on multiple Linux architectures.
Tested on FreeBSD and Linux x86_64. Being in the FreeBSD ports tree for 2
years implies good testing there on all FreeBSD architectures too.
MySQL 8.0.21 code is functionally identical to the original commit.
There was no ability to set the mtr arguments:
* --max-save-core; and
* --max-save-datadir
to 0. This is desirable in an automated scenario where space
is limited, hence targeting the 10.1 branch.
We take away the "0 means unlimited" aspect for these;
however, perl can handle some big numbers, so they may as well be
close enough to unlimited for all meaningful purposes.
Correct to a true 2^14 rather than a different number that
was actually just a typo.
Bug report thanks to Hartmut Holzgraefe.
truncate_double() did not take into account the max_value
limit in the case when dec < NOT_FIXED_DEC.
* Fix the crash: if the IN-to-EXISTS rewrite causes an error (and so
JOIN::optimize() fails with an error, too), don't call
update_used_tables(). Terminate the query execution instead.
* Fix the cause of the error in the IN-to-EXISTS rewrite: don't do
the rewrite if doing it would cause an error of this kind:
"This version of MariaDB doesn't yet support 'SUBQUERY in ROW in left
expression of IN/ALL/ANY'"
* Fix another issue exposed by this testcase:
JOIN::setup_subquery_caches() may be invoked before any select has
saved its query plan, and will crash because none of the SELECTs
has called create_explain_query_if_not_exists() to create the Explain
Data Structure for this SELECT.
TODO: When merging this to 10.2, remove the poorly-placed call to
create_explain_query_if_not_exists made by the fix for MDEV-16153
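The unsupported construct that triggers that error looks roughly like this (illustrative tables, not the original test case):
  SELECT * FROM t1
  WHERE ((SELECT a FROM t2 LIMIT 1), t1.b) IN (SELECT c, d FROM t3);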
Removed duplicate.
Also move the --no-defaults option close to the other "default*"
options.
The THD proc info was assigned from a stack-allocated temporary buffer
which went out of scope immediately after the assignment.
Fixed by removing the use of the temporary buffer and assigning the proc info
from a string literal.
Problem:
=======
fts_cache_append_deleted_doc_ids() holds the deleted_lock and tries to
access size of deleted_doc_ids. In the meantime, fts_cache_clear()
clears the sync_heap before clearing deleted_doc_ids. It leads to
invalid access of deleted_doc_ids.
Fix:
===
fts_cache_clear() should free the sync_heap after clearing
deleted_doc_ids.
The srv_monitor_event and the srv_monitor_thread would not be
created when InnoDB is in read-only mode. Yet, some code would
unconditionally invoke os_event_set(srv_monitor_event).
The issue occurs when subquery_cache is enabled.
When there is a cache miss, the division was producing a value with scale 9.
In the case of a cache hit, the value returned was of scale 9, and due to the
different scales the WHERE condition evaluated to FALSE, hence the output
was incomplete.
To fix this problem we need to round the decimal to the limit specified in
Item::decimals. This makes sure the values are compared with the same
scale.
- Setting the lock_wait_timeout value to 1 makes sure that the TRUNCATE TABLE
fails instead of waiting for the MDL timeout.
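In the test this corresponds to something like (sketch):
  SET SESSION lock_wait_timeout = 1;
  TRUNCATE TABLE t1;  -- fails with a lock wait timeout instead of hanging on the MDL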
The purpose of the InnoDB doublewrite buffer is to make InnoDB
tolerant against cases where the server was killed in the middle
of a page write. (In Linux, killing a process may interrupt a
write system call, typically on a 4096-byte boundary.)
There may exist multiple copies of a page number in the doublewrite
buffer. Recovery should choose the latest valid copy of the page.
By design, the FIL_PAGE_LSN must not precede the latest checkpoint LSN
nor be later than the end of the recovered log.
For page_compressed and encrypted pages, we were missing proper
consistency checks. In the 10.4 data set generated for MDEV-23231,
the data file contained a valid page_compressed page, and an
identical copy of that page was also present in the doublewrite
buffer. But, recovery would incorrectly consider the page invalid
and restore an uncompressed copy of the same page that had been
written before the log checkpoint. (In fact, no redo log was to
be applied to that page.)
buf_dblwr_process(): Validate the FIL_PAGE_LSN in the doublewrite
buffer pages, and always skip page 0, because those pages should
have been recovered by Datafile::restore_from_doublewrite() if
necessary.
Datafile::restore_from_doublewrite(): Choose the latest applicable
page from the doublewrite buffer.
recv_dblwr_t::find_page(): Also validate encrypted or
page_compressed pages.
recv_dblwr_t::validate_page(): New function to validate a page,
either a copy in a data file or in the doublewrite buffer.
Also validate encrypted or page_compressed pages.
This is joint work with Thirunarayanan Balathandayuthapani.