| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
| |
No functional change.
Call my_timer_init() only once and reuse the initialized timer info from the
InnoDB and perfschema storage engines.
This patch speeds up an empty test run for me as follows:
./mtr -mem innodb.kevg,xtradb 1.21s user 0.84s system 34% cpu 5.999 total
./mtr -mem innodb.kevg,xtradb 1.12s user 0.60s system 31% cpu 5.385 total
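A minimal sketch of the call-once pattern described above, using hypothetical
stand-ins (timer_info, timer_init(), shared_timer_info()); the real
my_timer_init()/MY_TIMER_INFO code lives in mysys and is wired up differently:

    #include <mutex>

    // Hypothetical stand-ins for MY_TIMER_INFO / my_timer_init().
    struct timer_info { long cycle_frequency= 0; };
    static void timer_init(timer_info *ti) { ti->cycle_frequency= 1; /* probe timers */ }

    // Run the (relatively slow) timer probing exactly once and hand out the
    // cached result, so InnoDB and perfschema reuse the same data instead of
    // each probing the timers at startup.
    static const timer_info *shared_timer_info()
    {
      static timer_info info;
      static std::once_flag once;
      std::call_once(once, [] { timer_init(&info); });
      return &info;
    }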
|
|
|
|
|
| |
Based on pull request https://github.com/MariaDB/server/pull/999
by mkaruza@galeracluster.com
|
|\ |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
information schema
To read histograms for a table, we should check whether the statistics were actually allocated;
if they were not, we should not try to read histograms for such a table.
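A minimal sketch of the guard described above, with hypothetical names
(table_stats, columns_allocated, read_histograms()); the real MariaDB
statistics structures differ:

    // Hypothetical statistics holder attached to a table.
    struct table_stats
    {
      bool columns_allocated= false;   // set once per-column statistics exist
      void read_histograms() { /* read histogram data for each column */ }
    };

    // Read histograms only when the statistics were actually allocated.
    static void maybe_read_histograms(table_stats *stats)
    {
      if (stats && stats->columns_allocated)
        stats->read_histograms();
    }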
|
| |
| |
| |
| | |
to the patch for MDEV-17605
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Diagnostics_area::set_eof_status upon EXPLAIN UPDATE in PS
Restore the EXPLAIN flag in SELECT_LEX, based on the flag in LEX, before executing a multi-update
(the same is already done, in a different way, before INSERT/DELETE/SELECT).
Without it, mysql_update() did not know that there would be an EXPLAIN result set and sent OK at the end of the update, which conflicted with the EOF sent later by EXPLAIN.
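A minimal sketch of the flag restoration, with hypothetical stand-ins for
LEX/SELECT_LEX and the SELECT_DESCRIBE bit; MariaDB's real members differ:

    // Hypothetical trimmed-down LEX / SELECT_LEX.
    static const unsigned long SELECT_DESCRIBE_FLAG= 1UL << 21;
    struct select_lex_t { unsigned long options= 0; };
    struct lex_t        { bool describe= false; select_lex_t select_lex; };

    // Before executing the multi-update part of a prepared statement, put the
    // EXPLAIN marker back on the SELECT_LEX when the statement is an EXPLAIN,
    // so the update code produces an EXPLAIN result set instead of sending OK.
    static void restore_explain_flag(lex_t *lex)
    {
      if (lex->describe)
        lex->select_lex.options|= SELECT_DESCRIBE_FLAG;
    }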
|
| | |
|
| |
| |
| |
| |
| |
| | |
make_sortkey
The patch for MDEV-18738 fixed this problem. Adding tests only.
|
| | |
copy_if_not_alloced() did not handle situations where
"from" is a constant string pointing to a substring of "to",
so this code part freed "to" but then tried to copy its old (already freed)
content to a new buffer:
  if (to->realloc(from_length))
    return from;
  if ((to->str_length= MY_MIN(from->str_length, from_length)))
    memcpy(to->Ptr, from->Ptr, to->str_length);
Adding a new code piece that catches such constant substrings
and properly reallocates "to" to preserve its important part referenced
by "from".
|
| |
| |
| |
| |
| | |
This problem was fixed earlier, possibly by f8a800bec81983910a96a5dc38f3aeb9b7528bce,
and is no longer repeatable in 10.1-10.4. Adding tests only.
|
| |
| |
| |
| | |
event_scheduler = OFF
|
| |
| |
| |
| |
| |
| |
| | |
if log_warnings > 1.
This makes ER_DBACCESS_DENIED_ERROR handling the same as what we do for other
"access denied" errors.
|
| |
| |
| |
| |
| |
| | |
event_scheduler = DISABLED
Change error message.
|
| |
| |
| |
| |
| |
| |
| | |
names
Added a call to X509_check_ip_asc() in case server_hostname represents
an IP address.
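A minimal sketch of the host-vs-IP certificate check, assuming the standard
OpenSSL 1.0.2+ APIs X509_check_host() and X509_check_ip_asc(); the surrounding
MariaDB client/verification code is omitted:

    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <openssl/x509.h>
    #include <openssl/x509v3.h>

    // Return true when the certificate matches the peer we meant to reach.
    // If server_hostname parses as an IP address, check it against the
    // certificate's IP subjectAltName entries instead of the DNS names.
    static bool cert_matches_peer(X509 *cert, const char *server_hostname)
    {
      unsigned char addr[sizeof(struct in6_addr)];
      bool is_ip= inet_pton(AF_INET,  server_hostname, addr) == 1 ||
                  inet_pton(AF_INET6, server_hostname, addr) == 1;

      if (is_ip)
        return X509_check_ip_asc(cert, server_hostname, 0) == 1;
      return X509_check_host(cert, server_hostname, 0, 0, NULL) == 1;
    }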
|
| |
| |
| |
| |
| |
| | |
different collations
The fix for MDEV-17064 addressed this problem. Adding tests only.
|
| |
| |
| |
| |
| |
| |
| |
| | |
depends on uninitialised value
Initialized THD::force_read_stats introduced in the patch for MDEV-17605.
Leaving this field uninitialized in the constructor of the THD class may
trigger reading statistical data that is not needed.
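A minimal illustration of the fix pattern, with a hypothetical trimmed-down
THD-like class; the real THD constructor initializes many more members:

    // Hypothetical THD-like class.
    class thd_like
    {
    public:
      bool force_read_stats;        // previously left uninitialized

      thd_like()
        : force_read_stats(false)   // explicit default avoids reading garbage
      {}
    };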
|
| |
| |
| |
| |
| |
| |
| | |
Item_func_group_concat::repack_tree
Item_func_group_concat stores values in `tree`, which is often, but not
always, the same as `&tree_base`.
|
| |
| |
| |
| |
| | |
Move the privilege-specific part of gis2.test to gis_notembedded.test
and the rest to gis.test.
|
| |
| |
| |
| |
| |
| | |
`field->table->stats_is_read' failed.
Fixed the assert by making sure not to use EITS if the column statistics were not allocated.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
with DISTINCT, CAST and other functions
Item_func_min_max::fix_length_and_dec() erroneously set max_length
to UINT32_MAX.
Merge notes:
In 10.3 this problem had been fixed earlier.
During merge to 10.3, do a "null merge" in item_func.cc
|
| | |
|
|\ \ |
|
| | | |
|
| | | |
commit ac275d0b4ad (connect/10.0)
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Wed Mar 27 12:46:20 2019 +0100
Comment out unrecognized command line options: Modified CMakeLists.txt
commit 592f1f75ad6
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Tue Mar 26 19:52:33 2019 +0100
Replace Command not recognized by CMake modified: CMakeLists.txt
commit 00f72199b16
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Tue Mar 26 18:15:08 2019 +0100
- Fix MDEV-15793: Server crash in PlugCloseFile with sql_mode=''
Fixed by replacing sprintf with snprintf in ShowValue to avoid a
buffer overflow. It now always uses a buffer and returns an int
(see the snprintf sketch at the end of this message).
modified: storage/connect/tabdos.cpp
modified: storage/connect/tabfmt.cpp
modified: storage/connect/value.cpp
modified: storage/connect/value.h
- Fix MDEV-18292: CONNECT Engine JDBC not able to issue
simple UPDATE statement from trigger or stored procedure
This was not fixed when the same table was used several times
with different modes. Fixed by checking in the start_stmt function
whether a new statement is compatible. It now does the
same checks as external_lock.
modified: storage/connect/ha_connect.cc
modified: storage/connect/ha_connect.h
- typo
modified: storage/connect/user_connect.cc
- Fix GetTableName that returned wrong value under Windows
modified: storage/connect/ha_connect.cc
- Fix MDEV-13136: enhance CREATE SERVER MyServerName
FOREIGN DATA WRAPPER to work with CONNECT engine
modified: storage/connect/tabjdbc.cpp
- Add a function to retrieve User variable value (DEVELOPMENT only)
modified: storage/connect/ha_connect.cc
modified: storage/connect/jsonudf.cpp
modified: storage/connect/jsonudf.h
modified: storage/connect/tabjdbc.cpp
- Fix MDEV-18192: CONNECT Engine JDBC not able to issue
simple UPDATE statement from trigger or stored procedure
modified: storage/connect/tabext.cpp
modified: storage/connect/tabext.h
modified: storage/connect/tabjdbc.cpp
- Enable CONNECT tables to have triggers
Update version number
modified: storage/connect/ha_connect.cc
- Make user and password defined in CREATE TABLE have precedence on
the ones specified in a Federated Server.
modified: storage/connect/tabjdbc.cpp
- JSONColumns: Copy locally constant strings to fix error in OEM modules
modified: storage/connect/tabjson.cpp
commit 99de7f4e486
Author: Olivier Bertrand <bertrandop@gmail.com>
Date: Sun Jan 27 15:16:15 2019 +0100
- Fix MDEV-18192: CONNECT Engine JDBC not able to issue
simple UPDATE statement from trigger or stored procedure
modified: storage/connect/tabext.cpp
modified: storage/connect/tabext.h
modified: storage/connect/tabjdbc.cpp
- Enable CONNECT tables to have triggers
Update version number
modified: storage/connect/ha_connect.cc
- Make user and password defined in CREATE TABLE have precedence on
the ones specified in a Federated Server.
modified: storage/connect/tabjdbc.cpp
- JSONColumns: Copy locally constant strings to fix error in OEM modules
modified: storage/connect/tabjson.cpp
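A minimal, generic illustration of the sprintf-to-snprintf change mentioned
above for MDEV-15793 (ShowValue); the real CONNECT code formats many value
types, this only shows the bounded-write pattern:

    #include <cstddef>
    #include <cstdio>

    // Format a value into a caller-supplied buffer. Unlike sprintf(),
    // snprintf() never writes more than 'buflen' bytes (including the
    // terminating NUL), and its int return value tells the caller how many
    // characters the full result would have needed.
    static int show_int_value(char *buf, size_t buflen, int value)
    {
      return snprintf(buf, buflen, "%d", value);
    }

    // Usage sketch:
    //   char buf[32];
    //   int n= show_int_value(buf, sizeof buf, 123456);
    //   if (n < 0 || (size_t) n >= sizeof buf) { /* result was truncated */ }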
|
|\ \ \ |
|
| | | | |
|
|\ \ \ \
| | |_|/
| |/| | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
Before killing the server, we must issue FLUSH TABLES in order
to cleanly close any MyISAM system tables, to avoid warnings about
them when restarting.
|
|\ \ \ \
| |/ / / |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
-------
MySQL abnormally exits on KILL command.
Fix
---
The abnormal exit has been fixed.
RB: 20971, 21129, 21237
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
fill_effective_table_privileges on concurrent GRANT and CREATE VIEW
rename a test file.
Closes #1253
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
This patch fixes an invalid read in fill_effective_table_privileges
triggered by a grant_version increase between a PREPARE for a
statement creating a view from I_S and EXECUTE.
A tmp table was created and freed while preparing the statement;
TABLE_LIST::table_name was set to point to the tmp table's
TABLE_SHARE::table_name, which no longer existed after preparing was
done.
The grant version increase made fill_effective_table_privileges,
called during EXECUTE, try to fetch the updated grant info, and
this is where the dangling table name was used.
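A minimal sketch of the dangling-pointer pattern described above and the
obvious remedy (copying the name into memory that outlives the temporary
table); the structure and function names here are hypothetical:

    #include <string>

    // Hypothetical trimmed-down structures.
    struct tmp_share { std::string table_name; };
    struct table_ref
    {
      const char *table_name= nullptr;  // what fill_effective_table_privileges reads
      std::string owned_name;           // statement-lifetime storage
    };

    // Buggy pattern: ref->table_name= share->table_name.c_str();
    // dangles once the tmp table's share is destroyed at the end of PREPARE.
    //
    // Safer pattern: keep a copy that lives as long as the statement and
    // point at that instead.
    static void bind_table_name(table_ref *ref, const tmp_share *share)
    {
      ref->owned_name= share->table_name;
      ref->table_name= ref->owned_name.c_str();
    }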
|
| | | | |
on read-only
Triggers are opened, and the tables used in triggers are prelocked, in
open_tables(). But multi-update can detect which tables will actually
be updated only later, after all main tables are opened.
This means that if a table is used in a multi-update but is not actually
updated, its ON UPDATE triggers are still opened and their tables
prelocked, even though this is unnecessary. This can cause more tables
to be write-locked than needed, causing read_only errors, privilege
errors and lock waits.
Fix: don't open/prelock triggers unless table->updating is true.
In multi-update, after setting table->updating= true, do a second
open_tables() for newly added tables, if any.
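A minimal sketch of the deferred trigger-prelocking logic, with hypothetical
table/trigger types; the real open_tables()/multi-update code paths are far
more involved:

    #include <vector>

    // Hypothetical trimmed-down table reference.
    struct table_ref_t
    {
      bool updating= false;            // will this table actually be modified?
      bool has_update_triggers= false;
      bool triggers_opened= false;
    };

    // Open ON UPDATE triggers (and prelock the tables they use) only for
    // tables the multi-update will really modify.
    static void prelock_update_triggers(std::vector<table_ref_t*> &tables)
    {
      for (table_ref_t *t : tables)
        if (t->updating && t->has_update_triggers && !t->triggers_opened)
          t->triggers_opened= true;    // stands in for opening + prelocking
    }

    // In multi-update: after deciding which tables are updated, set
    // t->updating= true for them and run a second open_tables()-style pass
    // (here: prelock_update_triggers) for any newly added tables.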
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
it always required the UPDATE privilege on views, not being able to detect
when a view was not actually updated in a multi-update.
Fix: instead of marking all tables as "updating" by default,
only set "updating" on tables that will actually be updated
by the multi-update, and mark the view as "updating" if any of the
view's tables is.
|
| | | |
| | | |
| | | |
| | | |
| | | | |
Privilege tables can never be views or temporary tables;
don't even try to open them if they are.
|
| | | | |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
fk_info->referenced_table, &table->s->table_name)' failed in fk_truncate_illegal_if_parent
Don't assert the correctness of FK constraints, as they can be
broken under `SET FOREIGN_KEY_CHECKS= OFF`.
|
|\ \ \ \ |
|
| | | | | |
DEPENDING ON ALGORITHM
For a partitioned table, ensure that the AUTO_INCREMENT values will
be assigned from the same sequence. This is based on the following
change in MySQL 5.6.44:
commit aaba359c13d9200747a609730dafafc3b63cd4d6
Author: Rahul Malik <rahul.m.malik@oracle.com>
Date: Mon Feb 4 13:31:41 2019 +0530
Bug#28573894 ALTER PARTITIONED TABLE ADD AUTO_INCREMENT DIFF RESULT DEPENDING ON ALGORITHM
Problem:
When a partition table is in-place altered to add an auto-increment column,
then its values are starting over for each partition.
Analysis:
In the case of in-place alter, InnoDB is creating a new sequence object
for each partition. It is default initialized. So auto-increment columns
start over for each partition.
Fix:
Assign old sequence of the partition to the sequence of next partition
so it won't start over.
RB#21148
Reviewed by Bin Su <bin.x.su@oracle.com>
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Correctly document the usage of m_max_value. Remove the const
qualifier, so that the implicit assignment operator can be used.
Make all members of ib_sequence private, and add an accessor
member function max_value().
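A minimal illustration of why the const qualifier had to go, using a
simplified sequence class; the real ib_sequence carries more state:

    #include <cstdint>

    // Simplified stand-in for ib_sequence.
    class sequence_like
    {
    public:
      explicit sequence_like(uint64_t max_value= 0) : m_max_value(max_value) {}

      // Accessor instead of exposing the member directly.
      uint64_t max_value() const { return m_max_value; }

    private:
      // Not const: a const data member would suppress the implicit copy
      // assignment operator, and the partitioning code needs to assign one
      // partition's sequence to the next so AUTO_INCREMENT values continue.
      uint64_t m_max_value;
    };

    // With the const qualifier removed, this compiles:
    //   next_partition_seq= previous_partition_seq;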
|
| | | | | |
|
|/ / / /
| | | | |
PROBLEM
=======
ADD INDEX does not update the index length statistics in the
information_schema.TABLES table.
FIX
===
Update the dict_table_t variable with the index length statistics that
are actually calculated after the ALTER, since this variable is used to
populate the information schema index length statistics.
Reviewed by: Bin Su <bin.x.su@oracle.com>
RB: 21277
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
DEFAULT
Field_bit for BIT(20) uses 2 full bytes in the record,
with 4 additional uneven bits in the "null bit area".
Field::set_default(), called from Field_bit::set_default(), erroneously
copied 3 bytes instead of 2 bytes from the record with default values.
Changing Field::set_default() to copy pack_length_in_rec() bytes
instead of pack_length() bytes.
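A minimal sketch of the length distinction, with the BIT(20) numbers from
above hard-coded and hypothetical helper names; the real Field classes are
considerably more complex:

    #include <cstdint>
    #include <cstring>

    // For BIT(20): 16 bits live in the record (2 bytes) and the remaining
    // 4 uneven bits live in the "null bit area", so:
    //   pack_length()        == 3  (total storage, including the uneven bits)
    //   pack_length_in_rec() == 2  (bytes physically located in the record)
    struct bit_field_like
    {
      uint32_t pack_length() const        { return 3; }
      uint32_t pack_length_in_rec() const { return 2; }

      // Copy the default from the record holding default values into the
      // current record: only the in-record bytes may be copied here; the
      // uneven high bits are handled separately via the null-bit area.
      void set_default(uint8_t *rec, const uint8_t *defaults, size_t offset) const
      {
        memcpy(rec + offset, defaults + offset, pack_length_in_rec());
      }
    };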
|