partitioned table
ha_partition stores records in the array m_ordered_rec_buffer and uses
it for the priority queue in ordered index scans. By the time a record
is restored from the array, its blob buffers may already have been
freed or rewritten.
The solution is to take temporary ownership of the cached blob buffers
via String::swap(). When the record is restored from
m_ordered_rec_buffer, the ownership is returned to the table fields.
Cleanups:
init_record_priority_queue(): removed a needless !m_ordered_rec_buffer
check, as the same assertion appears a few lines earlier.
Added dbug_print_row() for an arbitrary row pointer.
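A minimal sketch of the affected scenario (table and data are hypothetical):
an ordered index scan over a partitioned table with a blob column, which is
what drives rows through m_ordered_rec_buffer:
CREATE TABLE t1 (pk INT PRIMARY KEY, b BLOB, KEY (b(64)))
ENGINE=InnoDB
PARTITION BY HASH (pk) PARTITIONS 4;
INSERT INTO t1 VALUES (1, REPEAT('x', 1000)), (2, REPEAT('y', 1000));
SELECT pk, b FROM t1 ORDER BY b;  -- merge-sorts rows from several partitions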
The failure was caused by inconsistent truncation rules: the value of the
virtual column could sometimes be evaluated to '2000' instead of '0000' for
the value 'a'.
The reason why `c YEAR AS ('aaaa')` was not evaluated the same way is that
len=4 is a special case inside Field_year::store.
The correct fix: always evaluate a bad value to 0000 instead of 2000.
Truncated values should be evaluated as usual.
$support_virtual_index is finally changed to 1 in gcol.gcol_ins_upd_innodb,
which is also sufficient for testing.
The test from the original bug report is also added.
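To illustrate the fixed rule (a hypothetical table, based on the values
quoted above):
CREATE TABLE t1 (a VARCHAR(10), c YEAR AS (a) VIRTUAL);
INSERT INTO t1 (a) VALUES ('a');
SELECT c FROM t1;  -- must be 0000 for the bad value, never 2000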
The assertion failed in handler::ha_reset upon SELECT under
READ UNCOMMITTED from a table with an index on a virtual column.
This was a debug-only failure, though the problem is much wider:
* MY_BITMAP is a structure containing my_bitmap_map, the latter being a raw
bitmap.
* read_set, write_set and vcol_set of TABLE are pointers to MY_BITMAP.
* The rest of the MY_BITMAPs are stored in TABLE and TABLE_SHARE.
* The pointers to the stored MY_BITMAPs, like orig_read_set etc., and
sometimes all_set and tmp_set, are assigned to those pointers.
* Sometimes tmp_use_all_columns is used to substitute the raw bitmap
directly with all_set.bitmap.
* Sometimes the bitmaps are even modified directly, as in
TABLE::update_virtual_field(), which calls bitmap_clear_all(&tmp_set).
The last three bullets, when used together (which is almost always the
case), make the program flow cumbersome and impossible to follow, not to
mention the errors they cause, like this MDEV-17556, where the tmp_set
pointer was assigned to read_set, write_set and vcol_set, then its bitmap
was substituted with all_set.bitmap by a dbug_tmp_use_all_columns() call,
and then bitmap_clear_all(&tmp_set) was applied on top of all that.
To untangle this knot, one rule should be applied:
* Never substitute bitmaps! This patch is about exactly that.
The orig_* and all_set bitmaps are already never substituted.
This patch changes the following function prototypes:
* tmp_use_all_columns and dbug_tmp_use_all_columns
now accept MY_BITMAP** and return MY_BITMAP* instead of my_bitmap_map*.
* tmp_restore_column_map and dbug_tmp_restore_column_maps now accept
MY_BITMAP* instead of my_bitmap_map*.
These functions now substitute read_set/write_set/vcol_set directly,
and won't touch the underlying bitmaps.
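A minimal sketch of the failing scenario from the first paragraph
(hypothetical table and index names):
CREATE TABLE t1 (a INT, v INT AS (a + 1) VIRTUAL, KEY (v)) ENGINE=InnoDB;
INSERT INTO t1 (a) VALUES (1), (2);
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM t1 FORCE INDEX (v);  -- triggered the debug assertion in handler::ha_reset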
Don't allow group_concat_max_len values >= 4GB
(they never worked anyway).
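A sketch of the new boundary (the exact server response at the limit,
rejection versus capping with a warning, is an assumption):
SET SESSION group_concat_max_len = 4294967295;  -- largest value below 4GB, still allowed
SET SESSION group_concat_max_len = 4294967296;  -- >= 4GB: no longer accepted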
This follows up
commit 94a520ddbe39ae97de1135d98699cf2674e6b77e and
commit 7c5519c12d46ead947d341cbdcbb6fbbe4d4fe1b.
After these changes, the default test suites on a
cmake -DWITH_UBSAN=ON build no longer fail due to passing
null pointers as parameters that are declared to never be null,
but plenty of other runtime errors remain.
COMPOSITE PREFIX INDEX
Fix prefix key comparison in partitioning. Comparisons must
take into account no more than prefix_len characters.
It used to compare prefix_len*mbmaxlen bytes.
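A sketch of a setup where the distinction matters (names are hypothetical):
with a multi-byte charset, a 3-character prefix key must compare at most
3 characters, not 3*4 bytes:
CREATE TABLE t1 (
  a VARCHAR(100) CHARACTER SET utf8mb4,
  KEY (a(3))
)
PARTITION BY KEY (a) PARTITIONS 2;
INSERT INTO t1 VALUES ('ёёёxx'), ('ёёёyy');
SELECT * FROM t1 WHERE a = 'ёёёxx';  -- comparisons via the prefix stop after 3 characters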
truncate_double() did not take into account the max_value
limit in the case when dec < NOT_FIXED_DEC.
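An assumed illustration: DOUBLE(M,D) gives dec < NOT_FIXED_DEC together
with an implied max_value that truncate_double() must respect:
CREATE TABLE t1 (d DOUBLE(4,1));
INSERT IGNORE INTO t1 VALUES (999.99);  -- must be capped at the max_value 999.9
SELECT d FROM t1;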
NO_ZERO_DATE with ALGORITHM=INPLACE
Accept table_name and db_name instead of table_share in
make_truncated_value_warning.
(optimized builds)
For the DECIMAL[(M[,D])] datatype, max_sort_length was not being honoured,
which led to a buffer overflow while making the sort key. The fix is to
create sort keys for decimals with at most max_sort_length bytes.
Important:
The minimum value of max_sort_length has been raised to 8 (previously 4),
so fixed-size datatypes like DOUBLE and BIGINT are not truncated for
lower values of max_sort_length.
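A sketch of the behaviour being fixed (hypothetical data): with a small
max_sort_length, sorting a wide DECIMAL must build a bounded sort key
rather than overflow the buffer:
SET SESSION max_sort_length = 8;  -- 8 is now the minimum value
CREATE TABLE t1 (d DECIMAL(65,30));
INSERT INTO t1 VALUES (1.5), (2.25);
SELECT d FROM t1 ORDER BY d;  -- sort keys are limited to max_sort_length bytes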
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_BUILD_TYPE=Debug
Maintainer mode makes all warnings errors. This patch fixes the warnings,
mostly about the deprecated `register` keyword.
Too many warnings came from Mroonga, so I gave up on it.
When neither MSAN nor Valgrind is enabled, declare
Field::mark_unused_memory_as_defined() as an empty inline function,
instead of declaring it as a virtual function.
MDEV-22073 MSAN use-of-uninitialized-value in
collect_statistics_for_table()
Other things:
innodb.analyze_table was changed to mainly test statistics
collection. This was discussed with Marko.
Other things:
- Removed innodb_encryption_tables.test from valgrind as it
takes a REALLY long time
MDEV-21056 Assertion `global_status_var.global_memory_used == 0'
failed upon shutdown after query with DEFAULT on a geometry
field
Error message now shows the whole value.
The fix consists of three commits backported from 10.3:
1) Cleanup of isnan() portability checks
(cherry picked from commit 7ffd7fe9627d1f750a3712aebb4503e5ae8aea8e)
2) Cleanup of isinf() portability checks
The original problem was reported by Wlad: re-compilation of 10.3 on top
of a 10.2 build would cache undefined HAVE_ISINF from 10.2, whereas it is
expected to be 1 in 10.3.
std::isinf() seems to be available on all supported platforms.
(cherry picked from commit bc469a0bdf85400f7a63834f5b7af1a513dcdec9)
3) Use std::isfinite in C++ code
This is an addition to the parent revision, fixing build failures.
(cherry picked from commit 54999f4e75f42baca484ae436b382ca8817df1dd)
disabling warnings in read_statistics_for_table()
The flag is_stat_field was not set for the min_value and max_value field
items inside the table share. Setting it is a hard requirement, as we
don't want to throw truncation warnings when we read values from the
statistics tables into the column statistics of table share fields.
in row_upd_sec_index_entry or error code 126: Index is corrupted upon
UPDATE with TIMESTAMP..ON UPDATE
Remove the special treatment of a bare DEFAULT keyword that made it
behave inconsistently and differently from DEFAULT(column).
Now all forms of explicit assignment of a default column value
behave identically, and all count as an explicitly assigned value
(for the purpose of ON UPDATE NOW).
Follow-up for c7c481f4d91.
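A sketch of the now-unified behaviour (hypothetical table): both
assignment forms below count as explicit, so neither is treated
differently for ON UPDATE NOW:
CREATE TABLE t1 (
  a INT,
  ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
INSERT INTO t1 (a) VALUES (1);
UPDATE t1 SET a = 2, ts = DEFAULT;      -- now counts as an explicit assignment,
UPDATE t1 SET a = 3, ts = DEFAULT(ts);  -- identical to this form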
false)' in row_upd_sec_index_entry or error code 126: Index is corrupted
upon DELETE with PAD_CHAR_TO_FULL_LENGTH
This patch allows the server to open old tables that have
"bad" generated columns (i.e. indexed virtual generated columns, or
persistent generated columns) that depend on sql_mode,
for general operations like SELECT, INSERT, DROP, etc.
Warnings are issued in such cases.
Only these commands are now disallowed and return an error
(see the sketch after this list):
- CREATE TABLE introducing a "bad" generated column
- ALTER TABLE introducing a "bad" generated column
- CREATE INDEX introducing a "bad" generated column
(i.e. adding an index on a virtual generated column
that depends on sql_mode).
Note, these commands are allowed:
- ALTER TABLE removing a "bad" generated column
- ALTER TABLE removing an index from a "bad" virtual generated column
- DROP INDEX removing an index from a "bad" virtual generated column
but only if the table does not have any "bad" columns as a result.
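A sketch of the allowed/disallowed split (hypothetical table; 'v' depends
on PAD_CHAR_TO_FULL_LENGTH because it copies a CHAR into a VARCHAR):
-- now rejected: introduces a "bad" persistent generated column
CREATE TABLE t1 (a CHAR(5), v VARCHAR(5) AS (a) PERSISTENT);
-- still allowed on an old table that already contains such a column:
SELECT * FROM t1;              -- opens the table, issues a warning
ALTER TABLE t1 DROP COLUMN v;  -- removing the "bad" column is allowed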
row_upd_sec_index_entry or error code 126: Index is corrupted upon DELETE
with PAD_CHAR_TO_FULL_LENGTH
This change takes into account a column's GENERATED ALWAYS AS
expression dependency on sql_mode's PAD_CHAR_TO_FULL_LENGTH and
NO_UNSIGNED_SUBTRACTION flags.
Indexed virtual columns as well as persistent generated columns are
now not allowed to have such dependencies, to avoid inconsistent data
or index files on sql_mode changes.
So an error is now returned in cases like this:
CREATE OR REPLACE TABLE t1
(
a CHAR(5),
v VARCHAR(5) AS (a) PERSISTENT -- CHAR->VARCHAR or CHAR->TEXT = ERROR
);
The functions RPAD() and RTRIM() can now remove the dependency on
PAD_CHAR_TO_FULL_LENGTH, so this can be used instead:
CREATE OR REPLACE TABLE t1
(
a CHAR(5),
v VARCHAR(5) AS (RTRIM(a)) PERSISTENT
);
Note: unlike CHAR->VARCHAR and CHAR->TEXT, this still works;
no RPAD(a) is needed:
CREATE OR REPLACE TABLE t1
(
a CHAR(5),
v CHAR(5) AS (a) PERSISTENT -- CHAR->CHAR is OK
);
More sql_mode flags may affect the values of generated columns.
They will be addressed separately.
See the comments in sql_mode.h for implementation details.
'varchar(20)'
Cherry picking:
Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95
Description:
============
In row based replication, when replicating from a table with a field with
character set utf8mb3 to the same table with the same field set to
character set utf8mb4, a confusing error message is produced:
For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4'
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to
type 'varchar(1)'"
A similar issue exists for the CHAR type as well.
Issues with respect to BLOB types:
For BLOB: LONGBLOB to TINYBLOB - the error message displays an incorrect
blob type:
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to
type 'tinyblob'"
For BINARY to BINARY - the error message displays an incorrect type for
the master side field:
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to
type 'binary(10)'"
A similar issue exists for the VARBINARY type; it is displayed as
'VARCHAR'.
Analysis:
=========
In row based replication, charset information is not sent as part of the
metadata from master to slave.
For a VARCHAR field, its character length is converted into the
equivalent number of octets/bytes and stored internally. At the time of
displaying the data to the user, it is converted back to the original
character length.
For example:
VARCHAR(2) with utf8mb3 is stored as 2*3 = VARCHAR(6).
At the time of displaying it to the user:
VARCHAR(6) with charset utf8mb3: 6/3 = VARCHAR(2).
At present, the internally converted octet length is sent from master to
slave without the charset information. On the slave side, if the type
conversion fails, the 'show_sql_type' function is used to get the
type-specific information from the metadata. Since no charset information
is available, the field type is displayed as VARCHAR(6).
This results in a confusing error message.
For CHAR fields:
CHAR(1) with utf8mb3 - CHAR(3)
CHAR(1) with utf8mb4 - CHAR(4)
The 'show_sql_type' function, which retrieves type information from the
metadata, uses (bytes / local charset length) to get the actual character
length. If the slave's charset is 'utf8mb4', then
CHAR(3/4) --> CHAR(0)
CHAR(4/4) --> CHAR(1).
This results in a confusing error message.
Analysis of the BLOB type issue:
A BLOB's length is represented in two forms:
1. Actual length,
i.e.
(length < 256) type= MYSQL_TYPE_TINY_BLOB;
(length < 65536) type= MYSQL_TYPE_BLOB; ...
2. packlength - the number of bytes used to represent the length of the
blob:
1 - tinyblob
2 - blob ...
In row based replication, only the packlength is written to the binary
log. On the slave side this packlength is interpreted as the actual
length of the blob. Hence the length is always < 256 and the type is
displayed as tiny blob.
Analysis of the BINARY to BINARY type issue:
The character set information is needed to identify a field's type as
char or binary. Since the master side character set information is not
available on the slave side, both binary and char fields are displayed
as char.
Fix:
===
For CHAR and VARCHAR fields, display their length in octets for both the
source and target fields. For the target field, display the charset
information if it is relevant.
For BLOB types, the code was changed to use the packlength and display
the appropriate blob type in the error message.
For BINARY and VARBINARY fields, use the slave side character set as the
reference to map them to binary or varbinary fields.
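A sketch of the replication setup behind the first example (hypothetical
table names, matching the utf8mb3/utf8mb4 case described above):
-- on the master:
CREATE TABLE test.t1 (c VARCHAR(1) CHARACTER SET utf8);     -- i.e. utf8mb3
-- on the slave, the same table with a wider character set:
CREATE TABLE test.t1 (c VARCHAR(1) CHARACTER SET utf8mb4);
-- before the fix, a failed conversion reported the master field by its
-- internal octet length, 'varchar(3)', instead of VARCHAR(1) + charset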
* Update wrong zip-code
To fix the crash, we need to make sure that the server stores
statistical values in the statistical tables in a multi-byte-safe way.
Also, there is no need to throw warnings if there is truncation while
storing values from statistical fields.
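A sketch of the kind of data involved (hypothetical table; multi-byte
min/max values end up in the persistent statistics tables):
CREATE TABLE t1 (a VARCHAR(100) CHARACTER SET utf8mb4);
INSERT INTO t1 VALUES (REPEAT('ё', 100)), ('abc');
-- min/max values of 'a' are written to mysql.column_stats; truncating
-- the stored value must not cut through a multi-byte character
ANALYZE TABLE t1 PERSISTENT FOR ALL;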
Item::STRING_ITEM for Item_user_var_as_out_param detection
This is a part of "MDEV-18045 Backporting the MDEV-15497 changes to 10.2 branch"
This is a part of "MDEV-18045 Backporting the MDEV-15497 changes to 10.2 branch"
NO_AUTO_VALUE_ON_ZERO mode
This is a part of "MDEV-18045 Backporting the MDEV-15497 changes to 10.2 branch"
Field::set_warning_truncated_wrong_value upon inserting into temporary table
Remove TABLE_SHARE::error_table_name() and TABLE_SHARE::orig_table_name
(the latter was allocated in the wrong memroot in this bug).
Instead, simply set TABLE_SHARE::table_name correctly.
column.
The error message was modified.
Then the TABLE_SHARE::error_table_name() implementation was taken from
10.3, to be used as the name of the table in this message.
bitmap_is_set(table->read_set, field_index))' failed in Field_num::get_date
- Clean up DEFAULT() to work only with the default value and to print
itself correctly.
- Fix the DBUG_ASSERT about field read/write.
- Fix field marking for write to be really based on the
thd->mark_used_columns flag.
Problem was that Create_field::create_length_to_internal_length()
calculated a different pack_length for NEWDECIMAL compared to the
Field_new_decimal constructor, which led to some unused bytes
in the middle of the record, which Aria didn't like.
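A sketch of a statement family that exercises this path (hypothetical
table; whether this exact statement triggers the mismatch is an
assumption, but any rebuild of an Aria table with a DECIMAL column goes
through Create_field and recomputes pack_length):
CREATE TABLE t1 (d DECIMAL(10,2), x INT) ENGINE=Aria;
INSERT INTO t1 VALUES (1.25, 1);
-- the copying ALTER rebuilds the record, where the recomputed
-- pack_length must match the Field_new_decimal constructor
ALTER TABLE t1 MODIFY x BIGINT;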
for the small InnoDB table
This bug was introduced by the patch 6c414fcf89510215d6d3466eb9992d444eadae89.
The patch did not take into account that some objects of the Field_*
types are created only for TABLE_SHARE, and the field 'table' is set to
NULL for them. In particular, such objects are created to store
statistical min/max values for columns.
number