in `ulonglong=ulong*uint` multiplication is done in ulong,
wrapping around on 32-bit.
This became visible after C/C (Connector/C) changed the default
charset to utf8, thus changing mbmaxlen from 1 to 3.
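For illustration only (generic stand-in variables, not the actual server code): when both operands are 32-bit, the product is computed in 32 bits and only then widened, so it silently wraps; casting one operand to a 64-bit type first makes the multiplication happen in 64 bits.

  #include <stdint.h>
  #include <stdio.h>

  int main() {
    // Stand-ins for a ulong length and a uint mbmaxlen on a 32-bit build.
    uint32_t key_length = 2000000000U;
    uint32_t mbmaxlen   = 3;

    // Buggy pattern: the product wraps in 32 bits, and only the already
    // truncated result is widened to 64 bits.
    uint64_t wrong = key_length * mbmaxlen;

    // Fix: widen one operand first, so the multiplication is done in 64 bits.
    uint64_t right = (uint64_t) key_length * mbmaxlen;

    printf("wrong=%llu right=%llu\n",
           (unsigned long long) wrong, (unsigned long long) right);
    return 0;
  }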
- Removed the test of HA_FT_WTYPE == HA_KEYTYPE_FLOAT, as this never worked
  (HA_KEYTYPE_FLOAT is an enum)
- Define HA_FT_MAXLEN to 126 (it was tested before, but never defined)
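A hedged illustration of why such a check cannot work if it is a preprocessor conditional (demo names only, not the real MariaDB symbols): the preprocessor replaces any identifier that is not a macro, including enumerators, with 0, so the comparison silently degenerates to 0 == 0.

  #include <stdio.h>

  // Demo stand-ins, not the real definitions.
  enum ha_key_types { HA_KEYTYPE_FLOAT_DEMO = 7 };
  #define HA_FT_WTYPE_DEMO HA_KEYTYPE_FLOAT_DEMO

  // HA_FT_WTYPE_DEMO expands to HA_KEYTYPE_FLOAT_DEMO, which is an
  // enumerator, not a macro, so the preprocessor treats both sides as 0.
  #if HA_FT_WTYPE_DEMO == HA_KEYTYPE_FLOAT_DEMO
  #define FT_CHECK "always taken (both sides evaluate to 0)"
  #else
  #define FT_CHECK "never taken"
  #endif

  int main() {
    printf("#if result: %s\n", FT_CHECK);
    return 0;
  }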
correct metadata info
Added metadata info after prepare of EXPLAIN/ANALYZE.
field if indicator was specified
test added (bug is fixed)
In this commit we are adding three more status variables to SHOW SLAVE
STATUS: Slave_DDL_Groups, Slave_Non_Transactional_Groups and
Slave_Transactional_Groups.
Slave_DDL_Groups: counts the occurrence of DDL statements.
Slave_Non_Transactional_Groups: counts the occurrence of
non-transactional event groups.
Slave_Transactional_Groups: counts the occurrence of transactional
event groups.
Patch credit: Kristian Nielsen
The merge only covered 10.1 up to
commit 4d248974e00eb915a2fc433cc6b2fb5146281594.
Actually merge the changes up to
commit 0a534348c75cf435d2017959855de2efa798fd0b.
Also, remove the unused InnoDB field trx_t::abort_type.
-DWITH_ASAN can now be used as well, on x64.
Fix many clang-cl warnings.
replication
'size_t' to 'type', possible loss of data)
Handle string length as size_t, consistently (almost always :)).
Change function prototypes to accept size_t where in the past
ulong or uint were used. Change local/member variables to size_t
when appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
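As a minimal sketch of the warning and the fix (generic function names, not the server's actual ones): passing a size_t length to a function that takes uint is what MSVC reports as warning C4267; widening the prototype to size_t removes both the warning and the potential truncation on 64-bit Windows.

  #include <string.h>
  #include <stdio.h>

  // Old-style prototype: a size_t argument gets narrowed to uint,
  // which MSVC reports as warning C4267 on 64-bit builds.
  static void store_string_old(const char *str, unsigned int length) {
    printf("old: %.*s (%u bytes)\n", (int) length, str, length);
  }

  // Fixed prototype: the length stays size_t end to end.
  static void store_string_new(const char *str, size_t length) {
    printf("new: %.*s (%zu bytes)\n", (int) length, str, length);
  }

  int main() {
    const char *s = "hello";
    size_t len = strlen(s);                   // strlen() already returns size_t
    store_string_old(s, (unsigned int) len);  // old code needed a narrowing cast
    store_string_new(s, len);                 // no conversion needed any more
    return 0;
  }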
Item_param::set_value() did not set Item::collation and
Item_param::str_value_ptr.str_charset properly. So both
metadata and data for OUT parameters were sent in a wrong
way to the client.
This patch removes the old implementation of Item_param::set_value()
and rewrites it using Type_handler::Item_param_set_from_value(),
so setting IN and OUT parameters now shares a lot of code.
1. Item_param::set_str() now:
- accepts two additional parameters fromcs, tocs
- sets str_value_ptr, to make sure it's always in sync with str_value,
even without Item_param::convert_str_value()
- does collation.set(tocs, DERIVATION_COERCIBLE),
to make sure that DTCollation is valid even without
Item_param::convert_str_value()
2. Item_param::set_value(), which is used to set OUT parameters,
now reuses Type_handler::Item_param_set_from_value().
3. Cleanup: moving Item_param::str_value_ptr to private,
as it's not needed outside.
4. Cleanup: adding a new virtual method
Settable_routine_parameter::get_item_param()
and using it in a few new DBUG_ASSERTs, where
Item_param cannot appear.
After this change:
1. Assigning of IN parameters works as before:
a. Item_param::set_str() is called and sets the value as a binary string
b. The original value is sent to the query used for binary/general logging
c. Item_param::convert_str_value() converts the value from the client
character set to the connection character set
2. Assigning of OUT parameters works in the new way:
a. Item_param::set_str() is called and sets the value
using the source Item's collation, so both Item::collation
and Item_param::str_value_ptr.str_charset are properly set.
b. Protocol_binary::send_out_parameters() sends the
value to the client correctly:
- Protocol::send_result_set_metadata() uses Item::collation.collation
(which is now properly set), to detect if conversion is needed,
and sends a correct collation ID.
- Protocol::send_result_set_row() calls Type_handler::Item_send_str(),
which uses Item_param::str_value_ptr.str_charset
(which is now properly set) to actually perform the conversion.
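To illustrate the idea in isolation (simplified, invented types; this is not the server's actual class hierarchy): both the IN and the OUT assignment paths now go through one setter that stores the string value together with its character-set/collation metadata, so data and metadata can no longer disagree.

  #include <assert.h>
  #include <stddef.h>
  #include <string>

  // Simplified stand-ins for a character set and a parameter item.
  struct Charset { const char *name; };

  struct Param {
    std::string value;
    const Charset *value_cs = nullptr;  // charset of the stored bytes
    const Charset *meta_cs  = nullptr;  // collation reported in the metadata

    // Shared setter: value and metadata are always updated together.
    void set_str(const char *str, size_t len, const Charset *cs) {
      value.assign(str, len);
      value_cs = cs;
      meta_cs = cs;
    }

    // OUT parameters reuse the same path instead of poking value directly.
    void set_out_value(const std::string &src, const Charset *cs) {
      set_str(src.data(), src.size(), cs);
    }
  };

  int main() {
    Charset utf8 = { "utf8" };
    Param p;
    p.set_out_value("resultat", &utf8);
    assert(p.value_cs == p.meta_cs);  // metadata and data now agree
    return 0;
  }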
TODO: enable MDEV-13049 optimization for 10.3
This address is from a reserved range (RFC 5737), which ensures that DNS
won't resolve it into a name.
The test expects 192.168.0.1 not to resolve to anything.
This is a fragile assumption; use 127.0.0.2 instead
(which is only marginally better).
accept proxy protocol header from client connections.
The new server variable 'proxy_protocol_networks' contains the list
of networks from which the proxy header is accepted.
Parameters can be MYSQL_TYPE_VARCHAR for long data load.
uploaded 10.0, analyzed everything with the Impact=High
(and a couple of Medium)
Revert commit db0917f68f, because the fix for MDEV-12696
is coming from 5.5 and 10.1 in this merge.
- Changed to 'strict'
- Fixed scope of variables
- Made timings smaller for repair, check, flush and alter to get them to
  trigger earlier
Also, implement MDEV-11027 a little differently from 5.5 and 10.0:
recv_apply_hashed_log_recs(): Change the return type back to void
(DB_SUCCESS was always returned).
Report progress also via systemd using sd_notifyf().
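For reference, sd_notifyf() is libsystemd's formatted variant of sd_notify(); a small hedged sketch of reporting progress this way (the message text and percentage variable are illustrative, not the server's exact output).

  // Link with libsystemd, e.g.: g++ demo.cc $(pkg-config --cflags --libs libsystemd)
  #include <systemd/sd-daemon.h>

  // Illustrative progress reporter; the real recovery code differs.
  static void report_recovery_progress(unsigned pct) {
    // Passing 0 keeps NOTIFY_SOCKET available for later notifications.
    sd_notifyf(0, "STATUS=Applying recovered redo log: %u%% done", pct);
  }

  int main() {
    for (unsigned pct = 0; pct <= 100; pct += 25)
      report_recovery_progress(pct);
    return 0;
  }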
mysql_prune_stmt_list() was walking the list following
element->next pointers, but inside the loop it was invoking
list_add(element) that modified element->next. So, mysql_prune_stmt_list()
failed to visit and reset all elements, and some of them were left
with pointers to invalid MYSQL.
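A minimal generic sketch of this failure mode (simplified list type, not the client library's actual LIST structure): if the loop reads element->next after list_add() has already re-linked the element into the other list, the walk jumps into that list and never reaches the remaining elements; remembering the successor before re-linking fixes it.

  #include <stdio.h>

  struct Node { int id; Node *next; };

  // Prepend a node to another list; this rewrites node->next.
  static void list_add(Node **head, Node *node) {
    node->next = *head;
    *head = node;
  }

  int main() {
    Node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
    Node *source = &a, *pruned = NULL;

    // Buggy walk:  for (Node *e = source; e; e = e->next) list_add(&pruned, e);
    // After list_add(), e->next already points into 'pruned', so the loop
    // stops early and the rest of 'source' is never visited or reset.

    // Fixed walk: save the successor before re-linking the element.
    for (Node *e = source; e; ) {
      Node *next = e->next;
      list_add(&pruned, e);
      e = next;
    }

    for (Node *e = pruned; e; e = e->next)
      printf("pruned node %d\n", e->id);
    return 0;
  }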
Prepare of ANALYZE now responds in the same way as EXPLAIN.
group-concat-max-len=1M
test_bug14169 was setting session group_concat_max_len=1024 and
did not clean it up. Because of that, test_ps_query_cache, when run
with group-concat-max-len != 1024, had different values in its
connections and was inserting into the query cache when a hit was
expected. Fixed by adding a clean-up for the value in test_bug14169.
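A hedged sketch of that kind of clean-up using the C API (the exact statement in mysql_client_test may differ; the helper name is invented and mysql is assumed to be an already-connected handle).

  #include <mysql.h>
  #include <stdio.h>

  // Restore the session variable so later tests see a consistent value
  // across connections; SET SESSION ... = DEFAULT falls back to the global value.
  static void test_bug14169_cleanup(MYSQL *mysql) {
    if (mysql_query(mysql, "SET SESSION group_concat_max_len=DEFAULT"))
      fprintf(stderr, "cleanup failed: %s\n", mysql_error(mysql));
  }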
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>