| Commit message | Author | Age | Files | Lines |
Let us introduce a dummy variable innodb_max_purge_lag_wait
for waiting until the InnoDB history list length is below
the user-specified limit. Specifically,
SET GLOBAL innodb_max_purge_lag_wait=0;
should wait for all history to be purged. This could be useful
when upgrading from an older version to MariaDB 10.3 or later,
to avoid hitting MDEV-15912.
Note: the history cannot be purged if there exist transactions
that may see old versions.
Reviewed by: Vladislav Vaintroub
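A rough sketch of the wait semantics, assuming a simple polling loop; the
variable name, interval and memory order below are illustrative and not the
actual InnoDB implementation:

  #include <atomic>
  #include <chrono>
  #include <thread>

  // Maintained elsewhere by the purge subsystem (placeholder).
  std::atomic<unsigned long> history_list_length{1000000};

  // SET GLOBAL innodb_max_purge_lag_wait=N blocks until the backlog <= N;
  // N=0 therefore waits until all history has been purged.
  void wait_for_purge_lag(unsigned long limit)
  {
    while (history_list_length.load(std::memory_order_relaxed) > limit)
      std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }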
|
session_track_system_variables and max_relay_log_size.
Lock LOCK_global_system_variables around the get_one_variable() call
in Session_sysvars_tracker::store_variable().
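A minimal sketch of the locking pattern, with a plain std::mutex and a stub
accessor standing in for LOCK_global_system_variables and get_one_variable();
only the lock/read/unlock ordering is the point:

  #include <mutex>
  #include <string>

  std::mutex global_system_variables_lock;   // stand-in for LOCK_global_system_variables

  static std::string get_one_variable_stub(const std::string &name)
  {
    return "value-of-" + name;               // placeholder for the real accessor
  }

  std::string store_variable(const std::string &name)
  {
    // Hold the global lock while the value is copied, so a concurrent
    // SET GLOBAL cannot free or resize the underlying value mid-read.
    std::lock_guard<std::mutex> guard(global_system_variables_lock);
    return get_one_variable_stub(name);
  }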
|
MY_MEMORY_ORDER_RELAXED) == X_LOCK_DECR
InnoDB frees the block lock during buffer pool shrinking while another
thread has yet to release the block lock. While shrinking the
buffer pool, InnoDB allows a page to be freed unless it is buffer
fixed. In some cases, InnoDB releases the latch after unfixing the
block.
Fix:
====
- InnoDB should unfix the block only after releasing the latch.
- Add more assertions to check the buffer fix while accessing the page.
- Introduced a block_hint structure to store a buf_block_t pointer
and allow accessing the buf_block_t pointer only by passing a functor.
It yields the original buf_block_t* pointer if it is still valid,
or nullptr if the pointer has become stale.
- Replace buf_block_is_uncompressed() with
buf_pool_t::is_block_pointer()
This change is motivated by a change in mysql-5.7.32:
mysql/mysql-server@46e60de444a8fbd876cc6778a7e64a1d3426a48d
Bug #31036301 ASSERTION FAILURE: SYNC0RW.IC:429:LOCK->LOCK_WORD
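A rough sketch of the block_hint idea with stub types (these are not the
InnoDB declarations): the cached pointer is reachable only through a functor,
which receives nullptr once the pointer has become stale:

  struct buf_block_stub { bool valid; };      // stand-in for buf_block_t

  // Stand-in for buf_pool_t::is_block_pointer(); the real check verifies
  // that the pointer still refers to a block inside the buffer pool.
  static bool is_block_pointer(const buf_block_stub *b)
  {
    return b != nullptr && b->valid;
  }

  class block_hint
  {
    buf_block_stub *m_block= nullptr;         // possibly stale cached pointer
  public:
    void store(buf_block_stub *b) { m_block= b; }

    template <typename F>
    auto with_block(F &&f) const
    {
      buf_block_stub *b= is_block_pointer(m_block) ? m_block : nullptr;
      return f(b);  // caller sees either the original pointer or nullptr
    }
  };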
|
A deadlock is possible between the applier thread and a local committing
thread when a FLUSH TABLE is active.
The applier thread should skip table share checks and locks when opening
a table.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
|
Problem:
1. The server terminates abnormally when phrase search doesn't
filter out doc_ids correctly. This problem has been fixed in bug
2. Wrong query result: It's a regression from the bug #22709692 fix.
That fix optimizes full-text search queries with a LIMIT clause:
when the FTS expression involves only union operations, we fetch only
the number of doc_ids specified by the LIMIT clause.
Full-text phrase search is not a union operation, but phrase search
with a plugin parser was considered a union operation.
In phrase search with a LIMIT clause, we fetch a limited number of
doc_ids for each token, and if any selected doc_id does not contain
all tokens in the correct order, we do not include that row_id in the
result set.
Therefore phrase search returns fewer rows than the number of
qualified rows that exist in the table.
Fix:
Added a condition so that phrase search with a plugin parser is not
considered a union operation.
RB: 24925
Reviewed by : Annamalai Gurusami <annamalai.gurusami@oracle.com>
This is a cherry-pick of
mysql/mysql-server@5549920b7a33ef33034461d973a9ecb17ce49799
without a test case, because the test case depends on an n-gram
tokenizer that will be missing from MariaDB until MDEV-10267 is added.
|
Problem:
In full-text phrase search, we filter out rows that do not contain
all the tokens in the phrase.
If we do not filter out doc_ids that do not appear in all the
tokens' doc_id lists, then we hit an assertion.
Fix:
If any token's last doc_id is equal to the i-th doc_id of the first
token's doc_id list, then filter out the rest of the higher doc_ids.
RB: 24909
Reviewed by : Annamalai Gurusami <annamalai.gurusami@oracle.com>
This is a cherry-pick of
mysql/mysql-server@5aa075277dfe84a17a0331c57a6fe9b91dafb4cf
but without a test case, because the test case depends on an n-gram
tokenizer that will be missing from MariaDB until MDEV-10267 is added.
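An illustrative sketch of the filtering rule (plain C++, not the InnoDB FTS
code): candidates come from the first token's sorted doc_id list, and once a
candidate exceeds any token's last doc_id, the remaining higher candidates are
dropped:

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  using doc_id_t= uint64_t;

  std::vector<doc_id_t> filter_phrase_candidates(
      const std::vector<std::vector<doc_id_t>> &token_lists)
  {
    std::vector<doc_id_t> result;
    if (token_lists.empty())
      return result;

    for (doc_id_t candidate : token_lists[0])
    {
      bool keep= true;
      for (const auto &list : token_lists)
      {
        // Candidates above this token's last doc_id can never match: stop.
        if (list.empty() || candidate > list.back())
          return result;
        if (!std::binary_search(list.begin(), list.end(), candidate))
        {
          keep= false;
          break;
        }
      }
      if (keep)
        result.push_back(candidate);
    }
    return result;
  }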
|
Also add a new member Saved_Size in the Global structure.
modified: storage/connect/global.h
modified: storage/connect/plugutil.cpp
modified: storage/connect/user_connect.cc
modified: storage/connect/jsonudf.cpp
- Add session variables json_all_path and default_depth
modified: storage/connect/ha_connect.cc
modified: storage/connect/mongo.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabxml.cpp
- ADD column options JPATH and XPATH
Work as FIELD_FORMAT but are more readable
modified: storage/connect/ha_connect.cc
modified: storage/connect/ha_connect.h
modified: storage/connect/mysql-test/connect/r/json_java_2.result
modified: storage/connect/mysql-test/connect/r/json_java_3.result
modified: storage/connect/mysql-test/connect/r/json_mongo_c.result
- Handle negative numbers in the option list
modified: storage/connect/ha_connect.cc
- Fix a Json parse issue that could crash the server.
This was because it could use THROW outside of the TRY block.
Also handle all errors by THROW.
It is now done by a new class JSON.
modified: storage/connect/json.cpp
modified: storage/connect/json.h
- Add a new UDF function jfile_translate.
It translates a Json file to pretty = 0.
It is fast because it does not do a real parse of the file.
modified: storage/connect/jsonudf.cpp
modified: storage/connect/jsonudf.h
- Add new options JSIZE and STRINGIFY to Json tables.
STRINGIFY makes Objects or Arrays be returned as their
json representation instead of as their concatenated values.
JSIZE allows specifying the LRECL (was 256); it now defaults to 1024.
Also fix a bug about locating the sub-table by its path.
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabjson.h
|
modified: storage/connect/ha_connect.cc
- Allow JSON columns to be "binary"
By setting their type as VARBINARY(132)
and making their name begin with Jbin_
modified: storage/connect/json.h
modified: storage/connect/jsonudf.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/value.cpp
modified: storage/connect/value.h
- CHARSET BINARY cannot be used for text columns
modified: storage/connect/mysql-test/connect/r/updelx.result
modified: storage/connect/mysql-test/connect/t/updelx.test
|
The variable connect_work_size is now ulong, or ulonglong for 64-bit machines.
modified: storage/connect/ha_connect.cc
modified: storage/connect/user_connect.cc
|
All variables handling sizes that were uint are now size_t.
The variable connect_work_size is now ulong (was uint);
Also make Json functions allocate more memory (M=9, was 7)
modified: storage/connect/global.h
modified: storage/connect/ha_connect.cc
modified: storage/connect/json.cpp
modified: storage/connect/jsonudf.cpp
modified: storage/connect/plgdbutl.cpp
modified: storage/connect/plugutil.cpp
modified: storage/connect/user_connect.cc
- Fix uninitialised variable (pretty) in Json_File.
Make Jbin_file accept the same arguments as Json_File ones.
modified: storage/connect/jsonudf.cpp
- Change the Level option to Depth (the word currently used)
(Level being still accepted)
modified: storage/connect/mongo.cpp
modified: storage/connect/tabjson.cpp
modified: storage/connect/tabxml.cpp
- Suppress 2nd argument default value for MYSQLtoPLG function
modified: storage/connect/myutil.h
- Allow REST tables to be created without specifying a file_name
modified: storage/connect/tabrest.cpp
|
was left over from testing
|
dict_table_autoinc_destroy
This issue is caused by MDEV-22456 ad6171b91cac33e70bb28fa6865488b2c65e858c.
The fix involves a backported version of the 10.4 patch
MDEV-22778 5f2628d1eea21d9732f582b77782b072e5e04014 and a few parts of
MDEV-17441 (e9a5f288f21c15ec6b4d2dd3d654a320904bb1bf).
dict_table_t::stats_latch_created: Removed
dict_table_t::stats_latch: make it a value member and always lock it,
for simplicity, even for statistics-cloned tables.
zip_pad_info_t::mutex_created: Removed
zip_pad_info_t::mutex: make it a value member instead of a pointer
os0once.h: Removed
dict_table_remove_from_cache_low(): Ensure that fts_free() is always
called, even if dict_mem_table_free() is deferred until
btr_search_lazy_free().
InnoDB will now always initialize zip_pad_info_t::mutex and
dict_table_t::autoinc_mutex, even for tables that are not in
ROW_FORMAT=COMPRESSED and do not include any AUTO_INCREMENT column.
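A simplified before/after sketch of the data-structure change (member names
shortened, std::mutex standing in for the InnoDB latch): the lazily created
pointer plus "created" flag becomes a plain value member that always exists:

  #include <mutex>

  struct rw_lock_stub { std::mutex latch; };   // stand-in for the real latch type

  // Before: pointer plus flag, allocated on first use via os_once.
  struct dict_table_before
  {
    rw_lock_stub *stats_latch= nullptr;
    bool stats_latch_created= false;
  };

  // After: embedded member, constructed together with the table object,
  // so there is no creation race and no flag to check before locking.
  struct dict_table_after
  {
    rw_lock_stub stats_latch;
  };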
|
MariaDB 10.2.2 inherited from MySQL 5.7 a perceived optimization
of ALTER TABLE, which skips the writing of redo log records.
In MDEV-16809 we introduced a parameter that allows the redo log to
be written, so that Mariabackup would not be impacted, but we kept
the MySQL 5.7 behaviour enabled by default (innodb_log_optimize_ddl=ON).
As noted in MDEV-19747 (Deprecate and ignore innodb_log_optimize_ddl,
implemented in MariaDB 10.5.1), omitting the redo log writes can
actually reduce performance, because we will have to wait for the data
pages to be written out. When the redo log file is configured to be
large enough, it actually can be much faster to write the redo log and
avoid the extra page flushing.
When the redo log is omitted (innodb_log_optimize_ddl=ON), Mariabackup
may also have to perform a lot of extra work, to re-copy the
entire data file if it is possible that any log was omitted during
the backup.
Starting with MariaDB 10.5.1, the parameter innodb_log_optimize_ddl
is deprecated and ignored. We hereby deprecate (but will not ignore)
the parameter in earlier versions as well.
|
role
Reviewed-by: serg@mariadb.com
|
There are 2 issues here:
Issue #1: memory allocation.
An IO_CACHE that uses encryption uses a larger buffer (it needs space for the
encrypted data, the decrypted data, an IO_CACHE_CRYPT struct describing the
encryption parameters, etc).
Issue #2: IO_CACHE::seek_not_done
When IO_CACHE objects are cloned, they still share the file descriptor.
This means an operation on one IO_CACHE may change the file read position,
which will confuse the other IO_CACHEs using it.
The fix for these issues is:
Allocate the buffer to also include the extra size needed for encryption.
Perform the seek again after one IO_CACHE reads from the file.
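A minimal sketch of the re-seek idea, assuming plain POSIX I/O rather than the
real IO_CACHE API: two readers share one file descriptor, so each remembers
its own logical offset and seeks again before every read:

  #include <unistd.h>
  #include <cstddef>

  struct cache_reader
  {
    int fd= -1;      // shared file descriptor
    off_t pos= 0;    // this reader's own logical position

    ssize_t read_block(void *buf, size_t len)
    {
      // The other reader may have moved the shared file position,
      // so always seek to our own offset first ("seek again").
      if (lseek(fd, pos, SEEK_SET) < 0)
        return -1;
      ssize_t n= read(fd, buf, len);
      if (n > 0)
        pos+= n;
      return n;
    }
  };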
|
The characters parsed are always ASCII characters, hence one byte. This
means that the code did not have "incorrect" logic, because the boolean
condition, if true, would also evaluate to the value 1.
The condition, however, is semantically wrong, as it assumes a length is
equal to the condition's outcome. Change the parentheses so that it also
reads according to the intent.
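A contrived example (not the actual parser code) of the pattern being
described: a comparison result is stored where a length is expected, and for
one-byte ASCII tokens the two happen to coincide:

  #include <cassert>
  #include <cstddef>

  int main()
  {
    const char *start= "a";
    const char *end= start + 1;
    size_t len= end - start == 1;          // parsed as (end - start) == 1 -> bool 1
    size_t intended= size_t(end - start);  // what the reader expects: a length
    assert(len == intended);               // equal only because the token is one byte
    return 0;
  }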
|
fix printing precedence for BETWEEN, LIKE/ESCAPE, REGEXP, IN
don't use precedence for printing CASE/WHEN/THEN/ELSE/END
fix parsing precedence of BETWEEN, LIKE/ESCAPE, REGEXP, IN
support predicate arguments for IN, BETWEEN, SOUNDS LIKE, LIKE/ESCAPE,
REGEXP
use %nonassoc for unary operators
fix parsing of IS TRUE/FALSE/UNKNOWN/NULL
remove parser_precedence test as superseded by the precedence test
|
prefix unary operators don't need to have different precedence,
the syntax unambiguously specifies in what order they apply
|
the expression between INTERVAL and the unit does not need any
precedence rules, there's no ambiguity there
|
some results are incorrect
|
Item_ref should have the precedence of the item it's referencing
|
when enabling performance_schema
max allowed value limit should be larger than any auto-sized value
|
define symbols as C/C++ does to avoid "macro redefined" warnings
|
This patch removes unnecessary #ifdefs in cmake macros CHECK_C_SOURCE_COMPILES.
|
This patch fixes an incorrect argument type passed
to the last parameter of getgrouplist() in the cmake
macro CHECK_C_SOURCE_COMPILES()
|
Some GSS-API functions like gss_import_name() and gss_release_buffer()
used in plugin/auth_gssapi and libmariadb/plugins/auth are marked
as deprecated on macOS starting from version 10.14. This results in
extra warnings being emitted when building the server.
To eliminate the extra warnings, the flag '-Wno-deprecated-declarations'
has been added to the compiler invocation for those source
files that invoke deprecated GSS-API functions.
|
This patch moves the definitions of the macros
HAVE_PAM_SYSLOG, HAVE_PAM_EXT_H, HAVE_PAM_APPL_H, HAVE_STRNDUP
from the command line (in the form -Dmacros) to the auto-generated
header file config_auth_pam.h
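For illustration only, the generated header simply carries the detected
settings that used to be injected with -D flags; the values below are an
example of what the configure step might emit on a platform where everything
was found:

  /* config_auth_pam.h (generated) -- example content */
  #define HAVE_PAM_SYSLOG 1
  #define HAVE_PAM_EXT_H 1
  #define HAVE_PAM_APPL_H 1
  #define HAVE_STRNDUP 1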
|
plugin/auth_pam/mapper/pam_user_map.c on MacOS
Compiler warnings like the one listed below are generated during a server build on macOS:
[88%] Building C object plugin/auth_pam/CMakeFiles/pam_user_map.dir/mapper/pam_user_map.c.o
mariadb/server-10.2/plugin/auth_pam/mapper/pam_user_map.c:87:41: error: passing
'gid_t *' (aka 'unsigned int *') to parameter of type 'int *' converts between pointers to integer types
with different sign [-Werror,-Wpointer-sign]
if (getgrouplist(user, user_group_id, loc_groups, &ng) < 0)
^~~~~~~~~~
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/unistd.h:650:43: note:
passing argument to parameter here
int getgrouplist(const char *, int, int *, int *);
^
If the MariaDB server is built with -DCMAKE_BUILD_TYPE=Debug, this results in
a build error.
The reason for the compiler warnings is that the declaration of the POSIX C API
function getgrouplist() on macOS differs from the declaration of getgrouplist()
proposed by POSIX.
To suppress this compiler warning, the cmake configure step was adapted to
detect which kind of getgrouplist() function is declared on the build platform,
and to set the macro HAVE_POSIX_GETGROUPLIST when the build platform supports
the POSIX-compatible interface for the getgrouplist() function. Depending on
whether this macro is set, the compatible argument type is used to pass
parameter values to the function.
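A sketch of the resulting shim, assuming HAVE_POSIX_GETGROUPLIST is set by the
cmake check described above; the typedef name and buffer size are illustrative:

  #include <grp.h>
  #include <unistd.h>

  #ifdef HAVE_POSIX_GETGROUPLIST
  typedef gid_t my_gid_t;   /* POSIX-style: getgrouplist(..., gid_t *groups, int *ng) */
  #else
  typedef int   my_gid_t;   /* macOS-style: getgrouplist(..., int *groups, int *ng)   */
  #endif

  static int count_user_groups(const char *user, gid_t base_gid)
  {
    my_gid_t groups[128];
    int ng= 128;
    /* The element type of `groups` now matches the platform declaration. */
    if (getgrouplist(user, base_gid, groups, &ng) < 0)
      return -1;   /* array too small; real code would retry with a bigger one */
    return ng;
  }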
|
tablespace upon prepare of mariabackup incremental backup
The problem:
When an incremental backup is taken, delta files are created for InnoDB tables
which are marked as new tables during InnoDB DDL tracking. When an attempt is
made to open such a tablespace during prepare in
xb_delta_open_matching_space(), the tablespace is "created", i.e.
xb_space_create_file() is invoked instead of opening the existing file, even
if a tablespace with the same name exists in the base backup directory.
xb_space_create_file() writes the page 0 header of the tablespace.
This header does not contain crypt data, as mariabackup does not have
any information about crypt data in delta file metadata for
tablespaces.
After the delta file is applied, the recovery process is started. As the
sequence of recovery for different pages is not defined, it can happen that
the crypt data redo log event is executed after some other page has already
been read for recovery. When a page is read for recovery, it is decrypted
using the crypt data stored in the tablespace header on page 0; if there is
no crypt data, the page is not decrypted and does not pass the corruption
test.
This causes an error during incremental backup --prepare for encrypted
tablespaces.
The error is not stable because the crypt data redo log event updates the
crypt data on page 0, and recovery for different pages can be executed in an
undefined order.
The fix:
When a delta file is created, the corresponding write filter copies only
the pages whose LSN is greater than some incremental LSN. When a new file
is created during an incremental backup, the LSN of all its pages must be
greater than the incremental LSN, so there is no need to create a delta for
such a table; we can just copy it completely.
The fix is to copy the whole file which was tracked during the incremental
backup with the InnoDB DDL tracker to the base directory during --prepare,
instead of applying a delta.
There is also a DBUG_EXECUTE_IF() in the InnoDB code to avoid writing the
redo log record for the crypt data update on page 0, to make the test case
stable.
Note:
The issue is not reproducible in 10.5, as optimized DDL is deprecated
in 10.5. But the fix is still useful because it allows decreasing the amount
of data copied during backup, as a delta file contains some extra info.
The test case should be removed for 10.5, as it will always pass.
|
PROXY_USER event added.
|
When the first argument to JSON_MERGE_PATCH was NULL and the second was an
invalid JSON line, the error code was garbage. So it should be set to 0
initially.
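A contrived sketch of the initialisation issue (stub function, not the
server's JSON parser API): the error code is an out-parameter that is only
written on some paths, so the caller must set it to 0 before parsing:

  // Stand-in parser: writes an error code only when it actually parses input.
  static bool parse_json_stub(const char *text, int *error)
  {
    if (text == nullptr)
      return false;          // NULL input: *error is never touched
    *error= 0;               // pretend the document parsed cleanly
    return true;
  }

  int merge_patch_error_code(const char *doc_a, const char *doc_b)
  {
    int error= 0;            // the fix: initialise, so a skipped assignment
                             // cannot leave garbage behind
    parse_json_stub(doc_a, &error);
    parse_json_stub(doc_b, &error);
    return error;
  }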
|
Reviewed-by: wlad@mariadb.com
|
projection
Reimplement MDEV-14275 Improving memory utilization for information schema
Postpone temp table instantiation until after setup_fields().
Replace all unused (not marked in read_set) columns in an I_S table
with CHAR(0). This can drastically reduce the footprint of a MEMORY
table (a TABLE_CATALOG alone is 1538 bytes per row).
This does not change the engine. If the table was decided to be Aria
(because of, say, blobs) then after optimization it'll stay Aria
even if all blobs were removed.
Note 1: when transforming table structure, share->blob_fields is
preserved, otherwise Aria might switch from DYNAMIC to STATIC row format
and expect a special field for a deleted mark, which create_tmp_table
didn't provide.
Note 2: optimizer was doing handler::info() (to know the number of rows)
before the temp table is populated. That didn't make much sense. Now
it's done before the table is even instantiated. Preserve the old
behavior and report 0 rows.
This reverts e2664ee8362 and a8458a2345e
|
instead of, say, MY_SEARCH_LIBS(dlopen dl LIBDL)
|
PROXY_USER event added.
Conflicts:
plugin/server_audit/server_audit.c
|
- Original patch was contributed by Jani Tolonen <jani.k.tolonen@gmail.com>
https://github.com/an3l/server/commits/bb-10.3-anel-MDEV-21786-dump-sequence
which distinguishes the data structure (linked list) of sequences from
that of tables.
- Added standard sql output to prevent future changes
of sequences and disabled locks for sequences.
- Added a test case for `MDEV-20070: mysqldump won't work correct on
sequences` where a table column depends on a sequence value.
- Restore the sequence last value in the following way:
- Find `next_not_cached_value` and use it to `setval()`
- We only need it for a logical restore, so don't execute `setval()`
- `setval()` should be shown also in case of the `--no-data` option.
Reviewed-by: daniel@mariadb.org
|
It was only from CMake-3.14.0 that CMAKE_REQUIRED_LINK_OPTIONS
was used in CHECK_CXX_SOURCE_COMPILES. Without this, it could be
the case (as was on OSX) that a flag was never checked in
CHECK_CXX_SOURCE_COMPILES, the CHECK successfully passed, but
failed at link time.
As such we use CMAKE_REQUIRED_LIBRARIES to include the flags to check
as it is compatible enough with the cmake versions for non-Windows
compilers/linkers.
Tested on x86_64 with:
* 3.11.4
* 3.17.4
Corrects: 7473e1841c630d86f1873a2a7afacb53955b3f6f
In the future:
* cmake >=3.14.0 can use CMAKE_REQUIRED_LINK_OPTIONS
* cmake >=3.18.0 can use CHECK_LINKER_FLAG (with policy CMP0057 NEW)
(e.g: commit c7ac2deff9a2c965887dcc67cbf2a3a7c3e0123d)
CMAKE_REQUIRED_LIBRARIES suggested by serg@mariadb.com
Reviewed-by: anel@mariadb.org
|
The only applicable InnoDB change to MariaDB that was made
between MySQL 5.6.49 and MySQL 5.6.50 is MDEV-23999.
|