- 03 Nov, 2020 6 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Vladislav Vaintroub authored
The one which is in PATH by default is MinGW perl, which uses Unix paths. This perl does not work with mtr.
-
Jan Lindström authored
-
Teemu Ollakka authored
Prepared statements run over the binary protocol crashed the server if the statement did not have the CF_PS_ARRAY_BINDING_OPTIMIZED flag, was executed in bulk mode, and a BF abort occurred. This was because the bulk execution resulted in several statements being run without calling wsrep_after_statement() in between, which confused wsrep transaction state tracking. As a fix, call wsrep_after_statement() in the bulk loop after each execution if CF_PS_ARRAY_BINDING_OPTIMIZED is not set. Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
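A minimal control-flow sketch of the fix described above, with stubbed-out placeholders standing in for the real server internals (execute_one and the flag value are illustrative; only wsrep_after_statement and CF_PS_ARRAY_BINDING_OPTIMIZED are names taken from the message above):

    #include <vector>

    // Illustrative stand-ins for server internals; real names and signatures differ.
    static const unsigned CF_PS_ARRAY_BINDING_OPTIMIZED= 1U << 0;
    struct THD { };
    static bool execute_one(THD *, int /*param_set*/) { return false; }  // one bulk iteration
    static bool wsrep_after_statement(THD *) { return false; }           // close wsrep statement state

    // When the optimized array-binding path is not used, finish the wsrep
    // statement after every iteration so that transaction state tracking
    // never sees two statements back to back.
    static bool execute_bulk(THD *thd, const std::vector<int> &param_sets, unsigned stmt_flags)
    {
      for (int set : param_sets)
      {
        if (execute_one(thd, set))
          return true;
        if (!(stmt_flags & CF_PS_ARRAY_BINDING_OPTIMIZED) && wsrep_after_statement(thd))
          return true;
      }
      return false;
    }

    int main()
    {
      THD thd;
      std::vector<int> sets{1, 2, 3};
      return execute_bulk(&thd, sets, 0) ? 1 : 0;
    }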
-
- 02 Nov, 2020 17 commits
-
-
Marko Mäkelä authored
instant_alter_column_possible(): Relax a too strict debug assertion. The existence of an index stub or a corrupted index on virtual columns does not imply that virtual columns exist.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
C++11 is allowed only starting with MariaDB Server 10.4.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This follows up commit 94a520dd and commit 7c5519c1. After these changes, the default test suites on a cmake -DWITH_UBSAN=ON build no longer fail due to passing null pointers as parameters that are declared to never be null, but plenty of other runtime errors remain.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This was missed in commit d6ea03fa.
-
Marko Mäkelä authored
-
Vladislav Vaintroub authored
This will avoid some errors on AppVeyor due to outdated SDKs.
-
Nikita Malyavin authored
-
Nikita Malyavin authored
Though this is an error message task, the problem was deep in the `mysql_prepare_create_table` implementation. The problem is as follows:
1. `append_system_key_parts` was called before `mysql_prepare_create_table`, though key name generation was done close to the last stage of the latter.
2. We can't move `append_system_key_parts` to the end, because system keys should be appended before some of the checks are done.
3. If the checks from `append_system_key_parts` are moved to the end of `mysql_prepare_create_table`, then some other inappropriate errors are issued, like `ER_DUP_FIELDNAME`.
To have the key name available in the error message, name generation should be done before the checks, which resulted in more changes. The final design for key initialization in `mysql_prepare_create_table` follows. The initialization is done in three phases:
1. Calculate the total number of keys to be created, taking ignored keys into account. Allocate the KEY* buffer.
2. Generate unique names; calculate the total number of key parts. Make the early checks. Allocate the KEY_PART_INFO* buffer.
3. Initialize the key parts and make the rest of the checks.
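A schematic, self-contained sketch of the three-phase structure described above; the types and helpers are simplified stand-ins, not the real mysql_prepare_create_table code:

    #include <cstddef>
    #include <string>
    #include <vector>

    struct KeySpec { std::string name; bool ignored; std::vector<std::string> parts; };
    struct Key     { std::string name; std::vector<std::string> parts; };

    static std::vector<Key> prepare_keys(const std::vector<KeySpec> &specs)
    {
      // Phase 1: count the keys that will really be created (ignored keys are
      // skipped) and allocate the key buffer up front.
      std::size_t n_keys= 0;
      for (const KeySpec &k : specs)
        if (!k.ignored)
          n_keys++;
      std::vector<Key> keys;
      keys.reserve(n_keys);

      // Phase 2: generate unique names and do the early checks; an error
      // raised here can already report the generated key name.
      for (const KeySpec &k : specs)
      {
        if (k.ignored)
          continue;
        Key key;
        key.name= k.name.empty() ? "key_" + std::to_string(keys.size() + 1) : k.name;
        keys.push_back(key);
      }

      // Phase 3: initialize the key parts and run the remaining checks.
      for (Key &key : keys)
        key.parts.push_back("...");  // placeholder for real key-part initialization
      return keys;
    }

    int main() { prepare_keys({{"", false, {"id"}}}); }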
-
Nikita Malyavin authored
`ha_heap::clone` was creating a handler from the share's handlerton, which is the partition handlerton. The handler's own handlerton should be used instead. Here in particular, the HEAP handlerton will be used and it will create a ha_heap handler.
-
Nikita Malyavin authored
The bug was fixed by the MDEV-22599 bugfix, which changed the `Field::cmp` call to `Field::cmp_prefix` in `TABLE::check_period_overlaps`. The trick is that `Field_bit::cmp` apparently calls `Field_bit::cmp_key`, which considers its argument an actual pointer to data. That isn't correct for `Field_bit`, since it stores data via `bit_ptr`, which is at the beginning of the record, so using `ptr` is incorrect (we access it through the `ptr_in_record` call).
-
Nikita Malyavin authored
After Sergei's cleanup this assertion no longer holds -- we can't predict whether the handler was used for a lookup, especially in a multi-update scenario. `position(old_data)` is called earlier in `ha_check_overlaps`, therefore it is guaranteed that we compare the right refs.
-
Nikita Malyavin authored
The problem here was that ha_check_overlaps internally uses ha_index_read, which on failure overwrites table->status. Even though the handlers are different, they share a common table, so the value gets spoiled anyway. This is bad, and table->status is badly designed and overloaded with functionality, but nothing can be done about it, since the code related to this logic is ancient and it's impossible to extract it with reasonable effort. So let's just save and restore the value in ha_update_row before and after the checks. Other operations like INSERT and simple UPDATE are not at risk, since they don't use this table->status approach. DELETE does not do any unique checks, so it's also safe.
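A simplified, self-contained illustration of the save-and-restore pattern described above; TABLE and the function names here are mock stand-ins, not the actual handler code:

    #include <cassert>

    struct TABLE { int status; };                      // simplified stand-in

    // Pretend unique-constraint check that clobbers table->status internally
    // (the real ha_check_overlaps does so via ha_index_read on the shared TABLE).
    static int check_overlaps(TABLE *table) { table->status= 1; return 0; }

    static int update_row(TABLE *table)
    {
      const int saved_status= table->status;           // save before the checks
      int err= check_overlaps(table);
      table->status= saved_status;                     // restore regardless of outcome
      return err;
    }

    int main()
    {
      TABLE t{0};
      assert(update_row(&t) == 0 && t.status == 0);    // status unchanged by the check
    }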
-
Nikita Malyavin authored
1. Subtracting table->record[0] from record is UB (non-contiguous buffers).
2. It is very common to use move_field_offset, which changes Field::ptr but leaves table->record[0] unchanged. This makes the ptr_in_record result incorrect, since it relies on the table->record[0] value.
The added check ensures the result is within the queried record's boundaries.
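A hypothetical, simplified sketch of such a boundary check; the real Field/TABLE types and the exact assertion in the server differ:

    #include <cassert>
    #include <cstddef>

    struct TABLE { unsigned char *record0; std::size_t reclength; };

    struct Field
    {
      unsigned char *ptr;   // may have been shifted by move_field_offset()
      TABLE *table;

      unsigned char *ptr_in_record(unsigned char *record) const
      {
        std::ptrdiff_t offset= ptr - table->record0;
        unsigned char *result= record + offset;
        // The safety check: the mapped pointer must stay inside the queried
        // record, otherwise ptr no longer corresponds to record[0].
        assert(result >= record && result < record + table->reclength);
        return result;
      }
    };

    int main()
    {
      unsigned char rec0[16], rec1[16];
      TABLE t{rec0, sizeof rec0};
      Field f{rec0 + 4, &t};                       // field stored at offset 4
      assert(f.ptr_in_record(rec1) == rec1 + 4);   // maps onto the other record
    }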
-
Elena Stepanova authored
-
- 01 Nov, 2020 3 commits
-
-
Oleksandr Byelkin authored
-
Sergei Petrunia authored
-
Elena Stepanova authored
-
- 31 Oct, 2020 3 commits
-
-
Daniel Black authored
Add --system={all, users, plugins, udfs, servers, stats, timezones}
This will dump system information from the server in a logical form like:
* CREATE USER
* GRANT
* SET DEFAULT ROLE
* CREATE ROLE
* CREATE SERVER
* INSTALL PLUGIN
* CREATE FUNCTION
"stats" refers to the InnoDB statistics tables or EITS; these are dumped as INSERT/REPLACE INTO statements without recreating the table. "timezones" is the collection of timezone tables, which are important to transfer in order to generate identical results on restoration.
Two other options affect the SQL generated by --system=all. They are mutually exclusive of each other:
* --replace
* --insert-ignore
--replace will include "OR REPLACE" in the logical form like:
* CREATE OR REPLACE USER ...
* DROP ROLE IF EXISTS (MySQL-8.0+)
* CREATE OR REPLACE ROLE ...
* UNINSTALL PLUGIN IF EXISTS (10.4+) ... (before INSTALL PLUGIN)
* DROP FUNCTION IF EXISTS (MySQL-5.7+)
* CREATE OR REPLACE [AGGREGATE] FUNCTION
* CREATE OR REPLACE SERVER
--insert-ignore uses the "IF NOT EXISTS" construct where the logical syntax supports it.
'CREATE OR REPLACE USER' includes protection against being run as the same user that is importing the mysqldump.
Includes experimental support for dumping mysql-5.7/8.0 system tables and exporting logical SQL compatible with MySQL.
Updates the mysqldump man page, including this information (and removing an obsolete bug reference).
Reviewed-by: anel@mariadb.org
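For example, illustrative invocations combining the options described above (connection parameters omitted):

    # dump all system information using the CREATE OR REPLACE / DROP IF EXISTS forms
    mysqldump --system=all --replace > system.sql
    # dump only the timezone tables, using IF NOT EXISTS where the syntax supports it
    mysqldump --system=timezones --insert-ignore > timezones.sql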
-
Oleksandr Byelkin authored
-
Elena Stepanova authored
-
- 30 Oct, 2020 11 commits
-
-
Daniel Black authored
Per b9f3f068, mysql_system_tables_data.sql creates a mysql_native_password account with a salted hash of "invalid" so that `set password` will detect that a native password can be applied. SHOW CREATE USER diligently uses this value in its output, generating the SQL:

MariaDB [(none)]> show create user;
+----------------------------------------------------------------------------------------------------+
| CREATE USER for dan@localhost                                                                       |
+----------------------------------------------------------------------------------------------------+
| CREATE USER `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket   |
+----------------------------------------------------------------------------------------------------+

Attempting to execute this before this patch results in:

MariaDB [(none)]> CREATE USER `dan2`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket;
ERROR 1372 (HY000): Password hash should be a 41-digit hexadecimal number

As such, deep in the implementation of mysql_native_password we make "invalid" valid (pun intended), such that the above CREATE USER will succeed. We do this by storing "*THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE" (credit: Oracle MySQL), which is of an INCORRECT length for a scramble. In native_password_authenticate we check the length of this cached value and immediately fail if it is anything other than the scramble length. native_password_get_salt is only called in the context of set_user_salt, so all setting of native passwords to the hashed content of 'invalid' quite literally creates an invalid password. So other forms of "invalid" are valid SQL for creating invalid passwords:

MariaDB [(none)]> set password = 'invalid';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> alter user dan@localhost IDENTIFIED BY PASSWORD 'invalid';
Query OK, 0 rows affected (0.000 sec)

closes #1628
Reviewer: serg@mariadb.com
-
Marko Mäkelä authored
buf_flush_try_neighbors(): Before invoking buf_page_t::ready_for_flush(), check that the freshly looked up buf_pool.page_hash entry actually is a buffer page and not a buf_pool.watch[] sentinel for purge buffering. This race condition was introduced in MDEV-15053 (commit b1ab211d). It is rather hard to hit this bug, because buf_flush_check_neighbors() already checked the condition. The problem exists if buf_pool.watch_set() was invoked for a page in the range after the check in buf_flush_check_neighbor() had been finished.
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Thanks to Varun Gupta for suggesting this. This seems to make main.innodb_ext_key,off more stable.
-
Marko Mäkelä authored
Invoking memcpy() on a NULL pointer is undefined behaviour (even if the length is 0) and gives the compiler permission to assume that the pointer is nonnull. Recent versions of GCC (starting with version 8) are more aggressively optimizing away checks for NULL pointers. This undefined behaviour would cause a SIGSEGV in the test main.func_encrypt on an optimized debug build on GCC 10.2.0.
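A self-contained illustration of the pattern such fixes follow: skip the call entirely so that memcpy() never receives a null pointer, even for a zero-length copy (the function name is illustrative, not an actual call site):

    #include <cstddef>
    #include <cstring>

    // 'src' may legitimately be null when len == 0, but memcpy(dst, nullptr, 0)
    // is still undefined behaviour and UBSAN reports it, so the call has to be
    // skipped entirely in that case.
    static void copy_optional(unsigned char *dst, const unsigned char *src, std::size_t len)
    {
      if (len != 0)                      // implies src != nullptr for a valid caller
        std::memcpy(dst, src, len);
    }

    int main()
    {
      unsigned char buf[4];
      copy_optional(buf, nullptr, 0);    // fine: memcpy is never reached
    }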
-
Marko Mäkelä authored
This regression was introduced in commit afc9d00c. This is a partial backport of commit 199863d7 from 10.4.
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-
Varun Gupta authored
MDEV-24033: SIGSEGV in __memcmp_avx2_movbe from queue_insert | SIGSEGV in __memcmp_avx2_movbe from native_compare
The issue here was that the system variable max_sort_length was being applied to decimals, truncating the sort key for decimals to the number of bytes set by max_sort_length. This led to a buffer overflow, because the values were written to the buffer without truncation while the offset was advanced only by the number of bytes (set by max_sort_length) used for comparison. The fix is to not apply max_sort_length to fixed-size types like INT and DECIMAL, and to apply it only to CHAR, VARCHAR, TEXT and BLOB.
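A standalone sketch of the corrected length calculation: truncate the sort key to max_sort_length only for variable-length string/blob types, never for fixed-size types such as INT or DECIMAL (the type enum and lengths are simplified stand-ins):

    #include <algorithm>
    #include <cassert>
    #include <cstddef>

    enum class FieldType { INT, DECIMAL, VARCHAR, BLOB };

    // Number of bytes of the field written into the sort key.
    static std::size_t sort_key_length(FieldType type, std::size_t pack_length,
                                       std::size_t max_sort_length)
    {
      switch (type)
      {
      case FieldType::INT:
      case FieldType::DECIMAL:
        return pack_length;                               // fixed-size: never truncate
      case FieldType::VARCHAR:
      case FieldType::BLOB:
        return std::min(pack_length, max_sort_length);    // strings/blobs: apply the limit
      }
      return pack_length;
    }

    int main()
    {
      // With max_sort_length=8, a 20-byte DECIMAL keeps its full length,
      // while a 20-byte VARCHAR is truncated to 8 bytes for comparison.
      assert(sort_key_length(FieldType::DECIMAL, 20, 8) == 20);
      assert(sort_key_length(FieldType::VARCHAR, 20, 8) == 8);
    }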
-