- 17 Jun, 2020 2 commits
-
-
Vladislav Vaintroub authored
Make sure to initialize SSL early enough, when encryption plugins are loaded
-
Krunal Bauskar authored
increased scalability through even distribution

Rollback segments are allocated to transactions in round-robin fashion. This is controlled by incrementing a static-scope counter named rseg_slot. Said logic is not protected by any mutex, nor does it use an atomic for the counter. This can potentially cause the same rollback segment to be allocated to N different transactions that request allocation at the same time. While this is not a correctness issue, as a rollback segment can host multiple transactions, from a contention (performance) perspective it is better to allocate the rollback segments in round-robin fashion.

The fix for the said issue ports the use of an atomic for the counter, which ensures the original design semantic (even distribution through round-robin) is retained.
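A minimal C++ sketch of the round-robin-with-atomic idea; the identifiers and the segment count are illustrative, not the actual InnoDB ones:

#include <atomic>
#include <cstdio>

constexpr unsigned N_RSEGS = 128;              // illustrative rollback segment count
static std::atomic<unsigned> rseg_slot{0};     // replaces the unsynchronized static counter

unsigned pick_rseg_slot()
{
  // fetch_add makes the increment atomic, so two concurrent transactions
  // can no longer observe the same counter value; modulo keeps it in range.
  return rseg_slot.fetch_add(1, std::memory_order_relaxed) % N_RSEGS;
}

int main()
{
  for (unsigned trx = 0; trx < 5; trx++)
    std::printf("trx %u -> rseg slot %u\n", trx, pick_rseg_slot());
}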
-
- 16 Jun, 2020 2 commits
-
-
Sachin authored
MDEV-22370 safe_mutex: Trying to lock uninitialized mutex at /data/src/10.4-bug/sql/rpl_parallel.cc, line 470 upon shutdown during FTWRL

Problem:- When FTWRL is issued in parallel with shutdown, there is a race between FTWRL and shutdown. Shutdown might destroy the mutex (pool->LOCK_rpl_thread_pool) before FTWRL can lock it, so the FTWRL thread can crash.

Solution:- mysql_mutex_destroy(pool->LOCK_rpl_thread_pool) should wait for the FTWRL thread to complete its work, and only then destroy the mutex. So slave_prepare_for_shutdown() now just deactivates the pool, and the mutex is destroyed later in end_slave().
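A rough C++ sketch of the corrected ordering, using std::mutex and std::thread as stand-ins for the server's mysql_mutex and replication thread-pool machinery; the names are illustrative:

#include <atomic>
#include <functional>
#include <mutex>
#include <thread>

struct ThreadPool
{
  std::mutex lock;                     // stands in for pool->LOCK_rpl_thread_pool
  std::atomic<bool> inactive{false};
};

void slave_prepare_for_shutdown(ThreadPool &pool)
{
  pool.inactive = true;                // only deactivate the pool; do NOT destroy the lock here
}

void ftwrl_thread(ThreadPool &pool)
{
  std::lock_guard<std::mutex> g(pool.lock);   // safe: the mutex still exists
  // ... FLUSH TABLES WITH READ LOCK work ...
}

int main()
{
  ThreadPool pool;
  std::thread ftwrl(ftwrl_thread, std::ref(pool));
  slave_prepare_for_shutdown(pool);    // shutdown starts while FTWRL may still be running
  ftwrl.join();                        // end_slave(): wait for the users of the mutex...
}                                      // ...and only then destroy it (here: when `pool` goes out of scope)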
-
MikkoJaakola authored
The galera.galera_parallel_autoinc_manytrx mtr test opens and runs test scenario through 3 connections to node 1 and one connection to node 2. In the test initialization phase, the test creates two tables 't1' and 'ten' and then creates a stored procedure 'p1' to operate on these tables. These 3 create DDL statements are issued through same connection to node 1. In the next test phase, the mtr script uses send command to launch the call for the p1 stored procedure through all 3 connections to node 1 and through one connection to node 2. As the mtr send command is asynchronous, this test phase is non blocking and fast operation. Now, if the replication between nodes is slow, it may happen that the initialization phase DDL statements have not been received or have not been fully applied in node 2. Therefore there is no guarantee that the test tables and the stored procedure have been created in node 2. Yet, the test is trying to call p1 in node 2. In the failure case error logs, there is error message "MTR failed: query 'reap' failed: 1305: PROCEDURE test.p1 does not exist" The reap command through connection to node 2, is the first place where test execution may observe that test tables and/or stored procedure are not yet created in node 2. The fix in this commit adds a wait condition in connection to node 2, to wait until the stored procedure is created before calling the stored procedure. The wait is implemented by looking in information_schema.routines for the p1 stored procedure.
-
- 15 Jun, 2020 1 commit
-
-
Alexey Yurchenko authored
galera_ipv6_mariabackup MTR tests
-
- 14 Jun, 2020 4 commits
-
-
Vlad Lesin authored
MDEV-21298: mariabackup doesn't read from the [mariadbd] and [mariadbd-X.Y] server option groups from configuration files
MDEV-21301: mariabackup doesn't read [mariadb-backup] option group in configuration file

All three issues require changing the same code, which is why their fixes are joined in one commit.

The fix is in invoking load_defaults_or_exit() and handle_options() for backup-specific groups separately from client-server groups, to let the last handle_options() call fail on unknown backup-specific options.

The order of options processing is the following:
1) Load server groups and process server options, ignore unknown options
2) Load client groups and process client options, ignore unknown options
3) Load backup groups and process client-server options, exit on unknown option
4) Process --mysqld-args command line options, ignore unknown options

A new global flag, my_handle_options_init_variables, was added to have the ability to invoke handle_options() for the same allowed options set several times without re-initialising previously set option values.

The --password value destroying is moved from the option processing callback to mariabackup's handle_options() function to have the ability to invoke the server's handle_options() several times for the same possible allowed options set.

Galera invokes wsrep_sst_mariabackup.sh with mysqld command line options to configure mariabackup as closely to the server as possible. It is not known which server options are supported by mariabackup when the script is invoked. That is why the new mariabackup option "--mysqld-args" is added; all unknown options that follow this option will be silently ignored.

wsrep_sst_mariabackup.sh was also changed to:
- use the "--mysqld-args" mariabackup option to pass mysqld options,
- remove the deprecated innobackupex mode,
- remove unsupported mariabackup options:
  --encrypt
  --encrypt-key
  --rebuild-indexes
  --rebuild-threads
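A rough C++ sketch of the four-step processing order described above; parse_groups() is a hypothetical stand-in for the load_defaults_or_exit()/handle_options() pair, and the group names beyond those named in the commit are illustrative:

#include <cstdio>
#include <string>
#include <vector>

enum class OnUnknown { Ignore, Exit };

// Hypothetical stand-in for "load the given option groups and parse them";
// here it only reports what would be parsed and how unknown options are treated.
static void parse_groups(const std::vector<std::string> &groups, OnUnknown mode)
{
  for (const auto &g : groups)
    std::printf("parse [%s]%s\n", g.c_str(),
                mode == OnUnknown::Exit ? " (exit on unknown option)"
                                        : " (ignore unknown options)");
}

int main()
{
  parse_groups({"mysqld", "server", "mariadbd"}, OnUnknown::Ignore);     // 1) server groups
  parse_groups({"client", "client-server"}, OnUnknown::Ignore);          // 2) client groups
  parse_groups({"mariabackup", "mariadb-backup"}, OnUnknown::Exit);      // 3) backup groups
  std::printf("parse --mysqld-args options (ignore unknown options)\n"); // 4) command-line pass-through
}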
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The test case that was added for MDEV-21217 (commit b68f1d84) should have only two possible outcomes for the locking SELECT statement:

(1) The statement is blocked, and the test will eventually fail with a lock wait timeout. This is what I observed when the code fix for MDEV-21217 was missing.

(2) The lock conflict will ensure that the statement will execute after the rollback has completed, and an empty table will be observed. This is the expected outcome with the recovery fix.

What occasionally happens (in some of our CI environments only, so far) is that the locking SELECT will return all 1,000 rows of the table that had been inserted by the transaction that was never supposed to be committed. One possibility is that the transaction was unexpectedly committed when the server was killed.

Let us disable the test until the reason of the failure has been determined and addressed.
-
- 13 Jun, 2020 5 commits
-
-
Sergei Golubchik authored
With RETURNING it can happen that the user has some privileges on the table (namely, DELETE), but later needs different privileges on individual columns (namely, SELECT). Do the same as check_grant_column() does: report ER_COLUMNACCESS_DENIED_ERROR, not an assert.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_roll_must_shutdown(): Correct the condition that detects the start of shutdown.
-
Alexander Barkov authored
Item_func_div::fix_length_and_dec_temporal() set the return data type to integer in case of @div_precision_increment==0 for temporal input with FSP=0. This caused Item_func_div to call int_op(), which is not implemented, so a crash on DBUG_ASSERT(0) happened. Fixing fix_length_and_dec_temporal() to set the result type to DECIMAL.
-
- 12 Jun, 2020 2 commits
-
-
Vicențiu Ciorbaru authored
-
Alexander Barkov authored
MDEV-22499 Assertion `(uint) (table_check_constraints - share->check_constraints) == (uint) (share->table_check_constraints - share->field_check_constraints)' failed in TABLE_SHARE::init_from_binary_frm_image The patch for MDEV-22111 fixed MDEV-22499 as well. Adding tests only.
-
- 11 Jun, 2020 3 commits
-
-
Vicențiu Ciorbaru authored
-
Alexander Barkov authored
Item_cache_datetime::decimals was always copied from example->decimals without limiting to 6 (maximum possible fractional digits), so val_str() later crashed on asserts inside my_time_to_str() and my_datetime_to_str().
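A minimal C++ sketch of the cap described above; the constant name and the limit of 6 fractional digits follow this description, not the actual patch:

#include <algorithm>
#include <cstdio>

constexpr unsigned MAX_DATETIME_DECIMALS = 6;   // maximum fractional-seconds digits

// Previously the value was copied verbatim from example->decimals;
// limiting it keeps my_time_to_str()/my_datetime_to_str() within range.
unsigned cached_decimals(unsigned example_decimals)
{
  return std::min(example_decimals, MAX_DATETIME_DECIMALS);
}

int main()
{
  std::printf("%u\n", cached_decimals(31));   // prints 6, not 31
}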
-
Alexander Barkov authored
MDEV-22755 CREATE USER leads to indirect SIGABRT in __stack_chk_fail () from fill_schema_user_privileges + *** stack smashing detected *** (on optimized builds)

The code erroneously used buff[100] in a few places to make a GRANTEE value in the form: 'user'@'host'

Fix:
- Fixing the code to use (USER_HOST_BUFF_SIZE + 6) instead of 100.
- Adding a DBUG_ASSERT to make sure the buffer is large enough.
- Wrapping the code into a class Grantee_str, to make it easier to reuse in 4 places.
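A hedged C++ sketch of the Grantee_str idea: a small wrapper that owns a correctly sized buffer and formats 'user'@'host' once, instead of repeating a hand-sized char buff[100]. The length constants below are illustrative, not the server's:

#include <cassert>
#include <cstdio>

constexpr size_t USERNAME_LENGTH     = 128 * 3;   // illustrative limits
constexpr size_t HOSTNAME_LENGTH     = 255;
constexpr size_t USER_HOST_BUFF_SIZE = USERNAME_LENGTH + HOSTNAME_LENGTH + 2;

class Grantee_str
{
  char buff[USER_HOST_BUFF_SIZE + 6];             // room for quotes, '@' and terminator
public:
  Grantee_str(const char *user, const char *host)
  {
    int n = std::snprintf(buff, sizeof(buff), "'%s'@'%s'", user, host);
    assert(n > 0 && static_cast<size_t>(n) < sizeof(buff));  // buffer must be large enough
    (void) n;
  }
  const char *ptr() const { return buff; }
};

int main()
{
  Grantee_str grantee("alice", "localhost");
  std::printf("%s\n", grantee.ptr());             // 'alice'@'localhost'
}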
-
- 10 Jun, 2020 6 commits
-
-
Vicențiu Ciorbaru authored
On large hard disks (> 2TB), the plugin won't function correctly, always showing 2 TB of available space due to integer overflow. Upgrade table fields to bigint to resolve this problem.
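A small worked example of the arithmetic behind the 2 TB cap: a 64-bit size of a large disk does not fit in a 32-bit integer column, so the reported value ends up around the 32-bit maximum. The KiB unit and the disk size below are assumptions for illustration only:

#include <algorithm>
#include <cstdint>
#include <cstdio>

int main()
{
  const uint64_t avail_kib = 6ULL * 1024 * 1024 * 1024;   // a 6 TiB disk, expressed in KiB
  // A 32-bit signed column cannot hold this; the stored value tops out
  // around INT32_MAX KiB, which is roughly 2 TiB:
  const int32_t stored = static_cast<int32_t>(std::min<uint64_t>(avail_kib, INT32_MAX));
  std::printf("real:   %llu KiB (~%llu GiB)\n",
              (unsigned long long) avail_kib, (unsigned long long)(avail_kib >> 20));
  std::printf("stored: %d KiB (~%llu GiB)\n",
              stored, (unsigned long long)(static_cast<uint64_t>(stored) >> 20));
}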
-
Marko Mäkelä authored
The last traces of the special InnoDB table names were removed in commit 0af52734 but we forgot to remove the test case.
-
Alexander Barkov authored
Item_default_value did not override val_native(), so the inherited Item_field::val_native() was called. As a result Item_default_value::calculate() was not called and Item_field::val_native() was called on a Field with a non-initialized ptr. Implementing Item_default_value::val_native() properly.
-
Oleksandr Byelkin authored
The real problem was that the attempt to roll back changes after running out of memory in the QC (query cache) was made incorrectly and led to the use of uninitialized memory. (The bug has nothing to do with the resize operation; it is just a lack-of-resources error processed incorrectly.)
-
Oleksandr Byelkin authored
This reverts commit 44339123.
-
Alexander Barkov authored
Replacing the slow loop in my_hash_sort_utf8mbX() with the fast skip_trailing_spaces(), which consumes 8 bytes per iteration and is around 8 times faster on long data.

Also, renaming:
- my_hash_sort_utf8() to my_hash_sort_utf8mb3()
- my_hash_sort_utf8_nopad() to my_hash_sort_utf8mb3_nopad()
to make merging to 10.5 easier (automatically?).
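A hedged C++ sketch of the 8-bytes-per-iteration idea behind such a trailing-space skipper; this illustrates the technique only and is not the mysys implementation:

#include <cstdint>
#include <cstdio>
#include <cstring>

static const char *skip_trailing_spaces(const char *begin, const char *end)
{
  const uint64_t spaces = 0x2020202020202020ULL;   // eight ASCII ' ' bytes
  while (end - begin >= 8)
  {
    uint64_t chunk;
    std::memcpy(&chunk, end - 8, sizeof(chunk));   // alignment-safe load of the last 8 bytes
    if (chunk != spaces)
      break;                                       // a non-space byte is inside this chunk
    end -= 8;
  }
  while (end > begin && end[-1] == ' ')            // finish byte by byte
    end--;
  return end;
}

int main()
{
  const char s[] = "abc                         ";
  std::printf("trimmed length: %zu\n",
              (size_t)(skip_trailing_spaces(s, s + std::strlen(s)) - s));   // 3
}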
-
- 09 Jun, 2020 3 commits
-
-
Varun Gupta authored
Backported from MySQL Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT

Issue:
------
The problem occurs when:
1) GROUP_CONCAT (DISTINCT ....) is used in the query.
2) The data size is greater than the value of the system variable tmp_table_size.
The result then contains values that are non-unique.

Root cause:
-----------
An in-memory structure is used to filter out non-unique values. When the data size exceeds tmp_table_size, the overflow is written to disk as a separate file. The expectation here is that when all such files are merged, the full set of unique values can be obtained.

But the Item_func_group_concat::add function is in a bit of a hurry. Even as it is adding values to the tree, it wants to decide whether a value is unique and write it to the result buffer. This works fine if the configured maximum size is greater than the size of the data. But since tmp_table_size is set to a low value, the tree is smaller and hence requires the creation of multiple copies on disk. Item_func_group_concat currently has no mechanism to merge all the copies on disk and then generate the result. This results in duplicate values.

Solution:
---------
In case of the DISTINCT clause, don't write to the result buffer immediately. Do the merge and only then put the unique values in the result buffer. This has to be done in Item_func_group_concat::val_str.

Note regarding result file changes:
-----------------------------------
Earlier, when a unique value was seen in Item_func_group_concat::add, it was dumped to the output, so the result was in the order stored in the SE. But with this fix, we wait until all the data is read and the final set of unique values is written to the output buffer, so the data appears in sorted order. This only fixes the cases where we have DISTINCT without an ORDER BY clause in GROUP_CONCAT.
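A small self-contained C++ illustration of the merge-then-deduplicate idea; plain std:: containers stand in for the Unique tree and its on-disk runs:

#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
  // Two "runs" that were flushed to disk separately; each run is unique on
  // its own, but values may repeat across runs.
  std::vector<std::vector<int>> runs = {{1, 3, 5, 7}, {2, 3, 5, 8}};

  // Merge all runs first, then deduplicate across the merged result.
  std::vector<int> merged;
  for (const auto &run : runs)
    merged.insert(merged.end(), run.begin(), run.end());
  std::sort(merged.begin(), merged.end());
  merged.erase(std::unique(merged.begin(), merged.end()), merged.end());

  // Only now append to the GROUP_CONCAT result buffer (hence the sorted order).
  std::string result;
  for (int v : merged)
    result += (result.empty() ? "" : ",") + std::to_string(v);
  std::printf("%s\n", result.c_str());   // 1,2,3,5,7,8
}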
-
Daniel Black authored
innodb: dict_mem_table_add_col - compile warning fix

argument 1 null where non-null expected (#1584)

cd /build-mariadb-server-10.5-mysql_release/storage/innobase && /usr/bin/powerpc64le-linux-gnu-g++ -DBTR_CUR_ADAPT -DBTR_CUR_HASH_ADAPT -DCOMPILER_HINTS -DDBUG_TRACE -DEMBEDDED_LIBRARY -DHAVE_BZIP2=1 -DHAVE_C99_INITIALIZERS -DHAVE_CONFIG_H -DHAVE_FALLOC_PUNCH_HOLE_AND_KEEP_SIZE=1 -DHAVE_IB_LINUX_FUTEX=1 -DHAVE_LZ4=1 -DHAVE_LZ4_COMPRESS_DEFAULT=1 -DHAVE_LZMA=1 -DHAVE_NANOSLEEP=1 -DHAVE_OPENSSL -DHAVE_SCHED_GETCPU=1 -DLINUX_NATIVE_AIO=1 -DMUTEX_EVENT -DWITH_INNODB_DISALLOW_WRITES -D_FILE_OFFSET_BITS=64 -Iwsrep-lib/include -Iwsrep-lib/wsrep-API/v26 -I/home/dan/build-mariadb-server-10.5-mysql_release/include -Istorage/innobase/include -Istorage/innobase/handler -Ilibbinlogevents/include -Itpool -Iinclude -Isql -pie -fPIC -Wl,-z,relro,-z,now -fstack-protector --param=ssp-buffer-size=4 -Wconversion -Wno-sign-conversion -O3 -g -static-libgcc -fno-omit-frame-pointer -fno-strict-aliasing -Wno-uninitialized -D_FORTIFY_SOURCE=2 -DDBUG_OFF -Wall -Wextra -Wformat-security -Wno-format-truncation -Wno-init-self -Wno-nonnull-compare -Wno-unused-parameter -Woverloaded-virtual -Wnon-virtual-dtor -Wvla -Wwrite-strings -DUNIV_LINUX -D_GNU_SOURCE=1 -fPIC -fvisibility=hidden -std=gnu++11 -o CMakeFiles/innobase_embedded.dir/dict/dict0load.cc.o -c storage/innobase/dict/dict0load.cc

storage/innobase/dict/dict0load.cc: In function ‘const char* dict_process_sys_columns_rec(mem_heap_t*, const rec_t*, dict_col_t*, table_id_t*, const char**, ulint*)’:
storage/innobase/dict/dict0load.cc:1653:26: warning: argument 1 null where non-null expected [-Wnonnull]
   dict_mem_table_add_col(table, heap, name, mtype,
   ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
                          prtype, col_len);
                          ~~~~~~~~~~~~~~~~
In file included from storage/innobase/include/dict0dict.h:32:0,
                 from storage/innobase/include/btr0pcur.h:30,
                 from storage/innobase/dict/dict0load.cc:31:
storage/innobase/include/dict0mem.h:323:1: note: in a call to function ‘void dict_mem_table_add_col(dict_table_t*, mem_heap_t*, const char*, ulint, ulint, ulint)’ declared here
 dict_mem_table_add_col(
 ^~~~~~~~~~~~~~~~~~~~~~
-
rucha174 authored
In case of a SELECT without tables which returns either 0 or 1 rows, JOIN::exec_inner() did not check whether the flag representing SQL_CALC_FOUND_ROWS was set, and send_records was directly assigned 0, so SELECT FOUND_ROWS() returned 0 in the output. Now it checks the flag: if it is set, send_records is 1, otherwise 0. 1 is the number of rows that could have been sent to the client if the SELECT query had SQL_CALC_FOUND_ROWS; it is 0 when no rows were sent because the SELECT query did not have SQL_CALC_FOUND_ROWS.
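A minimal C++ sketch of the changed logic; OPTION_FOUND_ROWS here is a locally defined stand-in for the server flag that marks SQL_CALC_FOUND_ROWS, so the snippet is self-contained:

#include <cstdint>
#include <cstdio>

constexpr uint64_t OPTION_FOUND_ROWS = 1ULL << 0;   // illustrative bit value

// send_records for a table-less SELECT whose single row was not sent:
uint64_t send_records_when_no_row_sent(uint64_t select_options)
{
  // Previously this was unconditionally 0; with the fix, 1 is reported when
  // SQL_CALC_FOUND_ROWS was used, i.e. the row that could have been sent.
  return (select_options & OPTION_FOUND_ROWS) ? 1 : 0;
}

int main()
{
  std::printf("%llu %llu\n",
              (unsigned long long) send_records_when_no_row_sent(OPTION_FOUND_ROWS),  // 1
              (unsigned long long) send_records_when_no_row_sent(0));                 // 0
}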
-
- 08 Jun, 2020 6 commits
-
-
Sujatha authored
MDEV-22717: Conditional jump or move depends on uninitialised value(s) in find_uniq_filename(char*, unsigned long)

Fix:
===
Initialize the 'number' variable to '0'.
-
Alexander Barkov authored
When processing a condition like:
  WHERE timestamp_column='2010-00-01 00:00:00'
don't replace the constant with an Item_datetime_literal if the constant has zeros (in the month or in the day).
-
Ian Gilfillan authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
When MDEV-22769 introduced srv_shutdown_state=SRV_SHUTDOWN_INITIATED in commit efc70da5, we forgot to adjust a few checks for SRV_SHUTDOWN_NONE.

In the initial shutdown step, we are waiting for the background DROP TABLE queue to be processed or discarded. At that time, some background tasks (such as buffer pool resizing or dumping, or encryption key rotation) may be terminated, but others must remain running normally.

srv_purge_coordinator_suspend(), srv_purge_coordinator_thread(), srv_start_wait_for_purge_to_start(): Treat SRV_SHUTDOWN_NONE and SRV_SHUTDOWN_INITIATED equally.
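A minimal C++ sketch of the adjusted condition; the enum is re-declared locally with only the states relevant here, so the snippet is self-contained and does not mirror the full InnoDB definition:

#include <cstdio>

enum srv_shutdown_t { SRV_SHUTDOWN_NONE, SRV_SHUTDOWN_INITIATED, SRV_SHUTDOWN_CLEANUP };

// Purge must keep running normally in both of the first two states.
static bool purge_may_continue(srv_shutdown_t state)
{
  // before the fix: return state == SRV_SHUTDOWN_NONE;
  return state <= SRV_SHUTDOWN_INITIATED;
}

int main()
{
  std::printf("%d %d %d\n",
              purge_may_continue(SRV_SHUTDOWN_NONE),
              purge_may_continue(SRV_SHUTDOWN_INITIATED),
              purge_may_continue(SRV_SHUTDOWN_CLEANUP));   // 1 1 0
}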
-
- 07 Jun, 2020 5 commits
-
-
Monty authored
MDEV-19320 Sequence gets corrupted and produces ER_KEY_NOT_FOUND (Can't find record) after ALTER .. ORDER BY
-
Monty authored
MDEV-19977 Assertion `(0xFUL & mode) == LOCK_S || (0xFUL & mode) == LOCK_X' failed in lock_rec_lock
-
Alexander Barkov authored
-
Sachin authored
MDEV-22719 Long unique keys are not created when individual key_part->length < max_key_length but SUM(key_parts->length) > max_key_length

Create a UNIQUE HASH key in the case when key_info->key_length > max_key_length.
-
Sachin authored
MDEV-21804 Assertion `marked_for_read()' failed upon INSERT into table with long unique blob under binlog_row_image=NOBLOB

Problem:- Calling mark_columns_per_binlog_row_image() earlier may change the result of mark_virtual_columns_for_write(), since it can set the bitmap on for a virtual column, and then mark_virtual_column_deps(field) will never be called in mark_virtual_column_with_deps().

This bug is not specific to long unique keys; it also fails for this case:
create table t2(id int primary key, a blob, b varchar(20) as (LEFT(a,2)));
-
- 06 Jun, 2020 1 commit
-
-
Varun Gupta authored
MDEV-22728: SIGFPE in Unique::get_cost_calc_buff_size from prepare_search_best_index_intersect on optimized builds

For a low sort_buffer_size, in the cost calculation of using the Unique object, the number of elements in the tree was evaluated to 0; make sure to have at least 1 element in the Unique tree. Also, make the function Unique::get() allocate memory for at least MERGEBUFF2+1 keys.
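A minimal C++ sketch of the kind of guard described above; the variable and function names are illustrative, not the actual Unique members:

#include <algorithm>
#include <cstdio>

double cost_per_element(size_t max_in_memory_size, size_t element_size, size_t total_elements)
{
  // Never let the estimated tree capacity reach zero: an integer division by
  // zero in the cost calculation is what raised SIGFPE on optimized builds.
  size_t max_elements_in_tree =
      std::max<size_t>(1, max_in_memory_size / element_size);
  return static_cast<double>(total_elements) / max_elements_in_tree;
}

int main()
{
  // With a tiny sort buffer the old computation yielded 0 elements and crashed.
  std::printf("%f\n", cost_per_element(64, 200, 1000));
}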
-