- 19 Jun, 2020 5 commits
-
-
Oleksandr Byelkin authored
Second attempt to fix the same bug: use the same queue for all READ operations and release the queues for all used pages. This fixes a hang in the s3.alter2 test case.
-
Monty authored
When converting a table (test.s3_table) from S3 to another engine, the following will be logged to the binary log:

DROP TABLE IF EXISTS test.t1;
CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine
INSERT rows to test.t1 in binary-row-log-format

The bug is that the above statements are logged one by one to the binary log. This means that a fast slave, configured to use the same S3 storage as the master, would be able to execute the DROP and CREATE from the binary log before the master has finished the ALTER TABLE. In this case the slave would ignore the DROP (as it is on an S3 table) but it would stop on the CREATE of the local table, as the table still exists in S3. The REPLACE part would be ignored by the slave, as it cannot touch the S3 table.

The fix is to ensure that all the above statements are written to the binary log AFTER the table has been deleted from S3.
-
Monty authored
-
Monty authored
- Added missing test for binlog_filter to ALTER TABLE
-
Monty authored
- Rewrote bool Query_compressed_log_event::write() to make it more readable (no logic changes).
- Changed the DBUG_PRINT of 'is_error:' to 'is_error():' to make it easier to find error: in traces.
- Ensure that 'db' is never null in Query_log_event (simplified code).
-
- 18 Jun, 2020 16 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
Daniel Black authored
-
Marko Mäkelä authored
The rw_lock_s_lock() calls for the buf_pool.page_hash became a clear bottleneck after MDEV-15053 reduced the contention on buf_pool.mutex. We will replace that use of rw_lock_t with a special implementation that is optimized for memory bus traffic. The hash_table_locks instrumentation will be removed.

buf_pool_t::page_hash: Use a special implementation whose API is compatible with hash_table_t, and store the custom rw-locks directly in buf_pool.page_hash.array, intentionally sharing cache lines with the hash table pointers.
rw_lock: A low-level rw-lock implementation based on std::atomic<uint32_t>, where read_trylock() becomes a simple fetch_add(1).
buf_pool_t::page_hash_latch: The specialization of rw_lock for the page_hash.
buf_pool_t::page_hash_latch::read_lock(): Assert that buf_pool.mutex is not being held by the caller.
buf_pool_t::page_hash_latch::write_lock() may be called while not holding buf_pool.mutex. buf_pool_t::watch_set() is such a caller.
buf_pool_t::page_hash_latch::read_lock_wait(), page_hash_latch::write_lock_wait(): The spin loops. These will obey the global parameters innodb_sync_spin_loops and innodb_sync_spin_wait_delay.
buf_pool_t::freed_page_hash: A singly linked list of copies of buf_pool.page_hash that ever existed. The fact that we never free any buf_pool.page_hash.array guarantees that all page_hash_latch that ever existed will remain valid until shutdown.
buf_pool_t::resize_hash(): Replaces buf_pool_resize_hash(). Prepend a shallow copy of the old page_hash to freed_page_hash.
buf_pool_t::page_hash_table::n_cells: Declare as Atomic_relaxed.
buf_pool_t::page_hash_table::lock(): Explain what prevents a race condition with buf_pool_t::resize_hash().
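Below is a minimal, hedged sketch (not the actual MariaDB code) of the kind of rw-lock described above: a std::atomic<uint32_t> word whose high bit marks an exclusive writer and whose low bits count readers, so that read_trylock() is a single fetch_add(1). All names are illustrative.

    #include <atomic>
    #include <cstdint>

    class tiny_rw_lock
    {
      std::atomic<uint32_t> word{0};
      static constexpr uint32_t WRITER= 1U << 31;
    public:
      bool read_trylock()
      {
        // Optimistically register as a reader with one atomic add.
        if (!(word.fetch_add(1, std::memory_order_acquire) & WRITER))
          return true;
        // A writer holds the lock: undo the reader registration.
        word.fetch_sub(1, std::memory_order_relaxed);
        return false;
      }
      void read_unlock() { word.fetch_sub(1, std::memory_order_release); }
      bool write_trylock()
      {
        uint32_t expected= 0;
        return word.compare_exchange_strong(expected, WRITER,
                                            std::memory_order_acquire,
                                            std::memory_order_relaxed);
      }
      void write_unlock() { word.fetch_sub(WRITER, std::memory_order_release); }
    };

A single 32-bit atomic word keeps the latch small enough to share a cache line with the hash table pointers, which is the memory-bus optimization the commit message refers to.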
-
Marko Mäkelä authored
hash_get_n_cells(): Remove. Access n_cells directly.
hash_get_nth_cell(): Remove. Access the array directly.
hash_table_clear(): Replaced with hash_table_t::clear().
hash_table_create(), hash_table_free(): Remove.
hash0hash.cc: Remove.
-
Marko Mäkelä authored
btr_search_sys::parts[]: A single structure for the partitions of the adaptive hash index. Replaces the 3 separate arrays btr_search_latches[], btr_search_sys->hash_tables, and btr_search_sys->hash_tables[i]->heap.
hash_table_t::heap, hash_table_t::adaptive: Remove.
ha0ha.cc: Remove. Move all code to btr0sea.cc.
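As a hedged illustration of the data-structure change (not the real InnoDB definitions), the three parallel arrays collapse into one array of per-partition structs; the member types and the partition count are simplified placeholders.

    #include <cstddef>

    struct ahi_partition            // stands in for btr_search_sys::parts[i]
    {
      void *latch;                  // was btr_search_latches[i]
      void *hash_table;             // was btr_search_sys->hash_tables[i]
      void *heap;                   // was hash_tables[i]->heap
    };

    struct ahi_sys                  // stands in for btr_search_sys
    {
      static constexpr size_t n_parts= 8;   // assumed partition count
      ahi_partition parts[n_parts];
    };

Grouping the latch, table and heap of one partition in a single struct keeps data that is used together adjacent in memory.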
-
Marko Mäkelä authored
HASH_TABLE_SYNC_MUTEX was kind-of used for the adaptive hash index, even though that hash table is already protected by btr_search_latches[]. HASH_TABLE_SYNC_RWLOCK was only being used for buf_pool.page_hash. It is cleaner to decouple that synchronization from hash_table_t and move it to the actual user.

buf_pool_t::page_hash_latches[]: Synchronization for buf_pool.page_hash.
LATCH_ID_HASH_TABLE_MUTEX: Remove.
hash_table_t::sync_obj, hash_table_t::n_sync_obj: Remove.
hash_table_t::type, hash_table_sync_t: Remove.
HASH_ASSERT_OWN(), hash_get_mutex(), hash_get_nth_mutex(): Remove.
ib_recreate(): Merge to the only caller, buf_pool_resize_hash().
ib_create(): Merge to the callers.
ha_clear(): Merge to the only caller, buf_pool_t::close().
buf_pool_t::create(): Merge the ib_create() and hash_create_sync_obj() invocations.
ha_insert_for_fold_func(): Clarify an assertion.
buf_pool_t::page_hash_lock(): Simplify the logic.
hash_assert_can_search(), hash_assert_can_modify(): Remove. These predicates were only being invoked for the adaptive hash index, while they are only effective for buf_pool.page_hash.
HASH_DELETE_AND_COMPACT(): Merge to ha_delete_hash_node().
hash_get_sync_obj_index(): Remove.
hash_table_t::heaps[], hash_get_nth_heap(): Remove. They were actually unused!
hash_get_heap(): Remove. It was only used in ha_delete_hash_node(), where we always use hash_table_t::heap.
hash_table_t::calc_hash(): Replaces hash_calc_hash().
-
Daniel Black authored
MRI scripts cannot handle '+' in paths, and the Ubuntu CI makes use of these. So we remove the top-level build dir from the script and transform it into a relative-path script.
-
Daniel Black authored
Because of common dependencies between the static libraries, the list can contain duplicates. We reduce these down to the single last occurrence in the list. This reduces the rebuild time, the number of ADDLIB entries, and the size of libmariadbd.a from:

$ (cd builddir/; time make -j)
...
real 0m30.789s
user 1m33.477s
sys 0m19.678s

$ grep ADDLIB builddir/libmysqld/mysqlserver-\$\<CONFIG\>.mri.tpl | wc -l
179
$ du -h builddir/libmysqld/libmariadbd.a
4.1G builddir/libmysqld/libmariadbd.a

to:

$ (cd builddir/; time make -j)
...
real 0m20.139s
user 1m32.423s
sys 0m12.208s

$ grep ADDLIB builddir/libmysqld/mysqlserver-\$\<CONFIG\>.mri.tpl | wc -l
25
$ du -h builddir/libmysqld/libmariadbd.a
688M builddir/libmysqld/libmariadbd.a
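A hedged sketch of the deduplication rule (keep only the last occurrence of each library), written as plain C++ and independent of the actual CMake/MRI template code:

    #include <algorithm>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // Reduce duplicates down to the single last occurrence in the list.
    std::vector<std::string>
    keep_last_occurrence(const std::vector<std::string> &libs)
    {
      std::unordered_set<std::string> seen;
      std::vector<std::string> out;
      // Walk backwards so the last occurrence wins, then restore the order.
      for (auto it= libs.rbegin(); it != libs.rend(); ++it)
        if (seen.insert(*it).second)
          out.push_back(*it);
      std::reverse(out.begin(), out.end());
      return out;
    }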
-
Marko Mäkelä authored
-
Vlad Lesin authored
from configuration files
-
Vlad Lesin authored
Post-push fix: add the mysqld options to the backup string in the mariabackup SST script for the case when logging is not done to syslog.
-
Marko Mäkelä authored
The test mariabackup.mdev-14447 started failing due to the option --apply-log-only, which became invalid in commit 9bdf35e9 and was removed in commit 8c71c6aa.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
and put them all together in mysqld.cc
-
- 17 Jun, 2020 11 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
(if this option is present). perf and other tools that do stack walking work much better with it.
-
Thirunarayanan Balathandayuthapani authored
- There is a possibility that the metadata record is the only record in the leftmost leaf page. So change the assertion to check only the previous page.
-
Sergei Petrunia authored
-
Thirunarayanan Balathandayuthapani authored
- This is a regression from MDEV-21174 (56f6dab1). InnoDB resets the BTR_EXTERN_LEN value at the wrong offset.
-
Alexey Yurchenko authored
galera_ipv6_mariabackup MTR tests
-
MikkoJaakola authored
The galera.galera_parallel_autoinc_manytrx mtr test opens and runs the test scenario through 3 connections to node 1 and one connection to node 2.

In the test initialization phase, the test creates two tables, 't1' and 'ten', and then creates a stored procedure 'p1' to operate on these tables. These 3 create DDL statements are issued through the same connection to node 1.

In the next test phase, the mtr script uses the send command to launch the call to the p1 stored procedure through all 3 connections to node 1 and through one connection to node 2. As the mtr send command is asynchronous, this test phase is a non-blocking and fast operation.

Now, if the replication between nodes is slow, it may happen that the initialization-phase DDL statements have not been received or have not been fully applied in node 2. Therefore there is no guarantee that the test tables and the stored procedure have been created in node 2. Yet, the test is trying to call p1 in node 2. In the failure case error logs, there is the error message "MTR failed: query 'reap' failed: 1305: PROCEDURE test.p1 does not exist". The reap command through the connection to node 2 is the first place where the test execution may observe that the test tables and/or the stored procedure have not yet been created in node 2.

The fix in this commit adds a wait condition in the connection to node 2, to wait until the stored procedure is created before calling it. The wait is implemented by looking in information_schema.routines for the p1 stored procedure.
-
Jan Lindström authored
-
Jan Lindström authored
MDEV-22125: galera.galera_drop_multi MTR failed: InnoDB: MySQL is trying to drop database `fts`.`` though there are still open handles
MDEV-22140: galera.galera_drop_database MTR failed: InnoDB: MySQL is trying to drop database `fts`.`` though there are still open handles

Add wait conditions to wait until all operations are done in both nodes.
-
Vladislav Vaintroub authored
Make sure to initialize SSL early enough, when the encryption plugins are loaded.
-
Krunal Bauskar authored
Increased scalability through even distribution.

Rollback segments are allocated to transactions in round-robin fashion. This is controlled by incrementing a static-scope counter named rseg_slot. This logic is not protected by any mutex, nor does it use an atomic for the counter. It can therefore cause the same rollback segment to be allocated to N different transactions that request an allocation at the same time. While this is not a correctness issue, as a rollback segment can host multiple transactions, from a contention (performance) perspective it is better to allocate the rollback segments in round-robin fashion.

The fix makes the counter atomic, which ensures that the original design semantic (even distribution through round-robin) is retained.
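A hedged sketch (not the actual InnoDB code) of the fix described above: making the round-robin slot counter a std::atomic so that concurrent transactions draw distinct counter values. The segment count and names are illustrative.

    #include <atomic>
    #include <cstddef>

    static constexpr size_t N_RSEGS= 128;        // assumed number of rollback segments
    static std::atomic<size_t> rseg_slot{0};     // previously a plain static counter

    size_t pick_rseg_slot()
    {
      // fetch_add hands every caller a unique counter value, preserving the
      // even round-robin distribution even under concurrent allocation.
      return rseg_slot.fetch_add(1, std::memory_order_relaxed) % N_RSEGS;
    }

Relaxed ordering is enough here because only the counter value itself matters, not any ordering with other memory operations.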
-
- 16 Jun, 2020 2 commits
-
-
Sachin authored
MDEV-22370 safe_mutex: Trying to lock uninitialized mutex at /data/src/10.4-bug/sql/rpl_parallel.cc, line 470 upon shutdown during FTWRL

Problem: When we issue FTWRL and shutdown in parallel, there is a race between FTWRL and shutdown. Shutdown might destroy the mutex (pool->LOCK_rpl_thread_pool) before FTWRL can lock it, so we can get a crash in the FTWRL thread.

Solution: mysql_mutex_destroy(pool->LOCK_rpl_thread_pool) should wait for the FTWRL thread to complete its work before destroying the mutex. So slave_prepare_for_shutdown() will just deactivate the pool, and the mutex is destroyed later in end_slave().
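A hedged sketch of the shutdown ordering described above; the struct and function bodies are illustrative, and a pthread mutex stands in for mysql_mutex_t / pool->LOCK_rpl_thread_pool.

    #include <pthread.h>

    struct rpl_pool
    {
      pthread_mutex_t lock;
      bool inactive;
    };

    void pool_init(rpl_pool *pool)
    {
      pthread_mutex_init(&pool->lock, nullptr);
      pool->inactive= false;
    }

    void slave_prepare_for_shutdown(rpl_pool *pool)
    {
      // Only mark the pool inactive here; a concurrent FTWRL may still need
      // to take pool->lock, so the mutex must remain valid.
      pthread_mutex_lock(&pool->lock);
      pool->inactive= true;
      pthread_mutex_unlock(&pool->lock);
    }

    void end_slave(rpl_pool *pool)
    {
      // Destroy the mutex only at the very end, after any FTWRL user is done.
      pthread_mutex_destroy(&pool->lock);
    }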
-
MikkoJaakola authored
The galera.galera_parallel_autoinc_manytrx mtr test opens and runs the test scenario through 3 connections to node 1 and one connection to node 2.

In the test initialization phase, the test creates two tables, 't1' and 'ten', and then creates a stored procedure 'p1' to operate on these tables. These 3 create DDL statements are issued through the same connection to node 1.

In the next test phase, the mtr script uses the send command to launch the call to the p1 stored procedure through all 3 connections to node 1 and through one connection to node 2. As the mtr send command is asynchronous, this test phase is a non-blocking and fast operation.

Now, if the replication between nodes is slow, it may happen that the initialization-phase DDL statements have not been received or have not been fully applied in node 2. Therefore there is no guarantee that the test tables and the stored procedure have been created in node 2. Yet, the test is trying to call p1 in node 2. In the failure case error logs, there is the error message "MTR failed: query 'reap' failed: 1305: PROCEDURE test.p1 does not exist". The reap command through the connection to node 2 is the first place where the test execution may observe that the test tables and/or the stored procedure have not yet been created in node 2.

The fix in this commit adds a wait condition in the connection to node 2, to wait until the stored procedure is created before calling it. The wait is implemented by looking in information_schema.routines for the p1 stored procedure.
-
- 15 Jun, 2020 1 commit
-
-
Alexey Yurchenko authored
galera_ipv6_mariabackup MTR tests
-
- 14 Jun, 2020 2 commits
-
-
Vlad Lesin authored
MDEV-21298: mariabackup doesn't read from the [mariadbd] and [mariadbd-X.Y] server option groups from configuration files
MDEV-21301: mariabackup doesn't read the [mariadb-backup] option group in the configuration file

All three issues require changing the same code, which is why their fixes are joined in one commit.

The fix is to invoke load_defaults_or_exit() and handle_options() for the backup-specific groups separately from the client-server groups, so that the last handle_options() call can fail on unknown backup-specific options. The order of option processing is the following:

1) Load server groups and process server options, ignoring unknown options.
2) Load client groups and process client options, ignoring unknown options.
3) Load backup groups and process client-server options, exiting on an unknown option.
4) Process --mysqld-args command line options, ignoring unknown options.

A new global flag, my_handle_options_init_variables, was added to make it possible to invoke handle_options() for the same allowed option set several times without re-initialising previously set option values.

Destroying the --password value is moved from the option-processing callback to mariabackup's handle_options() function, to make it possible to invoke the server's handle_options() several times for the same possible allowed option set.

Galera invokes wsrep_sst_mariabackup.sh with mysqld command line options to configure mariabackup as close to the server as possible. It is not known which server options are supported by mariabackup when the script is invoked. That is why the new mariabackup option "--mysqld-args" is added; all unknown options that follow this option will be silently ignored.

wsrep_sst_mariabackup.sh was also changed to:
- use the "--mysqld-args" mariabackup option to pass mysqld options,
- remove the deprecated innobackupex mode,
- remove the unsupported mariabackup options: --encrypt, --encrypt-key, --rebuild-indexes, --rebuild-threads
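A hedged sketch of the four-stage processing order described above; the stage table is purely illustrative and does not reproduce the real mariabackup option-handling API.

    #include <array>

    struct option_stage
    {
      const char *groups;     // which configuration groups are loaded
      bool fail_on_unknown;   // whether an unknown option aborts mariabackup
    };

    // 1) server groups, 2) client groups, 3) backup groups, 4) --mysqld-args
    constexpr std::array<option_stage, 4> stages{{
      {"server groups ([mariadbd], [mariadbd-X.Y], ...)", false},
      {"client groups",                                   false},
      {"backup groups ([mariadb-backup], ...)",           true},
      {"--mysqld-args command-line options",              false},
    }};

Only the third stage is allowed to fail on an unknown option, which is what lets mariabackup reject typos in its own options while still tolerating server options it does not implement.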
-
Kentoku SHIBA authored
-
- 16 Jun, 2020 3 commits
-
-
Thirunarayanan Balathandayuthapani authored
MEM_GET_VBITS(): Save information about uninitialized data. MEM_SET_VBITS(): Restore information about uninitialized data.
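As a hedged illustration of what such macros are typically built on (Valgrind's memcheck client requests; the wrapper function and buffer names here are made up, and the real MariaDB macros may map to other sanitizers as well):

    #include <cstring>
    #include <valgrind/memcheck.h>

    void copy_preserving_vbits(void *dst, const void *src, size_t len,
                               unsigned char *vbits /* len bytes of scratch */)
    {
      (void) VALGRIND_GET_VBITS(src, vbits, len);  // save which bytes are undefined
      std::memcpy(dst, src, len);                  // memcpy would mark dst as defined
      (void) VALGRIND_SET_VBITS(dst, vbits, len);  // restore the undefined-ness on dst
    }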
-
Vladislav Vaintroub authored
Make ut_new_get_key_by_file() less expensive: remove the binary search and compute the auto_event_keys offset at compile time.
-
Otto Kekäläinen authored
Replace all references to /usr/sbin/mysqld (and bin and libexec) with mariadbd, so that the server binary will always be 'mariadbd'. Also update all places that reference the server binary in other ways, such as AppArmor profiles and scripts that previously expected to find a 'mysqld' in process lists.
-