- 27 Jan, 2018 2 commits
-
-
Monty authored
I disabled rocksdb in the ASAN build, as I got a link error when it was included.
-
Andrei Elkin authored
replicate_events_marked_for_skip=FILTER_ON_MASTER: when the events of a big transaction are binlogged at an offset of more than 2GB from the beginning of the log, the semisync master's dump thread lost such events. The dump thread skipped them because it computed their skipping status erroneously. The current fixes make sure the skipping status is computed correctly. The test verifies this by simulating the 2GB offset.
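As an aside, one plausible way a ">2GB offset" bug of this kind arises is a binlog position being narrowed to a signed 32-bit integer; the commit message does not say that this was the actual root cause, so the following is only a hedged, self-contained illustration of that failure mode, with hypothetical helper names:

```cpp
// Hypothetical illustration (not the actual MariaDB code): if a binlog
// position is ever narrowed to a signed 32-bit integer, offsets past 2GB
// become negative and any "skip this event?" comparison based on them
// can give the wrong answer.
#include <cstdint>
#include <iostream>

// Assumed helpers for the sketch: decide whether an event at 'pos' should
// be skipped, where 'skip_until' is the position up to which events are
// filtered out.
static bool should_skip_narrowed(int32_t pos, int32_t skip_until) {
  return pos < skip_until;            // breaks once 'pos' wraps negative
}

static bool should_skip(uint64_t pos, uint64_t skip_until) {
  return pos < skip_until;            // correct for any 64-bit offset
}

int main() {
  const uint64_t pos = 2147483648ULL + 4096;  // just past the 2GB mark
  const uint64_t skip_until = 1024;           // only the very first events are skipped
  std::cout << "narrowed: "
            << should_skip_narrowed(static_cast<int32_t>(pos),
                                    static_cast<int32_t>(skip_until))
            << "\n64-bit:   " << should_skip(pos, skip_until) << '\n';
  // The narrowed variant wrongly reports "skip" because the cast made pos negative.
}
```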
-
- 26 Jan, 2018 6 commits
-
-
-
Monty authored
-
Vladislav Vaintroub authored
Do not include the test suite in the release zip anymore.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
- 25 Jan, 2018 1 commit
-
-
Jan Lindström authored
The problem was that the dict_sys mutex was held when calling dict_stats_save_defrag_stats(), which assumes we do not own the dict_sys mutex.
-
- 24 Jan, 2018 4 commits
-
-
Monty authored
Assertion `count > 0' failed in rpl_parallel_thread_pool::get_thread, rpl.rpl_parallel failed in buildbot. The reason for this is that one thread can call rpl_parallel_resize_pool_if_no_slaves() while another thread calls rpl_parallel_activate_pool() at the same time. If rpl_parallel_activate_pool() is called before rpl_parallel_resize_pool_if_no_slaves() has finished, pool->count will be set to 0 even if there are active slave threads. Added a mutex lock in rpl_parallel_activate_pool() to protect against this scenario, which seems to fix the issue.
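A minimal sketch of the locking pattern described here, using standard C++ primitives; the structure, member names, and functions below are stand-ins for illustration, not the actual rpl_parallel code:

```cpp
// Both the "activate" and the "resize if no slaves" paths take the same
// mutex, so the pool count cannot be zeroed while an activation that still
// expects threads is in progress.
#include <cassert>
#include <mutex>
#include <vector>

struct thread_pool {
  std::mutex lock;                 // assumed equivalent of the pool mutex
  std::vector<int> threads;        // stand-in for the worker thread slots
  size_t count = 0;
  bool any_slave_active = false;
};

void activate_pool(thread_pool &pool, size_t wanted) {
  std::lock_guard<std::mutex> guard(pool.lock);   // the added protection
  pool.threads.resize(wanted);
  pool.count = wanted;
  pool.any_slave_active = true;
}

void resize_pool_if_no_slaves(thread_pool &pool) {
  std::lock_guard<std::mutex> guard(pool.lock);
  if (!pool.any_slave_active) {     // checked under the same mutex
    pool.threads.clear();
    pool.count = 0;
  }
}

int get_thread(thread_pool &pool) {
  std::lock_guard<std::mutex> guard(pool.lock);
  assert(pool.count > 0);           // the assertion that fired in the bug
  return 0;
}
```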
-
Monty authored
It crashed because we accessed lex->current_select when it was NULL, which is the case for SP parameters or local variables.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
While the bug was reported as a regression of MDEV-11025 (Make number of page cleaner threads variable dynamic) in MariaDB Server 10.3, the code that MariaDB Server 10.2 inherited from MySQL 5.7.4 (WL#6642) looks prone to similar errors.

pc_flush_slot(): If there is no work to do, reset the is_requested signal, to avoid potential busy-waiting in buf_flush_page_cleaner_worker(). If the coordinator thread has shut down, avoid resetting the is_requested event, to avoid a potential hang at shutdown if there are multiple worker threads.
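A simplified sketch of that signalling rule, using a standard C++ condition variable instead of InnoDB's os_event API; the names and structure below are assumptions for illustration, not the real page cleaner code:

```cpp
// A worker resets the "is_requested" flag only when it found no work and
// the coordinator is still running; after shutdown the flag is left set so
// the remaining workers can still wake up and exit.
#include <condition_variable>
#include <mutex>

struct page_cleaner_state {
  std::mutex mtx;
  std::condition_variable is_requested;  // stand-in for the is_requested event
  bool requested = false;
  int pending_slots = 0;                 // slots that still need flushing
  bool coordinator_running = true;
};

// Returns true if the worker actually flushed a slot.
bool flush_slot(page_cleaner_state &s) {
  std::unique_lock<std::mutex> lk(s.mtx);
  if (s.pending_slots == 0) {
    // No work: clear the request flag, but only while the coordinator is
    // alive, otherwise workers waiting for shutdown could hang.
    if (s.coordinator_running)
      s.requested = false;
    return false;
  }
  --s.pending_slots;
  return true;
}

void worker_loop(page_cleaner_state &s) {
  for (;;) {
    std::unique_lock<std::mutex> lk(s.mtx);
    s.is_requested.wait(lk, [&] { return s.requested || !s.coordinator_running; });
    if (!s.coordinator_running)
      return;
    lk.unlock();
    while (flush_slot(s)) {}             // drain work, then wait again
  }
}
```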
-
- 23 Jan, 2018 3 commits
-
-
Daniel Black authored
Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
-
Alexander Barkov authored
-
Vladislav Vaintroub authored
FULLTEXT auxiliary tables: change the logic to take an MDL lock on all tables before tablespaces are copied, rather than locking every single tablespace just before it is copied.
-
- 22 Jan, 2018 6 commits
-
-
Marko Mäkelä authored
ibuf_merge_or_delete_for_page(): Invoke fil_space_acquire_silent() instead of fil_space_acquire() in order to avoid displaying a useless message. We know perfectly well that a tablespace can be dropped while a change buffer merge is pending, because change buffer merges skip any transactional locks.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
innobase_commit_by_xid(), innobase_rollback_by_xid(): Decrement the reference count before freeing the transaction object to the pool. Failure to do so might corrupt the transaction bookkeeping if trx_create_low() returns the same object to another thread before we are done with it.

trx_sys_close(): Detach the recovered XA PREPARE transactions from trx_sys->rw_trx_list before freeing them.
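An illustrative ordering sketch of why the decrement has to happen first; the pool class and names below are stand-ins, not the actual trx_t / trx pool code:

```cpp
// Once an object is returned to a pool, another thread may pick it up
// immediately, so any bookkeeping on the object, such as dropping its
// reference count, has to happen before the object is handed back.
#include <atomic>
#include <memory>
#include <mutex>
#include <vector>

struct pooled_trx {
  std::atomic<int> n_ref{0};
};

class trx_pool {                       // assumed stand-in for InnoDB's trx pool
  std::mutex mtx;
  std::vector<std::unique_ptr<pooled_trx>> free_list;
public:
  pooled_trx *get() {
    std::lock_guard<std::mutex> g(mtx);
    if (free_list.empty())
      return new pooled_trx();
    pooled_trx *t = free_list.back().release();
    free_list.pop_back();
    return t;
  }
  void put(pooled_trx *t) {
    std::lock_guard<std::mutex> g(mtx);
    free_list.emplace_back(t);
  }
};

void commit_by_xid(trx_pool &pool, pooled_trx *trx) {
  // Correct order: release our reference first...
  trx->n_ref.fetch_sub(1, std::memory_order_release);
  // ...and only then make the object reusable by other threads.
  pool.put(trx);
  // Doing these two steps in the opposite order lets another thread reuse
  // the object while our still-pending decrement corrupts its bookkeeping.
}
```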
-
Sergei Petrunia authored
join_tab->distinct=true means "Before doing record read with this join_tab, call join_tab->remove_duplicates() to eliminate duplicates". remove_duplicates() assumes that
- there is a temporary table $T with rows that are to be de-duplicated
- there is a previous join_tab (e.g. with join_tab->fields) which was used to populate the temp. table $T
When the query has "Impossible WHERE" and a window function, the above conditions are not met (but we still might need a window function computation step when the query has implicit grouping). The fix is to not add the remove_duplicates step if the select execution is degenerate (we'll have at most one row in the output anyway).
-
Marko Mäkelä authored
MDEV-14511 tried to avoid some consistency problems related to InnoDB persistent statistics. The persistent statistics are written by an InnoDB internal SQL interpreter that requires the InnoDB data dictionary cache to be locked. Before MDEV-14511, the statistics were written during DDL in separate transactions, which could unnecessarily reduce performance (each commit would require a redo log flush) and break atomicity, because the statistics would be updated separately from the dictionary transaction.

However, because it is unacceptable to hold the InnoDB data dictionary cache locked while suspending execution to wait for a transactional lock (in the mysql.innodb_index_stats or mysql.innodb_table_stats tables) to be released, any lock conflict was immediately reported as "lock wait timeout".

To fix MDEV-14941, an attempt was made to reduce these lock conflicts by acquiring transactional locks on the user tables in both the statistics and DDL operations, but it would still not entirely prevent lock conflicts on the mysql.innodb_index_stats and mysql.innodb_table_stats tables. Fixing the remaining problems would require a change that is too intrusive for a GA release series, such as MariaDB 10.2. Therefore, we revert the change MDEV-14511. To silence the MDEV-13201 assertion, we use the pre-existing flag trx_t::internal.
-
- 21 Jan, 2018 2 commits
-
-
Monty authored
current_select may point to data from old parser states when calling a stored procedure with CALL. The failure happens in Item::Item when testing if we are in HAVING. Fixed by explicitly resetting current_select in do_execute_sp() and in sp_rcontext::create(). The latter is also needed for stored functions.
-
Monty authored
-
- 18 Jan, 2018 4 commits
-
-
Igor Babaev authored
caused an error. The function subselect_single_select_engine::print() did not print the WITH clause attached to a subselect with a single select engine. As a result, views using subqueries with attached WITH clauses lost these clauses when saved in frm files.
-
Marko Mäkelä authored
The field trx_rseg_t::trx_ref_count that was added in WL#6965 in MySQL 5.7.5 is being incremented twice if a recovered transaction includes both undo log partitions insert_undo and update_undo. This reference count is being used in trx_purge(), which invokes trx_purge_initiate_truncate() to try to truncate an undo tablespace file. Because of the double-increment, the trx_ref_count would never reach 0. It is possible that after the failed truncation attempt, the undo tablespace would be disabled for logging any new transactions until the server is restarted (hopefully after committing or rolling back all transactions, so that no transactions would be recovered on the next startup).

trx_resurrect_insert(), trx_resurrect_update(): Do not increment trx_ref_count. Instead, let the caller do that.

trx_lists_init_at_db_start(): Increment rseg->trx_ref_count only once for each recovered transaction. Adjust comments. Finally, if innodb_force_recovery prevents the undo log scan, do not bother iterating the empty lists.
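A minimal sketch of the counting bug; this is an assumed simplification of the rollback-segment reference counting for illustration, not the actual InnoDB code:

```cpp
// If recovery increments the per-rollback-segment counter once per undo log
// partition but purge decrements it once per transaction, a transaction with
// both an insert_undo and an update_undo partition leaves the counter stuck
// above zero, so the undo tablespace truncation check never succeeds.
#include <cassert>

struct rseg { int trx_ref_count = 0; };

void resurrect_buggy(rseg &r, bool has_insert_undo, bool has_update_undo) {
  if (has_insert_undo) ++r.trx_ref_count;   // incremented here...
  if (has_update_undo) ++r.trx_ref_count;   // ...and again here
}

void resurrect_fixed(rseg &r, bool has_insert_undo, bool has_update_undo) {
  if (has_insert_undo || has_update_undo)
    ++r.trx_ref_count;                      // once per recovered transaction
}

int main() {
  rseg buggy, fixed;
  resurrect_buggy(buggy, true, true);
  resurrect_fixed(fixed, true, true);
  --buggy.trx_ref_count;                    // purge: one decrement per transaction
  --fixed.trx_ref_count;
  assert(fixed.trx_ref_count == 0);         // truncation can proceed
  assert(buggy.trx_ref_count == 1);         // never reaches zero
}
```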
-
Monty authored
-
Monty authored
The problem was that max_size was accidentally set to 1 in some cases.
Other things:
- Adjust max_rows if min_rows > max_rows.
- Removed unused variable varchar_length.
- Adjusted max_pack_length (safety fix).
-
- 16 Jan, 2018 5 commits
-
-
Marko Mäkelä authored
srv_prepare_to_delete_redo_log_files(): Initialize srv_start_lsn.
-
Alexander Barkov authored
1. Moving the following methods from THD to Item_change_list:
   nocheck_register_item_tree_change()
   check_and_register_item_tree_change()
   rollback_item_tree_changes()
   as they work only with the "change_list" member and don't require anything else from THD.
2. Deriving THD from Item_change_list.
This change will help to fix "MDEV-14603 signal 11 with short stacktrace" more easily.
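The rough shape of that refactoring, as a simplified sketch; the declarations below are stand-ins and do not reproduce the real THD or Item_change_list classes:

```cpp
// The three methods only touch the change list, so they live in a small
// class of their own, and THD gains them by inheritance.
#include <list>

struct Item;                             // opaque for this sketch

class Item_change_list {
  std::list<Item **> change_list;        // assumed representation
public:
  void nocheck_register_item_tree_change(Item **place) {
    change_list.push_back(place);
  }
  void check_and_register_item_tree_change(Item **place, Item **new_value) {
    // The point of the sketch: no THD state is needed here.
    if (place != new_value)
      nocheck_register_item_tree_change(place);
  }
  void rollback_item_tree_changes() {
    change_list.clear();                 // the real code restores the old items
  }
};

// THD keeps its other responsibilities and inherits the list handling
// instead of owning a "change_list" member directly.
class THD : public Item_change_list {
  // ... the rest of the session state ...
};
```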
-
Marko Mäkelä authored
innobase_start_or_create_for_mysql(): Only start the data dictionary and transaction subsystems in normal server startup and during mariabackup --export.
-
Marko Mäkelä authored
btr_cur_update_in_place(): Read block->index only once, so that it cannot change to NULL after the first read. When block->index != NULL, it must be equal to index.
-
Marko Mäkelä authored
btr_cur_update_in_place(): The call rw_lock_x_lock(ahi_latch) must of course be inside the if (ahi_latch) condition. This is a mistake that I made when backporting the fix-under-development from 10.3.
-
- 15 Jan, 2018 7 commits
-
-
Sergei Petrunia authored
-
Marko Mäkelä authored
This race condition is a regression caused by MDEV-12121. btr_cur_update_in_place(): Evaluate block->index != NULL only once, in order to determine whether an adaptive hash index bucket needs to be exclusively locked and unlocked. If we evaluated block->index multiple times, and the adaptive hash index was disabled before we locked it, then we would never release the adaptive hash index bucket latch, which would eventually lead to InnoDB hanging.
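A sketch of the "read the shared pointer once" pattern described here, using standard C++ stand-ins for the InnoDB types; this is not the actual btr_cur_update_in_place() code:

```cpp
// The adaptive hash index can be disabled concurrently, which clears
// block->index, so the decision to lock and the decision to unlock must be
// based on the same single read of that pointer.
#include <atomic>
#include <shared_mutex>

struct dict_index { std::shared_mutex ahi_latch; };
struct buf_block  { std::atomic<dict_index *> index{nullptr}; };

void update_in_place(buf_block &block) {
  // One load decides both whether to lock and whether to unlock.
  dict_index *ahi_index = block.index.load(std::memory_order_acquire);
  std::shared_mutex *latch = ahi_index ? &ahi_index->ahi_latch : nullptr;

  if (latch)
    latch->lock();               // inside the null check, as the fix requires

  // ... perform the in-place update ...

  if (latch)
    latch->unlock();             // guaranteed to pair with the lock above
}
```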
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Sergei Petrunia authored
-
Marko Mäkelä authored
innodb.truncate_inject: Replacement for innodb_zip.wl6501_error_1. Note: unlike MySQL, in some cases TRUNCATE does not return an error in MariaDB. This should be fixed in the scope of MDEV-13564 or similar.
-