- 18 Nov, 2014 13 commits
- Sergei Golubchik authored
- Sergei Golubchik authored
- Sergei Golubchik authored
  * use the same HAVE_C/CXX_ variables for compiler flag tests as the rest of the server and tokudb - to use cached results
  * the plugin's name should be "mroonga", not "ha_mroonga"
  * don't use set_property(TARGET plugin_name ...); it aborts cmake when a plugin is disabled, because the target doesn't exist in that case
  Result: mroonga can now be disabled from the cmake command line
- Sergei Golubchik authored
  try the first unique key as a surrogate PK *before* disabling extended keys because of a missing PK
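  The point is purely the ordering: look for a usable surrogate before giving up on extended keys. A minimal toy sketch of that ordering; every struct, field and constant name here is invented for illustration, not the server's actual code:

  ```cpp
  static const unsigned MAX_KEY= 64;        // illustrative sentinel

  struct KeyInfo { bool is_unique, has_nulls; };
  struct Share
  {
    unsigned primary_key, key_count;        // primary_key >= MAX_KEY: no PK
    KeyInfo  keys[MAX_KEY];
    bool     use_extended_keys;
  };

  void choose_pk(Share *s)
  {
    unsigned pk= s->primary_key;
    if (pk >= MAX_KEY)                      // no real PK: look for a surrogate
      for (unsigned i= 0; i < s->key_count; i++)
        if (s->keys[i].is_unique && !s->keys[i].has_nulls) { pk= i; break; }
    s->primary_key= pk;
    if (pk >= MAX_KEY)                      // only now, with no surrogate either,
      s->use_extended_keys= false;          // disable extended keys
  }
  ```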
- Sergei Golubchik authored
- Sergei Golubchik authored
  1. remove find_mysql_client (from a bad merge)
  2. use $mysql_command
- Sergei Golubchik authored
  disable binlogging when loading help tables
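  A toy sketch of the save/clear/restore pattern such a fix typically uses. The real server keeps this flag in the session's option bits; the THD struct and bit value below are stand-ins:

  ```cpp
  #include <cstdint>

  static const uint64_t OPTION_BIN_LOG= 1ULL << 0;  // illustrative bit value

  struct THD { uint64_t option_bits; };

  void load_help_tables(THD *thd)
  {
    uint64_t saved= thd->option_bits;
    thd->option_bits&= ~OPTION_BIN_LOG;  // turn binlogging off for the load
    /* ... run the fill_help_tables.sql statements here ... */
    thd->option_bits= saved;             // restore the previous setting
  }
  ```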
- Sergei Golubchik authored
  update mysql_system_tables_fix.sql to match mysql_system_tables.sql
- Sergei Golubchik authored
  use the same restriction for character_set_client on the command line and from SQL. Also: remove a strange hack from thd_init_client_charset() that contradicted the manual (collation_connection and character_set_results were not always set)
- Sergei Golubchik authored
  a different fix for the view.test --ps-protocol crash (revert the old fix, which caused a regression)
- Sergei Golubchik authored
  ALTER TABLE: don't fill default values per row, do it once. And do it in two places: for copy_data_between_tables() and for online ALTER. Also, run the function_defaults test both for MyISAM and for InnoDB.
- Sergei Golubchik authored
  when reading data into the record buffer, the tail of a VARCHAR (between the real and the maximum varchar length) is not written to. Initialize the record buffer to avoid writing uninitialized memory to disk.
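  A toy sketch of the idea: zero-fill the buffer once at allocation, so every byte is defined even when the VARCHAR tail is never written. The allocation details are illustrative:

  ```cpp
  #include <cstdlib>
  #include <cstring>

  // Allocate a record buffer of reclength bytes, fully initialized, so the
  // unused tail of a VARCHAR (between actual and maximum length) can never
  // leak uninitialized memory to disk.
  unsigned char *alloc_record_buffer(size_t reclength)
  {
    unsigned char *rec= static_cast<unsigned char *>(malloc(reclength));
    if (rec)
      memset(rec, 0, reclength);   // the fix: defined bytes everywhere
    return rec;
  }
  ```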
- Sergei Golubchik authored
  update big test results
- 13 Nov, 2014 2 commits
- Sergei Golubchik authored
  correct the buffer boundary check
- Sergei Golubchik authored
  reset default fields not for every modified row, but only once, at the beginning, as the set of modified fields doesn't change. Exception: INSERT ... ON DUPLICATE KEY UPDATE - there the set of fields does change per row, and in that case we reset default fields per row.
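  A toy sketch of hoisting the reset out of the row loop, keeping per-row behaviour only for the INSERT ... ON DUPLICATE KEY UPDATE case; all the types and method names are stand-ins for the real TABLE/handler interfaces:

  ```cpp
  struct Row {};
  struct RowSource { virtual Row *next()= 0; virtual ~RowSource() {} };
  struct Table
  {
    virtual void reset_default_fields()= 0;
    virtual void write_row(Row *)= 0;
    virtual ~Table() {}
  };

  void copy_rows(Table *t, RowSource *src, bool per_row_defaults)
  {
    if (!per_row_defaults)
      t->reset_default_fields();     // once: the modified field set is fixed
    while (Row *r= src->next())
    {
      if (per_row_defaults)          // INSERT ... ON DUPLICATE KEY UPDATE:
        t->reset_default_fields();   // the modified field set changes per row
      t->write_row(r);
    }
  }
  ```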
- 11 Nov, 2014 1 commit
- Sergei Golubchik authored
  (it was *after* in two cases and *before* in one case)
- 18 Nov, 2014 3 commits
- Alexander Barkov authored
- Alexander Barkov authored
  MDEV-6950 Bad results with joins comparing DATE/DATETIME and INT/DECIMAL/DOUBLE/ENUM/VARCHAR columns
  MDEV-6971 Bad results with joins comparing TIME and DOUBLE/DECIMAL columns
  Disallow using indexes on non-temporal columns to optimize ref access, range access and table elimination when the counterpart's cmp_type is TIME_RESULT, e.g.:
    SELECT * FROM t1 WHERE indexed_int_column=time_expression;
  Only an index on a temporal column can be used to optimize temporal comparison operations.
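  A toy sketch of the new rule; the enum mirrors the idea of the server's cmp_type values but is defined here only for illustration:

  ```cpp
  enum Cmp_type { INT_RESULT, REAL_RESULT, DECIMAL_RESULT, STRING_RESULT,
                  TIME_RESULT };

  // May ref/range access or table elimination use an index on a field of
  // the given type, when the comparison is performed with cmp_type?
  bool index_usable_for_cmp(Cmp_type field_type, Cmp_type cmp_type)
  {
    if (cmp_type == TIME_RESULT && field_type != TIME_RESULT)
      return false;  // e.g. indexed_int_column=time_expression: no index use
    return true;     // only a temporal index may serve a temporal comparison
  }
  ```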
- Alexander Barkov authored
  Removing a redundant and wrong condition which could access beyond the pattern string range.
- 17 Nov, 2014 4 commits
- unknown authored
  Going to 'create_new_string:' caused double-freeing of alloc_plan (there and at 'end:').
- Kristian Nielsen authored
  When a master server restarts, it writes a restart format_description event as the first event in the next binlog file. The parallel slave SQL thread queues a special restart entry for the current worker thread to signal this, so that the worker thread can roll back any prior partial transaction that might have been written to the binlog due to the master crashing.
  This queueing was missing a mysql_cond_signal() to notify the worker thread. The worker thread could therefore fail to process the restart entry, which in turn would cause the SQL thread to hang infinitely, waiting for the worker thread to complete processing. Fix by adding the missing wakeup signalling for this case.
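  A toy sketch of the queue-then-signal pattern the fix restores, using plain pthreads instead of the server's mysql_mutex_/mysql_cond_ wrappers; the Worker struct is a stand-in:

  ```cpp
  #include <pthread.h>

  struct Worker
  {
    pthread_mutex_t lock;   // protects the worker's queue
    pthread_cond_t  cond;   // worker sleeps on this when the queue is empty
    /* queue of work entries lives here */
  };

  void queue_restart_entry(Worker *w /*, restart entry */)
  {
    pthread_mutex_lock(&w->lock);
    /* ... enqueue the special restart entry ... */
    pthread_cond_signal(&w->cond);   // the wakeup that was missing
    pthread_mutex_unlock(&w->lock);
  }
  ```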
- Kristian Nielsen authored
  The test case rpl.rpl_parallel_temptable deliberately crashes the master server as part of the testing. This makes it unsuitable for Valgrind testing. So make sure that it will be skipped when testing with Valgrind.
- Kristian Nielsen authored
  The real problem here was inconsistent handling of entry->commit_errno in MYSQL_BIN_LOG::write_transaction_or_stmt(): some return paths were setting it to the value of errno, some were not. The setting was redundant anyway, as it is set consistently by the caller. Fix by consistently setting it in the caller, and not in each return path in the function.
  The test failure happened because a DBUG_EXECUTE_IF() used in the test case set an entry->commit_errno that was immediately overwritten in the caller with whatever happened to be the value of errno. This could lead to a different error message in the .result file.
- 14 Nov, 2014 1 commit
- Jan Lindström authored
  buildbot on work-amd64-valgrind
  Fixed the issue by first finding out the currently used priority for both threads, and using that to see whether we really changed the priority or not.
- 13 Nov, 2014 13 commits
- Kristian Nielsen authored
  MDEV-6917: Parallel replication: "Commit failed due to failure of an earlier commit on which this one depends", but no prior failure seen
  This bug was seen when parallel replication experienced a deadlock between transactions T1 and T2, where T2 has reached the commit phase and is waiting for T1 to commit first. In this case, the deadlock is broken by sending a kill to T2; that kill error is later detected and converted to a deadlock error, which causes T2 to be rolled back and retried.
  The problem was that the kill caused ha_commit_trans() to erroneously call wakeup_subsequent_commits() on T3, signalling it to abort because T2 failed during commit. This is incorrect, because the error in T2 is only a temporary error, which will be resolved by normal transaction retry. We should not signal an error to the next transaction until we have executed the code that handles such temporary errors.
  So this patch just removes the calls to wakeup_subsequent_commits() from ha_commit_trans(). They are incorrect in this case, and they are not needed in general, as wakeup_subsequent_commits() must in any case be called in finish_event_group() to wake up any transactions that may have started to wait after ha_commit_trans(). Normally the wakeup will in fact have happened even earlier, either from the binlog group commit code, or (in case of no binlogging) after the fast part of InnoDB/XtraDB group commit.
  The symptom of this bug was that replication would break on some transaction with "Commit failed due to failure of an earlier commit on which this one depends", but with no such failure of an earlier commit visible anywhere.
- Kristian Nielsen authored
  The retry of an event group in parallel replication set the wrong value for the end log position of the event that was retried (qev->future_event_relay_log_pos). It was too large by the size of the event, so it pointed into the middle of the following event.
  If the retry happened on the very last event of the event group, _and_ the SQL thread was stopped just after successfully retrying that event, then the SQL thread's relay log position would be left incorrect. Restarting the SQL thread could then try to read events from a garbage offset in the relay log, usually leading to an error about not being able to read the event.
- Kristian Nielsen authored
  The code around wait_for_commit in binlog group commit, which controls commit order, did the wakeup of subsequent commits early: as soon as a following transaction was put into the group commit queue, but before any such commit had actually taken place. This caused problems with too-early wakeup of transactions that need to wait for a prior commit but do not take part in the binlog group commit for one reason or another.
  This patch solves the problem by moving the wakeup to happen only after the binlog group commit is completed. This requires a new solution to ensure that transactions that arrive later than the leader are still able to participate in group commit. The patch introduces a flag wait_for_commit::commit_started; when this is set, a waiter can queue up itself in the group commit queue. This way, wait_for_prior_commit() is effectively skipped only for transactions that participate in group commit, so that skipping the wait is safe. Other transactions still wait as needed for correctness.
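  A toy sketch of the commit_started handshake described above; the flag and lock follow the description, the rest is invented for illustration:

  ```cpp
  #include <pthread.h>

  struct wait_for_commit
  {
    pthread_mutex_t LOCK_wait_commit;
    bool commit_started;  // set once this transaction queued for group commit
  };

  // A following transaction may skip the full wait and join group commit
  // only if the prior transaction has already entered the queue: from that
  // point the queue itself enforces the binlog write order.
  bool may_join_group_commit(wait_for_commit *prior)
  {
    pthread_mutex_lock(&prior->LOCK_wait_commit);
    bool ok= prior->commit_started;
    pthread_mutex_unlock(&prior->LOCK_wait_commit);
    return ok;
  }
  ```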
- Kristian Nielsen authored
  The code that handles free lists of various objects passed to worker threads in parallel replication frees them in batches, to avoid taking and releasing LOCK_rpl_thread too often. However, freeing could be delayed to the point where one thread could stall the SQL driver thread due to a full queue, while other worker threads might be idle. This could significantly degrade possible parallelism and thus performance.
  Clean up the batch freeing code so that it is more robust and now able to regularly free batches of objects, so that normally the queue will not run full unless the SQL driver thread is really far ahead of the worker threads.
- Kristian Nielsen authored
  The bug occurred in parallel replication when re-trying transactions that failed due to deadlock. In this case, the relay log file is re-opened and the events are read out again. This reading requires a format description event of the appropriate version, but the code was using a description event stored in rli, which is not thread-safe. This could lead to various rare races if the format description event was replaced by the SQL driver thread at the exact moment a worker thread was trying to use it.
  The fix is to instead make the retry code create and maintain its own format description event. When the relay log file is opened, we first read the format description event from the start of the file, before seeking to the current position. This now uses the same code as when the SQL driver thread starts from a given relay log position. This also makes sure that the correct format description event version will be used in cases where the version of the binlog could change during replication.
- Kristian Nielsen authored
  In parallel replication, threads can do two different waits for a prior transaction: one for the prior transaction to start its commit, the other for it to complete its commit. It turns out that the same PSI_stage_info message was erroneously used in both cases (probably a merge error), causing SHOW PROCESSLIST to be misleading. Fix by using a correct, distinct message in each case.
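  A toy sketch of the distinction: two stage descriptors instead of one shared one. The struct stands in for the server's PSI_stage_info, and the message strings are illustrative:

  ```cpp
  struct Stage_info { int key; const char *msg; };

  // One descriptor per wait, so SHOW PROCESSLIST can tell them apart.
  static Stage_info stage_waiting_for_prior_to_start_commit=
    { 0, "Waiting for prior transaction to start commit" };
  static Stage_info stage_waiting_for_prior_to_commit=
    { 0, "Waiting for prior transaction to commit" };
  ```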
- Kristian Nielsen authored
- Kristian Nielsen authored
  In parallel replication, the wait_for_commit facility is used to ensure that events are written into the binlog in the correct order. This is handled in an optimised way in the binlog group commit code. However, some statements, for example GRANT, are written directly into the binlog, outside of the group commit code. There was a bug that this direct write did not correctly wait for the prior transactions to have been written first, which allowed f.ex. GRANT to be written ahead of earlier transactions.
  This patch adds the missing wait_for_prior_commit() before writing directly to the binlog.
  However, the problem is still there, although the race is much less likely to occur now. The problem is that the optimised group commit code does the wakeup of following transactions early, before the binlog write is actually done. A woken-up following transaction is then allowed to run ahead and queue up for the group commit, which will ensure that the binlog writes happen in the correct order in the end. However, the code for directly written events currently bypasses this mechanism, so they get woken up and written too early. This will be fixed properly in a later patch.
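  A toy sketch of the added ordering call; both types and their methods are stand-ins for THD::wait_for_prior_commit() and the binlog object:

  ```cpp
  struct THD
  {
    void wait_for_prior_commit() { /* block until prior transactions commit */ }
  };

  struct Binlog
  {
    void write_event(THD *, const char *) { /* append the event, sync */ }
  };

  void write_stmt_directly(THD *thd, Binlog *log, const char *event)
  {
    thd->wait_for_prior_commit();  // the missing wait: preserve binlog order
    log->write_event(thd, event);  // direct write, outside group commit
  }
  ```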
- Kristian Nielsen authored
  The real bug was that open_tables() returned an error in case of thd->killed() without properly calling thd->send_kill_message() to set the correct error; this was fixed some time ago. So remove the now-redundant extra checks for thd->is_error(), possibly allowing debug builds to catch more cases of incorrect error handling.
- Kristian Nielsen authored
  In SAFE_MUTEX builds, reset the wait_for_commit mutex (destroy and re-initialise), so that the SAFE_MUTEX lock order check does not become confused when the mutex is re-used for a different purpose.
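  A toy sketch of the reuse-time reset, with plain pthreads standing in for the server's mysql_mutex_destroy()/mysql_mutex_init() wrappers; SAFE_MUTEX is the server's build-time lock-order checker:

  ```cpp
  #include <pthread.h>

  struct wait_for_commit { pthread_mutex_t LOCK_wait_commit; };

  void prepare_for_reuse(wait_for_commit *wfc)
  {
  #ifdef SAFE_MUTEX
    // Destroy and re-create so the lock-order checker treats the re-used
    // mutex as a brand new one, instead of merging both usage patterns.
    pthread_mutex_destroy(&wfc->LOCK_wait_commit);
    pthread_mutex_init(&wfc->LOCK_wait_commit, nullptr);
  #endif
  }
  ```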
- Jan Lindström authored
  setting of innodb_io_capacity_max
  (a) Changed the behaviour so that if you set innodb_io_capacity to a value > innodb_io_capacity_max, the value is accepted AND innodb_io_capacity_max is set to innodb_io_capacity * 2.
  (b) If someone wants to reduce innodb_io_capacity_max below innodb_io_capacity, then innodb_io_capacity is reduced to the same level as innodb_io_capacity_max.
  In both cases a warning is given to the user.
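  A toy sketch of the two adjustment rules; the variables stand in for the innodb_io_capacity and innodb_io_capacity_max system variables, and the warning is elided:

  ```cpp
  static unsigned long io_capacity= 200;      // stand-in: innodb_io_capacity
  static unsigned long io_capacity_max= 400;  // stand-in: innodb_io_capacity_max

  void set_io_capacity(unsigned long v)       // rule (a)
  {
    if (v > io_capacity_max)
    {
      io_capacity_max= v * 2;                 // accept the value, grow the max
      /* ... warn the user ... */
    }
    io_capacity= v;
  }

  void set_io_capacity_max(unsigned long v)   // rule (b)
  {
    if (v < io_capacity)
    {
      io_capacity= v;                         // pull io_capacity down with it
      /* ... warn the user ... */
    }
    io_capacity_max= v;
  }
  ```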
- Jan Lindström authored
  Analysis: The InnoDB error monitor is responsible for calling sync_arr_wake_threads_if_sema_free() every second to wake up possible hanging threads if they were missed in mutex_signal_object. This is not possible if the error monitor itself is on a mutex/semaphore wait, so we should avoid all unnecessary mutex/semaphore waits in the error monitor. Currently the error monitor calls buf_flush_stat_update(), which calls log_get_lsn(), and there we will try to take the log_sys mutex. A better solution for the error monitor is that buf_flush_stat_update() tries to get the lsn with mutex_enter_nowait(), and if we do not get the mutex, we do not update the stats.
  Fix: Use the log_get_lsn_nowait() function in buf_flush_stat_update(). If the returned lsn is 0, we do not update the flush stats. log_get_lsn_nowait() uses mutex_enter_nowait(); if we get the mutex, we return the correct lsn, if not, we return 0.
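  A toy sketch of the log_get_lsn_nowait() idea, using pthread_mutex_trylock() in place of InnoDB's mutex_enter_nowait(); the LogSys struct is a stand-in for log_sys:

  ```cpp
  #include <pthread.h>
  #include <cstdint>

  struct LogSys { pthread_mutex_t mutex; uint64_t lsn; };

  uint64_t log_get_lsn_nowait(LogSys *log)
  {
    uint64_t lsn= 0;
    if (pthread_mutex_trylock(&log->mutex) == 0)  // got it without blocking
    {
      lsn= log->lsn;
      pthread_mutex_unlock(&log->mutex);
    }
    return lsn;  // 0 means the mutex was busy: caller skips the stats update
  }
  ```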
- Jan Lindström authored
  on work-amd64-valgrind.
  Fixed the issue by first finding out the currently used priority for both threads, and using that to see whether we really changed the priority or not.
- 12 Nov, 2014 3 commits
- Jan Lindström authored
- Elena Stepanova authored
  MDEV-7073 main.information_schema and main.information_schema_all_engines fail in buildbot on a build without perfschema
  main.information_schema: added a condition to the query to exclude perfschema tables
  main.information_schema_all_engines: added a call to the include file to check for the presence of perfschema
- Elena Stepanova authored
  MDEV-7072 mroonga/wrapper.version_56_or_later_performance_schema fails in buildbot on a build without perfschema
  Added a call to the include file to check for the presence of perfschema