- 05 May, 2021 3 commits
-
-
Alexey Yurchenko authored
1. Fix eval command line to correctly pass the stunnel option to rsync on the donor.
2. Deprecate the `tkey`, `tcert` and `tca` options in the [sst] section in favor of the conventional `ssl-key`, `ssl-cert` and `ssl-ca`, but keep their precedence for backward compatibility.
3. Default to requiring SSL encryption if at least SSL key and cert files are specified in the configuration, either in the [sst] or [mysqld] section.
4. Enable the `verify*` option for stunnel on the donor only if
   a. a CA file is specified somewhere in the configuration
   b. it is explicitly requested in the [sst] section by specifying either ssl-mode or a CA file there.
   In this case, if ssl-mode is not explicitly given, it defaults to VERIFY_CA. ssl-mode maps to stunnel options as follows (see the sketch after this message):
   VERIFY_CA       -> verifyChain = yes
   VERIFY_IDENTITY -> verifyPeer = yes
   Example to require the donor to verify the joiner identity:
   ```
   [mysqld]
   ssl-cert=/path/to/cert
   ssl-key=/path/to/key
   ssl-ca=/path/to/ca

   [sst]
   ssl-mode=VERIFY_IDENTITY
   ```
5. If SSL verification is requested, the joiner verifies the donor by checking the secret passed to the donor via the SST request.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
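To make item 4 concrete, here is a minimal, hypothetical C++ sketch of the ssl-mode defaulting and its mapping to stunnel verify options; the real logic lives in the shell SST scripts, and the function name is invented:
```
#include <map>
#include <string>

// Hypothetical helper (not the actual SST script code): given the configured
// ssl-mode and whether a CA file is present, return the stunnel verify options
// the donor would enable, following the mapping described in item 4.
std::map<std::string, std::string>
stunnel_verify_options(std::string ssl_mode, bool have_ca)
{
  std::map<std::string, std::string> opts;
  if (ssl_mode.empty() && have_ca)
    ssl_mode = "VERIFY_CA";               // default when only a CA file is given
  if (ssl_mode == "VERIFY_CA")
    opts["verifyChain"] = "yes";          // verify the certificate chain only
  else if (ssl_mode == "VERIFY_IDENTITY")
    opts["verifyPeer"] = "yes";           // also verify the joiner's identity
  return opts;                            // empty map: no verification requested
}
```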
-
Julius Goryavsky authored
-
Julius Goryavsky authored
-
- 04 May, 2021 1 commit
-
-
Sergei Golubchik authored
When you only need the view structure, don't call handle_derived with DT_CREATE and rely on its internal hackish check to skip DT_CREATE, because handle_derived is called from many different places and this internal hackish check is indiscriminate. Instead, just don't ask handle_derived to do DT_CREATE if you don't want it to do DT_CREATE.
-
- 03 May, 2021 4 commits
-
-
Julius Goryavsky authored
After switching to the new mariabackup interface (instead of the outdated innobackupex interface, which is supported only for compatibility), we need to explicitly pass the path to the datadir directory as a parameter, since in the new interface the value of this option is not automatically set in a way that always matches the SST/IST logic. This commit adds passing this option as an explicit parameter to mariabackup. This commit also removes unnecessary options that are not used and not supported by mariabackup.

Also, numerous flaws in the common wsrep_sst_common script have been fixed:

1) There are many bash-specific constructs in the script that may not be supported by other interpreters, which can lead to the most unexpected errors during SST, because failures in the interpretation of bash-specific constructs lead to incorrect parsing of arguments;
2) The parse_cnf() function, which is often called by other scripts for the "mysqld" or "--mysqld" group, does not take into account the default group suffix; this leads to reading values only from the default group and then to errors due to reading the default values instead of the values for a specific group;
3) Some options such as --user, --innodb-data-home-dir or --datadir are not removed from the --mysqld-args list, although they are processed inside the scripts (and passing these options further may cause problems for mariabackup);
4) If an argument that the script understands is present in the --mysqld-args list twice, this causes SST to fail instead of reading the most recent value;
5) The "--host" parameter is technically still supported among the arguments of the SST scripts, but in reality the scripts do not work with it as expected, especially if it contains an IPv6 address;
6) If the port number is absent from the --address parameter value, but the port number is explicitly passed through the --port argument, then the scripts for mariabackup and xtrabackup-v2 fail;
7) If the new address interface is used (with the --address parameter), then automatic default port substitution is not performed, although it is supported for the legacy --host/--port interface (see the sketch after this list);
8) If there are spaces in the parameter values after --mysqld-args, then they are not passed on correctly, which causes mariabackup to fail during SST - the space splits the argument in such a way that it breaks the parsing of the following parameters;
9) If parameters that are names of or paths to files or directories contain spaces, then the SST scripts fail in unpredictable ways due to incorrect variable substitutions;
10) If the --log-bin option is passed among the mysqld arguments (--mysqld-args) without a value, and the --binlog option is not specified, then the script cannot substitute the default name for the binlog and cannot construct the binlog name using the --log-basename argument (which is against the server specifications);
11) Trailing slashes are not removed from directory names, which, upon further substitution, leads to double slashes in file paths;
12) The explicit --binlog parameter (which is now always transmitted from the server side) and the "hidden" --log-bin parameter in the list of arguments after --mysqld-args are treated as two different parameters in different parts of the scripts, and if they do not match for some reason, this leads to failures during SST.

Also, all new changes from the 10.6 branch have been migrated here, including the latest pull requests for authentication (only the part that concerns the SST scripts). It also fixes dozens of other bugs in all SST scripts.
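As a rough illustration of the host/port handling fixed in items 5-7, here is a hypothetical, self-contained C++ sketch; the actual implementation is shell code in the SST scripts, and the function name and the 4444 default port are assumptions for illustration only:
```
#include <string>
#include <utility>

// Hypothetical sketch, not the SST script code: split an SST address into
// host and port, handling a bracketed IPv6 literal and substituting a default
// port when none is given.
std::pair<std::string, std::string>
split_sst_address(const std::string &addr, const std::string &def_port = "4444")
{
  if (!addr.empty() && addr[0] == '[')          // "[ipv6]:port" or "[ipv6]"
  {
    size_t close = addr.find(']');
    std::string host = addr.substr(1, close - 1);
    size_t colon = addr.find(':', close);
    return {host, colon == std::string::npos ? def_port : addr.substr(colon + 1)};
  }
  size_t colon = addr.rfind(':');               // "host:port" or bare host/IPv4
  if (colon == std::string::npos)
    return {addr, def_port};                    // no port given: use the default
  return {addr.substr(0, colon), addr.substr(colon + 1)};
}
```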
-
Julius Goryavsky authored
Removed numerous extra blank lines and spaces that interfere with reading and understanding the code, making it more difficult to find errors in the scripts. Also removed all trailing spaces at the ends of lines, which cause extra lines to be marked as changed in subsequent commits. The indentation in some parts of the code has also been normalized.
-
Sergei Petrunia authored
Fix a race condition in the testcase. The testcase assumed that State='Sending data' means that the thread is already in an InnoDB lock wait. This is not the case: there is a gap between the state changing to 'Sending data' and execution reaching the point where it waits for the lock. Use a more precise check instead, through I_S.INNODB_TRX.
-
Vladislav Vaintroub authored
-
- 30 Apr, 2021 4 commits
-
-
Sujatha authored
Problem:
========
180511 11:07:58 [ERROR] Slave I/O: Unexpected master's heartbeat data: heartbeat is not compatible with local info; the event's data: log_file_name mysql-bin.000009 log_pos 1054262041, Error_code: 1623

Analysis:
=========
In a replication setup, when the master server doesn't have any events to send to the slave server, it sends a 'Heartbeat_log_event'. This event carries the current binary log filename and offset. The offset value is stored within 4 bytes of the event header. When the size of the binary log exceeds UINT32_MAX, the log_pos value will not fit in 4 bytes of memory. It overflows and hence the slave stops with an error.

Fix:
===
Since we cannot extend the common_header of the Log_event class, a value of Log_event::log_pos greater than 4GB is transported in a Heartbeat event's sub-header. Log_event::log_pos is set to zero in such a case to indicate that the 8-byte sub-header is allocated in the event (see the sketch after this message).

In case of cross-version replication the following behaviour is expected:
OLD - server without the fix
NEW - server with the fix
OLD<->NEW: works bidirectionally as long as the binlog offset is (normally) within 4GB.
When log_pos > UINT32_MAX:
OLD->NEW: the 'log_pos' is bound to overflow and the NEW slave may report an invalid event/incompatible heartbeat event error.
NEW->OLD: since the patched server sets log_pos=0 on overflow, the OLD slave will report an invalid event error.
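A minimal, hypothetical C++ sketch of the encoding idea from the fix section (names are invented; this is not the actual Log_event code):
```
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical illustration of the scheme above: the common header keeps its
// 4-byte log_pos field; if the real offset does not fit, that field is set to
// 0 and the 8-byte offset travels in a sub-header of the heartbeat event.
void encode_heartbeat_log_pos(uint64_t log_pos, uint32_t &header_log_pos,
                              std::vector<unsigned char> &subheader)
{
  if (log_pos > UINT32_MAX)
  {
    header_log_pos = 0;                          // 0 = "offset is in the sub-header"
    subheader.resize(8);
    std::memcpy(subheader.data(), &log_pos, 8);  // 8-byte offset in the sub-header
  }
  else
  {
    header_log_pos = static_cast<uint32_t>(log_pos);
    subheader.clear();                           // offset fits, no sub-header needed
  }
}
```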
-
Thirunarayanan Balathandayuthapani authored
- Fixing post-push failure of innodb_fts_misc_1 test case.
-
Thirunarayanan Balathandayuthapani authored
InnoDB tries to fetch the deleted doc ids for a discarded tablespace. In i_s_fts_deleted_generic_fill(), InnoDB needs to check whether the table is discarded or not before fetching the deleted doc ids.
-
Marko Mäkelä authored
fil_ibd_load(): Remove a message that is basically saying that everything works as expected. The other "Ignoring data file" message about the presence of an extraneous file will be retained (and expected by the test innodb.log_file_name).
-
- 29 Apr, 2021 4 commits
-
-
Nikita Malyavin authored
This reverts commit 8880dff2.
-
Nikita Malyavin authored
-
Nikita Malyavin authored
-
Igor Babaev authored
This commit replaces the call of the function setup_tables() with a call of the function setup_tables_and_check_access() in the method Multiupdate_prelocking_strategy::handle_end(). There is no known bug that would require this change. However, the change aligns this piece of code with the code that existed before the patch for MDEV-24823.
-
- 28 Apr, 2021 8 commits
-
-
Dmitry Shulga authored
An attempt to build the MariaDB server on MacOS could result in compilation errors like the following one:

In file included from server-10.2/storage/perfschema/cursor_by_account.cc:28:
In file included from server-10.2/include/my_global.h:287:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/usr/include/c++/v1/math.h:309:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/usr/include/c++/v1/type_traits:418:
server-10.2/version:1:1: error: expected unqualified-id
MYSQL_VERSION_MAJOR=10
^
server-10.2/build.dir/include/my_config.h:529:29: note: expanded from macro 'MYSQL_VERSION_MAJOR'

This kind of compiler error occurs because the compiler's system headers contain the directive '#include <version>' and the compiler is invoked with -I${CMAKE_SOURCE_DIR}. The MariaDB source code root directory contains the file VERSION, which is picked up by the compiler when processing the directive #include <version>, since file names on MacOS are case insensitive, so version and VERSION are treated as the same file name. To fix the issue, the source code root directory should be removed from the list of directories used by the compiler for the include search path.
-
Oleksandr Byelkin authored
The problem is that sharing a default expression among 'set' instructions leads to an attempt to access the result field of a function created in another instruction's runtime MEM_ROOT and already freed (a bit different from the MySQL problem). The fix is the same as in MySQL (but without the optimisation for constants): turn
  DECLARE a, b, c type DEFAULT expr;
into
  DECLARE a type DEFAULT expr, b type DEFAULT a, c type DEFAULT a;
-
Jan Lindström authored
Removed the explicit InnoDB monitor startup and instead used functions to print the current lock information.
-
Jan Lindström authored
The problem was that we should skip strict password validation on applier nodes, similarly to what is done for slave nodes.
-
Jan Lindström authored
Replace unnecessary sleeps with real wait_conditions to make sure the cluster sizes are correct.
-
Vladislav Vaintroub authored
Relax the assert condition. A locked table that existed prior to CREATE IF NOT EXISTS retains the MDL_NO_SHARED_READ_WRITE MDL lock.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
- 27 Apr, 2021 9 commits
-
-
Sergei Golubchik authored
Plugin variables in SET only locked the plugin until the end of the statement. If a SET with a plugin variable was prepared, it was possible to uninstall the plugin before EXECUTE. EXECUTE would then crash, trying to resolve a now-invalid pointer to the disappeared variable. Fix: keep plugins locked until the prepared statement is closed.
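A hypothetical C++ sketch of the ownership idea behind the fix (this is not the server's plugin API; all names are invented): the prepared statement keeps a reference to each plugin whose variable it uses, so the plugin cannot disappear until the statement is closed.
```
#include <memory>
#include <string>
#include <vector>

// Invented types for illustration only.
struct Plugin { std::string name; };

struct Prepared_statement
{
  // References held here keep the plugins alive for the lifetime of the
  // prepared statement, i.e. until it is deallocated/closed.
  std::vector<std::shared_ptr<Plugin>> locked_plugins;

  void lock_plugin(const std::shared_ptr<Plugin> &p)
  { locked_plugins.push_back(p); }
};
```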
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Encourage the use of mysql_secure_installation, which can always set the root password correctly for all root accounts, no matter how many there are and what the structure of the privilege tables is.
-
Thirunarayanan Balathandayuthapani authored
InnoDB startup hangs if a DDL transaction needs to be rolled back and a recovered transaction on the statistics tables exists. In that case, InnoDB should roll back the transaction which holds locks on innodb_table_stats or innodb_index_stats during trx_rollback_or_clean_recovered().
-
Thirunarayanan Balathandayuthapani authored
InnoDB fails to fetch the index type when the InnoDB dictionary doesn't match the .frm file. InnoDB should return 'corrupted' if it can't find the index in ha_innobase::index_type().
-
Nikita Malyavin authored
table->move_fields wasn't undone in case of error.
1. move_fields is now unconditionally undone even when an error occurs
2. cherry-pick an assertion in `ptr_in_record`, which is already in 10.5
-
Nikita Malyavin authored
The assertion is improved: storage engines like myisam always have to store at least one field, so the assertion does not cover tables with no stored columns.
-
Nikita Malyavin authored
We have a race condition between three threads, resulting in a deadlock backoff in purge, which is unexpected. More precisely, the following happens:
T1: NOCOPY ALTER TABLE begins and eventually holds an MDL_SHARED_NO_WRITE lock;
T2: FLUSH TABLES begins and sets share->tdc->flushed = true;
T3: purge on a record with a virtual column begins and is going to open a table, so an MDL_SHARED_READ lock is acquired. Since share->tdc->flushed is set, it waits for the TDC purge to end;
T1: is going to elevate its MDL lock to exclusive and therefore has to make the other waiters back off;
T3: receives VICTIM status, reports a DEADLOCK, and sets OT_BACKOFF_AND_RETRY in Open_table_context::m_action.
My fix is to allow opening the table in purge while flushing. The same is already done in other maintenance facilities like REPAIR TABLE. Another way would be to make an actual backoff, but Open_table_context does not allow distinguishing it from other failure types, which still seem to be unexpected. Doing that would require hacking into the Open_table_context interface for no benefit compared to passing MYSQL_OPEN_IGNORE_FLUSH during table open.
-
Nikita Malyavin authored
innodb_debug_sync was introduced in commit b393e2cb and reverted in commit fc58c172 due to a memory leak reported by valgrind (see MDEV-21336). The leak is now fixed by adding `rw_lock_free(&slot->debug_sync_lock)` after the background thread's working loop is finished, and the patch is reapplied, taking into account the c++98 fixes by Marko. The missing DEBUG_SYNC for MDEV-18546 in row0vers.cc is also reapplied.
-
- 26 Apr, 2021 1 commit
-
-
Daniel Black authored
Quoting MDEV reporter Daniel Lewart:

Starting MariaDB with the default configuration causes the following problems:
- "[Warning] Could not increase number of max_open_files to more than 16384 (request: 32186)"
- silently reduces table_open_cache_instances from 8 (default) to 4

Default server system variables:
  extra_max_connections = 1
  max_connections = 151
  table_open_cache = 2000
  table_open_cache_instances = 8
  thread_pool_size = 4

LimitNOFILE=16384 is in the following files:
  support-files/mariadb.service.in
  support-files/mariadb@.service.in

Looking at sql/mysqld.cc lines 3837-3917:
  wanted_files= (extra_files + max_connections + extra_max_connections +
                 tc_size * 2 * tc_instances);
  wanted_files+= threadpool_size;

Plugging in the default values (see the snippet after this message):
  wanted_files = (30 + 151 + 1 + 2000 * 2 * 8 + 4) = 32186

However, the systemd configuration has LimitNOFILE = 16384, which is far smaller. I suggest increasing LimitNOFILE to 32768.
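For reference, a small self-contained C++ snippet that reproduces the arithmetic quoted above with the default values (this is not the actual server code path):
```
#include <cstdio>

int main()
{
  // Default values quoted in the report above.
  unsigned extra_files = 30, max_connections = 151, extra_max_connections = 1;
  unsigned tc_size = 2000, tc_instances = 8, threadpool_size = 4;

  // Same formula as the quoted sql/mysqld.cc excerpt.
  unsigned wanted_files = extra_files + max_connections + extra_max_connections
                          + tc_size * 2 * tc_instances;
  wanted_files += threadpool_size;

  std::printf("wanted_files = %u\n", wanted_files);  // 32186 > LimitNOFILE=16384
  return 0;
}
```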
-
- 25 Apr, 2021 2 commits
-
-
Sergei Petrunia authored
(trivial backport to 10.2) Add a testcase
-
Sergei Petrunia authored
(trivial backport to 10.2) The optimizer removes redundant GROUP BY operations. If a GROUP BY element is a subselect, it is "eliminated". However, one must not eliminate the item if it is used both in the select list and in the GROUP BY, like so:
  select (select ... ) as SUBQ from ... group by SUBQ
Do not eliminate such items.
-
- 24 Apr, 2021 2 commits
-
-
Marko Mäkelä authored
It is possible that an object that was originally created by open_purge_table() will remain cached and reused for SQL execution. Our previous fix wrongly assumed that ha_innobase::open() would always be called before SQL execution starts. Therefore, we must invoke dict_stats_init() in ha_innobase::info_low() instead of only doing it in ha_innobase::open(). Note: Concurrent execution of dict_stats_init() on the same table is possible, but it also was possible between two calls to ha_innobase::open(), with no ill effects observed. This should fix the assertion failure on stat_initialized. A possibly easy way to reproduce it would have been to run the server with innodb_force_recovery=2 (disable the purge of history), update a table so that an indexed virtual column will be affected, and finally restart the server normally (purge enabled), to observe a crash when the table is accessed from SQL. The problem was first observed and this fix verified by Elena Stepanova. Also Thirunarayanan Balathandayuthapani repeated the problem.
-
Marko Mäkelä authored
row_sel_sec_rec_is_for_clust_rec(): If the field in the clustered index record is stored off-page, always fetch it, also when the secondary index field has been built on the entire column. This was broken ever since the InnoDB Plugin for MySQL Server 5.1 introduced ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPRESSED for InnoDB tables. That code was first introduced in this tree in commit 3945d5e5. For the original ROW_FORMAT=REDUNDANT and the MySQL 5.0.3 ROW_FORMAT=COMPACT, there was no problem, because for those tables we always stored at least a 768-byte prefix of each column in the clustered index record. row_sel_sec_rec_is_for_blob(): Allow prefix_len==0 for matching the full column.
-
- 23 Apr, 2021 2 commits
-
-
Igor Babaev authored
-
Aleksey Midenkov authored
Before the FRM is written, walk the vcol expressions through check_table_name_processor() and check whether the field items match the (db, table_name) qualifier. We cannot do this in check_vcol_func_processor(), as there are no table name qualifiers left in the expressions of a written and loaded FRM.
-