- 30 Apr, 2021 4 commits
-
-
Sujatha authored
Problem:
========
180511 11:07:58 [ERROR] Slave I/O: Unexpected master's heartbeat data: heartbeat is not compatible with local info; the event's data: log_file_name mysql-bin.000009 log_pos 1054262041, Error_code: 1623

Analysis:
=========
In a replication setup, when the master server doesn't have any events to send to the slave server it sends a 'Heartbeat_log_event'. This event carries the current binary log file name and offset. The offset value is stored within 4 bytes of the event header. When the size of the binary log exceeds UINT32_MAX, the log_pos value does not fit in 4 bytes of memory; it overflows and the slave stops with an error.

Fix:
===
Since we cannot extend the common_header of the Log_event class, a value of Log_event::log_pos greater than 4GB is transported in a HeartBeat event's sub-header. Log_event::log_pos is set to zero in that case to indicate that the 8-byte sub-header is allocated in the event.

In case of cross-version replication the following behaviour is expected (OLD = server without the fix, NEW = server with the fix):

OLD <-> NEW: works bidirectionally as long as the binlog offset is (normally) within 4GB.

When log_pos > UINT32_MAX:
OLD -> NEW: the 'log_pos' is bound to overflow and the NEW slave may report an invalid event / incompatible heartbeat event error.
NEW -> OLD: since the patched server sets log_pos=0 on overflow, the OLD slave will report an invalid event error.
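For orientation, the offset carried by the heartbeat is the master's current binary log position, which can be observed with the usual replication status commands; a minimal, hedged illustration (file names and values are whatever the server reports):

  -- on the master: current binary log file name and offset;
  -- this offset is what the heartbeat event transports
  SHOW MASTER STATUS;

  -- on the slave: Read_Master_Log_Pos shows the offset read from the
  -- master; it exceeds UINT32_MAX once the master's binlog grows past 4GB
  SHOW SLAVE STATUS\G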
-
Thirunarayanan Balathandayuthapani authored
- Fixing post-push failure of innodb_fts_misc_1 test case.
-
Thirunarayanan Balathandayuthapani authored
InnoDB tries to fetch the deleted doc ids for a discarded tablespace. In i_s_fts_deleted_generic_fill(), InnoDB needs to check whether the table is discarded or not before fetching the deleted doc ids.
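A hedged sketch of the scenario (database, table, and column names are invented): querying the FTS information_schema tables for a table whose tablespace has been discarded goes through i_s_fts_deleted_generic_fill().

  -- illustrative table with a fulltext index
  CREATE TABLE t1 (f1 INT NOT NULL PRIMARY KEY,
                   f2 TEXT, FULLTEXT(f2)) ENGINE=InnoDB;

  -- make t1 the target of the FTS information_schema views
  SET GLOBAL innodb_ft_aux_table = 'test/t1';

  ALTER TABLE t1 DISCARD TABLESPACE;

  -- i_s_fts_deleted_generic_fill() must notice the discarded tablespace
  -- instead of trying to fetch the deleted doc ids
  SELECT * FROM information_schema.INNODB_FT_DELETED;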
-
Marko Mäkelä authored
fil_ibd_load(): Remove a message that is basically saying that everything works as expected. The other "Ignoring data file" message about the presence of an extraneous file will be retained (and expected by the test innodb.log_file_name).
-
- 29 Apr, 2021 4 commits
-
-
Nikita Malyavin authored
This reverts commit 8880dff2.
-
Nikita Malyavin authored
-
Nikita Malyavin authored
-
Igor Babaev authored
This commit replaces the call of the function setup_tables() with a call of the function setup_tables_and_check_access() in the method Multiupdate_prelocking_strategy::handle_end(). There is no known bug that would require this change. However, the change aligns this piece of code with the code that existed before the patch for MDEV-24823.
-
- 28 Apr, 2021 8 commits
-
-
Dmitry Shulga authored
An attempt to build the MariaDB server on MacOS could result in compilation errors like the following one:

  In file included from server-10.2/storage/perfschema/cursor_by_account.cc:28:
  In file included from server-10.2/include/my_global.h:287:
  In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/usr/include/c++/v1/math.h:309:
  In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.3.sdk/usr/include/c++/v1/type_traits:418:
  server-10.2/version:1:1: error: expected unqualified-id
  MYSQL_VERSION_MAJOR=10
  ^
  server-10.2/build.dir/include/my_config.h:529:29: note: expanded from macro 'MYSQL_VERSION_MAJOR'

This kind of compiler error occurs because the compiler's system headers contain the directive '#include <version>' and the compiler is invoked with -I${CMAKE_SOURCE_DIR}. The MariaDB source code root directory contains the file VERSION, which is picked up by the compiler while processing the directive #include <version>, since file names on MacOS are case insensitive, so version and VERSION are treated as the same file name. To fix the issue, the source code root directory should be removed from the list of directories used by the compiler as the include search path.
-
Oleksandr Byelkin authored
The problem is that sharing a default expression among set instructions leads to an attempt to access the result field of a function that was created in another instruction's runtime MEM_ROOT and has already been freed (a bit different from the MySQL problem). The fix is the same as in MySQL (but without the optimisation for constants): turn DECLARE a, b, c type DEFAULT expr; into DECLARE a type DEFAULT expr, b type DEFAULT a, c type DEFAULT a;
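A hedged illustration of the affected construct (procedure name and default expression are invented): one DECLARE with several variables sharing a non-constant DEFAULT, which the fix internally rewrites into a chain of single-variable defaults.

  DELIMITER //
  CREATE PROCEDURE p1()
  BEGIN
    -- one shared default expression; internally treated as
    -- DECLARE a ... DEFAULT expr, b ... DEFAULT a, c ... DEFAULT a
    DECLARE a, b, c VARCHAR(64) DEFAULT CONCAT('id-', UUID());
    SELECT a, b, c;
  END//
  DELIMITER ;

  CALL p1();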
-
Jan Lindström authored
Removed explicit InnoDB monitor startup; instead, just use functions that print the current lock information.
-
Jan Lindström authored
The problem was that strict password validation should be skipped on applier nodes, similarly to what is done for slave nodes.
-
Jan Lindström authored
Replace unnecessary sleeps with real wait_conditions to make sure cluster sizes are correct.
-
Vladislav Vaintroub authored
Relax the assert condition. A locked table that existed prior to CREATE IF NOT EXISTS retains its MDL_NO_SHARED_READ_WRITE MDL lock.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
- 27 Apr, 2021 9 commits
-
-
Sergei Golubchik authored
Plugin variables in SET only locked the plugin till the end of the statement. If a SET with a plugin variable was prepared, it was possible to uninstall the plugin before EXECUTE. Then EXECUTE would crash, trying to resolve a now-invalid pointer to a variable that no longer existed. Fix: keep plugins locked until the prepared statement is closed.
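A hedged reproduction sketch (the EXAMPLE storage engine plugin and its example_ulong_var variable are used only as a stand-in for any dynamically loaded plugin that registers a system variable):

  INSTALL SONAME 'ha_example';
  PREPARE stmt FROM 'SET GLOBAL example_ulong_var = 100';
  -- before the fix the plugin was only locked for the duration of the
  -- statement, so UNINSTALL could succeed here and EXECUTE would then
  -- dereference a pointer to the removed variable
  UNINSTALL SONAME 'ha_example';
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;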
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Encourage the use of mysql_secure_installation, which can always set the root password correctly for all root accounts, no matter how many there are and what the structure of the privilege tables is.
-
Thirunarayanan Balathandayuthapani authored
InnoDB startup hangs if a DDL transaction needs to be rolled back and a recovered transaction on the statistics tables exists. In that case, InnoDB should roll back the transaction which holds locks on innodb_table_stats or innodb_index_stats during trx_rollback_or_clean_recovered().
-
Thirunarayanan Balathandayuthapani authored
InnoDB fails to fetch the index type when the InnoDB dictionary doesn't match the .frm file. InnoDB should return "corrupted" if it can't find the index in ha_innobase::index_type().
-
Nikita Malyavin authored
table->move_fields wasn't undone in case of error.
1. move_fields is unconditionally undone even when an error occurs
2. cherry-pick an assertion in `ptr_in_record`, which is already in 10.5
-
Nikita Malyavin authored
The assertion is improved: storage engines like myisam always have to store at least one field, so the assertion does not cover tables with no stored columns.
-
Nikita Malyavin authored
We have a race condition between three threads, resulting in a deadlock backoff in purge, which is unexpected. More precisely, the following happens:

T1: NOCOPY ALTER TABLE begins, and eventually it holds an MDL_SHARED_NO_WRITE lock;
T2: FLUSH TABLES begins. It sets share->tdc->flushed = true;
T3: purge on a record with a virtual column begins. It is going to open a table, so an MDL_SHARED_READ lock is acquired. Since share->tdc->flushed is set, it waits for the TDC purge to end;
T1: is going to elevate its MDL lock to exclusive and therefore has to make the other waiters back off;
T3: receives VICTIM status, reports a DEADLOCK, and sets OT_BACKOFF_AND_RETRY in Open_table_context::m_action.

My fix is to allow opening the table in purge while flushing. The same is already done in other maintenance facilities like REPAIR TABLE. Another way would be to make an actual backoff, but Open_table_context does not allow distinguishing it from other failure types, which still seem to be unexpected. That would require hacking into the Open_table_context interface for no benefit, compared to just passing MYSQL_OPEN_IGNORE_FLUSH during table open.
-
Nikita Malyavin authored
innodb_debug_sync was introduced in commit b393e2cb and reverted in commit fc58c172 due to memory leak reported by valgrind, see MDEV-21336. The leak is now fixed by adding `rw_lock_free(&slot->debug_sync_lock)` after background thread working loop is finished, and the patch is reapplied, with respect to c++98 fixes by Marko. The missing DEBUG_SYNC for MDEV-18546 in row0vers.cc is also reapplied.
-
- 26 Apr, 2021 1 commit
-
-
Daniel Black authored
Quoting MDEV reporter Daniel Lewart:

Starting MariaDB with the default configuration causes the following problems:
"[Warning] Could not increase number of max_open_files to more than 16384 (request: 32186)"
and silently reduces table_open_cache_instances from 8 (default) to 4.

Default Server System Variables:
extra_max_connections = 1
max_connections = 151
table_open_cache = 2000
table_open_cache_instances = 8
thread_pool_size = 4

LimitNOFILE=16384 is in the following files:
support-files/mariadb.service.in
support-files/mariadb@.service.in

Looking at sql/mysqld.cc lines 3837-3917:
wanted_files= (extra_files + max_connections + extra_max_connections + tc_size * 2 * tc_instances);
wanted_files+= threadpool_size;

Plugging in the default values:
wanted_files = (30 + 151 + 1 + 2000 * 2 * 8 + 4) = 32186

However, the systemd configuration has LimitNOFILE = 16384, which is far smaller. I suggest increasing LimitNOFILE to 32768.
-
- 25 Apr, 2021 2 commits
-
-
Sergei Petrunia authored
(trivial backport to 10.2) Add a testcase
-
Sergei Petrunia authored
(trivial backport to 10.2) The optimizer removes redundant GROUP BY operations. If a GROUP BY element is a subselect, it is "eliminated". However, one must not eliminate the item if it is used both in the select list and in the GROUP BY, like so:

  select (select ... ) as SUBQ from ... group by SUBQ

Do not eliminate such items.
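A concrete, hedged example of the pattern (table and data invented): the subselect appears both in the select list and in the GROUP BY clause, so it must not be eliminated.

  CREATE TABLE t1 (a INT, b INT);
  INSERT INTO t1 VALUES (1,10),(1,20),(2,30);

  -- SUBQ is used both as an output column and as the grouping item
  SELECT (SELECT MAX(b) FROM t1 t2 WHERE t2.a = t1.a) AS SUBQ,
         COUNT(*)
  FROM t1
  GROUP BY SUBQ;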
-
- 24 Apr, 2021 2 commits
-
-
Marko Mäkelä authored
It is possible that an object that was originally created by open_purge_table() will remain cached and reused for SQL execution. Our previous fix wrongly assumed that ha_innobase::open() would always be called before SQL execution starts. Therefore, we must invoke dict_stats_init() in ha_innobase::info_low() instead of only doing it in ha_innobase::open(). Note: Concurrent execution of dict_stats_init() on the same table is possible, but it also was possible between two calls to ha_innobase::open(), with no ill effects observed. This should fix the assertion failure on stat_initialized. A possibly easy way to reproduce it would have been to run the server with innodb_force_recovery=2 (disable the purge of history), update a table so that an indexed virtual column will be affected, and finally restart the server normally (purge enabled), to observe a crash when the table is accessed from SQL. The problem was first observed and this fix verified by Elena Stepanova. Also Thirunarayanan Balathandayuthapani repeated the problem.
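A hedged sketch of the reproduction scenario outlined above (schema invented; the server restarts happen outside SQL):

  -- table with an indexed virtual column
  CREATE TABLE t1 (a INT PRIMARY KEY,
                   b INT,
                   v INT AS (b + 1) VIRTUAL,
                   KEY(v)) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 1, DEFAULT);

  -- restart the server with --innodb-force-recovery=2 (purge of history
  -- disabled), then update so that the indexed virtual column is affected
  UPDATE t1 SET b = b + 10;

  -- restart the server normally (purge enabled); before the fix,
  -- accessing the table from SQL could hit the stat_initialized assertion
  SELECT * FROM t1;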
-
Marko Mäkelä authored
row_sel_sec_rec_is_for_clust_rec(): If the field in the clustered index record is stored off page, always fetch it, also when the secondary index field has been built on the entire column. This was broken ever since the InnoDB Plugin for MySQL Server 5.1 introduced ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPRESSED for InnoDB tables. That code was first introduced in this tree in commit 3945d5e5. For the original ROW_FORMAT=REDUNDANT and the MySQL 5.0.3 ROW_FORMAT=COMPACT there was no problem, because for those tables we always stored at least a 768-byte prefix of each column in the clustered index record.

row_sel_sec_rec_is_for_blob(): Allow prefix_len==0 for matching the full column.
-
- 23 Apr, 2021 4 commits
-
-
Igor Babaev authored
-
Aleksey Midenkov authored
Before the FRM is written, walk vcol expressions through check_table_name_processor() and check whether field items match the (db, table_name) qualifier. We cannot do this in check_vcol_func_processor() because there are no table name qualifiers left in the expressions of a written and loaded FRM.
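A hedged illustration of what the check is meant to catch (names invented): a generated-column expression whose field reference carries a qualifier that does not match the table being created.

  -- the field reference is qualified with a table name that does not
  -- match the table being created; the new pre-FRM check is meant to
  -- reject such expressions before the FRM is written
  CREATE TABLE t1 (a INT,
                   b INT AS (wrong_name.a) VIRTUAL);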
-
Aleksey Midenkov authored
Buffer overflow in ib_push_warning() fixed by using vsnprintf(). InnoDB parser was obsoleted by MDEV-16417. Thanks to Nikita Malyavin for review and suggestion.
-
Sergei Golubchik authored
It's Oracle's libmysqlclient license exception; we no longer include, build, or ship libmysqlclient.
-
- 22 Apr, 2021 4 commits
-
-
Igor Babaev authored
Before this patch, mergeable derived tables / views used in a multi-table update / delete were merged before the preparation stage.

When the merge of a derived table / view is performed, the ON expression attached to it is fixed and ANDed with the WHERE condition of the select S containing this derived table / view. This happens after the specification of the derived table / view has been merged into S. If the ON expression refers to a non-existent field, an error is reported and some other mergeable derived tables / views remain unmerged. That is not a problem if the multi-table update / delete statement is standalone. Yet if it is used in a stored procedure, the select with incompletely merged derived tables / views may cause a problem for the second call of the procedure. This does not happen for select queries using derived tables / views, because in that case their specifications are merged after the preparation stage, at which all ON expressions are fixed.

This patch makes sure that merging of the derived tables / views used in a multi-table update / delete statement is performed after the preparation stage.

Approved by Oleksandr Byelkin <sanja@mariadb.com>
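A hedged sketch of the kind of statement affected (schema and names invented): a multi-table UPDATE over a mergeable view inside a stored procedure, where the second CALL used to be the problematic one.

  CREATE TABLE t1 (id INT, x INT);
  CREATE TABLE t2 (id INT, y INT);
  CREATE VIEW v2 AS SELECT id, y FROM t2;   -- mergeable view

  DELIMITER //
  CREATE PROCEDURE p1()
    UPDATE t1 JOIN v2 ON t1.id = v2.id
    SET t1.x = v2.y//
  DELIMITER ;

  -- before the patch the merge happened before the preparation stage,
  -- which could leave the procedure's statement incompletely merged
  -- for the second invocation
  CALL p1();
  CALL p1();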
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
The fix is to change the message to be a [WARNING] for backup.
-
Vladislav Vaintroub authored
There is a new Yukon Standard Time Windows timezone. Also fix the powershell script that generates the Windows locale mapping: tell powershell to use TLSv1.2 to access github (for some reason powershell uses TLS 1.1 by default, and that does not work).
-
- 21 Apr, 2021 2 commits
-
-
Thirunarayanan Balathandayuthapani authored
InnoDB returns uninitialized statistics to the mysql interpreter when a background thread is opening the table, which leads to an assertion failure. In that case, InnoDB avoids sending InnoDB statistics information to the mysql interpreter.
-
Eugene Kosov authored
node->index was NULL. But it's possible to get dict_table_t* from another source.
-