- 26 May, 2021 2 commits
- 24 May, 2021 2 commits
-
-
Marko Mäkelä authored
-
Thirunarayanan Balathandayuthapani authored
Patch addresses the problem of a double free of the transaction when it is its own transaction.
-
- 23 May, 2021 2 commits
-
-
Thirunarayanan Balathandayuthapani authored
FTS add index fails

Problem:
========
InnoDB double frees the table if auxiliary FTS table creation fails, and it also fails to set the dict operation for the transaction. This leads to a failure while dropping the newly added index.

Solution:
=========
InnoDB should avoid the double free and should set the dictionary operation of the transaction in fts_create_common_tables().
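The operation affected is, roughly, adding a fulltext index to an InnoDB table. A hedged sketch follows (the table name is illustrative; the auxiliary FTS_* table creation failure itself is an internal condition, e.g. running out of space, and is not triggered by plain SQL):

    CREATE TABLE t1 (f TEXT) ENGINE=InnoDB;
    ALTER TABLE t1 ADD FULLTEXT INDEX ft_f (f);
    -- if creation of the auxiliary FTS_* tables fails mid-way, dropping the
    -- newly added index previously hit the double free described above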
-
Thirunarayanan Balathandayuthapani authored
InnoDB TRUNCATE TABLE fails to load the FTS stopword table into the cache. In that case, InnoDB double-frees the truncate creation transaction. InnoDB should free the transaction that was created inside ha_innobase::create.
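A hedged sketch of the kind of scenario involved (the stopword table name is made up; the failure to load it during TRUNCATE is what triggered the double free):

    -- a user stopword table that InnoDB will try to load during TRUNCATE
    CREATE TABLE test.my_stopwords (value VARCHAR(30)) ENGINE=InnoDB;
    SET GLOBAL innodb_ft_server_stopword_table = 'test/my_stopwords';

    CREATE TABLE t1 (f TEXT, FULLTEXT KEY (f)) ENGINE=InnoDB;
    TRUNCATE TABLE t1;  -- if loading the stopword table fails here, the transaction was freed twice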
-
- 22 May, 2021 7 commits
-
-
Julius Goryavsky authored
The is_local_ip function used in the Galera SST scripts incorrectly identifies IP addresses falling under the 127.0.0.0/8 netmask as non-local, although they certainly belong to the loopback interface. This commit fixes that flaw.
-
Sergei Golubchik authored
don't require tar/gtar, git, getconf, groff/nroff, and ruby.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
when cmake is re-run and include(FindJAVA) is skipped, JAVA_FOUND should still be set. Same for JNI.
-
Sergei Golubchik authored
Report correct error codes in ed25519. An invalid value stored in the user table, or an OpenSSL error, is CR_ERROR. When a user provides an incorrect password when logging in, it is CR_AUTH_USER_CREDENTIALS.
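To illustrate the distinction described above, a hedged example (it assumes the ed25519 plugin is installed and uses the MariaDB 10.4+ PASSWORD() form in USING; the user name is made up):

    INSTALL SONAME 'auth_ed25519';
    CREATE USER 'u1'@'localhost' IDENTIFIED VIA ed25519 USING PASSWORD('correct_pwd');

    -- logging in with a wrong password should be reported as invalid credentials
    -- (the CR_AUTH_USER_CREDENTIALS path), while a corrupted or invalid stored
    -- hash would surface as a generic error (the CR_ERROR path)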
-
Sergei Golubchik authored
-
- 21 May, 2021 5 commits
-
-
Igor Babaev authored
In the code that existed just before this patch, binding of a table reference to the specification of the corresponding CTE happens in the function open_and_process_table(). If the table reference is not the first one in the query, the specification is cloned in the same way as the specification of a view is cloned for any reference to the view. This works fine for standalone queries, but it does not work for stored procedures/functions, for the following reason.

When the first call of a stored procedure/function SP is processed, the body of SP is parsed. When a query of SP is parsed, the info on each encountered table reference is put into a TABLE_LIST object linked into a global chain associated with the query. When parsing of the query is finished, the basic info on the table references from this chain, except table references to derived tables and information schema tables, is put into one hash table associated with SP. When parsing of the body of SP is finished, this hash table is used to construct TABLE_LIST objects for all table references mentioned in SP and to link them into the list of such objects passed to a pre-locking process that calls open_and_process_table() for each table from the list. When a TABLE_LIST for a view is encountered, the view is opened and its specification is parsed. For any table reference occurring in the specification, a new TABLE_LIST object is created to be included into the list for pre-locking. After all objects in the pre-locking list have been looked through, the tables mentioned in the list are locked. Note that the objects referencing CTEs are just skipped here, as it is impossible to resolve these references without any info on the context where they occur.

Now the statements from the body of SP are executed one by one. At the very beginning of the execution of a query, the tables used in the query are opened, and open_and_process_table() is called for each table reference mentioned in the list of TABLE_LIST objects associated with the query that was built when the query was parsed. For each table reference, the reference is first checked against the CTE definitions in whose scope it occurred. If such a definition is found, the reference is considered resolved, and if this is not the first reference to the found CTE, the specification of the CTE is re-parsed and the result of the parsing is added to the parse tree of the query as a sub-tree. If this sub-tree contains table references to other tables, they are added to the list of TABLE_LIST objects associated with the query so that the referenced tables can be opened. When the procedure that opens the tables comes to the TABLE_LIST object created for a non-first reference to a CTE, it discovers that the referenced table instance is not locked and reports an error.

Thus processing non-first table references to a CTE similarly to how references to a view are processed does not work for queries used in stored procedures/functions. The main problem is that the current pre-locking mechanism employed for stored procedures/functions does not allow saving the context in which a CTE reference occurs, while the resolution of the table reference cannot be done without this context, and consequently the specification for the table reference cannot be determined.

This patch solves the above problem by moving resolution of all CTE references to the parsing stage. More exactly, references to CTEs occurring in a query are resolved right after parsing of the query has finished. After resolution, any CTE reference is marked as a reference to a derived table, so it is excluded from the hash table created for pre-locking the used base tables and views when the first call of a stored procedure/function is processed. This solution required recursive calls of the parser. The function THD::sql_parser() has been added specifically for recursive invocations of the parser.
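A hedged sketch of the pattern this fixes: a CTE referenced more than once from a statement inside a stored routine (all object names here are illustrative, not taken from the patch):

    CREATE TABLE t1 (a INT);

    DELIMITER $$
    CREATE PROCEDURE p1()
    BEGIN
      WITH cte AS (SELECT a FROM t1)
      SELECT * FROM cte AS r1 JOIN cte AS r2 USING (a);  -- second reference to the CTE
    END$$
    DELIMITER ;

    CALL p1();
    CALL p1();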
-
Marko Mäkelä authored
ha_innobase::open(): If the table is only being opened by purge for evaluating virtual column values, avoid invoking initialize_auto_increment(), because the purge thread may already be holding a shared latch on the clustered index root page. Shared latches are not recursive. The additional request would lead to a hang if another thread has started waiting for an exclusive latch.
-
Sergei Petrunia authored
MDEV-22462: Item_in_subselect::create_single_in_to_exists_cond(JOIN *, Item **, Item **): Assertion `false' failed.

Item_in_subselect::create_single_in_to_exists_cond() should handle the case where the subquery is a table-less select but is not the result of a UNION. (Table-less subqueries like "(SELECT 1)" are "substituted" with their select list, but table-less subqueries with a WHERE or HAVING clause, like "(SELECT 1 WHERE ...)", are not substituted. They are handled via the regular execution path.)
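A hedged sketch of the query shape involved (table and column names are made up):

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1),(2);

    -- table-less subquery with a WHERE clause: not substituted with its select list
    SELECT * FROM t1 WHERE a IN (SELECT 1 WHERE 1 = 0);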
-
Julius Goryavsky authored
Another batch of changes that should make the SST process more reliable in all scenarios:

 1) Added hostname or CN verification when stunnel is used with certificate chain verification (verifyChain = yes);
 2) Added a check for the absence of the stunnel utility for mtr tests;
 3) Deletion of working files before and after SST is done more accurately;
 4) rsync on the joiner can be run even if the path to its configuration file contains spaces;
 5) More accurate directory creation (for data files and for logs);
 6) IST with mysqldump no longer turns off statement logging;
 7) Reset the password for mysqldump when the password is empty but the username is specified;
 8) More reliable quoting when generating statements in wsrep_sst_mysqldump;
 9) Added explicit generation of 2048-bit Diffie-Hellman parameters for socat < 1.7.3, by analogy with xtrabackup;
10) Compression parameters for qpress are read from all suitable server groups in the configuration file, as well as from the [sst] and [xtrabackup] groups;
11) Added a test that checks compression using qpress;
12) Checking for optional utilities is modified to work even if they are implemented as built-in shell commands (unlikely on real systems, but more reliable).
-
Julius Goryavsky authored
Another batch of changes that should make the SST process more reliable in all scenarios:

 1) Added hostname or CN verification when stunnel is used with certificate chain verification (verifyChain = yes);
 2) Added a check for the absence of the stunnel utility for mtr tests;
 3) Deletion of working files before and after SST is done more accurately;
 4) rsync on the joiner can be run even if the path to its configuration file contains spaces;
 5) More accurate directory creation (for data files and for logs);
 6) IST with mysqldump no longer turns off statement logging;
 7) Reset the password for mysqldump when the password is empty but the username is specified;
 8) More reliable quoting when generating statements in wsrep_sst_mysqldump;
 9) Added explicit generation of 2048-bit Diffie-Hellman parameters for socat < 1.7.3, by analogy with xtrabackup;
10) Compression parameters for qpress are read from all suitable server groups in the configuration file, as well as from the [sst] and [xtrabackup] groups;
11) Added a test that checks compression using qpress;
12) Checking for optional utilities is modified to work even if they are implemented as built-in shell commands (unlikely on real systems, but more reliable).
-
- 20 May, 2021 1 commit
-
-
Rucha Deodhar authored
m_status == DA_OK_BULK' failed in Diagnostics_area::message from get_schema_tables_record

Analysis: SET NAMES changes the character set for character_set_client, character_set_connection and character_set_results to 'filename'. The .frm file of the view has @xx sequences in the SELECT query, which cause a parsing error because the 'filename' character set is not parser friendly. When we get the parsing error (ER_PARSE_ERROR), we directly return true without setting the error status. This is caught later by the assertion.

Fix: Disallow the 'filename' character set in SET NAMES because it is not parser friendly.
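After this fix, the statement below is expected to be rejected rather than silently switching the session to a parser-unfriendly character set (the exact error returned is an assumption, not stated in the commit message):

    SET NAMES 'filename';  -- expected to fail with an error instead of being accepted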
-
- 19 May, 2021 3 commits
-
-
Daniel Black authored
before change test:

    strace -fe trace=file -o /tmp/f.strace sql/mysqld --datadir=/tmp/d --log-bin=foo-bin --help --verbose && ls -la /tmp/

... 'mysqladmin variables' instead of 'mysqld --verbose --help'.

    total 0
    drwxrwxr-x.  2 dan  dan   60 May 19 18:05 .
    drwxrwxrwt. 27 root root 640 May 19 18:03 ..
    -rw-rw----.  1 dan  dan    0 May 19 18:05 foo-bin.index
-
Sergei Petrunia authored
In Item_field::fix_fields(): when the item was resolved to an Item_field in the SELECT's select_list, copy the Item_field's "depended_from" field. Failure to do so caused the item to have incorrect attributes: it pointed to a Field in an upper select but used_tables() didn't return OUTER_REF_TABLE_BIT.
-
Sujatha authored
Post-push fix to address a test issue.
-
- 18 May, 2021 3 commits
-
-
Ramesh Sivaraman authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
ha_innobase::index_read(): If an autocommit non-locking transaction was already started, refuse to access a SPATIAL INDEX. Once a non-locking autocommit transaction has started, it must remain in that mode (not acquire any locks). This should fix one cause of the assertion failure that would occur in DeadlockChecker::check_and_resolve() under heavy load, presumably due to concurrent execution of trx_commit_in_memory().
-
- 17 May, 2021 6 commits
-
-
Julius Goryavsky authored
-
Julius Goryavsky authored
-
Julius Goryavsky authored
1) This commit implements reading all sections from configuration files while looking for the current value of any server variable; previously these values were only read from the [mysqld.suffix] group and from [mysqld], but not from other groups such as [mariadb.suffix], [mariadb] or, for example, [server].

2) This commit also fixes misrecognition of some parameters when parsing a command line containing the special marker for the end of the list of options ("--") or when short option names (such as "-s", "-a" and "-h arg") are chained together (like "-sah arg"). Such parameters can be passed to the SST script in the list of arguments after "--mysqld-args" if the server is started with a complex set of options; this was revealed during manual testing of the changes to read configuration files.

3) The server-side preparation code for the "--mysqld-args" option list has also been simplified to make it easier to change in the future (if needed), and has been improved to properly handle the special backquote ("`") character in the argument values.
-
Julius Goryavsky authored
-
Sujatha authored
Problem:
========
Aborting OPTIMIZE TABLE still logs the statement in the binary log and replicates it to the slave server.

An "OPTIMIZE TABLE" command under execution is killed using "Ctrl-C", as shown below:

    MariaDB [test]> optimize table t2;
    ^CCtrl-C -- query killed. Continuing normally.

In spite of the query execution being interrupted, the query gets written to the binary log.

Analysis:
========
The admin command execution logic does not handle the KILL command; hence it ignores the KILL and completes its execution.

Fix:
===
Check for the thread-killed notification during admin command execution and handle it. If the thread kill occurs prior to any table modification, the query will not be written to the binary log. If the kill happens after at least one table has been modified, the query will be written to the binary log. For example, if the command in execution is 'OPTIMIZE TABLE t1,t2' and the thread kill happens after table t1 has been modified, then 'OPTIMIZE TABLE t1,t2' will be written to the binary log, as admin commands do not make the slave diverge from the master.
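A rough way to observe the behavior described above, using KILL QUERY from a second session instead of Ctrl-C (the connection id is a placeholder, and the table names are illustrative):

    -- session 1
    OPTIMIZE TABLE t1, t2;

    -- session 2, while session 1 is still running
    KILL QUERY <session_1_connection_id>;

    -- afterwards, check whether the statement reached the binary log
    SHOW BINLOG EVENTS;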
-
Sujatha authored
Problem:
=======
In the slave_parallel_mode=optimistic configuration, when admin commands and a DML operation on the same table are scheduled simultaneously for execution, the result is a lock conflict, and the slave server either hangs due to a deadlock or goes down with an assert.

Analysis:
========
The admin commands OPTIMIZE, REPAIR and ANALYZE are written to the binary log as ordinary transactions. When 'slave_parallel_mode' is 'optimistic', DMLs are allowed to run in parallel. But these locks are not detected by the parallel replication deadlock detection-and-handling mechanism. At times they result in a deadlock or an assertion.

Fix:
===
Flag admin commands as DDL in Gtid_log_event at the time of writing to the binary log. Add a new bit EXECUTED_TABLE_ADMIN_CMD to 'm_unsafe_rollback_flags'. The 'mysql_admin_table' command execution accepts a list of tables to be processed and executes them in a loop. Upon successful execution, enable the 'EXECUTED_TABLE_ADMIN_CMD' bit in thd->transaction.stmt_unsafe_rollback_flags. The Gtid_log_event constructor will notice this flag and mark the current transaction with the 'FL_DDL' flag. Gtid_log_events marked as FL_DDL will not be scheduled for parallel execution on the slave; they will execute in isolation to prevent deadlocks. Note: removed the call to 'trans_commit_implicit' from the 'mysql_admin_table' function, as 'mysql_execute_command' will take care of invoking 'trans_commit_implicit'.
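A hedged sketch of the replication setup under which the conflict described above could arise (the table name is illustrative; changing these variables requires the slave SQL thread to be stopped):

    -- on the slave
    STOP SLAVE;
    SET GLOBAL slave_parallel_mode = 'optimistic';
    SET GLOBAL slave_parallel_threads = 4;
    START SLAVE;

    -- on the master: an admin command and DML on the same table,
    -- which may be scheduled in parallel on the slave
    UPDATE t1 SET a = a + 1;
    OPTIMIZE TABLE t1;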
-
- 16 May, 2021 1 commit
-
-
Daniel Black authored
No longer a MySQL server, so "his" is the wrong pronoun for the server. Thanks to Michael Newton for highlighting these problems. Also changed slave -> replica.
-
- 15 May, 2021 2 commits
-
-
Julius Goryavsky authored
1) This commit implements reading all sections from configuration files while looking for the current value of any server variable; previously these values were only read from the [mysqld.suffix] group and from [mysqld], but not from other groups such as [mariadb.suffix], [mariadb] or, for example, [server].

2) This commit also fixes misrecognition of some parameters when parsing a command line containing the special marker for the end of the list of options ("--") or when short option names (such as "-s", "-a" and "-h arg") are chained together (like "-sah arg"). Such parameters can be passed to the SST script in the list of arguments after "--mysqld-args" if the server is started with a complex set of options; this was revealed during manual testing of the changes to read configuration files.

3) The server-side preparation code for the "--mysqld-args" option list has also been simplified to make it easier to change in the future (if needed), and has been improved to properly handle the special backquote ("`") character in the argument values.
-
Julius Goryavsky authored
-
- 14 May, 2021 4 commits
-
-
Igor Babaev authored
If a select query contained an ORDER BY clause that followed a LIMIT clause, or an ORDER BY clause, or ORDER BY with LIMIT, the EXPLAIN output for the query showed an execution plan different from the one that was actually executed.

Approved by Roman Nozdrin <roman.nozdrin@mariadb.com>
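A hedged sketch of one query shape this refers to, using a parenthesized select followed by a trailing ORDER BY (table and column names are made up):

    CREATE TABLE t1 (a INT, b INT);

    (SELECT a, b FROM t1 LIMIT 10) ORDER BY b;
    EXPLAIN (SELECT a, b FROM t1 LIMIT 10) ORDER BY b;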
-
Sachin Kumar authored
Problem: When the slave is shut down, we get this assertion failure:

    sql/sql_list.h:642: void ilink::assert_linked(): Assertion `prev != 0 && next != 0' failed.

Solution: In close_connections(), when we call threads.get(), it resets prev and next to NULL. In parallel, the worker thread (handle_rpl_parallel_thread) calls unlink_not_visible_thd(), which asserts that prev and next are not NULL. unlink_not_visible_thd() should always be called before threads.get() is called. To make sure the worker calls unlink_not_visible_thd(), in slave_prepare_for_shutdown() we deactivate the worker thread pool, which in turn closes all worker threads. Since this is already done in 10.4 and 10.5, this backports MDEV-20821 and MDEV-22370 to 10.2. MDEV-22370 improves the MDEV-20821 patch.
-
Sachin Kumar authored
MDEV-22370 safe_mutex: Trying to lock uninitialized mutex at /data/src/10.4-bug/sql/rpl_parallel.cc, line 470 upon shutdown during FTWRL

Problem: When we issue FTWRL and shutdown in parallel, there is a race between FTWRL and shutdown. Shutdown might destroy the mutex (pool->LOCK_rpl_thread_pool) before FTWRL can lock it, so we can get a crash in the FTWRL thread.

Solution: mysql_mutex_destroy(pool->LOCK_rpl_thread_pool) should wait for the FTWRL thread to complete its work and only then destroy the mutex. So slave_prepare_for_shutdown() will just deactivate the pool, and the mutex is destroyed later in end_slave().
-
Andrei Elkin authored
Parallel slave server shutdown was found to hang in close_connections(), triggered by shutdown, because a slave worker thread would not be notified to exit when the worker was sitting idle. Fixed by destroying the worker pool earlier, that is, in slave_prepare_for_shutdown(), when all their driver threads have already left. A test file is added to simulate the bug condition as well as to check the multi-sourced and not-idle worker cases.
-
- 11 May, 2021 2 commits
-
-
Sergei Golubchik authored
-
Ramesh Sivaraman authored
-