- 01 Jun, 2021 7 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Krunal Bauskar authored
Debian has support for 3 different ARM machine types, and the one that is failing is probably a pretty old version that doesn't support the said instruction. So we can make it specific to _aarch64_ (as per the official ARM reference manual, ISB should be defined by all ARMv8 processors). [As per the ARMv7 specs, even 32-bit processors should support it, but it is not clear which exact version Debian has under armel.] In either case, the said performance issue mainly has an impact on high-end processors with extreme parallelism, so it is safe to limit it to _aarch64_. Corrects: MDEV-24630
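A minimal sketch of the shape of such a guard, assuming a spin-loop pause helper (the name cpu_relax and the non-ARM branches are illustrative only, not the actual MariaDB code; just the __aarch64__ guard and the ISB instruction come from the description above):

    static inline void cpu_relax()
    {
    #if defined(__aarch64__)
      /* ISB flushes the pipeline and acts as a cheap "pause" on high-end,
         highly parallel ARMv8 cores (MDEV-24630); 32-bit ARM targets such
         as Debian armel are deliberately left out. */
      __asm__ __volatile__("isb" ::: "memory");
    #elif defined(__x86_64__) || defined(__i386__)
      __asm__ __volatile__("pause");
    #else
      /* Plain compiler barrier elsewhere. */
      __asm__ __volatile__("" ::: "memory");
    #endif
    }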
-
Daniel Black authored
Though these will all get cast to unsigned long long where they are populated into the perfschema's BIGINT type. Use uintptr_t for NetBSD per Nia Alarie's original #1836.
-
Daniel Black authored
-
nia authored
This is a fix for operating systems that have pthread_t defined as a pointer and use the default pthread_self() mechanism for identifying threads. More specifically, this is a build fix for NetBSD. Any changes I submit are freely available under the new BSD license. Signed-off-by: Nia Alarie <nia@NetBSD.org>
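A minimal sketch of why the uintptr_t detour matters, assuming the situation described above (the helper name is hypothetical; this is not the actual patch): on NetBSD pthread_t is a pointer type, so it cannot be converted to an integer directly, but a cast through uintptr_t is well-defined and the result can then be widened to the unsigned long long that perfschema's BIGINT columns expect.

    #include <pthread.h>
    #include <cstdint>

    static unsigned long long thread_id_as_ull()
    {
    #if defined(__NetBSD__)
      /* pthread_t is an opaque struct pointer here: cast via uintptr_t. */
      return static_cast<unsigned long long>(
          reinterpret_cast<uintptr_t>(pthread_self()));
    #else
      /* On platforms where pthread_t is already an integral type. */
      return static_cast<unsigned long long>(pthread_self());
    #endif
    }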
-
- 31 May, 2021 5 commits
-
-
Julius Goryavsky authored
This commit reduces the likelihood of getting a busy port on quick restarts with rsync SST (problem MDEV-25818) and fixes a number of other flaws in SST scripts, adds new functionality, and also synchronizes the xtrabackup-v2 script with the mariabackup script (the latter applies only to the 10.2 branch):
1) SST via rsync: rsync and stunnel do not always get enough time to complete by correctly handling SIGTERM. These utilities are now given more time to complete normally (via normal SIGTERM processing) before we move on to using "kill -9" (a sketch of this pattern follows the list);
2) SST via rsync: attempts to terminate an rsync or stunnel process (via the "kill" utility) are only made if it did not terminate on its own;
3) SST via rsync: if a combination of stunnel and rsync is used, then we need to wait for both utilities to finish or stop, not just one of them;
4) The config file and pid file for stunnel are now deleted after successful completion of SST on the donor node;
5) The configs and pid files from rsync and stunnel should not be deleted unless these utilities succeed (or are successfully terminated) on the joiner node;
6) The configs and pid files are now excluded from transfer via rsync;
7) Spaces in paths are now valid for config files as well (when used with SST via rsync or mariabackup / xtrabackup[-v2]);
8) SST via mariabackup: added preliminary verification of keys and certificates that are used when establishing a connection using SSL (to avoid long timeouts and improve diagnostics), by analogy with how it is done for xtrabackup-v2 (plus a check for the CA file); this check is skipped if the user does not have openssl installed (or does not have the diff utility);
9) Added the backup-threads=<n> configuration option, which adds "--parallel=<n>" for mariabackup / xtrabackup at the backup and move-back stages;
10) Added the encrypt-threads and encrypt-chunk-size configuration options for xbcrypt management (when xbcrypt is used);
11) Small optimization: checking the socat version and adding a file with parameters for 2048-bit Diffie-Hellman (if necessary) is done only if the user has not specified "dhparam=" in the "sockopt" option value;
12) SST via rsync now supports the "backup-threads" configuration option (in server-related sections or in "[sst]");
13) Determining the number of available processors is now supported for FreeBSD + mariabackup/xtrabackup: before that we might have had problems with "--compact" (rebuild indexes) or qpress on FreeBSD;
14) The check_pid() function should not raise an error state in the rare cases when the pid file was created but is empty, or if it is deleted right during the check, or when zero is read from the pid file;
15) Improved the templates that are used to check whether a requested socket is "listening" when using the ss utility;
16) Shortened some other templates for socket state utilities;
17) Temporary files created by mariabackup / xtrabackup are moved to a separate subdirectory inside tmpdir (so they don't get mixed with other temporary files, which can make debugging more difficult);
18) 10.2 only: the script for SST via xtrabackup-v2 has been brought into full compliance with all the bugfixes made for mariabackup (as it previously contained many flaws compared to the updated script for mariabackup).
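The actual changes live in the shell SST scripts; the following is only a hedged illustration of the termination pattern from points 1-3, written as a small C++ helper (the name stop_process and the grace period are hypothetical): send SIGTERM, give the process time to exit on its own, and only then fall back to the equivalent of "kill -9".

    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    // Returns true if the process exited on its own (or was already gone),
    // false if SIGKILL had to be used as a last resort.
    static bool stop_process(pid_t pid, unsigned grace_seconds)
    {
      if (kill(pid, SIGTERM) != 0)
        return true;                       // already gone
      for (unsigned i = 0; i < grace_seconds * 10; i++)
      {
        if (kill(pid, 0) != 0)
          return true;                     // terminated normally
        usleep(100 * 1000);                // poll every 100 ms
      }
      kill(pid, SIGKILL);                  // the "kill -9" fallback
      return false;
    }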
-
Marko Mäkelä authored
page_apply_insert_redundant(): Correct a condition that would occasionally fail when recovering changes for the change buffer tree (where extra_size and data_size can vary wildly). This was broken in commit 138cbec5 (MDEV-21724).
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Vladislav Vaintroub authored
-
- 30 May, 2021 1 commit
-
-
Dmitry Shulga authored
MDEV-25576: The statement EXPLAIN running as a regular statement and as a prepared statement produces different results for UPDATE with subquery

Both the EXPLAIN and EXPLAIN EXTENDED statements produce different result sets for UPDATE/DELETE statements with a subquery, depending on whether they are run directly or in PS mode. The use case below reproduces the issue:

MariaDB [test]> CREATE TABLE t1 (c1 INT KEY) ENGINE=MyISAM;
Query OK, 0 rows affected (0,128 sec)

MariaDB [test]> CREATE TABLE t2 (c2 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,023 sec)

MariaDB [test]> CREATE TABLE t3 (c3 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,021 sec)

MariaDB [test]> EXPLAIN EXTENDED UPDATE t3 SET c3 =
    -> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
    -> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
    -> ON a12.c1 = a11.c1 ) d1 );
+------+-------------+-------+------+---------------+------+---------+------+------+----------+------------------------------------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                                                |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+------------------------------------------------------+
|    1 | PRIMARY     | t3    | ALL  | NULL          | NULL | NULL    | NULL | 0    |   100.00 |                                                      |
|    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL |     NULL | Impossible WHERE noticed after reading const tables  |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+------------------------------------------------------+
2 rows in set (0,002 sec)

MariaDB [test]> PREPARE stmt FROM
    -> EXPLAIN EXTENDED UPDATE t3 SET c3 =
    -> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
    -> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
    -> ON a12.c1 = a11.c1 ) d1 );
Query OK, 0 rows affected (0,000 sec)
Statement prepared

MariaDB [test]> EXECUTE stmt;
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                          |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
|    1 | PRIMARY     | t3    | ALL  | NULL          | NULL | NULL    | NULL | 0    |   100.00 |                                |
|    2 | SUBQUERY    | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL |     NULL | no matching row in const table |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,000 sec)

The reason why different result sets are produced is that on execution of the statement 'EXECUTE stmt' the flag SELECT_DESCRIBE is not set in the data member SELECT_LEX::options for the instances of SELECT_LEX that correspond to subqueries used in the UPDATE/DELETE statements. Initially these flags were set on parsing the statement PREPARE stmt FROM "EXPLAIN EXTENDED UPDATE t3 SET ...", but later they were reset before starting the real execution of the parsed query while handling the statement 'EXECUTE stmt'. So, to fix the issue, the functions mysql_update()/mysql_delete() have been modified to set the flag SELECT_DESCRIBE forcibly in the data member SELECT_LEX::options for the primary SELECT_LEX of the UPDATE/DELETE statement.
-
- 29 May, 2021 1 commit
-
-
Vladislav Vaintroub authored
Fix regression (debug assertion or division by 0) caused by cfd3d70c
-
- 28 May, 2021 1 commit
-
-
Otto Kekäläinen authored
Synced from downstream Debian: https://salsa.debian.org/mariadb-team/mariadb-10.5/-/commit/7015e8e4b5ed3a7727a8e442039fceb2c5b1a6b9
-
- 27 May, 2021 6 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
to print all arguments of _verbose(), not just the number of them
-
Sergei Golubchik authored
If the server isn't started and InnoDB has initiated a shutdown, don't wait for the server to start; it won't.
-
Robert Bindar authored
This reverts commit 17106c98.
-
Thirunarayanan Balathandayuthapani authored
InnoDB should calculate the MBR for the first field of a spatial index and compare it with the clustered index field's MBR. Due to the MDEV-25459 refactoring, InnoDB calculates the length of the first field and fails with a 'too long column' error.
-
Marko Mäkelä authored
The only call of the virtual member function handler::update_table_comment() was removed in commit 82d28fad (MySQL 5.5.53) but the implementation was not removed. The only non-trivial implementation was for InnoDB. The information is now returned via handler::get_foreign_key_create_info() and ha_statistics::delete_length.
-
- 26 May, 2021 13 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
and replace it with an equally unsightly %ifdef/%endif hack. Also, support %else; it's nice.
-
Monty authored
-
Monty authored
The problem was that when LOCK TABLES was unwound as part of a killed connection, unlink_all_closed_tables() did not like that there were uncommitted transactions. Fixed by rolling back any open transaction in this particular case.
-
Robert Bindar authored
-
Jan Lindström authored
-
Jan Lindström authored
-
Marko Mäkelä authored
-
Jan Lindström authored
Add wait_condition to wait until streaming log is empty.
-
Igor Babaev authored
In the code that existed just before this patch, binding of a table reference to the specification of the corresponding CTE happens in the function open_and_process_table(). If the table reference is not the first in the query, the specification is cloned in the same way as the specification of a view is cloned for any reference to the view. This works fine for standalone queries, but does not work for stored procedures / functions, for the following reason.

When the first call of a stored procedure / function SP is processed, the body of SP is parsed. When a query of SP is parsed, the info on each encountered table reference is put into a TABLE_LIST object linked into a global chain associated with the query. When parsing of the query is finished, the basic info on the table references from this chain, except table references to derived tables and information schema tables, is put in one hash table associated with SP. When parsing of the body of SP is finished, this hash table is used to construct TABLE_LIST objects for all table references mentioned in SP and link them into the list of such objects passed to a pre-locking process that calls open_and_process_table() for each table from the list. When a TABLE_LIST for a view is encountered, the view is opened and its specification is parsed. For any table reference occurring in the specification, a new TABLE_LIST object is created to be included into the list for pre-locking. After all objects in the pre-locking list have been looked through, the tables mentioned in the list are locked. Note that the objects referencing CTEs are just skipped here, as it is impossible to resolve these references without any info on the context where they occur.

Now the statements from the body of SP are executed one by one. At the very beginning of the execution of a query, the tables used in the query are opened, and open_and_process_table() is now called for each table reference mentioned in the list of TABLE_LIST objects associated with the query that was built when the query was parsed. For each table reference, first the reference is checked against the CTE definitions in whose scope it occurred. If such a definition is found, the reference is considered resolved, and if this is not the first reference to the found CTE, the specification of the CTE is re-parsed and the result of the parsing is added to the parse tree of the query as a sub-tree. If this sub-tree contains table references to other tables, they are added to the list of TABLE_LIST objects associated with the query so that the referenced tables can be opened. When the procedure that opens the tables comes to the TABLE_LIST object created for a non-first reference to a CTE, it discovers that the referenced table instance is not locked and reports an error.

Thus, processing non-first table references to a CTE in a way similar to how references to a view are processed does not work for queries used in stored procedures / functions. And the main problem is that the current pre-locking mechanism employed for stored procedures / functions does not allow saving the context in which a CTE reference occurs. It's not trivial to save the info about the context where a CTE reference occurs, while the resolution of the table reference cannot be done without this context, and consequently the specification for the table reference cannot be determined.

This patch solves the above problem by moving the resolution of all CTE references to the parsing stage. More exactly, references to CTEs occurring in a query are resolved right after parsing of the query has finished. After resolution, any CTE reference is marked as a reference to a derived table, so it is excluded from the hash table created for pre-locking the used base tables and views when the first call of a stored procedure / function is processed. This solution required recursive calls of the parser. The function THD::sql_parser() has been added specifically for recursive invocations of the parser.

# Conflicts:
#   sql/sql_cte.cc
#   sql/sql_cte.h
#   sql/sql_lex.cc
#   sql/sql_lex.h
#   sql/sql_view.cc
#   sql/sql_yacc.yy
#   sql/sql_yacc_ora.yy
-
sjaakola authored
The underlying problem with MDEV-25551 turned out to be that transactions having changes for tables with no primary key were not safe to apply in parallel. This is due to excessive locking on the InnoDB side, where even unrelated row modifications could end up in a lock conflict during applying. The fix for MDEV-25551 disables parallel applying for tables with no PK. This fix depends on a change in wsrep-lib, where a separate PR allows the application to modify transaction flags in wsrep-lib. This commit also adds a separate mtr test for verifying that transactions modifying a table with no primary key will not be applied in parallel. This test is a modified version of the initial test created by Gabor Orosz, the reporter of MDEV-25551. Another mtr test was added in the galera_sr suite, for testing whether modifying tables with no primary key would cause issues for streaming replication use cases. Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
-
Brad Smith authored
closes #1828
-
nia authored
In NetBSD 9.x and prior, udata is an intptr_t, but in 10.x (current development branch) it was changed to be a void * for compatibility with other BSDs a year or so ago. Unfortunately, this does not simplify the code, as NetBSD 8.x and 9.x are still supported and will be for a few more years. Signed-off-by: Nia Alarie <nia@NetBSD.org>
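A minimal sketch of how that kind of difference is usually absorbed, assuming a version check on __NetBSD_Version__ (the exact cutoff and the typedef name are assumptions; the actual MariaDB change may be structured differently):

    #include <sys/param.h>
    #include <sys/event.h>
    #include <cstdint>

    #if defined(__NetBSD__) && __NetBSD_Version__ < 1000000000
    typedef intptr_t kevent_udata_t;   /* NetBSD 8.x / 9.x: udata is intptr_t */
    #else
    typedef void *kevent_udata_t;      /* NetBSD 10.x and other BSDs: void * */
    #endif

    static void queue_read_event(int kq, int fd, void *ctx)
    {
      struct kevent ke;
      EV_SET(&ke, fd, EVFILT_READ, EV_ADD, 0, 0,
             reinterpret_cast<kevent_udata_t>(ctx));
      kevent(kq, &ke, 1, nullptr, 0, nullptr);
    }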
-
- 25 May, 2021 5 commits
-
-
Marko Mäkelä authored
-
Sergei Golubchik authored
cherry-pick commit: 1fff2398 MDEV-22530 post-push fixes from 10.6. Follow-up: if a KILL happens, report it as a failure; don't eat it up silently. Note that this has to be done after `table_name` is populated, so that the error message can show it.
-
Igor Babaev authored
In the code that existed just before this patch, binding of a table reference to the specification of the corresponding CTE happens in the function open_and_process_table(). If the table reference is not the first in the query, the specification is cloned in the same way as the specification of a view is cloned for any reference to the view. This works fine for standalone queries, but does not work for stored procedures / functions, for the following reason.

When the first call of a stored procedure / function SP is processed, the body of SP is parsed. When a query of SP is parsed, the info on each encountered table reference is put into a TABLE_LIST object linked into a global chain associated with the query. When parsing of the query is finished, the basic info on the table references from this chain, except table references to derived tables and information schema tables, is put in one hash table associated with SP. When parsing of the body of SP is finished, this hash table is used to construct TABLE_LIST objects for all table references mentioned in SP and link them into the list of such objects passed to a pre-locking process that calls open_and_process_table() for each table from the list. When a TABLE_LIST for a view is encountered, the view is opened and its specification is parsed. For any table reference occurring in the specification, a new TABLE_LIST object is created to be included into the list for pre-locking. After all objects in the pre-locking list have been looked through, the tables mentioned in the list are locked. Note that the objects referencing CTEs are just skipped here, as it is impossible to resolve these references without any info on the context where they occur.

Now the statements from the body of SP are executed one by one. At the very beginning of the execution of a query, the tables used in the query are opened, and open_and_process_table() is now called for each table reference mentioned in the list of TABLE_LIST objects associated with the query that was built when the query was parsed. For each table reference, first the reference is checked against the CTE definitions in whose scope it occurred. If such a definition is found, the reference is considered resolved, and if this is not the first reference to the found CTE, the specification of the CTE is re-parsed and the result of the parsing is added to the parse tree of the query as a sub-tree. If this sub-tree contains table references to other tables, they are added to the list of TABLE_LIST objects associated with the query so that the referenced tables can be opened. When the procedure that opens the tables comes to the TABLE_LIST object created for a non-first reference to a CTE, it discovers that the referenced table instance is not locked and reports an error.

Thus, processing non-first table references to a CTE in a way similar to how references to a view are processed does not work for queries used in stored procedures / functions. And the main problem is that the current pre-locking mechanism employed for stored procedures / functions does not allow saving the context in which a CTE reference occurs. It's not trivial to save the info about the context where a CTE reference occurs, while the resolution of the table reference cannot be done without this context, and consequently the specification for the table reference cannot be determined.

This patch solves the above problem by moving the resolution of all CTE references to the parsing stage. More exactly, references to CTEs occurring in a query are resolved right after parsing of the query has finished. After resolution, any CTE reference is marked as a reference to a derived table, so it is excluded from the hash table created for pre-locking the used base tables and views when the first call of a stored procedure / function is processed. This solution required recursive calls of the parser. The function THD::sql_parser() has been added specifically for recursive invocations of the parser.
-
Julius Goryavsky authored
The following features have been added:
1) Automatic addition of the pf = ip6 option for socat when it can be recognized from the format of the connection address;
2) Extra commas at the beginning and at the end of sockopt are now added or removed automatically; for example, sockopt='pf=ip6' and sockopt=',pf=ip6' work equally well.
Also, since the code of the get_transfer() function had to be touched anyway, I refactored it, and now:
3) encrypt = 4 is supported not only for xtrabackup-v2 but also for mariabackup, which can help with migration from Percona;
4) Improved setting of the 'commonname' option for the encrypt=3 and encrypt=4 modes.
-
nia authored
This needs backporting to MariaDB 10.5. Any changes I submit are freely available under the new BSD license. Signed-off-by: Nia Alarie <nia@NetBSD.org>
-
- 24 May, 2021 1 commit
-
-
Julius Goryavsky authored
mbstream is already supported as a format name after MDEV-24580, but additional code refactoring has been done to correctly display the format name in log files and to check whether the mbstream utility is in the path. Also, for xtrabackup-v2 (only available in 10.2) both utilities are supported, xbstream and mbstream, since they are interchangeable in this context. In this case, the original innobackupex always receives the correct --stream=xbstream option as input, but the user can actually try to use the mbstream utility during the transfer (if the user explicitly specifies this in the configuration file).
-