- 06 Apr, 2022 4 commits
-
-
Daniel Black authored
Closes #2084
-
Daniel Black authored
Closes #2083
-
Daniel Black authored
Closes #2082
-
Tuukka Pasanen authored
Add compression plugins to the autopkgtest basic smoke-test dependencies:
* bzip2
* lz4
* lzma
* lzo
* snappy
Also make sure that the MariaDB service is reloaded when the smoke test is executed.
-
- 05 Apr, 2022 2 commits
-
-
Marko Mäkelä authored
-
Otto Kekäläinen authored
As MariaDB 10.5 has been removed from Debian Sid and MariaDB 10.6 has entered it, the Salsa-CI testing needs to adapt. To achieve this, essentially sync most of the salsa-ci.yml contents from https://salsa.debian.org/mariadb-team/mariadb-server/-/tree/debian/latest

This includes removing the Stretch builds, as Stretch supports neither the uring nor the pmem libraries, which MariaDB 10.6 depends on. Also add a couple of Lintian overrides to make Salsa-CI pass.

NOTE TO MERGERS: This commit is made on the 10.6 branch and can be merged to all later branches (10.7, 10.8, 10.9, ...) for now, but later somebody needs to go in and update all the testing stages to do the upgrade testing correctly for 10.6->10.7->10.8->10.9 etc.
-
- 04 Apr, 2022 5 commits
-
-
Thirunarayanan Balathandayuthapani authored
- An InnoDB bulk insert operation fails to roll back when it detects a DB_DUPLICATE_KEY error. This leads to orphaned records in the primary index. A subsequent update/delete operation then assumes that the record exists in the secondary index, which leads to a failure.
-
Thirunarayanan Balathandayuthapani authored
- After MDEV-24621, InnoDB buffers the bulk insert operation for all indexes except spatial ones. But building the spatial index then requires a primary key lookup, which fails. So InnoDB should avoid the bulk insert when the table involves a spatial index.
-
Vlad Lesin authored
The issue is caused by commit 59a0236d. The initial intention of that commit was to speed up "mariabackup --prepare".

The call stack of the binlog position reading is the following:

  trx_rseg_mem_restore
  trx_rseg_array_init
  trx_lists_init_at_db_start
  srv_start

Both trx_lists_init_at_db_start() and trx_rseg_mem_restore() contain special cases for the srv_operation == SRV_OPERATION_RESTORE condition, and on this condition only the rseg headers are read to parse the binlog position, so the performance impact is not significant. The solution is to revert 59a0236d.
-
Daniel Black authored
ib_id_t is a uint64. On AIX this is not a long long unsigned, so to prevent compiler warnings and a potentially wrong type, the UINT64PFx definition is corrected. As INT64PF is unused (its last use was in XtraDB in 10.2), it is removed, to avoid the impression that INT64PF and UINT64PFx would otherwise be different types.
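As a hedged illustration of the portability issue (the exact corrected UINT64PFx definition in the server may differ; the standard PRIx64 macro from <cinttypes> is used here as an assumption of how such a macro can be made platform-correct):

```cpp
#include <cinttypes>
#include <cstdio>

// Hypothetical stand-in for the corrected macro: on AIX uint64_t is
// "unsigned long", on most Linux targets it is "unsigned long long", so a
// hard-coded "%llx" mismatches the argument type on one of them. PRIx64
// expands to the correct conversion specifier on each platform.
#define UINT64PFx_SKETCH "%" PRIx64

int main()
{
  uint64_t id= 42;  // plays the role of ib_id_t
  std::printf("id=" UINT64PFx_SKETCH "\n", id);
  return 0;
}
```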
-
Dmitry Shulga authored
This bug report is not about an ASAN use-after-free issue. This bug is about a missed call of the method LEX::cleanup_lex_after_parse_error that should happen on a parse error.

The aforementioned method calls sp_head::restore_thd_mem_root to clean up resources acquired while processing a stored routine; in particular, it restores the original mem root and resets LEX::sphead to nullptr. The method LEX::cleanup_lex_after_parse_error is invoked by the macro MYSQL_YYABORT. Unfortunately, some grammar rules for handling user variables in SQL use YYABORT instead of MYSQL_YYABORT to handle parser errors. As a consequence, in case a statement that sets a user variable is called inside a stored routine, it results in an assert failure in the sp_head destructor.

To fix the issue, the YYABORT macro should be replaced by MYSQL_YYABORT in those grammar rules that handle assignment of user variables.
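A toy model of the difference between the two abort paths, with hypothetical names standing in for the real parser state (the actual macro and cleanup live in sql_yacc.yy, sql_lex.cc and sp_head.cc):

```cpp
#include <cstdio>

static bool sphead_active= false;  // models LEX::sphead != nullptr

// models LEX::cleanup_lex_after_parse_error(): calls
// sp_head::restore_thd_mem_root(), restoring the original mem root and
// resetting LEX::sphead to nullptr
static void cleanup_lex_after_parse_error_sketch()
{
  sphead_active= false;
}

static int abort_with_yyabort()        // plain YYABORT: no cleanup at all
{
  return 1;
}

static int abort_with_mysql_yyabort()  // MYSQL_YYABORT: cleanup, then abort
{
  cleanup_lex_after_parse_error_sketch();
  return 1;
}

int main()
{
  sphead_active= true;                 // parsing a stored routine body
  abort_with_yyabort();
  std::printf("YYABORT leaves sphead_active=%d (asserts in ~sp_head)\n",
              sphead_active);

  sphead_active= true;
  abort_with_mysql_yyabort();
  std::printf("MYSQL_YYABORT leaves sphead_active=%d (clean)\n",
              sphead_active);
  return 0;
}
```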
-
- 02 Apr, 2022 1 commit
-
-
Sergei Golubchik authored
-
- 01 Apr, 2022 3 commits
-
-
Daniel Black authored
When testing against the system libfmt, use LIBFMT_INCLUDE_DIR in case the system include path is not sufficient.
-
Daniel Black authored
CMAKE_SYSTEM_PROCESSOR on AIX is "powerpc". To deconflict with the 32-bit Linux arch of the same name, CMAKE_SYSTEM_NAME is used in the CMakeLists.txt test to enable -mhtm, in the same way that was required for the Linux ppc64{,le} compilers in MDEV-27936.
-
Aleksey Midenkov authored
The table must be closed before running the ddl log revert. That is done by handle_alter_part_error().
-
- 31 Mar, 2022 3 commits
-
-
Nayuta Yanagisawa authored
Configuring UDFs via plugin variables does not look like a good idea: the more variables Spider has, the more complex it becomes. Further, only a few users are expected to use the Spider UDFs.

Deprecate the following plugin variables regarding the Spider UDFs:
* spider_udf_ds_bulk_insert_rows
* spider_udf_ds_table_loop_mode
* spider_udf_ds_use_real_table
* spider_udf_ct_bulk_insert_interval
* spider_udf_ct_bulk_insert_rows

spider_udf_table_lock_mutex_count and spider_udf_table_mon_mutex_count are also for tweaking UDFs, but they are already read-only, so there is no need to deprecate them.
-
Nayuta Yanagisawa authored
-
Nayuta Yanagisawa authored
"#ifdef WITH_PARTITION_STORAGE_ENGINE ... #endif" appears frequently in the Spider code base. However, there is no need to maintain such ifdefs because Spider is disabled if the partitioning engine is disabled.
-
- 30 Mar, 2022 6 commits
-
-
Rucha Deodhar authored
Fix: added a THD *thd argument to Item_func_or_sum::fix_length_and_dec() and to fix_length_and_dec() in all classes derived from Item_func_or_sum.
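A minimal, self-contained sketch of the signature change (the classes here are hypothetical simplifications; the real hierarchy lives in item.h and item_func.h):

```cpp
#include <cstdio>

struct THD { unsigned long max_allowed_packet= 16777216; };  // toy stand-in

struct Item_func_or_sum_sketch
{
  unsigned long max_length= 0;
  // before: virtual bool fix_length_and_dec();
  virtual bool fix_length_and_dec(THD *thd)= 0;  // THD now passed explicitly
  virtual ~Item_func_or_sum_sketch()= default;
};

struct Item_func_example : Item_func_or_sum_sketch
{
  bool fix_length_and_dec(THD *thd) override
  {
    // A derived class can consult session state directly instead of
    // reaching for the current_thd thread-local lookup.
    max_length= thd->max_allowed_packet;
    return false;  // false = success, following the server convention
  }
};

int main()
{
  THD thd;
  Item_func_example item;
  item.fix_length_and_dec(&thd);
  std::printf("max_length=%lu\n", item.max_length);
  return 0;
}
```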
-
Rucha Deodhar authored
Analysis: in case of an error while processing a JSON document, we go to the error label, which eventually returns 1 instead of 0.
Fix: return 0 instead of 1 in case of error.
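A toy shape of the fix (hypothetical function; the real change is in one of the Item_func_json_* methods in item_jsonfunc.cc):

```cpp
#include <cstdio>

static long long json_check_sketch(bool parse_failed)
{
  if (parse_failed)
    goto error;
  return 1;  // the document parsed cleanly

error:
  return 0;  // was: returned 1, which made a broken document look valid
}

int main()
{
  std::printf("good doc -> %lld, bad doc -> %lld\n",
              json_check_sketch(false), json_check_sketch(true));
  return 0;
}
```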
-
Rucha Deodhar authored
1) When at least one of the two JSON documents is of a scalar type:
   1.a) If both the value and the JSON document are scalars, return true if they have the same type and value.
   1.b) If the JSON document is a scalar but the other is an array (or vice versa), return true if the array has at least one element of the same type and value as the scalar.
   1.c) If one is a scalar and the other is an object, return false, because they cannot be compared.
2) When both arguments are of non-scalar type, return true if the conditions below are satisfied:
   2.a) When both arguments are arrays: iterate over the value and the JSON document; return true if there exists at least one element in the other array with the same type and value as an element in the value.
   2.b) When both arguments are objects: iterate over the value and the JSON document; return true if there exists at least one key-value pair common to the two objects.
   2.c) If one of the JSON document and the value is an array and the other is an object: iterate over the array; if an element of object type is found, compare it with the object (the other argument); return true if the entire object matches, i.e. all the key-value pairs match.
A sketch of these rules follows below.
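A self-contained sketch of the rules above over a toy JSON value type (the server implements the same logic over the json_engine_t parser in json_lib, not over an in-memory tree; all types and names below are assumptions of this sketch):

```cpp
#include <cstdio>
#include <string>
#include <utility>
#include <variant>
#include <vector>

struct Json;
using Array=  std::vector<Json>;
using Member= std::pair<std::string, Json>;   // one key-value pair
using Object= std::vector<Member>;
struct Json { std::variant<double, std::string, Array, Object> v; };

// Deep equality: same type and same value, recursively.
bool operator==(const Json &a, const Json &b) { return a.v == b.v; }

bool overlaps(const Json &a, const Json &b)
{
  const Array  *aa= std::get_if<Array>(&a.v),  *ba= std::get_if<Array>(&b.v);
  const Object *ao= std::get_if<Object>(&a.v), *bo= std::get_if<Object>(&b.v);

  if (ao && bo)                        // 2.b: any key-value pair in common
  {
    for (const Member &m : *ao)
      for (const Member &n : *bo)
        if (m == n)
          return true;
    return false;
  }
  if (aa && ba)                        // 2.a: any element in common
  {
    for (const Json &x : *aa)
      for (const Json &y : *ba)
        if (x == y)
          return true;
    return false;
  }
  if (aa || ba)                        // one argument is an array
  {
    const Array &arr=   aa ? *aa : *ba;
    const Json  &other= aa ? b : a;
    // 1.b: a scalar matches some element; 2.c: an object matches an object
    // element entirely (deep equality covers "all key-value pairs match")
    for (const Json &x : arr)
      if (x == other)
        return true;
    return false;
  }
  if (ao || bo)                        // 1.c: scalar vs object
    return false;
  return a == b;                       // 1.a: scalar vs scalar
}

int main()
{
  const Json scalar{1.0};
  const Json arr{Array{Json{2.0}, Json{1.0}}};
  std::printf("%d\n", overlaps(scalar, arr));  // 1 by rule 1.b
  return 0;
}
```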
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 29 Mar, 2022 12 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The comparison of the checkpoint age (the number of log bytes written since the previous checkpoint) is inaccurate, because the previous FILE_CHECKPOINT record could span two 512-byte log blocks, which will cause the LSN to increase by the size of the log block header and footer. We will still generate a redundant checkpoint if the previous checkpoint wrote some FILE_MODIFY records before the FILE_CHECKPOINT record.
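A back-of-the-envelope illustration of the slack (the 512-byte block with a 12-byte header and 4-byte footer is the classic InnoDB redo log block layout; treat the exact constants as an assumption of this sketch):

```cpp
#include <cstdio>

int main()
{
  const unsigned block_header= 12;  // per-block header bytes
  const unsigned block_footer=  4;  // per-block checksum/footer bytes

  // A FILE_CHECKPOINT record that straddles a 512-byte block boundary
  // advances the LSN by an extra header+footer beyond the record's own
  // size, so a naive "bytes since the last checkpoint" comparison can be
  // off by this much:
  std::printf("extra LSN growth across a block boundary: %u bytes\n",
              block_header + block_footer);
  return 0;
}
```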
-
Marko Mäkelä authored
Let us remove a redundant condition for the case when the S3 plugin is disabled at compilation time.
-
Marko Mäkelä authored
Whenever we retrieve an older version for READ COMMITTED, it is better to release the undo page latches so that we can freely move to the next clustered index record without potentially violating any latching order.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Only one checkpoint may be in progress at a time. The counter log_sys.n_pending_checkpoint_writes was being protected by log_sys.mutex. Let us replace it with the Boolean log_sys.checkpoint_pending.
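A simplified sketch of the replacement (names here are stand-ins; the real members are on log_sys in log0log.h and the flag remains protected by log_sys.mutex, like the old counter):

```cpp
#include <mutex>

struct log_sketch_t
{
  std::mutex mutex;
  // before: unsigned n_pending_checkpoint_writes;  // only ever 0 or 1
  bool checkpoint_pending= false;                   // at most one checkpoint
};

// Returns true if the caller won the right to run the single in-progress
// checkpoint, false if one is already running.
static bool begin_checkpoint(log_sketch_t &log)
{
  std::lock_guard<std::mutex> g(log.mutex);
  if (log.checkpoint_pending)
    return false;
  log.checkpoint_pending= true;
  return true;
}

static void end_checkpoint(log_sketch_t &log)
{
  std::lock_guard<std::mutex> g(log.mutex);
  log.checkpoint_pending= false;
}

int main()
{
  log_sketch_t log;
  if (begin_checkpoint(log))
    end_checkpoint(log);
  return 0;
}
```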
-
Marko Mäkelä authored
srv_start(): Set srv_startup_is_before_trx_rollback_phase before starting the buf_flush_page_cleaner() thread, so that it will not invoke log_checkpoint() before the log file has been created. This race condition was reproduced with https://rr-project.org. This fixes up commit 15efb7ed.
-
Marko Mäkelä authored
buf_pool_t::watch_unset(): Reorder some code so that no warning will be emitted in CMAKE_BUILD_TYPE=RelWithDebInfo. It is unclear why invoking watch_is_sentinel() before buf_fix_count() would make the warning disappear.
-
Marko Mäkelä authored
-
Jan Lindström authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 28 Mar, 2022 4 commits
-
-
mkaruza authored
For GTID consistency, a GTID event was artificially added before replication happened. This event should not contain a calculated checksum.

Reviewed-by:
Jan Lindström <jan.lindstrom@mariadb.com>
-
Vladislav Vaintroub authored
On the affected machine, the error happens sporadically in innodb.instant_alter_limit. Procmon shows SetRenameInformationFile failing with ERROR_ACCESS_DENIED. In this case, the destination file had previously been opened and oplocked by the Windows Defender antivirus. The fix is to retry MoveFileEx on ERROR_ACCESS_DENIED.
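A hedged sketch of the retry loop (the retry count and delay are assumptions; the real fix lives in the server's Windows rename path):

```cpp
#include <windows.h>

static BOOL move_file_with_retry(const wchar_t *from, const wchar_t *to)
{
  for (int attempt= 0; attempt < 100; attempt++)
  {
    if (MoveFileExW(from, to, MOVEFILE_REPLACE_EXISTING))
      return TRUE;
    if (GetLastError() != ERROR_ACCESS_DENIED)
      break;    // only the transient, antivirus-induced error is retried
    Sleep(10);  // give the oplock holder (e.g. the antivirus) time to let go
  }
  return FALSE;
}

int main()
{
  return move_file_with_retry(L"t1.ibd", L"t2.ibd") ? 0 : 1;
}
```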
-
Marko Mäkelä authored
In commit 437da7bc (MDEV-19534), the default value of the global variable srv_checksum_algorithm in innochecksum was changed from SRV_CHECKSUM_ALGORITHM_INNODB to the implied 0 (innodb_checksum_algorithm=crc32). As a result, the function buf_page_is_corrupted() would by default invoke buf_calc_page_crc32() in innochecksum, and crc32_inited would hold. This would cause "innochecksum" to fail on a particular page.

The actual problem is older, introduced in 2011 in mysql/mysql-server@17e497bdb793bc6b8360aa1c626dcd8bb5cfad1b (MySQL 5.6.3). It should affect the validation of pages of old data files that were written with innodb_checksum_algorithm=innodb. When using innodb_checksum_algorithm=crc32 (the default setting since MariaDB Server 10.2), some valid pages would be rejected only because exactly one of the two checksum fields accidentally matches the innodb_checksum_algorithm=crc32 value.

buf_page_is_corrupted(): Simplify the logic of non-strict checksum validation by always invoking buf_calc_page_crc32(). Remove a bogus condition that if only one of the checksum fields contains the value returned by buf_calc_page_crc32(), the page is corrupted.
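An abstract, hedged sketch of the simplified decision (the function and its inputs are stand-ins; the real logic, which reads the page's two stored checksum fields, is in buf_page_is_corrupted() in buf0buf.cc):

```cpp
#include <cstdio>

// field1_matches / field2_matches: whether each of the page's two stored
// checksum fields equals the freshly computed CRC-32C.
static bool crc32_valid_non_strict(bool field1_matches, bool field2_matches)
{
  // After the fix, crc32-validity requires both fields to match; a single
  // accidental match no longer condemns the page, and the caller simply
  // falls through to the legacy innodb/none checksum checks instead.
  return field1_matches && field2_matches;
}

int main()
{
  // One accidental match used to be treated as corruption; now it merely
  // means "not crc32-valid, try the legacy formats".
  std::printf("%d\n", crc32_valid_non_strict(true, false));
  return 0;
}
```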
-
Marko Mäkelä authored
In commit 7a4fbb55 (MDEV-25105) the innochecksum option --write (-w) was removed altogether. It should have been made a Boolean option, so that old data files may be converted to a format that is compatible with innodb_checksum_algorithm=strict_crc32 by executing the following:

innochecksum -n -w ibdata* */*.ibd

It would be better to use an older-version innochecksum for such a conversion, so that page checksums will be validated before updating the checksum.

It never was possible for innochecksum to convert files to the innodb_checksum_algorithm=full_crc32 format that is the default for new InnoDB data files.
-