- 17 Oct, 2023 3 commits
-
-
Marko Mäkelä authored
-
Yuchen Pei authored
- Remove some references to dead macros
-
Yuchen Pei authored
It was disabled in the recent 10.6->10.10 merge.
-
- 16 Oct, 2023 2 commits
-
-
Thirunarayanan Balathandayuthapani authored
- InnoDB fails to check the overflow buffer while applying the operation to the table that was rebuilt. This is caused by commit 3cef4f8f (MDEV-515).
-
Monty authored
-
- 14 Oct, 2023 3 commits
-
-
Monty authored
Fixed missing initialization of Alter_info(). This could cause crashes in some create table like scenarios where some generated indexes were automatically dropped. I also added a test that we do not try to drop from index_stats for temporary tables.
-
Monty authored
The intention was always to not create histograms for single-value unique keys (as histograms are not useful in this case), but because of a bug in the code this was still done. The changes in the test cases are mainly because hist_size is now NULL for these kinds of columns.
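A quick way to see the corrected behaviour (a minimal sketch; the table, column names and data are made up for illustration) is to compare hist_size in mysql.column_stats for a single-column unique key versus an ordinary column:

  -- Hypothetical table: id is a single-column PRIMARY KEY, v is a plain column.
  CREATE TABLE t1 (id INT PRIMARY KEY, v INT);
  INSERT INTO t1 VALUES (1,10),(2,10),(3,20);
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  -- After this fix hist_size is expected to be NULL for id (no histogram is
  -- built for a single-value unique key), while v can still get a histogram.
  SELECT column_name, hist_size FROM mysql.column_stats
  WHERE db_name = DATABASE() AND table_name = 't1';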
-
Marko Mäkelä authored
The MDEV-29693 conflict resolution is from Monty, as well as a bug fix where ANALYZE TABLE wrongly built histograms for a single-column PRIMARY KEY. Also includes a fix for safe_malloc error reporting.

Other things:
- Copied main.log_slow from 10.4 to avoid an mtr issue.

Disabled tests:
- spider/bugfix.mdev_27239 because we started to get
  +Error 1429 Unable to connect to foreign data source: localhost
  -Error 1158 Got an error reading communication packets
- main.delayed
  - Bug#54332 Deadlock with two connections doing LOCK TABLE+INSERT DELAYED
    This part is disabled for now as it fails randomly with different warnings/errors (no corruption).
-
- 13 Oct, 2023 2 commits
-
-
Vlad Lesin authored
MDEV-32272 lock_release_on_prepare_try() does not release lock if supremum bit is set along with other bits set in lock's bitmap

The error is caused by the MDEV-30165 fix in commit d13a57ae. There is a logical error in lock_release_on_prepare_try():

  if (supremum_bit)
    lock_rec_unlock_supremum(*cell, lock);
  else
    lock_rec_dequeue_from_page(lock, false);

There can be other bits set in the lock's bitmap, and the lock type can be suitable for the releasing criteria, but the above logic releases only the supremum bit of the lock. The fix is to release the lock if it fits the releasing criteria, and otherwise to unlock the supremum if the supremum is locked.

There is also a test for the case reported by the QA team. I placed it in a separate file, because it requires a debug build.

Reviewed by: Marko Mäkelä
-
Thirunarayanan Balathandayuthapani authored
MDEV-31098 InnoDB Recovery doesn't display encryption message when no encryption configuration passed
- InnoDB fails to report the error when the encryption configuration wasn't passed. This patch addresses the issue by reporting the error while loading the tablespace and by deferring the tablespace creation.
-
- 12 Oct, 2023 1 commit
-
-
Daniel Black authored
There are many filesystem-related errors that can occur with MariaBackup. These are already output to stderr with a good description of the error. Many of them are permission or resource (file descriptor) limits, where the assertion and resulting core crash don't offer developers anything more than the log message. To the user, assertions and core crashes come across as poor error handling. As such, we now return an error and handle it all the way up the stack.
-
- 11 Oct, 2023 2 commits
-
-
Marko Mäkelä authored
log_t::create(), log_t::attach(): Return whether the initialisation succeeded. It may fail if too large an innodb_log_buffer_size is specified.

recv_sys_t::close_files(): Actually close the data files so that the test mariabackup.huge_lsn,strict_crc32 will not fail on Microsoft Windows when renaming ib_logfile101, due to a leaked file handle of ib_logfile0.

recv_sys_t::find_checkpoint(): Register recv_sys.files[0] as OS_FILE_CLOSED because the file handle has already been attached to log_sys.log and we do not want to close the file twice.

recv_sys_t::read(): Access the first log file via log_sys.log.

This is a port of commit 6e9b421f adapted to commit 685d958e (MDEV-14425). The test case is omitted, because it would fail to fail when the log is stored in persistent memory (or "fake PMEM" on Linux /dev/shm).
-
Yuchen Pei authored
The system variable spider_disable_group_by_handler, if on, will disable the Spider group by handler (gbh). Such disablement serves as a workaround for bugs caused by the gbh, labelled with spider-gbh on Jira, including MDEV-26247, MDEV-28998, MDEV-29163, MDEV-30392, MDEV-31645. Tests for these tickets are added accordingly with the workaround in place.
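For reference, applying the workaround described above would look roughly like this (a sketch; it assumes the variable is settable at global scope, which the description implies but does not spell out):

  -- Disable the Spider group by handler to work around the gbh bugs listed
  -- above; set it back to OFF once the underlying bugs are fixed.
  SET GLOBAL spider_disable_group_by_handler = ON;
  -- Verify the setting.
  SHOW GLOBAL VARIABLES LIKE 'spider_disable_group_by_handler';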
-
- 10 Oct, 2023 4 commits
-
-
Monty authored
The problem was that RANGE_OPT_PARAM was not completely initialized in some cases. Added bzero() to ensure that all elements are always initialized.
-
Monty authored
-
Monty authored
Fixed a hang when renaming an index to its original name.
-
Monty authored
Use Dummy_error_handler in open_stat_tables() to ignore all errors when opening statistics tables.
-
- 08 Oct, 2023 1 commit
-
-
Otto Kekalainen authored
In commit 5ea5291d @sanja-byelkin for an unknown reason switched the file mode of 3 Galera tzinfo related test files from 644 to 755. This exists only from branch 10.6 onward:

$ git checkout 10.5
$ find mysql-test -executable -name *.test -or -executable -name *.result
(no results)

$ git checkout 10.6
$ find mysql-test -executable -name *.test -or -executable -name *.result
mysql-test/suite/galera/t/mysql_tzinfo_to_sql.test
mysql-test/suite/galera/t/mariadb_tzinfo_to_sql.test
mysql-test/suite/galera/r/mariadb_tzinfo_to_sql.result

No test file nor test result file should be executable, so run chmod -x on them.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
- 06 Oct, 2023 5 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
log_t::create(): Return whether the initialisation succeeded. It may fail if too large an innodb_log_buffer_size is specified.
-
Aleksey Midenkov authored
check_access() updates the ACL of the first TABLE_LIST (t1), but not of the second (tp). After it is done, we copy the t1 ACL to tp.
-
Marko Mäkelä authored
copy_back(): Also copy the dummy empty ib_logfile0 so that MariaDB Server 10.8 or later can be started after --copy-back or --move-back. Thanks to Daniel Black for reporting this. This is a 10.5 version of commit ebf36492
-
Marko Mäkelä authored
Table_cache_instance::operator new[](size_t): Reverted the changes that were made in commit 8edef482 and moved them to the only caller.
-
- 05 Oct, 2023 4 commits
-
-
Yuchen Pei authored
-
Yuchen Pei authored
Fix spider init bugs (MDEV-22979, MDEV-27233, MDEV-28218) while preventing regressions on old ones (MDEV-30370, MDEV-29904).

Two things are changed:

First, spider initialisation is made fully synchronous, i.e. it no longer happens in a background thread. Adapted from the original fix by nayuta for MDEV-27233. This change by itself would cause a failure when spider is initialised early, by plugin-load-add, due to the dependency on Aria and on udf function creation, which are fixed in the second and third parts below. Requires SQL Service, thus porting to earlier versions requires MDEV-27595.

Second, if spider is initialised before udf_init(), create the udf by inserting into `mysql.func`, otherwise do it by `CREATE FUNCTION` as usual. This change may be generalised in MDEV-31401.

Also factor out some clean-up queries from deinit_spider.inc for use by the spider init tests.

A minor caveat is that early spider initialisation will fail if the server is bootstrapped for the first time, due to the missing `mysql` database which needs to be created by the bootstrap script.
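As a rough illustration of the two udf creation paths mentioned above (the udf name spider_direct_sql and the mysql.func column values are my assumptions, not taken from the commit):

  -- Normal path, once the server is far enough into startup for CREATE FUNCTION:
  CREATE FUNCTION spider_direct_sql RETURNS INT SONAME 'ha_spider.so';
  -- Early-init path sketched above: register the udf by inserting directly
  -- into mysql.func so that udf_init() picks it up later.
  INSERT INTO mysql.func (name, ret, dl, type)
  VALUES ('spider_direct_sql', 2, 'ha_spider.so', 'function');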
-
Yuchen Pei authored
Removing procedures that were created and dropped during init. This also fixes a race condition where an mtr test with plugin-load-add=ha_spider.so causes the post-test check to fail, as it expects the procedures to still be there.
-
Yuchen Pei authored
There are several plugins in ha_spider: spider, spider_alloc_mem, spider_wrapper_protocols, spider_rewrite etc. INSTALL PLUGIN foo SONAME ha_spider, where foo is any of these plugins, causes all the other ones to be installed by the init queries. This introduces unnecessary complexity. For example, it reads mysql.plugin to find all other plugins, causing the hack of moving spider plugin init to a separate thread. To install all spider related plugins, INSTALL SONAME ha_spider should be used instead.

This also fixes spurious rows in mysql.plugin when installing, say, only the spider plugin with `plugin-load-add=SPIDER=ha_spider.so`:

select * from mysql.plugin;
name                      dl
spider_alloc_mem          ha_spider.so  # should not be here
spider_wrapper_protocols  ha_spider.so  # should not be here

Adapted from part of the reverted commit c160a115.
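A short before/after illustration of the behaviour described above (a sketch; run one of the two install statements, not both):

  -- Either: install only the named plugin; with this change the other Spider
  -- plugins are no longer added to mysql.plugin as a side effect.
  INSTALL PLUGIN spider SONAME 'ha_spider.so';
  -- Or: to get the whole family (spider, spider_alloc_mem,
  -- spider_wrapper_protocols, ...), install the library as a whole.
  INSTALL SONAME 'ha_spider';
  -- Inspect what actually got registered.
  SELECT name, dl FROM mysql.plugin;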
-
- 04 Oct, 2023 2 commits
-
-
Vladislav Vaintroub authored
Use an std::atomic_flag to track thread creation in progress. This is mainly a cleanup; the effect of this change was not measurable in my tests.
-
Vladislav Vaintroub authored
Add threadpool functionality to restrict concurrency during "batch" periods (where tasks are added in rapid succession). This will throttle thread creation more aggressively than usual, while keeping performance at least on par.

One of these cases is buffer pool load, where async read IOs are executed without any throttling. There can be as many as 650K read IOs for loading a 10GB buffer pool. Another one is recovery, where "fake read" IOs are executed.

Why are there more threads than we expect? Worker threads are not recognized as idle until they return to the standby list, and to return to that list they need to acquire a mutex currently held in submit_task(). In those cases submit_task() has no worker to wake, and would create threads until the default concurrency level (2*ncpus) is satisfied. Only after that would throttling happen.
-
- 03 Oct, 2023 9 commits
-
-
Michael Widenius authored
The problem was that we did not handle errors properly in JOIN::get_best_combination. In case of an early error, JOIN->join_tab would contain uninitialized values, which would cause errors on cleanup(). The error in question was reported earlier, but not noticed until later. One cause of this is that most of the sql_select.cc code just checks thd->fatal_error and not thd->is_error(). Fixed by changing checks of fatal_error to is_error().
-
Monty authored
This allows a user to change the default value of MAX_SEL_ARGS (16000) in the rare case where they need more generated SEL_ARGs (as part of the range optimizer).
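Assuming the new setting is exposed as an optimizer_max_sel_args system variable (that name is my assumption; the commit text only mentions the MAX_SEL_ARGS constant), usage would look roughly like:

  -- The default corresponds to the old hard-coded MAX_SEL_ARGS value of 16000.
  -- Raise it for a session whose range conditions need more SEL_ARGs.
  SET SESSION optimizer_max_sel_args = 64000;
  SHOW SESSION VARIABLES LIKE 'optimizer_max_sel_args';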
-
Monty authored
Raise notes if indexes cannot be used:
- in case of data type or collation mismatch (different error messages).
- in case a table field was replaced with something else (e.g. Item_func_conv_charset) during a condition rewrite.

Added an option to write warnings and notes to the slow query log for slow queries.

New variables added/changed:
- note_verbosity, which is a set of the following options:
    basic         - All old notes.
    unusable_keys - Print warnings about keys that cannot be used for select, delete or update.
    explain       - Print unusable_keys warnings for EXPLAIN queries.
  The default is 'basic,explain'. This means that for old installations the only notable new behavior is that one will get notes about unusable keys when one does an EXPLAIN for a query. One can turn off all notes by either setting note_verbosity to "" or setting sql_notes=0.
- log_slow_verbosity has a new option 'warnings'. If this is set, then warnings and notes generated are printed in the slow query log (up to log_slow_max_warnings times per statement).
- log_slow_max_warnings - Max number of warnings written to the slow query log.

Other things:
- One can now use =ALL for any 'set' variable to set all options at once. For example using "note_verbosity=ALL" in a config file or "SET @@note_verbosity=ALL" in SQL.
- mysqldump will in the future use @@note_verbosity="" instead of @@sql_notes=0 to disable notes.
- Added "enum class Data_type_compatibility" and changed the return type of all Field::can_optimize*() methods from "bool" to this new data type.

Reviewer & Co-author: Alexander Barkov <bar@mariadb.com>
- The code that prints out the notes comes mainly from Alexander.
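For illustration, the new settings can be combined roughly like this (a sketch based only on the variable names and options listed above):

  -- Get notes about unusable keys for normal statements as well as for EXPLAIN.
  SET GLOBAL note_verbosity = 'basic,unusable_keys,explain';
  -- Or enable every option at once with the new =ALL shorthand for set variables.
  SET @@note_verbosity = ALL;
  -- Also copy warnings/notes of slow queries into the slow query log, capped
  -- at 10 per statement (log_slow_verbosity is a set, so include any other
  -- options you already rely on).
  SET GLOBAL log_slow_verbosity = 'warnings';
  SET GLOBAL log_slow_max_warnings = 10;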
-
Monty authored
The warning is given in case the table is not found or if there is a lock timeout. The warning is needed because, in case of a lock timeout, the persistent table stats are going to be wrong.
-
Monty authored
-
Monty authored
Updated ha_mroonga::storage_check_if_supported_inplace_alter to support new ALTER TABLE flags. This fixes the failing tests:
mroonga/storage.alter_table_add_index_unique_duplicated
mroonga/storage.alter_table_add_index_unique_multiple_column_duplicated
-
Monty authored
- hostnames in hostname_cache added
- Some Galera (WSREP) allocations
- Table caches
-
Monty authored
This makes it easier to see how much memory the MariaDB server has allocated (for all memory allocations that go through mysys).
-
Monty authored
Example of what causes the problem:

T1: ANALYZE TABLE starts to collect statistics
T2: ALTER TABLE starts by deleting statistics for all changed fields, then creates a temp table and copies data to it.
T1: ANALYZE ends and writes to the statistics tables.
T2: ALTER TABLE renames the temp table in place of the old table.

Now the statistics from ANALYZE match the old, deleted table.

Fixed by waiting to delete old statistics until ALTER TABLE is the only one using the old table, and by ensuring that rename of columns can handle swapping of column names.

rename_columns_in_stat_table() (former rename_column_in_stat_tables()) now takes a list of columns to rename. It uses the following algorithm to update column_stats to be able to handle circular renames:

- While there are columns to be renamed, and it is the first loop or the last rename loop did change something:
  - Loop over all columns to be renamed:
    - Change the column name in column_stats.
    - If this fails because of a duplicate key:
      - If this is the first change attempt for this column:
        - Change the column name to a temporary column name.
        - If there was a conflicting row, replace it with the current row.
      - else:
        - Remove the entry from the column list.
- Loop over all remaining columns in the list:
  - Remove the conflicting row.
  - Change the column from the temporary name to the final name in column_stats.

Other things:
- Don't flush tables for every operation. Only flush when all updates are done.
- Rename of columns was not handled in case of ALGORITHM=copy (old bug).
- Fixed that we do not collect statistics for hidden hash columns used by UNIQUE constraints on long values.
- Fixed that we do not collect statistics for blob columns referred to by generated virtual columns. This was achieved by storing the fields for which we want to have statistics in table->has_value_set instead of in table->read_set.
- Rename of indexes was not handled for persistent statistics. This is now handled similarly to rename of columns: renamed indexes are now stored in 'rename_stat_indexes' and handled in Alter_info::delete_statistics() together with dropped indexes.
- ALTER TABLE .. ADD INDEX may, instead of creating a new index, rename an existing generated foreign key index. This was not reflected in the index_stats table because it was handled in mysql_prepare_create_table instead of in the mysql_alter() code. Fixed by adding a call in mysql_prepare_create_table() to drop the changed index. I also had to replace the code that 'marked the index' to be ignored with code that would not destroy the original index name.

Reviewer: Sergei Petrunia <sergey@mariadb.com>
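A small example of the circular-rename case that rename_columns_in_stat_table() now has to handle (table and column names are illustrative only):

  -- Collect persistent statistics for both columns.
  CREATE TABLE t1 (a INT, b INT);
  ANALYZE TABLE t1 PERSISTENT FOR ALL;
  -- Swap the two column names in one ALTER TABLE; the rows in
  -- mysql.column_stats must end up swapped as well, which is why the
  -- temporary-name step in the algorithm above is needed.
  ALTER TABLE t1 CHANGE a b INT, CHANGE b a INT;
  SELECT column_name FROM mysql.column_stats
  WHERE db_name = DATABASE() AND table_name = 't1';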
-
- 02 Oct, 2023 1 commit
-
-
Thirunarayanan Balathandayuthapani authored
- This failure happens due to commit bf3b787e (MDEV-31835). InnoDB fails to apply the buffered insert operation for transaction_registry during the commit operation. To avoid this, ha_commit_trans() should call extra() with HA_EXTRA_RESET_STATE to apply the bulk buffered insert operation.
-
- 29 Sep, 2023 1 commit
-
-
Andrei authored
In `pseudo_slave_mode=1`, aka "pseudo-slave" mode, any prepared XA transaction disconnects from the user session, as if the user connection drops. The xid of such a transaction remains in the server, and should the prepared transaction be read-only, it is marked. The marking makes sure that the following termination of the read-only transaction ends up with ER_XA_RBROLLBACK. This did not actually take place for `pseudo_slave_mode=1` read-only transactions. Fixed by checking the read-only status of a prepared transaction at the time it disconnects from the `pseudo_slave_mode=1` session, and marking its xid when that is the case.
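A rough SQL illustration of the scenario being fixed (the statement sequence is my reading of the description above, not taken from the test case):

  -- Session 1: a read-only XA transaction prepared under pseudo_slave_mode=1
  -- detaches from the session instead of being rolled back on disconnect.
  SET @@session.pseudo_slave_mode = 1;
  XA START 'xid1';
  SELECT * FROM t1;          -- read-only work only
  XA END 'xid1';
  XA PREPARE 'xid1';
  -- (the connection ends; the prepared transaction stays in the server)

  -- Session 2: terminating the detached read-only transaction should now
  -- report ER_XA_RBROLLBACK, thanks to the marking described above.
  XA COMMIT 'xid1';          -- expected to fail with ER_XA_RBROLLBACK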
-