- 27 May, 2024 40 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Elena Stepanova authored
-
Monty authored
Columns added to TABLE_STATISTICS:
- ROWS_INSERTED, ROWS_DELETED, ROWS_UPDATED, KEY_READ_HITS and KEY_READ_MISSES.
Columns added to CLIENT_STATISTICS and USER_STATISTICS:
- KEY_READ_HITS and KEY_READ_MISSES.
User visible changes (except new columns):
- CLIENT_STATISTICS and USER_STATISTICS have the columns KEY_READ_HITS and KEY_READ_MISSES added after the ROWS_UPDATED column and before SELECT_COMMANDS.
Other changes:
- Do not collect table statistics for system tables like index_stats, table_stats, performance_schema, information_schema etc., as the user has no control over these and they generate noise in the statistics.
- All row variables that are part of user_stats are moved to 'struct rows_stats' to make it easy to clear all of them at once.
- ha_read_key_misses added to STATUS_VAR.
Notes:
- userstat.result has a change in the number of rows for handler_read_key. This is because use-stat-tables is now disabled for the test.
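For illustration, a query like the following shows the new counters. This is only a sketch: it assumes statistics collection is enabled via the existing userstat setting, and it uses the column names listed above.
  -- Enable collection of the *_STATISTICS tables (assumption: the existing
  -- userstat switch controls this, as in earlier MariaDB versions).
  SET GLOBAL userstat = 1;
  -- Inspect the new per-table counters added by this change.
  SELECT TABLE_SCHEMA, TABLE_NAME,
         ROWS_INSERTED, ROWS_DELETED, ROWS_UPDATED,
         KEY_READ_HITS, KEY_READ_MISSES
  FROM information_schema.TABLE_STATISTICS
  ORDER BY KEY_READ_MISSES DESC
  LIMIT 10;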
-
Sergei Golubchik authored
fixes a failure of period.create on bintar-centos74-amd64
-
Sergei Golubchik authored
EE_ numbers occupy the same range as OS Exxx errors, while my_errno is generally for OS errors and for HA_ERR_ handler errors, which occupy a different range and so can be freely mixed with OS errors.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
add a test case
-
Sergei Golubchik authored
-
Sergei Golubchik authored
reinit_io_cache() resets and restores MY_TRACK_WITH_LIMIT internally
-
Sergei Golubchik authored
and make it static
-
Sergei Golubchik authored
and rephrase the memory accounting error message to match the tmp_space one (besides, the memory accounting error could - and did - happen when all memory was actually freed)
-
Monty authored
MDEV-34150 Assertion failure in Diagnostics_area::set_error_status upon binary logging hitting tmp space limit
- Moved writing to binlog_cache from close_thread_tables() to binlog_commit().
- In select_create(), delete cached row events instead of flushing them to disk. This was done to avoid a possible disk write error in this code.
-
Monty authored
-
Monty authored
-
Monty authored
MDEV-34054 Memory leak in Window_func_runner::exec after encountering "temporary space limit reached" error
-
Monty authored
-
Monty authored
Changes:
- Fixed that MyISAM and Aria parallel repair works with the tmp file limit. This required adding current_thd to all parallel workers and adding protection in my_malloc_size_cb_func() and temp_file_size_cb_func() to be able to handle shared THDs. I removed the old code in MyISAM that set current_thd(), as it only worked when used with virtual indexed columns and I wanted to keep the Aria and MyISAM code identical.
Other things:
- Improved error messages from Aria parallel repair and create_internal_tmp_table_from_heap().
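As a rough sketch of how the fix can be exercised (the table name is hypothetical; aria_repair_threads is an existing Aria setting, max_tmp_space_usage is the tmp-space quota introduced in the commit further down in this log, and its session scope here is an assumption):
  -- Use more than one repair worker for Aria tables (existing setting).
  SET SESSION aria_repair_threads = 2;
  -- Assumed session-scope tmp-space quota from the tmp-space commit below.
  SET SESSION max_tmp_space_usage = 1024 * 1024 * 1024;
  -- Parallel repair now honours the temporary-space limit.
  REPAIR TABLE t1;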
-
Monty authored
The bug was that Aggregator_distinct::add() did not properly handle write errors. (Old bug exposed by the new code).
-
Monty authored
Two new variables added:
- max_tmp_space_usage: Limits the temporary space allowance per user.
- max_total_tmp_space_usage: Limits the temporary space allowance for all users.
New status variables: tmp_space_used & max_tmp_space_used
New field in information_schema.PROCESSLIST: TMP_SPACE_USED
The temporary space is counted for:
- All SQL level temporary files. This includes files for filesort, transaction temporary space, analyze, binlog_stmt_cache etc. It does not include engine internal temporary files used for repair, alter table, index pre sorting etc.
- All internal on-disk temporary tables created as part of resolving a SELECT, multi-source update etc.
Special cases:
- When doing a commit, the last flush of the binlog_stmt_cache will not cause an error even if the temporary space limit is exceeded. This is to avoid giving errors on commit. This means that a user can temporarily go over the limit by up to binlog_stmt_cache_size.
Noteworthy issue:
- One has to be careful when using small values for max_tmp_space_usage together with binary logging and with non-transactional tables. If the binary log entry for the query is bigger than binlog_stmt_cache_size and one hits the limit of max_tmp_space_usage when flushing the entry to disk, the query will abort and the binary log will not contain the last changes to the table. This will also stop the slave! This is also true for all Aria tables, as Aria cannot do rollback (except in case of crashes)! One way to avoid it is to use @@binlog_format=statement for queries that update a lot of rows.
Implementation:
- All writes to temporary files or internal temporary tables that increase the file size are routed through temp_file_size_cb_func(), which updates and checks the temp space usage.
- Most of the temporary file monitoring is done inside IO_CACHE; monitoring for internal temporary tables is done inside the Aria engine.
- MY_TRACK and MY_TRACK_WITH_LIMIT are new flags for init_io_cache(). MY_TRACK means that we track the file usage. MY_TRACK_WITH_LIMIT means that we track the file usage and give an error if the limit is breached. This is used to not give an error on commit when the binlog_stmt_cache is flushed.
- global_tmp_space_used contains the total tmp space used so far. This is needed to quickly check against max_total_tmp_space_usage.
- Temporary space errors use EE_LOCAL_TMP_SPACE_FULL and handler errors use HA_ERR_LOCAL_TMP_SPACE_FULL. This is needed until we move general errors to their own error space so that they cannot conflict with system error numbers.
- The return value of my_chsize() and mysql_file_chsize() has changed so that -1 is returned in the case where my_chsize() could not decrease the file size (very unlikely and will not happen on modern systems). All calls to _chsize() are updated to check for > 0 as the error condition.
- At the destruction of THD we check that THD::tmp_file_space == 0.
- At server shutdown we check that global_tmp_space_used == 0.
- As a precaution against errors in the tmp_space_used code, one can set max_tmp_space_usage and max_total_tmp_space_usage to 0 to disable the tmp space quota errors.
- truncate_io_cache() function added.
- Aria tables using static or dynamic row length are registered in 8K increments to avoid some calls to update_tmp_file_size().
Other things:
- Ensure that all handler errors are registered. Before, some engine errors could be printed as "Unknown error".
- Fixed a bug in filesort() that caused an assert if there was an error when writing to the temporary file.
- Fixed that compute_window_func() now takes write errors into account.
- In case of parallel replication, rpl_group_info::cleanup_context() could call trans_rollback() with thd->error set, which would cause an assert. Fixed by resetting the error before calling trans_rollback().
- Fixed a bug in subselect3.inc which caused the following test to use heap tables with a low value for max_heap_table_size.
- Fixed a bug in sql_expression_cache where it did not overflow the heap table to an Aria table.
- Added Max_tmp_disk_space_used to the slow query log.
- Fixed some bugs in log_slow_innodb.test.
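A small, hedged example of the user-visible pieces described above (the quota values are arbitrary, and the assumption is that both quota variables can be set at global scope):
  -- Cap temporary space per user and for the server as a whole.
  SET GLOBAL max_tmp_space_usage = 1 * 1024 * 1024 * 1024;
  SET GLOBAL max_total_tmp_space_usage = 16 * 1024 * 1024 * 1024;
  -- The new status counters.
  SHOW STATUS LIKE '%tmp_space_used';
  -- Per-connection usage in the process list.
  SELECT ID, USER, TMP_SPACE_USED
  FROM information_schema.PROCESSLIST
  ORDER BY TMP_SPACE_USED DESC;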
-
Monty authored
A commit in 10.10 caused it to be always 'No'
-
Monty authored
Added name of the dll/udf that caused the error.
-
Sergei Golubchik authored
add an old-mode that restores the inconsistent legacy behavior of FLUSH STATUS. It doesn't affect FLUSH { SESSION | GLOBAL } STATUS.
-
Sergei Golubchik authored
make it easier to add more old-mode values
-
Sergei Golubchik authored
-
Monty authored
- FLUSH GLOBAL STATUS now resets most global_status_vars. At this stage, this is mainly to be used for testing.
- FLUSH SESSION STATUS added as an alias for FLUSH STATUS.
- FLUSH STATUS does not require any privilege (before it required RELOAD).
- FLUSH GLOBAL STATUS requires the RELOAD privilege.
- All global status resets moved to FLUSH GLOBAL STATUS.
- Replication semisync status variables are now reset by FLUSH GLOBAL STATUS.
- In test cases, the only changes are:
  - Replace FLUSH STATUS with FLUSH GLOBAL STATUS.
  - Replace FLUSH STATUS with FLUSH STATUS; FLUSH GLOBAL STATUS. This was only done in a few tests where the test was using SHOW STATUS for both local and global variables.
- Uptime_since_flush_status is now always provided, independent of whether ENABLED_PROFILING is enabled when compiling MariaDB.
- @@global.Uptime_since_flush_status is reset on FLUSH GLOBAL STATUS and @@session.Uptime_since_flush_status is reset on FLUSH SESSION STATUS.
- When connecting, @@session.Uptime_since_flush_status is set to 0.
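For example, using the privilege requirements and variable names described above:
  -- Reset this connection's status counters; no privilege required.
  FLUSH SESSION STATUS;
  SHOW SESSION STATUS LIKE 'Uptime_since_flush_status';
  -- Reset the global counters; requires the RELOAD privilege.
  FLUSH GLOBAL STATUS;
  SHOW GLOBAL STATUS LIKE 'Uptime_since_flush_status';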
-
Monty authored
Added SHOW_LONGLONG_NOFLUSH to mark the variables that should not be flushed.
New variables cleared as part of FLUSH STATUS:
- Rpl_semi_sync_master_request_ack
- Rpl_semi_sync_master_get_ack
- Rpl_semi_sync_slave_send_ack
- Slave_skipped_errors
-
Monty authored
-
Sergei Golubchik authored
Closes #3228
-
Anson Chung authored
A new GitLab MTR run is added which uses libfaketime to simulate starting the server and running the test suite in the future. The job simulates running at two times in the year 2038, once before the signed 32-bit UNIX timestamp limit and once after. This change helps to ensure that all future changes preserve the ability of the engine to start and function past the year 2038. All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
Anson Chung authored
In previous commits, changes were made to use the entire 32-bit unsigned range in order to support timestamps past 2038 on 64-bit systems. Without changing set_max_time on thr_timer to also match this new range, the server will crash when attempting to start up past 2038-01-19. Like the previous commits, this only applies to 64-bit systems. All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
plugins can have unused variables too. If they use a literal "Unused" string, a compiler might or might not merge two identical strings into one (-fmerge-constants), and depending on that the server will or will not issue a "variable is ignored" warning.
-
Monty authored
-
Monty authored
This will allow us, in the future, to add better error messages explaining why an Aria table is crashed.
-
Monty authored
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range
- Added --update-history option to mariadb-dump to change the 2038 row_end timestamp to 2106.
- Updated ALTER TABLE ... to convert old row_end timestamps to the 2106 timestamp for tables created before MariaDB 11.4.0.
- Fixed a bug in CHECK TABLE where we wrongly suggested to use REPAIR TABLE when ALTER TABLE ... FORCE is needed.
- mariadb-check printed table names that were used with REPAIR TABLE but did not print table names used with ALTER TABLE or with name repair. Fixed by always printing a table that is fixed if --silent is not used.
- Added TABLE::vers_fix_old_timestamp() that will change the max timestamp for versioned tables when replicating from a pre-11.4.0 server.
A few test cases changed. This is caused by:
- CHECK TABLE now prints 'Please do ALTER TABLE...' instead of 'Please do REPAIR TABLE' when there is a problem with the information in the .frm file (for example a very old .frm file).
- mariadb-check now prints repaired table names.
- mariadb-check also now prints a nicer error message in case ALTER TABLE is needed to repair a table.
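A sketch of the user-visible workflow (the table name is hypothetical; the mariadb-dump --update-history option is a command-line flag and is not shown here):
  -- A table created before 11.4.0 may now be reported as needing a rebuild.
  CHECK TABLE old_versioned_table FOR UPGRADE;
  -- Rebuilding also converts the old 2038 row_end timestamps to 2106.
  ALTER TABLE old_versioned_table FORCE;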
-
Monty authored
This task is to ensure we have a clear definition and rules of how to repair or optimize a table.
The rules are:
- REPAIR should be used with tables that are crashed and are unreadable (hardware issues with unreadable blocks, blocks with 'unexpected data', etc).
- OPTIMIZE TABLE should be used to optimize the storage layout for the table (recover space from deleted rows and optimize the index structure).
- ALTER TABLE table_name FORCE should be used to rebuild the .frm file (the table definition) and the table (with the original table row format). If the table is from an older MariaDB/MySQL release with a different storage format, it will convert the data to the new format. ALTER TABLE ... FORCE is used as part of mariadb-upgrade.
Here follows some more background.
The 3 ways to repair a table are:
1) "ALTER TABLE table_name FORCE" (no other options). As an alias we allow "ALTER TABLE table_name ENGINE=original_engine".
2) "REPAIR TABLE" (without FORCE).
3) "OPTIMIZE TABLE".
All of the above commands will optimize row space usage (which means that space will be needed to hold a temporary copy of the table) and re-generate all indexes. They will also try to replicate the original table definition as exactly as possible.
For ALTER TABLE and "REPAIR TABLE without FORCE", the following will hold: if the table is from an older MariaDB version and data conversion is needed (for example for old type HASH columns, the MySQL JSON type or the new TIMESTAMP format), "ALTER TABLE table_name FORCE, ALGORITHM=COPY" will be used.
The differences between the algorithms are:
1) Will use the fastest algorithm the engine supports to do a full repair of the table (except if data conversion is needed).
2) Will use the storage engine's internal REPAIR facility (MyISAM, Aria). If the engine does not support REPAIR then "ALTER TABLE FORCE, ALGORITHM=COPY" will be used. If there were data incompatibilities (which means that FORCE was used) then there will be a warning after REPAIR that ALTER TABLE FORCE is still needed. The reason for this is that REPAIR may be able to work around data errors (wrong incompatible data, crashed or unreadable sectors) that ALTER TABLE cannot handle.
3) Will use the storage engine's internal OPTIMIZE. If the engine does not support OPTIMIZE, then "ALTER TABLE FORCE" is used.
The above will ensure that ALTER TABLE FORCE is able to correct almost any errors in the row or index data. In the case of corrupted blocks, REPAIR, possibly followed by ALTER TABLE, is needed. This is important as mariadb-upgrade executes ALTER TABLE table_name FORCE for any table that must be re-created.
Bugs fixed with InnoDB tables when using ALTER TABLE FORCE:
- No error for INNODB_DEFAULT_ROW_FORMAT=COMPACT even if the row length would be too wide (independent of innodb_strict_mode).
- Tables using symlinks will be symlinked after any of the above commands (independent of the setting of --symbolic-links).
If one specifies an algorithm together with ALTER TABLE FORCE, things will work as before (except if data conversion is required, as then the COPY algorithm is enforced).
ALTER TABLE .. OPTIMIZE ALL PARTITIONS will work as before.
Other things:
- FORCE argument added to REPAIR to allow one to first run an internal repair to fix damaged blocks and then follow it with ALTER TABLE.
- REPAIR will not update frm_version if ha_check_for_upgrade() finds that the table is still incompatible with the current version. In this case the REPAIR will end with an error.
- REPAIR for storage engines that do not have native repair, like InnoDB, now uses ALTER TABLE FORCE.
- REPAIR csv-table USE_FRM now works.
  - It did not work before as CSV tables had the extension list in the wrong order.
- Default error message length for %M increased from 128 to 256 to not cut off information from REPAIR.
- Documented HA_ADMIN_XX variables related to repair.
- Added HA_ADMIN_NEEDS_DATA_CONVERSION to signal that we have to do data conversions when converting the table (and thus the ALTER TABLE COPY algorithm is needed).
- Fixed a typo in an error message (caused test changes).
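The three repair paths, plus the new REPAIR ... FORCE option, as a hedged sketch (the table name is hypothetical; the placement of the FORCE argument follows this commit's description):
  -- Rebuild the .frm and the data in the original row format
  -- (what mariadb-upgrade runs for tables that must be re-created).
  ALTER TABLE t1 FORCE;
  -- Engine-native repair (MyISAM/Aria); engines without native repair,
  -- like InnoDB, now fall back to ALTER TABLE ... FORCE, ALGORITHM=COPY.
  REPAIR TABLE t1;
  -- Reclaim space from deleted rows and rebuild indexes.
  OPTIMIZE TABLE t1;
  -- New: force the engine-internal repair first (e.g. to fix damaged
  -- blocks), typically followed by ALTER TABLE ... FORCE.
  REPAIR TABLE t1 FORCE;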
-
Monty authored
Remove alter_algorithm but keep the variable as a no-op (with a warning).
The reasons for removing alter_algorithm are:
- alter_algorithm was introduced as a replacement for old_alter_table, which was used to force the usage of the original ALTER TABLE algorithm (COPY) in the cases where the new alter algorithm did not work. The new option was added as a way to force the usage of a specific algorithm when it should instead have made it possible to disable algorithms that would not work for some reason.
- alter_algorithm introduced some cases where ALTER TABLE would not work without specifying the ALGORITHM=XXX option together with ALTER TABLE.
- Having different values of alter_algorithm on master and slave could cause the slave to stop unexpectedly.
- ALTER TABLE FORCE, as used by mariadb-upgrade, would not always work if alter_algorithm was set for the server.
- As part of the MDEV-33449 "improving repair of tables" work it became clear that alter_algorithm made it harder to provide a better and more consistent ALTER TABLE FORCE and REPAIR TABLE, and it would be better to remove it.
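A hedged sketch of the resulting behaviour (assuming the no-op variable still accepts its old values and only warns; the table and column are hypothetical):
  -- Setting the variable is still accepted but is now a no-op with a warning.
  SET SESSION alter_algorithm = 'COPY';
  SHOW WARNINGS;
  -- The algorithm is instead chosen per statement; this fails with an error
  -- if the engine cannot do the change in place.
  ALTER TABLE t1 ADD COLUMN c INT, ALGORITHM = INPLACE;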
-
Monty authored
This ensures that the 'variables that only exist for compatibility' are always kept at their default value. Based on an idea from Sergei Golubchik.
-
Monty authored
Also fixed that all unused variables use the same variable comment. The warning will be tested with the next commit, which deprecates the variable alter_algorithm.
-