- 14 Feb, 2024 1 commit
-
-
Monty authored
binlog_space_limit is a variable in Percona Server used to limit the total size of all binary logs. This implementation is based on code from Percona Server 5.7. In MariaDB we decided to call the variable max-binlog-total-size to be similar to max-binlog-size, which makes it easier to find in the output from 'mariadbd --help --verbose'. MariaDB will also support binlog_space_limit for compatibility with Percona. While running, MariaDB does not delete binary logs that are either in use by slaves or contain active xids that are not yet committed. Some implementation notes:
- max-binlog-total-size is by default 0 (no limit).
- max-binlog-total-size can be changed without a server restart.
- Binlog file sizes are checked on startup, or when max-binlog-total-size is set to a value > 0, not for every log write. The total size of all binary logs is cached and dynamically updated when updating the binary log on binary log rotation.
- max-binlog-total-size is checked against existing log files during server start, binlog rotation, FLUSH LOGS, when writing to the binary log or when max-binlog-total-size changes value.
- Option --slave-connections-needed-for-purge, with 1 as default, was added. This allows one to ensure that binary logs are not deleted if fewer than 'slave-connections-needed-for-purge' slaves are connected. Without this option, max-binlog-total-size could potentially delete binlogs needed by slaves on server startup or when a slave disconnects, as there are then no connected slaves to protect active binlogs.
- PURGE BINARY LOGS TO ... is executed as if slave-connections-needed-for-purge were zero. In other words, it will do the purge even if no slaves are connected. If there are connected slaves working on the logs, these will be protected.
- If the binary log is on and max-binlog-total-size <> 0, the status variable 'Binlog_disk_use' shows the current size of all old binary logs + the state of the current one.
- Removed the test of strcmp(log_file_name, log_info.log_file_name) in purge_logs_before_date() as this is tested in can_purge_logs().
- To avoid expensive calls of log_in_use(), we cache the result for the last log that is in use by a slave. Future calls to can_purge_logs() for this binary log will be quickly detected and false will be returned until a slave starts working on a new log.
- Note that after a binary log rotation caused by max_binlog_size, the last log will not be purged directly as it is still in use internally. The next binary log write will purge binlogs if needed.
Reviewer: Kristian Nielsen <knielsen@knielsen-hq.org>
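A minimal usage sketch, assuming the underscore forms of the option names above and that both variables are dynamic (the commit states max-binlog-total-size can be changed without a restart); the byte value is illustrative:
  -- Cap the total size of all binary logs at 1 GiB (0, the default, means no limit)
  SET GLOBAL max_binlog_total_size = 1073741824;
  -- Only purge automatically while at least one slave is connected (the default)
  SET GLOBAL slave_connections_needed_for_purge = 1;
  -- Current disk space used by the binary logs
  SHOW GLOBAL STATUS LIKE 'Binlog_disk_use';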
-
- 27 Jan, 2024 1 commit
-
-
Kristian Nielsen authored
Improve the performance of slave connect using B+-Tree indexes on each binlog file. The index allows fast lookup of a GTID position to the corresponding offset in the binlog file, as well as lookup of a position to find the corresponding GTID position. This eliminates a costly sequential scan of the starting binlog file to find the GTID starting position when a slave connects. This is especially costly if the binlog file is not cached in memory (IO cost), or if it is encrypted or a lot of slaves connect simultaneously (CPU cost). The size of the index files is generally less than 1% of the binlog data, so it is not expected to be an issue. Most of the work writing the index is done as a background task, in the binlog background thread. This minimises the performance impact on transaction commit. A simple global mutex is used to protect index reads and (background) index writes; this is fine as slave connect is a relatively infrequent operation. Here are the user-visible options and status variables. The feature is on by default and is expected to need no tuning or configuration for most users.
binlog_gtid_index
  On by default. Can be used to disable the indexes for testing purposes.
binlog_gtid_index_page_size (default 4096)
  Page size to use for the binlog GTID index. This is the size of the nodes in the B+-tree used internally in the index. A very small page-size (64 is the minimum) will be less efficient, but can be used to stress the BTree-code during testing.
binlog_gtid_index_span_min (default 65536)
  Control sparseness of the binlog GTID index. If set to N, at most one index record will be added for every N bytes of binlog file written. This can be used to reduce the number of records in the index, at the cost only of having to scan a few more events in the binlog file before finding the target position.
Two status variables are available to monitor the use of the GTID indexes:
  Binlog_gtid_index_hit
  Binlog_gtid_index_miss
The "hit" status increments for each successful lookup in a GTID index. The "miss" increments when a lookup is not possible. This indicates that the index file is missing (eg. binlog written by an old server version without GTID index support), or corrupt. Signed-off-by:
Kristian Nielsen <knielsen@knielsen-hq.org>
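A small sketch of inspecting or tuning the feature, using the variable and status names listed above and assuming the switch is dynamic:
  -- See whether GTID index lookups are being used when slaves connect
  SHOW GLOBAL STATUS LIKE 'Binlog_gtid_index%';
  -- Current settings
  SHOW GLOBAL VARIABLES LIKE 'binlog_gtid_index%';
  -- Disable the index, e.g. for testing (on by default)
  SET GLOBAL binlog_gtid_index = OFF;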
-
- 30 Nov, 2023 1 commit
-
-
Vladislav Vaintroub authored
Connector/NET does not expect collation IDs returned by "show collations" to be NULL and runs into an exception. The fix is to detect Connector/NET using its connection attributes, then make sure "show collations" does not output NULL IDs. The patch introduces a new old_mode NO_NULL_COLLATION_IDs that is automatically set once a MySQL Connector/NET connection is detected. A test was added that uses MySql.Data from powershell; it only works if MySql.Data is installed into the GAC (i.e. with the C/NET MSI package).
-
- 23 Nov, 2023 1 commit
-
-
Libing Song authored
This patch adds a new mode, FULL_NODUP, to the binlog_row_image system variable. With FULL_NODUP mode, all columns are included in the before image, but only updated columns are included in the after image for UPDATE, while all columns are included in the after image for INSERT. FULL_NODUP is intended to replace FULL mode. It includes all data of the before and after image, as FULL mode does, but it uses less storage, especially in the case that only a few columns are updated. Note: the server binlogs the full before and after image for all modes if the table has no primary key; FULL_NODUP follows that behavior.
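A minimal sketch of switching the row image mode; the scopes shown are illustrative and the value name comes from the commit message:
  -- Log full before images but deduplicated after images for UPDATEs
  SET GLOBAL binlog_row_image = 'FULL_NODUP';
  -- Or for the current session only
  SET SESSION binlog_row_image = 'FULL_NODUP';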
-
- 02 Nov, 2023 1 commit
-
-
Oleksandr Byelkin authored
-
- 27 Oct, 2023 1 commit
-
-
Kristian Nielsen authored
Compute binlog checksums (when enabled) already when writing events into the statement or transaction caches, where before it was done when the caches are copied to the real binlog file. This moves the checksum computation outside of holding LOCK_log, improving scalability. At stmt/trx cache write time, the final end_log_pos values are not known, so with this patch these will be set to 0. Events that are written directly to the binlog file (not through stmt/trx cache) keep the correct end_log_pos value. The GTID and COMMIT/XID events at the start and end of event groups are written directly, so the zero end_log_pos is only seen for events in the middle of event groups, which does not negatively affect replication. An option --binlog-legacy-event-pos, off by default, is provided to disable this behavior, for backwards compatibility with any external applications that might rely on end_log_pos in events in the middle of event groups. Checksums cannot be pre-computed when binlog encryption is enabled, as encryption relies on a correct end_log_pos to provide part of the nonce/IV. Checksum pre-computation is also disabled for WSREP/Galera, as it uses events differently in its write-sets and so on. Extending pre-computation of checksums to Galera where it makes sense could be added in a future patch. The current --binlog-checksum configuration is saved in binlog_cache_data at transaction start and used to pre-compute checksums in the cache, if applicable. When the cache is later copied to the binlog, a check is made whether the saved value still matches the configured global value; if so, the events are block-copied directly into the binlog file. If --binlog-checksum was changed during the transaction, events are re-written to the binlog file one-by-one and the checksums recomputed/discarded as appropriate. Reviewed-by:
Monty <monty@mariadb.org> Signed-off-by:
Kristian Nielsen <knielsen@knielsen-hq.org>
-
- 26 Oct, 2023 1 commit
-
-
Yuchen Pei authored
Add a global/session variable `redirect_url' of string type. The initial value is empty. It can be supplied to mysqld with --redirect-url or set in --init-connect. A valid redirect_url should be of the format {mysql,mariadb}://host[:port], where <host> is an arbitrary string not containing colons and <port> is a number between 0 and 65535 inclusive. The variable is used by the server to notify clients that they should connect to another server, specified by the value of the variable, if not empty. The notification is done by including the variable in session_track_system_variables.
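An illustrative way to point clients at another server; the host and port are placeholders:
  -- Tell clients (via session tracking) to reconnect to another server
  SET GLOBAL redirect_url = 'mariadb://db2.example.com:3306';
  -- An empty value (the default) disables the redirect notification
  SET GLOBAL redirect_url = '';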
-
- 19 Oct, 2023 1 commit
-
-
Sergei Petrunia authored
(Variant#3: Allow cross-charset comparisons, use a special CHARSET_INFO to create lookup keys. Review input addressed.) Equalities that compare utf8mb{3,4}_general_ci strings, like: WHERE ... utf8mb3_key_col=utf8mb4_value (MB3-4-CMP) can now be used to construct ref[const] access and also participate in multiple-equalities. This means that utf8mb3_key_col can be used for key-lookups when compared with a utf8mb4 constant, field or expression using the '=' or '<=>' comparison operators. This is controlled by optimizer_switch='cset_narrowing=on', which is OFF by default.
IMPLEMENTATION
Item value comparison in (MB3-4-CMP) is done using utf8mb4_general_ci. This is valid as any utf8mb3 value is also a utf8mb4 value. When making the index lookup value for utf8mb3_key_col, we do "Charset Narrowing": characters that are in the Basic Multilingual Plane (=BMP) are copied as-is, as they can be represented in utf8mb3. Characters that are outside the BMP cannot be represented in utf8mb3 and are replaced with U+FFFD, the "Replacement Character". In utf8mb4_general_ci, the Replacement Character compares as equal to any character that is not in the BMP. Because of this, the constructed lookup value will find all index records that would be considered equal by the original condition (MB3-4-CMP). Approved-by:
Monty <monty@mariadb.org>
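A hypothetical example of the narrowing in action; the table and column names are made up:
  SET SESSION optimizer_switch = 'cset_narrowing=on';
  -- t_mb3.key_col uses utf8mb3_general_ci; the constant is utf8mb4.
  -- With the switch on, this comparison can use ref access on key_col.
  EXPLAIN SELECT * FROM t_mb3 WHERE key_col = _utf8mb4'söme välue';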
-
- 03 Oct, 2023 2 commits
-
-
Monty authored
This allows a user to change the default value of MAX_SEL_ARGS (16000) in the rare case where they need more generated SEL_ARGS (as part of the range optimizer).
-
Monty authored
Raise notes if indexes cannot be used:
- in case of a data type or collation mismatch (different error messages).
- in case a table field was replaced with something else (e.g. Item_func_conv_charset) during a condition rewrite.
Added an option to write warnings and notes to the slow query log for slow queries. New variables added/changed:
- note_verbosity, which is a set of the following options:
  basic         - All old notes.
  unusable_keys - Print warnings about keys that cannot be used for select, delete or update.
  explain       - Print unusable_keys warnings for EXPLAIN queries.
  The default is 'basic,explain'. This means that for old installations the only notable new behavior is that one will get notes about unusable keys when one does an EXPLAIN for a query. One can turn off all notes by either setting note_verbosity to "" or setting sql_notes=0.
- log_slow_verbosity has a new option 'warnings'. If this is set, then warnings and notes generated are printed in the slow query log (up to log_slow_max_warnings times per statement).
- log_slow_max_warnings - Max number of warnings written to the slow query log.
Other things:
- One can now use =ALL for any 'set' variable to set all options at once. For example, using "note_verbosity=ALL" in a config file or "SET @@note_verbosity=ALL" in SQL.
- mysqldump will in the future use @@note_verbosity="" instead of @@sql_notes=0 to disable notes.
- Added "enum class Data_type_compatibility" and changed the return type of all Field::can_optimize*() methods from "bool" to this new data type.
Reviewer & Co-author: Alexander Barkov <bar@mariadb.com> - The code that prints out the notes comes mainly from Alexander.
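An illustrative configuration combining the new variables; the values are examples, not recommendations:
  -- Get notes about unusable keys for normal statements too, not only EXPLAIN
  SET GLOBAL note_verbosity = 'basic,unusable_keys,explain';
  -- Also copy up to 10 of those warnings/notes into the slow query log
  SET GLOBAL log_slow_verbosity = 'warnings';
  SET GLOBAL log_slow_max_warnings = 10;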
-
- 30 Sep, 2023 1 commit
-
-
Sergei Golubchik authored
In particular:
* @@debug deprecated since 5.5.37
* sr_YU locale deprecated since 10.0.11
* "engine_condition_pushdown" in the @@optimizer_switch deprecated since 10.1.1
* @@date_format, @@datetime_format, @@time_format, @@max_tmp_tables deprecated since 10.1.2
* @@wsrep_causal_reads deprecated since 10.1.3
* "parser" in mroonga table comment deprecated since 10.2.11
-
- 12 Sep, 2023 1 commit
-
-
Sergei Petrunia authored
(Review input addressed) (Added handling of UPDATE/DELETE and partitioning w/o index)
If the properties of the used collation allow, do the following equivalent rewrites:
1. UPPER(key_col)=expr  -> key_col=expr
   expr=UPPER(key_col)  -> expr=key_col
   (also rewrite both sides of the equality at the same time)
2. UPPER(key_col) IN (constant-list) -> key_col IN (constant-list)
- Mark utf8mb{3,4}_general_ci as collations that allow this.
- Add optimizer_switch='sargable_casefold=ON' to control this. (ON by default in this patch)
- Cover the rewrite in Optimizer Trace, rewrite name is "sargable_casefold_removal".
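A hypothetical query showing the effect; the table, column and index are made up:
  -- With sargable_casefold=on (the default here), the UPPER() call on the
  -- indexed utf8mb4_general_ci column is removed and the index can be used.
  SET SESSION optimizer_switch = 'sargable_casefold=on';
  EXPLAIN SELECT * FROM users WHERE UPPER(login) = 'ADMIN';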
-
- 15 Aug, 2023 1 commit
-
-
Nikita Malyavin authored
Add the LOCK_ALTER_TABLE_COPY bit to old_mode. It disables online copy by default, but still allows forcing it with an explicit LOCK=NONE.
-
- 02 Aug, 2023 2 commits
-
-
Sergei Golubchik authored
AES_ENCRYPT(str, key [, iv [, mode ]])
AES_DECRYPT(str, key [, iv [, mode ]])
mode is aes-{128,192,256}-{ecb,cbc,ctr}, e.g. "aes-128-cbc", and a @@block_encryption_mode variable provides the default value of mode.
Change in behavior: AES_ENCRYPT(str, key) can no longer be used in persistent virtual columns (and the like).
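A small illustration of the extended functions; the key and IV below are placeholders (the IV must be 16 bytes for the CBC/CTR modes), and the session variable supplying the default mode is assumed to apply when the mode argument is omitted:
  -- Explicit mode and IV
  SELECT HEX(AES_ENCRYPT('secret', 'mykey', '0123456789abcdef', 'aes-128-cbc'));
  SELECT AES_DECRYPT(AES_ENCRYPT('secret', 'mykey', '0123456789abcdef', 'aes-128-cbc'),
                     'mykey', '0123456789abcdef', 'aes-128-cbc');
  -- Let @@block_encryption_mode supply the mode instead
  SET SESSION block_encryption_mode = 'aes-256-ctr';
  SELECT HEX(AES_ENCRYPT('secret', 'mykey', '0123456789abcdef'));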
-
Sergei Golubchik authored
-
- 17 Jul, 2023 1 commit
-
-
Alexander Barkov authored
This patch adds a way to override default collations (or "character set collations") for desired character sets. The SQL standard says:
> Each collation known in an SQL-environment is applicable to one
> or more character sets, and for each character set, one or more
> collations are applicable to it, one of which is associated with
> it as its character set collation.
In MariaDB, character set collations have been hard-coded so far, e.g. utf8mb4_general_ci has been a hard-coded character set collation for utf8mb4. This patch allows overriding (globally per server, or per session) character set collations, so for example, uca1400_ai_ci can be set as the character set collation for Unicode character sets (instead of the compiled xxx_general_ci). The array of overridden character set collations is stored in a new (session and global) system variable @@character_set_collations and can be set as a comma separated list of charset=collation pairs, e.g.:
SET @@character_set_collations='utf8mb3=uca1400_ai_ci,utf8mb4=uca1400_ai_ci';
The variable is empty by default, which means using the hard-coded character set collations (e.g. utf8mb4_general_ci for utf8mb4). The variable can also be set globally by passing it on the server startup command line, and/or in my.cnf.
-
- 07 Jul, 2023 1 commit
-
-
Monty authored
The new statistics are enabled by adding the "engine", "innodb" or "full" option to --log-slow-verbosity.
Example output:
# Pages_accessed: 184  Pages_read: 95  Pages_updated: 0  Old_rows_read: 1
# Pages_read_time: 17.0204  Engine_time: 248.1297
Pages_read_time is the time spent doing physical reads inside a storage engine. (Writes cannot be tracked as these are usually done in the background.) Engine_time is the time spent inside the storage engine for the full duration of the read/write/update calls. It uses the same code as 'analyze statement' for calculating the time spent. The engine statistics are collected through a generic interface that should be easy for any engine to use. It can also easily be extended to provide even more statistics. Currently only InnoDB has counters for the Pages_% and Undo_% status. Engine_time works for all engines.
Implementation details:
class ha_handler_stats holds all engine stats. This class is included in the handler and THD classes. While a query is running, all statistics are updated in the handler. In close_thread_tables() the statistics are added to the THD. handler::handler_stats is a pointer to where statistics should be collected. This is set to point to handler::active_handler_stats if stats are requested; if not, it is set to 0. handler_stats also has an element, 'active', that is 1 if stats are requested. This is to allow engines to avoid doing any 'if's while updating the statistics. Cloned or partitioned tables have the pointer set to the base table if stats are requested.
There is a small performance impact when using --log-slow-verbosity=engine:
- All engine calls in 'select' will be timed.
- IO calls for InnoDB reads will be timed.
- Counters are incremented on local variables and accesses are inline, so these should have very little impact.
- Statistics have to be reset for each statement for the THD and each used handler. This is only 40 bytes, which should be negligible.
- For partitioned tables we have to loop over all partitions to update the handler_status as part of table_init(). This can be optimized in the future to only be done if log-slow-verbosity changes. For this to work we have to update handler_status for all opened partitions and also for all partitions opened in the future.
Other things:
- Added options 'engine' and 'full' to log-slow-verbosity.
- Some of the new files in the test suite come from Percona server, which has similar status information.
- buf_page_optimistic_get(): Do not increment any counter, since we are only validating a pointer, not performing any buf_pool.page_hash lookup.
- Added a THD argument to save_explain_data_intern().
- Switched arguments for save_explain_.*_data() to always have THD first (generates better code as other functions also have THD first).
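A minimal way to turn the per-engine statistics on and see them in the slow query log; the threshold value is illustrative:
  SET GLOBAL slow_query_log = 1;
  SET GLOBAL log_slow_verbosity = 'engine';   -- or 'full' for everything
  SET GLOBAL long_query_time = 1;             -- log statements slower than 1 second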
-
- 03 May, 2023 1 commit
-
-
Monty authored
Introduces the @@optimizer_switch flag hash_join_cardinality. When this option is on, EITS statistics are used to produce tighter bounds for hash join output cardinality. This patch is an extension of / replacement for a similar patch in 10.6.
New features:
- optimizer_switch hash_join_cardinality is on by default
- records_out is set to fanout when HASH is used
- Fixed a bug in is_eits_usable: the function did not work with views
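Toggling the flag, should one want the old estimates back; the flag name is taken from the commit message:
  -- On by default in this version
  SET SESSION optimizer_switch = 'hash_join_cardinality=on';
  -- Fall back to the previous, looser cardinality estimate
  SET SESSION optimizer_switch = 'hash_join_cardinality=off';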
-
- 28 Apr, 2023 1 commit
-
-
Sergei Petrunia authored
Introduce the @@optimizer_switch flag hash_join_cardinality. When it is on, use EITS statistics to produce tighter bounds for hash join output cardinality. Amended by Monty. Reviewed by: Monty <monty@mariadb.org>
-
- 26 Apr, 2023 1 commit
-
-
Sergei Golubchik authored
remove ancient hard-coded treatment of --core-file. This enables normal my_getopt behavior for the already existing sysvar
-
- 24 Mar, 2023 1 commit
-
-
Otto Kekalainen authored
- agressively -> aggressively
- exising -> existing
- occured -> occurred
- releated -> related
- seperated -> separated
- sucess -> success
- use use -> use
All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
- 10 Feb, 2023 1 commit
-
-
Sergei Golubchik authored
avoid contaminating my_getopt with sysvar implementation details. adjust variable values after my_getopt, like it's done for others. this fixes --help to show correct values.
-
- 03 Feb, 2023 1 commit
-
-
Monty authored
-
- 02 Feb, 2023 4 commits
-
-
Monty authored
This includes all test changes from "Changing all cost calculation to be given in milliseconds" and forwards. Some of the things that caused changes in the result files:
- As part of fixing tests, I added 'echo' to some comments to be able to more easily find out where things were wrong.
- MATERIALIZED now has a higher cost compared to X than before. Because of this some MATERIALIZED types have changed to DEPENDENT SUBQUERY.
- Some test cases that required MATERIALIZED to repeat a bug were changed by adding more rows to force MATERIALIZED to happen.
- 'Filtered' in SHOW EXPLAIN has in many cases changed from 100.00 to something smaller. This is because now filtered also takes into account the smallest possible ref access and filters, even if they were not used. Another reason for 'Filtered' being smaller is that we now also take into account implicit filtering done for subqueries using FIRSTMATCH. (main.subselect_no_exists_to_in) This is calculated in best_access_path() and stored in records_out.
- Table order has changed because of more accurate costs.
- 'index' and 'ALL' for small tables have changed to use 'range' or 'ref' because of optimizer_scan_setup_cost.
- index can be changed to 'range' as the 'range' optimizer assumes we don't have to read the blocks from disk that the range optimizer has already read. This can be confusing in the case where there is no obvious where clause but instead there is a hidden 'key_column > NULL' added by the optimizer. (main.subselect_no_exists_to_in)
- Scan on a primary clustered key does not report 'Using Index' anymore (it's a table scan, not an index scan).
- For derived tables, the number of rows is now 100 instead of 2, which can be seen in EXPLAIN.
- More tests have "Using index for group by" as the cost of this optimization is now more correct (lower).
- A primary key could be preferred over a normal key, even if it would access more rows, as it's faster to do 1 lookup and 3 'index_next' on a clustered primary key than one lookup through a secondary key. (main.stat_tables_innodb)
Notes:
- There were 4.7% more calls to best_extension_by_limited_search() in the main.greedy_optimizer test. However, examining the test results it looked like the plans were slightly better (eq_ref were more chained together), so I assume this is ok.
- I have verified a few test cases where there were notable/unexpected changes in the plan and in all cases the new optimizer plans were faster. (main.greedy_optimizer and some others)
-
Monty authored
This makes it easier to compare different costs and also allows the optimizer to optimize different storage engines more reliably.
- Added tests/check_costs.pl, a tool to verify optimizer cost calculations. Most engine costs have been found with this program. All steps to calculate the new costs are documented in Docs/optimizer_costs.txt
- User optimizer_cost variables are given in microseconds (as individual costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost (9 ms) to a common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST. This makes it easy to apply different cost modifiers in ha_..time() functions for io and cpu costs.
  - scan_time()
  - rnd_pos_time() & rnd_pos_call_time()
  - keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set of keys with a given number of ranges and an optional number of blocks that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same thing and more.
- Tuned costs for: heap, myisam, Aria, InnoDB, archive and MyRocks. Used heap table costs for json_table. The rest are using default engine costs.
- Added the following new optimizer variables:
  - optimizer_disk_read_ratio
  - optimizer_disk_read_cost
  - optimizer_key_lookup_cost
  - optimizer_row_lookup_cost
  - optimizer_row_next_find_cost
  - optimizer_scan_cost
- Moved all engine specific costs to the OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when recalculating costs.
- Split optimizer_costs.h into optimizer_costs.h and optimizer_defaults.h. This allows one to change costs without having to compile a lot of files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search() for the sorting cost.
- Fixed previous issues with the 'filtered' explain column as we are now using 'records_out' (min rows seen for table) to calculate filtering. This greatly simplifies the filtering code in JOIN_TAB::save_explain_data().
This change caused a lot of queries to be optimized differently than before, which exposed different issues in the optimizer that need to be fixed. These fixes are in the following commits. To not have to change the same test case over and over again, the changes in the test cases are done in a single commit after all the critical change sets are done.
InnoDB changes:
- Updated InnoDB to not divide the big range cost by 2.
- Added costs for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This prevents the optimizer from trying to use index-only scans on the clustered key.
- Disabled ha_innobase::scan_time(), ha_innobase::read_time() and ha_innobase::rnd_pos_time() as the default engine cost functions now work well for InnoDB.
Other things:
- Added a --show-query-costs (\Q) option to mysql.cc to show the query cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE, which allows one to adjust the value that the user has given. This is used to change costs from microseconds (user input) to milliseconds (what the server uses internally).
- Added include/my_tracker.h; a useful include file to quickly test costs of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and shown in microseconds for the user but stored as milliseconds. This is to make the numbers easier to read for the user (fewer leading zeros). Implemented in the 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set for the table. This is what we do in other places of the server.
- Added a THD parameter to Unique::get_use_cost() and check_index_intersect_extension() and similar functions to be able to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3) to calculate the usage space of keys in b-trees. (Before we used numeric constants.)
- Removed code that assumed that b-trees have costs similar to binary trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out' based on the new fields from best_access_path().
Bug fixes:
- Some queries did not update last_query_cost (was 0). Fixed by moving the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as the number of rows for const tables with a matching row.
Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure. When a handlerton is created, we also create a new cost variable for the handlerton. We also create a new variable if the user changes an optimizer cost for a not yet loaded handlerton, either with command line arguments or with SET @@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs   The default costs + changes from the command line without an engine specifier.
  heap_optimizer_costs      Heap table costs, used for temporary tables.
  tmp_table_optimizer_costs The costs for the default on-disk internal temporary table (MyISAM or Aria).
- The engine cost for a table is stored in table_share. To speed up accesses the handler has a pointer to this. The cost is copied to the table on first access. If one wants to change the cost one must first update the global engine cost and then do a FLUSH TABLES. This was done to be able to access the costs for an open table without any locks.
- When a handlerton is created, the costs are updated the following way (see sql/keycaches.cc for details):
  - Use 'default_optimizer_costs' as a base.
  - Call hton->update_optimizer_costs() to override with the engine's default costs.
  - Override the costs that the user has specified for the engine.
  - On handler open, copy the engine cost from the handlerton to TABLE_SHARE.
  - Call handler::update_optimizer_costs() to allow the engine to update costs for this particular table.
- There are two costs stored in THD. These are copied to the handler when the table is used in a query:
  - optimizer_where_cost
  - optimizer_scan_setup_cost
- Simplify the code in best_access_path() by storing all cost results in a structure. (Idea/Suggestion by Igor)
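An illustrative example of adjusting the new cost variables from SQL; the values are placeholders, and the SET @@global.engine.variable form is the one described above:
  -- Server-wide default for all engines (input in microseconds)
  SET @@global.optimizer_disk_read_cost = 10.0;
  -- Per-engine override, picked up by already-open tables after a FLUSH TABLES
  SET @@global.innodb.optimizer_disk_read_cost = 8.0;
  FLUSH TABLES;
  SHOW GLOBAL VARIABLES LIKE 'optimizer%cost%';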
-
Monty authored
Variables added:
- optimizer_index_block_copy_cost
- optimizer_key_copy_cost
- optimizer_key_next_find_cost
- optimizer_key_compare_cost
- optimizer_row_copy_cost
- optimizer_where_compare_cost
Some defines were renamed to make the internal defines similar to the visible ones:
TIME_FOR_COMPARE -> WHERE_COST; WHERE_COST was also "inverted" to be a number between 0 and 1 that is multiplied with accepted records (similar to other optimizer variables).
TIME_FOR_COMPARE_IDX -> KEY_COMPARE_COST. This is also inverted, similar to TIME_FOR_COMPARE.
TIME_FOR_COMPARE_ROWID -> ROWID_COMPARE_COST. This is also inverted, similar to TIME_FOR_COMPARE.
All default costs are identical to what they were before this patch.
Other things:
- The compare factor in get_merge_buffers_cost() was inverted.
- Changed namespace to static in filesort_utils.cc
-
Monty authored
Before this patch, when calculating the cost of fetching and using a row/key from the engine, we took into account the cost of finding a row or key from the engine, but did not consistently take into account index-only accesses, clustered keys or covered keys for all access paths. The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently considered in best_access_path(). TIME_FOR_COMPARE was used in calculations in other places, like greedy_search(), but was in some cases (like scans) applied to a different number of rows than was accessed. The cost calculation of row and index scans didn't take into account the number of rows that were accessed, only the number of accepted rows. When using a filter, the cost of index_only_reads and the cost of accessing and disregarding 'filtered rows' were not taken into account, which made filters cost less than they actually did.
To remedy the above, the following key & row fetch related costs have been added:
- The cost of fetching and using a row is now split into different costs:
  - key + row fetch cost (as before), but multiplied with the variable 'optimizer_cache_cost' (default 0.5). This allows the user to tell the optimizer the likelihood of finding the key and row in the engine cache.
  - ROW_COPY_COST, the cost of copying a row from the engine to the SQL layer or creating a row from the join_cache to the record buffer. Mostly affects table scan costs.
  - ROW_LOOKUP_COST, the cost of fetching a row by rowid.
  - KEY_COPY_COST, the cost of finding the next key and copying it from the engine to the SQL layer. This is used when we calculate the cost of index-only reads. It makes index scans more expensive than before if they cover a lot of rows. (main.index_merge_myisam)
  - KEY_LOOKUP_COST, the cost of finding the first key in a range. This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
  - KEY_NEXT_FIND_COST, the cost of finding the next key (and rowid) when doing an index scan and comparing the rowid to the filter. Before, this cost was assumed to be 0.
All of the above constants/variables are now tuned to be somewhat in proportion of executing complexity to each other. There is a need to tune these in the future, but that can wait until the above are made user variables as that will make tuning much easier.
To make the usage of the above easy, there are new (not virtual) cost calculation functions in handler:
- ha_read_time(), like read_time(), but takes optimizer_cache_cost into account.
- ha_read_and_copy_time(), like ha_read_time(), but takes ROW_COPY_TIME into account.
- ha_read_and_compare_time(), like ha_read_and_copy_time(), but takes TIME_FOR_COMPARE into account.
- ha_rnd_pos_time(). Read a row by row id, taking ROW_COPY_COST into account. This is used with filesort where we don't need to execute the WHERE clause again.
- ha_keyread_time(), like keyread_time(), but takes optimizer_cache_cost into account.
- ha_keyread_and_copy_time(), like ha_keyread_time(), but adds KEY_COPY_COST.
- ha_key_scan_time(), like key_scan_time(), but takes optimizer_cache_cost into account.
- ha_key_scan_and_compare_time(), like ha_key_scan_time(), but adds KEY_COPY_COST & TIME_FOR_COMPARE.
I also added some setup costs for doing different types of scans and creating temporary tables (on disk and in memory). This encourages the optimizer to not use these for simple 'a few row' lookups if there are adequate key lookup strategies.
- TABLE_SCAN_SETUP_COST, cost of starting a table scan.
- INDEX_SCAN_SETUP_COST, cost of starting an index scan.
- HEAP_TEMPTABLE_CREATE_COST, cost of creating an in-memory temporary table.
- DISK_TEMPTABLE_CREATE_COST, cost of creating an on-disk temporary table.
When calculating the cost of fetching ranges, we had a cost of IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is now replaced with 'io_cost * KEY_LOOKUP_COST (1.0) * optimizer_cache_cost', which matches the cost we use for 'ref' and other key lookups. The effect is that the cost is now a bit higher when we have many ranges for a key.
Almost all calculation with TIME_FOR_COMPARE is now done in best_access_path(). 'JOIN::read_time' now includes the full cost for finding the rows in the table.
In the result files, many of the changes are now again close to what they were before the "Update cost for hash and cached joins" commit, as that commit didn't fix the filter cost (too complex to do everything in one commit).
The above changes exposed a lot of inconsistencies in optimizer cost calculation. The main objective with the other changes was to do calculations as similarly (and accurately) as possible and to make different plans more comparable.
Detailed list of changes:
- Calculate index_only_cost consistently and correctly for all scan and ref accesses. The row fetch_cost and index_only_cost now take into account clustered keys, covered keys and index-only accesses.
- cost_for_index_read now returns both the full cost and index_only_cost.
- Fixed the cost calculation of get_sweep_read_cost() to match other similar costs. This is based on the assumption that data is more often stored on an SSD than a hard disk.
- Replaced the constant 2.0 with the new define TABLE_SCAN_SETUP_COST.
- Some scan cost estimates did not take into account TIME_FOR_COMPARE. Now all scan costs take this into account. (main.show_explain)
- Added the session variable optimizer_cache_hit_ratio (default 50%). By adjusting this, one can reduce or increase the cost of index or direct record lookups. The effect of the default is that key lookups are now a bit cheaper than before. See usage of 'optimizer_cache_cost' in handler.h.
- JOIN_TAB::scan_time() did not take into account index-only scans, which produced a wrong cost when an index scan was used. Changed JOIN_TAB::scan_time() to take into consideration clustered and covered keys. The values are now cached and we only have to call this function once. Other calls are changed to use the cached values. Function renamed to JOIN_TAB::estimate_scan_time().
- Fixed that most index cost calculations are done the same way and closer to 'range' calculations. The cost is now lower than before for small data sets and higher for large data sets as we take into account how many keys are read (main.opt_trace_selectivity, main.limit_rows_examined).
- Ensured that index_scan_cost() == range(scan_of_all_rows_in_table_using_one_range) + MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there is a choice between doing a full index scan and a range-index scan over almost the whole table, then the index scan will be preferred (no range-read setup cost). (innodb.innodb, main.show_explain, main.range)
- Fixed that EQ_REF and REF take into account clustered and covered keys. This changes some plans to use covered or clustered indexes as these are much cheaper. (main.subselect_mat_cost, main.state_tables_innodb, main.limit_rows_examined)
- Rowid filter setup cost and filter compare cost now take into account fetching and checking the rowid (KEY_NEXT_FIND_COST). (main.partition_pruning heap.heap_btree main.log_state)
- Added KEY_NEXT_FIND_COST to Range_rowid_filter_cost_info::lookup_cost to account for the time to find and check the next key value against the container.
- Introduced ha_keyread_time(rows) that takes into account finding the next row and copying the key value to 'record' (KEY_COPY_COST).
- Introduced ha_key_scan_time() for calculating an index scan over all rows.
- Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
- Added index_only_fetch_cost() as a convenience function to OPT_RANGE.
- keyread_time() cost is slightly reduced to prefer shorter keys. (main.index_merge_myisam)
- All of the above caused some index_merge combinations to be rejected because of cost (main.index_intersect). In some cases 'ref' was replaced with index_merge because of the low cost calculation of get_sweep_read_cost().
- Some index usage moved from PRIMARY to a covering index. (main.subselect_innodb)
- Changed the cost calculation of filters to take KEY_LOOKUP_COST and TIME_FOR_COMPARE into account. See sql_select.cc::apply_filter(). Filter parameters and costs are now written to optimizer_trace.
- Don't use matchings_records_in_range() to try to estimate the number of filtered rows for ranges. The reason is that we want to ensure that 'range' is calculated similarly to 'ref'. There is also more work needed to calculate the selectivity when using ranges and ranges with filtering. This causes the filtering column in EXPLAIN EXTENDED to be 100.00 for some cases where range cannot use filtering. (main.rowid_filter)
- Introduced ha_scan_time() that takes into account the CPU cost of finding the next row and copying the row from the engine to 'record'. This causes the cost of table scans to slightly increase and some tests changed their plan from ALL to RANGE or ALL to ref. (innodb.innodb_mysql, main.select_pkeycache) In a few cases where the scan time of very small tables had a lower cost than a ref or range, things changed from ref/range to ALL. (main.myisam, main.func_group, main.limit_rows_examined, main.subselect2)
- Introduced ha_scan_and_compare_time(), which is like ha_scan_time() but also adds the cost of the where clause (TIME_FOR_COMPARE).
- Added a small cost for creating a temporary table for materialization. This causes some very small tables to use scan instead of materialization.
- Added checking of the WHERE clause (TIME_FOR_COMPARE) of the accepted rows to ROR costs in get_best_ror_intersect().
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join() to ensure that the 'Last_query_cost' status variable contains the same value as the one that was calculated by the optimizer.
- Take avg_io_cost() into account in handler::keyread_time() and handler::read_time(). This should have no effect as it's 1.0 by default, except for heap that overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost adjustments. (main.order_by)
- Added scan type "scan_with_join_cache" for optimizer_trace. This is just to show in the trace what kind of scan was used.
- When using 'scan_with_join_cache', take into account the number of preceding tables (as we have to restore all fields for all previous table combinations when checking the where clause). The new cost added is: (row_combinations * ROW_COPY_COST * number_of_cached_tables). This increases the cost of join buffering in proportion to the number of tables in the join buffer. One effect is that full scans are now done earlier as the cost is then smaller. (main.join_outer_innodb, main.greedy_optimizer)
- Removed the usage of 'worst_seeks' in cost_for_index_read as it caused wrong plans to be created; it preferred JT_EQ_REF even if it would be much more expensive than a full table scan. A related issue was that worst_seeks only applied to full lookups, not to clustered or index-only lookups, which is not consistent. This caused some plans to use index scan instead of eq_ref. (main.union)
- Changed the federated block size from 4096 to 1500, which is the typical size of an IO packet.
- Added costs for reading rows to Federated. Needed as there is no caching of rows in the federated engine.
- Added the ha_innobase::rnd_pos_time() cost function.
- A lot of extra things added to optimizer trace:
  - More costs, especially for materialization and index_merge.
  - Make labels more uniform.
  - Fixed a lot of minor bugs.
  - Added 'trace_started()' around a lot of trace blocks.
- When calculating the ORDER BY with LIMIT cost for using an index, the cost did not take into account the number of row retrievals that have to be done or the cost of comparing the rows with the WHERE clause. The cost calculated would be just a fraction of the real cost. Now we calculate the cost as we do for ranges and 'ref'.
- 'Using index for group-by' is used a bit more than before as we now take into account the WHERE clause cost when comparing with 'ref' and prefer the method with fewer row combinations. (main.group_min_max)
Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans, like in optimize_straight_join() and greedy_search().
- Fixed a bug in save_explain_data where we could test for the wrong index when displaying 'Using index'. This caused some old plans to show 'Using index'. (main.subselect_innodb, main.subselect2)
- Fixed a bug in get_best_ror_intersect() where 'min_cost' was not updated, and the cost we compared with was not the one that was used.
- Fixed a very wrong cost calculation for priority queues in check_if_pq_applicable(). (main.order_by now correctly uses priority queue)
- When calculating the cost of EQ_REF or REF, we added the cost of comparing the WHERE clause with the found rows, not all row combinations. This made ref and eq_ref be regarded as way too cheap compared to other access methods.
- FORCE INDEX cost calculation didn't take into account clustered or covered indexes.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the cost of a JT_REF key. This may be true for an InnoDB primary key, but not for other unique keys or other engines. Now we use a handler function to calculate the cost, which allows us to consistently handle clustered, covered and not covered keys.
- ha_start_keyread() didn't call extra_opt() if keyread was already enabled but still changed the 'keyread' variable (which is wrong). Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take into account io_cost when calculating the cost of ranges.
- fix_semijoin_strategies_for_picked_join_order() used the wrong record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH and SJ_OPT_LOOSE_SCAN.
- Hash joins didn't provide the correct best_cost to the upper level, which means that the cost for hash joins was more expensive than calculated in best_access_path (a difference of 10x * TIME_OF_COMPARE). This is fixed in the new code thanks to the fact that we now include the TIME_OF_COMPARE cost in 'read_time'.
Other things:
- Added some 'if (thd->trace_started())' to speed up code.
- Removed the unused function Cost_estimate::is_zero().
- Simplified testing of HA_POS_ERROR in get_best_ror_intersect(). (No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const() to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h.
Notes:
- In main.rowid_filter the join order of order and lineitem is swapped. This is because the cost of doing a range fetch of lineitem (98 rows) is almost as big as the whole join of order,lineitem. The filtering will also ensure that we only have to do very small key fetches of the rows in lineitem.
- main.index_merge_myisam had a few changes where we are now using fewer keys for index_merge. This is because index scans are now more expensive than before.
- handler->optimizer_cache_cost is updated in ha_external_lock(). This ensures that it is up to date per statement. Not an optimal solution (for locked tables), but should be ok for now.
- 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take the cost of filesort into consideration when a table scan is chosen. (main.myisam_explain_non_select_all)
- perfschema.table_aggregate_global_* has changed because an update on a table with 1 row will now use table scan instead of key lookup.
TODO in upcoming commits:
- Fix selectivity calculation for ranges with and without filtering and when there is a ref access but scan is chosen. For this we have to store the lowest known value for 'accepted_records' in the OPT_RANGE structure.
- Change that records_read does not include filtered rows.
- test_if_cheaper_ordering() needs to be updated to properly calculate costs. This will fix tests like main.order_by_innodb, main.single_delete_update
- Extend get_range_limit_read_cost() to take into consideration cost_for_index_read() if there were no quick keys. This will reduce the computed cost for ORDER BY with LIMIT in some cases. (main.innodb_ext_key)
- Fix that we take into account selectivity when counting the number of rows we have to read when considering using an index table scan to resolve ORDER BY.
- Add a new calculation for rnd_pos_time() where we take into account the benefit of reading multiple rows from the same page.
-
- 22 Jan, 2023 1 commit
-
-
Daniel Black authored
log_slow_filter=admin has been available for a long time. Users can migrate from log_slow_admin_statements=OFF by removing 'admin' from the default log_slow_filter variable setting.
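A sketch of the migration described above; the filter values listed are only examples, the point being that 'admin' is simply left out:
  -- Instead of turning the admin-statement logging variable off, drop 'admin' from the filter:
  SET GLOBAL log_slow_filter = 'full_join,full_scan,tmp_table_on_disk';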
-
- 27 Oct, 2022 1 commit
-
-
Paragoumba authored
* MDEV-24377: Accept comma separated addresses as --bind-address value
When the bind address forms the basis of wsrep based variables, the first address in the comma separated list is used. The test uses the IP address 192.168.0.1 as we need to include multiple addresses. This would produce failures without the following commit. The tests for bind_multiple_address_resolution could return addresses that we cannot bind to. Windows and FreeBSD, and probably other OSs, will terminate the service if addresses are unavailable. We use the WSAEADDRNOTAVAIL / POSIX EADDRNOTAVAIL codes to continue to bind to other interfaces. If, at the end of the bind list, no binds are successful, then we terminate, still leaving the error messages in the log. Co-authored-by:
Daniel Black <daniel@mariadb.org>
-
- 26 Oct, 2022 2 commits
-
-
Sergei Golubchik authored
* clarify the help text for --system-versioning-insert-history
* move the vers_write=false check from Item_field::fix_fields() next to other vers field checks in find_field_in_table()
* move row_start validation from handler::write_row() next to vers_update_fields()
* make the secure_timestamp check happen in one place only, extract it into a function is_set_timestamp_vorbidden()
* overwriting vers fields is an error, just like setting @@timestamp
* don't run vers_insert_history() for every row
-
Aleksey Midenkov authored
1. The system_versioning_insert_history session variable allows the pseudocolumns ROW_START and ROW_END to be specified in INSERT, INSERT..SELECT and LOAD DATA.
2. Cleaned up select_insert::send_data() from setting vers_write, as this parameter is now set on TABLE initialization.
4. Replication of system_versioning_insert_history via option_bits in OPTIONS_WRITTEN_TO_BIN_LOG.
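A hypothetical example of loading history rows directly; the table name and timestamps are made up:
  CREATE TABLE t_hist (a INT) WITH SYSTEM VERSIONING;
  SET SESSION system_versioning_insert_history = ON;
  -- Insert a row that was valid from 2020-01-01 until 2021-06-01
  INSERT INTO t_hist (a, row_start, row_end)
  VALUES (1, '2020-01-01 00:00:00', '2021-06-01 00:00:00');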
-
- 04 Oct, 2022 3 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
post-merge fixes:
* remove log_slow_queries_not_using_indexes, no need to create variables that are deprecated since the moment of creation
* rename log_slow_query_enable -> log_slow_query, no other variable uses the *_enable pattern
* MDEV-29626 Assertion `self == &Sys_slow_query_log' failed in fix_log_state
* tests
Closes #2137
-
Michael Qin authored
Thus, all these variables will be grouped together and more logically named. Descriptions for the old variables were updated to indicate they are now aliases for the newly introduced variables with prefix log_slow. log_slow_queries_not_using_indexes_filter will not be addressed in this merge request. log_throttle_queries_not_using_indexes seems to no longer be in use. MTR tests are also updated to include the new variable names. All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
- 21 Sep, 2022 1 commit
-
-
Monty authored
The benefit of this is that one can remove the READ ONLY ADMIN privilege from all users and this way ensure that no one can make any changes to any non-temporary tables. This is a good option to use on slaves when one wants to ensure that the slave is kept identical to the master.
-
- 10 Aug, 2022 1 commit
-
-
Sergei Golubchik authored
-
- 03 Aug, 2022 1 commit
-
-
Brandon Nesterenko authored
New Feature:
============
This patch adds a new system variable, @@slave_max_statement_time, which limits the execution time of a slave's events; it implements an equivalent of @@max_statement_time for the slave applier.
Reviewed By:
============
Andrei Elkin <andrei.elkin@mariadb.com>
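An illustrative setting; the timeout value is a placeholder:
  -- Abort replicated statements on the slave that run longer than 30 seconds
  SET GLOBAL slave_max_statement_time = 30;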
-
- 02 Aug, 2022 1 commit
-
-
Sergei Golubchik authored
print "Defaults to on" not only for GET_BOOL, but also for GET_BIT
-
- 26 Jul, 2022 2 commits
-
-
Sergei Petrunia authored
Part #2: Extend heuristic pruning to use multiple tables as the "Model tables".
Before the patch, heuristic pruning used only one "Model table": the table which had the best cost AND record count became the "Model table". After that, if a table's cost and record count were both worse than those of the Model Table, the table would be pruned away. This didn't work well when the first table (the optimizer sorts them by record_count) had a low record_count but a relatively high cost: nothing could be pruned afterwards.
The patch adds two additional "Model tables": one with the least cost and the other with the least record_count. (In both cases, a table can be pruned away if BOTH its cost and record_count are worse than those of a Model table.)
The new pruning is active when the number of tables to consider for the prefix is higher than @@optimizer_extra_pruning_depth.
One can see the new pruning in the Optimizer Trace as
- "pruned_by_heuristic":"min_record_count", or
- "pruned_by_heuristic":"min_read_time".
Old heuristic pruning shows as "pruned_by_heuristic":1.
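A minimal illustration of controlling when the extra pruning kicks in; the depth value is an example:
  -- Apply the additional Model-table pruning only for join prefixes of more than 8 tables
  SET SESSION optimizer_extra_pruning_depth = 8;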
-
Monty authored
MDEV-28073 Slow query performance in MariaDB when using many tables
The idea is to prefer and chain EQ_REF tables (tables that use a unique key to find a row) when searching for the best table combination. This significantly reduces the number of row combinations that have to be examined. This optimization is enabled when setting optimizer_prune_level=2 (which is now the default).
Implementation:
- optimizer_prune_level has a new level, 2, which enables EQ_REF optimization in addition to the pruning done by level 1. Level 2 is now the default.
- Added JOIN::eq_ref_tables that contains bits of tables that could potentially use EQ_REF access in the query. This is calculated in sort_and_filter_keyuse().
Under optimizer_prune_level=2:
- When the greedy_optimizer notices that the preceding table was an EQ_REF table, it tries to add an EQ_REF table next. If an EQ_REF table exists, only this one will be considered at this level. We also collect all EQ_REF tables chained by the next levels and these are ignored on the starting level as we have already examined these. If no EQ_REF table exists, we continue as normal.
This optimization speeds up the greedy_optimizer combination test by ~25%.
Other things:
- I ported the changes in MySQL 5.7 to greedy_optimizer.test to MariaDB to be able to ensure we can handle all cases that MySQL can do.
- I have run all tests with --mysqld=--optimizer_prune_level=1 to verify that there were no test changes.
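Switching between the old and new pruning behaviour; the level values come from the commit message:
  -- Level 2 (the new default): prune plus prefer/chain EQ_REF tables
  SET SESSION optimizer_prune_level = 2;
  -- Level 1: the previous pruning behaviour, without the EQ_REF chaining
  SET SESSION optimizer_prune_level = 1;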
-