- 10 Feb, 2023 20 commits
-
-
Sergei Golubchik authored
Avoid contaminating my_getopt with sysvar implementation details. Adjust variable values after my_getopt, as is done for others. This fixes --help to show correct values.
-
Sergei Golubchik authored
Avoid contaminating SHOW code with sysvar implementation details. And no hard-coded factor either.
-
Sergei Golubchik authored
It doesn't provide any information we'll use. No matter what the value is, we don't remove the non-standard syntax unless we have to.
-
Sergei Golubchik authored
-
Monty authored
matching_candidates_in_table() computes the number of rows one gets from the current table after applying the WHERE clause on just this table.

The function had a "found_constraint heuristic" which reduced the number of rows after the WHERE check by 25% if there were comparisons between key parts in table T and previous tables, like:
WHERE T.keyXpartY = func(prev_table.cols)
Note that such comparisons can only be checked when the row of table T is joined with rows of the previous tables; it is wrong to apply this selectivity before the join operation.

Fixed by moving the 'found_constraint' code to a separate function and only reducing the number of rows in 'records_out'.

Renamed matching_candidates_in_table() to apply_selectivity_for_table(), as the function now either applies selectivity to the rows (depending on the value of thd->variables.optimizer_use_condition_selectivity) or uses the selectivity from the available range conditions.
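For illustration, a minimal sketch of the kind of cross-table comparison the heuristic reacted to (table and column names here are hypothetical):

    -- t.key_part2 = p.col2 + 1 compares a key part of t with a column of a
    -- table that comes earlier in the join order. It can only be evaluated
    -- once the rows are joined, so its selectivity must not reduce t's row
    -- estimate before the join.
    SELECT *
    FROM prev_table p
    JOIN t ON t.key_part1 = p.col1
    WHERE t.key_part2 = p.col2 + 1;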
-
Monty authored
-
Monty authored
The reason things fail in 10.5 and above is that test_quick_select() returns -1 (impossible range) for empty tables if there are any conditions attached. This didn't happen in 10.4, as the cost for a range was higher than for a table scan with 0 rows, so get_key_scan_params() did not create any range plans and thus did not mark the range as impossible. The code that checked the 'impossible range' condition did not take into account all cases of LEFT JOIN usage. Adding an extra check for whether the table is used with an ON condition in the 'impossible range' case fixes the issue.
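A sketch of the LEFT JOIN shape that was mis-flagged (hypothetical tables, t2 empty):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (a INT, b INT, KEY(a));
    INSERT INTO t1 VALUES (1),(2);
    -- t2 is empty, so test_quick_select() reports an impossible range for
    -- the attached condition; with LEFT JOIN the t1 rows must nevertheless
    -- be returned, padded with NULLs for t2.
    SELECT * FROM t1 LEFT JOIN t2 ON t2.a = t1.a AND t2.b > 5;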
-
Monty authored
Detailed description:
- Added more function comments and fixed types in some old comments
- Removed an outdated comment
- Cleaned up some functions in records.cc
- Replaced "while" with "if"
- Reused error code
- Made functions similar
- Added caching of pfs_batch_update()
- Simplified some rowid_filter code
- Only call build_range_rowid_filter() if the rowid filter will be used
- Replaced tab->is_rowid_filter_built with need_to_build_rowid_filter. We only have to test need_to_build_rowid_filter to know if we have to build the filter; the old code needed two tests
- Added function clear_range_rowid_filter() to disable the rowid filter. This makes things simpler, as we can now clear all rowid filter variables in one place
- Removed some 'if's in sub_select()
-
Monty authored
The problem was that make_join_select() called test_quick_select() outside of best_access_path(). This could use indexes that were not taken into account before, which caused changes to selectivity and 'records_out'. Fixed by updating records_out if test_quick_select() was called.
-
Monty authored
MDEV-30328 Assertion `avg_io_cost != 0.0 || index_cost.io + row_cost.io == 0' failed in Cost_estimate::total_cost()

The assert was there to check that engines report sensible numbers for IO. However, this does not work when optimizer_disk_read_ratio=0. Fixed by removing the assert.
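For context, the variable involved (with the ratio at 0 the optimizer assumes reads are served from cache, so engine IO costs of 0 are legitimate and the old assert could fire on a correct engine):

    -- Session-level sketch of the configuration that exposed the assert.
    SET SESSION optimizer_disk_read_ratio=0;
    SELECT @@optimizer_disk_read_ratio;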
-
Monty authored
Fixed by calling init_pager() before tee_fprintf()
-
Monty authored
The bug was related to floating point rounding. Fixed the assert to take that into account.
-
Monty authored
These are helpful tools to quickly see which optimizer switch options are on or off. The options are displayed alphabetically.
-
Monty authored
-
Monty authored
MDEV-30310 Assertion failure in best_access_path upon IN exceeding IN_PREDICATE_CONVERSION_THRESHOLD, derived_with_keys=off

The bug was in some old code that, without any explanation, reset PART_KEY_FLAG on fields in temporary tables. This caused join_tab->key_dependent to not be updated properly, which triggered an assert.
-
Monty authored
Added comments noting that unused keys of derived tables will be deleted. Added some comments about checking if pos_in_table_list is 0.

Other things:
- Added a marker (DBTYPE_IN_PREDICATE) in TABLE_LIST->derived_type to indicate that the table was generated from IN (list). This is useful for debugging and can later be used by EXPLAIN if needed.
- Removed an unneeded test of table->pos_in_table_list, as it should always be valid at this point in time.
-
Monty authored
The problem was an assignment in test_quick_select() that flagged empty tables with "Impossible where". This test was however wrong, as it didn't work correctly for LEFT JOIN. Removed the test, but added checking for empty tables in DELETE and UPDATE to get a similar EXPLAIN as before. The new test is a bit more strict (better) than before, as it catches all cases of empty tables in single-table DELETE/UPDATE.
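A sketch of the case the new check covers (hypothetical table; the exact EXPLAIN wording may differ):

    CREATE TABLE t1 (a INT, b INT);
    -- t1 is empty: single-table DELETE/UPDATE on an empty table should
    -- still show an "impossible" plan in EXPLAIN, as before the change.
    EXPLAIN DELETE FROM t1 WHERE a = 1;
    EXPLAIN UPDATE t1 SET b = 2 WHERE a = 1;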
-
Monty authored
Also fixes MDEV-30104 "Server crashes in handler_rowid_filter_check upon ANALYZE TABLE".

cancel_pushed_rowid_filter() didn't inform the handler that the rowid filter was canceled.
-
Monty authored
Fixed cost calculation for MERGE tables with 0 tables
-
Monty authored
The main difference in code path between EQ_REF and REF is that for REF we have to do an extra read_next on the index to check that there are no more matching rows. Before this patch we added a preference for EQ_REF by ensuring that REF would always be estimated to find at least 2 rows. This patch adds the cost of the extra key read_next to REF access and removes the code that limited REF to at least 2 rows. For some queries this can have a big effect, as the total estimated rows will be halved for each REF table with 1 row. multi_range cost calculations are also changed to take into account the difference between EQ_REF and REF.

The effect of the patch on the test suite:
- About 80 test cases changed
- Almost all changes were in EXPLAIN output, where estimated rows for REF changed from 2 to 1
- A few test cases using EXPLAIN EXTENDED had a change of 'filtered'. This is because the estimated rows are now closer to the calculated selectivity.
- A very few tests had a change of table order. This is because of the change of estimated rows from 2 to 1 or the small cost change for REF (main.subselect_sj_jcl6, main.group_by, main.derived_cond_pushdown, main.distinct, main.join_nested, main.order_by, main.join_cache)
- No key statistics and the estimated rows are now smaller, which caused estimated filtering to be lower (main.subselect_sj_mat)
- The number of total rows is halved (main.derived_cond_pushdown)
- Plans with 1 row changed to use RANGE instead of REF (main.group_min_max)
- ALL changed to REF (main.key_diff)
- Key changed from ref + index_only to PRIMARY key for InnoDB, as OPTIMIZER_ROW_LOOKUP_COST + OPTIMIZER_ROW_NEXT_FIND_COST is smaller than OPTIMIZER_KEY_LOOKUP_COST + OPTIMIZER_KEY_NEXT_FIND_COST (main.join_outer_innodb)
- Cost change printouts (main.opt_trace*)
- Result order change (innodb_gis.rtree)
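To illustrate the REF vs EQ_REF distinction with a hypothetical schema (not taken from the patch):

    CREATE TABLE parent (id INT PRIMARY KEY, grp INT);
    CREATE TABLE child (id INT PRIMARY KEY, parent_id INT, KEY(parent_id));

    -- Lookup on the non-unique key child.parent_id is REF: after finding a
    -- match, one extra read_next is needed to see that no more rows match.
    EXPLAIN SELECT * FROM parent p JOIN child c ON c.parent_id = p.id;

    -- Lookup on the unique parent.id is EQ_REF: at most one row can match,
    -- so no extra read_next is needed.
    EXPLAIN SELECT * FROM child c JOIN parent p ON p.id = c.parent_id;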
-
- 03 Feb, 2023 20 commits
-
-
Monty authored
The old code counted selectivity twice for queries like:
WHERE key_part1=1 AND key_part2 < 100
if the optimizer decided to use REF access on key_part1. The new code in best_access_path() that changes REF access to RANGE if the RANGE key is longer makes this issue less likely to happen. I was not able to create a test case for 11.0; however, if one ports this patch to a MariaDB version without the change of REF to RANGE, the selectivity will be counted twice.
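A minimal sketch of the affected query shape (hypothetical table):

    CREATE TABLE t1 (
      key_part1 INT,
      key_part2 INT,
      KEY k1 (key_part1, key_part2)
    );
    -- If REF access is chosen on key_part1 alone, the selectivity of
    -- "key_part2 < 100" must not additionally be taken from the range
    -- statistics, or it is applied twice.
    EXPLAIN EXTENDED SELECT * FROM t1 WHERE key_part1 = 1 AND key_part2 < 100;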
-
Monty authored
The reason for this is that we call file->index_flags(index, 0, 1) multiple times in best_access_path() when optimizing a table. For example, in InnoDB the call is not trivial (4 if's and 2 assignments). Now the function is inlined and is just a memory reference.

Other things:
- handler::is_clustering_key() and pk_is_clustering_key() are now inline.
- Added TABLE::can_use_rowid_filter() to simplify some code.
- Test if we should use a rowid_filter only if can_use_rowid_filter() is true.
- Added TABLE::is_clustering_key() to avoid a memory reference.
- Simplified some code using the fact that HA_KEYREAD_ONLY being set implies that HA_CLUSTERED_INDEX is not set.
- Added DBUG_ASSERT to TABLE::best_range_rowid_filter() to ensure we do not call it with a clustering key.
- Reorganized elements in struct st_key to get better memory alignment.
- Updated ha_innobase::index_flags() to not have HA_DO_RANGE_FILTER_PUSHDOWN for the clustered index.
-
Monty authored
- Increased the timeout for binlog_mysqlbinlog_raw_flush.test. The old timeout was not enough when running with --valgrind.
- Disabled ssl_timeout for --valgrind as it times out.
- Disabled binlog_truncate_multi_engine for --valgrind as it does restarts.
-
Monty authored
This could happen if mtr_grab_file() returned empty (happened to me)
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
Fixes over previous patches: do tracing of attached conditions close to where we generate them. Fix the tracing code to print the right conditions.
-
Rex authored
MDEV-21092, MDEV-21095, MDEV-29997: Optimizer Trace for index condition pushdown, partition pruning, exists-to-in

Add Optimizer Tracing for:
- Index Condition Pushdown
- Partition Pruning
- Exists-to-IN optimization
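For reference, the standard optimizer-trace workflow through which the new entries become visible (the traced query here is hypothetical):

    -- Enable tracing for the current session.
    SET optimizer_trace='enabled=on';

    -- Run a query whose plan involves index condition pushdown,
    -- partition pruning, or the exists-to-in rewrite.
    SELECT * FROM some_table WHERE indexed_col = 1 AND other_col > 10;

    -- The new entries appear in the TRACE column.
    SELECT * FROM information_schema.optimizer_trace;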
-
Sergei Petrunia authored
Make sure the queries use the intended query plan
-
Sergei Petrunia authored
This seems to confuse Windows.
-
Sergei Petrunia authored
-
Monty authored
This was done after discussions with Igor, Sanja and Bar. The main reason for removing the deprecation was to ensure that MariaDB is always backward compatible whenever possible.

Other things:
- Added statistics counters, mainly for the feedback plugin, for:
  - INTO OUTFILE
  - INTO variable
  - INTO using the old syntax (at the end of the query)
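The three INTO forms being counted, as plain SQL (table name, columns and file path are hypothetical):

    -- SELECT ... INTO OUTFILE
    SELECT a, b INTO OUTFILE '/tmp/result.txt' FROM t1;

    -- SELECT ... INTO variable
    SELECT a INTO @v FROM t1 LIMIT 1;

    -- INTO placed at the end of the query: the old, no longer
    -- deprecated, syntax.
    SELECT a FROM t1 LIMIT 1 INTO @v;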
-
Monty authored
-
Monty authored
In essence this means that we expect the user query to have at least one matching row in the end. This change will not affect the estimated rows for the plan, but will ensure that the cost of adding a table is not neglected because the record count is too low. The reason for this is that if we have a table combination that together has a very high selectivity, then the join record_count could become very low (close to 0). This would make the costs for all future tables so small that they are irrelevant for the rest of the plan. This has been shown to be the case in some performance benchmarks and in a few mtr tests.

There is also still a problem in selectivity calculations, as joining two tables in different order causes a different estimation of total rows. This can be seen in selectivity_innodb.test, test 'Q20', where joining nation,supplier expects 1.111 rows_out while joining supplier,nation expects 0.04 rows_out. The reason for 0.04 is that the optimizer estimates 'supplier' to have 10 matching rows, and joining with nation (eq_ref) gives 1 row. However, the selectivity of n_name = 'UNITED STATES' makes the optimizer think that there will be only 0.04 matching rows. This patch avoids letting this "too low row count" affect the cost calculations.
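The Q20 observation above, sketched as a TPC-H-style query (column names are assumed from the TPC-H schema, not taken from the test itself):

    -- nation is joined via eq_ref on its primary key; the selectivity of
    -- n_name = 'UNITED STATES' can push the expected row count far below 1,
    -- which is what the new lower bound guards against in cost calculations.
    EXPLAIN EXTENDED
    SELECT s.s_name
    FROM supplier s
    JOIN nation n ON n.n_nationkey = s.s_nationkey
    WHERE n.n_name = 'UNITED STATES';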
-
Monty authored
- Changed 'WARNING' messages of the type "You need to use --log-bin to make ... work" to 'Note'
- Only print startup Notes if log_warnings >= 4
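For reference, log_warnings is an ordinary system variable; to see the startup Notes it must be >= 4 when the server starts (e.g. --log-warnings=4), but it can be inspected and raised at runtime:

    -- Check the current verbosity threshold.
    SELECT @@global.log_warnings;
    -- Affects subsequent logging; startup Notes need the value at start.
    SET GLOBAL log_warnings=4;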
-
Monty authored
-
Monty authored
"select * from information_schema.tables limit 1" was giving the following warning in the log: [ERROR] Invalid (old?) table or database name '#rocksdb'
-
Sergei Petrunia authored
-
Sergei Petrunia authored
Basic printout for join and table execution costs.
-
Monty authored
-