1. 10 Feb, 2023 26 commits
    • Sergei Petrunia's avatar
      MDEV-30529: Assertion `rnd_records <= s->found_records' failed in best_access_path · cc81ea1c
      Sergei Petrunia authored
      best_access_path() has an assertion:
      
         DBUG_ASSERT(rnd_records <= s->found_records);
      
      make it rounding-safe.
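      A minimal sketch of the idea (hypothetical helper, not the actual
      patch): allow a tiny relative slack so floating-point rounding cannot
      trip the check.

         #include <cassert>

         static inline void check_rnd_records(double rnd_records,
                                              double found_records)
         {
           /* 1.0000001 is an illustrative tolerance, not the real value */
           assert(rnd_records <= found_records * 1.0000001);
         }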
      cc81ea1c
    • Sergei Petrunia's avatar
      MDEV-30525: Assertion `ranges > 0' fails in IO_AND_CPU_COST handler::keyread_time · 5faf2ac0
      Sergei Petrunia authored
      Make get_best_group_min_max() exit early if the table has
      table->records()=0. Attempting to compute loose scan over 0
      groups eventually causes an assert when trying to get the
      cost of reading 0 ranges.
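      A minimal sketch of the early exit (hypothetical helper; the real
      function is get_best_group_min_max()):

         /* Skip loose index scan planning for an empty table, so we never
            ask for the cost of reading 0 ranges. */
         static bool consider_loose_index_scan(unsigned long long table_records)
         {
           return table_records != 0;
         }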
      5faf2ac0
    • Monty's avatar
      Fixed bug in extended key handling when there is no primary key · 00704aff
      Monty authored
      Extended keys work by first checking if the engine supports extended
      keys.
      If yes, it extends the secondary keys with the primary key components
      and marks the secondary keys as HA_EXT_NOSAME (unique).
      If we later notice that there was no primary key, the extended key
      information for secondary keys in share->key_info is reset. However,
      the key_info->flag HA_EXT_NOSAME was not reset!
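      A minimal sketch of the missing reset (hypothetical, simplified types;
      not the actual server code):

         #include <cstdint>

         static const uint32_t HA_EXT_NOSAME= 1U << 0;  /* stand-in value */

         struct KeyInfo
         {
           uint32_t flags;
           unsigned user_defined_key_parts;
           unsigned ext_key_parts;
         };

         /* When no usable primary key exists, drop the optimistic key
            extension and also clear the flag the bug left behind. */
         static void reset_extended_key(KeyInfo &key)
         {
           key.ext_key_parts= key.user_defined_key_parts;
           key.flags&= ~HA_EXT_NOSAME;
         }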
      
      This caused some strange things to happen:
      - Tables that had no primary key, or a secondary index that contained
        the primary key, could be wrongly optimized: the secondary key could
        be thought to be unique when it was not, and not unique when it was.
      - The problem was not shown in EXPLAIN because of a bug in
        create_ref_for_key() that caused EQ_REF to be displayed by EXPLAIN
        as REF when extended keys were used and the secondary key contained
        the primary key.
      
      This is fixed with:
      - Removed a wrong test in make_join_select() which did not detect that
        a key was unique when a secondary key contains the primary key.
      - Moved initialization of extended keys from create_key_infos() to
        init_from_binary_frm_image() after we know if there is a usable
        primary key or not. One disadvantage with this approach is that
        key_info->key_parts may have unused slots (for keys we thought could
        be extended but could not). Fixed by adding a check for unused
        key_parts to copy_keys_from_share().
      
      Other things:
      - Simplified copying of the first key part in create_key_infos().
      - Added a lot of code comments to code that I had to check as part of
        finding the issue.
      - Fixed some indentation.
      - Replaced a couple of loops using references with pointers, in C
        context where the reference does not give any benefit.
      - Updated Aria and Maria to not assume that all key_info->rec_per_key
        arrays are in one memory block (this could happen when using derived
        tables with many keys).
      - Fixed a bug where key_info->rec_per_key was not allocated.
      - Optimized TABLE::add_tmp_key() to only call alloc() once.
        (No logic changes)
      
      Test case changes:
      - innodb_mysql.test: changed an index, as an index the optimizer
        thought was unique was in fact not. (The table had no primary key.)
      
      TODO:
      - Move the code that checks for partial or too long keys earlier, into
        the primary loop that initially decides if we should add extended key
        fields. This is needed to ensure that HA_EXT_NOSAME is not set for
        partial or too long keys. It will also shorten the current code
        notably.
      00704aff
    • Monty's avatar
      MDEV-30486 Table is not eliminated in bb-11.0 · fe1f4ca8
      Monty authored
      Some tables were not eliminated when they could have been.
      This was caused by HA_KEYREAD_ONLY no longer being set for the InnoDB
      clustered index, while the elimination code depended on
      field->part_of_key_not_clustered, which was not set if HA_KEYREAD_ONLY
      was not present.

      Fixed by moving field->part_of_key and
      field->part_of_key_not_clustered out from under HA_KEYREAD_ONLY (which
      they should never have been part of).
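      A minimal sketch of the structural change (hypothetical, simplified
      types; the real code fills bitmaps of key numbers per field):

         #include <bitset>

         struct FieldKeyMaps
         {
           std::bitset<64> part_of_key;
           std::bitset<64> part_of_key_not_clustered;
         };

         /* Per the fix, these maps are filled regardless of whether the
            index supports HA_KEYREAD_ONLY. */
         static void mark_field_in_key(FieldKeyMaps &field, unsigned key_nr,
                                       bool key_is_clustered)
         {
           field.part_of_key.set(key_nr);
           if (!key_is_clustered)
             field.part_of_key_not_clustered.set(key_nr);
         }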
      
      Other things:
      - Fixed a bug in make_join_select() that caused range to be used when
        there were eliminated or constant tables present (caused a wrong
        change of plans in join_outer_innodb.test). This also affected
        show_explain.test and subselect_sj_mat.test, where wrong 'range'
        plans were replaced with index scans.
      
      Reviewer: Sergei Petrunia <sergey@mariadb.com>
      fe1f4ca8
    • Monty's avatar
      Removed /2 of InnoDB ref_per_key[] estimates · 01c82173
      Monty authored
      The original code was there to favor index search over table scan.
      This is not needed anymore as the cost calculations for table scans
      and index lookups are now more exact.
      01c82173
    • Sergei Golubchik's avatar
      remove GET_ADJUST_VALUE · 2010cfab
      Sergei Golubchik authored
      avoid contaminating my_getopt with sysvar implementation details.
      adjust variable values after my_getopt, like it's done for others.
      this fixes --help to show correct values.
      2010cfab
    • Sergei Golubchik's avatar
      remove SHOW_OPTIMIZER_COST · d10b3b01
      Sergei Golubchik authored
      avoid contaminating SHOW code with sysvar implementation details.
      And no hard-coded factor either.
      d10b3b01
    • Sergei Golubchik's avatar
      remove Feature_into_old_syntax · affab99c
      Sergei Golubchik authored
      it doesn't provide any information we'll use.
      No matter what the value is, we don't remove the non-standard
      syntax unless we have to
      affab99c
    • Sergei Golubchik's avatar
      typos in comments, etc · 7e465aeb
      Sergei Golubchik authored
      7e465aeb
    • Monty's avatar
      Selectivity: apply found_constraint heuristic only to post-join #rows. · 5e5988db
      Monty authored
      matching_candidates_in_table() computes the number of rows one
      gets from the current table after applying the WHERE clause on
      just this table.

      The function had a "found_constraint heuristic" which reduced the
      number of rows after the WHERE check by 25% if there were comparisons
      between key parts in table T and previous tables, like WHERE
      T.keyXpartY= func(prev_table.cols)
      
      Note that such comparisons can only be checked when the row of
      table T is joined with rows of the previous tables. It is wrong
      to apply the selectivity before the join operation.
      
      Fixed by moving the 'found_constraint' code to a separate function
      and only reducing the #rows in 'records_out'.
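      A minimal sketch of the idea with made-up numbers (hypothetical
      variable names): the 25% reduction is applied only to the post-join
      estimate, never to the per-table row count used before the join.

         #include <cstdio>

         int main()
         {
           double rows_after_where= 1000.0; /* rows from this table alone */
           double records_out=      1000.0; /* rows surviving the join */

           /* e.g. WHERE T.keyXpartY= func(prev_table.cols) */
           bool has_cross_table_key_cmp= true;
           if (has_cross_table_key_cmp)
             records_out*= 0.75;            /* heuristic applies here only */

           printf("pre-join: %g  post-join: %g\n",
                  rows_after_where, records_out);
           return 0;
         }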
      
      Renamed matching_candidates_in_table() to apply_selectivity_for_table() as
      the function now either applies selectivity on the rows (depending
      on the value of thd->variables.optimizer_use_condition_selectivity)
      or uses the selectivity from the available range conditions.
      5e5988db
    • Monty's avatar
      Updated comments in best_access_path() · 33af691f
      Monty authored
      33af691f
    • Monty's avatar
      MDEV-30080 Wrong result with LEFT JOINs involving constant tables · 0eca91ab
      Monty authored
      The reason things fail in 10.5 and above is that test_quick_select()
      returns -1 (impossible range) for empty tables if there are any
      conditions attached.
      
      This didn't happen in 10.4 as the cost for a range was more than for
      a table scan with 0 rows and get_key_scan_params() did not create any
      range plans and thus did not mark the range as impossible.
      
      The code that checked the 'impossible range' conditions did not take
      into account all cases of LEFT JOIN usage.
      
      Adding an extra check if the table is used with an ON condition in case
      of 'impossible range' fixes the issue.
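      A minimal sketch of the extra check (hypothetical names, simplified):
      an 'impossible range' on an empty table must not be trusted when the
      table is used with an ON condition, because a LEFT JOIN can still
      produce NULL-complemented rows for it.

         static bool treat_as_impossible(bool impossible_range,
                                         bool table_used_in_on_expr)
         {
           return impossible_range && !table_used_in_on_expr;
         }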
      0eca91ab
    • Monty's avatar
      Code cleanups and add some caching of functions to speed up things · 3316a54d
      Monty authored
      Detailed description:
      - Added more function comments and fixed types in some old comments
      - Removed an outdated comment
      - Cleaned up some functions in records.cc
        - Replaced "while" with "if"
        - Reused error code
        - Made functions similar
      - Added caching of pfs_batch_update()
      - Simplified some rowid_filter code
        - Only call build_range_rowid_filter() if rowid filter will be used
        - Replaced tab->is_rowid_filter_built with need_to_build_rowid_filter.
          We only have to test need_to_build_rowid_filter to know if we have
          to build the filter. The old code needed two tests.
        - Added function 'clear_range_rowid_filter' to disable rowid filter.
          Made things simpler as we can now clear all rowid filter variables
          in one place.
      - Removed some 'if' in sub_select()
      3316a54d
    • Monty's avatar
      MDEV-30360 Assertion `cond_selectivity <= 1.000000001' failed in ... · 65da5645
      Monty authored
      The problem was that make_join_select() called test_quick_select()
      outside of best_access_path(). This could use indexes that were not
      taken into account before, and this caused changes to selectivity and
      'records_out'.
      
      Fixed by updating records_out if test_quick_select() was called.
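      A minimal sketch of the idea (hypothetical helper): after a late
      test_quick_select() call, cap the earlier estimate so the derived
      selectivity cannot exceed 1.

         static double update_records_out(double records_out,
                                          double quick_rows)
         {
           return quick_rows < records_out ? quick_rows : records_out;
         }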
      65da5645
    • Monty's avatar
      MDEV-30328 Assertion `avg_io_cost != 0.0 || index_cost.io + row_cost.io == 0'... · 0a7d2917
      Monty authored
      MDEV-30328 Assertion `avg_io_cost != 0.0 || index_cost.io + row_cost.io == 0' failed in Cost_estimate::total_cost()
      
      The assert was there to check that engines report sensible numbers for
      IO. However this does not work in the case of
      optimizer_disk_read_ratio=0.
      
      Fixed by removing the assert.
      0a7d2917
    • Monty's avatar
      MDEV-30327 Client crashes in print_last_query_cost · 1a13dbff
      Monty authored
      Fixed by calling init_pager() before tee_fprintf()
      1a13dbff
    • Monty's avatar
      MDEV-30313 Sporadic assertion `cond_selectivity <= 1.0' failure in get_range_limit_read_cost · 8d4bccf3
      Monty authored
      The bug was related to floating point rounding. Fixed the assert to take
      that into account.
      8d4bccf3
    • Monty's avatar
      Added sys.optimizer_switch_on() and sys.optimizer_switch_off() · 5de734da
      Monty authored
      These are helpful tools to quickly see which optimizer_switch options
      are on or off. The different options are displayed alphabetically.
      5de734da
    • Monty's avatar
      MDEV-30310 Assertion failure in best_access_path upon IN exceeding... · 02b7735b
      Monty authored
      MDEV-30310 Assertion failure in best_access_path upon IN exceeding IN_PREDICATE_CONVERSION_THRESHOLD, derived_with_keys=off
      
      The bug was some old code that, without any explanation, reset
      PART_KEY_FLAG from fields in temporary tables. This caused
      join_tab->key_dependent to not be updated properly, which caused
      an assert.
      02b7735b
    • Monty's avatar
      Simplified code in generate_derived_keys() and when using pos_in_tables · 4be0bfad
      Monty authored
      Added comments that unused keys of derived tables will be deleted.
      Added some comments about checking if pos_in_table_list is 0.
      
      Other things:
      - Added a marker (DBTYPE_IN_PREDICATE) in TABLE_LIST->derived_type
        to indicate that the table was generated from IN (list). This is
        useful for debugging and can later be used by explain if needed.
      - Removed an unneeded test of table->pos_in_table_list, as it should
        always be valid at this point in time.
      4be0bfad
    • Monty's avatar
      MDEV-30256 Wrong result (missing rows) upon join with empty table · 9a4110aa
      Monty authored
      The problem was an assignment in test_quick_select() that flagged empty
      tables with "Impossible where". This test was however wrong, as it
      didn't work correctly for LEFT JOIN.

      Removed the test, but added checking of empty tables in DELETE and
      UPDATE to get a similar EXPLAIN as before.

      The new test is a bit more strict (better) than before as it catches
      all cases of empty tables in single-table DELETE/UPDATE.
      9a4110aa
    • Monty's avatar
      MDEV-30098 Server crashes in ha_myisam::index_read_map with index_merge_sort_intersection=on · e3f56254
      Monty authored
      Fixes also
      MDEV-30104 Server crashes in handler_rowid_filter_check upon ANALYZE TABLE
      
      cancel_pushed_rowid_filter() didn't inform the handler that rowid_filter
      was canceled.
      e3f56254
    • Monty's avatar
      MDEV-30088 Assertion `cond_selectivity <= 1.0' failed in get_range_limit_read_cost · 76d2a77d
      Monty authored
      Fixed cost calculation for MERGE tables with 0 tables
      76d2a77d
    • Monty's avatar
      Change cost for REF to take into account cost for 1 extra key read_next · 3fa99f0c
      Monty authored
      The main difference in code path between EQ_REF and REF is that for
      REF we have to do an extra read_next on the index to check that there
      is no more matching rows.
      
      Before this patch we added a preference for EQ_REF by ensuring that REF
      would always estimate to find at least 2 rows.

      This patch adds the cost of the extra key read_next to REF access and
      removes the code that limited REF to at least 2 rows. For some queries
      this can have a big effect as the total estimated rows will be halved
      for each REF table with 1 row.
      
      multi_range cost calculations are also changed to take into account
      the difference between EQ_REF and REF.
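      A minimal sketch with made-up cost constants (illustrative only): the
      extra read_next is what now separates REF from EQ_REF when both expect
      a single matching row.

         #include <cstdio>

         int main()
         {
           const double key_lookup_cost= 1.0;  /* hypothetical units */
           const double key_next_cost=   0.1;

           double eq_ref_cost= key_lookup_cost;  /* exactly one matching row */
           double ref_cost=    key_lookup_cost + key_next_cost;
                               /* + one read_next to see there are no more */

           printf("EQ_REF: %.2f  REF: %.2f\n", eq_ref_cost, ref_cost);
           return 0;
         }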
      
      The effect of the patch on the test suite:
      - About 80 test cases changed.
      - Almost all changes were in EXPLAIN, where the estimated rows for REF
        were changed from 2 to 1.
      - A few test cases using EXPLAIN EXTENDED had a change of 'filtered'.
        This is because the estimated rows are now closer to the calculated
        selectivity.
      - A very few tests had a change of table order.
        This is because of the change of estimated rows from 2 to 1 or the
        small cost change for REF.
        (main.subselect_sj_jcl6, main.group_by, main.derived_cond_pushdown,
        main.distinct, main.join_nested, main.order_by, main.join_cache)
      - With no key statistics, the estimated rows are now smaller, which
        caused the estimated filtering to be lower.
        (main.subselect_sj_mat)
      - The number of total rows is halved.
        (main.derived_cond_pushdown)
      - Plans with 1 row changed to use RANGE instead of REF.
        (main.group_min_max)
      - ALL changed to REF.
        (main.key_diff)
      - Key changed from ref + index_only to PRIMARY key for InnoDB, as
        OPTIMIZER_ROW_LOOKUP_COST + OPTIMIZER_ROW_NEXT_FIND_COST is smaller than
        OPTIMIZER_KEY_LOOKUP_COST + OPTIMIZER_KEY_NEXT_FIND_COST.
        (main.join_outer_innodb)
      - Cost change printouts.
        (main.opt_trace*)
      - Result order change.
        (innodb_gis.rtree)
      3fa99f0c
  2. 03 Feb, 2023 14 commits
    • Monty's avatar
      Fixed wrong selectivity calculation in table_after_join_selectivity() · b5df077e
      Monty authored
      The old code counted the selectivity twice for queries like:
      WHERE key_part1=1 and key_part2 < 100
      if the optimizer decided to use a REF access on key_part1.
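      A worked example with made-up numbers (not from the test suite): if REF
      uses only key_part1, the selectivity of key_part2 < 100 must be applied
      exactly once.

         #include <cstdio>

         int main()
         {
           double rows=          10000.0;
           double sel_key_part1= 0.01;   /* key_part1 = 1 */
           double sel_key_part2= 0.10;   /* key_part2 < 100 */

           double correct= rows * sel_key_part1 * sel_key_part2; /* 10 rows */
           double double_counted= correct * sel_key_part2;       /* 1 row: wrong */

           printf("correct: %g  double-counted: %g\n",
                  correct, double_counted);
           return 0;
         }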
      
      The new code in best_access_path() that changes REF access to RANGE
      if the RANGE key is longer makes this issue less likely to happen.
      
      I was not able to create a test case for 11.0; however, if one ports
      this patch to a MariaDB version without the change of REF to RANGE,
      the selectivity will be counted twice.
      b5df077e
    • Monty's avatar
      Cache file->index_flags(index, 0, 1) in table->key_info[index].index_flags · ed0a7235
      Monty authored
      The reason for this is that we call file->index_flags(index, 0, 1)
      multiple times in best_access_path() when optimizing a table.
      For example, in InnoDB the call is not trivial (4 if's and 2
      assignments).
      Now the function is inlined and is just a memory reference.
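      A minimal sketch of the caching idea (hypothetical, simplified types):
      compute the handler flags once when the key definition is set up and
      let the optimizer read a plain member instead of calling the handler.

         #include <cstdint>

         struct KeyDef
         {
           uint32_t index_flags;  /* cached file->index_flags(index, 0, 1) */
         };

         static inline uint32_t key_index_flags(const KeyDef &key)
         {
           return key.index_flags;  /* just a memory reference */
         }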
      
      Other things:
      - handler::is_clustering_key() and pk_is_clustering_key() are now inline.
      - Added TABLE::can_use_rowid_filter() to simplify some code.
      - Test if we should use a rowid_filter only if can_use_rowid_filter() is
        true.
      - Added TABLE::is_clustering_key() to avoid a memory reference.
      - Simplified some code using the fact that HA_KEYREAD_ONLY being true
        implies that HA_CLUSTERED_INDEX is false.
      - Added DBUG_ASSERT to TABLE::best_range_rowid_filter() to ensure we
        do not call it with a clustering key.
      - Reorganized elements in struct st_key to get better memory alignment.
      - Updated ha_innobase::index_flags() to not have
        HA_DO_RANGE_FILTER_PUSHDOWN for the clustered index.
      ed0a7235
    • Monty's avatar
      Updated some tests for --valgrind · 5e0832e1
      Monty authored
      - Increased timeout for binlog_mysqlbinlog_raw_flush.test.
        The old timeout was not enough when running with --valgrind
      - Disabled ssl_timeout for --valgrind as it times out
      - Disabled binlog_truncate_multi_engine for --valgrind as it does restarts
      5e0832e1
    • Monty's avatar
      Fixed 'undefined variable' error in mtr · 43dc4233
      Monty authored
      This could happen if mtr_grab_file() returned empty (happened to me)
      43dc4233
    • Sergei Petrunia's avatar
      Make tests work with --view-protocol · e4fbec14
      Sergei Petrunia authored
      e4fbec14
    • Sergei Petrunia's avatar
      Stabilize rocksdb.rocksdb test. · 15298815
      Sergei Petrunia authored
      15298815
    • Sergei Petrunia's avatar
      MDEV-21095: Make Optimizer Trace support Index Condition Pushdown · cbd99688
      Sergei Petrunia authored
      Fixes over previous patches: do tracing of attached conditions
      close to where we generate them.
      
      Fix the tracing code to print the right conditions.
      cbd99688
    • Rex's avatar
      MDEV-21092,MDEV-21095,MDEV-29997: Optimizer Trace for index condition... · 07f21cfb
      Rex authored
      MDEV-21092,MDEV-21095,MDEV-29997: Optimizer Trace for index condition pushdown, partition pruning, exists-to-in
      
              Add Optimizer Tracing for:
              - Index Condition Pushdown
              - Partition Pruning
              - Exists-to-IN optimization
      07f21cfb
    • Sergei Petrunia's avatar
      Stabilize engines/iuds.type_bit_iuds test · dba78f3c
      Sergei Petrunia authored
      Make sure the queries use the intended query plan
      dba78f3c
    • Sergei Petrunia's avatar
      Remove mysql-test/suite/versioning/r/select,trx_id.rdiff which is empty · 0fcc32f8
      Sergei Petrunia authored
      This seems to confuse Windows.
      0fcc32f8
    • Monty's avatar
      Removed "<select expression> INTO <destination>" deprecation. · 1f4a9f08
      Monty authored
      This was done after discussions with Igor, Sanja and Bar.
      
      The main reason for removing the deprecation was to ensure that MariaDB
      is always backward compatible whenever possible.
      
      Other things:
      - Added statistics counters, mainly for the feedback plugin.
        - INTO OUTFILE
        - INTO variable
        - If INTO is using the old syntax (end of query)
      1f4a9f08
    • Monty's avatar
      Removed diff dates from rdiff files · b74d2623
      Monty authored
      b74d2623
    • Monty's avatar
      In best_access_path() change record_count to 1.0 if it's less than 1.0. · 8b7c0d69
      Monty authored
      In essence this means that we expect the user query to have at least
      one matching row in the end.
      This change will not affect the estimated rows for the plan, but will
      ensure that the cost for adding a table is not neglected because of
      record count being too low.
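      A minimal sketch of the clamp (hypothetical helper name): never let the
      running record_count drop below one expected matching row.

         static double clamp_record_count(double record_count)
         {
           return record_count < 1.0 ? 1.0 : record_count;
         }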
      
      The reason for this is that if we have a table combination that
      together has a very high selectivity, then the join record_count could
      become very low (close to 0).
      
      This would cause costs for all future tables to be so small that they
      are irrelevant for the rest of the plan.
      This has been shown to be the case in some performance benchmarks and
      in a few mtr tests.
      
      There is also still a problem in selectivity calculations as joining two
      tables in different order causes a different estimation of total rows.
      This can be seen in selectivity_innodb.test, test 'Q20' where joining
      nation,supplier is expecting 1.111 rows_out while joining supplier,nation
      is expecting 0.04 rows_out.
      
      The reason for 0.04 is that the optimizer estimates 'supplier' to have
      10 matching rows, and joining with nation (eq_ref) gives 1 row. However,
      the selectivity of n_name = 'UNITED STATES' makes the optimizer think
      that there will be only 0.04 matching rows.
      
      This patch prevents this "too low row count" from affecting cost
      calculations.
      8b7c0d69