  1. 29 Sep, 2024 1 commit
    • MDEV-34880 Incorrect result for query with derived table having TEXT field · 22cb0299
      Oleg Smirnov authored
      When a derived table that has distinct values and BLOB fields is
      materialized, an index is created over all columns to ensure that only
      distinct values are placed in the result.
      This index is created in a special mode, HA_UNIQUE_HASH, to support BLOBs.
      Later, in `best_access_path`, the optimizer may incorrectly choose this
      index to retrieve values from the derived table, although this type
      of index cannot be used for data retrieval.
      
      This commit excludes HA_UNIQUE_HASH indexes from consideration
      in `best_access_path`.
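
      A minimal sketch of the affected query shape (hypothetical table and
      data, not the regression test from the commit):

      create table t1 (a int, b text);
      insert into t1 values (1,'x'),(1,'x'),(2,'y');
      select * from (select distinct a, b from t1) as dt
      where dt.b = 'y';
      -- the TEXT column makes the deduplicating index on dt a HA_UNIQUE_HASH
      -- index, which best_access_path must not pick for lookups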
  2. 09 May, 2023 1 commit
    • MDEV-31022: SIGSEGV in maria_create from create_internal_tmp_table · b3edbf25
      Sergei Petrunia authored
      The code in create_internal_tmp_table() didn't take into account that
      temporary (derived) tables may now have multiple indexes:
      
      - one index due to duplicate removal
         = in this example created by the conversion of a big IN(...) into a
           subquery
         = this index might be converted into a "unique constraint" if the key
           length is too large.
      - one index added by the derived_with_keys optimization.
      
      Make create_internal_tmp_table() handle multiple indexes.
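
      A hypothetical query shape that yields both indexes on the same internal
      temporary table (sketch only; the threshold is lowered so the
      IN-to-subquery conversion triggers, and the long VARCHAR pushes the key
      over the length limit):

      set in_predicate_conversion_threshold=2;
      create table t1 (a int, b varchar(2000));
      select * from t1
      where (t1.a, t1.b) in ((1,'x'),(2,'y'),(3,'z'));
      -- the IN list becomes a derived table with a duplicate-removal index
      -- (turned into a unique constraint when the key is too long) plus an
      -- index added by derived_with_keys for the join back to t1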
      
      Before this patch, use of a unique constraint was indicated in
      TABLE_SHARE::uniques. This was ok as the unique constraint was the only
      index in the table. Now that this is no longer the case, TABLE_SHARE::uniques
      is removed and replaced with the in-memory-only flag HA_UNIQUE_HASH.
      
      This patch is based on Monty's patch.
      Co-Author: Monty <monty@mariadb.org>
  3. 16 Mar, 2023 2 commits
    • MDEV-28965 Assertion failure when preparing UPDATE with derived table in WHERE · 88ca62dc
      Igor Babaev authored
      This patch fixes not only the assertion failure in the function
      Field_iterator_table_ref::set_field_iterator() but also:
       - fixes the problem of forced materialization of derived tables used
         in subqueries contained in WHERE clauses of single-table and multi-table
         UPDATE and DELETE statements
       - fixes the problem of MDEV-17954 that prevented execution of multi-table
         DELETE statements if their WHERE clauses contained references to
         the tables being updated.
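
      A sketch of the statement shape affected (hypothetical tables):

      create table t1 (a int);
      create table t2 (a int);
      update t1
      set t1.a = t1.a + 1
      where t1.a in (select dt.a from (select a from t2) as dt);
      -- the derived table dt no longer has to be forcibly materialized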
      
      The patch must be considered a complement to the patch for MDEV-28883.
      
      Approved by Oleksandr Byelkin <sanja@mariadb.com>
  4. 07 Mar, 2023 1 commit
    • Allow firstmatch to use HASH joins · 7a277a33
      Monty authored
      Firstmatch_picker::check_qep() has an optimization that allows firstmatch
      to be used together with the join buffer under some conditions. In this
      case the cost was assumed to be the same as what best_access_path()
      had calculated.
      
      However if HASH+join_buffer was used, then
      fix_semijoin_strategies_for_picked_join_order() would remove the
      join_buffer (which would cause a full join to be used) and the cost
      assumption by Firstmatch_picker::check_qep() would be wrong.
      Later, check_join_cache_usage() sees that it's a full scan and decides
      it can use join buffering (but not the hash join).
      
      Fixed by also allowing HASH joins with firstmatch.
      This removes the need to disable and re-enable the join buffer.
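
      A sketch of a query shape where firstmatch can now combine with a hash
      join (hypothetical tables; no index on t2.a, so a hashed join buffer is
      the natural choice):

      set optimizer_switch='firstmatch=on,join_cache_hashed=on';
      create table t1 (a int);
      create table t2 (a int);
      select * from t1
      where t1.a in (select t2.a from t2);  -- semi-join resolved with BNLH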
      
      Test case changes:
      - HASH join used with firstmatch (Using join buffer (flat, BNLH join))
      - 'Filtered' could change with firstmatch, as the conversion with and
        without join_buffered lost the filtering information.
      - The absence of "re-enabling join buffer" is shown in main.optimizer_trace
      
      Original code by Sergei, optimized by Monty.
      
      Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
  5. 02 Mar, 2023 1 commit
    • MDEV-30540 Wrong result with IN list length reaching IN_PREDICATE_CONVERSION_THRESHOLD · bd9ca2a0
      Monty authored
      The problem was that mysql_derived_prepare() did not correctly set
      'distinct' when creating a temporary derived table.
      
      Fixed by separating the check for distinct for queries with and without
      UNION.
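
      A sketch of the trigger condition (hypothetical table; the threshold is
      lowered so that even a short IN list is converted):

      set in_predicate_conversion_threshold=2;
      create table t1 (a int);
      insert into t1 values (1),(2),(2);
      select * from t1 where t1.a in (1,2,3);
      -- the IN list is materialized as a derived table that must be
      -- marked 'distinct' for the result to be correct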
      
      Other things:
      - Fixed bug in generate_derived_keys_for_table() where we set the wrong
        bit for join_tab->keys
      - Cleaned up JOIN::drop_unused_derived_keys()
      - Changed TABLE::use_index() to keep unique keys and update
        share->key_parts
      
      Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
  6. 10 Feb, 2023 2 commits
    • MDEV-30310 Assertion failure in best_access_path upon IN exceeding IN_PREDICATE_CONVERSION_THRESHOLD, derived_with_keys=off · 02b7735b
      Monty authored
      
      The bug was in some old code that, without any explanation, reset the
      PART_KEY_FLAG on fields in temporary tables. This caused
      join_tab->key_dependent to not be updated properly, which triggered
      an assert.
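
      A sketch of the trigger condition from the bug title (hypothetical
      table; the threshold is lowered to force the conversion):

      set optimizer_switch='derived_with_keys=off';
      set in_predicate_conversion_threshold=2;
      create table t1 (a int);
      select * from t1 where t1.a in (1,2,3);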
    • Change cost for REF to take into account cost for 1 extra key read_next · 3fa99f0c
      Monty authored
      The main difference in code path between EQ_REF and REF is that for
      REF we have to do an extra read_next on the index to check that there
      are no more matching rows.
      
      Before this patch we added a preference for EQ_REF by ensuring that REF
      would always estimate to find at least 2 rows.
      
      This patch adds the cost of the extra key read_next to REF access and
      removes the code that limited REF to at least 2 rows. For some queries
      this can have a big effect as the total estimated rows will be halved
      for each REF table with 1 row.
      
      multi_range cost calculations are also changed to take into account
      the difference between EQ_REF and REF.
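
      A sketch contrasting the two access types (hypothetical table):

      create table t1 (a int primary key, b int, key(b));
      explain select * from t1 as x join t1 as y on y.a = x.b;
      -- eq_ref on y: unique key, no extra read_next needed
      explain select * from t1 as x join t1 as y on y.b = x.a;
      -- ref on y: one extra read_next to find the end of the matching keys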
      
      The effect of the patch to the test suite:
      - About 80 test cases changed
      - Almost all changes were in EXPLAIN output, where the estimated rows
        for REF changed from 2 to 1.
      - A few test cases using explain extended had a change of 'filtered'.
        This is because the estimated rows are now closer to the
        calculated selectivity.
      - A very few tests had a change of table order.
        This is because of the change of estimated rows from 2 to 1 or the
        small cost change for REF
        (main.subselect_sj_jcl6, main.group_by, main.derived_cond_pushdown,
        main.distinct, main.join_nested, main.order_by, main.join_cache)
      - With no key statistics, the estimated rows are now smaller, which
        caused the estimated filtering to be lower.
        (main.subselect_sj_mat)
      - The number of total rows is halved.
        (main.derived_cond_pushdown)
      - Plans with 1 row changed to use RANGE instead of REF.
        (main.group_min_max)
      - ALL changed to REF
        (main.key_diff)
      - Key changed from ref + index_only to PRIMARY key for InnoDB, as
        OPTIMIZER_ROW_LOOKUP_COST + OPTIMIZER_ROW_NEXT_FIND_COST is smaller than
        OPTIMIZER_KEY_LOOKUP_COST + OPTIMIZER_KEY_NEXT_FIND_COST.
        (main.join_outer_innodb)
      - Cost changes printouts
        (main.opt_trace*)
      - Result order change
        (innodb_gis.rtree)
  7. 02 Feb, 2023 3 commits
    • Added test cases for preceding test · 727491b7
      Monty authored
      This includes all test changes from
      "Changing all cost calculation to be given in milliseconds"
      and forwards.
      
      Some of the things that caused changes in the result files:
      
      - As part of fixing tests, I added 'echo' to some comments to be able to
        find out more easily where things went wrong.
      - MATERIALIZED now has a higher cost compared to X than before. Because
        of this some MATERIALIZED types have changed to DEPENDENT SUBQUERY.
        - Some test cases that required MATERIALIZED to repeat a bug were
          changed by adding more rows to force MATERIALIZED to happen.
      - 'Filtered' in SHOW EXPLAIN has in many cases changed from 100.00 to
        something smaller. This is because now filtered also takes into
        account the smallest possible ref access and filters, even if they
        were not used. Another reason for 'Filtered' being smaller is that
        we now also take into account implicit filtering done for subqueries
        using FIRSTMATCH.
        (main.subselect_no_exists_to_in)
        This is calculated in best_access_path() and stored in records_out.
      - Table order has changed because of more accurate costs.
      - 'index' and 'ALL' for small tables have changed to use 'range' or
         'ref' because of optimizer_scan_setup_cost.
      - 'index' can change to 'range', as the range optimizer assumes we don't
        have to read the blocks from disk that it has already read.
        This can be confusing in the case where there is no obvious WHERE clause
        but instead there is a hidden 'key_column > NULL' added by the optimizer.
        (main.subselect_no_exists_to_in)
      - Scan on primary clustered key does not report 'Using Index' anymore
        (It's a table scan, not an index scan).
      - For derived tables, the number of rows is now 100 instead of 2,
        which can be seen in EXPLAIN.
      - More tests have "Using index for group by" as the cost of this
        optimization is now more correct (lower).
      - A primary key could be preferred over a normal key, even if it would
        access more rows, as it's faster to do 1 lookup and 3 'index_next' on a
        clustered primary key than one lookup through a secondary key.
        (main.stat_tables_innodb)
      
      Notes:
      
      - There were 4.7% more calls to best_extension_by_limited_search() in
        the main.greedy_optimizer test.  However, examining the test results,
        it looked like the plans were slightly better (eq_ref more often
        chained together), so I assume this is ok.
      - I have verified a few test cases where there were notable/unexpected
        changes in the plan, and in all cases the new optimizer plans were
        faster.  (main.greedy_optimizer and some others)
    • Derived tables and union can now create distinct keys · 5e5a8eda
      Monty authored
      The idea is that instead of marking all select_lex's with DISTINCT, we
      only mark those that really need a distinct result.
      
      Benefits of this change:
      - Temporary tables used with derived tables, UNION and IN are now smaller
        as duplicates are removed already during the insert phase.
      - The optimizer can now produce better plans with EQ_REF. This can be
        seen from the tests, where several queries no longer materialize
        derived tables twice (see the sketch after this list).
      - Queries affected by 'in_predicate_conversion_threshold', where large IN
        lists are converted to subqueries, produce better plans.
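
      A sketch of a query shape that benefits (hypothetical tables):

      create table t1 (a int primary key);
      create table t2 (a int);
      create table t3 (a int);
      select * from t1
      where t1.a in (select a from t2 union select a from t3);
      -- the union's temporary table is deduplicated already on insert and
      -- gets a unique key that can enable EQ_REF access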
      
      Other things:
      - Removed one duplicate call to sel->init_select() in
        LEX::add_primary_to_query_expression_body()
      - I moved the testing of
        tab->table->pos_in_table_list->is_materialized_derived()
        in join_read_const_table() to the caller as it caused problems for
        derived tables that could be proven to be const tables.
        This is also likely to fix some bugs: if join_read_const_table()
        was aborted, the table was left marked as JT_CONST, which cannot
        be good.  I added an ASSERT there for now that can be removed when
        the code has been properly tested.
    • Update row and key fetch cost models to take into account data copy costs · b6215b9b
      Monty authored
      Before this patch, when calculating the cost of fetching and using a
      row/key from the engine, we took into account the cost of finding a
      row or key from the engine, but did not consistently take into account
      index-only accesses, clustered keys or covered keys for all access
      paths.
      
      The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently
      considered in best_access_path().  TIME_FOR_COMPARE was used in
      calculations in other places, like greedy_search(), but was in some
      cases (like scans) applied to a different number of rows than was
      accessed.
      
      The cost calculation of row and index scans didn't take into account
      the number of rows that were accessed, only the number of accepted
      rows.
      
      When using a filter, the cost of index-only reads and the cost of
      accessing and disregarding 'filtered rows' were not taken into
      account, which made filters cost less than they actually did.
      
      To remedy the above, the following key & row fetch related costs
      have been added:
      
      - The cost of fetching and using a row is now split into different costs:
        - key + row fetch cost (as before), but multiplied by the variable
        'optimizer_cache_cost' (defaults to 0.5). This allows the user to
        tell the optimizer the likelihood of finding the key and row in the
        engine cache.
      - ROW_COPY_COST, the cost of copying a row from the engine to the
        SQL layer or creating a row from the join_cache to the record
        buffer. Mostly affects table scan costs.
      - ROW_LOOKUP_COST, the cost of fetching a row by rowid.
      - KEY_COPY_COST, the cost of finding the next key and copying it from
        the engine to the SQL layer. This is used when we calculate the cost
        of index-only reads. It makes index scans more expensive than before
        if they cover a lot of rows. (main.index_merge_myisam)
      - KEY_LOOKUP_COST, the cost of finding the first key in a range.
        This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
      - KEY_NEXT_FIND_COST, the cost of finding the next key (and rowid)
        when doing an index scan and comparing the rowid to the filter.
        Before, this cost was assumed to be 0.
      
      All of the above constants/variables are now tuned to be somewhat in
      proportion to each other's execution complexity.  They will need more
      tuning in the future, but that can wait until the above are made user
      variables, as that will make tuning much easier.
      
      To make the usage of the above easy, there are new (not virtual)
      cost calculation functions in handler:
      - ha_read_time(), like read_time(), but takes optimizer_cache_cost into
        account.
      - ha_read_and_copy_time(), like ha_read_time(), but takes
        ROW_COPY_COST into account.
      - ha_read_and_compare_time(), like ha_read_and_copy_time(), but takes
        TIME_FOR_COMPARE into account.
      - ha_rnd_pos_time(). Read row with row id, taking ROW_COPY_COST
        into account.  This is used with filesort where we don't need
        to execute the WHERE clause again.
      - ha_keyread_time(), like keyread_time(), but takes
        optimizer_cache_cost into account.
      - ha_keyread_and_copy_time(), like ha_keyread_time(), but adds
        KEY_COPY_COST.
      - ha_key_scan_time(), like key_scan_time(), but takes
        optimizer_cache_cost into account.
      - ha_key_scan_and_compare_time(), like ha_key_scan_time(), but adds
        KEY_COPY_COST & TIME_FOR_COMPARE.
      
      I also added some setup costs for doing different types of scans and
      creating temporary tables (on disk and in memory). This encourages
      the optimizer to not use these for simple 'a few row' lookups if
      there are adequate key lookup strategies.
      - TABLE_SCAN_SETUP_COST, cost of starting a table scan.
      - INDEX_SCAN_SETUP_COST, cost of starting an index scan.
      - HEAP_TEMPTABLE_CREATE_COST, cost of creating in memory
        temporary table.
      - DISK_TEMPTABLE_CREATE_COST, cost of creating an on disk temporary
        table.
      
      When calculating the cost of fetching ranges, we had a cost of
      IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is
      now replaced with 'io_cost * KEY_LOOKUP_COST (1.0) *
      optimizer_cache_cost', which matches the cost we use for 'ref' and
      other key lookups. The effect is that the cost is now a bit higher
      when we have many ranges for a key.
      
      Almost all calculations with TIME_FOR_COMPARE are now done in
      best_access_path(). 'JOIN::read_time' now includes the full
      cost for finding the rows in the table.
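
      Since 'JOIN::read_time' now carries the full cost, the optimizer's final
      estimate can be checked after running a query via the standard status
      variable (usage sketch, hypothetical tables):

      select * from t1 join t2 on t2.a = t1.a;
      show status like 'last_query_cost';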
      
      In the result files, many of the changes are now again close to what
      they were before the "Update cost for hash and cached joins" commit,
      as that commit didn't fix the filter cost (too complex to do
      everything in one commit).
      
      The above changes showed a lot of inconsistencies in
      optimizer cost calculation. The main objective with the other changes
      was to make the calculations as similar (and accurate) as possible and
      to make different plans more comparable.
      
      Detailed list of changes:
      
      - Calculate index_only_cost consistently and correctly for all scan
        and ref accesses. The row fetch_cost and index_only_cost now
        take into account clustered keys, covered keys and index-only
        accesses.
      - cost_for_index_read now returns both full cost and index_only_cost
      - Fixed cost calculation of get_sweep_read_cost() to match other
        similar costs. This is based on the assumption that data is more
        often stored on an SSD than on a hard disk.
      - Replaced constant 2.0 with new define TABLE_SCAN_SETUP_COST.
      - Some scan cost estimates did not take into account
        TIME_FOR_COMPARE. Now all scan costs take this into
        account. (main.show_explain)
      - Added session variable optimizer_cache_hit_ratio (default 50%). By
        adjusting this one can reduce or increase the cost of index or direct
        record lookups. The effect of the default is that key lookups are now
        a bit cheaper than before. See usage of 'optimizer_cache_cost' in
        handler.h.
      - JOIN_TAB::scan_time() did not take into account index-only scans,
        which produced a wrong cost when an index scan was used. Changed
        JOIN_TAB::scan_time() to take into consideration clustered and
        covered keys. The values are now cached and we only have to call
        this function once. Other calls are changed to use the cached
        values.  Function renamed to JOIN_TAB::estimate_scan_time().
      - Fixed that most index cost calculations are done the same way and
        closer to 'range' calculations. The cost is now lower than
        before for small data sets and higher for large data sets as we take
        into account how many keys are read (main.opt_trace_selectivity,
        main.limit_rows_examined).
      - Ensured that index_scan_cost() ==
        range(scan_of_all_rows_in_table_using_one_range) +
        MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there
        is choice of doing a full index scan and a range-index scan over
        almost the whole table then index scan will be preferred (no
        range-read setup cost).  (innodb.innodb, main.show_explain,
        main.range)
        - Fixed that EQ_REF and REF take into account clustered and covered
          keys.  This changes some plans to use covered or clustered indexes
          as these are much cheaper.  (main.subselect_mat_cost,
          main.stat_tables_innodb, main.limit_rows_examined)
        - Rowid filter setup cost and filter compare cost now take into
          account fetching and checking the rowid (KEY_NEXT_FIND_COST).
          (main.partition_pruning heap.heap_btree main.log_state)
        - Added KEY_NEXT_FIND_COST to
          Range_rowid_filter_cost_info::lookup_cost to account for the time
          to find and check the next key value against the container
        - Introduced ha_keyread_time(rows) that takes into account finding
          the next row and copying the key value to 'record'
          (KEY_COPY_COST).
        - Introduced ha_key_scan_time() for calculating an index scan over
          all rows.
        - Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
        - Added index_only_fetch_cost() as a convenience function to
          OPT_RANGE.
        - keyread_time() cost is slightly reduced to prefer shorter keys.
          (main.index_merge_myisam)
        - All of the above caused some index_merge combinations to be
          rejected because of cost (main.index_intersect). In some cases
          'ref' was replaced with index_merge because of the low
          cost calculation of get_sweep_read_cost().
        - Some index usage moved from PRIMARY to a covering index.
          (main.subselect_innodb)
      - Changed cost calculation of filter to take KEY_LOOKUP_COST and
        TIME_FOR_COMPARE into account.  See sql_select.cc::apply_filter().
        Filter parameters and costs are now written to optimizer_trace.
      - Don't use matchings_records_in_range() to try to estimate the number
        of filtered rows for ranges. The reason is that we want to ensure
        that 'range' is calculated similarly to 'ref'. There is also more work
        needed to calculate the selectivity when using ranges and
        filtering.  This causes the filtering column in EXPLAIN EXTENDED to be
        100.00 for some cases where range cannot use filtering.
        (main.rowid_filter)
      - Introduced ha_scan_time() that takes into account the CPU cost of
        finding the next row and copying the row from the engine to
        'record'. This causes the cost of table scans to slightly increase
        and some tests to change their plan from ALL to RANGE or ALL to ref.
        (innodb.innodb_mysql, main.select_pkeycache)
        In a few cases where the scan time of very small tables has a lower
        cost than a ref or range, things changed from ref/range to ALL.
        (main.myisam, main.func_group, main.limit_rows_examined,
        main.subselect2)
      - Introduced ha_scan_and_compare_time() which is like ha_scan_time()
        but also adds the cost of the where clause (TIME_FOR_COMPARE).
      - Added a small cost for creating a temporary table for
        materialization. This causes some very small tables to use a scan
        instead of materialization.
      - Added checking of the WHERE clause (TIME_FOR_COMPARE) of the
        accepted rows to ROR costs in get_best_ror_intersect()
      - Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
        to ensure that the 'Last_query_cost' status variable contains the
        same value as the one that was calculated by the optimizer.
      - Take avg_io_cost() into account in handler::keyread_time() and
        handler::read_time(). This should have no effect as it's 1.0 by
        default, except for heap that overrides these functions.
      - Some 'ref_or_null' accesses changed to 'range' because of cost
        adjustments (main.order_by)
      - Added scan type "scan_with_join_cache" for optimizer_trace. This is
        just to show in the trace what kind of scan was used.
      - When using 'scan_with_join_cache', take into account the number of
        preceding tables (as we have to restore all fields for every previous
        table combination when checking the WHERE clause).
        The new cost added is:
        (row_combinations * ROW_COPY_COST * number_of_cached_tables).
        This increases the cost of join buffering in proportion of the
        number of tables in the join buffer. One effect is that full scans
        are now done earlier as the cost is then smaller.
        (main.join_outer_innodb, main.greedy_optimizer)
      - Removed the usage of 'worst_seeks' in cost_for_index_read as it
        caused wrong plans to be created; it preferred JT_EQ_REF even if it
        would be much more expensive than a full table scan. A related
        issue was that worst_seeks only applied to full lookups, not to
        clustered or index-only lookups, which is not consistent. This
        caused some plans to use index scan instead of eq_ref (main.union)
      - Changed federated block size from 4096 to 1500, which is the
        typical size of an IO packet.
      - Added costs for reading rows to Federated. Needed as there is no
        caching of rows in the federated engine.
      - Added ha_innobase::rnd_pos_time() cost function.
      - A lot of extra things added to optimizer trace
        - More costs, especially for materialization and index_merge.
        - Made labels more uniform.
        - Fixed a lot of minor bugs.
        - Added 'trace_started()' around a lot of trace blocks.
        (A usage sketch for inspecting the trace follows this list.)
      - When calculating the ORDER BY with LIMIT cost for using an index,
        the cost did not take into account the number of row retrievals
        that have to be done or the cost of comparing the rows with the
        WHERE clause. The cost calculated would be just a fraction of
        the real cost. Now we calculate the cost as we do for ranges
        and 'ref'.
      - 'Using index for group-by' is used a bit more than before, as we
        now take into account the WHERE clause cost when comparing
        with 'ref' and prefer the method with fewer row combinations.
        (main.group_min_max).
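
      A usage sketch for inspecting the extra trace output (standard
      optimizer-trace workflow; table name hypothetical):

      set optimizer_trace='enabled=on';
      select * from t1 where a between 1 and 10;
      select json_detailed(json_extract(trace, '$**.considered_execution_plans'))
      from information_schema.optimizer_trace;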
      
      Bugs fixed:
      - Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans,
        like in optimize_straight_join() and greedy_search()
      - Fixed bug in save_explain_data where we could test for the wrong
        index when displaying 'Using index'. This caused some old plans to
        show 'Using index'.  (main.subselect_innodb, main.subselect2)
      - Fixed bug in get_best_ror_intersect() where 'min_cost' was not
        updated, and the cost we compared with was not the one that was
        used.
      - Fixed very wrong cost calculation for priority queues in
        check_if_pq_applicable(). (main.order_by now correctly uses priority
        queue)
      - When calculating the cost of EQ_REF or REF, we added the cost of
        comparing the WHERE clause with the found rows, not with all row
        combinations. This made ref and eq_ref be regarded as way too cheap
        compared to other access methods.
      - FORCE INDEX cost calculation didn't take into account clustered or
        covered indexes.
      - JT_EQ_REF cost was estimated as avg_io_cost(), which is half the
        cost of a JT_REF key. This may be true for the InnoDB primary key,
        but not for other unique keys or other engines. Now we use a handler
        function to calculate the cost, which allows us to handle
        clustered keys, covered keys and non-covered keys consistently.
      - ha_start_keyread() didn't call extra_opt() if keyread was already
        enabled but still changed the 'keyread' variable (which is wrong).
        Fixed by not doing anything if keyread is already enabled.
      - multi_range_read_info_cost() didn't take into account io_cost when
        calculating the cost of ranges.
      - fix_semijoin_strategies_for_picked_join_order() used the wrong
        record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH
        and SJ_OPT_LOOSE_SCAN.
      - Hash joins didn't provide a correct best_cost to the upper level,
        which meant that the cost for hash joins was more expensive than
        calculated in best_access_path (a difference of 10x * TIME_FOR_COMPARE).
        This is fixed in the new code because we now include the
        TIME_FOR_COMPARE cost in 'read_time'.
      
      Other things:
      - Added some 'if (thd->trace_started())' to speed up code
      - Removed the unused function Cost_estimate::is_zero()
      - Simplified testing of HA_POS_ERROR in get_best_ror_intersect().
        (No cost changes)
      - Moved ha_start_keyread() from join_read_const_table() to join_read_const()
        to enable keyread for all types of JT_CONST tables.
      - Made a few very short functions inline in handler.h
      
      Notes:
      - In main.rowid_filter the join order of order and lineitem is swapped.
        This is because the cost of doing a range fetch of lineitem (98 rows)
        is almost as big as the whole join of order,lineitem. The filtering
        will also ensure that we only have to do very small key fetches of
        the rows in lineitem.
      - main.index_merge_myisam had a few changes where we are now using
        fewer keys for index_merge. This is because index scans are now more
        expensive than before.
      - handler->optimizer_cache_cost is updated in ha_external_lock().
        This ensures that it is up to date per statement.
        Not an optimal solution (for locked tables), but should be ok for now.
      - 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take cost of
        filesort into consideration when table scan is chosen.
        (main.myisam_explain_non_select_all)
      - perfschema.table_aggregate_global_* has changed because an update
        on a table with 1 row will now use table scan instead of key lookup.
      
      TODO in upcoming commits:
      - Fix selectivity calculation for ranges with and without filtering and
        when there is a ref access but scan is chosen.
        For this we have to store the lowest known value for
        'accepted_records' in the OPT_RANGE structure.
      - Change that records_read does not include filtered rows.
      - test_if_cheaper_ordering() needs to be updated to properly calculate
        costs. This will fix tests like main.order_by_innodb,
        main.single_delete_update
      - Extend get_range_limit_read_cost() to take cost_for_index_read()
        into consideration if there were no quick keys. This will reduce
        the computed cost for ORDER BY with LIMIT in some cases.
        (main.innodb_ext_key)
      - Fix that we take into account selectivity when counting the number
        of rows we have to read when considering using an index table scan
        to resolve ORDER BY.
      - Add new calculation for rnd_pos_time() where we take into account the
        benefit of reading multiple rows from the same page.
  8. 25 Jan, 2023 1 commit
    • MDEV-28616 Crash when using derived table over union with order by clause · 4652260d
      Igor Babaev authored
      This bug manifested itself when the server processed a query containing
      a derived table over a union whose ORDER BY clause included a subquery
      with an unresolvable column reference. For such a query the server
      crashed when trying to resolve column references in the ORDER BY clause
      used by the union.
      For any union with an ORDER BY clause an extra SELECT_LEX structure is
      created and attached to the SELECT_LEX_UNIT structure of the union via
      the field fake_select_lex. The outer context for fake_select_lex must be
      the same as for the other selects of the union. If the union is used in
      the FROM list of a derived table then the outer context for
      fake_select_lex must be set to NULL, in line with the other selects of
      the union. This was not done, and it caused a crash when searching for a
      possible resolution of an unresolvable column reference that occurred in
      a subquery used in the ORDER BY clause.
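
      A sketch of the crashing query shape (hypothetical table; the subquery's
      column reference is unresolvable on purpose):

      create table t1 (a int);
      select * from
        (select a from t1
         union
         select a from t1
         order by (select b from t1)) as dt;  -- 'b' does not exist anywhere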
      
      Approved by Oleksandr Byelkin <sanja@mariadb.com>
  9. 01 Nov, 2022 1 commit
  10. 24 Oct, 2022 1 commit
  11. 27 Mar, 2020 1 commit
    • Updated optimizer costs in multi_range_read_info_const() and sql_select.cc · eb483c51
      Monty authored
      - multi_range_read_info_const now uses the new records_in_range interface
      - Added handler::avg_io_cost()
      - Don't calculate avg_io_cost() in get_sweep_read_cost if avg_io_cost is
        not 1.0.  In this case we trust the avg_io_cost() from the handler.
      - Changed test_quick_select to use TIME_FOR_COMPARE instead of
        TIME_FOR_COMPARE_IDX to align this with the rest of the code.
      - Fixed a bug when using test_if_cheaper_ordering where we didn't use
        keyread if the index was changed
      - Fixed a bug where we didn't use index only read when using order-by-index
      - Added keyread_time() to HEAP.
        The default keyread_time() was optimized for blocks and not suitable
        for HEAP. The effect was that HEAP preferred table scans over ranges
        for btree indexes.
      - Fixed get_sweep_read_cost() for HEAP tables
      - Ensured that range and ref have the same cost for simple ranges.
        Added a small cost (MULTI_RANGE_READ_SETUP_COST) to ranges to ensure
        we favor ref over range for simple queries (see the sketch after
        this list).
      - Fixed that matching_candidates_in_table() uses the same number of
        records as the rest of the optimizer
      - Added avg_io_cost() to JT_EQ_REF cost. This helps calculate the cost for
        HEAP and temporary tables better. A few tests changed because of this.
      - heap::read_time() and heap::keyread_time() adjusted to not add +1.
        This was to ensure that handler::keyread_time() doesn't give a
        higher cost for heap tables than for normal tables. One effect of
        this is that heap and derived tables stored in heap will prefer
        key access as this is now regarded as cheap.
      - Changed the cost for index read in sql_select.cc to match
        multi_range_read_info_const(). All index cost calculation is now
        done through one function.
      - 'ref' will now use quick_cost for keys if it exists. This is done
        so that for '=' ranges, 'ref' is preferred over 'range'.
      - scan_time() now takes avg_io_costs() into account
      - get_delayed_table_estimates() uses block_size and avg_io_cost()
      - Removed default argument to test_if_order_by_key(); simplifies code
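
      A sketch of the '=' range vs ref preference mentioned above
      (hypothetical table):

      create table t1 (a int, key(a));
      explain select * from t1 where a = 5;
      -- single-point '=' range: ref is preferred
      explain select * from t1 where a between 5 and 10;
      -- a real range: range access is used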
  12. 30 Apr, 2019 1 commit
  13. 15 May, 2018 1 commit
    • MDEV-12387 Push conditions into materialized subqueries · d3ff1333
      Galina Shalygina authored
      The logic and the implementation scheme are similar to those of
      MDEV-9197 Pushdown conditions into non-mergeable views/derived tables
      
      How the pushdown is made, shown by example:
      
      select * from t1
      where a>3 and b>10 and
       (a,b) in (select x,max(y) from t2 group by x);
      
      -->
      
      select * from t1
      where a>3 and b>10 and
        (a,b) in (select x,max(y)
                  from t2
                  where x>3
                  group by x
                  having max(y)>10);
      
      The implementation scheme:
      
      1. Search for the condition cond that depends only on the fields
         from the left part of the IN subquery (left_part)
      2. Find the fields F_group in the select of the right part of the
         IN subquery (right_part) that are used in the GROUP BY
      3. Extract from cond the condition cond_where that depends only on the
         fields from the left_part that stay at the same places in the left_part
         (have the same indexes) as the F_group fields in the projection of the
         right_part
      4. Transform cond_where so it can be pushed into the WHERE clause of the
         right_part and delete cond_where from cond
      5. Transform cond so it can be pushed into the HAVING clause of the right_part
      
      The optimization is made in the
      Item_in_subselect::pushdown_cond_for_in_subquery() and is controlled by the
      variable condition_pushdown_for_subquery.
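
      The optimization can be toggled with the usual optimizer_switch syntax:

      set optimizer_switch='condition_pushdown_for_subquery=off';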
      
      New test file in_subq_cond_pushdown.test is created.
      
      There are also some changes made for setup_jtbm_semi_joins().
      Now it is decomposed into two procedures: setup_degenerate_jtbm_semi_joins(),
      which is called before optimize_cond() for cond, and setup_jtbm_semi_joins(),
      which is called after optimize_cond().
      The new setup_jtbm_semi_joins() is written so that the result of its
      work is the same as if it were called before optimize_cond().
      
      The code that is common for pushdown into materialized derived tables
      and into materialized IN subqueries is factored out into
      pushdown_cond_for_derived(),
      Item_in_subselect::pushdown_cond_for_in_subquery() and
      st_select_lex::pushdown_cond_into_where_clause().
  14. 29 Mar, 2018 1 commit
  15. 23 Jan, 2018 1 commit
  16. 19 Jan, 2018 1 commit
  17. 02 Dec, 2017 1 commit
  18. 23 Jun, 2017 1 commit
    • Fixed the bug mdev-12845. · 9f362219
      Igor Babaev authored
      This patch fixes a serious flaw in the
      code that supports condition pushdown into
      materialized views / derived tables.
      
      If a predicate happened to contain a reference
      to a mergeable view / derived table and did
      not depend directly on the target materialized
      view / derived table, then the predicate was
      not considered a subject for pushdown to this
      view / derived table.
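
      A sketch of the previously missed case (hypothetical tables and view):

      create table t1 (a int);
      create table t2 (a int, b int);
      create view v1 as select a from t1;  -- mergeable view
      select * from (select a, max(b) as m from t2 group by a) as dt
      where dt.a in (select a from v1);
      -- the predicate references the mergeable view v1, but it should still
      -- be considered for pushdown into the materialized derived table dt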
  19. 14 Mar, 2017 1 commit
  20. 13 Mar, 2017 1 commit
  21. 09 Mar, 2017 1 commit
  22. 17 Feb, 2017 1 commit
    • Fixed bug mdev-9028. · f49375fd
      Igor Babaev authored
      This patch is actually a complement to the fix of bug mdev-6892.
      The procedure create_tmp_table() now must take into account
      Item_direct_refs that wrap up constant fields of derived tables/views
      that are used as inner tables in outer join operations.
  23. 10 Feb, 2017 1 commit
    • MDEV-10554: Assertion `!derived->first_select()->exclude_from_table_unique_test || derived->outer_select()->exclude_from_table_unique_test' failed in TABLE_LIST::set_check_merged() · ae3072c0
      Oleksandr Byelkin authored
      
      Mark selects that are excluded because of the merging of derived tables
      and views, and check for that mark.
  24. 12 Dec, 2016 1 commit
  25. 06 Dec, 2016 1 commit
  26. 01 Aug, 2016 1 commit
  27. 31 Mar, 2016 1 commit
  28. 06 Feb, 2016 1 commit
  29. 05 Feb, 2016 1 commit
    • MDEV-7827: Assertion `!table || (!table->read_set || bitmap_is_set(table->read_set, field_index))' failed in Field_long::val_str on EXPLAIN EXTENDED · 6ecf6d84
      Oleksandr Byelkin authored
      
      (Solution by Sergei Petrunia)
      It turned out that semi-join conditions were not iterated over when
      updating used tables. Now they are.
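
      A sketch of the statement shape that hit the assertion (hypothetical
      tables):

      create table t1 (a int);
      create table t2 (a int);
      explain extended
      select * from t1 where t1.a in (select t2.a from t2);  -- semi-join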
  30. 05 Sep, 2015 1 commit
  31. 31 Aug, 2015 1 commit
  32. 23 Apr, 2015 1 commit
    • MDEV-6892: WHERE does not apply · 20109712
      Oleksandr Byelkin authored
      Added taking into account the implicit dependence of a constant view
      field on a nullable table of a left join.
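
      A sketch of that dependence (hypothetical tables; v.c is the constant
      view field):

      create table t1 (a int);
      create table t2 (a int);
      create view v as select 1 as c, a from t2;
      select * from t1 left join v on v.a = t1.a
      where v.c is null;  -- c is constant in v, yet NULL when no t2 row matches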
      
      Fixed finding the real table to check whether it turned to NULL
      (materialized views and derived tables taken into account).
      
      Removed incorrect uninitialization.
  33. 14 Oct, 2014 1 commit
  34. 04 Jun, 2014 1 commit
  35. 04 Dec, 2013 1 commit
  36. 21 Oct, 2013 1 commit