1. 04 Feb, 2024 1 commit
    • MDEV-31361 Wrong result on 2nd execution of PS for query with derived table · 6fadbf8e
      Igor Babaev authored
      This bug led to wrong result sets returned by the second execution of
      prepared statements whose selects used mergeable derived tables pushed
      into an external engine. Such derived tables are always materialized.
      The decision that they have to be materialized is taken late, in the
      function mysql_derived_optimized(). For regular derived tables this
      decision is usually taken at the prepare phase; however, in some cases
      it is made in mysql_derived_optimized() too. As can be seen in the
      code of mysql_derived_fill(), for such a derived table it is critical
      to adjust its translation table to point at the fields of the
      temporary table used for its materialization, and this must be done
      after each refill of the derived table. The same actions are needed
      for derived tables pushed into external engines.
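
      A minimal sketch of that retuning step, with illustrative stand-in
      types (these are not MariaDB's actual Item/Field_translator classes):

        #include <cstddef>

        struct Item {};                      // stand-in for an Item_field
        struct TmpField { Item item; };      // field of the materialization temp table
        struct Translation { Item *item; };  // one translation-table entry

        // Must run after every refill, so that the translation entries
        // point into the current materialization temp table.
        void retune_translation(Translation *t, TmpField *tmp, std::size_t n)
        {
          for (std::size_t i= 0; i < n; i++)
            t[i].item= &tmp[i].item;
        }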
      
      Approved by Oleksandr Byelkin <sanja@mariadb.com>
  2. 02 Feb, 2024 3 commits
    • Make innodb_ext_key test stable: use innodb_stable_estimates.inc · 9d0b79c5
      Sergei Petrunia authored
      @@ -314,7 +314,7 @@
       select straight_join * from t0, part ignore index (primary)
       where p_partkey=t0.a and p_size=1;
       id	select_type	table	type	possible_keys	key	key_len	ref	rows	Extra
      -1	SIMPLE	t0	ALL	NULL	NULL	NULL	NULL	5	Using where
      +1	SIMPLE	t0	ALL	NULL	NULL	NULL	NULL	6	Using where
       1	SIMPLE	part	eq_ref	i_p_size	i_p_size	9	const,dbt3_s001.t0.a	1
    • MDEV-33314: Crash in calculate_cond_selectivity_for_table() with many columns · 5972f5c2
      Sergei Petrunia authored
      Variant#3: moved the logic out of create_key_parts_for_pseudo_indexes
      
      The Range Analyzer (the get_mm_tree functions) can only process up to
      MAX_KEY=64 indexes. The problem was that
      calculate_cond_selectivity_for_table() used it to estimate
      selectivities for columns, and since a table can have more than
      MAX_KEY columns, it would invoke the Range Analyzer with more than
      MAX_KEY "pseudo-indexes".

      Fixed by making calculate_cond_selectivity_for_table() run the Range
      Analyzer with at most MAX_KEY pseudo-indexes at a time. If there are
      more columns to process, the Range Analyzer is invoked multiple times.
      
      Also made this change:
      -    param.real_keynr[0]= 0;
      +    MEM_UNDEFINED(&param.real_keynr, sizeof(param.real_keynr));
      
      The Range Analyzer should make no use of real_keynr when it is run
      with pseudo-indexes.
    • MDEV-32893 mariadb-backup is not considering O/S user when --user option is omitted · 78662dda
      Alexander Barkov authored
      mariadb-backup:
      
      Adding a function get_os_user() to detect the OS user name
      when the user name is not specified, so that mariadb-backup:
      - works the way MariaDB client tools do
      - matches its --help page, which says:
      
        -u, --user=name This option specifies the username used when
        connecting to the server, if that's not the current user.
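
      A minimal POSIX sketch of such a helper; the patch's actual
      get_os_user() may differ, and a Windows build would need a separate
      branch (this sketch is an assumption, not the committed code):

        #include <pwd.h>
        #include <unistd.h>
        #include <cstdlib>

        // Detect the OS user name: prefer the password-database entry for
        // the effective uid, fall back to the USER environment variable.
        static const char *get_os_user()
        {
          if (const struct passwd *pw= getpwuid(geteuid()))
            return pw->pw_name;
          return std::getenv("USER");  // may be NULL
        }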
  3. 01 Feb, 2024 2 commits
  4. 31 Jan, 2024 2 commits
    • MDEV-25370 Update for portion changes autoincrement key in bi-temp table · 68c1fbfc
      Nikita Malyavin authored
      According to the standard, the autoincrement column (i.e. the
      *identity column*) should be advanced by each insert implicitly made
      by UPDATE/DELETE ... FOR PORTION.

      This is very inconvenient in several notable cases. Consider a
      WITHOUT OVERLAPS key with an autoinc column:
      id int auto_increment, unique(id, p without overlaps)

      An update or delete with FOR PORTION creates the expectation that id
      will remain unchanged in such a case.

      The standard's IDENTITY resembles MariaDB's AUTO_INCREMENT, but the
      generation rules differ in many ways. For example, there is also the
      notion of an autoincrement index, which is bound to the autoincrement
      field.

      We will define our own generation rule for the PORTION OF operations
      involving AUTO_INCREMENT:
      * If the autoincrement index contains a WITHOUT OVERLAPS
      specification, then a new value should not be generated; otherwise it
      should.

      Apart from WITHOUT OVERLAPS there is also another notable case,
      referred to by the reporter: a unique key that has an autoincrement
      column and a field from the period specification:
        id int auto_increment, unique(id, s), period for p(s, e)

      For this case no exception is made, and the autoincrement rules
      proceed according to the standard (i.e. the value will be advanced
      on implicit inserts).
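
      A hypothetical condensation of the rule in code (the structure and
      names are illustrative, not the server's actual internals):

        // Should UPDATE/DELETE ... FOR PORTION generate a new
        // autoincrement value for the rows it implicitly inserts?
        struct AutoincIndex
        {
          bool without_overlaps;  // declared UNIQUE(..., p WITHOUT OVERLAPS)
        };

        bool portion_generates_new_autoinc(const AutoincIndex &idx)
        {
          // WITHOUT OVERLAPS on the autoincrement index: keep the old id.
          // Any other shape, e.g. unique(id, s) over a period field:
          // follow the standard and advance on implicit inserts.
          return !idx.without_overlaps;
        }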
    • MDEV-33341 innodb.undo_space_dblwr test case fails with Unknown Storage Engine InnoDB · 21f18bd9
      Thirunarayanan Balathandayuthapani authored
      Reason:
      ======
      The undo_space_dblwr test case fails if the first page of the undo
      tablespace is not flushed before restarting the server. While
      restarting, InnoDB fails to detect the first page of the undo
      tablespace in the doublewrite buffer.

      Fix:
      ===
      Use the "ib_log_checkpoint_avoid_hard" debug sync point to avoid a
      checkpoint, and make sure to flush the dirtied page before killing
      the server.

      innodb_make_page_dirty(): Fails to set the
      srv_fil_make_page_dirty_debug variable.
  5. 30 Jan, 2024 1 commit
  6. 26 Jan, 2024 1 commit
  7. 24 Jan, 2024 1 commit
    • MDEV-32837 long unique does not work like unique key when using replace · 97fcafb9
      Alexander Barkov authored
      write_record(), when performing REPLACE, has an optimization:
      - if the unique violation happened on the last unique key, then do
        UPDATE
      - otherwise, do DELETE+INSERT

      This patch changes the way of detecting whether this optimization
      can be applied when the table has long (hash-based) unique
      (i.e. UNIQUE..USING HASH) constraints.

      Problem:

      The old condition did not take into account that
      TABLE_SHARE and TABLE see long uniques differently:
      - TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
      - TABLE sees them as usual non-unique indexes
      So the old condition could erroneously decide that the UPDATE
      optimization is possible when there are still some unique hash
      constraints in the table.

      Fix:

      - If the current key is a long unique, it now works as follows:

        UPDATE can be done if the current long unique is the last
        long unique and there are no in-engine (normal) uniques.

      - For in-engine uniques nothing changes; it still works as before:

        if the current key is an in-engine (normal) unique,
        UPDATE can be done if it is the last normal unique.
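
      A hypothetical sketch of the fixed decision (the flags are
      illustrative, not write_record()'s actual variables):

        struct ReplaceKeyState
        {
          bool current_is_long_unique;    // violated key is UNIQUE..USING HASH
          bool is_last_long_unique;       // no further long uniques to check
          bool is_last_in_engine_unique;  // no further normal uniques to check
          unsigned in_engine_uniques;     // number of in-engine unique keys
        };

        // Can REPLACE take the UPDATE shortcut instead of DELETE+INSERT?
        bool can_update(const ReplaceKeyState &k)
        {
          if (k.current_is_long_unique)
            return k.is_last_long_unique && k.in_engine_uniques == 0;
          return k.is_last_in_engine_unique;  // pre-patch behaviour, unchanged
        }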
  8. 23 Jan, 2024 9 commits
  9. 22 Jan, 2024 2 commits
    • MDEV-33165 Incorrect result interceptor passed to mysql_explain_union() · 11738822
      Rex authored
      Statements affected by this bug are all SQL statements that
      1) are prefixed with "EXPLAIN", and
      2) have a lower-level join structure created for a union subquery.

      A bug in select_describe() passed an incorrect "result" object to
      mysql_explain_union(), resulting in unpredictable behaviour and
      out-of-context calls.
      
      Reviewed by: Oleksandr Byelkin, sanja@mariadb.com
    • MDEV-33283: Binlog Checksum is Zeroed by Zlib if Part of Event Data is Empty · 207c8578
      Brandon Nesterenko authored
      An existing binlog checksum can be overwritten with 0 when a NULL
      payload is written while Zlib is used for the computation: calling
      Zlib's crc32 with an empty buffer returns the initial value of an
      incremental CRC computation, which is 0.

      This patch changes Log_event_writer::write_data() to exit
      immediately if there is nothing to write, thereby bypassing the
      checksum computation. This follows the pattern of
      Log_event_writer::encrypt_and_write(), which also exits immediately
      if there is no data to write.
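
      A small standalone demonstration of the underlying Zlib behaviour
      (illustrative, not the server's writer code; compile with -lz):

        #include <cstdio>
        #include <zlib.h>

        int main()
        {
          const unsigned char data[]= "event bytes";
          uLong crc= crc32(0L, Z_NULL, 0);          // initial value: 0
          crc= crc32(crc, data, sizeof(data) - 1);  // running checksum
          // Feeding a NULL buffer does not preserve the running value:
          // zlib returns the required *initial* CRC value, i.e. 0.
          uLong after_null= crc32(crc, Z_NULL, 0);
          std::printf("running=%08lx after_null=%08lx\n", crc, after_null);
          return 0;
        }

      Hence the fix: skip the checksum call entirely when there is no data
      to write.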
      
      Reviewed By:
      ============
      Andrei Elkin <andrei.elkin@mariadb.com>
  10. 19 Jan, 2024 1 commit
  11. 17 Jan, 2024 1 commit
  12. 16 Jan, 2024 1 commit
  13. 15 Jan, 2024 1 commit
    • MDEV-32968 InnoDB fails to restore tablespace first page from doublewrite buffer when page is empty · caad34df
      Thirunarayanan Balathandayuthapani authored
      
      - InnoDB fails to find the space id in page 0 of the tablespace. In
      that case, InnoDB can use the doublewrite buffer to recover page 0
      and write it into the file.

      - buf_dblwr_t::init_or_load_pages(): Loads only the pages which are
      valid (page LSN >= checkpoint LSN). To do that, InnoDB has to open
      the redo log before the system tablespace and read the latest
      checkpoint information.

      recv_dblwr_t::find_first_page():
      1) Iterate the doublewrite buffer pages and find the 0th page
      2) Read the tablespace flags and space id from the 0th page
      3) Read the 1st, 2nd and 3rd pages from the tablespace file and
      compare their space id with the space id stored in the doublewrite
      buffer
      4) If they match, write the 0th page into the file
      5) Return the space which matches the pages from the file
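
      A simplified sketch of that search, with illustrative stand-in
      structures (not InnoDB's actual types):

        #include <cstdint>
        #include <vector>

        struct DblwrPage { std::uint32_t page_no, space_id; /* + contents */ };

        // Scan the doublewrite buffer for a copy of page 0 whose space id
        // matches the space id read from pages 1-3 of the tablespace file.
        const DblwrPage *find_first_page(const std::vector<DblwrPage> &dblwr,
                                         std::uint32_t space_id_from_file)
        {
          for (const DblwrPage &p : dblwr)
            if (p.page_no == 0 && p.space_id == space_id_from_file)
              return &p;  // a valid copy: safe to write back into the file
          return nullptr;
        }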
      
      SysTablespace::read_lsn_and_check_flags(): Remove the retry logic
      for validating the first page. After restoring the first page from
      the doublewrite buffer, assign the tablespace flags by reading the
      first page.

      recv_recovery_read_max_checkpoint(): Reads the maximum checkpoint
      information from the log file.

      recv_recovery_from_checkpoint_start(): Avoid reading the checkpoint
      header information from the log file.

      Datafile::validate_first_page(): Throw an error if validation of the
      first page fails.
  14. 14 Jan, 2024 1 commit
  15. 13 Jan, 2024 1 commit
  16. 12 Jan, 2024 3 commits
  17. 11 Jan, 2024 4 commits
  18. 10 Jan, 2024 5 commits