1. 26 Aug, 2024 3 commits
    • Kristian Nielsen's avatar
      Fix sporadic test failure in rpl.rpl_create_drop_event · 7dc4ea56
      Kristian Nielsen authored
      Depending on timing, an extra event run could start just when the event
      scheduler is shut down and delay running until after the table has been
      dropped; this would cause the test to fail with a "table does not exist"
      error in the log.
      Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
      7dc4ea56
    • Kristian Nielsen's avatar
      Restore skipping rpl.rpl_mdev6020 under Valgrind · 33854d73
      Kristian Nielsen authored
      (Revert a change done by mistake when XtraDB was removed.)
      Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
      33854d73
    • Kristian Nielsen's avatar
      MDEV-34696: do_gco_wait() completes too early on InnoDB dict stats updates · b4c2e239
      Kristian Nielsen authored
      Before doing mark_start_commit(), check that there is no pending deadlock
      kill. If there is a pending kill, we won't commit (we will abort, roll back,
      and retry). Then we should not mark the commit as started, since that could
      potentially make the following GCO start too early, before we completed the
      commit after the retry.
      
      This condition could trigger in some corner cases, where InnoDB would
      temporarily take table/row locks that are released again immediately, not held
      until the transaction commits. This happens with dict_stats updates and
      possibly auto-increment locks.
      
      Such locks can be passed to thd_rpl_deadlock_check() and cause a deadlock
      kill to be scheduled in the background. But since the blocking locks are
      held only temporarily, they can be released before the background kill
      happens. This way, the kill can be delayed until after mark_start_commit()
      has been called. Thus we need to check the synchronous indication
      rgi->killed_for_retry, not just the asynchronous thd->killed.
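      A minimal, self-contained sketch of this ordering rule (the names here,
      such as WorkerState and pending_kill_for_retry, are illustrative
      stand-ins and not MariaDB's actual replication code):

      ```cpp
      // Sketch only: do not mark the commit as started when a deadlock kill
      // is already pending; abort, roll back and retry instead.
      #include <atomic>
      #include <cstdio>

      struct WorkerState
      {
        // Set synchronously when a deadlock kill has been scheduled for this
        // transaction (analogous to rgi->killed_for_retry described above).
        std::atomic<bool> pending_kill_for_retry{false};
        bool commit_started= false;
      };

      static void mark_start_commit(WorkerState &w) { w.commit_started= true; }

      // Returns true if the commit may proceed, false if we must roll back
      // and retry without letting the following GCO start early.
      static bool try_start_commit(WorkerState &w)
      {
        if (w.pending_kill_for_retry.load())
          return false;                       // pending kill: don't mark start
        mark_start_commit(w);
        return true;
      }

      int main()
      {
        WorkerState w;
        w.pending_kill_for_retry= true;       // simulate a scheduled kill
        std::printf("start commit: %s\n", try_start_commit(w) ? "yes" : "no");
      }
      ```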
      Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
      b4c2e239
  2. 21 Aug, 2024 3 commits
    • Monty's avatar
      MDEV-34043 Drastically slower query performance between CentOS (2sec) and Rocky (48sec) · 1f040ae0
      Monty authored
      One cause of the slowdown is that the ftruncate() call can be much
      slower on some systems. ftruncate() is called by Aria for internal
      temporary tables (tables created by the optimizer) when the upper level
      asks Aria to delete the previous result set. This is needed when some
      content from previous tables changes.
      
      I have now changed Aria so that for internal temporary tables we don't
      call ftruncate() anymore for maria_delete_all_rows().
      
      I also had to update the Aria repair code to use the logical datafile
      size and not the on-disk datafile size, which may contain data from a
      previous result set.  The repair code is called to create indexes for
      the internal temporary table after it is filled.
      I also replaced a call to mysql_file_size() with a pwrite() in
      _ma_bitmap_create_first().
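      A rough, self-contained sketch of the idea (TempTable and its members
      are hypothetical stand-ins, not Aria's real structures): for internal
      temporary tables, delete-all-rows only resets the logical data length,
      and the repair/index-build step scans that logical length rather than
      the on-disk file size.

      ```cpp
      // Sketch only: skip ftruncate() for internal temporary tables and rely
      // on the logical data length instead of the on-disk file size.
      #include <cstdint>
      #include <cstdio>

      struct TempTable
      {
        bool is_internal_temp;       // optimizer-created internal temp table?
        uint64_t logical_data_len;   // data written by the current result set
        uint64_t file_size_on_disk;  // may still hold a previous result set
      };

      static void delete_all_rows(TempTable &t)
      {
        t.logical_data_len= 0;       // previous result set is logically gone
        if (!t.is_internal_temp)
          t.file_size_on_disk= 0;    // stand-in for the (slow) ftruncate(fd, 0)
      }

      // Repair/index build must use the logical length, never the raw on-disk
      // size, which can contain stale data after delete_all_rows().
      static uint64_t bytes_to_scan_for_repair(const TempTable &t)
      {
        return t.logical_data_len;
      }

      int main()
      {
        TempTable t{true, 4096, 4096};
        delete_all_rows(t);
        std::printf("scan %llu bytes (file still %llu bytes on disk)\n",
                    (unsigned long long) bytes_to_scan_for_repair(t),
                    (unsigned long long) t.file_size_on_disk);
      }
      ```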
      
      Reviewer: Sergei Petrunia <sergey@mariadb.com>
      Tester: Dave Gosselin <dave.gosselin@mariadb.com>
      1f040ae0
    • Oleksandr Byelkin's avatar
      fix MDEV-34771 & MDEV-34776 · eadf0f63
      Oleksandr Byelkin authored
      removed duplicated methods
      eadf0f63
  3. 20 Aug, 2024 1 commit
  4. 19 Aug, 2024 5 commits
    • Oleksandr Byelkin's avatar
      MDEV-34776 Assertion failure in Item_string::do_build_clone · ae02999c
      Oleksandr Byelkin authored
      Added missing methods to a child of Item_string.
      ae02999c
    • Oleksandr Byelkin's avatar
      MDEV-34771 Types mismatch when cloning items causes debug assertion · fccfdc28
      Oleksandr Byelkin authored
      Missing methods added to Item_bin_string
      fccfdc28
    • Monty's avatar
      Sort result from table_statistics and index_statistics · db8ab4ac
      Monty authored
      This is needed as the order of rows is not deterministic,
      especially in future versions of table statistics.
      db8ab4ac
    • Monty's avatar
      Revert "mtr: remove not_valgrind_build" · e51d55a6
      Monty authored
      The original code is correct.
      
      Valgrind and ASan binaries should be built with a specialized version of
      mem_root that makes it easier to find memory overwrites.
      This is what the BUILD scripts do.
      
      The specialized mem_root code allocates a new block for every allocation,
      which is visible to any test that depends on the default malloc
      size and usage.
      e51d55a6
    • Dmitry Shulga's avatar
      MDEV-34718: Trigger doesn't work correctly with bulk update · ba5482ff
      Dmitry Shulga authored
      Running an UPDATE statement in PS mode with positional
      parameter(s) bound to an array of actual values (that is,
      prepared to be run in bulk mode) results in incorrect behaviour
      in the presence of an UPDATE trigger that also executes an UPDATE
      statement. The same is true for handling a DELETE statement in
      the presence of a DELETE trigger. Typically, the visible effect of
      such incorrect behaviour is a wrong number of
      updated/deleted rows in the target table. Additionally, in the case
      of an UPDATE statement, the number of modified rows and the status
      message returned by the statement contain wrong information about
      the number of modified rows.
      
      The reason for the incorrect number of updated/deleted rows is that
      the data structure used for binding positional arguments to their
      actual values is stored in THD (this is thd->bulk_param) and reused
      when processing every INSERT/UPDATE/DELETE statement. As a result,
      the actual values bound to the top-level UPDATE/DELETE statement are
      consumed by the DML statements run from the triggers' bodies.
      
      To fix the issue, thd->bulk_param is temporarily reset to nullptr
      before invoking triggers and its original value is restored once
      they finish execution.
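      A minimal sketch of this save/restore pattern using a scope guard;
      Session and bulk_param below are illustrative stand-ins for THD and
      thd->bulk_param, not the actual server code:

      ```cpp
      // Sketch only: clear the bulk-parameter pointer while trigger bodies
      // run, then restore it for the top-level statement.
      #include <cstdio>

      struct Session
      {
        void *bulk_param= nullptr;   // array binding for PS bulk execution
      };

      class BulkParamGuard
      {
        Session &s;
        void *saved;
      public:
        explicit BulkParamGuard(Session &s_arg) : s(s_arg), saved(s_arg.bulk_param)
        {
          s.bulk_param= nullptr;     // triggers must not consume the bound array
        }
        ~BulkParamGuard() { s.bulk_param= saved; }   // restore on scope exit
      };

      static void fire_update_triggers(Session &s)
      {
        std::printf("trigger sees bulk_param = %p\n", s.bulk_param);  // null
      }

      int main()
      {
        Session s;
        int bound_values[3]= {1, 2, 3};
        s.bulk_param= bound_values;  // top-level UPDATE runs in bulk mode
        {
          BulkParamGuard guard(s);
          fire_update_triggers(s);
        }
        std::printf("after triggers bulk_param = %p\n", s.bulk_param);
      }
      ```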
      
      The second part of the problem, the wrong number of affected rows
      reported via the Connector/C API, is caused by the fact that the
      diagnostics area is reused by the original DML statement and by a
      statement invoked by a trigger. This has to be taken into account
      when finalizing the state of the diagnostics area on statement
      completion.
      
      Important remark: when the macro DBUG_OFF is defined, a call to the method
        Diagnostics_area::reset_diagnostics_area()
      resets the data members
        m_affected_rows, m_statement_warn_count.
      The values of these Diagnostics_area data members are used when
      sending OK and EOF packets. If a DML statement is executed in PS bulk
      mode, such a reset results in wrong affected-rows values being sent
      to the client whenever the statement fires triggers. So these data
      members are reset only when the current statement being processed
      is not run in bulk mode.
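      A tiny sketch of that rule (illustrative names, not the actual
      Diagnostics_area implementation): the counters are cleared only when
      the statement is not running in bulk mode.

      ```cpp
      // Sketch only: preserve affected-rows totals across trigger statements
      // when the current statement runs in PS bulk mode.
      #include <cstdint>

      struct DiagArea
      {
        uint64_t m_affected_rows= 0;
        uint64_t m_statement_warn_count= 0;

        void reset_diagnostics_area(bool in_bulk_mode)
        {
          if (!in_bulk_mode)               // only reset outside bulk mode
          {
            m_affected_rows= 0;
            m_statement_warn_count= 0;
          }
        }
      };

      int main()
      {
        DiagArea da;
        da.m_affected_rows= 3;             // rows changed so far
        da.reset_diagnostics_area(/*in_bulk_mode=*/true);
        return da.m_affected_rows == 3 ? 0 : 1;   // totals preserved
      }
      ```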
      ba5482ff
  5. 15 Aug, 2024 2 commits
  6. 14 Aug, 2024 1 commit
  7. 13 Aug, 2024 1 commit
    • Thirunarayanan Balathandayuthapani's avatar
      MDEV-14231 MATCH() AGAINST( IN BOOLEAN MODE), results mismatch · b304ec30
      Thirunarayanan Balathandayuthapani authored
      - Added plugin_debug.test, multiple_index.test to innodb_fts suite
      from mysql-5.7.
      
      - commit c5b28e55 removes the warning about
      InnoDB rebuilding the table to add FTS_DOC_ID
      
      - The multiple_index test case has MATCH(a) values smaller
      than in MySQL because ROLLBACK updates stat_n_rows.
      
      - The st_mysql_ftparser_boolean_info structure conveys boolean
      metadata to the MySQL search engine for every word in the query.
      This structure lacks a position value to store the correct
      offset of every word, so phrase search queries in the plugin_debug
      test case in boolean mode with the simple parser return
      wrong results.
      b304ec30
  8. 12 Aug, 2024 4 commits
    • Julius Goryavsky's avatar
      MDEV-30686: Endless loop when trying to establish connection · 2c5d8376
      Julius Goryavsky authored
      With wsrep_sst_rsync, the node goes into an endless loop when trying
      to establish a connection to the donor for IST/SST if the database
      is bound to a specific IP address, not "*".
      
      This commit fixes this problem. Separate tests are not
      required - the problem can occur in normal configurations
      on a number of systems when selecting a bind address other
      than "*", especially on FreeBSD and with IPv6 addresses.
      2c5d8376
    • Jan Lindström's avatar
      MDEV-34594 : Assertion `client_state.transaction().active()' failed in · cd8b8bb9
      Jan Lindström authored
      int wsrep_thd_append_key(THD*, const wsrep_key*, int, Wsrep_service_key_type)
      
      CREATE TABLE [SELECT|REPLACE SELECT] is CTAS, and the idea was to
      force ROW format for it. However, this was not correctly enforced
      and keys were appended before the wsrep transaction was started.
      
      In THD::decide_logging_format we should force the statement binlog
      format to ROW in the CTAS case and produce a warning if the binlog
      format used was not ROW.
      
      In ha_innodb::update_row we should not append keys, similarly to
      ha_innodb::write_row, if sql_command is SQLCOM_CREATE_TABLE.
      Improved error logging in ::write_row, ::update_row and ::delete_row
      if the wsrep key append fails.
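      A self-contained sketch of the two decisions above (the enum values and
      function names are simplified stand-ins for THD::decide_logging_format
      and the InnoDB handler methods):

      ```cpp
      // Sketch only: force ROW binlog format for CTAS, and skip wsrep key
      // appends in update_row when the statement is CREATE TABLE ... SELECT.
      #include <cstdio>

      enum SqlCommand { SQLCOM_UPDATE, SQLCOM_CREATE_TABLE };
      enum BinlogFormat { BINLOG_STMT, BINLOG_ROW };

      struct Stmt
      {
        SqlCommand sql_command;
        bool is_ctas;                 // CREATE TABLE ... [SELECT|REPLACE SELECT]
        BinlogFormat binlog_format;
      };

      static void decide_logging_format(Stmt &s)
      {
        if (s.is_ctas && s.binlog_format != BINLOG_ROW)
        {
          std::puts("warning: forcing ROW binlog format for CTAS");
          s.binlog_format= BINLOG_ROW;   // enforce ROW before keys are appended
        }
      }

      static bool wsrep_append_key(const Stmt &) { return true; }   // stub

      static void update_row(const Stmt &s)
      {
        if (s.sql_command == SQLCOM_CREATE_TABLE)
          return;                       // CTAS: do not append keys here
        if (!wsrep_append_key(s))
          std::puts("error: wsrep key append failed");   // improved logging
      }

      int main()
      {
        Stmt s{SQLCOM_CREATE_TABLE, true, BINLOG_STMT};
        decide_logging_format(s);
        update_row(s);
      }
      ```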
      Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
      cd8b8bb9
    • Alexander Barkov's avatar
      MDEV-34376 Wrong data types when mixing an utf8 *TEXT column and a short binary · 0e273510
      Alexander Barkov authored
      A mixture of a multi-byte *TEXT column and a short binary column
      produced a column that was too large.
      For example, COALESCE(tinytext_utf8mb4, short_varbinary)
      produced a BLOB column instead of the expected TINYBLOB.
      
      - Adding a virtual method Type_all_attributes::character_octet_length(),
        returning max_length by default.
      - Overriding Item_field::character_octet_length() to extract
        the octet length from the underlying Field.
      - Overriding Item_ref::character_octet_length() to extract
        the octet length from the referenced Item (e.g. VIEW fields).
      - Fixing Type_numeric_attributes::find_max_octet_length() to
        take the octet length using the new method character_octet_length()
        instead of accessing max_length directly (see the sketch below).
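      A minimal sketch of this pattern (the class names are simplified
      stand-ins for the Item hierarchy, not the real server classes): the
      base accessor falls back to max_length, a field-backed item reports the
      real field octet length, and the aggregation uses the accessor.

      ```cpp
      // Sketch only: a virtual octet-length accessor so that aggregation
      // (e.g. for COALESCE) sees the true octet length of field operands.
      #include <algorithm>
      #include <cstdint>
      #include <cstdio>
      #include <vector>

      struct Item
      {
        uint32_t max_length= 0;
        virtual uint32_t character_octet_length() const { return max_length; }
        virtual ~Item() = default;
      };

      struct FieldItem : Item          // stand-in for Item_field
      {
        uint32_t field_octets;
        explicit FieldItem(uint32_t octets) : field_octets(octets)
        { max_length= octets; }
        uint32_t character_octet_length() const override { return field_octets; }
      };

      // Stand-in for Type_numeric_attributes::find_max_octet_length().
      static uint32_t find_max_octet_length(const std::vector<const Item*> &items)
      {
        uint32_t max_octets= 0;
        for (const Item *item : items)
          max_octets= std::max(max_octets, item->character_octet_length());
        return max_octets;
      }

      int main()
      {
        FieldItem tinytext_utf8mb4(255);   // TINYTEXT is at most 255 octets
        FieldItem short_varbinary(16);
        std::printf("result octet length: %u\n",
                    (unsigned) find_max_octet_length({&tinytext_utf8mb4,
                                                      &short_varbinary}));
      }
      ```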
      0e273510
    • Ian Gilfillan's avatar
      Update sponsors · c83ba513
      Ian Gilfillan authored
      c83ba513
  9. 09 Aug, 2024 1 commit
  10. 08 Aug, 2024 6 commits
  11. 07 Aug, 2024 2 commits
    • Nikita Malyavin's avatar
      MDEV-34632 Assertion failed in handler::assert_icp_limitations · 25e2d0a6
      Nikita Malyavin authored
      Assertion `table->field[0]->ptr >= table->record[0] &&
      table->field[0]->ptr <= table->record[0] + table->s->reclength' failed in
      handler::assert_icp_limitations.
      
      table->move_fields has some limitations:
      1. It cannot be used in cascade
      2. It should always have a restoring pair.
      
      Rule 1 is covered by assertions in handler::assert_icp_limitations
      and handler::ptr_in_record (commit 30894fe9).
      
      Rule 2 should be manually maintained with care. Hopefully, the rule 1 assertions
      may sometimes help as well.
      
      In ha_myisam::repair, both rules are broken. table->move_fields is used
      asymmetrically there: it is set on every param->fix_record call
      (i.e. in compute_vcols) but is restored only once, at the end of repair.
      
      The reason for updating the field pointers on every call is that
      compute_vcols can (supposedly) be called in parallel, that is,
      with the same table but different records.
      
      The condition to "unmove" the pointers in ha_myisam::restore_vcos_after_repair
      is incorrect, when stored vcols are available, and myisam stores a VIRTUAL field
      if it's the only field in the table (the record cannot be of zero length).
      
      This patch solves the problem by "unmoving" the pointers symmetrically, in
      compute_vcols. That is, both rules will be maintained.
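      A minimal sketch of the symmetric pattern (simplified stand-ins, not the
      actual MyISAM code): the pointers are moved to the record being computed
      and "unmoved" in the same function before it returns.

      ```cpp
      // Sketch only: keep move_fields() and its restoring call paired inside
      // compute_vcols(), instead of relying on a single restore at the end of
      // the whole repair.
      #include <cstdio>

      struct Table
      {
        unsigned char *record0;       // table->record[0]
        unsigned char *field_ptr;     // where the field currently points
        void move_fields(unsigned char *to) { field_ptr= to; }
      };

      static void compute_vcols(Table &t, unsigned char *rec)
      {
        t.move_fields(rec);           // point fields at the record under repair
        // ... compute virtual column values into 'rec' here ...
        t.move_fields(t.record0);     // "unmove" symmetrically before returning
      }

      int main()
      {
        unsigned char record0[16]= {0}, repair_buf[16]= {0};
        Table t{record0, record0};
        compute_vcols(t, repair_buf);
        std::printf("fields restored to record[0]: %s\n",
                    t.field_ptr == t.record0 ? "yes" : "no");
      }
      ```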
      25e2d0a6
  12. 04 Aug, 2024 6 commits
  13. 03 Aug, 2024 2 commits
    • Oleg Smirnov's avatar
      MDEV-34683 Types mismatch when cloning items causes debug assertion · cf202dec
      Oleg Smirnov authored
      New runtime type diagnostics (MDEV-34490) detected that the classes
      Item_func_eq, Item_default_value and Item_date_literal_for_invalid_dates
      incorrectly return an instance of their ancestor classes when being
      cloned. This commit fixes that.
      
      Additionally, it fixes a bug in Item_func_case_simple::do_build_clone()
      which led to an endless loop of cloning function calls.
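      A short, self-contained sketch of the cloning rule (a simplified
      hierarchy standing in for the Item classes): every descendant must
      override the clone hook to return its own type, and the override must
      construct the derived object rather than call itself recursively.

      ```cpp
      // Sketch only: cloning must produce the derived type, not the ancestor.
      #include <cstdio>
      #include <memory>
      #include <typeinfo>

      struct Expr
      {
        virtual std::unique_ptr<Expr> do_build_clone() const
        { return std::make_unique<Expr>(*this); }
        virtual ~Expr() = default;
      };

      struct EqExpr : Expr            // stand-in for Item_func_eq
      {
        // Without this override, cloning an EqExpr silently produced a plain
        // Expr: exactly the type mismatch the runtime diagnostic caught.
        std::unique_ptr<Expr> do_build_clone() const override
        { return std::make_unique<EqExpr>(*this); }
      };

      int main()
      {
        EqExpr eq;
        std::unique_ptr<Expr> copy= eq.do_build_clone();
        std::printf("clone type: %s\n", typeid(*copy).name());
      }
      ```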
      
      Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
      cf202dec
    • Oleksandr Byelkin's avatar
      lost in editing line added · 7a5b8bf0
      Oleksandr Byelkin authored
      7a5b8bf0
  14. 01 Aug, 2024 2 commits
    • Thirunarayanan Balathandayuthapani's avatar
      MDEV-29010 Table cannot be loaded after instant ALTER · 37119cd2
      Thirunarayanan Balathandayuthapani authored
      Reason:
      ======
      - InnoDB fails to load the instant alter table metadata from the
      clustered index while loading the table definition.
      The reason is that the InnoDB metadata BLOB has a column length
      that exceeds the maximum fixed-length column size.
      
      Fix:
      ===
      - InnoDB should treat the long fixed-length column as a variable-length
      field that needs external storage while initializing
      the field map for the instant alter operation.
      37119cd2
    • Galina Shalygina's avatar
      MDEV-23983: Crash caused by query containing constant having clause · d072a296
      Galina Shalygina authored
      Before this patch the crash occurred when a single-row dataset was used
      and Item::remove_eq_conds() was called for HAVING. This function is not
      supposed to be called after the elimination of multiple equalities.
      
      To fix this problem, Item::val_int() is used instead of
      Item::remove_eq_conds(). In this case the optimizer tries to evaluate
      the condition for the single-row dataset and discovers impossible
      HAVING immediately, so the execution phase is skipped.
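      A minimal sketch of the approach (illustrative names only, not the
      optimizer's real data structures): a constant HAVING condition is simply
      evaluated, and a false result marks the query as impossible so execution
      can be skipped.

      ```cpp
      // Sketch only: evaluate a constant HAVING condition val_int()-style and
      // flag "impossible HAVING" when it is false.
      #include <cstdio>

      struct ConstCond
      {
        long long value;                        // already-known constant value
        bool is_constant() const { return true; }
        long long val_int() const { return value; }
      };

      static bool optimize_having(const ConstCond &having, bool &impossible)
      {
        if (having.is_constant() && having.val_int() == 0)
        {
          impossible= true;                     // no row can match: skip execution
          return false;
        }
        return true;
      }

      int main()
      {
        ConstCond having{0};                    // e.g. HAVING 1=2
        bool impossible= false;
        optimize_having(having, impossible);
        std::printf("impossible HAVING: %s\n", impossible ? "yes" : "no");
      }
      ```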
      
      Approved by Igor Babaev <igor@maridb.com>
      d072a296
  15. 31 Jul, 2024 1 commit