1. 05 May, 2022 1 commit
    • Deb: Run wrap-and-sort -av · 2c529414
      Otto Kekäläinen authored
      Sort and organize the Debian packaging files.
      
      Also revert 4d032694, which was done in vain.
      For the sake of CI we do want working upgrades from previous 10.9
      releases, and that is doable with a different kind of fix in a later commit.
  2. 02 May, 2022 2 commits
  3. 29 Apr, 2022 9 commits
    • Fix comment. · 9841a808
      Sergei Petrunia authored
    • Merge MDEV-27021, MDEV-10000 into 10.9 · 94dc0bff
      Sergei Petrunia authored
      MDEV-27021: Extend SHOW EXPLAIN to support SHOW ANALYZE [FORMAT=JSON]
      MDEV-10000: EXPLAIN FOR CONNECTION syntax support
    • MDEV-28268: Server crashes in Expression_cache_tracker::fetch_current_stats · 8db9aa49
      Sergei Petrunia authored
      (cherry-pick into preview-10.9-MDEV-27021-explain tree)
      
      Expression_cache_tmptable object uses an Expression_cache_tracker object
      to report the statistics.
      
      In the common scenario, the Expression_cache_tmptable destructor sets
      tracker->cache=NULL. The tracker object survives after the expression
      cache is deleted, and one may call cache_tracker->fetch_current_stats()
      on it with no harm.

      However, a degenerate cache with no parameters does not set
      tracker->cache=NULL in the Expression_cache_tmptable destructor, which
      results in an attempt to use freed data in the
      cache_tracker->fetch_current_stats() call.
      
      Fixed by setting tracker->cache to NULL and wrapping the assignment into
      a function.
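      As an illustration of the shape of the fix (a minimal standalone sketch;
      ExprCache and CacheTracker are simplified stand-ins, not the real MariaDB
      classes), routing every destructor path through one detach helper keeps
      the tracker's back-pointer from dangling:

        #include <cstdio>

        struct CacheTracker {
          const void *cache= nullptr;           // back-pointer to the owning cache
          void fetch_current_stats() const
          {
            // Safe even after the cache is gone, as long as cache was reset.
            std::printf("cache %s\n", cache ? "present" : "already freed");
          }
        };

        class ExprCache {
          CacheTracker *tracker;
        public:
          explicit ExprCache(CacheTracker *t) : tracker(t) { tracker->cache= this; }

          // The fix, in miniature: one helper disconnects the tracker, and it is
          // called from every destruction path, not only from the "common" one.
          void disconnect_tracker() { if (tracker) tracker->cache= nullptr; }

          ~ExprCache() { disconnect_tracker(); }
        };

        int main()
        {
          CacheTracker tracker;
          {
            ExprCache cache(&tracker);          // degenerate or not, same cleanup
          }                                     // destructor always detaches
          tracker.fetch_current_stats();        // no use-after-free
        }
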
    • MDEV-28201: Server crashes upon SHOW ANALYZE/EXPLAIN FORMAT=JSON · 3f68c216
      Sergei Petrunia authored
      - Describe the lifetime of EXPLAIN data structures in
        sql_explain.h:ExplainDataStructureLifetime.
      
      - Make Item_field::set_field() call set_refers_to_temp_table()
        when it refers to a temp. table.
      - Introduce the QT_DONT_ACCESS_TMP_TABLES flag for Item::print.
        It directs Item_field::print not to try to access the
        temp table.
      - Introduce Explain_query::notify_tables_are_closed()
        and call it right before the query closes its tables.
      - Make Explain data structures' print_explain_json() methods
        accept a "no_tmp_tbl" parameter, which means passing
        QT_DONT_ACCESS_TMP_TABLES when printing items.
      - Make Show_explain_request::call_in_target_thread() not call
        set_current_thd(). This wasn't needed as the code inside
        lex->print_explain() uses output->thd anyway. output->thd
        refers to the SHOW command's THD object.
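      A simplified sketch of the new printing rule (hypothetical, self-contained
      types rather than the real Item_field machinery): when the caller passes
      the "don't access tmp tables" flag, print() falls back to the stored name
      instead of dereferencing the temp-table field:

        #include <cstdint>
        #include <iostream>
        #include <string>

        enum print_flags : uint32_t { QT_DONT_ACCESS_TMP_TABLES= 1u << 0 };

        struct TmpTableField {                  // stands in for a field of an
          std::string value() const { return "42"; }   // internal temp table
        };

        struct ItemField {
          std::string name;                     // always safe to print
          const TmpTableField *tmp_field;       // may already be closed/freed
          bool refers_to_temp_table;

          void print(std::ostream &out, uint32_t flags) const
          {
            if (refers_to_temp_table && (flags & QT_DONT_ACCESS_TMP_TABLES))
            {
              out << name;                      // do not touch the temp table
              return;
            }
            out << name << '=' << tmp_field->value();
          }
        };

        int main()
        {
          TmpTableField f;
          ItemField item{"t1.a", &f, /*refers_to_temp_table=*/true};
          item.print(std::cout, QT_DONT_ACCESS_TMP_TABLES);   // prints "t1.a"
          std::cout << '\n';
        }
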
    • MDEV-28124 Server crashes in Explain_aggr_filesort::print_json_members · 02c3babd
      Oleg Smirnov authored
      SHOW EXPLAIN/ANALYZE FORMAT=JSON tries to access items that have already been
      freed by a call to free_items() during THD::cleanup_after_query().
      The solution is to disallow APC calls including SHOW EXPLAIN/ANALYZE
      just before the call to free_items().
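      The essential point is the ordering; a small hedged sketch (with a toy
      ApcTarget, not the real APC classes) of "stop serving SHOW EXPLAIN/ANALYZE
      requests before the items they would read are freed":

        #include <atomic>
        #include <cstdio>
        #include <vector>

        struct Item { int dummy; };

        // Toy stand-in for the async-procedure-call target of a running query.
        struct ApcTarget {
          std::atomic<bool> enabled{true};
          void disable() { enabled= false; }
          bool serve_show_explain(const std::vector<Item*> &items)
          {
            if (!enabled)
              return false;                    // late request is refused, no crash
            std::printf("explaining %zu items\n", items.size());
            return true;
          }
        };

        int main()
        {
          ApcTarget apc;
          std::vector<Item*> items{new Item{1}, new Item{2}};

          apc.disable();                       // 1) stop accepting SHOW EXPLAIN/ANALYZE
          for (Item *i : items)                // 2) only then free the items
            delete i;
          items.clear();

          apc.serve_show_explain(items);       // safely rejected
        }
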
    • MDEV-27021 Add explicit indication of SHOW EXPLAIN/ANALYZE. · a0475cb9
      Oleg Smirnov authored
      1. Add an explicit indication that the output is produced by the
      SHOW EXPLAIN/ANALYZE FORMAT=JSON command.
      2. Remove the useless "r_total_time_ms" field from SHOW ANALYZE FORMAT=JSON
      output when no timed statistics have been gathered.
      3. Add "r_query_time_in_progress_ms" to the output of SHOW ANALYZE FORMAT=JSON.
    • d1a1ad4c
      Oleg Smirnov authored
    • MDEV-27021 Implement SHOW ANALYZE command · e7fcd496
      Oleg Smirnov authored
    • MDEV-10000 Add EXPLAIN [FORMAT=JSON] FOR CONNECTION syntax support · 32868483
      Oleg Smirnov authored
      EXPLAIN FOR CONNECTION is a MySQL-compatible syntax for SHOW EXPLAIN.
      This commit also adds support for FORMAT=JSON to SHOW EXPLAIN,
      so the possible options to get JSON-formatted output are:
      - SHOW EXPLAIN FORMAT=JSON FOR $con
      - EXPLAIN FORMAT=JSON FOR CONNECTION $con
  4. 28 Apr, 2022 2 commits
    • MDEV-28435: rpl.rpl_mysqlbinlog_slave_consistency fails intermittently on tables comparison · 51b28b24
      Brandon Nesterenko authored
      Problem:
      ========
      The test logic checked the wrong condition to validate that the
      slave had caught up with the master. Specifically, it waited for
      the IO and SQL thread stages to be “Waiting for master to send
      event” and “Slave has read all relay log; waiting for more
      updates”, respectively. The problem exposed by this MDEV is that
      these are also the initial slave states before reading data from
      the primary (whereas the intended state was having already read
      all available events from the primary and now waiting for new
      events). This made the MTR test validate data that it had not yet
      received, and thereby fail.
      
      Solution:
      ========
      Instead of using the IO/SQL thread states, use the existing helper
      functions save_master_gtid.inc and sync_with_master_gtid.inc. Note
      that the test result file also needed to be updated to reflect
      this fix.
      
      Special thanks to Angelique Sklavounos for pointing out that
      --stop-position was not specified in any buildbot failures, as this
      led to an IF block in the MTR test that was the source of the test
      failure.
      
      Reviewed By
      ============
      Andrei Elkin <andrei.elkin@mariadb.com>
    • Merge 10.8 into 10.9 · 504a3b32
      Marko Mäkelä authored
  5. 27 Apr, 2022 1 commit
  6. 26 Apr, 2022 16 commits
    • Merge 10.6 into 10.7 · 638afc4a
      Marko Mäkelä authored
    • MDEV-26753 Assertion state == TRX_STATE_PREPARED ||... failed · 2c005261
      Marko Mäkelä authored
      dict_stats_save(): Do not attempt to commit an already committed
      transaction.
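      Conceptually (a standalone sketch with a toy Trx type, not InnoDB's trx_t),
      the fix amounts to guarding the commit so that it is skipped when the
      transaction has already been committed:

        #include <cassert>

        // Toy transaction with an explicit state.
        struct Trx {
          enum class State { ACTIVE, COMMITTED } state= State::ACTIVE;
          void commit()
          {
            assert(state == State::ACTIVE);    // committing twice would assert
            state= State::COMMITTED;
          }
        };

        // Guarded call site: only commit if the transaction is still active.
        void save_stats(Trx &trx)
        {
          /* ... persist the statistics ... */
          if (trx.state == Trx::State::ACTIVE)
            trx.commit();                      // skip an already committed trx
        }

        int main()
        {
          Trx trx;
          trx.commit();                        // committed earlier on some path
          save_stats(trx);                     // must not attempt a second commit
        }
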
    • MDEV-26217 Failing assertion: list.count > 0 in ut_list_remove or Assertion `lock->trx == this' failed in dberr_t trx_t::drop_table · 2ca11234
      Marko Mäkelä authored
      
      This follows up the previous fix in
      commit c3c53926 (MDEV-26554).
      
      ha_innobase::delete_table(): Work around the insufficient
      metadata locking (MDL) during DML operations by acquiring exclusive
      InnoDB table locks on all child tables. Previously, this was only
      done on TRUNCATE and ALTER.
      
      ibuf_delete_rec(), btr_cur_optimistic_delete(): Do not invoke
      lock_update_delete() during change buffer operations.
      The revised trx_t::commit(std::vector<pfs_os_file_t>&) will
      hold exclusive lock_sys.latch while invoking fil_delete_tablespace(),
      which in turn may invoke ibuf_delete_rec().
      
      dict_index_t::has_locking(): A new predicate, replacing the dummy
      !dict_table_is_locking_disabled(index->table). Used for skipping lock
      operations during ibuf_delete_rec().
      
      trx_t::commit(std::vector<pfs_os_file_t>&): Release the locks
      and remove the table from the cache while holding exclusive
      lock_sys.latch.
      
      trx_t::commit_in_memory(): Skip release_locks() if dict_operation holds.
      
      trx_t::commit(): Reset dict_operation before invoking commit_in_memory()
      via commit_persist().
      
      lock_release_on_drop(): Release locks while lock_sys.latch is
      exclusively locked.
      
      lock_table(): Add a parameter for a pointer to the table.
      We must not dereference the table before a lock_sys.latch has
      been acquired. If the pointer to the table does not match the table
      at that point, the table is invalid and DB_DEADLOCK will be returned.
      
      row_ins_foreign_check_on_constraint(): Improve the checks.
      Remove a bogus DB_LOCK_WAIT_TIMEOUT return that was needed
      before commit c5fd9aa5 (MDEV-25919).
      
      row_upd_check_references_constraints(),
      wsrep_row_upd_check_foreign_constraints(): Simplify checks.
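      The lock_table() change boils down to "acquire the latch first, then
      validate the table pointer"; a hedged standalone sketch with toy types,
      not the real lock_sys:

        #include <cstdio>
        #include <mutex>

        struct Table { const char *name; };

        enum class dberr { SUCCESS, DEADLOCK };

        struct LockSys {
          std::mutex latch;                    // stands in for lock_sys.latch
          const Table *registered= nullptr;    // table currently known to the cache
        } lock_sys;

        // Take the latch before trusting or dereferencing the table pointer.
        // If the pointer no longer matches what the cache knows, the table was
        // dropped concurrently and the caller gets an error back.
        dberr lock_table(const Table *table)
        {
          std::lock_guard<std::mutex> g(lock_sys.latch);
          if (table != lock_sys.registered)
            return dberr::DEADLOCK;            // table vanished: report DB_DEADLOCK
          std::printf("locked %s\n", table->name);
          return dberr::SUCCESS;
        }

        int main()
        {
          Table t{"child"};
          lock_sys.registered= &t;
          lock_table(&t);                      // ok
          lock_sys.registered= nullptr;        // concurrent DROP TABLE
          if (lock_table(&t) == dberr::DEADLOCK)
            std::puts("DB_DEADLOCK returned");
        }
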
    • Vladislav Vaintroub authored
    • Federico Razzoli authored
    • Federico Razzoli authored
    • Federico Razzoli authored
    • Merge 10.5 into 10.6 · e135edec
      Marko Mäkelä authored
    • MDEV-15250 fixup: Remove MY_GNUC_PREREQ · 7725f870
      Marko Mäkelä authored
      The macro MY_GNUC_PREREQ() was used for testing for some minor
      GCC 4 versions before GCC 4.8.5, which is the oldest version
      that supports C++11, on which we have depended ever since
      commit d9613b75
    • MDEV-15250 UPSERT during ALTER TABLE results in 'Duplicate entry' error for alter · 7c0b9c60
      Thirunarayanan Balathandayuthapani authored
      - InnoDB should avoid the bulk insert operation when the table has active
      DDL, because bulk insert writes only one undo log record (TRX_UNDO_EMPTY)
      while the logging of concurrent DML happens at commit time, which parses
      the undo log records to get the values and operations.

      - Removed ROW_T_EMPTY, ROW_OP_EMPTY and their associated functions,
      and also the test case that tried to log ROW_OP_EMPTY
      when the table has active DDL.
    • MDEV-28319: Assertion `cur_step->type & JSON_PATH_KEY' failed in json_find_path · 43fa8e0b
      Rucha Deodhar authored
      Analysis: When trying to find a path and handling the match for the path,
      the value at the current index in array_counters is not set to 0. This causes
      a wrong current step value, which eventually causes a wrong cur_step->type value.
      Fix: Set the value at the current index in array_counters to 0.
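      A minimal sketch of the fix (a toy path-matching helper, not the real
      json_find_path()): the array counter for the nesting level being entered
      must be reset to 0 instead of inheriting a stale value:

        #include <cstdio>

        enum { MAX_DEPTH= 8 };

        // Enter one nesting level of a (toy) JSON path match. Without the
        // reset, the counter keeps whatever a previously matched sibling left
        // behind, so the wrong step ends up being selected later.
        void enter_level(int level, int array_counters[MAX_DEPTH])
        {
          array_counters[level]= 0;     // the fix: this level starts counting at 0
        }

        int main()
        {
          int array_counters[MAX_DEPTH];
          for (int i= 0; i < MAX_DEPTH; i++)
            array_counters[i]= 7;       // stale values from an earlier match

          enter_level(2, array_counters);
          std::printf("counter at level 2 starts at %d\n", array_counters[2]);
        }
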
    • MDEV-26695: Number of an invalid row is not calculated for table value constructor · ee5966c7
      Rucha Deodhar authored

      Analysis: The counter is not incremented while sending rows for a table value
      constructor, so row_number assumes the default value (0 in this case).
      Fix: Increment the counter so that it does not keep the default value.
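      A minimal sketch (a toy row-sending loop, not the real select_result code)
      of the missing increment: the row counter must advance for every row a
      table value constructor sends, otherwise errors are reported against row 0:

        #include <cstdio>
        #include <vector>

        struct Counters { unsigned long long row_number= 0; };

        // Send the rows of a (toy) table value constructor, e.g. VALUES (10),(20),(30),
        // bumping the row counter so a later error can say "at row N" instead of 0.
        void send_values(const std::vector<int> &rows, Counters &c)
        {
          for (int v : rows)
          {
            ++c.row_number;                 // the fix: count the row being sent
            std::printf("row %llu: %d\n", c.row_number, v);
          }
        }

        int main()
        {
          Counters c;
          send_values({10, 20, 30}, c);
          std::printf("last row number: %llu\n", c.row_number);
        }
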
    • MDEV-28350: Test failing on buildbot with UBSAN · 4730a698
      Rucha Deodhar authored
      Analysis: There were two kinds of failing tests on buildbot with UBSAN.
      1) runtime error: signed integer overflow and
      2) runtime error: load of value is not valid value for type
      The signed integer overflow was occurring because the addition of two
      integers (size of the json array + item number in the array) was overflowing
      in json_path_parts_compare. This overflow happens because a->n_item_end
      wasn't set.
      The second error was occurring because c_path->p.types_used is not
      initialized but the value is used later on to check for a negative path index.
      Fix: For the signed integer overflow, use a->n_item_end only in the case
      of a range so that it is set.
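      A hedged sketch of the first failure (standalone arithmetic, not the real
      json_path_parts_compare()): n_item_end is only meaningful when a range was
      actually parsed, so it must not take part in the index arithmetic otherwise:

        #include <cstdio>
        #include <limits>

        struct PathStep {
          int n_item;       // start index of an array slot or range
          int n_item_end;   // end index; only set when a range "[N to M]" was parsed
          bool is_range;
        };

        // Match an item index against a step. Mirroring the fix, n_item_end is
        // read only when is_range is true; in the original bug an unset
        // n_item_end entered the "array size + item number" addition and the
        // signed int overflowed.
        bool step_matches(const PathStep &s, int array_size, int item)
        {
          int end= s.is_range ? s.n_item_end : s.n_item;
          // Negative indexes count from the end of the array.
          int lo= s.n_item < 0 ? array_size + s.n_item : s.n_item;
          int hi= end < 0 ? array_size + end : end;
          return item >= lo && item <= hi;
        }

        int main()
        {
          // n_item_end is deliberately garbage; it is never read because
          // is_range is false.
          PathStep step{1, std::numeric_limits<int>::max(), /*is_range=*/false};
          std::printf("matches: %d\n", step_matches(step, 4, 1));
        }
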
    • MDEV-28326: Server crashes in json_path_parts_compare · 3716eaff
      Rucha Deodhar authored
      Analysis: When comparing json paths, the array_sizes variable is
      NULL at the beginning. Adding an offset to this NULL pointer while
      recursively calling json_path_parts_compare() to handle a
      double wildcard is undefined behaviour, and the array_sizes
      variable eventually becomes non-NULL (points to some address).
      This eventually results in a crash.
      Fix: If the array_sizes variable is NULL, then pass NULL recursively as well.
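      The undefined behaviour here is plain pointer arithmetic on NULL; a small
      standalone sketch (a toy recursive comparer, not the real function) of
      passing NULL down the recursion instead of offsetting it:

        #include <cstdio>

        // Recursively compare path parts; array_sizes may legitimately be NULL.
        // Offsetting a NULL pointer (array_sizes + 1) is undefined behaviour,
        // so the fix is to keep passing NULL down the recursion.
        int compare_parts(const int *array_sizes, int depth)
        {
          if (depth == 0)
            return 0;
          const int *next= array_sizes ? array_sizes + 1 : nullptr;   // the fix
          return compare_parts(next, depth - 1);
        }

        int main()
        {
          std::printf("%d\n", compare_parts(nullptr, 3));   // safe: stays NULL
        }
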
    • MDEV-28377: galera_as_slave_nonprim bind: Address already in use · cad792c6
      Julius Goryavsky authored
      This commit fixes a crash reported as MDEV-28377 and a number
      of other crashes in automated tests with mtr that are related
      to broken .cnf files in galera and galera_3nodes suites, which
      happened when automatically migrating MDEV-26171 from 10.3 to
      subsequent higher versions.
    • MDEV-27033: Remove version suffix from Debian packages · 375b8f40
      Tuukka Pasanen authored
      Remove the version suffix from Debian packages (for example mariadb-server-10.9),
      because installing a suffixed package removes the older version
      of the package even if it is suffixed (for example mariadb-server-10.7).

      This also makes Debian package management easier in future MariaDB
      version iterations because there is no need to stack
      Conflicts/Breaks/Replaces parameters in every new major release.
  7. 25 Apr, 2022 7 commits
    • MDEV-15250 UPSERT during ALTER TABLE results in 'Duplicate entry' error for alter · 4b80c11f
      Thirunarayanan Balathandayuthapani authored
      - InnoDB DDL results in `Duplicate entry' if concurrent DML throws
      duplicate key error. The following scenario explains the problem
      
      connection con1:
        ALTER TABLE t1 FORCE;
      
      connection con2:
        INSERT INTO t1(pk, uk) VALUES (2, 2), (3, 2);
      
      In connection con2, InnoDB throws the 'DUPLICATE KEY' error because
      of the unique index. The alter operation will throw the error when
      applying the concurrent DML log.
      
      - Inserting a duplicate key into the unique index logs the insert
      operation for online ALTER TABLE. When the insertion fails, the
      transaction rolls back, which leads to logging a delete
      operation for online ALTER TABLE.
      While applying the insert log entries, the alter operation
      encounters the 'DUPLICATE KEY' error.

      - To avoid the above fake duplicate scenario, InnoDB should
      not write any log for online ALTER TABLE before the DML transaction
      commits.

      - The user thread that performs DML can apply the online log if
      InnoDB ran out of online log and the index is marked as completed.
      Set the online log error if the apply phase encountered any error;
      in that case the logs of all other indexes are cleared and the
      newly added indexes are marked as corrupted.
      
      - Removed the old online-log code that was part of DML operations.

      commit_inplace_alter_table(): Applies the online log
      for the last batch of secondary index log and frees
      the log for the completed index.
      
      trx_t::apply_online_log: Set to true while writing the undo
      log if the modified table has active DDL
      
      trx_t::apply_log(): Apply the DML changes to online DDL tables
      
      dict_table_t::is_active_ddl(): Returns true if the table
      has an active DDL
      
      dict_index_t::online_log_make_dummy(): Assign a dummy value
      for the clustered index online log to indicate that the secondary
      indexes are being rebuilt.
      
      dict_index_t::online_log_is_dummy(): Check whether the online
      log has dummy value
      
      ha_innobase_inplace_ctx::log_failure(): Handle the apply log
      failure for online DDL transaction
      
      row_log_mark_other_online_index_abort(): Clear out all other
      online index log after encountering the error during
      row_log_apply()
      
      row_log_get_error(): Get the error happened during row_log_apply()
      
      row_log_online_op(): Applies the online log if the index is
      completed and the log ran out of memory. Returns false if applying the log fails.
      
      UndorecApplier: Introduced a class to maintain the undo log
      record, latched undo buffer page, parse the undo log record,
      maintain the undo record type, info bits and update vector
      
      UndorecApplier::get_old_rec(): Get the correct version of the
      clustered index record that was modified by the current undo
      log record
      
      UndorecApplier::clear_undo_rec(): Clear the undo log related
      information after applying the undo log record
      
      UndorecApplier::log_update(): Handle the update, delete undo
      log and apply it on online indexes
      
      UndorecApplier::log_insert(): Handle the insert undo log
      and apply it on online indexes
      
      UndorecApplier::is_same(): Check whether the given roll pointer
      is generated by the current undo log record information
      
      trx_t::rollback_low(): Set apply_online_log for the transaction
      if the partially rolled-back transaction has any active DDL.
      
      prepare_inplace_alter_table_dict(): After allocating the online
      log, InnoDB creates the fulltext common tables. A fulltext index
      cannot be built online, so the dead code for online log removal
      was removed.
      
      Thanks to Marko Mäkelä for providing the initial prototype and
      Matthias Leich for testing the issue patiently.
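      At a very high level (a hedged, self-contained sketch; UndoApplier and Trx
      below are toys, not InnoDB's UndorecApplier and trx_t), the new design is:
      the DML transaction only remembers that the table had active DDL, and at
      commit time it replays its own undo records onto the indexes being built
      online:

        #include <cstdio>
        #include <string>
        #include <vector>

        enum class UndoType { INSERT, UPDATE, DELETE_MARK };

        struct UndoRec { UndoType type; std::string key; };

        // Toy stand-in for UndorecApplier: takes one undo record and replays it
        // on the indexes that are being built online.
        struct UndoApplier {
          void apply(const UndoRec &rec)
          {
            const char *what= rec.type == UndoType::INSERT ? "insert"
                            : rec.type == UndoType::UPDATE ? "update"
                                                           : "delete";
            std::printf("replay %s %s on online indexes\n", what, rec.key.c_str());
          }
        };

        struct Trx {
          bool apply_online_log= false;        // set while writing undo, if DDL is active
          std::vector<UndoRec> undo;

          void record(UndoType t, std::string key, bool table_has_active_ddl)
          {
            undo.push_back({t, std::move(key)});
            if (table_has_active_ddl)
              apply_online_log= true;          // nothing goes to the online log yet
          }

          void commit()
          {
            if (apply_online_log)              // only now replay the changes
            {
              UndoApplier applier;
              for (const UndoRec &rec : undo)
                applier.apply(rec);
            }
            undo.clear();
          }
        };

        int main()
        {
          Trx trx;
          trx.record(UndoType::INSERT, "pk=2", /*table_has_active_ddl=*/true);
          trx.record(UndoType::INSERT, "pk=3", true);
          trx.commit();        // changes reach the online indexes only at commit
        }
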
    • 1a66e3f8
      Marko Mäkelä authored
    • Remove redundant innodb-page_compression_ tests · cba13079
      Marko Mäkelä authored
      These were replaced with innodb.innodb_page_compressed
      in commit 35095c45
    • Clean up the page_compressed tests · 35095c45
      Marko Mäkelä authored
      It suffices to test compression with one record. Restarting the
      server is not really needed; we are exercising the log based recovery
      in other tests, such as mariabackup.page_compression_level.
    • Cleanup: Remove IF_VALGRIND · 4faef6e2
      Marko Mäkelä authored
      The purpose of the compress() wrapper my_compress_buffer() was twofold:
      silence Valgrind warnings about uninitialized memory access before
      zlib 1.2.4, and have PERFORMANCE_SCHEMA instrumentation of some zlib
      related memory allocation. Because of PERFORMANCE_SCHEMA, we cannot
      trivially replace my_compress_buffer() with compress().
      
      az_open(): Remove a crc32() call. Any CRC of the empty string is 0.
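      For context, a hedged sketch of such a wrapper (only zlib's compress() and
      compressBound() are real API here; the accounting hook stands in for the
      PERFORMANCE_SCHEMA instrumentation that keeps the wrapper from being a
      trivial compress() call):

        #include <cstdio>
        #include <cstring>
        #include <vector>
        #include <zlib.h>

        // Hypothetical accounting hook: this is the part that plain compress()
        // cannot provide.
        static size_t accounted_bytes= 0;
        static void account_compression(size_t in, size_t out)
        {
          accounted_bytes+= out;
          std::printf("compressed %zu -> %zu bytes\n", in, out);
        }

        // Sketch of a my_compress_buffer()-style wrapper around zlib.
        static bool wrapped_compress(const unsigned char *src, size_t len,
                                     std::vector<unsigned char> &dst)
        {
          uLongf dst_len= compressBound((uLong) len);
          dst.resize(dst_len);
          if (compress(dst.data(), &dst_len, src, (uLong) len) != Z_OK)
            return false;
          dst.resize(dst_len);
          account_compression(len, dst_len);
          return true;
        }

        int main()
        {
          const char *msg= "hello hello hello hello";
          std::vector<unsigned char> out;
          if (wrapped_compress((const unsigned char *) msg, std::strlen(msg), out))
            std::printf("total accounted: %zu\n", accounted_bytes);
        }
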
    • Do not disable --symbolic-links on Valgrind (or MSAN) · 232af0c7
      Marko Mäkelä authored
      The option --symbolic-links was originally disabled by default under
      Purify (and later Valgrind) in 51156c5a
      without any explanation.
    • MDEV-27690 Crash on `CHARACTER SET csname COLLATE DEFAULT` in column definition · 4ed30b2a
      Alexander Barkov authored
      Adding a 10.6-specific test from the MDEV.
  8. 22 Apr, 2022 2 commits
    • MDEV-27094 Debug builds include useless InnoDB "disabled" options · c009ce7d
      Marko Mäkelä authored
      This is a backport of commit 4489a89c
      in order to remove the test innodb.redo_log_during_checkpoint
      that would cause trouble in the DBUG subsystem invoked by
      safe_mutex_lock() via log_checkpoint(). Before
      commit 7cffb5f6
      these mutexes were of a different type.
      
      The following options were introduced in
      commit 2e814d47 (mariadb-10.2.2)
      and have little use:
      
      innodb_disable_resize_buffer_pool_debug had no effect even in
      MariaDB 10.2.2 or MySQL 5.7.9. It was introduced in
      mysql/mysql-server@5c4094cf4971eebab89da4ee4ae92c71f69cd524
      to work around a problem that was fixed in
      mysql/mysql-server@2957ae4f990bf3aed25822b0ce15d3ccad0b54b6
      (but the parameter was not removed).
      
      innodb_page_cleaner_disabled_debug and innodb_master_thread_disabled_debug
      are only used by the test innodb.redo_log_during_checkpoint
      that will be removed as part of this commit.
      
      innodb_dict_stats_disabled_debug is only used by that test,
      and it is redundant because one could simply use
      innodb_stats_persistent=OFF or the STATS_PERSISTENT=0 attribute
      of the table in the test to achieve the same effect.
    • Cleanup: Remove fil_names_write_bogus · 6948abb9
      Marko Mäkelä authored
      Ever since commit 685d958e
      some Perl code in the test mariabackup.huge_lsn is writing names of
      non-existing files to the InnoDB redo log and testing the recovery.
      We do not need any debug instrumentation to duplicate that test.