1. 19 Jun, 2020 9 commits
    • Sergei Golubchik's avatar
      S3 is pluggable now · 35034d81
      Sergei Golubchik authored
      35034d81
    • Sergei Golubchik's avatar
      cleanup: Aria headers · 4acafaae
      Sergei Golubchik authored
      include/maria.h is a common header included in half of the server;
      it should only contain definitions and declarations that are
      used outside of storage/maria
      
      internal definitions and declarations should be in maria_def.h
      
      also remove a few duplicate declarations
      4acafaae
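      A hedged illustration of the split described above; the names below are
      hypothetical stand-ins, not the real Aria declarations:
      
      // include/maria.h -- only what the rest of the server needs to see,
      // i.e. what is used outside of storage/maria.
      extern "C" int maria_sketch_open(const char *name, int mode);
      
      // storage/maria/maria_def.h -- internal definitions stay here.
      struct maria_sketch_share { int open_count; };
      static inline int maria_sketch_refcount(const maria_sketch_share *s)
      { return s->open_count; }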
    • Oleksandr Byelkin's avatar
      Server maturity increased · e9f62228
      Oleksandr Byelkin authored
      e9f62228
    • Marko Mäkelä's avatar
      MDEV-22871 follow-up fix: AHI corruption & leak · e341fb0d
      Marko Mäkelä authored
      Commit bf3c862f
      accidentally introduced two bugs.
      
      btr_search_update_hash_ref(): Pass the correct parameter part->heap.
      
      btr_search_sys_t::free(): Free all memory.
      
      Thanks to Michael Widenius and Thirunarayanan Balathandayuthapani
      for pointing out these bugs.
      e341fb0d
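      A hypothetical sketch of the shape of this fix (not the actual btr0sea.cc
      code): each adaptive hash index partition owns its own heap, so callers
      must be handed the right partition's heap (part->heap) and free() must
      visit every partition:
      
      #include <cstdlib>
      #include <vector>
      
      struct ahi_part_sketch {
        void *heap = nullptr;            // stand-in for the partition's mem_heap_t
      };
      
      struct search_sys_sketch {
        std::vector<ahi_part_sketch> parts;
        void free_all()                  // "free all memory": visit every partition
        {
          for (ahi_part_sketch &part : parts) {
            std::free(part.heap);
            part.heap = nullptr;
          }
        }
      };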
    • Oleksandr Byelkin's avatar
      MDEV-20302 Server hangs upon concurrent SELECT from partitioned S3 · 4243785f
      Oleksandr Byelkin authored
      Second attempt to fix the same bug:
      
      Use the same queue for all READ operations.
      Release queues for all used pages.
      
      This fixes a hang in the s3.alter2 test case
      4243785f
    • Monty's avatar
      MDEV-22925 ALTER TABLE s3_table ENGINE=Aria can cause failure on slave · 60f08dd5
      Monty authored
      When converting a table (test.s3_table) from S3 to another engine, the
      following will be logged to the binary log:
      
      DROP TABLE IF EXISTS test.t1;
      CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine
      INSERT rows to test.t1 in binary-row-log-format
      
      The bug is that the above statements are logged one by one to the binary
      log. This means that a fast slave, configured to use the same S3 storage
      as the master, would be able to execute the DROP and CREATE from the
      binary log before the master has finished the ALTER TABLE.
      In this case the slave would ignore the DROP (as it's on an S3 table), but
      it will stop on the CREATE of the local table, as the table still exists in
      S3. The REPLACE part will be ignored by the slave as it can't touch the
      S3 table.
      
      The fix is to ensure that all the above statements are written to the binary
      log AFTER the table has been deleted from S3 (a hedged sketch of this
      ordering follows this entry).
      60f08dd5
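      A hedged sketch of the corrected ordering (stub functions, not the
      server's actual ALTER code path): nothing is written to the binary log
      until the S3 copy of the table is gone, so a fast slave cannot replay the
      logged DROP/CREATE while the table still exists in S3.
      
      #include <cstdio>
      
      static void drop_table_from_s3()     { std::puts("drop the table from S3"); }
      static void binlog_drop_and_create() { std::puts("binlog: DROP IF EXISTS + CREATE OR REPLACE"); }
      static void binlog_row_events()      { std::puts("binlog: row events"); }
      
      void alter_s3_to_local_sketch()
      {
        // Before the fix, the three binlog writes happened one by one during
        // the ALTER, so a fast slave could replay them too early.
        drop_table_from_s3();       // remove the S3 copy first ...
        binlog_drop_and_create();   // ... then log DROP TABLE IF EXISTS + CREATE OR REPLACE
        binlog_row_events();        // ... and the row events, all after the S3 delete
      }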
    • Monty's avatar
      Fixed bugs in s3 test cases · 6a0c05b7
      Monty authored
      6a0c05b7
    • Monty's avatar
      Added THD::binlog_table_should_be_logged() to simplify some code · 00bd52b1
      Monty authored
      - Added missing test for binlog_filter to ALTER TABLE
      00bd52b1
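      A hedged sketch of the kind of check such a helper centralizes; the real
      THD::binlog_table_should_be_logged() is in the server sources and its
      exact signature and checks may differ:
      
      struct binlog_filter_sketch {
        bool db_ok(const char *db) const { return db != nullptr; }  // stub filter
      };
      
      struct thd_sketch {
        bool binlog_enabled;                 // stand-in for "binlog open and enabled for this THD"
        const binlog_filter_sketch *filter;  // stand-in for the binlog filter
        bool binlog_table_should_be_logged(const char *db) const
        {
          return binlog_enabled && filter->db_ok(db);  // one place for both questions
        }
      };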
    • Monty's avatar
      Cleanup's and more DBUG_PRINT's · 1a49c5eb
      Monty authored
      - Rewrote bool Query_compressed_log_event::write() to make it more readable
        (no logic changes).
      - Changed DBUG_PRINT of 'is_error:' to 'is_error():' to make it easier to
        find error: in traces.
      - Ensure that 'db' is never null in Query_log_event (Simplified code).
      1a49c5eb
  2. 18 Jun, 2020 16 commits
    • Vladislav Vaintroub's avatar
    • Vladislav Vaintroub's avatar
    • Daniel Black's avatar
    • Marko Mäkelä's avatar
      MDEV-22871: Reduce InnoDB buf_pool.page_hash contention · 5155a300
      Marko Mäkelä authored
      The rw_lock_s_lock() calls for the buf_pool.page_hash became a
      clear bottleneck after MDEV-15053 reduced the contention on
      buf_pool.mutex. We will replace that use of rw_lock_t with a
      special implementation that is optimized for memory bus traffic.
      
      The hash_table_locks instrumentation will be removed.
      
      buf_pool_t::page_hash: Use a special implementation whose API is
      compatible with hash_table_t, and store the custom rw-locks
      directly in buf_pool.page_hash.array, intentionally sharing
      cache lines with the hash table pointers.
      
      rw_lock: A low-level rw-lock implementation based on std::atomic<uint32_t>
      where read_trylock() becomes a simple fetch_add(1).
      
      buf_pool_t::page_hash_latch: The rw_lock specialization used for the page_hash.
      
      buf_pool_t::page_hash_latch::read_lock(): Assert that buf_pool.mutex
      is not being held by the caller.
      
      buf_pool_t::page_hash_latch::write_lock() may be called while not holding
      buf_pool.mutex. buf_pool_t::watch_set() is such a caller.
      
      buf_pool_t::page_hash_latch::read_lock_wait(),
      page_hash_latch::write_lock_wait(): The spin loops.
      These will obey the global parameters innodb_sync_spin_loops and
      innodb_sync_spin_wait_delay.
      
      buf_pool_t::freed_page_hash: A singly linked list of copies of
      buf_pool.page_hash that ever existed. The fact that we never
      free any buf_pool.page_hash.array guarantees that all
      page_hash_latch that ever existed will remain valid until shutdown.
      
      buf_pool_t::resize_hash(): Replaces buf_pool_resize_hash().
      Prepend a shallow copy of the old page_hash to freed_page_hash.
      
      buf_pool_t::page_hash_table::n_cells: Declare as Atomic_relaxed.
      
      buf_pool_t::page_hash_table::lock(): Explain what prevents a
      race condition with buf_pool_t::resize_hash().
      5155a300
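      A minimal sketch (not the actual InnoDB code) of an rw-lock built on
      std::atomic<uint32_t> where read_trylock() is a single fetch_add(1), as
      described above; the real rw_lock and page_hash_latch have more
      functionality (e.g. the spin-loop waits mentioned above):
      
      #include <atomic>
      #include <cstdint>
      
      class rw_lock_sketch {
        // Bit 31 marks a granted writer; the low bits count granted readers.
        static constexpr uint32_t WRITER = 1U << 31;
        std::atomic<uint32_t> lock_word{0};
      public:
        bool read_trylock()
        {
          // Optimistically register as a reader with one atomic increment.
          if (!(lock_word.fetch_add(1, std::memory_order_acquire) & WRITER))
            return true;
          lock_word.fetch_sub(1, std::memory_order_relaxed);  // writer active: back off
          return false;
        }
        void read_unlock() { lock_word.fetch_sub(1, std::memory_order_release); }
        bool write_trylock()
        {
          uint32_t expected = 0;                // only when no readers and no writer
          return lock_word.compare_exchange_strong(expected, WRITER,
                                                   std::memory_order_acquire);
        }
        void write_unlock() { lock_word.fetch_sub(WRITER, std::memory_order_release); }
      };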
    • Marko Mäkelä's avatar
      MDEV-22871: Remove pointer indirection for InnoDB hash_table_t · cfd3d70c
      Marko Mäkelä authored
      hash_get_n_cells(): Remove. Access n_cells directly.
      
      hash_get_nth_cell(): Remove. Access array directly.
      
      hash_table_clear(): Replaced with hash_table_t::clear().
      
      hash_table_create(), hash_table_free(): Remove.
      
      hash0hash.cc: Remove.
      cfd3d70c
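      A generic sketch of the indirection removal (not the InnoDB definitions):
      callers touch the hash table's members directly instead of going through
      free-standing accessor functions.
      
      #include <cstddef>
      
      struct hash_table_sketch {
        size_t n_cells = 0;
        void **array = nullptr;
        void clear()                           // replaces a free hash_table_clear()
        {
          for (size_t i = 0; i < n_cells; i++)
            array[i] = nullptr;
        }
      };
      // before: hash_get_n_cells(table); hash_get_nth_cell(table, i);
      // after:  table->n_cells;           table->array[i];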
    • Marko Mäkelä's avatar
      MDEV-22871: Clean up btr_search_sys · bf3c862f
      Marko Mäkelä authored
      btr_search_sys::parts[]: A single structure for the partitions of
      the adaptive hash index. Replaces the 3 separate arrays:
      btr_search_latches[], btr_search_sys->hash_tables,
      btr_search_sys->hash_tables[i]->heap.
      
      hash_table_t::heap, hash_table_t::adaptive: Remove.
      
      ha0ha.cc: Remove. Move all code to btr0sea.cc.
      bf3c862f
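      A hedged sketch of the data-structure change: one partition struct keeps
      latch, hash table and heap together instead of three parallel arrays; the
      types and the partition count below are placeholders, not the real InnoDB
      definitions.
      
      #include <mutex>
      #include <vector>
      
      struct ahi_partition_sketch {
        std::mutex latch;              // stands in for the partition's rw-latch
        std::vector<void *> table;     // stands in for the partition's hash table
        std::vector<char> heap;        // stands in for the partition's memory heap
      };
      
      struct btr_search_sys_sketch {
        static constexpr unsigned N_PARTS = 8;  // assumption: a btr_ahi_parts-like count
        ahi_partition_sketch parts[N_PARTS];    // everything for one partition in one place
      };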
    • Marko Mäkelä's avatar
      MDEV-22871: Clean up hash_table_t · 9159b897
      Marko Mäkelä authored
      HASH_TABLE_SYNC_MUTEX was kind-of used for the adaptive hash index,
      even though that hash table is already protected by btr_search_latches[].
      
      HASH_TABLE_SYNC_RWLOCK was only being used for buf_pool.page_hash.
      It is cleaner to decouple that synchronization from hash_table_t,
      and move it to the actual user.
      
      buf_pool_t::page_hash_latches[]: Synchronization for buf_pool.page_hash.
      
      LATCH_ID_HASH_TABLE_MUTEX: Remove.
      
      hash_table_t::sync_obj, hash_table_t::n_sync_obj: Remove.
      
      hash_table_t::type, hash_table_sync_t: Remove.
      
      HASH_ASSERT_OWN(), hash_get_mutex(), hash_get_nth_mutex(): Remove.
      
      ib_recreate(): Merge to the only caller, buf_pool_resize_hash().
      
      ib_create(): Merge to the callers.
      
      ha_clear(): Merge to the only caller buf_pool_t::close().
      
      buf_pool_t::create(): Merge the ib_create() and
      hash_create_sync_obj() invocations.
      
      ha_insert_for_fold_func(): Clarify an assertion.
      
      buf_pool_t::page_hash_lock(): Simplify the logic.
      
      hash_assert_can_search(), hash_assert_can_modify(): Remove.
      These predicates were only being invoked for the adaptive hash index,
      while they are only effective for buf_pool.page_hash.
      
      HASH_DELETE_AND_COMPACT(): Merge to ha_delete_hash_node().
      
      hash_get_sync_obj_index(): Remove.
      
      hash_table_t::heaps[], hash_get_nth_heap(): Remove. It was actually unused!
      
      hash_get_heap(): Remove. It was only used in ha_delete_hash_node(),
      where we always use hash_table_t::heap.
      
      hash_table_t::calc_hash(): Replaces hash_calc_hash().
      9159b897
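      A generic sketch (not the InnoDB code) of the two ideas above: the hash
      table no longer owns any sync objects, the latches live with its user,
      and calc_hash() is a member replacing the free function hash_calc_hash().
      
      #include <cstddef>
      #include <mutex>
      #include <vector>
      
      struct hash_table_sketch {
        std::vector<void *> cells;
        explicit hash_table_sketch(size_t n) : cells(n) {}
        size_t calc_hash(size_t fold) const { return fold % cells.size(); }
      };
      
      struct buf_pool_sketch {
        hash_table_sketch page_hash{1024};
        std::mutex page_hash_latches[16];   // synchronization owned by the user now
        std::mutex &latch_for(size_t fold) { return page_hash_latches[fold % 16]; }
      };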
    • Daniel Black's avatar
      libutils: merge_archives_unix · 08f6513c
      Daniel Black authored
      MRI scripts cannot handle '+' in paths, and the Ubuntu CI build paths
      contain these.
      
      So we remove the top level build dir from the script and
      transform it into a script that uses relative paths.
      08f6513c
    • Daniel Black's avatar
      libutils: merge static libraries only once · 38774f8d
      Daniel Black authored
      Because of common dependencies between the
      static libraries, the list can contain duplicates.
      
      We reduce these down to the single last
      occurrence in the list (a generic sketch of this
      rule follows this entry).
      
      This reduces the relative time of a rebuild from:
      
      $ (cd builddir/; time make -j)
      ...
      real	0m30.789s
      user	1m33.477s
      sys	0m19.678s
      
      and the ADDLIB entries
      $ grep ADDLIB builddir/libmysqld/mysqlserver-\$\<CONFIG\>.mri.tpl  | wc -l
      179
      
      $ du -h builddir/libmysqld/libmariadbd.a
      4.1G	builddir/libmysqld/libmariadbd.a
      
      To:
      
      $ (cd builddir/; time make -j)
      ...
      real	0m20.139s
      user	1m32.423s
      sys	0m12.208s
      
      $ grep ADDLIB builddir/libmysqld/mysqlserver-\$\<CONFIG\>.mri.tpl  | wc -l
      25
      
      $ du -h builddir/libmysqld/libmariadbd.a
      688M	builddir/libmysqld/libmariadbd.a
      38774f8d
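      A small generic sketch of the "keep only the last occurrence" rule
      described above (the actual change is in the CMake/MRI generation, not in
      C++):
      
      #include <algorithm>
      #include <string>
      #include <unordered_set>
      #include <vector>
      
      std::vector<std::string> keep_last_occurrence(const std::vector<std::string> &libs)
      {
        std::unordered_set<std::string> seen;
        std::vector<std::string> out;
        // Walk from the end so the last occurrence of each library wins,
        // then restore the original relative order.
        for (auto it = libs.rbegin(); it != libs.rend(); ++it)
          if (seen.insert(*it).second)
            out.push_back(*it);
        std::reverse(out.begin(), out.end());
        return out;
      }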
    • Marko Mäkelä's avatar
      Merge 10.4 into 10.5 · c515b1d0
      Marko Mäkelä authored
      c515b1d0
    • Vlad Lesin's avatar
      MDEV-22894: Mariabackup should not read [mariadb-client] option group · 205b0ce6
      Vlad Lesin authored
      from configuration files
      205b0ce6
    • Vlad Lesin's avatar
      MDEV-18215: mariabackup does not report unknown command line options · 0121a9e0
      Vlad Lesin authored
      Post-push fix: add mysqld options to the backup string in the mariabackup SST
      script for the case when logging is not done via syslog.
      0121a9e0
    • Marko Mäkelä's avatar
      Fix the test mariabackup.mdev-14447 · 01ed6140
      Marko Mäkelä authored
      The test mariabackup.mdev-14447 started failing due to the option
      --apply-log-only, which became invalid in
      commit 9bdf35e9
      and was removed in
      commit 8c71c6aa.
      01ed6140
    • Sergei Golubchik's avatar
      S3 compilation error on x86 · baff3ba6
      Sergei Golubchik authored
      baff3ba6
    • Sergei Golubchik's avatar
      update libmarias3 · 7c0cf204
      Sergei Golubchik authored
      7c0cf204
    • Sergei Golubchik's avatar
      more "removed" mysqld command-line options · 7bb32cda
      Sergei Golubchik authored
      and put them all together in mysqld.cc
      7bb32cda
  3. 17 Jun, 2020 11 commits
  4. 16 Jun, 2020 2 commits
    • Sachin's avatar
      MDEV-22370 safe_mutex: Trying to lock uninitialized mutex at... · 592a10d0
      Sachin authored
      MDEV-22370 safe_mutex: Trying to lock uninitialized mutex at /data/src/10.4-bug/sql/rpl_parallel.cc, line 470 upon shutdown during FTWRL
      
      Problem: When we issue FTWRL with shutdown in parallel, there is a race between
      FTWRL and shutdown. Shutdown might destroy the mutex (pool->LOCK_rpl_thread_pool)
      before FTWRL can lock it, so we can get a crash in the FTWRL thread.
      
      Solution: mysql_mutex_destroy(pool->LOCK_rpl_thread_pool) should wait for the
      FTWRL thread to complete its work, and only then destroy the mutex.
      So slave_prepare_for_shutdown() will just deactivate the pool, and the mutex is
      destroyed later in end_slave() (a generic sketch of this ordering follows this
      entry).
      592a10d0
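      A generic sketch of the ordering (std::mutex instead of mysql_mutex_t, and
      the names are stand-ins): shutdown only deactivates the pool while the
      mutex still exists; the mutex itself is destroyed later, once no
      FTWRL-style user can still be trying to lock it.
      
      #include <mutex>
      
      struct rpl_pool_sketch {
        std::mutex LOCK_pool;       // stands in for pool->LOCK_rpl_thread_pool
        bool inactive = false;
      };
      
      void slave_prepare_for_shutdown_sketch(rpl_pool_sketch &pool)
      {
        std::lock_guard<std::mutex> g(pool.LOCK_pool);
        pool.inactive = true;       // deactivate only; do NOT destroy the mutex here
      }
      
      void ftwrl_sketch(rpl_pool_sketch &pool)
      {
        std::lock_guard<std::mutex> g(pool.LOCK_pool);  // safe: the mutex still exists
        if (pool.inactive)
          return;                   // pool already shut down, nothing to pause
        // ... pause the pool threads for FLUSH TABLES WITH READ LOCK ...
      }
      
      // The end_slave()-style teardown runs after all such users are done; with
      // std::mutex the destruction happens in ~rpl_pool_sketch, which models
      // "destroy the mutex later in end_slave()".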
    • MikkoJaakola's avatar
      MDEV-21759 galera.galera_parallel_autoinc_manytrx sporadic failures. · 0128e13e
      MikkoJaakola authored
      The galera.galera_parallel_autoinc_manytrx mtr test opens and runs the test
      scenario through 3 connections to node 1 and one connection to node 2.
      In the test initialization phase, the test creates two tables, 't1' and 'ten',
      and then creates a stored procedure 'p1' to operate on these tables.
      These 3 CREATE DDL statements are issued through the same connection to node 1.
      
      In the next test phase, the mtr script uses the send command to launch the call
      of the p1 stored procedure through all 3 connections to node 1 and through
      one connection to node 2. As the mtr send command is asynchronous,
      this test phase is a non-blocking and fast operation.
      Now, if the replication between the nodes is slow, it may happen that the
      initialization phase DDL statements have not been received or have not been
      fully applied in node 2. Therefore there is no guarantee that the test tables
      and the stored procedure have been created in node 2. Yet, the test is trying
      to call p1 in node 2.
      
      In the failure case error logs, there is the error message
      "MTR failed: query 'reap' failed: 1305: PROCEDURE test.p1 does not exist"
      
      The reap command through the connection to node 2 is the first place where test
      execution may observe that the test tables and/or the stored procedure are not
      yet created in node 2.
      
      The fix in this commit adds a wait condition in the connection to node 2, to wait
      until the stored procedure is created before calling it.
      The wait is implemented by looking in information_schema.routines for the p1
      stored procedure (a generic sketch of the waiting pattern follows this entry).
      0128e13e
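      A generic sketch of the waiting pattern (the actual fix is an MTR
      wait_condition that queries information_schema.routines for 'p1'; the
      C++ below only models "poll until created or time out"):
      
      #include <chrono>
      #include <functional>
      #include <thread>
      
      bool wait_until_created(const std::function<bool()> &created,
                              std::chrono::seconds timeout)
      {
        auto deadline = std::chrono::steady_clock::now() + timeout;
        while (std::chrono::steady_clock::now() < deadline) {
          if (created())            // e.g. p1 is visible in information_schema.routines
            return true;
          std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        return false;               // better to fail the test loudly than to race
      }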
  5. 15 Jun, 2020 1 commit
  6. 14 Jun, 2020 1 commit
    • Vlad Lesin's avatar
      MDEV-18215: mariabackup does not report unknown command line options · 9bdf35e9
      Vlad Lesin authored
      MDEV-21298: mariabackup doesn't read from the [mariadbd] and [mariadbd-X.Y]
      server option groups from configuration files
      MDEV-21301: mariabackup doesn't read [mariadb-backup] option group in
      configuration file
      
      All three issues require changing the same code, which is why their
      fixes are joined in one commit.
      
      The fix is in invoking load_defaults_or_exit() and handle_options() for
      backup-specific groups separately from client-server groups to let the last
      handle_options() call fail on unknown backup-specific options.
      
      The order of options processing is the following (a generic sketch of this
      staged order follows this entry):
      1) Load server groups and process server options, ignore unknown
      options
      2) Load client groups and process client options, ignore unknown
      options
      3) Load backup groups and process client-server options, exit on
      unknown option
      4) Process --mysqld-args command line options, ignore unknown options
      
      A new global flag, my_handle_options_init_variables, was added to make it
      possible to invoke handle_options() for the same set of allowed options
      several times without re-initialising previously set option values.
      
      Destroying the --password value is moved from the option processing callback to
      mariabackup's handle_options() function, to make it possible to invoke the
      server's handle_options() several times for the same set of possible allowed
      options.
      
      Galera invokes wsrep_sst_mariabackup.sh with mysqld command line
      options to configure mariabackup as close to the server as possible.
      It is not known what server options are supported by mariabackup when the
      script is invoked. That is why a new mariabackup option, "--mysqld-args", is
      added; all unknown options that follow it will be silently ignored.
      
      wsrep_sst_mariabackup.sh was also changed to:
      - use "--mysqld-args" mariabackup option to pass mysqld options,
      - remove deprecated innobackupex mode,
      - remove unsupported mariabackup options:
          --encrypt
          --encrypt-key
          --rebuild-indexes
          --rebuild-threads
      9bdf35e9
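      A generic sketch of the four-stage processing order described above; the
      parse_group() helper is a stub standing in for the load_defaults_or_exit()
      plus handle_options() calls, and only the backup-specific stage rejects
      unknown options:
      
      #include <cstdio>
      #include <string>
      #include <vector>
      
      static bool parse_group(const char *group, bool ignore_unknown,
                              const std::vector<std::string> &args)
      {
        // Stub: a real implementation would match `args` against the options
        // the group accepts and apply or reject the unknown ones.
        std::printf("[%s]: %zu option(s), %s unknown ones\n", group, args.size(),
                    ignore_unknown ? "ignoring" : "failing on");
        return true;
      }
      
      int main()
      {
        std::vector<std::string> args;                    // remaining command-line options
        parse_group("mysqld",         true,  args);       // 1) server groups, ignore unknown
        parse_group("client",         true,  args);       // 2) client groups, ignore unknown
        parse_group("mariadb-backup", false, args);       // 3) backup groups, exit on unknown
        parse_group("--mysqld-args",  true,  args);       // 4) trailing mysqld args, ignore unknown
        return 0;
      }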