1. 15 Dec, 2020 6 commits
    • MDEV-21452: Replace ib_mutex_t with mysql_mutex_t · ff5d306e
      Marko Mäkelä authored
      SHOW ENGINE INNODB MUTEX functionality is completely removed,
      as are the InnoDB latching order checks.
      
      We will enforce innodb_fatal_semaphore_wait_threshold
      only for dict_sys.mutex and lock_sys.mutex.
      
      dict_sys_t::mutex_lock(): A single entry point for dict_sys.mutex.
      
      lock_sys_t::mutex_lock(): A single entry point for lock_sys.mutex.
      
      FIXME: srv_sys should be removed altogether; it is duplicating tpool
      functionality.
      
      fil_crypt_threads_init(): To prevent SAFE_MUTEX warnings, we must
      not hold fil_system.mutex.
      
      fil_close_all_files(): To prevent SAFE_MUTEX warnings for
      fil_space_destroy_crypt_data(), we must not hold fil_system.mutex
      while invoking fil_space_free_low() on a detached tablespace.
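      
      As an illustration of the single-entry-point idea, here is a minimal
      C++ sketch (hypothetical names, not the MariaDB code): the lock routine
      records when a waiter started blocking, so that a watchdog thread can
      enforce a fatal wait threshold for this one mutex. Only one waiter is
      tracked, for brevity.
      
          #include <atomic>
          #include <chrono>
          #include <cstdint>
          #include <cstdlib>
          #include <mutex>
          
          static int64_t now_sec() {
            return std::chrono::duration_cast<std::chrono::seconds>(
                std::chrono::steady_clock::now().time_since_epoch()).count();
          }
          
          class watched_mutex {
            std::mutex m;
            std::atomic<int64_t> waiting_since{0};   // 0 = nobody is blocked
          public:
            void mutex_lock() {                      // the single entry point
              if (m.try_lock()) return;              // uncontended fast path
              waiting_since.store(now_sec());        // start the watchdog clock
              m.lock();                              // block until granted
              waiting_since.store(0);                // acquired; stop the clock
            }
            void mutex_unlock() { m.unlock(); }
            // Called periodically by a watchdog thread.
            void check(int64_t threshold_sec) {
              const int64_t t = waiting_since.load();
              if (t && now_sec() - t > threshold_sec)
                std::abort();                        // fatal semaphore wait
            }
          };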
    • MDEV-21452: Remove os_event_t, MUTEX_EVENT, TTASEventMutex, sync_array · db006a9a
      Marko Mäkelä authored
      We will default to MUTEXTYPE=sys (using OSTrackMutex) for those
      ib_mutex_t that have not been replaced yet.
      
      The view INFORMATION_SCHEMA.INNODB_SYS_SEMAPHORE_WAITS is removed.
      
      The parameter innodb_sync_array_size is removed.
      
      FIXME: innodb_fatal_semaphore_wait_threshold will no longer be enforced.
      We should enforce it for lock_sys.mutex and dict_sys.mutex somehow!
      
      innodb_sync_debug=ON might still cover ib_mutex_t.
    • MDEV-21452: Replace all direct use of os_event_t · 38fd7b7d
      Marko Mäkelä authored
      Let us replace os_event_t with mysql_cond_t, and replace the
      necessary ib_mutex_t with mysql_mutex_t so that they can be
      used with condition variables.
      
      Also, let us replace polling (os_thread_sleep() or timed waits)
      with plain mysql_cond_wait() wherever possible.
      
      Furthermore, we will use the lightweight srw_mutex for trx_t::mutex,
      to hopefully reduce contention on lock_sys.mutex.
      
      FIXME: Add test coverage of
      mariabackup --backup --kill-long-queries-timeout
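      
      The polling-to-condition-variable change can be sketched with the C++
      standard library standing in for mysql_mutex_t and mysql_cond_t
      (hypothetical names; not the actual server code):
      
          #include <condition_variable>
          #include <mutex>
          
          struct work_queue {
            std::mutex mtx;                 // would be mysql_mutex_t
            std::condition_variable cond;   // would be mysql_cond_t
            bool work_available = false;
          
            void wait_for_work() {
              std::unique_lock<std::mutex> lk(mtx);
              // Instead of "while (!work_available) sleep(100ms);"
              cond.wait(lk, [this] { return work_available; });
            }
            void submit_work() {
              { std::lock_guard<std::mutex> lk(mtx); work_available = true; }
              cond.notify_one();            // wake exactly one waiter
            }
          };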
    • Fix the SRW_LOCK_DUMMY build with PLUGIN_PERFSCHEMA=NO · 59b2848a
      Marko Mäkelä authored
      srw_lock_low: Declare the member functions public when wrapping rw_lock_t
    • MDEV-24410: Bug in SRW_LOCK_DUMMY rw_lock_t wrapper · 20da7b22
      Marko Mäkelä authored
      In commit 43d3dad1 we forgot to
      invert the return values of rw_tryrdlock() and rw_trywrlock(),
      causing strange failures.
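      
      For illustration, a hedged sketch of the bug class using plain POSIX
      rw-locks (not the actual srw_lock code): the try-lock calls return 0
      on success, so a bool-returning wrapper has to invert the result.
      
          #include <pthread.h>
          
          static pthread_rwlock_t lk = PTHREAD_RWLOCK_INITIALIZER;
          
          // true means "lock acquired": pthread_rwlock_trywrlock() returns 0
          // on success and EBUSY on contention, so the result must be inverted.
          bool wr_lock_try() { return pthread_rwlock_trywrlock(&lk) == 0; }
          bool rd_lock_try() { return pthread_rwlock_tryrdlock(&lk) == 0; }
          
          // The buggy variant returned the raw error code as bool, which reads
          // as "true" exactly when the lock was NOT acquired:
          //   bool wr_lock_try() { return pthread_rwlock_trywrlock(&lk); }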
    • MDEV-24142/MDEV-24167 fixup: Split ssux_lock and srw_lock · 43d3dad1
      Marko Mäkelä authored
      This conceptually reverts commit 1fdc161d
      and reintroduces an option for srw_lock to wrap a native implementation.
      
      The srw_lock and srw_lock_low differ from ssux_lock and ssux_lock_low
      in that Slim SUX locks support three modes (Shared, Update, eXclusive)
      while Slim RW locks support only two (Read, Write).
      
      On Microsoft Windows, the srw_lock will be implemented by SRWLOCK.
      On Linux and OpenBSD, it will be implemented by rw_lock and the
      futex system call, just like earlier.
      On other systems, or if SRW_LOCK_DUMMY is defined on anything other
      than Microsoft Windows, rw_lock_t will be used.
      
      ssux_lock_low::read_lock(), ssux_lock_low::update_lock(): Correct
      the SRW_LOCK_DUMMY implementation to prevent hangs. The intention of
      commit 1fdc161d seems to have been to use
      do ... while loops, but the 'do' keyword was missing. This total
      breakage was missed in commit 260161fc,
      which did reduce the probability of the hangs.
      
      ssux_lock_low::u_unlock(): In the SRW_LOCK_DUMMY implementation
      (based on a mutex and two condition variables), always invoke
      writer_wake() in order to ensure that a waiting update_lock()
      will be woken up.
      
      ssux_lock_low::writer_wait(), ssux_lock_low::readers_wait():
      In the SRW_LOCK_DUMMY implementation, keep waiting for the signal
      until the lock word has changed. The "while" had earlier been
      changed to "if" in an attempt to avoid hangs.
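      
      A minimal sketch of the corrected fallback waiting pattern, assuming a
      simplified lock word (this is not the MariaDB implementation): the
      waiter re-checks the lock word in a loop after every wakeup, and the
      unlock path always signals the condition variable that a waiting
      updater or writer sleeps on.
      
          #include <condition_variable>
          #include <cstdint>
          #include <mutex>
          
          struct fallback_lock {
            std::mutex mtx;
            std::condition_variable writer_cond;  // waited on by update/write
            uint32_t word = 0;                    // simplified lock word
          
            // Wait until the lock word differs from the value we observed.
            void writer_wait(uint32_t observed) {
              std::unique_lock<std::mutex> lk(mtx);
              while (word == observed)   // a mere 'if' could return too early
                writer_cond.wait(lk);
            }
            void u_unlock() {
              { std::lock_guard<std::mutex> lk(mtx); word = 0; }
              // Always wake a possibly waiting update_lock()/write_lock().
              writer_cond.notify_one();
            }
          };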
  2. 14 Dec, 2020 6 commits
    • 1c660211
      Stepan Patryshev authored
    • Merge 10.5 into 10.6 · 9ecd7665
      Marko Mäkelä authored
    • MDEV-24313 fixup: GCC 8 -Wconversion · e8217d07
      Marko Mäkelä authored
    • MDEV-24313 fixup: GCC -Wparentheses · 2c226e01
      Marko Mäkelä authored
    • MDEV-24313 (2 of 2): Silently ignored innodb_use_native_aio=1 · f24b7383
      Marko Mäkelä authored
      In commit 5e62b6a5 (MDEV-16264)
      the logic of os_aio_init() was changed so that it will never fail,
      but instead automatically disable innodb_use_native_aio (which is
      enabled by default) if the io_setup() system call would fail due
      to resource limits being exceeded. This is questionable, especially
      because falling back to simulated AIO may lead to significantly
      reduced performance.
      
      srv_n_file_io_threads, srv_n_read_io_threads, srv_n_write_io_threads:
      Change the data type from ulong to uint.
      
      os_aio_init(): Remove the parameters, and actually return an error code.
      
      thread_pool::configure_aio(): Do not silently fall back to simulated AIO.
      
      Reviewed by: Vladislav Vaintroub
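      
      A hedged sketch of the new policy using libaio directly (the server
      goes through tpool; the function name and message text here are
      illustrative): report the io_setup() failure to the caller instead of
      silently disabling native AIO.
      
          #include <cstdio>
          #include <libaio.h>
          
          // Returns 0 on success or a negative errno value from io_setup().
          // *ctx must be zero-initialized by the caller.
          int aio_init(int max_events, io_context_t *ctx) {
            int err = io_setup(max_events, ctx);  // may fail with -EAGAIN
            if (err < 0)
              fprintf(stderr,
                      "io_setup(%d) failed with error %d; consider raising"
                      " /proc/sys/fs/aio-max-nr or lowering the number of"
                      " InnoDB I/O threads\n", max_events, -err);
            return err;  // do not fall back to simulated AIO silently
          }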
    • MDEV-24313 (1 of 2): Hang with innodb_write_io_threads=1 · 17d3f856
      Marko Mäkelä authored
      After commit a5a2ef07 (part of MDEV-23855)
      implemented asynchronous doublewrite, it is possible that the server will
      hang when the following parameters are in effect:
      
          innodb_doublewrite=1 (default)
          innodb_write_io_threads=1
          innodb_use_native_aio=0
      
      Note: In commit 5e62b6a5 (MDEV-16264)
      the logic of os_aio_init() was changed so that it will never fail,
      but instead automatically disable innodb_use_native_aio (which is
      enabled by default) if the io_setup() system call would fail due
      to resource limits being exceeded.
      
      Before commit a5a2ef07, we used
      a synchronous write for the doublewrite buffer batches, always at
      most 64 pages at a time. So, upon completing a doublewrite batch,
      a single thread would submit at most 64 page writes (for the
      individual pages that were first written to the doublewrite buffer).
      With that commit, we may submit up to 128 page writes at a time.
      
      The maximum number of outstanding requests per thread is 256.
      Because the maximum number of asynchronous write submissions per
      thread was roughly doubled, it is now possible that
      buf_dblwr_t::flush_buffered_writes_completed() will hang in
      io_slots::acquire(), called via os_aio() and fil_space_t::io(),
      when submitting writes of the individual blocks.
      
      We will prevent this type of hang by increasing the minimum number
      of innodb_write_io_threads from 1 to 2, so that this type of hang
      would only become possible when 512 outstanding write requests
      are exceeded.
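      
      The slot arithmetic from the message, written out as a small sanity
      check (the constant names are made up for the sketch):
      
          constexpr unsigned SLOTS_PER_THREAD   = 256; // outstanding writes
          constexpr unsigned DBLWR_BATCH_WRITES = 128; // per completed batch
          // With one write thread, completing a doublewrite batch can alone
          // consume half of the 256 slots; with the new minimum of two
          // threads, more than 2 * 256 = 512 outstanding write requests are
          // needed before io_slots::acquire() can block in this code path.
          static_assert(2 * SLOTS_PER_THREAD == 512, "hang needs >512 writes");
          static_assert(DBLWR_BATCH_WRITES <= SLOTS_PER_THREAD,
                        "one batch fits in one thread's slots");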
  3. 11 Dec, 2020 2 commits
    • MDEV-24353: Adding GROUP BY slows down a query · d79c3f32
      Varun Gupta authored
      A heuristic in best_access_path() says that if ref access on an index
      uses at least as many key parts as range access on that index,
      then range access should not be considered.
      The assumption made by this heuristic does not hold when
      the range optimizer opted to use the group-by min-max optimization.
      The fix is to skip this heuristic whenever the range optimizer
      has picked the group-by min-max optimization.
    • MDEV-24391 heap-use-after-free in fil_space_t::flush_low() · 8677c14e
      Marko Mäkelä authored
      We observed a race condition that involved two threads
      executing fil_flush_file_spaces() and one thread
      executing fil_delete_tablespace(). After one of the
      fil_flush_file_spaces() observed that
      space.needs_flush_not_stopping() is set and was
      releasing the fil_system.mutex, the other fil_flush_file_spaces()
      would complete the execution of fil_space_t::flush_low() on
      the same tablespace. Then, fil_delete_tablespace() would
      destroy the object, because the value of fil_space_t::n_pending
      did not prevent that. Finally, the fil_flush_file_spaces() would
      resume execution and invoke fil_space_t::flush_low() on the freed
      object.
      
      This race condition was introduced in
      commit 118e258a of MDEV-23855.
      
      fil_space_t::flush(): Add a template parameter that indicates
      whether the caller is holding a reference to prevent the
      tablespace from being freed.
      
      buf_dblwr_t::flush_buffered_writes_completed(),
      row_quiesce_table_start(): Acquire a reference for the duration
      of the fil_space_t::flush_low() operation. It should be impossible
      for the object to be freed in these code paths, but we want to
      satisfy the debug assertions.
      
      fil_space_t::flush_low(): Do not increment or decrement the
      reference count, but instead assert that the caller is holding
      a reference.
      
      fil_space_extend_must_retry(), fil_flush_file_spaces():
      Acquire a reference before releasing fil_system.mutex.
      This is what will fix the race condition.
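      
      The reference-before-unlock pattern can be sketched generically
      (hypothetical types; this is not the fil0fil.cc code): pin the object
      while still holding the list mutex, so a concurrent delete cannot free
      it before flush_low() finishes.
      
          #include <atomic>
          #include <mutex>
          
          struct space {
            std::atomic<unsigned> n_pending{0};
            void flush_low() { /* write back and fsync, reference held */ }
            void acquire() { n_pending.fetch_add(1); }
            void release() { n_pending.fetch_sub(1); }
          };
          
          std::mutex list_mutex;              // stands in for fil_system.mutex
          
          void flush_one(space &s) {
            std::unique_lock<std::mutex> lk(list_mutex);
            s.acquire();                      // pin before dropping the mutex
            lk.unlock();                      // others may now run concurrently
            s.flush_low();                    // safe: object cannot be freed
            s.release();                      // dropping the pin allows freeing
          }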
  4. 09 Dec, 2020 4 commits
    • Merge 10.5 into 10.6 · be4d2665
      Marko Mäkelä authored
    • Remove unused DBUG_EXECUTE_IF "ignore_punch_hole" · 0c7c4492
      Marko Mäkelä authored
      Since commit ea21d630 we
      conditionally define a variable that only plays a role on
      systems that support hole-punching (explicit creation of sparse files).
      However, that broke debug builds on such systems.
      
      It turns out that the debug_dbug label "ignore_punch_hole" is
      not at all used in MariaDB server. It would be covered by
      the MySQL 5.7 test innodb.table_compress. (Note: MariaDB 10.1
      implemented page_compressed tables before something comparable
      appeared in MySQL 5.7.)
    • Merge 10.5 into 10.6 · ca821692
      Marko Mäkelä authored
    • MDEV-12227 Defer writes to the InnoDB temporary tablespace · 5eb53955
      Marko Mäkelä authored
      The flushing of the InnoDB temporary tablespace is unnecessarily
      tied to the write-ahead redo logging and redo log checkpoints,
      which must be tied to the page writes of persistent tablespaces.
      
      Let us simply omit any pages of temporary tables from buf_pool.flush_list.
      In this way, log checkpoints will never incur any 'collateral damage' of
      writing out changes of temporary tables.
      
      After this change, pages of the temporary tablespace can only be written
      out by buf_flush_lists(n_pages,0) as part of LRU eviction. Hopefully,
      most of the time, that code will never be executed, and instead, the
      temporary pages will be evicted by buf_release_freed_page() without
      ever being written back to the temporary tablespace file.
      
      This should improve the efficiency of the checkpoint flushing and
      the buf_flush_page_cleaner thread.
      
      Reviewed by: Vladislav Vaintroub
  5. 08 Dec, 2020 3 commits
    • Fix -Wunused-but-set-variable · ea21d630
      Marko Mäkelä authored
    • MDEV-24369 Page cleaner sleeps despite innodb_max_dirty_pages_pct_lwm being exceeded · f0c295e2
      Marko Mäkelä authored
      MDEV-24278 improved the page cleaner so that it will no longer wake up
      once per second on an idle server. However, with innodb_adaptive_flushing
      (the default) the function page_cleaner_flush_pages_recommendation()
      could initially return 0 even if there is work to do.
      
      af_get_pct_for_dirty(): Remove. Based on a comment here, it appears
      that an initial intention of innodb_max_dirty_pages_pct_lwm=0.0
      (the default value) was to disable something. That ceased to hold in
      MDEV-23855: the value is a pure threshold; the page cleaner will not
      perform any work unless the threshold is exceeded.
      
      page_cleaner_flush_pages_recommendation(): Add the parameter dirty_blocks
      to ensure that buf_pool.flush_list will eventually be emptied.
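      
      A simplified sketch of the idea; the signature and clamping are
      illustrative, not the actual function: even if the adaptive flushing
      formula suggests 0 pages, return a nonzero amount whenever dirty
      blocks exist, so that buf_pool.flush_list drains.
      
          #include <algorithm>
          
          unsigned flush_pages_recommendation(unsigned adaptive_estimate,
                                              unsigned dirty_blocks,
                                              unsigned io_capacity) {
            if (dirty_blocks == 0)
              return 0;                        // nothing to do
            // Never recommend 0 while dirty pages remain.
            return std::max(adaptive_estimate,
                            std::min(dirty_blocks, io_capacity));
          }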
    • MDEV-24351: S3, same-backend replication: Dropping a table on master... · 6859e80d
      Sergei Petrunia authored
      ...causes an error on the slave.
      Cause: if the master doesn't have the frm file for the table,
      DROP TABLE code will call ha_delete_table_force() to drop the table
      in all available storage engines.
      The issue was that this code path didn't check the
      HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE flag for the storage engine,
      and so did not add "... IF EXISTS" to the statement that's written
      to the binary log. This can cause an error on the slave when it tries to
      drop a table that's already gone.
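      
      A self-contained sketch of the missing check; the flag value and the
      helper are stand-ins, only the flag name comes from the message: when
      the engine says the table may not exist on the slave, binlog
      "DROP TABLE IF EXISTS" instead of "DROP TABLE".
      
          #include <string>
          
          constexpr unsigned HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE = 1u << 0;
          
          std::string binlog_drop_stmt(const std::string &table,
                                       unsigned hton_flags) {
            const bool if_exists =
                hton_flags & HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE;
            return std::string("DROP TABLE ") +
                   (if_exists ? "IF EXISTS " : "") + table;
          }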
  6. 07 Dec, 2020 1 commit
  7. 04 Dec, 2020 2 commits
    • MDEV-24350 buf_dblwr unnecessarily uses memory-intensive srv_stats counters · 83591a23
      Marko Mäkelä authored
      The counters in srv_stats use std::atomic and multiple cache lines per
      counter. This is overkill in a case where a critical section already
      exists in the code. A regular variable will work just fine, with much
      smaller memory bus impact.
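      
      The simplification can be sketched as follows (illustrative types; the
      real counters live in buf_dblwr_t under its existing mutex): when a
      counter is only ever updated inside an existing critical section, a
      plain integer suffices.
      
          #include <cstdint>
          #include <mutex>
          
          struct dblwr {
            std::mutex mtx;          // critical section that already exists
            uint64_t writes = 0;     // plain counter, protected by mtx
          
            void note_write(unsigned pages) {
              std::lock_guard<std::mutex> lk(mtx);
              writes += pages;       // no std::atomic, no extra cache lines
            }
          };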
    • MDEV-24348 InnoDB shutdown hang with innodb_flush_sync=0 · aa0e3805
      Marko Mäkelä authored
      This hang was caused by MDEV-23855, and we failed to fix it in
      MDEV-24109 (commit 4cbfdeca).
      
      When buf_flush_ahead() is invoked soon before server shutdown
      and the non-default setting innodb_flush_sync=OFF is in effect
      and the buffer pool contains dirty pages of temporary tables,
      the page cleaner thread may remain in an infinite loop
      without completing its work, thus causing the shutdown to hang.
      
      buf_flush_page_cleaner(): If the buffer pool contains no
      dirty persistent pages, ensure that buf_flush_sync_lsn=0
      will be assigned, so that shutdown will proceed.
      
      The test case is not deterministic. On my system, it reproduced
      the hang with 95% probability when running multiple instances
      of the test in parallel, and 4% when running single-threaded.
      
      Thanks to Eugene Kosov for debugging and testing this.
  8. 03 Dec, 2020 13 commits
    • MDEV-24142: Avoid block_lock alignment loss on 64-bit systems · e9f33b77
      Marko Mäkelä authored
      sux_lock::recursive: Move right after the 32-bit sux_lock::lock.
      This will reduce sizeof(block_lock) from 24 to 16 bytes on
      64-bit systems with CMAKE_BUILD_TYPE=RelWithDebInfo. This may be
      significant, because there will be one buf_block_t::lock for each
      buffer pool page descriptor.
      
      We still have some potential for savings, with sizeof(buf_page_t)==112
      and sizeof(buf_block_t)==184 on a GNU/Linux AMD64 system.
      
      Note: On GNU/Linux AMD64, sizeof(index_lock) remains 32 bytes
      (16 with PLUGIN_PERFSCHEMA=NO) even though it would fit in 24 bytes.
      This is because sizeof(srw_lock) includes 4 bytes of padding
      (to 16 bytes) that index_lock_t::recursive cannot reuse. So,
      in total 4+4 bytes will be lost to padding. This is rather
      insignificant compared to sizeof(dict_index_t)==400.
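      
      A toy illustration of the layout argument, assuming a typical LP64 ABI
      (these are not the real sux_lock members): placing the small
      'recursive' counter right after the 32-bit lock word lets it occupy
      what would otherwise be padding.
      
          #include <cstdint>
          
          struct loose_layout {   // 24 bytes on a typical LP64 ABI
            uint32_t lock;        // 4 bytes + 4 bytes of padding
            uint64_t owner;       // 8 bytes, 8-byte aligned
            uint32_t recursive;   // 4 bytes + 4 bytes of tail padding
          };
          struct tight_layout {   // 16 bytes: recursive fills the hole
            uint32_t lock;
            uint32_t recursive;   // right after the 32-bit lock word
            uint64_t owner;
          };
          static_assert(sizeof(tight_layout) == 16, "fits in 16 bytes");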
    • Fixed usage of not initialized memory in LIKE ... ESCAPE · 6033cc85
      Monty authored
      This was noticed when running "mtr --valgrind main.precedence".
      
      The problem was that Item_func_like::escape could be left uninitialized
      when used with views combined with UNIONS like in:
      
      create or replace view v1 as select 2 LIKE 1 ESCAPE 3 IN (SELECT 0 UNION SELECT 1), 2 LIKE 1 ESCAPE (3 IN (SELECT 0 UNION SELECT 1)), (2 LIKE 1 ESCAPE 3) IN (SELECT 0 UNION SELECT 1);
      
      The above query causes in fix_escape_item()
      escape_item->const_during_execution() to be true
      and
      escape_item->const_item() to be false
      
      in which case 'escape' is never calculated.
      
      The fix is to move the main logic of fix_escape_item() out to a
      separate function and call that function once in Item.
      
      Other things:
      - Reorganized fields in Item_func_like class to make it more compact
    • MDEV-24142: Remove INFORMATION_SCHEMA.INNODB_MUTEXES · ba2d45dc
      Marko Mäkelä authored
      Let us remove sux_lock::waits and the associated bookkeeping.
      Starting with commit 1669c889
      the PERFORMANCE_SCHEMA instrumentation interface is keeping
      track of lock waits.
      
      The view INFORMATION_SCHEMA.INNODB_MUTEXES only exported counts
      of rw-lock waits.
      
      Also, SHOW ENGINE INNODB MUTEX will no longer export any information
      about rw-locks.
    • MDEV-24142: Remove the LatchDebug interface to rw-locks · ac028ec5
      Marko Mäkelä authored
      The latching order checks for rw-locks have not caught many bugs
      in the past few years and they are greatly complicating the code.
      
      Last time the debug checks were useful was in
      commit 59caf2c3 (MDEV-13485).
      
      The B-tree hang MDEV-14637 was not caught by LatchDebug,
      because the granularity of the checks is not sufficient
      to distinguish the levels of non-leaf B-tree pages.
      
      The interface was already made dead code by the grandparent
      commit 03ca6495.
    • MDEV-24308: Windows improvements · 06efef4b
      Marko Mäkelä authored
      This reverts commit e34e53b5
      and defines os_thread_sleep() as a macro on Windows.
    • MDEV-24142: Replace InnoDB rw_lock_t with sux_lock · 03ca6495
      Marko Mäkelä authored
      InnoDB buffer pool block and index tree latches depend on a
      special kind of read-update-write lock that allows reentrant
      (recursive) acquisition of the 'update' and 'write' locks
      as well as an upgrade from 'update' lock to 'write' lock.
      The 'update' lock allows any number of reader locks from
      other threads, but no concurrent 'update' or 'write' lock.
      
      If there were no requirement to support an upgrade from 'update'
      to 'write', we could compose the lock out of two srw_lock
      (implemented as any type of native rw-lock, such as SRWLOCK on
      Microsoft Windows). Removing this requirement is very difficult,
      so in commit f7e7f487d4b06695f91f6fbeb0396b9d87fc7bbf we
      implemented an 'update' mode to our srw_lock.
      
      Re-entrant or recursive locking is mostly needed when writing or
      freeing BLOB pages, but also in crash recovery or when merging
      buffered changes to an index page. The re-entrancy allows us to
      attach a previously acquired page to a sub-mini-transaction that
      will be committed before whatever else is holding the page latch.
      
      The SUX lock supports Shared ('read'), Update, and eXclusive ('write')
      locking modes. The S latches are not re-entrant, but a single S latch
      may be acquired even if the thread already holds an U latch.
      
      The idea of the U latch is to allow a write of something that concurrent
      readers do not care about (such as the contents of BTR_SEG_LEAF,
      BTR_SEG_TOP and other page allocation metadata structures, or
      the MDEV-6076 PAGE_ROOT_AUTO_INC). (The PAGE_ROOT_AUTO_INC field
      is only updated when a dict_table_t for the table exists, and only
      read when a dict_table_t for the table is being added to dict_sys.)
      
      block_lock::u_lock_try(bool for_io=true) is used in buf_flush_page()
      to allow concurrent readers but no concurrent modifications while the
      page is being written to the data file. That latch will be released
      by buf_page_write_complete() in a different thread. Hence, we use
      the special lock owner value FOR_IO.
      
      The index_lock::u_lock() improves concurrency on operations that
      involve non-leaf index pages.
      
      The interface has been cleaned up a little. We will use
      x_lock_recursive() instead of x_lock() when we know that a
      lock is already held by the current thread. Similarly,
      a lock upgrade from U to X is only allowed via u_x_upgrade()
      or x_lock_upgraded() but not via x_lock().
      
      We will disable the LatchDebug and sync_array interfaces to
      InnoDB rw-locks.
      
      The SEMAPHORES section of SHOW ENGINE INNODB STATUS output
      will no longer include any information about InnoDB rw-locks,
      only TTASEventMutex (cmake -DMUTEXTYPE=event) waits.
      This will make a part of the 'innotop' script dead code.
      
      The block_lock buf_block_t::lock will not be covered by any
      PERFORMANCE_SCHEMA instrumentation.
      
      SHOW ENGINE INNODB MUTEX and INFORMATION_SCHEMA.INNODB_MUTEXES
      will no longer output source code file names or line numbers.
      The dict_index_t::lock will be identified by index and table names,
      which should be much more useful. PERFORMANCE_SCHEMA is lumping
      information about all dict_index_t::lock together as
      event_name='wait/synch/sxlock/innodb/index_tree_rw_lock'.
      
      buf_page_free(): Remove the file,line parameters. The sux_lock will
      not store such diagnostic information.
      
      buf_block_dbg_add_level(): Define as empty macro, to be removed
      in a subsequent commit.
      
      Unless the build was configured with cmake -DPLUGIN_PERFSCHEMA=NO
      the index_lock dict_index_t::lock will be instrumented via
      PERFORMANCE_SCHEMA. Similar to
      commit 1669c889
      we will distinguish lock waits by registering shared_lock,exclusive_lock
      events instead of try_shared_lock,try_exclusive_lock.
      Actual 'try' operations will not be instrumented at all.
      
      rw_lock_list: Remove. After MDEV-24167, this only covered
      buf_block_t::lock and dict_index_t::lock. We will output their
      information by traversing buf_pool or dict_sys.
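      
      A toy model of the three lock modes, composed from two standard locks.
      The message notes the real sux_lock cannot simply be built this way,
      because it must also support recursion and U-to-X upgrade; this sketch
      ignores those requirements and only shows what each mode excludes.
      
          #include <mutex>
          #include <shared_mutex>
          
          class toy_sux {
            std::shared_mutex rd_wr;  // S vs X
            std::mutex update;        // serializes U and X holders
          public:
            void s_lock()   { rd_wr.lock_shared(); }    // many readers
            void s_unlock() { rd_wr.unlock_shared(); }
            void u_lock()   { update.lock(); }          // readers still allowed
            void u_unlock() { update.unlock(); }
            void x_lock()   { update.lock(); rd_wr.lock(); } // excludes everyone
            void x_unlock() { rd_wr.unlock(); update.unlock(); }
          };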
    • MDEV-24142 preparation: Add srw_mutex and srw_lock::u_lock() · d46b4248
      Marko Mäkelä authored
      The PERFORMANCE_SCHEMA insists on distinguishing read-update-write
      locks from read-write locks, so we must add
      template<bool support_u_lock> in rd_lock() and wr_lock() operations.
      
      rd_lock::read_trylock(): Add template<bool prioritize_updater=false>
      which is used by the srw_lock_low::read_lock() loop. As long as
      an UPDATE lock has already been granted to some thread, we will grant
      subsequent READ lock requests even if a waiting WRITE lock request
      exists. This will be necessary to be compatible with existing usage
      pattern of InnoDB rw_lock_t where the holder of SX-latch (which we
      will rename to UPDATE latch) may acquire an additional S-latch
      on the same object. For normal read-write locks without update operations
      this should make no difference at all, because the rw_lock::UPDATER
      flag would never be set.
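      
      A simplified sketch of the acquisition policy only; the lock word
      layout and constants are invented for illustration, not taken from the
      actual rw_lock: with prioritize_updater, new shared locks are granted
      while an UPDATE holder exists even if a writer is waiting.
      
          #include <atomic>
          #include <cstdint>
          
          constexpr uint32_t WRITER_PENDING = 1u << 31;
          constexpr uint32_t UPDATER        = 1u << 30;
          // low bits: reader count
          
          template<bool prioritize_updater = false>
          bool read_trylock(std::atomic<uint32_t> &word) {
            uint32_t w = word.load(std::memory_order_relaxed);
            do {
              const bool blocked = prioritize_updater
                  ? (w & WRITER_PENDING) && !(w & UPDATER)
                  : (w & WRITER_PENDING);
              if (blocked)
                return false;             // let the waiting writer go first
            } while (!word.compare_exchange_weak(w, w + 1,
                                                 std::memory_order_acquire,
                                                 std::memory_order_relaxed));
            return true;                  // reader count incremented
          }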
    • MDEV-24167: Stabilize perfschema.sxlock_func · 3872e585
      Marko Mäkelä authored
      The extension of the test perfschema.sxlock_func in
      commit 1669c889
      turned out to be unstable.
      
      Let us filter out purge_sys.latch (trx_purge_latch) from the output,
      because it might happen that the purge tasks will not be executed
      during the test execution.
    • MDEV-24167 fixup: Improve the PERFORMANCE_SCHEMA instrumentation · 1669c889
      Marko Mäkelä authored
      Let us try to avoid code bloat for the common case that
      performance_schema is disabled at runtime, and use
      ATTRIBUTE_NOINLINE member functions for instrumented latch acquisition.
      
      Also, let us distinguish lock waits from non-contended lock requests
      by using write_lock,read_lock for the requests that lead to waits,
      and try_write_lock,try_read_lock for the wait-free lock acquisitions.
      Actual 'try' operations are not being instrumented at all.
    • MDEV-24167 fixup: Avoid hangs in SRW_LOCK_DUMMY · 260161fc
      Marko Mäkelä authored
      In commit 1fdc161d we introduced
      a mutex-and-condition-variable based fallback implementation
      for platforms that lack a futex system call. That implementation
      is prone to hangs.
      
      Let us use separate condition variables for shared and exclusive requests.
    • Merge 10.5 into 10.6 · a13fac9e
      Marko Mäkelä authored
    • MDEV-22929 fixup: root_name() clash with clang++ <fstream> · f146969f
      Marko Mäkelä authored
      The clang++ -stdlib=libc++ header file <fstream> depends on
      <filesystem> that defines a member function path::root_name(),
      which conflicts with the rather unused #define root_name()
      that had been introduced in
      commit 7c58e97b.
      
      Because an instrumented -stdlib=libc++ (rather than the default
      -stdlib=libstdc++) is easier to build for a working -fsanitize=memory
      (cmake -DWITH_MSAN=ON), let us remove the conflicting #define for now.
  9. 02 Dec, 2020 3 commits