1. 15 Nov, 2017 1 commit
    • MDEV-12012/MDEV-11969 Can't remove GTIDs for a stale GTID Domain ID · aae49327
      Andrei Elkin authored
      As reported in MDEV-11969, "there's no way to ditch knowledge" about a
      domain that is no longer updated on a server. Besides cluttering the
      output in the DBA console, stale domains can prevent a slave from
      connecting to the master, as MDEV-12012 witnesses.
      Which domains are obsolete must be evaluated by the user (DBA),
      according to whether the domain info is still relevant and whether the
      domain will ever receive an update.
      
      This patch introduces a method to discard obsolete gtid domains from
      the server binlog state. The removal requires that no event group from
      such a domain be present in the existing binlog files. If there are
      any, the containing logs must first be PURGEd in order for
      
        FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
      
      to succeed. Otherwise the command returns an error.
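
      For illustration, the sequence could look as follows (the binlog file
      name and the domain id are made up for this sketch):

        PURGE BINARY LOGS TO 'master-bin.000100';  -- removes all logs before this file
        FLUSH BINARY LOGS DELETE_DOMAIN_ID=(11);   -- succeeds once no kept log contains domain 11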
      
      The list of obsolete domains can be computed by intersecting two
      sets - the earliest (first) binlog's Gtid_list and the current value
      of @@global.gtid_binlog_state - and extracting the domain id
      components from the items of the intersection.
      The new DELETE_DOMAIN_ID variant of FLUSH still rotates the binlog,
      omitting the deleted domains from the new binlog file's Gtid_list.
      Notice though that when the command is ineffective - when none of the
      domains requested for deletion exists in the binlog state - rotation
      does not occur.
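
      A minimal sketch of gathering the two sets (the binlog file name is
      hypothetical; the intersection itself is left to the DBA or a script):

        SHOW BINARY LOGS;                                   -- identify the earliest binlog file
        SHOW BINLOG EVENTS IN 'master-bin.000001' LIMIT 5;  -- shows that file's Gtid_list event
        SELECT @@global.gtid_binlog_state;                  -- the current binlog state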
      
      Obsolete domain deletion is not harmful to connected slaves as long
      as the master-side *purge* of binlog files is synchronized with
      FLUSH-DELETE_DOMAIN_ID. The slaves must have processed the last
      event from the purged files as usual, so that they do not later end
      up requesting a gtid from a file that is already gone (a sketch of
      such a check follows below).
      While the command is not replicated (unlike an ordinary FLUSH BINARY
      LOGS), slaves that still carry the extra domains won't suffer from
      reconnection errors, thanks to the master-slave gtid connection
      protocol, which allows the master to be ignorant of a gtid domain.
      Should such a slave be promoted to the master role at failover, it
      may run the ex-master's
      
       FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
      
      to clean up its own binlog state.
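
      As an illustrative safety check before purging (the file name is
      hypothetical), each slave's position can be compared against the gtid
      position at the start of the first log that will be kept:

        SELECT BINLOG_GTID_POS('master-bin.000100', 4);  -- on the master: position at start of the first kept log
        SELECT @@global.gtid_slave_pos;                   -- on each slave: must cover the position above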
      
      NOTES.
        suite/perfschema/r/start_server_low_digest.result
      is re-recorded as a consequence of internal parser code changes.
  2. 14 Nov, 2017 2 commits
  3. 13 Nov, 2017 2 commits
  4. 10 Nov, 2017 4 commits
  5. 09 Nov, 2017 8 commits
  6. 08 Nov, 2017 14 commits
  7. 07 Nov, 2017 3 commits
  8. 06 Nov, 2017 6 commits
    • Fix test case · 120f848f
      Vladislav Vaintroub authored
    • Remove dead code for non-debug builds · f830314f
      Marko Mäkelä authored
    • Merge 10.0 into 10.1 · 56911096
      Marko Mäkelä authored
    • MDEV-13328 ALTER TABLE…DISCARD TABLESPACE takes a lot of time · 51b4366b
      Marko Mäkelä authored
      With a big buffer pool that contains many data pages,
      DISCARD TABLESPACE took a long time, because it would scan the
      entire buffer pool to remove any pages that belong to the tablespace.
      The scan time depends on the size of the buffer pool, not of the
      table, so it is especially wasteful when the table being discarded
      is empty.
      
      The minimum amount of work that DISCARD TABLESPACE must do is to
      remove the pages of the to-be-discarded table from the
      buf_pool->flush_list because any writes to the data file must be
      prevented before the file is deleted.
      
      If DISCARD TABLESPACE does not evict the pages from the buffer pool,
      then IMPORT TABLESPACE must do it, because we must prevent pre-DISCARD,
      not-yet-evicted pages from being mistaken for pages of the imported
      tablespace.
      
      It would not be a useful fix to simply move the buffer pool scan to
      the IMPORT TABLESPACE step. What we can do is to actively evict those
      pages that could be mistaken for imported pages. In this way, when
      importing a small table into a big buffer pool, the import should
      still run relatively fast.
      
      Import bypasses the buffer pool when reading pages for the
      adjustment phase. In the adjustment phase, if a page exists in
      the buffer pool, we could replace it with the page from the imported
      file. Unfortunately I did not get this to work properly, so instead
      we will simply evict any matching page from the buffer pool.
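
      For context, the user-visible workflow affected here is the usual
      transportable-tablespace sequence (the table name is illustrative):

        ALTER TABLE t DISCARD TABLESPACE;
        -- copy the source table's .ibd (and .cfg) files into the database directory
        ALTER TABLE t IMPORT TABLESPACE;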
      
      buf_page_get_gen(): Implement BUF_EVICT_IF_IN_POOL, a new mode
      where the requested page will be evicted if it is found. There
      must be no unwritten changes for the page.
      
      buf_remove_t: Remove. Instead, use trx!=NULL to signify that a write
      to file is desired, and use a separate parameter bool drop_ahi.
      
      buf_LRU_flush_or_remove_pages(), fil_delete_tablespace():
      Replace buf_remove_t.
      
      buf_LRU_remove_pages(), buf_LRU_remove_all_pages(): Remove.
      
      PageConverter::m_mtr: A dummy mini-transaction buffer.
      
      PageConverter::PageConverter(): Complete the member initialization list.
      
      PageConverter::operator()(): Evict any 'shadow' pages from the
      buffer pool so that pre-existing (garbage) pages cannot be mistaken
      for pages that exist in the being-imported file.
      
      row_discard_tablespace(): Remove a bogus comment that seems to
      refer to IMPORT TABLESPACE, not DISCARD TABLESPACE.
    • Remove redundant function parameters · 57ba66b9
      Marko Mäkelä authored
      buf_flush_or_remove_pages(), buf_flush_dirty_pages(): Remove the
      redundant parameter flush=(trx!=NULL).