1. 14 Jun, 2020 16 commits
    • Fix for crash in Aria LOCK TABLES + CREATE TRIGGER · 56045ef9
      Monty authored
      MDEV-22829 SIGSEGV in _ma_reset_history on LOCK
    • Fixed typos in aria_read_log · ab7eedc1
      Monty authored
    • BINLOG with LOCK TABLES and SAVEPOINT could cause a crash in debug bin · 654b5931
      Monty authored
      MDEV-22048 Assertion `binlog_table_maps == 0 ||
                 locked_tables_mode == LTM_LOCK_TABLES' failed in
                 THD::reset_for_next_command
    • MDEV-19745 BACKUP STAGE BLOCK_DDL hangs on flush sequence table · 6a3b581b
      Monty authored
      Problem was that FLUSH TABLES was trying to read the latest sequence state,
      which conflicted with a running ALTER SEQUENCE. Removed the reading
      of the state when opening a table for FLUSH, as it's not needed in this
      case.

      Other thing:
      - Fixed a potential issue with concurrently running ALTER SEQUENCE
        statements, where the later ALTER could read stale data
    • 08d475c7 (Monty)
    • Updated code comments · 1cca8378
      Monty authored
    • Fixed crash in failing instant alter table with partitioned table · d35616aa
      Monty authored
      MDEV-22649 SIGSEGV in ha_partition::create_partitioning_metadata on ALTER
      MDEV-22804 SIGSEGV in ha_partition::create_partitioning_metadata
    • Changes needed for ColumnStore and insert cache · 10b88deb
      Monty authored
      MCOL-3875 Columnstore write cache

      The main change is to make the thr_lock function get_status
      return a value that indicates whether we have to abort the lock.

      Other thing:
      - Made start_bulk_insert() and end_bulk_insert() protected so that the
        insert cache can use them
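The get_status change above can be sketched as follows. This is a hedged, simplified Python model of the idea only: the names Handler, thr_multi_lock and the boolean return convention are illustrative assumptions, not MariaDB's actual thr_lock API.

```python
class Handler:
    """Hypothetical table handler with a thr_lock-style get_status hook."""
    def __init__(self, name, must_abort=False):
        self.name = name
        self.must_abort = must_abort

    def get_status(self, concurrent_insert):
        # Refresh cached table state before the lock is granted.
        # Returning True now means: abort the lock.
        return self.must_abort

def thr_multi_lock(handlers, concurrent_insert=False):
    """Grant locks only if every handler's get_status succeeds;
    previously get_status returned nothing and could never fail."""
    for h in handlers:
        if h.get_status(concurrent_insert):
            return False        # abort the whole multi-lock
    return True

print(thr_multi_lock([Handler("t1"), Handler("t2")]))        # True
print(thr_multi_lock([Handler("t1"), Handler("t2", True)]))  # False
```

A write-cache engine such as ColumnStore can then signal from get_status that the lock attempt must fail instead of being forced to succeed.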
    • Changed some DBUG_PRINT that used error: · 74df3c80
      Monty authored
      The reason for the change was to make it easier to find true errors
      when searching in trace logs.
      "error:" should mainly be used when we have a real error.
    • Fixed access of undefined memory for compressed MyISAM and Aria tables · 96d72945
      Monty authored
      MDEV-22689 MSAN use-of-uninitialized-value in decode_bytes()

      This was not a user-visible issue, as the huffman code lookup tables would
      automatically ignore any of the uninitialized bits.

      Fixed by adding an end-zero byte to the bit-stream buffer.

      Other things:
      - Fixed a (for this case) wrong assert in strmov() for myisamchk
        and aria_chk by removing the strmov()
```
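The decode_bytes() issue above can be illustrated with a toy bit reader. This is a sketch of the general pattern, not the actual MyISAM/Aria code: a reader that fetches whole bytes can touch the byte just past the logical end of the stream, and padding the buffer with one zero byte makes that read defined while the decoder still ignores the extra bits.

```python
def read_bits(buf, bit_pos, nbits):
    """Toy bit reader: fetches bits one byte at a time, so it can
    index the byte just past the last meaningful bit of the stream."""
    value = 0
    for i in range(nbits):
        byte = buf[(bit_pos + i) // 8]
        bit = (byte >> (7 - (bit_pos + i) % 8)) & 1
        value = (value << 1) | bit
    return value

data = bytes([0b10110010])   # 8 meaningful bits of compressed data
padded = data + b"\x00"      # the fix: one defined end-zero byte

# Reading 6 bits starting at bit 5 crosses into the padding byte;
# with the zero byte those extra bits are defined (and ignorable).
print(read_bits(padded, 5, 6))   # 16
```

Without the padding byte, the same read would index one byte of uninitialized memory, which is exactly what MSAN reported.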
    • Make error messages from DROP TABLE and DROP TABLE IF EXISTS consistent · dfb41fdd
      Monty authored
      - IF EXISTS ends with a list of all non-existing objects, instead of a
        separate note for every non-existing object
      - Produce a "Note" for all wrongly dropped objects
        (like trying to do DROP SEQUENCE on a normal table)
      - Do not write existing tables that could not be dropped to the binlog

      Other things:
      MDEV-22820 Bogus "Unknown table" warnings produced upon attempt to drop
                 parent table referenced by FK
      This was caused by an older version of this patch and was later fixed.
    • Fixed error messages from DROP VIEW to align with DROP TABLE · 346d10a9
      Monty authored
      - Produce a "Note" for all wrongly dropped objects
        (like doing DROP VIEW on a table).
      - IF EXISTS ends with a list of all non-existing objects, instead of a
        separate note for every non-existing object.

      Other things:
       - Fixed a bug where one could do CREATE TEMPORARY SEQUENCE multiple times
         and create multiple temporary sequences with the same name.
    • MDEV-11412 Ensure that table is truly dropped when using DROP TABLE · 5bcb1d65
      Monty authored
      The code used is largely based on code from Tencent.

      The problem is that in some rare cases there may be a conflict between .frm
      files and the files in the storage engine. In this case the DROP TABLE
      was not able to properly drop the table.

      Some MariaDB/MySQL forks have solved this by adding a FORCE option to
      DROP TABLE. After some discussion among MariaDB developers, we concluded
      that users expect DROP TABLE to always work, even if the
      table is not consistent. There should not be a need to use a
      separate keyword to ensure that the table is really deleted.

      The solution used is:
      - If a .frm file doesn't exist, try dropping the table from all storage
        engines.
      - If the .frm file exists but the table does not exist in the engine,
        try dropping the table from all storage engines.
      - Update storage engines using many table files (CSV, MyISAM, Aria) to
        succeed with the drop even if some of the files are missing.
      - Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table()
        is not needed and always succeeds. This is used by ha_delete_table_force()
        to know which handlers to ignore when trying to drop a table without
        a .frm file.

      The disadvantage of this solution is that a DROP TABLE on a non-existing
      table will be a bit slower, as we have to ask all active storage engines
      whether they know anything about the table.

      Other things:
      - Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error
        if the file doesn't exist. This simplifies some of the code.
      - Don't clear thd->error in ha_delete_table() if there was an active
        error. This is a bug fix.
      - handler::delete_table() will not abort if the first file doesn't exist.
        This is a bug fix to handle the case when a drop table was aborted in
        the middle.
      - Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the
        same code path as when it's not used.
      - Use non_existing_table_error() to detect if the table didn't exist.
        Old code used different error checks in different places.
      - Table_triggers_list::drop_all_triggers() now drops the trigger file if
        it can't be parsed, instead of leaving it hanging around (bug fix)
      - InnoDB no longer prints an error about the .frm file being out of sync
        with the InnoDB data dictionary if the .frm file does not exist. This
        change was required to be able to try to drop an InnoDB table when the
        .frm doesn't exist.
      - Fixed a bug in mi_delete_table() where the .MYD file would not be dropped
        if the .MYI file didn't exist.
      - Fixed a memory leak in Mroonga when deleting a non-existing table
      - Fixed a memory leak in Connect when deleting a non-existing table

      Bugs introduced by the original version of this commit, fixed here:
      MDEV-22826 Presence of Spider prevents tables from being force-deleted from
                 other engines
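The drop logic described above can be sketched as pseudocode. This is a minimal Python model under stated assumptions: force_drop, the engine dictionaries, and the flag check are simplified stand-ins for the real server internals, not MariaDB's actual functions.

```python
AUTOMATIC_DELETE = "HTON_AUTOMATIC_DELETE_TABLE"

def force_drop(table, frm_exists, engine, all_engines):
    """Sketch of the decision flow: if the .frm is missing, or the
    engine no longer knows the table, ask every active engine."""
    if frm_exists and engine is not None:
        return engine["delete_table"](table)
    dropped = False
    for eng in all_engines:
        if AUTOMATIC_DELETE in eng["flags"]:
            continue          # delete_table() not needed for this engine
        if eng["delete_table"](table):
            dropped = True    # some engine still had files for the table
    return dropped

innodb = {"name": "InnoDB", "flags": (),
          "delete_table": lambda t: t == "t1"}
blackhole = {"name": "BLACKHOLE", "flags": (AUTOMATIC_DELETE,),
             "delete_table": lambda t: False}

# .frm lost, but InnoDB still has files for t1: DROP TABLE still succeeds.
print(force_drop("t1", frm_exists=False, engine=None,
                 all_engines=[innodb, blackhole]))   # True
```

The loop over all engines is also where the mentioned slowdown for dropping a non-existing table comes from, and why engines flagged HTON_AUTOMATIC_DELETE_TABLE must be skipped.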
    • 5579c389 (Marko Mäkelä)
    • MDEV-22889: Disable innodb.innodb_force_recovery_rollback · ad5edf3c
      Marko Mäkelä authored
      The test case that was added for MDEV-21217
      (commit b68f1d84)
      should have only two possible outcomes for the locking SELECT statement:
      
      (1) The statement is blocked, and the test will eventually fail
      with a lock wait timeout. This is what I observed when the
      code fix for MDEV-21217 was missing.
      
      (2) The lock conflict will ensure that the statement will execute
      after the rollback has completed, and an empty table will be observed.
      This is the expected outcome with the recovery fix.
      
      What occasionally happens (in some of our CI environments only, so far)
      is that the locking SELECT will return all 1,000 rows of the table that
      had been inserted by the transaction that was never supposed to be
      committed. One possibility is that the transaction was unexpectedly
      committed when the server was killed.
      
      Let us disable the test until the reason for the failure has been
      determined and addressed.
    • Merge 10.4 into 10.5 · 3dbc49f0
      Marko Mäkelä authored
  2. 13 Jun, 2020 7 commits
    • MDEV-22884 Assertion `grant_table || grant_table_role' failed on perfschema · 9ed08f35
      Sergei Golubchik authored
      When allowing access via perfschema callbacks, update
      the cached GRANT_INFO to match.
    • MDEV-21560 Assertion `grant_table || grant_table_role' failed in check_grant_all_columns · b58586aa
      Sergei Golubchik authored
      With RETURNING it can happen that the user has some privileges on
      the table (namely, DELETE), but later needs different privileges
      on individual columns (namely, SELECT).

      Do the same as in check_grant_column(): raise ER_COLUMNACCESS_DENIED_ERROR,
      not an assert.
    • Merge 10.3 into 10.4 · 80534093
      Marko Mäkelä authored
    • Merge 10.2 into 10.3 · d83a4432
      Marko Mäkelä authored
    • MDEV-21217 innodb_force_recovery=2 may wrongly abort rollback · b68f1d84
      Marko Mäkelä authored
      trx_roll_must_shutdown(): Correct the condition that detects
      the start of shutdown.
    • MDEV-22190 InnoDB: Apparent corruption of an index page ... to be written · 574ef380
      Marko Mäkelä authored
      An InnoDB check for the validity of index pages would occasionally fail
      in the test encryption.innodb_encryption_discard_import.
      
      An analysis of a "rr replay" failure trace revealed that the problem
      basically is a combination of two old anomalies, and a recently
      implemented optimization in MariaDB 10.5.
      
      MDEV-15528 allows InnoDB to discard buffer pool pages that were freed.
      
      PageBulk::init() will disable the InnoDB validity check, because
      during native ALTER TABLE (rebuilding tables or creating indexes)
      we could write inconsistent index pages to data files.
      
      In the occasional test failure, page 8:6 would have been written
      from the buffer pool to the data file and subsequently freed.
      
      However, fil_crypt_thread may perform dummy writes to pages that
      have been freed. In case we are causing an inconsistent page to
      be re-encrypted on page flush, we should disable the check.
      
      In the analyzed "rr replay" trace, a fil_crypt_thread attempted
      to access page 8:6 twice after it had been freed.
      On the first call, buf_page_get_gen(..., BUF_PEEK_IF_IN_POOL, ...)
      returned NULL. The second call succeeded, and shortly thereafter,
      the server intentionally crashed due to writing the corrupted page.
    • MDEV-22268 virtual longlong Item_func_div::int_op(): Assertion `0' failed in Item_func_div::int_op · 6c30bc21
      Alexander Barkov authored
      Item_func_div::fix_length_and_dec_temporal() set the return data type to
      integer in case of @div_precision_increment==0 for temporal input with FSP=0.
      This caused Item_func_div to call int_op(), which is not implemented,
      so a crash on DBUG_ASSERT(0) happened.

      Fixed fix_length_and_dec_temporal() to set the result type to DECIMAL.
  3. 12 Jun, 2020 17 commits
    • when printing Item_in_optimizer, use precedence of wrapped Item · 114a8436
      Sidney Cammeresi authored
      When Item::print() is called with the QT_PARSABLE flag, WHERE i NOT IN
      (SELECT ...) gets printed as WHERE !i IN (SELECT ...) instead of WHERE
      !(i IN (SELECT ...)), because Item_in_optimizer returns DEFAULT_PRECEDENCE.
      It should return the precedence of the inner operation.
    • MDEV-22840: JSON_ARRAYAGG gives wrong results with NULL values and ORDER BY clause · ab9bd628
      Varun Gupta authored
      The problem here is similar to the case with DISTINCT: the tree used for ORDER BY
      needs to also hold the null bytes of the record. This was not done for GROUP_CONCAT,
      as NULLs are rejected by GROUP_CONCAT.

      Also introduced a comparator function for the ORDER BY tree to handle NULL
      values with JSON_ARRAYAGG.
    • MDEV-22011: DISTINCT with JSON_ARRAYAGG gives wrong results · 0f6f0daa
      Varun Gupta authored
      For DISTINCT to be handled with JSON_ARRAYAGG, we need to make sure
      that the Unique tree also holds the NULL bytes of a table record
      inside the nodes of the tree. This behaviour for JSON_ARRAYAGG is
      different from GROUP_CONCAT, because in GROUP_CONCAT we simply reject
      NULL values for columns.

      Also introduced a comparator function for the Unique tree to handle NULL
      values for DISTINCT inside JSON_ARRAYAGG.
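The comparator idea in the two JSON_ARRAYAGG fixes above can be sketched in a few lines. This is a hedged Python illustration under a simplifying assumption: records are modeled as (is_null, value) pairs rather than the server's actual row format. The point is that the null indicator must be part of the tree's comparison key, otherwise NULL and a real value can collapse into one node.

```python
from functools import cmp_to_key

def cmp_with_null(a, b):
    """Compare (is_null, value) records: NULL sorts first and is
    distinct from every non-NULL value."""
    a_null, a_val = a
    b_null, b_val = b
    if a_null != b_null:
        return -1 if a_null else 1
    if a_null:             # both NULL: equal
        return 0
    return (a_val > b_val) - (a_val < b_val)

rows = [(False, 2), (True, None), (False, 1), (True, None), (False, 2)]

# Unique/ORDER BY tree behaviour: sort with the comparator, then drop
# adjacent duplicates, keeping NULL as its own distinct entry.
rows.sort(key=cmp_to_key(cmp_with_null))
distinct = [r for i, r in enumerate(rows)
            if i == 0 or cmp_with_null(rows[i - 1], r) != 0]
print(distinct)   # [(True, None), (False, 1), (False, 2)]
```

A comparator that looked only at the value part would merge (True, None) with whatever value happens to sit in the unused bytes, which is the wrong-results symptom both MDEVs describe.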
    • MDEV-11563: GROUP_CONCAT(DISTINCT ...) may produce a non-distinct list · a006e88c
      Varun Gupta authored
      Backported from MySQL
      Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT
      Issue:
      ------
      The problem occurs when:
      1) GROUP_CONCAT (DISTINCT ....) is used in the query.
      2) The data size is greater than the value of the system variable
      tmp_table_size.

      The result would contain values that are non-unique.

      Root cause:
      -----------
      An in-memory structure is used to filter out non-unique
      values. When the data size exceeds tmp_table_size, the
      overflow is written to disk as a separate file. The
      expectation here is that when all such files are merged,
      the full set of unique values can be obtained.

      But the Item_func_group_concat::add function is in a bit of a
      hurry. Even as it is adding values to the tree, it wants to
      decide if a value is unique and write it to the result
      buffer. This works fine if the configured maximum size is
      greater than the size of the data. But when tmp_table_size
      is set to a low value, the tree is smaller and
      hence requires the creation of multiple copies on disk.

      Item_func_group_concat currently has no mechanism to merge
      all the copies on disk and then generate the result. This
      results in duplicate values.

      Solution:
      ---------
      In case of the DISTINCT clause, don't write to the result
      buffer immediately. Do the merge and only then put the
      unique values in the result buffer. This has to be done in
      Item_func_group_concat::val_str.

      Note regarding result file changes:
      -----------------------------------
      Earlier, when a unique value was seen in
      Item_func_group_concat::add, it was dumped to the output.
      So the result is in the order stored in the SE. But with this fix,
      we wait until all the data is read and the final set of
      unique values is written to the output buffer. So the data
      appears in sorted order.

      This only fixes the cases where we have DISTINCT without an ORDER BY clause
      in GROUP_CONCAT.
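The merge step described in the solution above is essentially a k-way merge of sorted runs with duplicate suppression. A minimal Python sketch under that assumption, where the lists stand in for the on-disk copies of the unique tree:

```python
import heapq

def merge_distinct(runs):
    """k-way merge of sorted runs, emitting each value once.
    Models producing GROUP_CONCAT(DISTINCT ...) output only after
    all on-disk copies of the tree have been merged."""
    out = []
    last = object()        # sentinel, distinct from any real value
    for v in heapq.merge(*runs):
        if v != last:      # suppress duplicates across runs
            out.append(v)
            last = v
    return out

# Two sorted "disk copies" that share values; deciding uniqueness
# per run (the old behaviour) would emit 'b' and 'c' twice, while
# the global merge emits each value once, in sorted order.
runs = [["a", "b", "c"], ["b", "c", "d"]]
print(",".join(merge_distinct(runs)))   # a,b,c,d
```

The sorted output also matches the note above about result file changes: after the fix the values appear in sorted order rather than in storage-engine order.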
    • MDEV-15101: Stop ANALYZE TABLE from flushing table definition cache · fd1755e4
      Sergei Petrunia authored
      Part#2: forgot to commit the adjustments for the testcases.
    • MDEV-22867 Assertion instant.n_core_fields == n_core_fields failed · 43120009
      Marko Mäkelä authored
      This is a race condition where a table on which a 10.3-style
      instant ADD COLUMN is emptied during the execution of
      ALTER TABLE ... DROP COLUMN ..., DROP INDEX ..., ALGORITHM=NOCOPY.
      
      In commit 2c4844c9 the
      function instant_metadata_lock() would prevent this race condition.
      But, it would also hold a page latch on the leftmost leaf page of
      clustered index for the duration of a possible DROP INDEX operation.
      
      The race could be fixed by restoring the function
      instant_metadata_lock() that was removed in
      commit ea37b144
      but it would be more future-proof to prevent the
      dict_index_t::clear_instant_add() call from being issued at all.
      
      We may at some point support DROP COLUMN ..., ADD INDEX ..., ALGORITHM=NOCOPY,
      and that would spend a non-trivial amount of
      execution time in ha_innobase::inplace_alter(),
      making a server hang possible. Currently this is not supported,
      and our added test case will notice when the support is introduced.
      
      dict_index_t::must_avoid_clear_instant_add(): Determine if
      a call to clear_instant_add() must be avoided.
      
      btr_discard_only_page_on_level(): Preserve the metadata record
      if must_avoid_clear_instant_add() holds.
      
      btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete():
      Do not remove the metadata record even if the table becomes empty
      but must_avoid_clear_instant_add() holds.
      
      btr_pcur_store_position(): Relax a debug assertion.
      
      This is joint work with Thirunarayanan Balathandayuthapani.
    • MDEV-15101: Stop ANALYZE TABLE from flushing table definition cache · d7d80689
      Sergei Petrunia authored
      Apply this patch from Percona Server (amended for 10.5):
      
      commit cd7201514fee78aaf7d3eb2b28d2573c76f53b84
      Author: Laurynas Biveinis <laurynas.biveinis@gmail.com>
      Date:   Tue Nov 14 06:34:19 2017 +0200
      
          Fix bug 1704195 / 87065 / TDB-83 (Stop ANALYZE TABLE from flushing table definition cache)
      
          Make ANALYZE TABLE stop flushing affected tables from the table
          definition cache, which has the effect of not blocking any subsequent
          new queries involving the table if there's a parallel long-running
          query:
      
          - new table flag HA_ONLINE_ANALYZE, return it for InnoDB and TokuDB
            tables;
          - in mysql_admin_table, if we are performing ANALYZE TABLE, and the
            table flag is set, do not remove the table from the table
            definition cache, do not invalidate query cache;
          - in partitioning handler, refresh the query optimizer statistics
            after ANALYZE if the underlying handler supports HA_ONLINE_ANALYZE;
          - new testcases main.percona_nonflushing_analyze_debug,
            parts.percona_nonflushing_analyze_debug and a supporting debug sync
            point.
      
          For TokuDB, this change exposes bug TDB-83 (Index cardinality stats
          updated for handler::info(HA_STATUS_CONST), not often enough for
          tokudb_cardinality_scale_percent). TokuDB may return different
          rec_per_key values depending on dynamic variable
          tokudb_cardinality_scale_percent value. The server does not have a way
          of knowing that changing this variable invalidates the previous
          rec_per_key values in any opened table shares, and so does not call
          info(HA_STATUS_CONST) again. Fix by updating rec_per_key for both
          HA_STATUS_CONST and HA_STATUS_VARIABLE. This also forces a re-record
          of tokudb.bugs.db756_card_part_hash_1_pick, with the new output
          seeming to be more correct.
    • (Thirunarayanan Balathandayuthapani)
    • MDEV-22877 Avoid unnecessary buf_pool.page_hash S-latch acquisition · d2c593c2
      Marko Mäkelä authored
      MDEV-15053 did not remove all unnecessary buf_pool.page_hash S-latch
      acquisition. There are code paths where we are holding buf_pool.mutex
      (which will sufficiently protect buf_pool.page_hash against changes)
      and unnecessarily acquire the latch. Many invocations of
      buf_page_hash_get_locked() can be replaced with the much simpler
      buf_pool.page_hash_get_low().
      
      In the worst case the thread that is holding buf_pool.mutex will become
      a victim of MDEV-22871, suffering from a spurious reader-reader conflict
      with another thread that genuinely needs to acquire a buf_pool.page_hash
      S-latch.
      
      In many places, we were also evaluating page_id_t::fold() while holding
      buf_pool.mutex. Low-level functions such as buf_pool.page_hash_get_low()
      must get the page_id_t::fold() as a parameter.
      
      buf_buddy_relocate(): Defer the hash_lock acquisition to the critical
      section that starts by calling buf_page_t::can_relocate().
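The fold-as-parameter change above follows a common pattern: compute the hash of a key once at the top of the call chain and pass it down, instead of rehashing inside every low-level lookup that may run while a mutex is held. A hedged Python sketch of that pattern, with illustrative names (PageHash and the fold formula are assumptions, not InnoDB's actual code):

```python
def fold(page_id):
    """Stand-in for page_id_t::fold(): hash a (space, page_no) pair."""
    space, page_no = page_id
    return (space << 20) + space + page_no

class PageHash:
    def __init__(self):
        self.cells = {}

    def insert(self, page_id, page):
        self.cells[fold(page_id)] = (page_id, page)

    def get_low(self, page_id, id_fold):
        # Low-level lookup: the caller supplies the precomputed fold,
        # so nothing is rehashed while e.g. buf_pool.mutex is held.
        hit = self.cells.get(id_fold)
        return hit[1] if hit and hit[0] == page_id else None

ph = PageHash()
ph.insert((8, 6), "page 8:6")
page_id = (8, 6)
id_fold = fold(page_id)               # computed once, outside the hot path
print(ph.get_low(page_id, id_fold))   # page 8:6
```

Callers that already hold the protecting mutex can then use the cheap get_low path directly, which is the analogue of replacing buf_page_hash_get_locked() with buf_pool.page_hash_get_low().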
    • more mysql_create_view link/unlink woes · 0b5dc626
      Sergei Golubchik authored
    • MDEV-22878 galera.wsrep_strict_ddl hangs in 10.5 after merge · fb70eb77
      Sergei Golubchik authored
      if mysql_create_view is aborted when `view` isn't unlinked,
      it should not be linked back on cleanup
    • efa67ee0 (Andrei Elkin)
    • (Oleksandr Byelkin)
    • MDEV-22834: Disks plugin - change datatype to bigint · 8ec21afc
      Vicențiu Ciorbaru authored
      On large hard disks (> 2TB), the plugin won't function correctly, always
      showing 2TB of available space due to integer overflow. Upgrade the table
      fields to bigint to resolve this problem.
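The failure above is the classic 32-bit limit. A small Python illustration of the general pattern only (the KiB unit and the clamping behaviour are assumptions for the sketch, not the plugin's exact code): a count for a disk over 2 TB exceeds what a signed 32-bit INT column can hold, so the stored value sticks at the maximum, while a BIGINT column has ample room.

```python
INT32_MAX = 2**31 - 1

def clamp_int32(n):
    """Model a value stored in a signed 32-bit INT column: MySQL
    clamps out-of-range values to the column's limit."""
    return max(-2**31, min(INT32_MAX, n))

# Hypothetical: free space of a 6 TiB disk, reported in KiB units.
kib_free = (6 * 1024**4) // 1024

print(clamp_int32(kib_free))     # 2147483647, i.e. stuck at ~2 TiB
print(kib_free <= 2**63 - 1)     # True: the value fits in a BIGINT
```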
    • MDEV-21851: Error in BINLOG_BASE64_EVENT is always error-logged as if it is done by Slave · e156a8da
      Andrei Elkin authored
      The prefix of the error log message for a failed BINLOG applying
      is corrected to be the SQL command name.
    • MDEV-22602 Disable UPDATE CASCADE for SQL constraints · 762bf7a0
      Aleksey Midenkov authored
      CHECK constraint is checked by check_expression(), which walks its
      items and gets into Item_field::check_vcol_func_processor() to check
      for conformity with the foreign key list.

      WITHOUT OVERLAPS is checked for the same conformity in
      mysql_prepare_create_table().

      Long uniques are already impossible with InnoDB foreign keys. See
      ER_CANT_CREATE_TABLE in the test case.

      2 accompanying bugs fixed (test main.constraints failed):

      1. check->name.str lived on the SP execute mem_root, while the "check"
      object itself lives on the SP main mem_root. On the second SP execution
      check->name.str contained garbage data. Fixed by allocating from
      thd->stmt_arena->mem_root, which is the SP main mem_root.

      2. The CHECK_CONSTRAINT_IF_NOT_EXISTS value was mixed up with
      VCOL_FIELD_REF. VCOL_FIELD_REF is assigned in check_expression() and
      was then detected as CHECK_CONSTRAINT_IF_NOT_EXISTS in
      handle_if_exists_options().

      Existing cases for MDEV-16932 in main.constraints cover both fixes.
    • Fix wrong merge of commit d218d1aa · 2fd2fd77
      Vicențiu Ciorbaru authored