  1. 19 Jan, 2018 2 commits
    • MDEV-14229: Stack trace is not resolved for shared objects · 26e5f9dd
      Vicențiu Ciorbaru authored
      Resolving a stack trace that includes functions in dynamic libraries
      requires us to look inside the libraries for the symbols. addr2line
      needs to be started with the correct binary for each address on the
      stack. To do this, we figure out which binary an address belongs to
      using dladdr(), and if the running addr2line process was started with
      a different binary, we fork it again with the correct one.
      
      We only have one addr2line process running at any point during the
      stacktrace resolving step. The maximum number of forks for addr2line should
      generally be around 6.
      
      One for the server stacktrace code, one for plugin code, one when
      going back into server code, one for the pthread library, one for
      libc, and one for the _start function in the server. More can come up
      if a plugin calls a server function which calls back into a plugin,
      etc.
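
      A minimal sketch of the per-address lookup using dladdr() (the
      function name here is hypothetical; the actual resolver code in the
      server differs):

        #define _GNU_SOURCE
        #include <dlfcn.h>    /* dladdr() */
        #include <stddef.h>   /* NULL */

        /* Return the path of the binary (executable or shared library)
           that contains addr, so addr2line can be (re)started with it. */
        static const char *binary_for_address(void *addr)
        {
          Dl_info info;
          if (dladdr(addr, &info) && info.dli_fname)
            return info.dli_fname;
          return NULL;
        }

      For a shared library, the address handed to addr2line also has to be
      rebased against info.dli_fbase, since addr2line expects offsets
      within the object file.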
    • MDEV-14241: Server crash in key_copy / get_matching_chain_by_join_key or valgrind warnings · a7a4519a
      Varun Gupta authored
      In this case we were using the derived_with_keys optimization, but we
      could not create a key because its length exceeded the maximum allowed
      (MI_MAX_KEY_LENGTH). To do the join we needed to create a hash join
      key instead, but the EXPLAIN output still referred to the derived
      keys, which were created but never used.
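
      The fallback decision, sketched (the constant's value and the names
      here are for illustration only; the real limit comes from the MyISAM
      headers):

        #define MI_MAX_KEY_LENGTH 1000  /* illustrative value */

        enum join_key_kind { DERIVED_TABLE_KEY, HASH_JOIN_KEY };

        /* If the would-be index key is too long for the engine, the join
           must fall back to a hash join key. */
        static enum join_key_kind choose_join_key(unsigned key_length)
        {
          return key_length > MI_MAX_KEY_LENGTH ? HASH_JOIN_KEY
                                                : DERIVED_TABLE_KEY;
        }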
  2. 18 Jan, 2018 8 commits
  3. 17 Jan, 2018 1 commit
  4. 16 Jan, 2018 4 commits
    • bug: ha_heap was unilaterally increasing reclength · b80fa400
      Sergei Golubchik authored
      MEMORY engine needs the record length to be at least sizeof(void*),
      because it stores a pointer there (linking deleted records into a list).
      So when the reclength is less than sizeof(void*), it's set to sizeof(void*).
      That is done inside heap_create(), and the upper layer doesn't know
      that the engine writes beyond share->reclength.
      
      While it's usually safe (the in-memory record size is rounded up to
      sizeof(double), so even if share->reclength is too small,
      share->rec_buff_len is not), it could cause problems in code that
      copies records and expects them to fit in share->reclength,
      e.g. in partitioning.
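
      The clamp inside heap_create(), sketched (simplified; the real code
      lives in the heap storage engine):

        #include <stddef.h>

        /* MEMORY links deleted records into a free list by storing a
           pointer inside the record itself, so every record must be able
           to hold at least one pointer. */
        static size_t heap_effective_reclength(size_t reclength)
        {
          return reclength < sizeof(void*) ? sizeof(void*) : reclength;
        }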
    • BIT field woes · 444587d8
      Sergei Golubchik authored
      * get_rec_bits() was always reading two bytes, even if the
        bit field consisted of only one byte
      * In various places the code used field->pack_length() bytes
        starting from field->ptr, while it should have used
        field->pack_length_in_rec()
      * Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
        an argument to memcmp(), but field_length is the number of bits!
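
      The last point, sketched (simplified: a real BIT column may keep its
      "uneven" high bits outside the record, so Field_bit's comparison is
      more involved than this):

        #include <string.h>

        /* field_length counts bits; memcmp() needs bytes. */
        static int bit_field_cmp(const unsigned char *a,
                                 const unsigned char *b,
                                 unsigned field_length_bits)
        {
          unsigned bytes= (field_length_bits + 7) / 8;
          return memcmp(a, b, bytes);  /* not field_length_bits! */
        }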
    • add support for ASAN instrumentation · 5e7593ad
      Sergei Golubchik authored
    • fix compilation with ASAN · 6634f460
      Sergei Golubchik authored
      If the property is not found, set it to the empty string; otherwise
      it shows up as libmysql_link_flags-NOTFOUND on the linker command
      line, and the linker won't like it.
      
      Also, don't specify LINK_FLAG_NO_UNDEFINED twice; MERGE_LIBRARIES
      already puts it into LINK_FLAGS.
  5. 15 Jan, 2018 7 commits
  6. 14 Jan, 2018 1 commit
    • MDEV-14526: MariaDB keeps crashing under load when query_cache_type is changed · 5fe1d7d0
      Oleksandr Byelkin authored
      The problem occurred in the following scenario:
      T1 - starts registering a query and locks the QC
      T2 - starts disabling the QC and waits for the unlock
      T1 - unlocks the QC
      T2 - disables the QC and destroys its signals without waiting for the
           query to be unlocked
      T1 a) has not yet unlocked the query in the QC and crashes on the
            attempt to unlock it, because the QC signals have already been
            destroyed
         b) if the unlock happened before the destruction, T1 executes
            end_of_result for the first time on exit, after a try_lock()
            that sees the QC disabled and returns TRUE. But it does not
            reset query_cache_tls->first_query_block, which leads to a
            second call of end_of_result() when the diagnostics area
            already has an inappropriate status (not is_eof()).
      
      The fix is:
        1) wait for all queries to be unlocked before destroying the
           signals, by locking and unlocking
        2) reset query_cache_tls->first_query_block if the QC is disabled
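
      The drain pattern from fix (1), sketched with pthreads (illustrative;
      the query cache uses its own lock/signal wrappers). It assumes the
      cache has already been disabled, so no new lockers can arrive:

        #include <pthread.h>

        static pthread_mutex_t qc_lock= PTHREAD_MUTEX_INITIALIZER;

        static void destroy_qc_signals(void)
        {
          /* Locking and immediately unlocking guarantees that any thread
             still inside the critical section has left it. */
          pthread_mutex_lock(&qc_lock);
          pthread_mutex_unlock(&qc_lock);
          pthread_mutex_destroy(&qc_lock);  /* now safe to destroy */
        }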
  7. 13 Jan, 2018 1 commit
  8. 12 Jan, 2018 3 commits
  9. 11 Jan, 2018 4 commits
    • Fixed misleading variable names. · a5285a8f
      Oleksandr Byelkin authored
    • MDEV-14690: Assertion `page_link == &fake_link' failed in pagecache_write_part · abb9e703
      Oleksandr Byelkin authored
      Fixed the call to follow the protocol of the pagecache calls.
      Fixed misleading variable names.
    • MDEV-8200 aria bug with insert select and lock tables · 1f18bd63
      Monty authored
      This bug happens when locking the same Aria "transactional" table
      (page format) more than once with LOCK TABLES and inserting into one
      of them with INSERT ... SELECT when the table is empty.
      
      Fixed by ensuring we don't use fast bulk insert if the table is opened
      twice with LOCK TABLES (as this changes table->s->state).
      
      Code changes:
      - Added use_count to MARIA_USED_TABLES to be able to check if a
        table is opened twice for a statement/LOCK TABLES (see the sketch
        below)
      - Don't clear history or reset info->start_state if we
        don't have versioning. One reason for the bug was
        that info->start_state was set to point to different
        states for the two tables. If there is no versioning,
        info->start_state should always point to info->s->state.common.
      
      Other things:
      - Also fixed some typos that were noticed while scanning the code
      - More DBUG_PRINT
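
      The use_count check, sketched (the structure and function here are
      illustrative; the real logic lives in the Aria storage engine):

        /* Tracks how many times one statement has the same Aria table
           open; mirrors the new MARIA_USED_TABLES::use_count. */
        struct used_table { unsigned use_count; };

        /* Fast bulk insert is only safe when the statement has the table
           open exactly once, since a second handler would share (and see
           changes to) table->s->state. */
        static int can_use_fast_bulk_insert(const struct used_table *t)
        {
          return t->use_count == 1;
        }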
    • MDEV-14916 InnoDB reports warning for "Purge reached the head of the history list" · bdcd7f79
      Marko Mäkelä authored
      The warning was originally added in
      commit c6766305
      (MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that
      was analyzed in https://lists.mysql.com/mysql/176250
      on November 9, 2004.
      
      Originally, the limit was 20,000 undo log headers or transactions,
      but in commit 9d6d1902
      in MySQL 5.5.11 it was increased to 2,000,000.
      
      The message can be triggered when the progress of purge is prevented
      by a long-running transaction (or just an idle transaction whose
      read view was started a long time ago): run many transactions that
      UPDATE or DELETE some records, then start another transaction with a
      read view, and then execute more than 2,000,000 transactions that
      UPDATE or DELETE records in InnoDB tables. Finally, when the oldest
      long-running transaction is completed, purge would run up to the
      next-oldest transaction, and there would still be more than
      2,000,000 transactions to purge.
      
      Because the message can be triggered when the database is obviously
      not corrupted, it should be removed. Heavy users of InnoDB should be
      monitoring the "History list length" in SHOW ENGINE INNODB STATUS;
      there is no need to spam the error log.
  10. 10 Jan, 2018 3 commits
  11. 03 Jan, 2018 1 commit
  12. 02 Jan, 2018 2 commits
    • Follow-up to MDEV-14799: Remove bogus debug assertions · 20fab71b
      Marko Mäkelä authored
      trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a
      column prefix of an externally stored column, the already parsed
      part of the undo log record may contain a reference to
      an off-page column. This is the case in the bug58912 test in
      innodb.innodb.
    • MDEV-14799 After UPDATE of indexed columns, old values will not be purged from secondary indexes · d384ead0
      Marko Mäkelä authored
      This is a regression caused by MDEV-14051 'Undo log record is too big.'
      
      Purge in the secondary index is wrongly skipped in
      row_purge_upd_exist_or_extern() because node->row does not contain
      all indexed columns.
      
      trx_undo_rec_get_partial_row(): Add a parameter for node->update
      so that the updated columns will be copied from the initial part
      of the undo log record.
  13. 27 Dec, 2017 2 commits
  14. 20 Dec, 2017 1 commit
    • MDEV-12350: Heap corruption, overrun buffer, ASAN errors, server crash in my_fill_8bit / filesort · 924db8b4
      Varun Gupta authored
      In the function make_sortkey() a tmp buffer was defined; in the
      absence of param->tmp_buffer, the tmp buffer used the sort_keys
      buffer. The sort_keys buffer has a length defined by
      sort_field->length, while the length of param->tmp_buffer is stored
      in param->rec_length. Make sure to use the appropriate length based
      on which buffer we are using, otherwise we'll overflow.
      
      Also added a type cast to size_t during the calculation of the sort
      keys buffer size, to avoid an overflow if the buffer size exceeds
      32 bits.
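
      The overflow from the second paragraph, sketched (names are
      illustrative):

        #include <stddef.h>
        #include <stdint.h>

        static size_t sort_buffer_size(uint32_t num_records,
                                       uint32_t rec_length)
        {
          /* Wrong: the product is computed in 32 bits and can wrap
             before it is widened to size_t:
             size_t size= num_records * rec_length;            */

          /* Right: cast one operand first, so the multiplication
             itself happens in size_t. */
          return (size_t) num_records * rec_length;
        }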