1. 22 Jan, 2018 4 commits
    • Sergei Golubchik's avatar
      improve ASAN instrumentation: TRASH · a966d422
      Sergei Golubchik authored
      mark freed memory as not accessible, not merely undefined
      a966d422
    • Sergei Golubchik's avatar
      Correct TRASH() macro usage · 22ae3843
      Sergei Golubchik authored
      TRASH was mapped to TRASH_FREE and was supposed to be used for memory
      that should not be accessed anymore, while TRASH_ALLOC() is to be
      used for uninitialized but to-be-used memory.
      
      But sometimes TRASH() was used in the latter sense.
      
      Remove TRASH() macro, always use explicit TRASH_ALLOC() or TRASH_FREE().
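
      A minimal sketch of the intended distinction, assuming an ASAN build
      (-fsanitize=address); the names and fill bytes are illustrative, not the
      actual my_valgrind.h definitions:

        #include <string.h>
        #include <sanitizer/asan_interface.h>

        /* New, not-yet-initialized memory: fill it with a recognizable
           pattern; the caller is expected to overwrite it before reading. */
        #define TRASH_ALLOC_SKETCH(p, n)  memset((p), 0xA5, (n))

        /* Freed memory: fill it, then poison it so that any later read or
           write is reported by ASAN, i.e. the region becomes "not
           accessible", not merely "undefined". */
        #define TRASH_FREE_SKETCH(p, n)                \
          do {                                         \
            memset((p), 0x8F, (n));                    \
            ASAN_POISON_MEMORY_REGION((p), (n));       \
          } while (0)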
      22ae3843
    • Sergei Golubchik's avatar
      Fix compilation without dlopen · 204cb85a
      Sergei Golubchik authored
      204cb85a
    • Marko Mäkelä's avatar
      MDEV-7049 MySQL#74585 - InnoDB: Failing assertion: *mbmaxlen < 5 in file ha_innodb.cc line 1904 · 906ce096
      Marko Mäkelä authored
      InnoDB limited the maximum number of bytes per character to 4.
      But, the filename character set that was introduced in MySQL 5.1
      uses up to 5 bytes per character.
      
      To allow InnoDB tables to be created with wider characters, let
      us split the mbminmaxlen fields into mbminlen and mbmaxlen, and increase
      the limit to 7 bytes per character. This will increase the payload size
      of dtype_t and dict_col_t by one bit. The storage size will be unchanged
      (54 bits and 77 bits will use the same number of bytes as the
      previous sizes 53 and 76 bits).
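
      A sketch of the layout change (the 3-bit widths are an assumption that
      matches the 7-bytes-per-character limit and the one extra payload bit;
      this is not a copy of the actual dtype_t/dict_col_t definitions):

        /* before: one packed field, limit of 4 bytes per character */
        struct col_before { unsigned mbminmaxlen:5; };

        /* after: two separate fields, limit of 7 bytes per character,
           using one more payload bit in total (5 -> 6) */
        struct col_after  { unsigned mbminlen:3; unsigned mbmaxlen:3; };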
      906ce096
  2. 19 Jan, 2018 4 commits
    • Daniel Bartholomew's avatar
      bump the VERSION · 17f64b36
      Daniel Bartholomew authored
      17f64b36
    • Vicențiu Ciorbaru's avatar
      MDEV-14229: Stack trace is not resolved for shared objects · 26e5f9dd
      Vicențiu Ciorbaru authored
      Resolving a stack trace that includes functions in dynamic libraries
      requires us to look inside those libraries for the symbols. addr2line
      needs to be started with the correct binary for each address on the
      stack. To do this, figure out which library the address belongs to
      using dladdr(); if the running addr2line process was started with a
      different binary, fork it again with the correct one.
      
      We only have one addr2line process running at any point during the
      stack trace resolution step, and the number of addr2line forks should
      generally stay around six: one for the server stacktrace code, one for
      plugin code, one when going back into server code, one for the pthread
      library, one for libc, and one for the _start function in the server.
      More forks can happen if a plugin calls a server function which goes
      back into a plugin, and so on.
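
      A minimal sketch of the library-lookup step (hypothetical helper; the
      real resolver also has to manage the forked addr2line process and
      compute the offset it is given):

        #define _GNU_SOURCE           /* for dladdr() on glibc */
        #include <dlfcn.h>            /* link with -ldl */
        #include <stddef.h>

        /* Return the path of the object (executable or shared library)
           containing addr, or NULL if it cannot be determined.  The caller
           compares this with the binary the current addr2line was started
           with and forks a new addr2line if they differ. */
        static const char *object_containing(void *addr, void **base)
        {
          Dl_info info;
          if (!dladdr(addr, &info) || !info.dli_fname)
            return NULL;
          *base= info.dli_fbase;      /* load address of the object; needed to
                                         turn addr into an offset for
                                         addr2line on shared libraries */
          return info.dli_fname;
        }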
      26e5f9dd
    • Varun Gupta's avatar
      MDEV-14241: Server crash in key_copy / get_matching_chain_by_join_key or valgrind warnings · a7a4519a
      Varun Gupta authored
      In this case the derived_with_keys optimization was used, but a key could not
      be created because its length exceeded the maximum allowed (MI_MAX_KEY_LENGTH).
      To do the join we needed to create a hash join key instead, but the EXPLAIN
      output still referred to the derived keys, which were created but never used.
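
      A hedged sketch of the length check involved (illustrative names, not
      the optimizer's actual code):

        #include <stddef.h>

        /* A real index on the materialized derived table can only be
           created if the key fits within the engine limit; otherwise the
           join has to fall back to a hash join key. */
        static bool must_use_hash_join_key(size_t key_length,
                                           size_t max_key_length) /* e.g. MI_MAX_KEY_LENGTH */
        {
          return key_length > max_key_length;
        }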
      a7a4519a
  3. 18 Jan, 2018 8 commits
  4. 17 Jan, 2018 1 commit
  5. 16 Jan, 2018 4 commits
    • Sergei Golubchik's avatar
      bug: ha_heap was unilaterally increasing reclength · b80fa400
      Sergei Golubchik authored
      MEMORY engine needs the record length to be at least sizeof(void*),
      because it stores a pointer there (linking deleted records into a list).
      So when the reclength is less than sizeof(void*), it's set to sizeof(void*).
      That is done inside heap_create(), and the upper layer doesn't know
      that the engine writes beyond share->reclength.
      
      While it's usually safe (in-memory record size is rounded up to
      sizeof(double), so even if share->reclength is too small,
      share->rec_buff_len is not), it could cause problems in the code that
      copies records and expects them to fit in share->reclength,
      e.g. in partitioning.
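
      A sketch of the constraint described above (illustrative, not the
      actual heap_create() code):

        #include <stddef.h>

        /* A deleted record's first bytes are reused to link it into the
           free list, so the engine needs room for at least one pointer in
           every record, even if the table's row is shorter. */
        static size_t heap_effective_reclength(size_t reclength)
        {
          const size_t min_len= sizeof(unsigned char *);
          return reclength < min_len ? min_len : reclength;
        }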
      b80fa400
    • Sergei Golubchik's avatar
      BIT field woes · 444587d8
      Sergei Golubchik authored
      * get_rec_bits() was always reading two bytes, even if the
        bit field occupied only one byte
      * In various places the code used field->pack_length() bytes
        starting from field->ptr, while it should use field->pack_length_in_rec()
      * Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
        an argument to memcmp(), but field_length is the number of bits!
        See the bits-to-bytes sketch below.
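
      A sketch of the conversion the last item calls for (illustrative, not
      the actual Field_bit code):

        #include <string.h>

        /* field_length counts bits; memcmp() needs a byte count. */
        static int bit_field_cmp(const unsigned char *a,
                                 const unsigned char *b,
                                 unsigned field_length_bits)
        {
          size_t bytes= (field_length_bits + 7) / 8;  /* round bits up to bytes */
          return memcmp(a, b, bytes);
        }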
      444587d8
    • Sergei Golubchik's avatar
      add support for ASAN instrumentation · 5e7593ad
      Sergei Golubchik authored
      5e7593ad
    • Sergei Golubchik's avatar
      fix compilation with ASAN · 6634f460
      Sergei Golubchik authored
      If the property is not found, set it to the empty string; otherwise it
      shows up as libmysql_link_flags-NOTFOUND on the linker command line,
      and the linker won't like it.

      Also, don't specify LINK_FLAG_NO_UNDEFINED twice; MERGE_LIBRARIES
      already puts it into LINK_FLAGS.
      6634f460
  6. 15 Jan, 2018 7 commits
  7. 14 Jan, 2018 1 commit
    • Oleksandr Byelkin's avatar
      MDEV-14526: MariaDB keeps crashing under load when query_cache_type is changed · 5fe1d7d0
      Oleksandr Byelkin authored
      The problem occurred in the following scenario:
      T1 - starts registering a query and locks the QC
      T2 - starts disabling the QC and waits for the unlock
      T1 - unlocks the QC
      T2 - disables the QC and destroys its signals without waiting for the
           query to be unlocked
      T1 a) - has not yet unlocked the query in the QC and crashes on the
              attempt to unlock it, because the QC signals are already
              destroyed
         b) - if the unlock happened before the destruction, it executes
              end_of_result the first time at exit, right after try_lock(),
              which sees that the QC is disabled and returns TRUE. But it
              does not reset query_cache_tls->first_query_block, which leads
              to a second call of end_of_result when the diagnostics area is
              already in an inappropriate state (not is_eof()).

      The fix is:
        1) wait until all queries are unlocked before destroying the signals,
           by locking and unlocking the QC (see the sketch below)
        2) reset query_cache_tls->first_query_block if the QC is disabled
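
      A minimal sketch of the idea behind 1), using plain pthreads rather
      than the actual query cache locking code:

        #include <pthread.h>

        static pthread_mutex_t qc_lock= PTHREAD_MUTEX_INITIALIZER;

        static void destroy_qc_signals(void)
        {
          /* Lock and immediately unlock: this cannot succeed until every
             thread that took the lock before the cache was disabled has
             released it, so no query is still holding it when the mutex
             is destroyed. */
          pthread_mutex_lock(&qc_lock);
          pthread_mutex_unlock(&qc_lock);
          pthread_mutex_destroy(&qc_lock);
        }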
      5fe1d7d0
  8. 13 Jan, 2018 1 commit
  9. 12 Jan, 2018 3 commits
  10. 11 Jan, 2018 4 commits
    • Oleksandr Byelkin's avatar
      Fixed misleading variable names. · a5285a8f
      Oleksandr Byelkin authored
      a5285a8f
    • Oleksandr Byelkin's avatar
      MDEV-14690: Assertion `page_link == &fake_link' failed in pagecache_write_part · abb9e703
      Oleksandr Byelkin authored
      Fix the call to correspond to the protocol of the pagecache call.
      Fix misleading variable names.
      abb9e703
    • Monty's avatar
      MDEV-8200 aria bug with insert select and lock tables · 1f18bd63
      Monty authored
      This bug happens when locking the same Aria "transactional" table
      (page format) more than once with LOCK TABLES and inserting into one
      of them with INSERT ... SELECT when the table is empty.

      Fixed by ensuring we don't use fast bulk insert if the table is opened
      twice with LOCK TABLES (as this changes table->s->state).
      
      Code changes:
      - Added use_count to MARIA_USED_TABLES to be able to check whether a
        table is opened twice for a statement/LOCK TABLES (see the sketch
        below)
      - Don't clear history or reset info->start_state if we
        don't have versioning. One reason for the bug was
        that info->start_state was set to point to different
        states for the two tables. If there is no versioning,
        info->start_state should always point to info->s->state.common.
      
      Other things:
      - Also fixed some typos that were noticed while scanning the code
      - More DBUG_PRINT
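
      A sketch of the use_count check mentioned above (hypothetical names,
      not the actual MARIA_USED_TABLES code):

        /* Stand-in for the per-statement table tracking structure. */
        struct used_table_sketch
        {
          unsigned use_count;   /* times this table's share is opened by the
                                   current statement / LOCK TABLES */
        };

        static bool can_use_fast_bulk_insert(const struct used_table_sketch *t,
                                             bool table_is_empty)
        {
          /* Opened more than once under LOCK TABLES: fast bulk insert would
             change table->s->state under the other handler, so fall back to
             normal inserts. */
          return table_is_empty && t->use_count == 1;
        }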
      1f18bd63
    • Marko Mäkelä's avatar
      MDEV-14916 InnoDB reports warning for "Purge reached the head of the history list" · bdcd7f79
      Marko Mäkelä authored
      The warning was originally added in
      commit c6766305
      (MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that
      was analyzed in https://lists.mysql.com/mysql/176250
      on November 9, 2004.
      
      Originally, the limit was 20,000 undo log headers or transactions,
      but in commit 9d6d1902
      in MySQL 5.5.11 it was increased to 2,000,000.
      
      The message can be triggered when the progress of purge is prevented
      by a long-running transaction (or just an idle transaction whose
      read view was started a long time ago), by running many transactions
      that UPDATE or DELETE some records, then starting another transaction
      with a read view, and finally by executing more than 2,000,000
      transactions that UPDATE or DELETE records in InnoDB tables. Then,
      when the oldest long-running transaction is completed, purge would
      run up to the next-oldest transaction, and there would still be more
      than 2,000,000 transactions to purge.
      
      Because the message can be triggered when the database is obviously
      not corrupted, it should be removed. Heavy users of InnoDB should be
      monitoring the "History list length" in SHOW ENGINE INNODB STATUS;
      there is no need to spam the error log.
      bdcd7f79
  11. 10 Jan, 2018 3 commits