1. 22 Jan, 2018 9 commits
  2. 19 Jan, 2018 4 commits
    • bump the VERSION · 17f64b36
      Daniel Bartholomew authored
    • MDEV-14229: Stack trace is not resolved for shared objects · 26e5f9dd
      Vicențiu Ciorbaru authored
      Resolving a stack trace that includes functions in dynamic libraries
      requires us to look inside those libraries for the symbols. addr2line
      needs to be started with the correct binary for each address on the
      stack. To do this, figure out which library the address belongs to
      using dladdr, and if the running addr2line instance was started with a
      different binary, fork it again with the correct one.
      
      Only one addr2line process is running at any point during the
      stack trace resolving step. The number of addr2line forks should
      generally be around six.
      
      One for the server stacktrace code, one for plugin code, one when going
      back into server code, one for the pthread library, one for libc, and
      one for the _start function in the server. More can occur if a plugin
      calls a server function that goes back into a plugin, and so on.
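      As an illustration of the approach (not the commit's code; the helper
      name below is hypothetical), a minimal C sketch that locates the object
      containing an address with dladdr() and asks addr2line for the symbol,
      spawning one addr2line per frame instead of reusing a single child:

        /* build with: cc resolve_sketch.c -ldl  (assumes a PIE executable,
           so object-relative offsets are what addr2line expects) */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>

        static void resolve_frame(void *addr)
        {
          Dl_info info;
          if (!dladdr(addr, &info) || !info.dli_fname)
          {
            printf("%p (unresolved)\n", addr);
            return;
          }
          /* offset of the address inside the object it belongs to */
          size_t offset= (char*) addr - (char*) info.dli_fbase;

          /* the commit keeps one addr2line child and re-forks it only when
             the binary changes; popen() per frame keeps this sketch short */
          char cmd[1024];
          snprintf(cmd, sizeof(cmd), "addr2line -f -e '%s' 0x%zx",
                   info.dli_fname, offset);
          FILE *p= popen(cmd, "r");
          if (p)
          {
            char line[512];
            while (fgets(line, sizeof(line), p))
              fputs(line, stdout);
            pclose(p);
          }
        }

        int main(void)
        {
          resolve_frame((void*) &resolve_frame);  /* symbol in this binary */
          resolve_frame((void*) &printf);         /* symbol in libc        */
          return 0;
        }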
    • MDEV-14241: Server crash in key_copy / get_matching_chain_by_join_key or valgrind warnings · a7a4519a
      Varun Gupta authored
      In this case the derived_with_keys optimization was used, but the key
      could not be created because its length exceeded the maximum allowed
      (MI_MAX_KEY_LENGTH). To do the join, a hash join key had to be created
      instead, yet the EXPLAIN output still referred to the derived keys,
      which were created but never used.
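      A minimal sketch of the decision described above (the function name is
      hypothetical; the value 1000 mirrors MyISAM's usual key-length limit
      and is used here only for illustration):

        #include <stdbool.h>
        #include <stdio.h>

        #define MI_MAX_KEY_LENGTH 1000   /* illustrative per-key byte limit */

        /* true  -> fall back to a hash join key (and EXPLAIN must say so)
           false -> a key on the derived table can be created and used     */
        static bool needs_hash_join_key(unsigned key_length)
        {
          return key_length > MI_MAX_KEY_LENGTH;
        }

        int main(void)
        {
          printf("200-byte key:  %s\n",
                 needs_hash_join_key(200)  ? "hash join key" : "derived key");
          printf("2000-byte key: %s\n",
                 needs_hash_join_key(2000) ? "hash join key" : "derived key");
          return 0;
        }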
  3. 18 Jan, 2018 8 commits
  4. 17 Jan, 2018 1 commit
  5. 16 Jan, 2018 4 commits
    • bug: ha_heap was unilaterally increasing reclength · b80fa400
      Sergei Golubchik authored
      MEMORY engine needs the record length to be at least sizeof(void*),
      because it stores a pointer there (linking deleted records into a list).
      So when the reclength is less than sizeof(void*), it's set to sizeof(void*).
      That is done inside heap_create(), and the upper layer doesn't know
      that the engine writes beyond share->reclength.
      
      While it's usually safe (in-memory record size is rounded up to
      sizeof(double), so even if share->reclength is too small,
      share->rec_buff_len is not), it could cause problems in the code that
      copies records and expects them to fit in share->reclength,
      e.g. in partitioning.
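      A small standalone sketch of the mismatch (hypothetical names, not the
      server's structures): the engine silently enlarges the record length so
      a deleted-record pointer fits, while the caller keeps the original size:

        #include <stdio.h>

        struct share { size_t reclength; };

        /* what heap_create() effectively does: a deleted record must be able
           to hold the pointer that links it into the free list */
        static void engine_create(struct share *s)
        {
          if (s->reclength < sizeof(void*))
            s->reclength= sizeof(void*);
        }

        int main(void)
        {
          struct share upper=  { 3 };  /* upper layer: records are 3 bytes   */
          struct share engine= upper;
          engine_create(&engine);      /* engine: at least sizeof(void*)     */

          printf("upper layer reclength: %zu\n", upper.reclength);
          printf("engine reclength:      %zu\n", engine.reclength);
          /* copying engine.reclength bytes into a buffer sized from
             upper.reclength (as partitioning code may do) overflows it */
          return 0;
        }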
    • BIT field woes · 444587d8
      Sergei Golubchik authored
      * get_rec_bits() was always reading two bytes, even if the
        bit field occupied only one byte
      * In various places the code used field->pack_length() bytes
        starting from field->ptr, while it should have used
        field->pack_length_in_rec()
      * Field_bit::key_cmp and Field_bit::cmp_max passed field_length as
        an argument to memcmp(), but field_length is the number of bits!
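      The third point in miniature, as a hedged sketch (hypothetical helper,
      not Field_bit's code): field_length of a BIT(n) column counts bits, so
      it has to be converted to bytes before being handed to memcmp():

        #include <stdio.h>
        #include <string.h>

        static int bit_field_cmp(const unsigned char *a,
                                 const unsigned char *b,
                                 unsigned field_length_bits)
        {
          size_t bytes= (field_length_bits + 7) / 8;  /* bits -> bytes       */
          return memcmp(a, b, bytes);                 /* not the bit count!  */
        }

        int main(void)
        {
          /* two BIT(12) values, each stored in 2 bytes, followed by padding */
          unsigned char a[4]= { 0x0f, 0xff, 0xaa, 0x00 };
          unsigned char b[4]= { 0x0f, 0xff, 0xbb, 0x00 };

          printf("comparing 2 bytes (correct): %d\n", bit_field_cmp(a, b, 12));
          /* passing field_length (12) straight to memcmp() would compare
             12 bytes and read far past the 2-byte values */
          return 0;
        }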
    • add support for ASAN instrumentation · 5e7593ad
      Sergei Golubchik authored
    • fix compilation with ASAN · 6634f460
      Sergei Golubchik authored
      if the property is not found, set it to the empty string,
      otherwise it'll show as libmysql_link_flags-NOTFOUND on the linker
      command line, and the linker won't like it.
      
      Also, don't specify LINK_FLAG_NO_UNDEFINED twice; MERGE_LIBRARIES
      already puts it into LINK_FLAGS.
  6. 15 Jan, 2018 7 commits
  7. 14 Jan, 2018 1 commit
    • MDEV-14526: MariaDB keeps crashing under load when query_cache_type is changed · 5fe1d7d0
      Oleksandr Byelkin authored
      The problem occurred in the following scenario:
      T1 - starts registering a query and locks the QC
      T2 - starts disabling the QC and waits for the unlock
      T1 - unlocks the QC
      T2 - disables the QC and destroys its signals without waiting for the
           query to be unlocked
      T1 a) - has not yet unlocked the query in the QC and crashes on the
              attempt to unlock it, because the QC signals have already been
              destroyed
         b) - if the unlock happened before the destruction, end_of_result is
              executed a first time on exit, right after try_lock(), which
              sees the QC disabled and returns TRUE. But it does not reset
              query_cache_tls->first_query_block, which leads to a second
              call of end_of_result when the diagnostics arena is already in
              an inappropriate state (not is_eof()).

      The fix is to:
        1) wait for all queries to be unlocked, by locking and unlocking,
           before destroying the QC signals
        2) clear query_cache_tls->first_query_block if the QC is disabled
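      Fix (1) is the classic lock-then-unlock barrier before destruction. A
      generic pthread sketch of that pattern (not the server's code; build
      with cc -pthread):

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        static pthread_mutex_t qc_lock= PTHREAD_MUTEX_INITIALIZER;

        static void *query_thread(void *arg)      /* plays the role of T1 */
        {
          (void) arg;
          pthread_mutex_lock(&qc_lock);           /* registers a query    */
          usleep(10000);                          /* ... does some work   */
          pthread_mutex_unlock(&qc_lock);
          return NULL;
        }

        int main(void)                            /* plays the role of T2 */
        {
          pthread_t t1;
          pthread_create(&t1, NULL, query_thread, NULL);
          usleep(1000);                           /* let T1 take the lock */

          /* the barrier from the fix: this lock cannot succeed until T1
             has unlocked, so the destroy below never races with a thread
             that is still inside the lock */
          pthread_mutex_lock(&qc_lock);
          pthread_mutex_unlock(&qc_lock);
          pthread_mutex_destroy(&qc_lock);

          pthread_join(t1, NULL);
          printf("destroyed only after all queries were unlocked\n");
          return 0;
        }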
  8. 13 Jan, 2018 1 commit
  9. 12 Jan, 2018 3 commits
  10. 11 Jan, 2018 2 commits