1. 16 Nov, 2012 9 commits
  2. 15 Nov, 2012 6 commits
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · f481e6fa
      Marko Mäkelä authored
      f481e6fa
    • Marko Mäkelä's avatar
      Bug#15872736 FAILING ASSERTION · 26226e34
      Marko Mäkelä authored
      Remove a bogus debug assertion.
      26226e34
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 49b51dd2
      Marko Mäkelä authored
      49b51dd2
    • Marko Mäkelä's avatar
      Bug#15874001 CREATE INDEX ON A UTF8 CHAR COLUMN FAILS WITH ROW_FORMAT=REDUNDANT · 2bb6cefa
      Marko Mäkelä authored
      CHAR(n) in ROW_FORMAT=REDUNDANT tables is always fixed-length
      (n*mbmaxlen bytes), but in the temporary file it is variable-length
      (n*mbminlen to n*mbmaxlen bytes) for variable-length character sets,
      such as UTF-8.
      
      The temporary file format used during index creation and online ALTER
      TABLE is based on ROW_FORMAT=COMPACT. Thus, it should use the
      variable-length encoding even if the base table is in
      ROW_FORMAT=REDUNDANT.
      
      dtype_get_fixed_size_low(): Replace an assertion-like check with a
      debug assertion.
      
      rec_init_offsets_comp_ordinary(), rec_convert_dtuple_to_rec_comp():
      Make these inline functions.  Replace 'ulint extra' with 'bool temp'.
      
      rec_get_converted_size_comp_prefix_low(): Renamed from
      rec_get_converted_size_comp_prefix(), and made inline. Add the
      parameter 'bool temp'. If temp=true, do not add REC_N_NEW_EXTRA_BYTES.
      
      rec_get_converted_size_comp_prefix(): Remove the comment about
      dict_table_is_comp(). This function is only to be called for
      records in formats other than ROW_FORMAT=REDUNDANT.
      
      rec_get_converted_size_temp(): New function for computing temporary
      file record size. Omit REC_N_NEW_EXTRA_BYTES from the sizes.
      
      rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): New functions,
      for operating on temporary file records.
      
      rb:1559 approved by Jimmy Yang
      2bb6cefa
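      A minimal C++ sketch of the size arithmetic described above, assuming a
      hypothetical CharColumn struct (not InnoDB's actual code): in
      ROW_FORMAT=REDUNDANT a CHAR(n) column always occupies n*mbmaxlen bytes,
      while the COMPACT-based temporary file stores between n*mbminlen and
      n*mbmaxlen bytes depending on the data.

          #include <algorithm>
          #include <cstdio>

          // Hypothetical column metadata for a CHAR(n) column in a given charset.
          struct CharColumn {
              unsigned n;         // declared length in characters
              unsigned mbminlen;  // minimum bytes per character (1 for UTF-8)
              unsigned mbmaxlen;  // maximum bytes per character (3 for 5.5-era utf8)
          };

          // ROW_FORMAT=REDUNDANT: always the fixed maximum size.
          unsigned redundant_stored_size(const CharColumn& col) {
              return col.n * col.mbmaxlen;
          }

          // COMPACT-based temporary file: variable length, clamped to the
          // [n*mbminlen, n*mbmaxlen] range, depending on the actual data bytes.
          unsigned temp_file_stored_size(const CharColumn& col, unsigned data_bytes) {
              unsigned lo = col.n * col.mbminlen;
              unsigned hi = col.n * col.mbmaxlen;
              return std::min(std::max(data_bytes, lo), hi);
          }

          int main() {
              CharColumn utf8_char_10{10, 1, 3};  // CHAR(10) CHARACTER SET utf8
              std::printf("REDUNDANT: %u bytes\n", redundant_stored_size(utf8_char_10));
              std::printf("temp file: %u bytes for ASCII-only data\n",
                          temp_file_stored_size(utf8_char_10, 10));
          }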
    • magnus.blaudd@oracle.com's avatar
      remove usage of --skip-ndb from collections · f6fe5596
      magnus.blaudd@oracle.com authored
       - No need to use --skip-ndb in collections files anymore: after the
        recent mtr.pl fixes the logic is longer but clearer, and ndb tests are
        never run in MySQL Server unless explicitly requested.
       - Remove sys_vars.ndb_log_update_as_write_basic.test and
        sys_vars.ndb_log_updated_only_basic.result, since MySQL Server does not
        have those options.
       - Only sys_vars.have_ndbcluster_basic is left, since MySQL Server has
        that variable hardcoded.
      f6fe5596
    • mysql-builder@oracle.com's avatar
      No commit message · dbb6af27
      mysql-builder@oracle.com authored
      No commit message
      dbb6af27
  3. 14 Nov, 2012 8 commits
  4. 13 Nov, 2012 4 commits
    • magnus.blaudd@oracle.com's avatar
      Merge · ab73c0cf
      magnus.blaudd@oracle.com authored
      ab73c0cf
    • Mattias Jonsson's avatar
      87e7b521
    • Mattias Jonsson's avatar
      Bug#14845133: · 9b50775d
      Mattias Jonsson authored
      The problem is related to the changes made in bug#13025132.
      get_partition_set can do dynamic pruning, which limits the partitions
      to scan even further. This was not accounted for when setting
      the start of the preallocated record buffer used in
      the priority queue, leading to the wrong buffer being used
      (including the wrong preset partition id connected to that buffer).
      
      The solution is to fast-forward the buffer pointer to point to the
      correct partition record buffer.
      9b50775d
    • Mattias Jonsson's avatar
      Bug#14845133: · d43e1987
      Mattias Jonsson authored
      The problem is related to the changes made in bug#13025132.
      get_partition_set can do dynamic pruning, which limits the partitions
      to scan even further. This was not accounted for when setting
      the start of the preallocated record buffer used in
      the priority queue, leading to the wrong buffer being used
      (including the wrong preset partition id connected to that buffer).
      
      The solution is to fast-forward the buffer pointer to point to the
      correct partition record buffer.
      d43e1987
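      A rough C++ illustration of the fix described in the two commits above
      (hypothetical names, not the server's actual partition handler code): when
      dynamic pruning removes partitions from the set to scan, the pointer into
      the preallocated per-partition record buffer must be fast-forwarded to the
      slot preset with the matching partition id, rather than handed out
      sequentially.

          #include <cstddef>
          #include <cstdint>
          #include <vector>

          // Hypothetical layout: one fixed-size record slot per partition,
          // preallocated back to back in a single buffer.
          struct PartitionReadState {
              std::vector<uint8_t> rec_buffer;  // num_partitions * rec_length bytes
              std::size_t rec_length;
          };

          // Fast-forward to the slot that belongs to partition part_id, instead of
          // using the next sequential slot for the pruned partition set.
          uint8_t* record_slot_for_partition(PartitionReadState& s, std::size_t part_id) {
              return s.rec_buffer.data() + part_id * s.rec_length;
          }

          int main() {
              PartitionReadState s{std::vector<uint8_t>(8 * 128), 128};
              // Dynamic pruning left only partitions 2 and 5 to scan; each must
              // still read from the slot preset with its own partition id.
              uint8_t* slot2 = record_slot_for_partition(s, 2);
              uint8_t* slot5 = record_slot_for_partition(s, 5);
              return (slot5 - slot2) == 3 * 128 ? 0 : 1;
          }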
  5. 12 Nov, 2012 3 commits
  6. 09 Nov, 2012 8 commits
    • mysql-builder@oracle.com's avatar
      No commit message · 9a6255c0
      mysql-builder@oracle.com authored
      No commit message
      9a6255c0
    • Venkata Sidagam's avatar
      Bug#13556000: CHECK AND REPAIR TABLE SHOULD BE MORE ROBUST[2] · ca8abf5a
      Venkata Sidagam authored
      Problem description: Corrupt key file for the table. The size of a
      key is greater than the maximum specified size. This results in an
      overflow of the key buffer while reading the key from the key
      file.
      
      Fix: If the size of the key is greater than the maximum size, return
      an error before writing it into the key buffer. The error reported is
      a corrupt key file, but there is no stack overflow.
      ca8abf5a
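      A simplified sketch of the kind of bounds check the fix describes (the
      names here are hypothetical, not MyISAM's actual functions): validate the
      key length read from the file against the declared maximum before copying
      it into the key buffer, and report corruption instead of overflowing.

          #include <cstddef>
          #include <cstdint>
          #include <cstring>

          enum ReadResult { KEY_OK, KEY_FILE_CORRUPT };

          // Copy a key read from the key file into key_buff only if its length
          // fits; otherwise report a corrupt file rather than overflow the buffer.
          ReadResult read_key(const uint8_t* file_data, std::size_t key_len,
                              uint8_t* key_buff, std::size_t max_key_len) {
              if (key_len > max_key_len) {
                  return KEY_FILE_CORRUPT;  // "corrupt key file" error, no overflow
              }
              std::memcpy(key_buff, file_data, key_len);
              return KEY_OK;
          }

          int main() {
              uint8_t file_key[300] = {0};
              uint8_t key_buff[255];
              // A key longer than the declared maximum must be rejected, not copied.
              return read_key(file_key, sizeof file_key, key_buff, sizeof key_buff)
                         == KEY_FILE_CORRUPT ? 0 : 1;
          }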
    • Annamalai Gurusami's avatar
      a9dc2bb8
    • Annamalai Gurusami's avatar
      Bug #14669848 CRASH DURING ALTER MAKES ORIGINAL TABLE INACCESSIBLE · 12fab2a6
      Annamalai Gurusami authored
      When a new primary key is added to an InnoDB table, the following
      steps are taken by the InnoDB plugin:
      
      .  let t1 be the original table.
      .  a temporary table t1@00231 will be created by cloning t1.
      .  all data will be copied from t1 to t1@00231.
      .  rename t1 to t1@00232.
      .  rename t1@00231 to t1.
      .  drop t1@00232.
      
      The rename and drop operations involve file operations, but file operations
      cannot be rolled back.  So in row_merge_rename_tables(), just after the data
      dictionary update and before doing any file operations, generate redo logs
      for the file operations and commit the transaction.  This ensures that after
      any crash following this commit, the table is still recoverable by moving
      the .ibd and .frm files.  Manual recovery is required.
      
      During recovery, the rename file operation redo logs are processed.
      Previously they were ignored.
      
      rb://1460 approved by Marko Makela.
      12fab2a6
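      An abstract C++ sketch of the ordering the commit above describes (the
      helper functions are hypothetical stand-ins, not InnoDB's
      row_merge_rename_tables()): redo records for the file renames are written
      and the transaction committed before any file operation, so a crash after
      the commit is recoverable by replaying those records during startup.

          #include <cstdio>
          #include <string>

          // Hypothetical primitives standing in for InnoDB's real ones.
          static void log_file_rename(const std::string& from, const std::string& to) {
              std::printf("redo: rename %s -> %s\n", from.c_str(), to.c_str());
          }
          static void commit_dictionary_transaction() {
              std::printf("commit (redo log flushed)\n");
          }
          static void rename_file(const std::string& from, const std::string& to) {
              std::printf("file op: rename %s -> %s\n", from.c_str(), to.c_str());
          }
          static void drop_file(const std::string& name) {
              std::printf("file op: drop %s\n", name.c_str());
          }

          int main() {
              // 1. Dictionary update is done; log the intended file operations first.
              log_file_rename("t1", "t1@00232");
              log_file_rename("t1@00231", "t1");
              // 2. Commit before touching files: a crash after this point is
              //    recoverable by replaying the rename records at startup.
              commit_dictionary_transaction();
              // 3. Only now perform the file operations, which cannot be rolled back.
              rename_file("t1", "t1@00232");
              rename_file("t1@00231", "t1");
              drop_file("t1@00232");
          }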
    • Annamalai Gurusami's avatar
      a2c3ba24
    • Anirudh Mangipudi's avatar
      BUG#11762933: MYSQLDUMP WILL SILENTLY SKIP THE `EVENT` · d97caadc
      Anirudh Mangipudi authored
                    TABLE DATA IF DUMPS MYSQL DATABA
      Problem: If mysqldump is run without --events (or with --skip-events),
      it will not dump the mysql.event table's data. This behaviour is inconsistent
      with that of the --routines option, which does not affect the dumping of
      the mysql.proc table. According to the Manual, --events (--skip-events) defines
      whether the Event Scheduler events for the dumped databases should be included
      in the mysqldump output, and this has nothing to do with the mysql.event table
      itself.
      Solution: A warning has been added when mysqldump is used without --events
      (or with --skip-events), and a separate patch with the behavioural change
      will be prepared for 5.6/trunk.
      d97caadc
    • Anirudh Mangipudi's avatar
      BUG#11762933: MYSQLDUMP WILL SILENTLY SKIP THE `EVENT` · 27134cbd
      Anirudh Mangipudi authored
                    TABLE DATA IF DUMPS MYSQL DATABA
      Problem: If mysqldump is run without --events (or with --skip-events),
      it will not dump the mysql.event table's data. This behaviour is inconsistent
      with that of the --routines option, which does not affect the dumping of
      the mysql.proc table. According to the Manual, --events (--skip-events) defines
      whether the Event Scheduler events for the dumped databases should be included
      in the mysqldump output, and this has nothing to do with the mysql.event table
      itself.
      Solution: A warning has been added when mysqldump is used without --events
      (or with --skip-events), and a separate patch with the behavioural change
      will be prepared for 5.6/trunk.
      27134cbd
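      A minimal sketch of the behaviour the two commits above add, not
      mysqldump's actual option-handling code (the struct and warning text are
      assumptions): when dumping the mysql database without --events, print a
      warning that mysql.event data is being skipped.

          #include <cstdio>
          #include <cstring>

          // Hypothetical flags standing in for mysqldump's parsed options.
          struct DumpOptions {
              bool dump_events;  // set by --events, cleared by --skip-events
          };

          void maybe_warn_event_table_skipped(const DumpOptions& opt, const char* db) {
              if (!opt.dump_events && std::strcmp(db, "mysql") == 0) {
                  std::fprintf(stderr,
                               "-- Warning: Skipping the data of table mysql.event. "
                               "Specify the --events option explicitly.\n");
              }
          }

          int main() {
              DumpOptions opt{false};  // neither --events nor --skip-events given
              maybe_warn_event_table_skipped(opt, "mysql");
          }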
    • Thayumanavar's avatar
      BUG#14458232 - CRASH IN THD_IS_TRANSACTION_ACTIVE DURING · 8afcdfe9
      Thayumanavar authored
                     THREAD POOLING STRESS TEST
      PROBLEM:
      Connection stress tests consisting of concurrent
      KILL CONNECTION statements interleaved with mysql ping queries
      cause the mysqld server, when it uses the thread pool scheduler,
      to crash.
      FIX:
      Killing a connection involves shutdown and close of the client
      socket, and this can cause EPOLLHUP (or EPOLLERR) events to be
      queued and handled after disarming and cleanup of
      the connection object (THD) have been done. We disarm the
      connection by modifying the epoll mask to zero, which
      ensures no new events arrive, release the ownership of the waiting
      thread that collects events, and then do the cleanup of the THD
      object. As per the Linux kernel epoll source code
      (http://lxr.linux.no/linux+*/fs/eventpoll.c#L1771), EPOLLHUP
      (or EPOLLERR) cannot be masked even if we set the epoll mask
      to zero. So we disarm the connection, and thus prevent
      execution of any query processing handler or queueing to the
      client context queue, by removing the client fd from the epoll
      set via EPOLL_CTL_DEL. There is also a race condition involving
      the following threads:
      1) Thread X executing KILL CONNECTION Y, which is in THD::awake
      and using mysys_var (holding LOCK_thd_data).
      2) Thread Y executing in tp_process_event and being killed.
      3) Thread Z receiving the KILL flag internally and possibly calling
      the tp_thd_cleanup function, which sets the thread session variable
      and changes mysys_var.
      The fix for the above race is to set the thread session variable
      under LOCK_thd_data.
      We also do not call THD::awake if the thread found in the
      thread list that is to be killed already has its KILL_CONNECTION flag
      set, thus avoiding any possible concurrent cleanup. This patch
      was approved by Mikael Ronstrom via email review.
      8afcdfe9
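      A condensed sketch, using the raw Linux epoll and pthread APIs, of the two
      parts of the fix described above; the connection_t struct and function
      names are hypothetical, not the thread pool plugin's actual code. Because
      EPOLLHUP/EPOLLERR cannot be masked out by a zero event mask, the client fd
      is removed from the epoll set entirely, and the kill flag is set under
      LOCK_thd_data to avoid racing with a concurrent THD::awake.

          #include <pthread.h>
          #include <sys/epoll.h>
          #include <unistd.h>

          // Hypothetical per-connection state standing in for the thread pool's own.
          struct connection_t {
              int fd;
              pthread_mutex_t LOCK_thd_data;  // protects the THD's session state
              bool killed;
          };

          // Disarm the connection: EPOLLHUP/EPOLLERR would still be delivered with a
          // zero event mask, so delete the fd from the epoll set instead.
          int disarm_connection(int epoll_fd, connection_t* c) {
              return epoll_ctl(epoll_fd, EPOLL_CTL_DEL, c->fd, nullptr);
          }

          // Set the kill flag under LOCK_thd_data so a concurrent THD::awake()
          // (which also takes LOCK_thd_data) cannot race with the cleanup.
          void mark_connection_killed(connection_t* c) {
              pthread_mutex_lock(&c->LOCK_thd_data);
              c->killed = true;
              pthread_mutex_unlock(&c->LOCK_thd_data);
          }

          int main() {
              int ep = epoll_create1(0);
              connection_t c;
              c.fd = 0;  // stdin as a stand-in for a client socket
              c.killed = false;
              pthread_mutex_init(&c.LOCK_thd_data, nullptr);
              epoll_event ev;
              ev.events = EPOLLIN;
              ev.data.fd = c.fd;
              epoll_ctl(ep, EPOLL_CTL_ADD, c.fd, &ev);
              disarm_connection(ep, &c);  // fd fully removed, not just masked to zero
              mark_connection_killed(&c);
              close(ep);
          }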
  7. 08 Nov, 2012 2 commits