1. 16 Apr, 2013 3 commits
  2. 14 Apr, 2013 2 commits
  3. 12 Apr, 2013 1 commit
    • Michael Widenius's avatar
      Increase default value of max_binlog_cache_size and max_binlog_stmt_cache_size to ulonglong_max. · aa4c7dea
      Michael Widenius authored
      This ensures that, by default, LOAD DATA INFILE will not generate the error:
      "Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage..."
      
      
      mysql-test/suite/sys_vars/r/max_binlog_cache_size_basic.result:
        Updated test case
      mysql-test/suite/sys_vars/r/max_binlog_stmt_cache_size_basic.result:
        Updated test case
      sql/sys_vars.cc:
        Increase default value of max_binlog_cache_size and max_binlog_stmt_cache_size to ulonglong_max.
      aa4c7dea
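      With the default raised to ulonglong_max, deployments that still want an
      explicit cap on the binlog caches can set the variables in the server
      configuration. A minimal sketch (the 4G values are illustrative examples,
      not recommendations or the previous defaults):

      ```ini
      # my.cnf (sketch): cap the binlog caches explicitly instead of
      # relying on the new effectively-unlimited defaults.
      [mysqld]
      max_binlog_cache_size      = 4G
      max_binlog_stmt_cache_size = 4G
      ```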
  4. 11 Apr, 2013 4 commits
  5. 07 Apr, 2013 1 commit
    • Vladislav Vaintroub's avatar
      MDEV-4356 : MariaDB does not start if bind-address gets resolved to more than single IP address. · 5ae72bb7
      Vladislav Vaintroub authored
        
      MySQL bug http://bugs.mysql.com/bug.php?id=61713 was fixed in 5.5
        
      The fix is to remove the check for multiple entries returned by getaddrinfo() and use the first entry that works, i.e. one for which a socket can be created.
      
      Unlike Oracle/MySQL's fix, this one is kept minimal:
      -  We do not prioritize IPv4 over IPv6, or the other way around, and just rely on the operating system to sort the getaddrinfo() entries in a sensible order. There is an RFC that defines what a sensible order for getaddrinfo() entries is (RFC 3484), and OS-specific tweaks are also possible, such as /etc/gai.conf on Linux.
      -  Also, we do not force the "0.0.0.0" address if bind-address is not given - this would be a change in the behavior of 5.5, at least on Windows, where passing NULL to getaddrinfo() gives back the IPv6 wildcard.
      5ae72bb7
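      The approach described above can be sketched as a loop over the
      getaddrinfo() result list that accepts the first entry for which a socket
      can be created. This is an illustrative sketch, not the actual server
      code; the function name first_usable_socket is invented:

      ```c
      #include <stdio.h>
      #include <string.h>
      #include <sys/types.h>
      #include <sys/socket.h>
      #include <netdb.h>
      #include <unistd.h>

      /* Return a socket for the first getaddrinfo() entry that works,
         or -1 on failure.  The OS-provided order of the entries is kept
         as-is (cf. RFC 3484); no IPv4/IPv6 re-prioritization is done. */
      static int first_usable_socket(const char *host, const char *port)
      {
          struct addrinfo hints, *res, *ai;
          int fd = -1;

          memset(&hints, 0, sizeof(hints));
          hints.ai_family = AF_UNSPEC;      /* accept both IPv4 and IPv6 */
          hints.ai_socktype = SOCK_STREAM;
          hints.ai_flags = AI_PASSIVE;      /* address suitable for bind() */

          if (getaddrinfo(host, port, &hints, &res) != 0)
              return -1;

          /* Instead of failing when multiple entries are returned, try
             them in order and take the first for which socket() succeeds. */
          for (ai = res; ai != NULL; ai = ai->ai_next)
          {
              fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
              if (fd >= 0)
                  break;                    /* this entry works */
          }
          freeaddrinfo(res);
          return fd;
      }

      int main(void)
      {
          int fd = first_usable_socket("127.0.0.1", "0");
          printf("socket fd: %d\n", fd);
          if (fd >= 0)
              close(fd);
          return fd >= 0 ? 0 : 1;
      }
      ```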
  6. 06 Apr, 2013 3 commits
  7. 08 Apr, 2013 1 commit
    • unknown's avatar
      If a range tree has a branch that is an expensive constant, · 385de874
      unknown authored
      currently get_mm_tree skipped the evaluation of this constant
      and incorrectly proceeded. The correct behavior is to return a
      NULL subtree, in line with the IF branch being fixed: when it
      evaluates the constant it returns a value and does not continue
      further.
      385de874
  8. 05 Apr, 2013 2 commits
  9. 04 Apr, 2013 5 commits
  10. 01 Apr, 2013 1 commit
  11. 29 Mar, 2013 1 commit
  12. 28 Mar, 2013 1 commit
  13. 03 Apr, 2013 1 commit
  14. 29 Mar, 2013 3 commits
    • unknown's avatar
      Fix for MDEV-4144 · 599a1384
      unknown authored
        
      Analysis:
      The reason for the inefficient plan was that Item_subselect::is_expensive()
      didn't detect the special case when a subquery was optimized, but had no
      join plan because it either has no tables, or its tables have been optimized
      away, or the optimizer detected that the result set is empty.
        
      Solution:
      Identify the special cases above in Item_subselect::is_expensive(),
      and consider such degenerate subqueries inexpensive.
      599a1384
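      As an illustration of the degenerate cases above (table names t1/t2 are
      hypothetical), these subqueries are optimized but end up with no join
      plan, and after this fix is_expensive() considers them inexpensive:

      ```sql
      -- The scalar subquery has no tables.
      SELECT * FROM t1 WHERE key_col = (SELECT 1);

      -- The subquery's table is optimized away (MIN on an indexed
      -- column, no GROUP BY), leaving no join plan.
      SELECT * FROM t1 WHERE key_col = (SELECT MIN(key_col) FROM t2);
      ```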
    • Vladislav Vaintroub's avatar
      fa01b76b
    • Igor Babaev's avatar
      Merge 5.3->5.5. · e91e8c8c
      Igor Babaev authored
      e91e8c8c
  15. 28 Mar, 2013 2 commits
    • Igor Babaev's avatar
      Merge · a2c3d7d3
      Igor Babaev authored
      a2c3d7d3
    • Igor Babaev's avatar
      Fixed bug mdev-4311 (bug #68749). · 323fdd7a
      Igor Babaev authored
      This bug was introduced by the patch for WL#3220.
      If the memory allocated for the tree to store unique elements
      to be counted is not big enough to include all of them then
      an external file is used to store the elements.
      The unique elements are guaranteed not to be nulls. So, when
      reading them from the file, we don't have to care about the null
      flags of the read values. However, we should remove the flag
      at the very beginning of the process. If we don't do it, and
      the last value written into the record buffer for the field
      whose distinct values need to be counted happens to be null,
      then all values read from the file are considered to be nulls
      and are not counted.
      The fix does not remove a possible null flag for the read values.
      Rather, it just counts the values in the same way it was done
      before WL#3220.
      323fdd7a
  16. 27 Mar, 2013 2 commits
  17. 26 Mar, 2013 6 commits
  18. 25 Mar, 2013 1 commit