- 26 Mar, 2014 2 commits
-
-
Sergey Petrunya authored
- Fix Histogram::point_selectivity() to work in the case where the passed value_pos=0 (or 1) and the first (or the last) bucket in the histogram has a zero value-range (i.e. one value).
-
Sergey Petrunya authored
[Attempt #2] - Use a new selectivity calculation formula in Histogram::point_selectivity. The formula was developed from scratch and so differs from the old one; it doesn't have any possible division-by-zero problems.
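A standalone sketch of the general idea, not the actual Histogram::point_selectivity() code: the function name, the [0,1] position encoding and the count-based formula below are assumptions used only to show how such a formula can avoid dividing by a bucket's (possibly zero) width.

    // Illustrative only: estimate the selectivity of an equality condition from
    // an equi-height histogram whose bucket bounds are positions in [0, 1].
    // Instead of dividing by a bucket's width (which breaks down when a bucket
    // covers a single value and has zero width), count the buckets that contain
    // the position; each bucket holds roughly 1/n of the rows.
    #include <cstddef>
    #include <vector>

    double point_selectivity_sketch(const std::vector<double>& bucket_end,
                                    double value_pos)
    {
      std::size_t n= bucket_end.size();
      if (n == 0)
        return 1.0;                       // no statistics: assume the worst
      std::size_t matching= 0;
      double prev= 0.0;
      for (std::size_t i= 0; i < n; i++)
      {
        // A zero-width bucket (prev == bucket_end[i]) still matches when the
        // value sits exactly on it; no division is performed anywhere.
        if (value_pos >= prev && value_pos <= bucket_end[i])
          matching++;
        prev= bucket_end[i];
      }
      return static_cast<double>(matching) / static_cast<double>(n);
    }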
-
- 21 Mar, 2014 2 commits
-
-
Sergey Petrunya authored
- Forgot to update one .result file.
-
Jan Lindström authored
Analysis: XtraDB merge regression. At the end of mutex_spin_wait(), before the goto mutex_loop, this decrement is missing:
  if (prio_mutex) { os_atomic_decrement_ulint(&prio_mutex->high_priority_waiters, 1); }
Hence we get an unbalanced waiter count. Thanks to Laurynas Biveinis for finding this.
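A self-contained illustration of this bug class (not the XtraDB code; the std::atomic variables below stand in for the real prio_mutex fields): every path that jumps back to the top of the spin loop must undo its waiter registration, or the counter only ever grows.

    #include <atomic>
    #include <thread>

    static std::atomic<unsigned long> high_priority_waiters{0};
    static std::atomic<bool>          locked{false};

    void spin_wait_sketch()
    {
    mutex_loop:
      for (int i= 0; i < 1000; i++)
        if (!locked.exchange(true))
          return;                            // acquired the mutex (unlock not shown)
      high_priority_waiters.fetch_add(1);    // register as a high-priority waiter
      std::this_thread::yield();             // stand-in for waiting on an event
      // The line the merge lost: undo the registration before jumping back,
      // so increments and decrements of the counter stay balanced.
      high_priority_waiters.fetch_sub(1);
      goto mutex_loop;
    }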
-
- 20 Mar, 2014 3 commits
-
-
Sergey Petrunya authored
- Save range key before making field->pos_in_interval() call (like we do for non-equality ranges)
-
Sergey Vojtovich authored
Let TABLE_SHARE::tdc.free_tables, TABLE_SHARE::tdc.all_tables, TABLE_SHARE::tdc.flushed and corresponding invariants be protected by per-share TABLE_SHARE::tdc.LOCK_table_share instead of global LOCK_open.
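A simplified sketch of the locking change, with std::mutex and an invented field layout standing in for the real TABLE_SHARE: the listed fields move from the single global LOCK_open to a lock owned by each share, so threads working on different tables no longer serialize on one mutex.

    #include <list>
    #include <mutex>

    struct Table;                           // stand-in for the real TABLE object

    struct Share_tdc_sketch
    {
      std::mutex        LOCK_table_share;   // per-share lock
      std::list<Table*> free_tables;        // protected by LOCK_table_share
      std::list<Table*> all_tables;         // protected by LOCK_table_share
      bool              flushed= false;     // protected by LOCK_table_share
    };

    void mark_flushed(Share_tdc_sketch *share)
    {
      std::lock_guard<std::mutex> guard(share->LOCK_table_share);
      share->flushed= true;                 // no global LOCK_open involved
    }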
-
Jan Lindström authored
-
- 19 Mar, 2014 12 commits
-
-
Michael Widenius authored
-
Michael Widenius authored
Now if CREATE OR REPLACE fails but we have already deleted a table, we will generate a DROP TABLE in the binary log. This fixes the issue. In addition, for a failing CREATE OR REPLACE TABLE ... SELECT we don't generate a log of all the inserted rows, only the DROP TABLE. I added code for not logging DROP TEMPORARY TABLE for tables whose CREATE TABLE was not logged. This code will be activated in 10.1 by removing the code protected by DONT_LOG_DROP_OF_TEMPORARY_TABLES.

mysql-test/suite/rpl/r/create_or_replace_mix.result: More test cases
mysql-test/suite/rpl/r/create_or_replace_row.result: More test cases
mysql-test/suite/rpl/r/create_or_replace_statement.result: More test cases
mysql-test/suite/rpl/t/create_or_replace.inc: More test cases
sql/log.cc: Added binlog_reset_cache() to clear the binary log.
sql/log.h: Added prototype
sql/sql_insert.cc: If CREATE OR REPLACE TABLE ... SELECT fails: don't log anything if nothing changed; if the table was deleted, log a DROP TABLE. Remember whether the creation of temporary tables was logged.
sql/sql_table.cc: Added log_drop_table(). Remember whether the creation of temporary tables was logged. If CREATE OR REPLACE TABLE ... SELECT fails and a table was deleted, log a DROP TABLE.
sql/sql_table.h: Added prototype
sql/sql_truncate.cc: Remember whether the creation of temporary tables was logged.
sql/table.h: Added table_creation_was_logged
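A minimal sketch of the binlog decision described above, with invented enum and function names (the real logic lives in sql_insert.cc and sql_table.cc):

    // Simplified decision logic for what reaches the binary log when
    // CREATE OR REPLACE TABLE [... SELECT] fails part-way through.
    enum class Binlog_action { LOG_NOTHING, LOG_DROP_TABLE };

    Binlog_action on_create_or_replace_failure(bool old_table_was_dropped)
    {
      // Nothing changed on disk: keep the binlog silent so the slave state
      // stays identical to the master's.
      if (!old_table_was_dropped)
        return Binlog_action::LOG_NOTHING;

      // The pre-existing table is gone: the only statement that brings a slave
      // to the same state is a DROP TABLE.  In particular, none of the rows
      // inserted by a failed CREATE OR REPLACE ... SELECT are logged.
      return Binlog_action::LOG_DROP_TABLE;
    }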
-
Igor Babaev authored
-
Jan Lindström authored
-
Sergey Petrunya authored
- Part#2: call HA_EXTRA_FLUSH for the correct handler object, and call it after every change (ha_write_row, ha_update_row, ha_delete_row).
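A stand-alone sketch of the pattern, with invented class and method names in place of the real handler interface (ha_write_row/ha_update_row/ha_delete_row and extra(HA_EXTRA_FLUSH)): the flush hint is sent to the same object that performed the change, right after each modifying call.

    struct Handler_sketch
    {
      int  write_row()  { return 0; }
      int  update_row() { return 0; }
      int  delete_row() { return 0; }
      void flush()      {}                  // stand-in for extra(HA_EXTRA_FLUSH)
    };

    int update_stat_table(Handler_sketch *file)
    {
      int err= file->update_row();          // the change ...
      if (!err)
        file->flush();                      // ... followed by the flush hint,
      return err;                           // issued on the same handler object
    }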
-
Jan Lindström authored
-
Michael Widenius authored
-
Michael Widenius authored
mysql-test/r/create_or_replace2.result: Added test case
mysql-test/t/create_or_replace.test: Fixed comment
mysql-test/t/create_or_replace2.test: Added test case
sql/sql_base.cc: Safety fix: Don't let threads with query_id=0 free temporary tables, as this may free temporary tables not in use. This is mostly the case for the slave I/O threads, as most other threads have thd->query_id != 0.
sql/sql_table.cc: Added comment. Ignore kill when opening a temporary table for CREATE ... LIKE. This fixed the original issue.
-
Sergey Petrunya authored
-
Sergey Petrunya authored
- Do as sp.cc does with the mysql.proc table: call HA_EXTRA_FLUSH after we've modified a statistical table.
-
unknown authored
-
unknown authored
-
- 18 Mar, 2014 1 commit
-
-
Igor Babaev authored
Corrected cost estimates when a join buffer is used and the optimizer is requested to use condition selectivities.
-
- 17 Mar, 2014 3 commits
-
-
Jan Lindström authored
Analysis: This was a merge error in fil0fil.cc; because of it, the fil_system mutex was taken twice.
Fix: Remove the unnecessary mutex_enter and fix the issue with slow posix_fallocate usage.
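A minimal illustration (not fil0fil.cc) of why the duplicated lock call from the bad merge is fatal: the fil_system mutex is not recursive, so taking it twice in the same thread self-deadlocks.

    #include <mutex>

    std::mutex fil_system_mutex_sketch;

    void merged_function_sketch()
    {
      fil_system_mutex_sketch.lock();       // taken by the first merged hunk
      // ... the second merged hunk kept its own lock call:
      // fil_system_mutex_sketch.lock();    // <- removed by the fix: would
      //                                    //    deadlock on a non-recursive mutex
      fil_system_mutex_sketch.unlock();
    }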
-
Sergey Petrunya authored
-
unknown authored
-
- 16 Mar, 2014 1 commit
-
-
Sergey Petrunya authored
- If an UPDATE 1) modifies the key it is using, and 2) has an ORDER BY ... LIMIT that matches the key it is using, then we should use "Using buffer", not "Using filesort".
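An illustration only, with an invented helper and an assumed example query, of the planner rule stated above:

    // The situation from the commit message, as the kind of query it applies to:
    //
    //   UPDATE t SET key_col = key_col + 10          -- modifies the used key
    //   WHERE key_col BETWEEN 1 AND 100
    //   ORDER BY key_col LIMIT 5;                    -- ORDER BY on the same key
    //
    bool update_needs_row_buffer(bool used_key_is_modified,
                                 bool order_by_limit_matches_used_key)
    {
      // Both conditions together force buffering of the qualifying rows, so
      // EXPLAIN should report "Using buffer" rather than "Using filesort".
      return used_key_is_modified && order_by_limit_matches_used_key;
    }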
-
- 14 Mar, 2014 3 commits
-
-
Sergey Petrunya authored
- Adopt MySQL's fix: don't run the index_merge optimizer if the table statistics report that the table has 0 rows.
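A sketch of the adopted guard, with an invented function name in place of the real range-optimizer entry point:

    #include <cstdint>

    // When statistics claim the table is empty, every cost the index_merge
    // code would compute is meaningless, so skip it entirely.
    bool should_try_index_merge(std::uint64_t records_from_statistics)
    {
      if (records_from_statistics == 0)
        return false;                 // 0 rows reported: don't run index_merge
      return true;
    }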
-
Sergey Petrunya authored
-
unknown authored
MDEV-5819: MySQL Bug #13500371 63704: CONVERSION OF '1.' TO A NUMBER GIVES ERROR 1265 (WARN_DATA_TRUNCATED)
Ported the fix from MySQL.
-
- 13 Mar, 2014 2 commits
-
-
Michael Widenius authored
Automatic merge, except for server_audit.cc, which had to be modified slightly. Changes to xtradb and innobase were ignored, as these made no sense for 10.0.
-
unknown authored
Fixed max_length of dynamic columns json/create/add functions.
-
- 12 Mar, 2014 5 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Michael Widenius authored
mysql-test/r/create_or_replace.result: Added test of releasing of metadata locks
mysql-test/t/create_or_replace.test: Added test of releasing of metadata locks
sql/handler.h: Added marker for whether the table was deleted as part of CREATE OR REPLACE
sql/sql_base.cc: Added Locked_tables_list::unlock_locked_table()
sql/sql_class.h: New prototypes
sql/sql_insert.cc: Unlock metadata locks for the deleted table in case of error. Also unlock the tables if this was the only locked table.
sql/sql_table.cc: Unlock metadata locks for the deleted table in case of error. Also unlock the tables if this was the only locked table.
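A simplified sketch of the error-path rule, with invented names (the real code uses MDL and Locked_tables_list::unlock_locked_table() as listed above):

    struct Create_or_replace_ctx_sketch
    {
      bool table_was_dropped;     // marker kept in the handler layer (see above)
      int  locked_table_count;    // tables currently held by LOCK TABLES
    };

    enum class Cleanup { KEEP_LOCKS, UNLOCK_DROPPED_TABLE, UNLOCK_EVERYTHING };

    Cleanup on_error_sketch(const Create_or_replace_ctx_sketch &ctx)
    {
      // Nothing was dropped: the held locks are still valid, keep them.
      if (!ctx.table_was_dropped)
        return Cleanup::KEEP_LOCKS;
      // The old table is gone: release its metadata lock; if it was the only
      // locked table, leave locked-tables mode entirely.
      return ctx.locked_table_count == 1 ? Cleanup::UNLOCK_EVERYTHING
                                         : Cleanup::UNLOCK_DROPPED_TABLE;
    }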
-
Michael Widenius authored
Remove memory warnings if mysql client aborts early. Changed copyright for clients.

client/mysql.cc: Free memory if get_options fails, so that we don't get warnings from safemalloc
include/welcome_copyright_notice.h: Added SkySQL to client copyrights
mysql-test/valgrind.supp: Added suppressions for memory leaks from dlopen() for OpenSUSE 12.3
storage/oqgraph/mysql-test/oqgraph/regression_mdev5744.result: Suppress warning
storage/oqgraph/mysql-test/oqgraph/regression_mdev5744.test: Suppress warning
-
unknown authored
meaning of the count, and to remove the alpha warning.
-
- 11 Mar, 2014 3 commits
-
-
unknown authored
MDEV-5804: If same GTID is received on multiple master connections in multi-source replication, the event is double-executed causing corruption or replication failure

Some fixes, mainly to make it work in non-parallel replication mode also (--slave-parallel-threads=0). Patch should be fairly complete now.
-
Michael Widenius authored
-
Michael Widenius authored
extra/replace.c: Removed compiler warning
sql/unireg.cc: Removed compiler warning
storage/maria/ma_blockrec.c: Removed compiler warning
storage/maria/ma_dynrec.c: Fixed compiler failure
storage/maria/ma_unique.c: Removed compiler warning
storage/myisam/mi_check.c: Removed compiler warning
storage/myisam/mi_checksum.c: Removed compiler warning
-
- 10 Mar, 2014 1 commit
-
-
Michael Widenius authored
Fixed MDEV-5724 "Server crashes on SQL select containing more group by and left join statements using innodb tables"

The problem was that a big record was allocated on the stack, which caused the stack to run out. Fixed by using my_safe_alloca() instead of my_alloca() when allocating records. Now only records <= 16384 bytes are allocated on the stack.

mysql-test/r/stack-crash.result: Added test case
mysql-test/t/stack-crash.test: Added test case
storage/maria/ma_blockrec.c: Use my_safe_alloca() instead of my_alloca()
storage/maria/ma_dynrec.c: Use my_safe_alloca() instead of my_alloca()
storage/maria/maria_def.h: Added MARIA_MAX_RECORD_ON_STACK
storage/maria/maria_pack.c: Use my_safe_alloca() instead of my_alloca()
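An illustration of the my_safe_alloca() idea, simplified and not the real my_sys implementation: records up to the 16384-byte limit quoted above stay on the stack, larger ones go to the heap so deep plans with wide rows cannot exhaust the stack.

    #include <cstddef>
    #include <cstdlib>

    static const std::size_t MAX_RECORD_ON_STACK= 16384;

    void use_record_sketch(std::size_t record_length)
    {
      // Fixed stack buffer covers the common (small) case; heap for anything bigger.
      char  stack_buf[MAX_RECORD_ON_STACK];
      char *record= (record_length <= sizeof(stack_buf))
                    ? stack_buf
                    : static_cast<char*>(std::malloc(record_length));
      if (record == nullptr)
        return;                             // allocation failure
      /* ... build and use the record in 'record' ... */
      if (record != stack_buf)
        std::free(record);                  // only heap allocations are freed
    }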
-
- 09 Mar, 2014 1 commit
-
-
unknown authored
MDEV-5804: If same GTID is received on multiple master connections in multi-source replication, the event is double-executed causing corruption or replication failure

Before, the arrival of the same GTID twice in multi-source replication would cause a double apply or, in GTID strict mode, an error. Keep that behaviour, but add an option --gtid-ignore-duplicates which allows duplicates to be handled correctly by ignoring all but the first.

This relies on the user ensuring a correct configuration, so that sequence numbers are strictly increasing within each replication domain; then duplicates can be detected simply by comparing the sequence numbers against what has already been applied. Only one master connection (but possibly multiple parallel worker threads within that connection) is allowed to apply events within one replication domain at a time; any other connection that receives a GTID in the same domain either discards it (if it is already applied) or waits for the other connection to not have any events to apply.

Intermediate patch, as a proof of concept for testing. The main limitation is that currently it is only implemented for parallel replication (@@slave_parallel_threads > 0).
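A standalone sketch of the duplicate-detection rule, not the replication code (types and names below are invented): within one domain, sequence numbers are strictly increasing, so an incoming GTID whose seq_no is not greater than the last applied seq_no for that domain is a duplicate and can be discarded.

    #include <cstdint>
    #include <map>

    struct Gtid_sketch { std::uint32_t domain_id; std::uint64_t seq_no; };

    class Duplicate_filter_sketch
    {
      std::map<std::uint32_t, std::uint64_t> last_applied;   // domain -> seq_no
    public:
      // Returns true if the event should be applied, false if it is a duplicate
      // arriving over another master connection.
      bool should_apply(const Gtid_sketch &gtid)
      {
        auto it= last_applied.find(gtid.domain_id);
        if (it != last_applied.end() && gtid.seq_no <= it->second)
          return false;                     // already applied: ignore
        last_applied[gtid.domain_id]= gtid.seq_no;
        return true;
      }
    };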
-
- 15 Mar, 2014 1 commit
-
-
Elena Stepanova authored
thread IDs
-