- 20 Sep, 2012 1 commit
-
-
Marko Mäkelä authored
-
- 19 Sep, 2012 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Delete-mark change buffer records when resorting to a pessimistic delete from the change buffer B-tree. Skip delete-marked records in the change buffer merge and when estimating whether an operation can be buffered. Without this fix, we could try to apply the same buffered changes multiple times if the server was killed at the right moment.

In MySQL 5.5 and later:
ibuf_get_volume_buffered_count_func(): Ignore delete-marked (already processed) records.
ibuf_delete_rec(): Add a crash point before the optimistic delete. If the optimistic delete fails, flag the record processed before mtr_commit().
ibuf_merge_or_delete_for_page(): Ignore delete-marked (already processed) records.

Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to btr_cur_set_deleted_flag_for_ibuf() and add a parameter.

rb:1307 approved by Jimmy Yang
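
A rough sketch of the skip-if-delete-marked idea described above, using invented types and helpers rather than the real InnoDB ibuf code: a buffered change is marked "processed" before any removal that is not atomic, and the merge pass ignores records carrying the mark, so replaying after a crash cannot apply the same change twice.

    #include <list>

    struct BufferedChange {
        int  page_no;
        bool delete_marked;   // "already processed": set before a non-atomic removal
    };

    struct ChangeBufferModel {
        std::list<BufferedChange> records;

        // Replay all pending changes for one page, skipping records that were
        // already delete-marked (applied) before an earlier crash.
        void merge_for_page(int page_no) {
            for (auto it = records.begin(); it != records.end(); ) {
                if (it->page_no != page_no || it->delete_marked) {
                    ++it;                    // skip: different page, or already applied
                    continue;
                }
                apply(*it);                  // hypothetical: apply the buffered operation
                it->delete_marked = true;    // mark first, so a crash here is harmless...
                it = records.erase(it);      // ...then remove the record itself
            }
        }

        static void apply(const BufferedChange&) { /* replay the change on the page */ }
    };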
-
- 18 Sep, 2012 1 commit
-
-
Tor Didriksen authored
Bug#14530242 CRASH / MEMORY CORRUPTION IN FILESORT_BUFFER::GET_RECORD_BUFFER WITH MYISAM

This is a backport of:
Bug#12694872 - VALGRIND: 18,816 BYTES IN 196 BLOCKS ARE DEFINITELY LOST
Bug#13340270: assertion table->sort.record_pointers == __null
Bug#14536113 CRASH IN CLOSEFRM (TABLE.CC) OR UNPACK (FIELD.H) ON SUBQUERY WITH MYISAM TABLES

Also: removed and re-added test files with file-ids from trunk.
-
- 17 Sep, 2012 9 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Harin Vadodaria authored
INC_HOST_ERRORS() IS CALLED.

Description: Merge from MySQL-5.1 to MySQL-5.5.
-
Harin Vadodaria authored
INC_HOST_ERRORS() IS CALLED.

Issue: The sequence in which inc_host_errors() and reset_host_errors() were called needed to change in order to maintain a correct connection error count.

Solution: The call to reset_host_errors() is moved to a point after which no further calls to inc_host_errors() are made.
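
A minimal sketch of the call ordering this fix establishes; the two functions below are simplified stand-ins with assumed signatures, not the server's actual implementations:

    #include <map>
    #include <string>

    static std::map<std::string, int> host_errors;   // per-host failed-connection count

    void inc_host_errors(const std::string& host)   { ++host_errors[host]; }
    void reset_host_errors(const std::string& host) { host_errors[host] = 0; }

    // Returns true when the connection attempt is accepted.
    bool handle_connection(const std::string& host, bool credentials_ok) {
        if (!credentials_ok) {
            inc_host_errors(host);     // the failure must remain counted
            return false;
        }
        // Reset only here, after the last point at which inc_host_errors()
        // could still be called for this attempt.
        reset_host_errors(host);
        return true;
    }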
-
Marko Mäkelä authored
-
Marko Mäkelä authored
page_zip_validate(), page_zip_validate_low(): Add a parameter for the B-tree index.

page_zip_validate_low(): If the page contents do not match, check that the record link chains match. Furthermore, if a dict_index_t is passed, check that the records match. (This reduces coverage a bit: if index=NULL, we will ignore differences in record contents, that is, the page payload.)

rb:1264 approved by Inaam Rana
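
As an illustration only (made-up types, not page_zip_validate_low() itself), the check can be read as: the record link chains must always agree, while the record payload is compared only when an index is supplied:

    #include <string>
    #include <vector>

    struct PageImage {
        std::vector<int>         link_chain;   // order in which records are linked on the page
        std::vector<std::string> payload;      // the record contents themselves
    };
    struct IndexModel {};                      // stand-in for dict_index_t

    bool pages_equivalent(const PageImage& a, const PageImage& b, const IndexModel* index) {
        if (a.link_chain != b.link_chain)      // the record link chains must always agree
            return false;
        if (index == nullptr)                  // no index given: payload differences are ignored
            return true;                       // (the reduced coverage mentioned above)
        return a.payload == b.payload;         // with an index, the records must match too
    }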
-
Sujatha Sivakumar authored
-
Sujatha Sivakumar authored
Problem:
=======
A trx_data->empty() assert is raised at `binlog_close_connection'.

Analysis:
========
trx_data->empty() checks that there are no pending events and that the transaction cache is empty. It returns "true" if no pending events are present and the cache is empty; otherwise it returns false. `binlog_close_connection' expects this function to return true, and if it returns false the assert is raised.

The bug was reproducible in a disk-full scenario: an insert creates a new pending event, and flushing this pending event fails. Because of this failure the server goes down and invokes `binlog_close_connection' for a clean shutdown. Since the pending event still remains, the assert is raised. The assert is hit only for non-transactional tables.

Fix:
===
In the disk-full scenario, when the insertion fails the transaction is rolled back and `binlog_end_trans' is called to flush the pending events. The flush fails because the disk is full, and the function simply returned `1' without doing anything to delete the pending event, leaving the event around until the connection was closed. A `delete pending' statement has been added to perform the required cleanup.

sql/log.cc:
Added a "delete pending" statement to clean up the pending event.
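
A condensed sketch of the cleanup path (assumed names, not the actual sql/log.cc code): when the flush fails, the pending event is still deleted and detached, so the emptiness check at connection close can succeed.

    struct PendingEvent { /* a buffered row event waiting to be written */ };

    struct TrxData {
        PendingEvent* pending = nullptr;
        bool empty() const { return pending == nullptr; }   // checked at connection close
    };

    // Returns 1 on error, 0 on success, loosely mirroring the description above.
    int flush_pending_event(TrxData* trx, bool write_succeeded) {
        if (!write_succeeded) {          // e.g. the disk is full
            delete trx->pending;         // the added cleanup: do not leave the event behind
            trx->pending = nullptr;      // so trx->empty() holds at binlog_close_connection
            return 1;
        }
        /* ... the event was written to the binary log ... */
        delete trx->pending;
        trx->pending = nullptr;
        return 0;
    }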
-
- 12 Sep, 2012 4 commits
-
-
unknown authored
No commit message
-
unknown authored
No commit message
-
Tor Didriksen authored
-
Tor Didriksen authored
-
- 11 Sep, 2012 2 commits
-
-
Georgi Kodinov authored
-
Jon Olav Hauglid authored
Added deprecation warning for SHOW AUTHORS and SHOW CONTRIBUTORS. This is the 5.5 version of the patch.
-
- 10 Sep, 2012 2 commits
-
-
Andrei Elkin authored
-
Andrei Elkin authored
An "orthographic" typo in User_var::set_deferred() was made in fixes for bug@14275000. While editing the signature of the initial patch to remove the only argument, the assigned value of the argument remained in the body ... to be successfully compiled (!) thanks to names coincidence: the arg to User_var method and its member. Fixed with correcting the typo.
-
- 07 Sep, 2012 1 commit
-
-
unknown authored
-
- 05 Sep, 2012 1 commit
-
-
Tor Didriksen authored
In fill_schema_table_by_open(): free item list before restoring active arena.

sql/sql_show.cc:
Replaced i_s_arena.free_items with DBUG_ASSERT(i_s_arena.free_list == NULL) (there's nothing to free in that list).
-
- 03 Sep, 2012 1 commit
-
-
unknown authored
No commit message
-
- 31 Aug, 2012 2 commits
-
-
Annamalai Gurusami authored
THOUGH IT IS NOT.

The following error message is misleading because it claims that the BLOB space is not counted:

"ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126. You have to change some columns to TEXT or BLOBs"

When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used, the BLOB prefix is stored inline along with the row. So the above error message is changed as follows, depending on the row format used:

For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC:
"ERROR 42000: Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline."

For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT:
"ERROR 42000: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline."

rb://1252 approved by Marko Makela
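
A small sketch of how the message selection can be keyed off the row format (invented helper, not the InnoDB source); COMPACT/REDUNDANT report the 768-byte inline BLOB prefix, DYNAMIC/COMPRESSED report 0 bytes:

    #include <cstdio>
    #include <string>

    enum RowFormat { REDUNDANT, COMPACT, DYNAMIC, COMPRESSED };

    std::string row_too_big_message(RowFormat fmt, unsigned long max_row_size) {
        // COMPACT/REDUNDANT keep a 768-byte BLOB prefix inside the row;
        // DYNAMIC/COMPRESSED store BLOB columns fully off-page (0 bytes inline).
        const unsigned prefix = (fmt == COMPACT || fmt == REDUNDANT) ? 768 : 0;
        char buf[320];
        std::snprintf(buf, sizeof(buf),
            "Row size too large (> %lu). Changing some columns to TEXT or BLOB%s may help. "
            "In current row format, BLOB prefix of %u bytes is stored inline.",
            max_row_size,
            prefix ? " or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED" : "",
            prefix);
        return std::string(buf);
    }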
-
unknown authored
No commit message
-
- 30 Aug, 2012 2 commits
-
-
Marko Mäkelä authored
PAGE SPLIT

page_rec_get_nth_const(): Map nth==0 to the page infimum.

btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor should never be positioned on the page infimum.

btr_index_page_validate(): Add test instrumentation for checking the return values of page_rec_get_nth_const() during CHECK TABLE, and for checking that the page directory slot 0 always contains only one record, the predefined page infimum record.

page_cur_delete_rec(), page_delete_rec_list_end(): Add debug assertions guarding against accessing the page slot 0.

page_copy_rec_list_start(): Clarify a comment about ret_pos==0.

rb:1248 approved by Jimmy Yang
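
A toy model of the slot-0 convention described above (hypothetical structures, not the InnoDB page code): position 0 always resolves to the page infimum, and cursors used during a merge must never sit there.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    struct RecModel { bool is_infimum; };

    struct PageDir {
        RecModel infimum{true};
        std::vector<RecModel> user_recs;        // records that follow the infimum

        // nth == 0 maps to the page infimum; nth >= 1 maps to user records.
        const RecModel* get_nth(std::size_t nth) const {
            if (nth == 0)
                return &infimum;
            return nth <= user_recs.size() ? &user_recs[nth - 1] : nullptr;
        }
    };

    // A merge cursor (the adjust=TRUE case above) must never be on the infimum.
    void assert_cursor_not_on_infimum(std::size_t nth) { assert(nth > 0); }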
-
Marko Mäkelä authored
ha_innobase::records_in_range(): Remove a debug assertion that prohibits an open range (full table). The patch by Jorgen Loland only removed the assertion from the built-in InnoDB, not from the InnoDB Plugin.
-
- 28 Aug, 2012 1 commit
-
-
Jorgen Loland authored
ha_innobase::records_in_range(): Remove a debug assertion that prohibits an open range (full table). This assertion catches unnecessary calls to this method, but such calls do not harm correctness.
-
- 27 Aug, 2012 1 commit
-
-
Georgi Kodinov authored
TO DEFAULT IGNOR

The test in mysqld_safe for the presence of --plugin-dir and the assignment of a default value to it were performed before the actual argument parsing. This is wrong, because the PLUGIN_DIR code in mysqld_safe also uses MY_BASEDIR_VERSION to look for a version-specific plugin directory, if present. Fixed by moving the PLUGIN_DIR logic after the parse_arguments() call.
-
- 24 Aug, 2012 1 commit
-
-
Georgi Kodinov authored
The script is different from what's used on Unixes. It was not replaying the table insertion script (mysql_system_tables_data.sql), although it was checking for the presence of this script. Fixed by re-enabling the lookup for this file and replaying it at bootstrap time. Note that on Unixes "SELECT @@hostname" returns a fully qualified name, whereas on Windows it returns only a hostname. So by default we are filtering the records in the mysql.user table until we ensure this is fixed.
-
- 10 Sep, 2012 1 commit
-
-
Andrei Elkin authored
-
- 07 Sep, 2012 6 commits
-
-
Marc Alff authored
-
Marc Alff authored
-
Akhil Mohan authored
-
Marc Alff authored
Improved the robustness of the func_mutex test.
-
Marc Alff authored
Improved the robustness of the func_file_io tests.
-
Tor Didriksen authored
Ignore --with-client-ldflags: it is not supported by the CMake scripts anyway. Ignore --with-mysqld-ldflags: it is only used as --with-mysqld-ldflags=-static, and that does not work.
-
- 05 Sep, 2012 1 commit
-
-
Tor Didriksen authored
-
- 04 Sep, 2012 1 commit
-
-
Annamalai Gurusami authored
The ha_innobase table handler contained two search key buffers (srch_key_val1, srch_key_val2) of fixed size used to store the search key. The size of these buffers was fixed at REC_VERSION_56_MAX_INDEX_COL_LEN + 2, which is not sufficient to hold the search key. Hence the following assert in row_sel_convert_mysql_key_to_innobase() failed:

    /* Storing may use at most data_len bytes of buf */

    if (UNIV_LIKELY(!is_null)) {
        ut_a(buf + data_len <= original_buf + buf_len);
        row_mysql_store_col_in_innobase_format(
            dfield, buf,
            FALSE, /* MySQL key value format col */
            key_ptr + data_offset, data_len,
            dict_table_is_comp(index->table));
        buf += data_len;
    }

The buffer size is now calculated with the formula MAX_KEY_LENGTH + MAX_REF_PARTS*2. This properly takes into account the extra bytes needed to store the length of each column: an index can contain at most MAX_REF_PARTS columns, and for each column 2 bytes are needed to store the length.

rb://1238 approved by Marko and Vasil Dimov.
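
A back-of-the-envelope comparison of the two sizing rules; the constant values below are assumed for illustration only and may differ from the actual headers:

    #include <cstdio>

    int main() {
        const unsigned MAX_KEY_LENGTH = 3072;    // longest key the SQL layer allows (assumed)
        const unsigned MAX_REF_PARTS  = 16;      // maximum columns in one index (assumed)
        const unsigned REC_VERSION_56_MAX_INDEX_COL_LEN = 3072;  // (assumed)

        const unsigned old_size = REC_VERSION_56_MAX_INDEX_COL_LEN + 2;  // one column + one 2-byte length
        const unsigned new_size = MAX_KEY_LENGTH + MAX_REF_PARTS * 2;    // whole key + a length per column

        std::printf("old buffer: %u bytes, new buffer: %u bytes\n", old_size, new_size);
        return 0;
    }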
-