- 22 Dec, 2011 2 commits
-
-
Vasil Dimov authored
-
Vasil Dimov authored
CREATE TABLE bug13510739 (c INTEGER NOT NULL, PRIMARY KEY (c)) ENGINE=INNODB;
INSERT INTO bug13510739 VALUES (1), (2), (3), (4);
DELETE FROM bug13510739 WHERE c=2;
HANDLER bug13510739 OPEN;
HANDLER bug13510739 READ `primary` = (2);
HANDLER bug13510739 READ `primary` NEXT; <-- crash

The bug is that in this particular test case row_search_for_mysql() picked up
a delete-marked record and quit, leaving the cursor in a non-positioned state,
so the subsequent 'get next' call crashed because of the non-positioned cursor.

In row0sel.cc (line numbers from mysql-trunk):

4653    if (rec_get_deleted_flag(rec, comp)) {
...
4679            if (index == clust_index && unique_search) {
4680
4681                    err = DB_RECORD_NOT_FOUND;
4682
4683                    goto normal_return;
4684            }

it quit from here, not storing the cursor position.

In contrast, if record=2 is not found at all (e.g. with sleep(1) after the
DELETE, so that the purge wipes it away completely), then 'get = 2' finds
record=3 and quits from here:

4366    if (0 != cmp_dtuple_rec(search_tuple, rec, offsets)) {
...
4394            btr_pcur_store_position(pcur, &mtr);
4395
4396            err = DB_RECORD_NOT_FOUND;
4397 #if 0
4398            ut_print_name(stderr, trx, FALSE, index->name);
4399            fputs(" record not found 3\n", stderr);
4400 #endif
4401
4402            goto normal_return;

Another fix could be to extend the condition on line 4366 to hold only if
search_tuple matches rec AND rec is not delete-marked.

Note that in the above test case, if we wait about 1 second somewhere after
the DELETE and before 'get = 2', the test case does not crash and returns 4
instead. It is not certain that this is the correct behavior, but this bugfix
removes the crash and makes the code return what it also returns in the
non-crashing case (when rec=2 is not found during 'get = 2', e.g. with
sleep(1) there).

Approved by: Marko (http://bur03.no.oracle.com/rb/r/863/)
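A minimal sketch of what the described fix could look like, assuming the
delete-marked branch is made to store the cursor position the same way the
"record not found" branch at line 4366 already does (the actual patch in
row0sel.cc may differ in detail):

        if (rec_get_deleted_flag(rec, comp)) {
                ...
                if (index == clust_index && unique_search) {

                        /* Store the cursor position before quitting, so
                        that a later HANDLER ... READ NEXT sees a positioned
                        cursor instead of crashing on a non-positioned one. */
                        btr_pcur_store_position(pcur, &mtr);

                        err = DB_RECORD_NOT_FOUND;

                        goto normal_return;
                }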
-
- 16 Dec, 2011 13 commits
-
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
unknown authored
-
unknown authored
-
Sergey Vojtovich authored
-
- 15 Dec, 2011 7 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Chaithra Gopalareddy authored
Problem description:
When a view is created using the FORMAT function with the locale option, the
view definition saved by the server does not contain that locale information.
Example:

  create table test2 (bb decimal (10,2));
  insert into test2 values (10.32),(10009.2),(12345678.21);
  create view test3 as select format(bb,1,'sk_SK') as cc from test2;
  select * from test3;
  +--------------+
  | cc           |
  +--------------+
  | 10.3         |
  | 10,009.2     |
  | 12,345,678.2 |
  +--------------+
  3 rows in set (0.02 sec)

  show create view test3
                  View: test3
           Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
                        SQL SECURITY DEFINER VIEW `test3` AS
                        select format(`test2`.`bb`,1) AS `cc` from `test2`
  character_set_client: latin1
  collation_connection: latin1_swedish_ci
  1 row in set (0.02 sec)

Problem analysis:
The function Item_func_format::print(), which prints the query string used to
create the view, does not print the third argument (i.e. the locale
information). Hence the view is created without the locale information.

Problem solution:
If the argument count is more than 2, we now print the third argument onto
the query string as well.

Files changed:

sql/item_strfunc.cc
    Function call changes: Item_func_format::print()

mysql-test/t/select.test
    Added test case to test the bug

mysql-test/r/select.result
    Result of the test case appended here
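A rough sketch of the described change, following the usual shape of
Item_func_format::print() in sql/item_strfunc.cc (the exact code in the patch
may differ):

  void Item_func_format::print(String *str, enum_query_type query_type)
  {
    str->append(STRING_WITH_LEN("format("));
    args[0]->print(str, query_type);
    str->append(',');
    args[1]->print(str, query_type);
    /* The locale argument is optional; print it when it was given, so that
       SHOW CREATE VIEW round-trips the original expression. */
    if (arg_count > 2)
    {
      str->append(',');
      args[2]->print(str, query_type);
    }
    str->append(')');
  }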
-
Tor Didriksen authored
Remove it.
-
- 14 Dec, 2011 6 commits
-
-
Andrei Elkin authored
Bug#13437900 post-push changes to please the Solaris compiler.
-
Mattias Jonsson authored
Also added tests for partitions' key caches.
-
Mattias Jonsson authored
-
Andrei Elkin authored
There was a memory leak when running some tests on PB2. The reason for the
failure is an early return from change_master() that was supposed to
deallocate a dyn-array. The same bug58915 was actually fixed in trunk by
relocating the dyn-array destruction into THD::cleanup_after_query(), which
cannot be bypassed. The current patch backports
magne.mahre@oracle.com-20110203101306-q8auashb3d7icxho and adds two
optimizations: a static buffer for the dyn-array to base on, and calling the
array initialization only when it is actually necessary rather than on every
CHANGE MASTER as before.

mysql-test/suite/rpl/t/rpl_empty_master_host.test:
    The test is binlog-format insensitive, so it will be run with MIXED mode only.
mysql-test/suite/rpl/t/rpl_server_id_ignore.test:
    The test is binlog-format insensitive, so it will be run with MIXED mode only.
sql/sql_class.cc:
    Relocated the dyn-array destruction into THD::cleanup_after_query().
sql/sql_lex.cc:
    LEX.mi zero initialization is done in LEX().
sql/sql_lex.h:
    Optimization for repl_ignore_server_ids to base on a static buffer whose
    size is chosen to fit the most common use cases.
sql/sql_repl.cc:
    The dyn-array destruction is relocated to THD::cleanup_after_query().
sql/sql_yacc.yy:
    Refined the logic of Lex->mi.repl_ignore_server_ids initialization. The
    array is initialized once a corresponding option in the CHANGE MASTER
    token sequence is found.
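A simplified illustration of the idea; repl_ignore_server_ids is named above,
but the surrounding details here are assumptions and the real code in
sql/sql_class.cc differs:

  void THD::cleanup_after_query()
  {
    ...
    /* Free the CHANGE MASTER dyn-array in a place that every statement
       passes through, so an early return from change_master() can no
       longer leak it. */
    delete_dynamic(&lex->mi.repl_ignore_server_ids);
    ...
  }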
-
Mattias Jonsson authored
-
Georgi Kodinov authored
the new --slow-start-timeout option's help output
-
- 13 Dec, 2011 3 commits
-
-
Georgi Kodinov authored
Added a global read-only option, slow-start-timeout, to control the Windows
service control manager's service start timeout, which was previously
hard-coded to 15 seconds. The default of the new option is 15 seconds. The
timeout can also be set to 0, meaning no timeout applies.
-
Annamalai Gurusami authored
-
Annamalai Gurusami authored
The counter handler_read_key (SSV::ha_read_key_count) was incremented
incorrectly.

The MySQL server maintains a per-thread system_status_var (SSV) object. Among
other things, this object contains the counter SSV::ha_read_key_count. The
purpose of this counter is to measure the number of requests to read a row
based on a key (i.e. the number of index lookups).

This counter was wrongly incremented in ha_innobase::innobase_get_index().
The fix removes this increment statement (for both innodb and innodb_plugin).
The various callers of innobase_get_index() were checked to determine whether
any of them must increment this counter instead (i.e. whether they first call
innobase_get_index() and then perform an index lookup). It was found that no
caller of innobase_get_index() needs to worry about the
SSV::ha_read_key_count counter.
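For illustration only, the removed increment presumably looked along these
lines (the exact code in ha_innodb.cc may differ):

  dict_index_t*
  ha_innobase::innobase_get_index(
          uint    keynr)  /* in: key (index) number */
  {
          ...
          /* Removed by this fix: resolving the dict_index_t is not itself
          an index lookup, so it must not bump SSV::ha_read_key_count. */
          ha_statistic_increment(&SSV::ha_read_key_count);
          ...
  }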
-
- 12 Dec, 2011 6 commits
-
-
Mattias Jonsson authored
SMALL KEY CACHE

The server crashed with a division by zero because the key cache was not
initialized, so the block length was 0 and was then used as a divisor. The
fix is to not allow CACHE INDEX if the key cache is not initialized, and thus
never attempt LOAD INDEX INTO CACHE for an uninitialized key cache.

Also added some Windows files/directories to .bzrignore.
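A hypothetical sketch of the kind of guard described; the error code, the
cache_name variable and the exact placement are assumptions, not the actual
patch:

  /* Refuse CACHE INDEX / LOAD INDEX INTO CACHE for a key cache that was
     never initialized: its block length is 0 and would otherwise be used
     as a divisor further down. */
  if (!key_cache->key_cache_inited)
  {
    my_error(ER_UNKNOWN_KEY_CACHE, MYF(0), cache_name);   /* assumed error */
    DBUG_RETURN(TRUE);
  }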
-
Marko Mäkelä authored
-
Marko Mäkelä authored
When printing information about a ROW_FORMAT=REDUNDANT record, pass the correct flag to rec_get_next_offs(). rb:821 approved by Jimmy Yang
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
- 08 Dec, 2011 2 commits
-
-
unknown authored
This fix ensures that after a live downgrade from an InnoDB 5.6 with WL5756,
with a database created with innodb-page-size=8k or 4k, a version 5.5
installation will politely refuse to start. Instead of crashing or giving
some indication about a corrupted database, it will report the page size
difference. This patch takes only the part of the WL5756 patch that is needed
to protect against opening a tablespace that is stamped with a different page
size. It also contains the change in dict_index_find_on_id_low(), just in
case a database with another page size is created by recompiling a
pre-WL5756 InnoDB.
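A purely illustrative sketch of the protection; the helper
fsp_flags_get_page_size() is an assumption here, and the backported code may
express the check differently:

  /* If the tablespace flags stamped by a WL5756 server encode a page size
  other than this server's UNIV_PAGE_SIZE, refuse to open the file with a
  clear message instead of treating it as corruption. */
  ulint   page_size = fsp_flags_get_page_size(flags);    /* assumed helper */

  if (page_size != UNIV_PAGE_SIZE) {
          fprintf(stderr,
                  "InnoDB: Error: tablespace was created with page size %lu,\n"
                  "InnoDB: but this server only supports %lu.\n",
                  (ulong) page_size, (ulong) UNIV_PAGE_SIZE);
          return(FALSE);
  }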
-
Jimmy Yang authored
WITH FOREIGN KEY CONSTRAI rb://844 approved by marko
-
- 07 Dec, 2011 1 commit
-
-
Inaam Rana authored
WITH LARGE BUFFER POOL

(Note: this is a backport of revno:3472 from mysql-trunk)

rb://845
approved by: Marko

When dropping a table (with an .ibd file, i.e. with innodb_file_per_table
set) we scan the entire LRU list to invalidate pages from that table. This
can be painful with large buffer pools, as we hold the buf_pool->mutex for
the scan. Note that the gravity of the problem does not depend on the size of
the table: even with an empty table but a large, filled-up buffer pool we
will end up scanning a very long LRU list.

The fix is to scan the flush_list instead and just remove the blocks
belonging to the table from the flush_list, marking them as non-dirty. The
blocks are left in the LRU list for eventual eviction due to aging. The
flush_list is typically much smaller than the LRU list, but for cases where
it is very long we have the solution of releasing the buf_pool->mutex after
scanning 1K pages.

buf_page_[set|unset]_sticky(): Use the new IO state BUF_IO_PIN to ensure that
a block stays in the flush_list and the LRU list when we release
buf_pool->mutex. Previously we had been abusing BUF_IO_READ to achieve this.
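A rough sketch of the approach in InnoDB style; the list-walking helpers used
below do exist, but the surrounding function is simplified and the periodic
release of buf_pool->mutex (with a restart of the scan) mentioned above is
omitted for brevity:

        buf_page_t*     bpage;
        buf_page_t*     prev;

        /* Walk the (much shorter) flush_list instead of the whole LRU list
        and detach the dirty pages of tablespace "id" (the table being
        dropped) from it. */
        buf_pool_mutex_enter(buf_pool);

        for (bpage = UT_LIST_GET_LAST(buf_pool->flush_list);
             bpage != NULL;
             bpage = prev) {

                prev = UT_LIST_GET_PREV(list, bpage);

                if (buf_page_get_space(bpage) == id) {
                        /* Mark the block clean and remove it from the
                        flush_list; it stays in the LRU list and ages out
                        normally. */
                        buf_flush_remove(bpage);
                }
        }

        buf_pool_mutex_exit(buf_pool);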
-