- 03 Jun, 2013 1 commit
-
-
sayantan dutta authored
-
- 29 May, 2013 1 commit
-
-
Sreedhar S authored
-
- 24 May, 2013 4 commits
-
-
Maitrayi Sabaratnam authored
Bug#13116514 - CREATE LOGFILE GROUP INITIAL_SIZE & UNDO_BUFFER_SIZE FAILS Fix the parser to accept a size given with the suffix 'M' (megabytes), e.g. undo_buffer_size=10M, in the CREATE LOGFILE GROUP command.
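A minimal sketch of the size-suffix handling this enables, assuming a hypothetical helper function (the real change lives in the SQL grammar, not in a helper like this):

    #include <cctype>
    #include <cstdint>
    #include <stdexcept>
    #include <string>

    // Hypothetical helper: turn "10M"-style option values into a byte count,
    // the way undo_buffer_size=10M is now accepted by CREATE LOGFILE GROUP.
    uint64_t parse_size_with_suffix(const std::string &text)
    {
      size_t pos = 0;
      uint64_t value = std::stoull(text, &pos);   // numeric part, e.g. 10
      uint64_t multiplier = 1;
      if (pos < text.size())
      {
        switch (std::toupper(static_cast<unsigned char>(text[pos])))
        {
          case 'K': multiplier = 1024ULL; break;
          case 'M': multiplier = 1024ULL * 1024; break;          // megabytes
          case 'G': multiplier = 1024ULL * 1024 * 1024; break;
          default:  throw std::invalid_argument("unknown size suffix");
        }
      }
      return value * multiplier;
    }

    // parse_size_with_suffix("10M") == 10485760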
-
Marko Mäkelä authored
i_s_innodb_buffer_page_get_info(): Do not read the buffer block frame contents of read-fixed blocks, because it may be invalid or uninitialized. When we are going to decompress or read a block, we will put it into buf_pool->page_hash and buf_pool->LRU, read-fix the block and release the mutexes for the duration of the reading or decompression. rb#2500 approved by Jimmy Yang
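A schematic of the guard described above, with made-up field and type names rather than the actual buf0buf internals:

    // Made-up types; only the idea of skipping read-fixed blocks is real.
    enum io_fix_state { IO_NONE, IO_READ, IO_WRITE };

    struct buf_block { io_fix_state io_fix; const unsigned char *frame; };
    struct buf_page_info { /* columns reported to INFORMATION_SCHEMA */ };

    void fill_buffer_page_info(const buf_block *block, buf_page_info *info)
    {
      if (block->io_fix == IO_READ)
      {
        // The frame may be invalid or uninitialized while the block is
        // read-fixed for reading or decompression, so report only the
        // metadata that is already known and leave the frame alone.
        return;
      }
      // ... safe to inspect block->frame here ...
      (void) info;
    }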
-
Venkatesh Duggirala authored
BY BINLOG_KILLED_SIMULATE.TEST Merging fix from mysql-5.1
-
Venkatesh Duggirala authored
BY BINLOG_KILLED_SIMULATE.TEST The 'mysqlbinlog' tool creates temporary files while preparing a LOAD DATA query. These files need to be deleted at the end of the test script, otherwise they are left behind on the daily-run machines and cause "no space on device" issues. Fix: delete them at the end of these test scripts: 1) execute mysqlbinlog with the --local-load option so the files are created in a specified tmpdir, and 2) delete the tmpdir at the end of the test script.
-
- 23 May, 2013 3 commits
-
-
Chaithra Gopalareddy authored
-
Chaithra Gopalareddy authored
STRING CONVERSION FUNCTIONS
Problem: While executing a prepared statement, the user variable is set to memory that is freed at the end of execution. If the statement is executed again, valgrind reports an error when this pointer is accessed.
Analysis:
1. The first time Item_func_set_user_var::check is called, memory is allocated for "value" to store the result (in the call to copy_if_not_alloced).
2. While sending the result, Item_func_set_user_var::check is called again, but this time with "use_result_field" set to true. As a result, we call result_field->val_str(&value).
3. Here the memory allocated for "value" gets freed, and "value" gets set to "result_field", with "str_length" taken from result_field.
4. In the call to JOIN::cleanup, result_field's memory gets freed, because it is allocated in a chunk as part of the temporary table needed to execute the query.
5. The next time the same statement is executed, "value" points to memory that has already been freed. The valgrind error occurs because "str_length" is positive (set at step 3).
Note that the user variable list is stored as part of the Lex object in set_var_list, hence the persistence across executions.
Solution: The patch for Bug#11764371, fixed in mysql-5.6+, fixes this problem as well, so it is backported here. In that fix, we create another user_var object and repoint it to the temp table's field. As a result, when the allocated buffer is deleted in step 3, the cloned object does not own the buffer, so no deletion happens. At step 5, when the statement is executed a second time, the original object is used, and since no deletion happened, valgrind does not complain about a dangling pointer.
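The ownership rule behind the backported fix can be illustrated with a generic sketch; the struct and function below are hypothetical stand-ins, not the Item_func_set_user_var code:

    #include <cstddef>
    #include <cstdlib>

    // Generic sketch: only the object that owns its buffer frees it; the
    // clone that is repointed at the temp table's field does not, so the
    // later cleanup of the temporary table cannot leave a dangling "value".
    struct user_var_value
    {
      char  *ptr;
      size_t length;
      bool   owns_buffer;            // false for the repointed clone

      ~user_var_value()
      {
        if (owns_buffer)
          std::free(ptr);
      }
    };

    void repoint_clone(user_var_value *clone, char *temp_table_field, size_t len)
    {
      clone->ptr = temp_table_field;
      clone->length = len;
      clone->owns_buffer = false;    // deletion of the buffer is skipped
    }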
-
Chaithra Gopalareddy authored
-
- 22 May, 2013 1 commit
-
-
Chaithra Gopalareddy authored
Bug#12608543: CRASHES WITH DECIMALS AND STATEMENT NEEDS TO BE REPREPARED ERRORS Backporting these two fixes to 5.1. Added a unit test for the my_decimal constructor and assignment operators.
-
- 20 May, 2013 1 commit
-
-
mysql-builder@oracle.com authored
No commit message
-
- 19 May, 2013 1 commit
-
-
Ashish Agarwal authored
USING THE PLUGIN INTERFACE. ISSUE: No support for floating-point plugin system variables. SOLUTION: Allow plugins to define and expose floating-point system variables of type double; MYSQL_SYSVAR_DOUBLE and MYSQL_THDVAR_DOUBLE are added. ISSUE: The fractional part of the def, min, and max values of system variables is ignored. SOLUTION: Add functions that store the raw bits of a double in an unsigned longlong in such a way that the binary representation remains the same.
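The raw-bits round trip mentioned in the second SOLUTION can be sketched with memcpy; the helper names are illustrative, not the actual functions added by the patch:

    #include <cstring>

    typedef unsigned long long ulonglong;

    // Store a double's bit pattern in an unsigned 64-bit integer and back,
    // so def/min/max keep their fractional part exactly.
    static ulonglong double_to_bits(double value)
    {
      ulonglong bits;
      std::memcpy(&bits, &value, sizeof bits);
      return bits;
    }

    static double bits_to_double(ulonglong bits)
    {
      double value;
      std::memcpy(&value, &bits, sizeof value);
      return value;
    }

    // bits_to_double(double_to_bits(0.5)) == 0.5, binary representation intact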
-
- 18 May, 2013 1 commit
-
-
Annamalai Gurusami authored
ESCAPED WITH BACKSLASH Problem: When the CREATE TABLE statement used comments with escape sequences like 'foo\'s', InnoDB did not parse it correctly when trying to extract the foreign key information. Because of this, the foreign keys specified in the CREATE TABLE statement were not created. Solution: Make the InnoDB internal parser aware of escape sequences. rb#2457 approved by Kevin.
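A small sketch of escape-aware scanning, assuming a hypothetical helper (the real fix is in InnoDB's internal CREATE TABLE parser):

    #include <string>

    // Find the closing quote of a quoted token such as 'foo\'s', honouring
    // backslash escapes so the embedded \' does not end the string early.
    size_t find_closing_quote(const std::string &s, size_t open_quote_pos)
    {
      const char quote = s[open_quote_pos];
      for (size_t i = open_quote_pos + 1; i < s.size(); ++i)
      {
        if (s[i] == '\\')          // skip the escaped character
        {
          ++i;
          continue;
        }
        if (s[i] == quote)
          return i;                // genuine closing quote
      }
      return std::string::npos;    // unterminated string
    }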
-
- 17 May, 2013 2 commits
-
-
Venkatesh Duggirala authored
MYSQL DB FROM REMOTE 5.0.96 SERVER Problem: The mysqldump tool assumes that the general_log and slow_log tables exist in the server. If mysqldump is run against an old server that has no such log tables, it fails. Analysis: general_log and slow_log were added to the ignore-table list as part of the fix for Bug#26121, which caused Bug#45740 (MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY CAUSES RESTORE PROBLEM). As part of the Bug#45740 fix, mysqldump adds CREATE TABLE statements for these two tables, but that fix assumes general_log and slow_log exist on every server. If the new mysqldump is executed against an old server without them, it fails with an error that there is no general_log table. Fix: When mysqldump retrieves the general_log and slow_log table structures, it should first check whether these tables exist in the server instead of trying to dump them blindly.
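A rough sketch of the existence check, using standard libmysqlclient calls (error handling trimmed; the helper name is made up):

    #include <stdio.h>
    #include <mysql.h>

    // Probe for a table before trying to dump its definition.
    static bool table_exists(MYSQL *conn, const char *name)
    {
      char query[256];
      snprintf(query, sizeof(query), "SHOW TABLES LIKE '%s'", name);
      if (mysql_query(conn, query) != 0)
        return false;
      MYSQL_RES *res = mysql_store_result(conn);
      if (res == NULL)
        return false;
      bool found = mysql_num_rows(res) > 0;
      mysql_free_result(res);
      return found;
    }

    // Only emit CREATE TABLE statements for general_log and slow_log when
    // table_exists() reports that the server actually has them.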
-
mysql-builder@oracle.com authored
No commit message
-
- 16 May, 2013 15 commits
-
-
sayantan dutta authored
-
Mattias Jonsson authored
The problem was in get_partition_id_cols_range_for_endpoint and cmp_rec_and_tuple_prune, which stepped one partition too far. The solution was to move a small portion of the logic into cmp_rec_and_tuple_prune, simplifying both get_partition_id_cols_range_for_endpoint and get_partition_id_cols_list_for_endpoint.
-
Annamalai Gurusami authored
Add a DBUG_RETURN(NULL) to ibuf_insert_to_index_page_low().
-
sayantan dutta authored
-
sayantan dutta authored
-
sayantan dutta authored
-
sayantan dutta authored
-
Annamalai Gurusami authored
-
Annamalai Gurusami authored
INSERT BUFFER MERGE Problem: When the record is merged from the change buffer to the actual page, in a particular condition, it is assumed that the deleted rec will be re-used by the inserted rec. With this assumption the lock is restored on the pointer to the deleted rec itself, thinking that it is pointing to the newly inserted rec. Solution: Just before restoring the lock, update the rec pointer to point to the newly inserted record. An assert has been added to verify this. This assert will fail without the fix and will pass with the fix. rb#2449 in review by Marko and Jimmy
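The ordering enforced by the fix, reduced to an abstract sketch (all names are hypothetical, not the actual ibuf/lock code):

    struct rec_t;   // opaque record type

    // Repoint 'rec' at the freshly inserted record before the lock state is
    // restored, so the restored lock refers to the live record rather than
    // the deleted one whose slot may or may not have been reused.
    void merge_change_buffer_entry(rec_t *deleted_rec, rec_t *inserted_rec)
    {
      rec_t *rec = deleted_rec;

      /* ... insert happens here; it may reuse deleted_rec's slot ... */

      rec = inserted_rec;   // the fix: update the pointer first
      /* assert that rec now points at the inserted record,
         then restore the lock on 'rec' */
      (void) rec;
    }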
-
Annamalai Gurusami authored
-
Jon Olav Hauglid authored
-
Jon Olav Hauglid authored
In order to keep error message numbers stable between GA releases, we cannot add a new error message to 5.1/5.5 any more, as such a message would get a number already used in 5.6. This patch enforces this by adding a 5.1/5.5-specific check when processing the error message file. If a new error message is added, the build aborts and reports an error.
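A schematic of the added check when processing the error message file (names and structure are assumptions, not the actual change):

    #include <cstdio>
    #include <cstdlib>

    // Abort the build if the message file contains more messages than the
    // last count shipped in a GA release of this series, i.e. if someone
    // added a new error message whose number would collide with 5.6.
    void check_no_new_error_messages(int messages_in_file, int last_ga_count)
    {
      if (messages_in_file > last_ga_count)
      {
        std::fprintf(stderr,
                     "Error messages cannot be added to this series; "
                     "new numbers are already in use by 5.6.\n");
        std::exit(1);
      }
    }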
-
mysql-builder@oracle.com authored
No commit message
-
Annamalai Gurusami authored
INSERT BUFFER MERGE Problem: When the record is merged from the change buffer to the actual page, in a particular condition, it is assumed that the deleted rec will be re-used by the inserted rec. With this assumption the lock is restored on the pointer to the deleted rec itself, thinking that it is pointing to the newly inserted rec. Solution: Just before restoring the lock, update the rec pointer to point to the newly inserted record. An assert has been added to verify this. This assert will fail without the fix and will pass with the fix. rb#2449 in review by Marko and Jimmy
-
Annamalai Gurusami authored
INNODB_FAST_SHUTDOWN IS 2 Problem: When innodb_fast_shutdown is set to 2 and the master thread enters the flush loop, under some circumstances it will not be able to exit it. This may cause the shutdown to hang. This happens because of the following: 1. In the flush_loop block of code, if srv_fast_shutdown is equal to 2 (very fast shutdown), we do not flush dirty pages from the buffer pool to disk. 2. In the same flush_loop block of code, if the number of dirty pages is more than the user-specified limit, we go back to step 1. This results in an infinite loop. Solution: When we are in the process of doing a very fast shutdown, don't do step 2 above. rb#2328 approved by Inaam.
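The loop described above, reduced to a schematic (function and constant names are placeholders, not the actual srv_master_thread code):

    // Placeholder stand-ins for the real buffer-pool calls.
    static void flush_dirty_pages() {}
    static unsigned long dirty_page_count() { return 0; }

    enum { SHUTDOWN_VERY_FAST = 2 };

    void flush_loop(int fast_shutdown, unsigned long max_dirty_pages)
    {
      for (;;)
      {
        if (fast_shutdown != SHUTDOWN_VERY_FAST)
          flush_dirty_pages();                    // step 1: flush to disk

        if (fast_shutdown == SHUTDOWN_VERY_FAST)
          break;      // the fix: skip step 2 during a very fast shutdown

        if (dirty_page_count() <= max_dirty_pages)
          break;      // step 2: otherwise loop until few enough dirty pages
      }
    }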
-
- 15 May, 2013 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
When a record contains no user data bytes (such as when the PRIMARY KEY is an empty string and all secondary index fields are NULL or the empty string), page_zip_decompress() could fail to set the record heap_no correctly. page_zip_decompress_node_ptrs(), page_zip_decompress_sec(), page_zip_decompress_clust(): Set heap_no also at the end of the compressed data stream. rb#2448 approved by Jimmy Yang and Inaam Rana
-
Inaam Rana authored
AFTER A ROW IS READ Approved by: Sunny Bains rb://2425 Don't release concurrency tickets when asked to release btr_search_latch. This is a 5.5 only bug. It is already fixed in 5.6 upwards.
-
mysql-builder@oracle.com authored
No commit message
-
- 14 May, 2013 1 commit
-
-
Shubhangi Garg authored
In log_event.h DESCRIPTION: Because an implementation file, 'rpl_tblmap.cc', is included in the header file 'log_event.h', linker errors occur if log_event.h is included in an application containing multiple source files, such as the Binlog API. The Binlog API has to include log_event.h in its source files, which leads to multiple-definition errors for the functions of class 'table_mapping' defined in rpl_tblmap.cc. FIX: Move the inclusion from the header file (log_event.h) to the source files that use this header and have the MYSQL_CLIENT flag set. The only such file in the current server repository is mysqlbinlog.cc.
-
- 13 May, 2013 4 commits
-
-
bin.x.su@oracle.com authored
== Analysis == Both change buffer pages and on-disk index pages are marked as FIL_PAGE_INDEX, so all ibuf index pages were classified as INDEX with NULL table_name and index_name. == Solution == A new page type for ibuf data pages, named I_S_PAGE_TYPE_IBUF, is defined. All pages whose index_id equals (DICT_IBUF_ID_MIN + IBUF_SPACE_ID) are classified as IBUF_DATA instead of INDEX in INNODB_BUFFER_PAGE and INNODB_BUFFER_PAGE_LRU. This fix only affects I_S reporting; both on-disk and buffer pool structures remain unchanged. Approved by both Marko and Jimmy. rb#2334
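The classification rule, sketched with assumed constant values (the real DICT_IBUF_ID_MIN and IBUF_SPACE_ID live in the InnoDB headers):

    #include <cstdint>

    // Assumed values, for illustration only.
    static const uint64_t DICT_IBUF_ID_MIN = 0xFFFFFFFF00000000ULL;
    static const uint64_t IBUF_SPACE_ID    = 0;

    enum i_s_page_type { I_S_PAGE_TYPE_INDEX, I_S_PAGE_TYPE_IBUF };

    // Change buffer pages share FIL_PAGE_INDEX on disk, so the I_S code
    // tells them apart by their index id instead of the page type field.
    i_s_page_type classify_index_page(uint64_t index_id)
    {
      if (index_id == DICT_IBUF_ID_MIN + IBUF_SPACE_ID)
        return I_S_PAGE_TYPE_IBUF;    // reported as IBUF_DATA
      return I_S_PAGE_TYPE_INDEX;     // regular index page
    }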
-
Neeraj Bisht authored
WITH COMPOSITE KEY COLUMNS
Problem: While running a SELECT query with several AGGR(DISTINCT) functions that refer to different fields of the same composite key, an incorrect value is returned.
Analysis: In a table with a composite key like (a,b,c), and a query like

  select COUNT(DISTINCT b), SUM(DISTINCT a) from ...

we first make a list of the items in the AGGR(DISTINCT) functions (here a and b), where the order of the items does not matter, and then check whether we have a composite key whose prefix of index columns matches the items of the aggregation functions (in this case a,b,c). If yes, we can use a loose index scan and need not perform duplicate removal for DISTINCT in the aggregate functions. In our table, we traverse the rows marked with <-- and get:

  (a,b,c)       count(distinct b)    sum(distinct a)
                treated as count(b)  treated as sum(a)
  (1,1,2) <--   1                    1
  (1,2,2) <--   1++=2                1+1=2
  (1,2,3)
  (2,1,2) <--   2++=3                1+1+2=4
  (2,2,2) <--   3++=4                1+1+2+2=6
  (2,2,3)

The result will be (4,6), but it should be (2,3), so in this case our assumption is incorrect. For a query like select count(distinct a,b), sum(distinct a,b) from ... we can use a loose index scan.
Solution: When a query has more than one AGGR(DISTINCT) function, they should refer to the same fields, as in

  select count(distinct a,b), sum(distinct a,b) from ...

--> we can use a loose index scan, as both AGGR(DISTINCT) refer to the same fields a,b. If they refer to different fields, as in

  select count(distinct a), sum(distinct b) from ...

--> we will not use a loose index scan, as the AGGR(DISTINCT) functions refer to different fields.
-
Annamalai Gurusami authored
-
mysql-builder@oracle.com authored
No commit message
-
- 12 May, 2013 1 commit
-
-
Annamalai Gurusami authored
-