- 21 Jan, 2013 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
buf_page_get_gen(): Do not attempt to decompress a compressed-only page when mode == BUF_PEEK_IF_IN_POOL. This mode is only used by btr_search_drop_page_hash_when_freed(). There cannot be any adaptive hash index pointing to a page that does not exist in uncompressed format in the buffer pool.

innodb_buffer_pool_evict_update(): New function for debug builds, to handle SET GLOBAL innodb_buffer_pool_evict='uncompressed' by evicting all uncompressed page frames of compressed tablespaces from the buffer pool.

rb#1873 approved by Jimmy Yang
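The peek guard can be sketched as a small decision function (a simplified model with reduced types; the real buf_page_get_gen() works on buf_block_t states under the buffer pool mutex):

```cpp
// Simplified model, not the verbatim InnoDB source.
enum Mode  { BUF_GET, BUF_PEEK_IF_IN_POOL };
enum State { BLOCK_FILE_PAGE, BLOCK_ZIP_PAGE };  // uncompressed frame vs zip-only

// True when the lookup should return NULL rather than decompress a
// compressed-only page: BUF_PEEK_IF_IN_POOL is only used by
// btr_search_drop_page_hash_when_freed(), and the adaptive hash index
// never points to a page that lacks an uncompressed frame, so there is
// nothing to drop and no reason to inflate the page.
inline bool peek_skips_decompression(Mode mode, State state) {
  return mode == BUF_PEEK_IF_IN_POOL && state == BLOCK_ZIP_PAGE;
}
```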
-
- 19 Jan, 2013 2 commits
-
-
Venkatesh Duggirala authored
THAN A TABLE. Merging fix from mysql-5.1
-
Venkatesh Duggirala authored
RATHER THAN A TABLE

Problem: In RBR, if a table is converted into a view on the slave (i.e., "drop table 'object1'" followed by "create view 'object1'"), then any DML operation on the table at the master causes a crash on the slave.

Analysis: The slave prepares the list of tables to be opened for DML when it receives the Table_map_log_event(s), and the same list is passed to the open_table function. The open_table logic assumes that if the list contains a view object, it also contains the "select_lex" object of that view. In this special case, the table object does not contain a 'select_lex', since it is a base table on the master. Because it is a view on the slave, the open_table logic goes to the 'mysql_make_view()' function, which assumes that a 'select_lex' exists for the object.

Fix: While preparing the 'tables to be opened' list, make sure that the required table type is 'base table'. If the object is not a base table when it is opened, mysql_make_view will throw an error similar to 'object is not a base table'.
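As a rough model of the fix (a self-contained sketch; FRMTYPE_TABLE mirrors the server's frm-type constant, but the function and error handling here are illustrative, not the replication code):

```cpp
#include <stdexcept>
#include <string>

enum FrmType { FRMTYPE_TABLE, FRMTYPE_VIEW };  // mirrors the server's frm types

// While building the "tables to be opened" list from Table_map events,
// the slave demands base tables. A name that has turned into a view on
// the slave then fails with a clean error instead of reaching
// mysql_make_view() without a select_lex and crashing.
void open_for_rbr_apply(const std::string& name, FrmType actual_type) {
  const FrmType required_type = FRMTYPE_TABLE;  // set when the list is built
  if (actual_type != required_type)
    throw std::runtime_error("'" + name + "' is not a base table");
  // ... proceed with the normal base-table open ...
}
```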
-
- 18 Jan, 2013 3 commits
-
-
Astha Pareek authored
disabled binlog_spurious_ddl_errors on mysql-5.5
-
mysql-builder@oracle.com authored
No commit message
-
Astha Pareek authored
The test binlog.binlog_spurious_ddl_errors was failing on pb2 at the statement "UNINSTALL PLUGIN example;" with this warning: "Warning 1620 Plugin is busy and will be uninstalled on shutdown".

Fix: The spurious warnings occur because the test does not empty the query cache, which is used by the example plugin at the time of creating tables using the plugin. Hence, the query cache is now flushed before uninstalling the plugin. Also, as part of making the test run across platforms, the plugin installation script is changed.
-
- 17 Jan, 2013 1 commit
-
-
Marko Mäkelä authored
Get rid of the O(n^2) scan in dyn array (mtr->memo) operations by accessing the dyn array blocks directly.

dyn_array_get_last_block(), dyn_array_get_next_block(), dyn_array_get_prev_block(): Define as constness-preserving macros. Add const qualifiers to many dyn_array functions.

mtr_memo_slot_release_func(): Renamed from mtr_memo_slot_release(). Make mtr_t* a debug-only parameter. Assume that slot->object != NULL.

mtr_memo_pop_all(): Access the dyn_array blocks directly, replacing an O(n^2) operation with O(n).

mtr_memo_release(): Access the dyn_array blocks directly, replacing an O(n^2) operation with O(n). This caused the performance problem.

rb#1540 approved by Jimmy Yang
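The shape of the O(n) traversal can be illustrated with a simplified model (the real dyn_array stores raw bytes and is walked with the dyn_array_get_*_block() accessors named above; the types below are stand-ins):

```cpp
#include <cstddef>

// Simplified model: a dyn array as a linked list of fixed-size blocks,
// each holding `used` memo slots.
struct Slot  { void* object; };
struct Block { Slot slots[64]; std::size_t used; Block* prev; };

// O(n) release: walk the blocks from last to first and visit every
// slot exactly once. The old code fetched each slot by its index from
// the start of the array, re-scanning the block list per slot, which
// made the whole pass O(n^2).
void release_all(Block* last) {
  for (Block* block = last; block != nullptr; block = block->prev) {
    for (std::size_t i = block->used; i-- > 0; ) {
      if (block->slots[i].object != nullptr) {
        // e.g. unlock/unfix the latched object, then clear the slot
        block->slots[i].object = nullptr;
      }
    }
  }
}
```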
-
- 16 Jan, 2013 5 commits
-
-
Anirudh Mangipudi authored
Null Merge from 5.1 to 5.5
-
Anirudh Mangipudi authored
Problem: When a view with a specific character set and collation is created on another view with a different character set and collation, restoring the dump results in an 'illegal mix of collations' error.

Solution: To avoid this confusion of collations, the datatype used in the generated CREATE TABLE is hardcoded as "tinyint NOT NULL". This does not matter, as the table created will be dropped at runtime; tinyint is used specifically to avoid hitting row size limits.
-
Neeraj Bisht authored
Consider the following query:

SELECT f_1,...,f_m, AGGREGATE_FN(C) FROM t1 WHERE ... GROUP BY ...

Loose index scan ("Using index for group-by") can be used for this query if there is an index 'i' covering all fields in the select list, and the GROUP BY clause makes up a prefix f1,...,fn of 'i'. Furthermore, according to rule NGA2 (NGA: Non-Group Attribute) of get_best_group_min_max(), the WHERE clause must contain a conjunction of equality predicates for all fields fn+1,...,fm.

The problem in this bug was that a query with a WHERE clause that broke NGA2 was not detected, and loose index scan was therefore used. This led to wrong results. The query had an index covering (c1,c2) and had:

"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b') GROUP BY c1" or
"WHERE (c1 = 1) OR (c1 = 2 AND c2 = 'b') GROUP BY c1"

This WHERE clause cannot be transformed to a conjunction of equality predicates.

The solution is to introduce another rule, NGA3, that complements NGA2. NGA3 says that if a gap field (a field between those listed in the GROUP BY and C in the index) has a predicate, then there can only be one range in the query. This requirement is stricter than it has to be in theory; BUG 15947433 will deal with that.
-
Neeraj Bisht authored
Consider the following query:

SELECT f_1,...,f_m, AGGREGATE_FN(C) FROM t1 WHERE ... GROUP BY ...

Loose index scan ("Using index for group-by") can be used for this query if there is an index 'i' covering all fields in the select list, and the GROUP BY clause makes up a prefix f1,...,fn of 'i'. Furthermore, according to rule NGA2 of get_best_group_min_max(), the WHERE clause must contain a conjunction of equality predicates for all fields fn+1,...,fm.

The problem in this bug was that a query with a WHERE clause that broke NGA2 was not detected, and loose index scan was therefore used. This led to wrong results. The query had an index covering (c1,c2) and had:

"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b') GROUP BY c1" or
"WHERE (c1 = 1) OR (c1 = 2 AND c2 = 'b') GROUP BY c1"

This WHERE clause cannot be transformed to a conjunction of equality predicates.

The solution is to introduce another rule, NGA3, that complements NGA2. NGA3 says that if a gap field (a field between those listed in the GROUP BY and C in the index) has a predicate, then there can only be one range in the query. This requirement is stricter than it has to be in theory; BUG 15947433 will deal with that.
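The essence of the new rule can be sketched as a predicate over the optimizer's findings (hypothetical names; the actual check inside get_best_group_min_max() is considerably more involved):

```cpp
// NGA3 sketch: reject loose index scan when a gap field (between the
// GROUP BY prefix and the MIN/MAX argument in the index) carries a
// predicate and the WHERE clause yields more than one range. An OR of
// conjunctions, as in the bug's WHERE clause, produces several ranges
// and cannot be folded into one conjunction of equalities.
inline bool nga3_allows_loose_index_scan(bool gap_field_has_predicate,
                                         unsigned range_count) {
  if (gap_field_has_predicate && range_count > 1)
    return false;  // fall back to another access method
  return true;
}
```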
-
mysql-builder@oracle.com authored
No commit message
-
- 15 Jan, 2013 4 commits
-
-
Nisha Gopalakrishnan authored
Analysis: When the server is out of memory, an error is raised to indicate that. Handling the error requires more memory to be allocated, which fails, hence the error handling loops in recursion and causes the server to crash.

Fix:
a) Prevent pushing the 'out of memory' error condition to the diagnostic area, as doing so requires memory allocation. GET DIAGNOSTICS, SHOW WARNINGS and SHOW ERRORS statements will not show information about this error; however, the 'out of memory' error is still returned to the client.
b) Set the ME_FATALERROR flag when 'out of memory' errors are reported (in places where the flag is not already set). This flag prevents activation of SP error handlers, which also require memory allocation and are therefore likely to fail.
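For part (b), a report with the fatal flag might look like the fragment below (my_error(), MYF() and ME_FATALERROR are existing mysys facilities; the exact call sites being patched, and the requested_bytes variable, are assumptions):

```cpp
// ME_FATALERROR keeps stored-procedure error handlers, which would
// themselves need to allocate memory, from being activated for this
// out-of-memory condition.
my_error(ER_OUTOFMEMORY, MYF(ME_FATALERROR),
         static_cast<int>(requested_bytes));  // requested_bytes: hypothetical
```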
-
Neeraj Bisht authored
Problem: In the case of a blob data field, UNION ALL doesn't give the correct result.

Analysis: In a MyISAM table, when we don't want to check a particular key for distinctness, we set the key_map to zero. While writing a record to the MyISAM table, we check for distinctness with the help of the keys, by checking whether a key is active in key_map before writing the record. In the case of a blob field, distinctness is checked via a unique constraint, without checking whether that unique key is active in key_map.

Solution: Before checking for distinctness, check whether any key is active in key_map.
-
Neeraj Bisht authored
Problem: In the case of a blob data field, UNION ALL doesn't give the correct result.

Analysis: In a MyISAM table, when we don't want to check a particular key for distinctness, we set the key_map to zero. While writing a record to the MyISAM table, we check for distinctness with the help of the keys, by checking whether a key is active in key_map before writing the record. In the case of a blob field, distinctness is checked via a unique constraint, without checking whether that unique key is active in key_map.

Solution: Before checking for distinctness, check whether any key is active in key_map.
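The guard can be modeled as follows (a self-contained sketch; the real code tests the MyISAM share's key_map before the unique-constraint check, and the names here are illustrative):

```cpp
#include <cstdint>

using key_map_t = std::uint64_t;  // bit i set => key i is active

// UNION ALL sets key_map to zero on the temporary table precisely so
// that no distinctness check happens. The blob unique-constraint path
// must honor that, the same way the regular key path already gates on
// key_map before using a key for duplicate detection.
inline bool should_check_blob_uniqueness(key_map_t key_map) {
  return key_map != 0;
}
```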
-
mysql-builder@oracle.com authored
No commit message
-
- 14 Jan, 2013 5 commits
-
-
Neeraj Bisht authored
BUG#14303860 - EXECUTING A SELECT QUERY WITH TOO MANY WILDCARDS CAUSES A SEGFAULT
Backport from 5.6 and trunk.
-
Olav Sandstaa authored
WITH A VARIABLE AND ORDER BY
Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX

This is a fix for a regression introduced by Bug#12667154. Bug#12667154 attempted to fix a performance problem with subqueries that did filesort: for doing filesort, the optimizer creates a quick select object to use when building the sort index. This quick select object was deleted after the first call to create_sort_index(). Thus, for queries where the subquery was executed multiple times, the quick object was only used for the first execution; for all later executions of the subquery, filesort used a complete table scan for building the sort index. The fix for Bug#12667154 tried to fix this by not deleting the quick object after the first execution of create_sort_index(), so that it would be re-used for building the sort index by the following executions of the subquery.

The regression introduced by Bug#12667154 is that, due to the quick select object not being deleted after building the sort index, the quick object could in some cases also be used during the second phase of the execution of the subquery, instead of the created sort index. This caused wrong results to be returned.

The fix for this issue is to delete the reference to the select object after it has been used in create_sort_index(). In this way the select and quick objects will not be available when doing the second phase of the execution of the select operation. To ensure that the select object can be re-used by the following executions of the subquery, a copy of the select pointer is made and used to restore the select object after the select operation is completed.
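Conceptually, the fix is a save/hide/restore around the sort step (a sketch with stand-in types modeled on the message, not the verbatim 5.5 executor):

```cpp
struct SqlSelect { /* stand-in for SQL_SELECT and its quick object */ };
struct JoinTab  { SqlSelect* select = nullptr; };

// Stand-in: builds the sort index, possibly via tab->select's quick object.
static void create_sort_index(JoinTab* /*tab*/) { /* ... */ }

void execute_with_sort(JoinTab* tab) {
  SqlSelect* const saved_select = tab->select;  // copy of the pointer
  create_sort_index(tab);
  tab->select = nullptr;        // phase 2 must read the sort index,
                                // never the still-alive quick object
  // ... second phase of the (sub)query execution runs here ...
  tab->select = saved_select;   // restore for the subquery's next execution
}
```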
-
Neeraj Bisht authored
MANY WILDCARDS CAUSES A SEGFAULT
Backport from 5.6 and trunk.
-
-
WITH AN ASSERTION

Recently we added a check to handle the kill query signal for long-running queries. While the query interruption is reported, we must ensure that the cursor is restored to a proper state for the HANDLER interface to work correctly. A normal select query will not face this problem: on receiving an interrupt, the select query is aborted, and a new select query results in re-initialization (including of the cursor).

rb://1836. Approved by Marko.
-
- 12 Jan, 2013 2 commits
-
-
Nisha Gopalakrishnan authored
Merge from 5.1 to 5.5
-
Nisha Gopalakrishnan authored
Analysis: The REPLACE operation provides incorrect output when a user variable is supplied as an argument and there are multiple rows on which the operation is performed. Consider the example below:

SET @var='(( 00000000 ++ 00000000 ))';
SELECT REPLACE(@var, '00000000', table_name) AS a FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mysql';

Invalid output:
+---------------------------------------+
| REPLACE(@var, '00000000', TABLE_NAME) |
+---------------------------------------+
| (( columns_priv ++ columns_priv ))    |
| (( columns_priv ++ columns_priv ))    |
......
......
| (( columns_priv ++ columns_priv ))    |
| (( columns_priv ++ columns_priv ))    |
| (( columns_priv ++ columns_priv ))    |
+---------------------------------------+

The user argument supplied as the string to the REPLACE operation is overwritten after the first iteration to '(( columns_priv ++ columns_priv ))'. The overwritten string is then used for the subsequent REPLACE iterations; since the pattern string is no longer found, the invalid output shown above is returned.

Fix: If the Alloced_length is zero, realloc() and create a copy of the string, which is then used for the REPLACE operation on every iteration.
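The fix can be sketched in terms of the server's String class (alloced_length() and copy() are existing String methods; the tmp_value member and the error handling here are assumptions):

```cpp
// If the argument String does not own its buffer (Alloced_length == 0,
// as when it points straight into the user variable's value), REPLACE
// must work on a private copy so that the variable itself is not
// rewritten in the first iteration.
if (res->alloced_length() == 0) {
  if (tmp_value.copy(*res))   // realloc + copy of the bytes
    return nullptr;           // simplified out-of-memory handling
  res = &tmp_value;           // later iterations operate on the copy
}
```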
-
- 11 Jan, 2013 3 commits
-
-
Aditya A authored
INCLUDES FIRST PARTITION WHEN PRUNING [Merge from 5.1 to 5.5]
-
Aditya A authored
INCLUDES FIRST PARTITION WHEN PRUNING

Problem: TO_DAYS()/TO_SECONDS() can return NULL for invalid dates, which are stored in the first partition; therefore the first partition was always included in the scan when a range was specified.

Fix: The fix is a small optimization which prunes the scanning of the NULL/first partition if the dates specified in the range are valid and in the same year and month. The TO_SECONDS() function is not supported in 5.1, so it was removed from the fix and test scripts for the mysql-5.1 version.
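The pruning condition might be sketched as follows (simplified types and a simplified validity helper; the real check lives in the partitioning interval-walker code and works on MYSQL_TIME values):

```cpp
struct Date { unsigned year, month, day; };

// Simplified validity check (the server's rules are stricter).
static bool is_valid_date(const Date& d) {
  static const unsigned days_in[12] = {31,29,31,30,31,30,31,31,30,31,30,31};
  return d.month >= 1 && d.month <= 12 &&
         d.day >= 1 && d.day <= days_in[d.month - 1];
}

// The first (NULL) partition must be scanned only if an invalid date,
// for which TO_DAYS() would return NULL, could fall inside the range.
// For valid endpoints within one year and month, it cannot.
bool must_scan_null_partition(const Date& start, const Date& end) {
  if (!is_valid_date(start) || !is_valid_date(end))
    return true;
  return !(start.year == end.year && start.month == end.month);
}
```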
-
Chaithra Gopalareddy authored
-
- 10 Jan, 2013 6 commits
-
-
Venkata Sidagam authored
Problem description: When the client loses the connection to the MySQL server, or the server gets shut down after mysql_stmt_prepare(), the next mysql_stmt_prepare() will return an error (as expected), but the consecutive call to mysql_stmt_execute() will crash the client program. The expected behavior is that it should throw an error.

Analysis: mysql_stmt_prepare() internally calls the function end_server(), where net->vio and net->buff are freed and set to NULL. The next call to mysql_stmt_execute() then internally calls net_clear(), where "net->vio" is used without validating it.

Fix: Validate net->vio before calling net_clear().
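In client-library terms the guard is a one-line check (a sketch; whether it sits in the caller or inside net_clear() itself is an assumption):

```cpp
// After end_server() has torn the connection down, net->vio is NULL;
// skipping the buffer-clearing step lets mysql_stmt_execute() fail
// cleanly with an error instead of dereferencing a freed connection.
if (net->vio != NULL)
  net_clear(net, 1 /* clear_buffer */);
```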
-
Chaithra Gopalareddy authored
INCORRECT RESULTS
This is a backport of the fix for Bug#13068506.
-
Praveenkumar Hulakund authored
-
Praveenkumar Hulakund authored
AVAILABLE MEMORY IS TOO LOW

Analysis: In the function "mysql_make_view", "table->view" is initialized after parsing (using File_parser::parse) the view definition. If the "::parse" function fails, control moves to the label "err:", where we have the assert (table->view == thd->lex). This assert fails when "::parse" fails, because table->view has not been initialized yet. File_parser::parse fails if the data being parsed is incorrect/corrupted or when memory allocation fails; in this scenario it fails because of a failure in memory allocation.

Fix: In case of failure in the function "File_parser::parse", moving to the label "err:" is incorrect. The code is modified to move to the label "end:" instead.
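A hypothetical sketch of the corrected control flow (stand-in names; parse_failed() models File_parser::parse() failing, e.g. on memory exhaustion):

```cpp
#include <cassert>

static bool parse_failed()     { return true;  }  // models the OOM scenario
static bool some_later_error() { return false; }  // any post-init failure

int make_view_sketch() {
  bool view_initialized = false;  // models table->view being set
  int result = 1;

  if (parse_failed())
    goto end;                   // fix: was `goto err`, whose assertion
                                // presumes the view is initialized
  view_initialized = true;      // models `table->view = thd->lex`
  if (some_later_error())
    goto err;                   // later failures still take the err path
  result = 0;
  goto end;

err:
  assert(view_initialized);     // safe: only reached after initialization
end:
  return result;                // release resources and return
}
```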
-
Annamalai Gurusami authored
Problem: During the index intersect access method, the SQL layer will access one row that satisfies a set of conditions using an index i1, and will then try to access the same row, with another set of conditions, using the next index i2. If the fetch from i2 fails (we are talking about an error situation here, not simply an unmatched row), it will unlock the row accessed via i1. This works in all situations except a deadlock error. When a deadlock happens, InnoDB rolls back the transaction and intimates the SQL layer about this through the THD::transaction_rollback_request member, but this member is not currently used by the SQL layer.

Solution: When an error happens, the SQL layer must check the THD::transaction_rollback_request member before calling handler::unlock_row(). We have also added a debug assert in ha_innobase::unlock_row(), checking that it must be called only when the transaction is in the active state.

rb#1773 approved by Marko and Sunny.
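The SQL-layer guard could look roughly like this fragment (the member names follow the message; the exact call site in the index intersect code path is an assumption):

```cpp
// Only unlock the row if the storage engine has not already requested
// a transaction rollback. On a deadlock, InnoDB rolls the transaction
// back itself: the row locks are gone, and unlock_row() must not run.
if (error && !thd->transaction_rollback_request)
  table->file->unlock_row();
```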
-
prabakaran thirumalai authored
Analysis: On Solaris, killing a connection which waits on debug sync (waits on a condition variable) is neglected; a subsequent kill of the connection to that thread succeeds. Debug sync code is not included in release builds, hence this is not a customer issue. Also verified that, except for this case, the other cases in the main.kill test script succeed. So this test is moved to the experimental state on the Solaris platform only, in the mysql-5.5 branch.
-
- 09 Jan, 2013 2 commits
-
-
Sunny Bains authored
-
Sunny Bains authored
Backport fix from mysql-5.6.
-
- 08 Jan, 2013 3 commits
-
-
Hery Ramilison authored
-
hery.ramilison@oracle.com authored
-
Sunanda Menon authored
-
- 07 Jan, 2013 2 commits
-
-
Satya Bodapati authored
-
Satya Bodapati authored
DIAGNOSTICS_AREA::SET_OK_STATUS

Use DBUG_RETURN() instead of return if DBUG_ENTER() is used in the function. This patch fixes the Windows pb2 failure on mysql-5.1.

Approved by Marko. rb#1792
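For reference, the pairing rule the message refers to (a minimal generic example, not the actual Diagnostics_area code):

```cpp
#include <my_dbug.h>  // DBUG_ENTER / DBUG_RETURN (mysys debug macros)

static int twice_nonnegative(int x) {
  DBUG_ENTER("twice_nonnegative");  // pushes a frame on the DBUG call stack
  if (x < 0)
    DBUG_RETURN(-1);  // pops the frame; a bare `return` here would leave
                      // the DBUG stack unbalanced and corrupt the trace
  DBUG_RETURN(2 * x);
}
```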
-