- 16 May, 2012 2 commits
-
-
Olav Sandstaa authored
RESULTS ON IN() & NOT IN() COMP #3

This bug causes a wrong result in mysql-trunk when ICP is used, and bad performance in mysql-5.5 and mysql-trunk. Using the query from the bug report to explain what happens and what causes the wrong result when ICP is enabled:

1. The t3 table contains four records. The outer query will read these and, for each of them, execute the subquery.

2. Before the first execution of the subquery it will be optimized. In this case the important part is what happens to the first table t1:
   - make_join_select() will call the range optimizer, which decides that t1 should be accessed using a range scan on the k1 index and creates a QUICK_RANGE_SELECT object for this.
   - As the last part of optimization, the ICP code pushes the condition down to the storage engine for table t1 on the k1 index. This produces the following information in the explain for this table:
       2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
     Note the use of filesort.

3. The first execution of the subquery does (among other things) the following, due to the need for sorting:
   a. Call create_sort_index(), which again will call find_all_keys().
   b. find_all_keys() will read the required keys for all qualifying rows from the storage engine. To do this it checks if it has a quick-select for the table and, if so, uses it for reading records. In this case it will read four records from the storage engine (based on the range criteria). The storage engine will evaluate the pushed index condition for each record.
   c. At the end of create_sort_index() there is code that cleans up a lot of state on the join tab. One of the things that is cleaned up is the select object. The result is that the quick-select object created in make_join_select() is deleted.

4. The second execution of the subquery does the same as the first, but the result is different:
   a. Call create_sort_index(), which again will call find_all_keys() (same as for the first execution).
   b. find_all_keys() will read the keys from the storage engine. To do this it checks if it has a quick-select for the table. Now there is NO quick-select object (it was deleted in step 3c), so find_all_keys() falls back to reading the table using a table scan. Instead of reading the four relevant records in the range, it reads the entire table (6 records). It then evaluates the table's condition, and here it goes wrong: since the entire condition has been pushed down to the storage engine using ICP, all 6 records qualify. (Note that the storage engine will not evaluate the pushed index condition in this case, since it was pushed for the k1 index and the table scan does not use any index.) The result is that six qualifying key values are returned instead of four, because the table's condition is never evaluated.
   c. As above.

5. The last two executions of the subquery also produce wrong results for the same reason.

Summary: The problem occurs because all executions of the subquery except the first are done as a table scan without evaluating the table's condition (which is pushed to the storage engine on a different index). This is caused by create_sort_index() deleting the quick-select object that should have been used for executing the subquery as a range scan. Note that in addition to causing wrong results, this bug can also result in bad performance, because the subquery is executed using a table scan instead of a range scan. This is an issue in MySQL 5.5.
The fix for this problem is to ensure that the quick-select object created by the optimizer is not deleted when create_sort_index() cleans up the join-tab. This way the quick-select object and the corresponding pushed index condition remain available and are used by all following executions of the subquery. sql/sql_select.cc: Fix for Bug#12667154: Change how create_sort_index() cleans up the join_tab's select and quick-select objects so that a quick-select object created outside of create_sort_index() is not deleted.
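Below is a minimal C++ sketch of the idea behind the fix, not the actual sql_select.cc code; JoinTab, RangeSelect, cleanup_after_sort and the created_by_sort flag are hypothetical stand-ins for JOIN_TAB, the quick-select object and the clean-up code in create_sort_index().

    // Hypothetical illustration of the clean-up rule, not the real MySQL source.
    struct RangeSelect { /* quick-select state for a range scan */ };

    struct JoinTab {
      RangeSelect *select = nullptr;   // may have been created by the optimizer
    };

    // Clean up after filesort: only free a quick-select that the sort code
    // itself created. One created by make_join_select() must stay alive so
    // that every later execution of the subquery still runs the range scan
    // with its pushed index condition instead of falling back to a table scan.
    static void cleanup_after_sort(JoinTab *tab, bool created_by_sort) {
      if (created_by_sort) {
        delete tab->select;
        tab->select = nullptr;
      }
    }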
-
unknown authored
No commit message
-
- 15 May, 2012 8 commits
-
-
Nuno Carvalho authored
Automerge from mysql-5.1 into mysql-5.5.
-
Nuno Carvalho authored
Improved random number filtering verification in the rpl_filter_tables_not_exist test.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_table_replace_index_in_foreign_list(): Replace the dropped index also in the foreign key constraints of child tables that reference this table. row_ins_check_foreign_constraint(): If the underlying index is missing, refuse the operation. rb:1051 approved by Jimmy Yang
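An illustrative C++ sketch of the two changes; ForeignConstraint, Index and the helper names are hypothetical stand-ins, not InnoDB's dict_foreign_t/dict_index_t API.

    #include <vector>

    struct Index { /* a secondary index on the referenced (parent) table */ };

    struct ForeignConstraint {
      Index *referenced_index;   // index the child table's constraint points at
    };

    // When an index on the parent table is dropped, also patch the constraints
    // of child tables that reference this table so they use the replacement.
    void replace_index_in_foreign_list(std::vector<ForeignConstraint*> &referencing,
                                       Index *dropped, Index *replacement) {
      for (ForeignConstraint *fk : referencing)
        if (fk->referenced_index == dropped)
          fk->referenced_index = replacement;
    }

    // On row insert, refuse the foreign key check if the underlying index is gone.
    bool check_foreign_constraint(const ForeignConstraint &fk) {
      return fk.referenced_index != nullptr;   // false => refuse the operation
    }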
-
Georgi Kodinov authored
-
Georgi Kodinov authored
Applied the fix that updates yaSSL to 2.2.1 and fixes parsing this particular certificate. Added a test case with the certificate itself.
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 14 May, 2012 1 commit
-
-
Joerg Bruehe authored
-
- 10 May, 2012 2 commits
-
-
Annamalai Gurusami authored
-
Annamalai Gurusami authored
BY A CONCURRENT TRANSACTIO The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs a table handler clone. InnoDB did not provide a clone operation: there was no ha_innobase::clone(), and handler::clone() does not take care of ha_innobase->prebuilt->select_lock_type. Because of this, a locking read was done for one index while a non-locking (consistent) read was done for the other index. The patch introduces the ha_innobase::clone() member function, implemented similarly to ha_myisam::clone(): it calls the base class handler::clone() and then does any additional operations required, in particular setting ha_innobase->prebuilt->select_lock_type correctly. rb://1060 approved by Marko
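A simplified C++ sketch of the approach; the class and field names are stand-ins for handler, ha_innobase and row_prebuilt_t, not the real storage-engine API.

    enum LockType { LOCK_NONE, LOCK_S, LOCK_X };

    struct Prebuilt { LockType select_lock_type = LOCK_NONE; };

    class InnobaseHandler {
     public:
      Prebuilt prebuilt;

      // Analogous to ha_myisam::clone(): do the generic base-class clone work
      // first, then copy engine-specific state. Copying select_lock_type makes
      // sure that when init_ror_merged_scan() clones the handler, both index
      // scans use the same (locking or non-locking) read mode.
      InnobaseHandler *clone() const {
        InnobaseHandler *copy = clone_base();               // generic part
        copy->prebuilt.select_lock_type = prebuilt.select_lock_type;
        return copy;
      }

     private:
      // Stand-in for handler::clone(): allocates and opens a fresh handler on
      // the same table; the details are omitted here.
      InnobaseHandler *clone_base() const { return new InnobaseHandler(*this); }
    };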
-
- 09 May, 2012 1 commit
-
-
Georgi Kodinov authored
-
- 08 May, 2012 1 commit
-
-
Sunanda Menon authored
-
- 07 May, 2012 3 commits
-
-
Joerg Bruehe authored
This is a weave merge, but without any conflicts. In 14 source files, the copyright year needed to be updated to 2012.
-
Venkata Sidagam authored
CAUSES RESTORE PROBLEM Merging the fix from mysql-5.1 to mysql-5.5. mysql-test/t/mysqldump.test: The test case added as part of this fix differs from the one in mysql-5.1: in mysql-5.5 and mysql-5.6, "DROP mysql database" fails when logging is enabled, hence those lines were removed.
-
Venkata Sidagam authored
CAUSES RESTORE PROBLEM

Problem Statement:
------------------
mysqldump does not contain the dump statements for the general_log and slow_log tables. That is because of the fix for Bug#26121. Hence, after dropping the mysql database and applying the dump with logging enabled, "'general_log' table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the general_log and slow_log tables, because the data dump of those tables takes LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables, take only the metadata dump, which does not need LOCKS. The algorithm is as follows.

Design before fix:
1) The mysql database has tables like db, event, ..., general_log, ..., slow_log, ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ..., general_log, ..., slow_log, ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped "DROP TABLE" for those tables: "DROP TABLE" fails for these tables if logging is enabled, and the customer applies the dump with logging enabled, so a dump containing "DROP TABLE" would fail. Hence, the "DROP TABLE" statements for those tables were removed. After the fix we could still observe "Table 'mysql.general_log' doesn't exist" errors initially; that is because in the customer scenario the mysql database is dropped with logging enabled, so those errors are expected. Once the dump taken before the "drop database mysql" is applied, the errors no longer occur.

client/mysqldump.c: In get_table_structure(), added code to skip the DROP TABLE statements for the general_log and slow_log tables, because those statements fail when logging is enabled, and replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make sure the CREATE statement does not fail now that the DROP statements are removed. In dump_all_tables_in_db(), added code to call get_table_structure() for the general_log and slow_log tables.

mysql-test/r/mysqldump.result: Added a test as part of the fix for Bug #11754178.

mysql-test/t/mysqldump.test: Added a test as part of the fix for Bug #11754178.
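A rough C++ sketch of the dump flow described above; the function names mirror the description, but the code is illustrative, not the actual mysqldump.c source.

    #include <cstdio>
    #include <cstring>
    #include <string>
    #include <vector>

    static bool is_log_table(const char *name) {
      return !strcmp(name, "general_log") || !strcmp(name, "slow_log");
    }

    // Emit table structure. For the log tables there is no DROP TABLE (it would
    // fail when logging is enabled) and CREATE TABLE IF NOT EXISTS is used
    // instead of CREATE TABLE, since the table may still exist on restore.
    static void dump_table_structure(const char *table) {
      if (!is_log_table(table))
        printf("DROP TABLE IF EXISTS `%s`;\n", table);
      printf("CREATE TABLE %s`%s` (/* columns from SHOW CREATE TABLE */);\n",
             is_log_table(table) ? "IF NOT EXISTS " : "", table);
    }

    static void dump_all_tables_in_db(const std::vector<std::string> &tables) {
      // Log tables are excluded from the locked, data-dumped table list, but
      // their structure (metadata only) is still emitted explicitly.
      dump_table_structure("general_log");
      dump_table_structure("slow_log");

      for (const std::string &t : tables) {
        if (is_log_table(t.c_str())) continue;   // skipped when building the list
        dump_table_structure(t.c_str());
        // ... take TL_READ lock, dump rows, release the lock (omitted) ...
      }
    }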
-
- 04 May, 2012 2 commits
-
-
Venkata Sidagam authored
CAUSES RESTORE PROBLEM

Problem Statement:
------------------
mysqldump does not contain the dump statements for the general_log and slow_log tables. That is because of the fix for Bug#26121. Hence, after dropping the mysql database and applying the dump with logging enabled, "'general_log' table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped the dumping of the general_log and slow_log tables, because the data dump of those tables takes LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables, take only the metadata dump, which does not need LOCKS. The algorithm is as follows.

Design before fix:
1) The mysql database has tables like db, event, ..., general_log, ..., slow_log, ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ..., general_log, ..., slow_log, ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped "DROP TABLE" for those tables: "DROP TABLE" fails for these tables if logging is enabled, and the customer applies the dump with logging enabled, so a dump containing "DROP TABLE" would fail. Hence, the "DROP TABLE" statements for those tables were removed. After the fix we could still observe "Table 'mysql.general_log' doesn't exist" errors initially; that is because in the customer scenario the mysql database is dropped with logging enabled, so those errors are expected. Once the dump taken before the "drop database mysql" is applied, the errors no longer occur.

client/mysqldump.c: In get_table_structure(), added code to skip the DROP TABLE statements for the general_log and slow_log tables, because those statements fail when logging is enabled, and replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make sure the CREATE statement does not fail now that the DROP statements are removed. In dump_all_tables_in_db(), added code to call get_table_structure() for the general_log and slow_log tables.

mysql-test/r/mysqldump.result: Added a test as part of the fix for Bug #11754178.

mysql-test/t/mysqldump.test: Added a test as part of the fix for Bug #11754178.
-
Annamalai Gurusami authored
the keyword "last" and not "break". Fixing the failing test case.
-
- 27 Apr, 2012 6 commits
-
-
Alexander Nozdrin authored
TO "LOCALHOST" IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED. Previous commit comments were wrong. The default value has always been NULL. The original patch for Bug#12762885 just makes it visible in the logs. This patch uses "0.0.0.0" string if bind-address is not set.
-
Alexander Nozdrin authored
- alexander.nozdrin@oracle.com-20120427151428-7llk1mlwx8xmbx0t - alexander.nozdrin@oracle.com-20120427144227-kltwiuu8snds4j3l.
-
Alexander Nozdrin authored
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED. The original patch removed the default value of the bind-address option, so the default value became NULL. By coincidence NULL resolves to 0.0.0.0 and ::, and since the server chooses the first IPv4 address, 0.0.0.0 is chosen. So there was no change in the behaviour. This patch restores the default value of the bind-address option to "0.0.0.0".
-
Alexander Nozdrin authored
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED. The original patch removed the default value of the bind-address option, so the default value became NULL. By coincidence NULL resolves to 0.0.0.0 and ::, and since the server chooses the first IPv4 address, 0.0.0.0 is chosen. So there was no change in the behaviour. This patch restores the default value of the bind-address option to "0.0.0.0".
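A minimal C++ sketch of the behaviour described above, falling back to "0.0.0.0" when bind-address is not set; the option struct and function are illustrative, not the server's actual option handling.

    #include <string>

    struct ServerOptions {
      const char *bind_address = nullptr;   // NULL when --bind-address is not given
    };

    // With the fix, an unset bind-address explicitly defaults to "0.0.0.0"
    // instead of relying on NULL happening to resolve to the same addresses.
    std::string effective_bind_address(const ServerOptions &opts) {
      return opts.bind_address ? std::string(opts.bind_address)
                               : std::string("0.0.0.0");
    }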
-
Yasufumi Kinoshita authored
Fixed so that the timeout is not checked during CHECK TABLE.
-
Yasufumi Kinoshita authored
Fixed so that the timeout is not checked during CHECK TABLE.
-
- 26 Apr, 2012 3 commits
-
-
irana authored
-
irana authored
-
Manish Kumar authored
Problem - The failure on PB2 is possibly due to the port number still being in use even after the server restarts, which is not reflected in the restart. Fix - The problem is fixed by starting the servers forcefully using the option file and by passing the parameters for the server restart correctly. mysql-test/suite/rpl/t/rpl_report_port-master.opt: Option file for the master.
-
- 24 Apr, 2012 1 commit
-
-
Inaam Rana authored
rb://1043 approved by: Sunny Bains Two internal counters were incremented twice for a single operation. The counters are: srv_buf_pool_flushed and buf_lru_flush_page_count.
-
- 23 Apr, 2012 9 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
Fixed a cmake 2.8.8 compilation problem.
-
Kent Boortz authored
-
Kent Boortz authored
-
Inaam Rana authored
rb://1033 approved by: Marko Makela Check return value from os_aio_init() and refuse to start if it fails.
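An illustrative sketch of the startup check; innodb_aio_init and the message are hypothetical stand-ins for the real os_aio_init() call and InnoDB's startup code.

    #include <cstdio>
    #include <cstdlib>

    static bool innodb_aio_init() {
      // may fail, e.g. if the OS rejects the requested number of AIO slots
      return false;
    }

    int main() {
      // Check the return value and refuse to start instead of continuing with
      // a half-initialised asynchronous I/O subsystem.
      if (!innodb_aio_init()) {
        std::fprintf(stderr, "aio initialisation failed; refusing to start\n");
        return EXIT_FAILURE;
      }
      return EXIT_SUCCESS;
    }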
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Andrei Elkin authored
rpl_auto_increment_bug45679.test is refined because Bug11749859-39934 is not fixed in 5.1.
-
- 21 Apr, 2012 1 commit
-
-
Andrei Elkin authored
-