- 17 May, 2012 7 commits
-
-
Mayank Prasad authored
Issue/Cause: The issue is memory corruption. During the optimization phase, the pattern to be matched in the WHERE clause is prepared. This is done in Item_func_concat::val_str(), which builds the resultant string (tmp_value) and returns a pointer to it. In the caller, Item_func_like::fix_fields(), the pattern is made to point to this string (tmp_value). During further processing, tmp_value gets modified, which leaves the pattern with changed/wrong values.

Fix: Allocate a separate memory location in the caller, copy the value of the resultant string (tmp_value) into it, and make the pattern point to that. This ensures that no further changes to tmp_value affect the pattern.
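A minimal sketch of the copy-out pattern the fix describes. Item_func_like::fix_fields(), String and Query_arena::memdup() are real names in the MySQL source tree, but the snippet below is illustrative, not the actual patch:

    /* Sketch: duplicate the evaluated pattern into statement-lifetime
       memory instead of aliasing the reusable tmp_value buffer.
       "pattern" stands for the caller-owned String. */
    String tmp_value;
    String *res= args[1]->val_str(&tmp_value);  /* may point into tmp_value */
    if (res)
    {
      /* memdup() allocates on the statement MEM_ROOT, so the copy
         outlives any later reuse of tmp_value. */
      char *buf= (char*) thd->memdup(res->ptr(), res->length());
      pattern.set(buf, res->length(), res->charset());
    }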
-
Gopal Shankar authored
-
Gopal Shankar authored
PROBLEM: Threads end up in a deadlock due to locks acquired as described below.

con1: Runs a query on a table. It is important that this SELECT must back off while trying to open t1 and enter wait_for_condition(). The SELECT is then blocked trying to lock mysys_var->mutex, which is held by con3. The very significant fact here is that mysys_var->current_mutex will still point to LOCK_open, even though LOCK_open is no longer held by con1 at this point.

con2: Tries dropping the table used in con1, or queries some table. It will hold LOCK_open and be blocked trying to lock kernel_mutex, which is held by con4.

con3: Tries killing the query run by con1. It will hold THD::LOCK_thd_data belonging to con1 while trying to lock mysys_var->current_mutex belonging to con1. But current_mutex points to LOCK_open, which is held by con2.

con4: Gets the InnoDB engine status. It will hold kernel_mutex while trying to lock THD::LOCK_thd_data belonging to con1, which is held by con3.

So while technically only con2, con3 and con4 participate in the deadlock, con1's mysys_var->current_mutex pointing to LOCK_open is a vital component of the deadlock.

CYCLE = (THD::LOCK_thd_data -> LOCK_open -> kernel_mutex -> THD::LOCK_thd_data)

FIX: LOCK_thd_data has the responsibility of protecting:
1) thd->query and thd->query_length
2) VIO
3) thd->mysys_var (used by the KILL statement and shutdown)
4) THD during thread delete

Among the above responsibilities, 1), 2) and (3,4) seem to be three independent groups. If a different lock owns responsibility for (3,4), the deadlock cycle above can be avoided. This fix introduces LOCK_thd_kill to handle responsibilities (3,4), which eliminates the deadlock issue.

Note: The problem is not found in 5.5. The introduction of the MDL subsystem moved metadata locking responsibility from the TDC/TC to the MDL subsystem. Due to this, the responsibility of LOCK_open is reduced. As the use of LOCK_open was removed in open_table() and mysql_rm_table(), the above-mentioned CYCLE does not form. Revision IDs for the changes:
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
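The cycle can be reproduced in miniature with three plain mutexes. The sketch below is standard C++11, with the mutex names standing in for THD::LOCK_thd_data, LOCK_open and InnoDB's kernel_mutex; it is illustrative only, not server code:

    /* Sketch: the deadlock cycle reduced to three std::mutex objects. */
    #include <mutex>
    #include <thread>

    std::mutex lock_thd_data, lock_open, kernel_mutex;

    void con3_kill_query()      /* holds LOCK_thd_data, wants LOCK_open */
    {
      std::lock_guard<std::mutex> a(lock_thd_data);
      std::lock_guard<std::mutex> b(lock_open);     /* con2 holds it */
    }
    void con2_drop_table()      /* holds LOCK_open, wants kernel_mutex */
    {
      std::lock_guard<std::mutex> a(lock_open);
      std::lock_guard<std::mutex> b(kernel_mutex);  /* con4 holds it */
    }
    void con4_engine_status()   /* holds kernel_mutex, wants LOCK_thd_data */
    {
      std::lock_guard<std::mutex> a(kernel_mutex);
      std::lock_guard<std::mutex> b(lock_thd_data); /* con3 holds it */
    }

    int main()
    {
      std::thread t2(con2_drop_table), t3(con3_kill_query),
                  t4(con4_engine_status);
      /* With the right interleaving the joins never return: the three
         threads close the cycle LOCK_thd_data -> LOCK_open ->
         kernel_mutex -> LOCK_thd_data. Splitting responsibility (3,4)
         into a separate LOCK_thd_kill removes one edge of the cycle. */
      t2.join(); t3.join(); t4.join();
      return 0;
    }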
-
mysql-builder@oracle.com authored
No commit message
-
mysql-builder@oracle.com authored
No commit message
-
mysql-builder@oracle.com authored
No commit message
-
mysql-builder@oracle.com authored
No commit message
-
- 16 May, 2012 12 commits
-
-
Annamalai Gurusami authored
-
Annamalai Gurusami authored
The following scenario crashes our mysql server:

1. set global innodb_file_per_table=1;
2. create table t1(c1 int) engine=innodb;
3. alter table t1 discard tablespace;
4. alter table t1 add unique index(c1);

Step 4 crashes the server. This patch introduces a check for a discarded tablespace to avoid the crash.

rb://1041 approved by Marko Makela
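A sketch of the kind of guard the patch describes. dict_table_t::ibd_file_missing is the InnoDB flag set when a tablespace is discarded; the call site and error code below are illustrative assumptions, not the actual patch:

    /* Sketch: before building the new index, refuse the DDL if the
       table's .ibd file was discarded, instead of dereferencing
       tablespace pages that no longer exist. */
    if (prebuilt->table->ibd_file_missing)
    {
      return(HA_ERR_NO_SUCH_TABLE);  /* illustrative error code */
    }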
-
Annamalai Gurusami authored
-
Venkata Sidagam authored
-
Venkata Sidagam authored
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with an FT index.
2) Enable concurrent inserts.
3) In multiple threads, repeatedly do the operations below:
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. in non-boolean/boolean mode
After some time we could observe two different assert core dumps.

Analysis:
---------
1) Assert core dump at key_read_cache(): Two select threads operate in parallel on the same key root block. The 1st select thread's block->status is set to BLOCK_ERROR because my_pread() in read_block() returns '0'. The truncate made the index file size 1024, and pread was asked to read a block of count bytes (1024 bytes) from offset 1024, which it cannot read since that is end of file; it returns '0', sets "my_errno= HA_ERR_FILE_TOO_SHORT", and key_file_length and key_root[0] are the same, i.e. 1024. Since the block status has BLOCK_ERROR, the 1st select thread enters free_block() and waits on the conditional mutex, setting the status to BLOCK_REASSIGNED and going into wait_on_readers(). The other select thread works on the same block, sees the status as BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED and asserts the server.

2) Assert core dump at key_write_cache(): One select thread and one insert thread. The select thread unlocks 'keycache->cache_lock', which allows other threads to continue, gets the pread() return value of '0' (see the explanation above), then tries to reacquire 'keycache->cache_lock' and waits for the lock. The insert thread requests the block; the block is assigned from the hash list, its page_status is set to 'PAGE_WAIT_TO_BE_READ', and it goes into read_block(), waiting in the queue since other threads are performing reads on the same block. The select thread that was waiting for the 'keycache->cache_lock' mutex in read_block() continues after getting the my_pread() value of '0', sets the block status to BLOCK_ERROR, and goes into free_block() and then wait_for_readers(). Now the insert thread wakes up, continues, sees that block->status is not BLOCK_READ, and asserts.

Fix:
----
In the full-text code, multiple readers of the index file are not guarded. Hence the code below was added in _ft2_search() and walk_and_match(). To lock the key_root in _ft2_search():

    if (info->s->concurrent_insert)
      mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:

    if (info->s->concurrent_insert)
      mysql_rwlock_unlock(&share->key_root_lock[0]);
-
Venkata Sidagam authored
-
Annamalai Gurusami authored
The following scenario crashes our mysql server:

1. set global innodb_file_per_table=1;
2. create table t1(c1 int) engine=innodb;
3. alter table t1 discard tablespace;
4. alter table t1 add unique index(c1);

Step 4 crashes the server. This patch introduces a check for a discarded tablespace to avoid the crash.

rb://1041 approved by Marko Makela
-
Annamalai Gurusami authored
-
Venkata Sidagam authored
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with an FT index.
2) Enable concurrent inserts.
3) In multiple threads, repeatedly do the operations below:
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. in non-boolean/boolean mode
After some time we could observe two different assert core dumps.

Analysis:
---------
1) Assert core dump at key_read_cache(): Two select threads operate in parallel on the same key root block. The 1st select thread's block->status is set to BLOCK_ERROR because my_pread() in read_block() returns '0'. The truncate made the index file size 1024, and pread was asked to read a block of count bytes (1024 bytes) from offset 1024, which it cannot read since that is end of file; it returns '0', sets "my_errno= HA_ERR_FILE_TOO_SHORT", and key_file_length and key_root[0] are the same, i.e. 1024. Since the block status has BLOCK_ERROR, the 1st select thread enters free_block() and waits on the conditional mutex, setting the status to BLOCK_REASSIGNED and going into wait_on_readers(). The other select thread works on the same block, sees the status as BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED and asserts the server.

2) Assert core dump at key_write_cache(): One select thread and one insert thread. The select thread unlocks 'keycache->cache_lock', which allows other threads to continue, gets the pread() return value of '0' (see the explanation above), then tries to reacquire 'keycache->cache_lock' and waits for the lock. The insert thread requests the block; the block is assigned from the hash list, its page_status is set to 'PAGE_WAIT_TO_BE_READ', and it goes into read_block(), waiting in the queue since other threads are performing reads on the same block. The select thread that was waiting for the 'keycache->cache_lock' mutex in read_block() continues after getting the my_pread() value of '0', sets the block status to BLOCK_ERROR, and goes into free_block() and then wait_for_readers(). Now the insert thread wakes up, continues, sees that block->status is not BLOCK_READ, and asserts.

Fix:
----
In the full-text code, multiple readers of the index file are not guarded. Hence the code below was added in _ft2_search() and walk_and_match(). To lock the key_root in _ft2_search():

    if (info->s->concurrent_insert)
      mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:

    if (info->s->concurrent_insert)
      mysql_rwlock_unlock(&share->key_root_lock[0]);
-
Olav Sandstaa authored
RESULTS ON IN() & NOT IN() COMP #3

This bug causes a wrong result in mysql-trunk when ICP is used, and bad performance in mysql-5.5 and mysql-trunk. Using the query from the bug report to explain what happens and what causes the wrong result when ICP is enabled:

1. The t3 table contains four records. The outer query will read these, and for each of them it will execute the subquery.

2. Before the first execution of the subquery, it is optimized. In this case what matters is what happens to the first table, t1:
- make_join_select() calls the range optimizer, which decides that t1 should be accessed using a range scan on the k1 index, and creates a QUICK_RANGE_SELECT object for this.
- As the last part of optimization, the ICP code pushes the condition down to the storage engine for table t1 on the k1 index. This produces the following information in the explain for this table:

    2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort

Note the use of filesort.

3. Due to the need for sorting, the first execution of the subquery (among other things):
a. Calls create_sort_index(), which in turn calls find_all_keys().
b. find_all_keys() reads the required keys for all qualifying rows from the storage engine. To do this it checks whether it has a quick-select for the table, and uses the quick-select for reading records. In this case it reads four records from the storage engine (based on the range criteria). The storage engine evaluates the pushed index condition for each record.
c. At the end of create_sort_index() there is code that cleans up a lot of stuff on the join tab. One of the things cleaned up is the select object. As a result, the quick-select object created in make_join_select() is deleted.

4. The second execution of the subquery does the same as the first, but the result is different:
a. It calls create_sort_index(), which again calls find_all_keys() (same as the first execution).
b. find_all_keys() reads the keys from the storage engine. To do this it checks whether it has a quick-select for the table. Now there is NO quick-select object(!), since it was deleted in step 3c. So find_all_keys() falls back to reading the table using a table scan. Instead of reading the four relevant records in the range, it reads the entire table (6 records). It then evaluates the table's condition (and here it goes wrong). Since the entire condition was pushed down to the storage engine using ICP, all 6 records qualify. (Note that the storage engine will not evaluate the pushed index condition in this case, since it was pushed for the k1 index and we are now doing a table scan with no index in use.) The result is that six qualifying key values are returned instead of four, due to not evaluating the table's condition.
c. As above.

5. The two last executions of the subquery produce wrong results for the same reason.

Summary: The problem occurs because all but the first execution of the subquery are done as a table scan without evaluating the table's condition (which was pushed to the storage engine on a different index). This is caused by the create_sort_index() function deleting the quick-select object that should have been used for executing the subquery as a range scan. Note that in addition to causing wrong results, this bug can also cause bad performance due to executing the subquery with a table scan instead of a range scan. This is an issue in MySQL 5.5.

The fix for this problem is to prevent the quick-select object that the optimizer created from being deleted when create_sort_index() does clean-up of the join tab. This ensures that the quick-select object and the corresponding pushed index condition remain available and are used by all subsequent executions of the subquery.
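A sketch of the shape of that fix: detach the quick-select before the clean-up deletes the SQL_SELECT's contents, then re-attach it for the next execution. SQL_SELECT::quick and SQL_SELECT::cleanup() are real names in the 5.5 sources, but the detach/re-attach framing is illustrative, not the actual patch:

    /* Sketch: keep the optimizer-created quick-select alive across the
       create_sort_index() clean-up so re-executions still range-scan. */
    QUICK_SELECT_I *saved_quick= select->quick; /* made in make_join_select */
    select->quick= NULL;        /* so cleanup() will not delete it */
    select->cleanup();
    select->quick= saved_quick; /* reused by the next subquery execution */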
-
mysql-builder@oracle.com authored
No commit message
-
Annamalai Gurusami authored
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert statement like "insert into t values (1),(2),(3)" is executed, the auto-increment values assigned to these three rows are expected to be contiguous. In the given lock mode (innodb_autoinc_lock_mode=1), the auto-inc lock is released before the end of the statement. So, to keep the auto-increment values contiguous for a given statement, we need to reserve them at the beginning of the statement.

rb://1074 approved by Alexander Nozdrin
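A sketch of the reservation idea: when the statement's row count is known up front, claim the whole range under the per-table autoinc mutex in one step. dict_table_t's autoinc field and autoinc mutex exist in InnoDB, but the helper below is an illustrative assumption, not the actual patch:

    /* Sketch: reserve n_rows contiguous auto-increment values at
       statement start, so releasing the autoinc lock early
       (innodb_autoinc_lock_mode=1) cannot interleave other inserts. */
    ulonglong reserve_autoinc_range(dict_table_t* table, ulonglong n_rows)
    {
      mutex_enter(&table->autoinc_mutex);
      ulonglong first= table->autoinc;    /* next free value */
      table->autoinc= first + n_rows;     /* claim the whole range */
      mutex_exit(&table->autoinc_mutex);
      return(first);  /* rows get first .. first + n_rows - 1 */
    }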
-
- 15 May, 2012 8 commits
-
-
Nuno Carvalho authored
Automerge from mysql-5.1 into mysql-5.5.
-
Nuno Carvalho authored
Improved random number filtering verification on rpl_filter_tables_not_exist test.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_table_replace_index_in_foreign_list(): Replace the dropped index also in the foreign key constraints of child tables that reference this table.

row_ins_check_foreign_constraint(): If the underlying index is missing, refuse the operation.

rb:1051 approved by Jimmy Yang
-
Georgi Kodinov authored
-
Georgi Kodinov authored
Applied the fix that updates yaSSL to 2.2.1 and fixes parsing this particular certificate. Added a test case with the certificate itself.
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 14 May, 2012 1 commit
-
-
Joerg Bruehe authored
-
- 10 May, 2012 2 commits
-
-
Annamalai Gurusami authored
-
Annamalai Gurusami authored
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs a table handler clone. InnoDB does not provide a clone operation; ha_innobase::clone() does not exist, and handler::clone() does not take care of ha_innobase->prebuilt->select_lock_type. Because of this, for one index we did a locking read while for the other index we did a non-locking (consistent) read.

The patch introduces the ha_innobase::clone() member function, implemented similarly to ha_myisam::clone(). It calls the base class handler::clone(), then performs any additional operations required and sets ha_innobase->prebuilt->select_lock_type correctly.

rb://1060 approved by Marko
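A sketch of the member function as described, modeled on the ha_myisam::clone() pattern; the body matches what the message outlines but is illustrative, not verbatim from the patch:

    /* Sketch: clone via the base class, then copy the lock type so both
       index scans of a ROR-merged read use the same (locking vs.
       consistent) read mode. */
    handler*
    ha_innobase::clone(const char* name, MEM_ROOT* mem_root)
    {
      ha_innobase* new_handler=
        static_cast<ha_innobase*>(handler::clone(name, mem_root));
      if (new_handler)
        new_handler->prebuilt->select_lock_type= prebuilt->select_lock_type;
      return(new_handler);
    }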
-
- 09 May, 2012 1 commit
-
-
Georgi Kodinov authored
-
- 08 May, 2012 1 commit
-
-
Sunanda Menon authored
-
- 07 May, 2012 3 commits
-
-
Joerg Bruehe authored
This is a weave merge, but without any conflicts. In 14 source files, the copyright year needed to be updated to 2012.
-
Venkata Sidagam authored
CAUSES RESTORE PROBLEM

Merging the fix from mysql-5.1 to mysql-5.5.
-
Venkata Sidagam authored
CAUSES RESTORE PROBLEM

Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and slow_log tables. That is because of the fix for Bug#26121. Hence, after dropping the mysql database and applying the dump with logging enabled, "'general_log' table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log and slow_log tables, because the data dump of those tables takes LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables, take only the metadata dump, which does not need LOCKS. As part of fixing the issue we came up with the algorithm below.

Design before the fix:
1) The mysql database has tables like db, event, ... general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped "DROP TABLE" for those tables: "DROP TABLE" fails for them when logging is enabled, and since the customer applies the dump with logging enabled, a dump containing "DROP TABLE" would fail. Hence, the "DROP TABLE" statements were removed for those tables.

After the fix, "Table 'mysql.general_log' doesn't exist" errors may still be observed initially; in the customer scenario the mysql database is dropped while logging is enabled, so those errors are expected. Once the dump taken before the "drop database mysql" is applied, the errors no longer occur.
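A self-contained sketch of the rewrite described above (valid C++; the helper name is an illustrative assumption, not mysqldump's actual code):

    #include <cstdio>
    #include <cstring>

    /* Sketch: emit the SHOW CREATE TABLE text for a log table, turning
       "CREATE TABLE" into "CREATE TABLE IF NOT EXISTS" so restoring
       cannot fail when the table already exists. */
    static void emit_log_table_ddl(FILE* out, const char* create_stmt)
    {
      static const char prefix[]= "CREATE TABLE ";
      if (strncmp(create_stmt, prefix, sizeof(prefix) - 1) == 0)
        fprintf(out, "CREATE TABLE IF NOT EXISTS %s;\n",
                create_stmt + sizeof(prefix) - 1);
      else
        fprintf(out, "%s;\n", create_stmt);  /* fallback: emit as-is */
    }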
-
- 04 May, 2012 2 commits
-
-
Venkata Sidagam authored
CAUSES RESTORE PROBLEM

Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and slow_log tables. That is because of the fix for Bug#26121. Hence, after dropping the mysql database and applying the dump with logging enabled, "'general_log' table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log and slow_log tables, because the data dump of those tables takes LOCKS, which is not allowed for log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables, take only the metadata dump, which does not need LOCKS. As part of fixing the issue we came up with the algorithm below.

Design before the fix:
1) The mysql database has tables like db, event, ... general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped "DROP TABLE" for those tables: "DROP TABLE" fails for them when logging is enabled, and since the customer applies the dump with logging enabled, a dump containing "DROP TABLE" would fail. Hence, the "DROP TABLE" statements were removed for those tables.

After the fix, "Table 'mysql.general_log' doesn't exist" errors may still be observed initially; in the customer scenario the mysql database is dropped while logging is enabled, so those errors are expected. Once the dump taken before the "drop database mysql" is applied, the errors no longer occur.
-
Annamalai Gurusami authored
In Perl, the keyword to exit a loop is "last" and not "break". Fixing the failing test case.
-
- 27 Apr, 2012 3 commits
-
-
Alexander Nozdrin authored
TO "LOCALHOST" IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED. Previous commit comments were wrong. The default value has always been NULL. The original patch for Bug#12762885 just makes it visible in the logs. This patch uses "0.0.0.0" string if bind-address is not set.
-
Alexander Nozdrin authored
- alexander.nozdrin@oracle.com-20120427151428-7llk1mlwx8xmbx0t
- alexander.nozdrin@oracle.com-20120427144227-kltwiuu8snds4j3l
-
Alexander Nozdrin authored
IF LOCALHOST IS BOTH IPV4/IPV6 ENABLED.

The original patch removed the default value of the bind-address option, so the default value became NULL. By coincidence, NULL resolves to 0.0.0.0 and ::, and since the server chooses the first IPv4 address, 0.0.0.0 is chosen. So there was no change in behaviour. This patch restores the default value of the bind-address option to "0.0.0.0".
-