- 26 Jun, 2012 1 commit
Kent Boortz authored
Fixed a multiple definition of 'THD::clear_error()' in (at least) libmysqld.a(lib_sql.o) and libmysqld.a(libfederated_a-ha_federated.o). Patch provided by Ramil Kalimullin.
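A minimal sketch of the duplicate-symbol pattern behind this kind of link error (hypothetical single-file illustration, not the actual MySQL sources): a member function defined outside the class body in a header, without 'inline', produces one external definition per translation unit that includes it, and the archive then contains the symbol twice.

    // thd.h (hypothetical) -- included by lib_sql.cc and ha_federated.cc
    struct THD {
      int error_code;
      void clear_error();                  // declaration only
    };
    // BUG: non-inline definition in a header; every including .cc file
    // emits its own 'THD::clear_error()' symbol into libmysqld.a.
    void THD::clear_error() { error_code = 0; }

    // Fix: either mark the definition 'inline' ...
    //   inline void THD::clear_error() { error_code = 0; }
    // ... or move it into exactly one .cc file.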
- 21 Jun, 2012 1 commit
Joerg Bruehe authored
- 20 Jun, 2012 2 commits
Kent Boortz authored
Kent Boortz authored
- 19 Jun, 2012 1 commit
Harin Vadodaria authored
INC_HOST_ERRORS() IS CALLED.

Description: Reverting patch 3755 for bug#11753779.
- 15 Jun, 2012 1 commit
unknown authored
- 14 Jun, 2012 1 commit
unknown authored
- 13 Jun, 2012 1 commit
Harin Vadodaria authored
INC_HOST_ERRORS() IS CALLED.

Issue: The sequence of calls to inc_host_errors() and reset_host_errors() required some changes in order to maintain a correct connection error count.

Solution: The call to reset_host_errors() is shifted to a location after which no calls to inc_host_errors() are made.
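A standalone sketch of that ordering rule (simplified stand-in counters and a hypothetical post_auth_check(); not the server's connection code):

    #include <cstdio>

    static int connection_errors = 0;                 // per-host error count
    static void inc_host_errors()   { ++connection_errors; }
    static void reset_host_errors() { connection_errors = 0; }

    // Hypothetical late validation step that can still fail.
    static bool post_auth_check(bool ok) { return ok; }

    static bool handle_connection(bool auth_ok) {
      // Buggy order would reset here, before a possible failure below,
      // so the count no longer reflects whole failed attempts.
      if (!post_auth_check(auth_ok)) {
        inc_host_errors();                            // failure path
        return false;
      }
      reset_host_errors();  // fixed: reset only once no further
                            // inc_host_errors() calls can happen
      return true;
    }

    int main() {
      handle_connection(false);
      handle_connection(false);
      std::printf("after two failures: %d\n", connection_errors);  // 2
      handle_connection(true);
      std::printf("after a success:    %d\n", connection_errors);  // 0
    }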
- 12 Jun, 2012 1 commit
Manish Kumar authored
Problem
========
Replication breaks if the event length exceeds the size of the master Dump thread's max_allowed_packet. This failure occurs because the event length plus the max_event_header length exceeds the Dump thread's max_allowed_packet, which causes the Dump thread to break replication and throw an error. That can happen e.g. with row-based replication in an Update_rows event.

Fix
====
The problem is fixed in two steps:
1) The Dump thread's limit for reading events is raised to the upper limit, i.e. the Dump thread reads whatever gets logged in the binary log.
2) On the slave side, the max_allowed_packet for the slave's threads (IO/SQL) is increased to 1GB. This is done using the new server option slave_max_allowed_packet, which lets the DBA regulate the max_allowed_packet of the slave threads (IO/SQL) and facilitates sending large packets from the master to the slave. The slave then receives the large packets and applies them successfully.

sql/log_event.cc: After the fix, the packet-size check is evaluated against the new option slave_max_allowed_packet instead of max_allowed_packet.
sql/log_event.h: Added the new option in the log_event.h file.
sql/mysqld.cc: Added a new option to the server.
sql/slave.cc: Increased the session max_allowed_packet to a large value, i.e. not taking the global max_allowed_packet into consideration, for the slave's threads.
sql/sql_repl.cc: The Dump thread's max_allowed_packet is set to the upper limit, which makes it independent; it now reads whatever gets logged in the binary log.
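A standalone sketch of the two-step scheme (simplified stand-in types; the 1GB constant and the option name come from the message above, everything else is hypothetical):

    #include <cstdint>
    #include <cstdio>

    // Hard upper limit on any replication packet, per the fix (1GB).
    static const uint32_t MAX_MAX_ALLOWED_PACKET = 1024U * 1024U * 1024U;

    static uint32_t global_max_allowed_packet = 4U * 1024U * 1024U;  // 4MB
    static uint32_t slave_max_allowed_packet  = MAX_MAX_ALLOWED_PACKET;

    struct SessionVars { uint32_t max_allowed_packet; };

    // Step 1: the Dump thread reads whatever was logged, up to the
    // hard upper limit, independent of the global option.
    static void init_dump_thread(SessionVars *v) {
      v->max_allowed_packet = MAX_MAX_ALLOWED_PACKET;
    }

    // Step 2: the slave IO/SQL threads take their session limit from
    // slave_max_allowed_packet, not from the global option.
    static void init_slave_thread(SessionVars *v) {
      v->max_allowed_packet = slave_max_allowed_packet;
    }

    static bool event_fits(const SessionVars *v, uint32_t event_len) {
      return event_len <= v->max_allowed_packet;
    }

    int main() {
      SessionVars dump, slave;
      init_dump_thread(&dump);
      init_slave_thread(&slave);
      const uint32_t big_event = 16U * 1024U * 1024U;  // > global 4MB limit
      std::printf("dump thread sends it:    %s\n",
                  event_fits(&dump, big_event) ? "yes" : "no");
      std::printf("slave thread applies it: %s\n",
                  event_fits(&slave, big_event) ? "yes" : "no");
      (void)global_max_allowed_packet;
    }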
- 05 Jun, 2012 1 commit
Tor Didriksen authored
Patch for 5.1 and 5.5: fix a typo in the byte comparison in rr_cmp().
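An illustration of how such a typo looks in a hand-unrolled byte comparison (hypothetical code in the spirit of rr_cmp(), not its actual body): a single copy-pasted index makes the comparator disagree with memcmp() for positions differing only in that byte.

    #include <cstdio>

    typedef unsigned char uchar;

    // Unrolled comparison of two 8-byte row positions (illustrative only).
    static int rr_cmp_fixed(const uchar *a, const uchar *b) {
      if (a[0] != b[0]) return (int)a[0] - (int)b[0];
      if (a[1] != b[1]) return (int)a[1] - (int)b[1];
      if (a[2] != b[2]) return (int)a[2] - (int)b[2];
      // The classic typo in code like this is comparing, say, a[3]
      // against b[4] on one line; sorting then breaks on byte 3.
      if (a[3] != b[3]) return (int)a[3] - (int)b[3];
      if (a[4] != b[4]) return (int)a[4] - (int)b[4];
      if (a[5] != b[5]) return (int)a[5] - (int)b[5];
      if (a[6] != b[6]) return (int)a[6] - (int)b[6];
      return (int)a[7] - (int)b[7];
    }

    int main() {
      uchar a[8] = {0,0,0,1,0,0,0,0}, b[8] = {0,0,0,2,0,0,0,0};
      std::printf("%d\n", rr_cmp_fixed(a, b));  // negative: a sorts first
    }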
- 01 Jun, 2012 1 commit
Annamalai Gurusami authored
WHEN KILLING

Suppose there is a query waiting for a lock. If the user kills this query, then the "Got error -1 when reading table" error message must not be logged in the server log file. Since this is a user-requested interruption, no spurious error message must be logged in the server log. This patch removes the error message from the log.

Approved by joh and tatjana.
- 31 May, 2012 2 commits
- 30 May, 2012 2 commits
unknown authored
No commit message
Rohit Kalhans authored
- 29 May, 2012 1 commit
Rohit Kalhans authored
Problem: mysqlbinlog exits without any error code in case of a file write error. This is because the calls to the Log_event::print() method do not return a value, and thus any errors were being ignored.

Resolution: We resolve this problem by checking for IO_CACHE::error == -1 after every call to Log_event::print() and terminating further execution.

client/mysqlbinlog.cc: Handled error conditions during event->print() calls; added a check for errors in end_io_cache().
mysys/my_write.c: Added debug code to simulate a file write error; the error returned is ENOSPC, i.e. no space on the disk.
sql/log_event.cc: Added debug code to simulate a file write error by reducing the size of the IO cache.
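A simplified sketch of that resolution (stand-in IO_CACHE and Log_event types; the real code is in client/mysqlbinlog.cc): since print() returns void, the caller inspects the cache's error flag after each call and stops with a real exit code.

    #include <cstdio>
    #include <cstdlib>

    struct IO_CACHE { int error; };        // -1 => a buffered write failed
    struct Log_event {
      // Like Log_event::print(): returns void, so a write failure can
      // only be seen through cache->error afterwards.
      void print(IO_CACHE *cache) const { (void)cache; }
    };

    static int process_event(Log_event *ev, IO_CACHE *cache) {
      ev->print(cache);
      if (cache->error == -1) {
        std::fprintf(stderr, "error writing event (disk full?)\n");
        return 1;                          // stop further execution
      }
      return 0;
    }

    int main() {
      IO_CACHE cache = {0};
      Log_event ev;
      if (process_event(&ev, &cache))
        return EXIT_FAILURE;  // previously the tool still exited with 0
      return EXIT_SUCCESS;
    }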
- 24 May, 2012 1 commit
Inaam Rana authored
rb://1088, approved by Marko Makela.

This bug was introduced in the early stages of the plugin. We were not checking for an implicit lock on a secondary index record for the trx_id that is stamped on the current version of the clustered index record, in the case where the clustered index record has a previous delete-marked version.
- 21 May, 2012 2 commits
Annamalai Gurusami authored
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert statement like "insert into t values (1),(2),(3)" is executed, the autoincrement values assigned to these three rows are expected to be contiguous. In the given lock mode (innodb_autoinc_lock_mode=1), the auto-inc lock will be released before the end of the statement. So to make the autoincrement values contiguous for a given statement, we need to reserve the auto-inc values at the beginning of the statement.

Modified the fix based on a review comment by Svoj.
Manish Kumar authored
Problem
========
SQL statements close to the size of max_allowed_packet produce binary log events larger than max_allowed_packet. This failure occurs because the event length exceeds the total of max_allowed_packet plus the max_event_header length, and the master Dump thread is then unable to send the packet on to the slave. That can happen e.g. with row-based replication in an Update_rows event.

Fix
====
The problem was fixed by increasing the max_allowed_packet for the slave's threads (IO/SQL) to 1GB. This is done using the new server option included, which is used to regulate the max_allowed_packet of the slave threads (IO/SQL). The slave then receives the large packets and applies them successfully.

sql/log_event.h: Added the new option in the log_event.h file.
sql/mysqld.cc: Added a new option to the server.
sql/slave.cc: Increased the session max_allowed_packet to a large value, i.e. not taking the global max_allowed_packet into consideration, for the slave's threads.
- 18 May, 2012 1 commit
Rohit Kalhans authored
Problem: After the fix for Bug#12589870, a new field that stores the length of the db name was added in the buffer that stores the query to be executed. Unlike the plain user session, the replication execution did not allocate the necessary chunk in the Query_log_event constructor. This caused an invalid read while accessing this field.

Solution: We fix this problem by allocating the necessary chunk in the buffer created in Query_log_event::Query_log_event() and storing the length of the database name there.

sql/log_event.cc: Added a new field in the buffer created in the Query_log_event constructor and stored the length of the database name.
- 17 May, 2012 3 commits
Gopal Shankar authored
PROBLEM:
Threads end up in a deadlock due to locks acquired as described below.

con1: Runs a query on a table. It is important that this SELECT must back off while trying to open t1 and enter wait_for_condition(). The SELECT is then blocked trying to lock mysys_var->mutex, which is held by con3. The very significant fact here is that mysys_var->current_mutex will still point to LOCK_open, even though LOCK_open is no longer held by con1 at this point.

con2: Tries dropping the table used in con1, or queries some table. It will hold LOCK_open and be blocked trying to lock kernel_mutex, which is held by con4.

con3: Tries killing the query run by con1. It will hold THD::LOCK_thd_data belonging to con1 while trying to lock mysys_var->current_mutex belonging to con1. But current_mutex points to LOCK_open, which is held by con2.

con4: Gets the InnoDB engine status. It will hold kernel_mutex while trying to lock THD::LOCK_thd_data belonging to con1, which is held by con3.

So while technically only con2, con3 and con4 participate in the deadlock, con1's mysys_var->current_mutex pointing to LOCK_open is a vital component of the deadlock.

CYCLE = (THD::LOCK_thd_data -> LOCK_open -> kernel_mutex -> THD::LOCK_thd_data)

FIX:
LOCK_thd_data has the responsibility of protecting:
1) thd->query and thd->query_length
2) VIO
3) thd->mysys_var (used by the KILL statement and shutdown)
4) THD during thread delete

Among the above responsibilities, 1), 2) and (3,4) appear to be three independent groups. If a different lock owns responsibility for (3,4), the above-mentioned deadlock cycle can be avoided. This fix introduces LOCK_thd_kill to handle responsibilities (3,4), which eliminates the deadlock issue.

Note: The problem is not found in 5.5. The introduction of the MDL subsystem moved the metadata-locking responsibility from TDC/TC to MDL, which reduced the responsibility of LOCK_open. As the use of LOCK_open is removed in open_table() and mysql_rm_table(), the above-mentioned CYCLE does not form there.

Revision IDs for those changes:
open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
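A standalone sketch of the lock split (std::mutex stand-ins and a stripped-down THD; not the server's definitions): the KILL path now takes only LOCK_thd_kill, so THD::LOCK_thd_data drops out of the cycle above.

    #include <mutex>
    #include <string>

    struct THD {
      std::mutex LOCK_thd_data;  // still protects thd->query and the VIO
      std::mutex LOCK_thd_kill;  // new: protects mysys_var / thread delete
      std::string query;
      void *mysys_var = nullptr;
    };

    // KILL path: holds LOCK_thd_kill only, so it can wait on the victim's
    // mysys_var->current_mutex (LOCK_open) without holding LOCK_thd_data,
    // breaking THD::LOCK_thd_data -> LOCK_open -> kernel_mutex -> ...
    void awake(THD *victim) {
      std::lock_guard<std::mutex> g(victim->LOCK_thd_kill);
      // ... signal victim->mysys_var's condition here ...
    }

    // Status path (e.g. an engine status dump): reads the query text
    // under LOCK_thd_data without contending with the KILL path.
    std::string snapshot_query(THD *thd) {
      std::lock_guard<std::mutex> g(thd->LOCK_thd_data);
      return thd->query;
    }

    int main() {
      THD thd;
      thd.query = "SELECT 1";
      awake(&thd);
      return snapshot_query(&thd).empty() ? 1 : 0;
    }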
unknown authored
No commit message
unknown authored
No commit message
- 16 May, 2012 3 commits
Annamalai Gurusami authored
The following scenario crashes our mysql server:
1. set global innodb_file_per_table=1;
2. create table t1(c1 int) engine=innodb;
3. alter table t1 discard tablespace;
4. alter table t1 add unique index(c1);
Step 4 crashes the server. This patch introduces a check on the discarded tablespace to avoid the crash.

rb://1041, approved by Marko Makela.
Venkata Sidagam authored
FULLTEXT INDEX AND CONCURRENT DML.

Problem Statement:
------------------
1) Create a table with an FT index.
2) Enable concurrent inserts.
3) In multiple threads, repeatedly do the operations below:
   a) truncate table
   b) insert into table ....
   c) select ... match .. against .. in non-boolean/boolean mode
After some time we could observe two different assert core dumps.

Analysis:
---------
1) Assert core dump at key_read_cache(): Two select threads operate in parallel on the same key root block. The 1st select thread's block->status is set to BLOCK_ERROR because the my_pread() in read_block() returns '0'. The truncate made the index file size 1024, and pread was asked to get a block of count bytes (1024 bytes) from offset 1024, which it cannot read since it is at "end of file", so it returns '0', setting my_errno= HA_ERR_FILE_TOO_SHORT; key_file_length and key_root[0] are the same, i.e. 1024. Since the block status has BLOCK_ERROR, the 1st select thread enters free_block() and waits on a conditional mutex, setting the status to BLOCK_REASSIGNED and going into wait_on_readers(). The other select thread works on the same block, sees the status BLOCK_ERROR, enters free_block(), checks for BLOCK_REASSIGNED, and asserts the server.

2) Assert core dump at key_write_cache(): One select thread and one insert thread. The select thread unlocks keycache->cache_lock, which allows other threads to continue, gets the pread() return value of '0' (see the explanation above), then tries to reacquire keycache->cache_lock and waits for the lock. The insert thread requests the block; the block is assigned from the hash list, the page_status is set to PAGE_WAIT_TO_BE_READ, and it goes into read_block() and waits in the queue, since some other threads are performing reads on the same block. The select thread that was waiting for the keycache->cache_lock mutex in read_block() continues after getting the my_pread() value of '0', sets the block status to BLOCK_ERROR, goes to free_block() and into wait_for_readers(). Now the insert thread awakes and continues, checks that block->status is not BLOCK_READ, and asserts.

Fix:
----
In the full-text code, multiple readers of the index file are not guarded. Hence the code below is added in _ft2_search() and walk_and_match() to lock the key_root. In _ft2_search(), to lock:

  if (info->s->concurrent_insert)
    mysql_rwlock_rdlock(&share->key_root_lock[0]);

and to unlock:

  if (info->s->concurrent_insert)
    mysql_rwlock_unlock(&share->key_root_lock[0]);

storage/myisam/ft_boolean_search.c: Since this is a recursive function, to avoid confusion in taking and releasing the locks, renamed _ft2_search() to _ft2_search_internal(). _ft2_search() takes the lock, calls _ft2_search_internal(), and releases the lock in the case of concurrent inserts.
storage/myisam/ft_nlq_search.c: Added the read-lock code in walk_and_match().
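A standalone sketch of the rename pattern described for ft_boolean_search.c (std::shared_mutex standing in for mysql_rwlock_t, and a dummy payload; not the MyISAM code): the public entry point takes the read lock exactly once, and all recursion happens in the *_internal function so the lock is never re-acquired.

    #include <cstdio>
    #include <shared_mutex>

    static std::shared_mutex key_root_lock;  // stand-in: share->key_root_lock[0]
    static bool concurrent_insert = true;    // stand-in: info->s->concurrent_insert

    // Recursive worker: assumes the caller already holds the read lock.
    static int ft2_search_internal(int depth) {
      if (depth == 0) return 0;
      return 1 + ft2_search_internal(depth - 1);  // recurse, no re-locking
    }

    // Public entry point: lock once, recurse, unlock once.
    static int ft2_search(int depth) {
      if (concurrent_insert) key_root_lock.lock_shared();
      int r = ft2_search_internal(depth);
      if (concurrent_insert) key_root_lock.unlock_shared();
      return r;
    }

    int main() { std::printf("%d\n", ft2_search(3)); }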
Annamalai Gurusami authored
INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER

When an insert statement like "insert into t values (1),(2),(3)" is executed, the autoincrement values assigned to these three rows are expected to be contiguous. In the given lock mode (innodb_autoinc_lock_mode=1), the auto-inc lock will be released before the end of the statement. So to make the autoincrement values contiguous for a given statement, we need to reserve the auto-inc values at the beginning of the statement.

rb://1074, approved by Alexander Nozdrin.
- 15 May, 2012 4 commits
Nuno Carvalho authored
Improved the random-number filtering verification in the rpl_filter_tables_not_exist test.
Marko Mäkelä authored
dict_table_replace_index_in_foreign_list(): Replace the dropped index also in the foreign key constraints of child tables that are referencing this table.

row_ins_check_foreign_constraint(): If the underlying index is missing, refuse the operation.

rb:1051, approved by Jimmy Yang.
Georgi Kodinov authored
Applied the fix that updates yaSSL to 2.2.1 and fixes parsing this particular certificate. Added a test case with the certificate itself.
Bjorn Munch authored
- 10 May, 2012 1 commit
Annamalai Gurusami authored
BY A CONCURRENT TRANSACTION

The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs a table handler clone. InnoDB did not provide a clone operation; ha_innobase::clone() was not there, and handler::clone() does not take care of ha_innobase->prebuilt->select_lock_type. Because of this, for one index we do a locking read, while for the other index we were doing a non-locking (consistent) read.

The patch introduces the ha_innobase::clone() member function, implemented similarly to ha_myisam::clone(): it calls the base class handler::clone() and then does any additional operations required, setting ha_innobase->prebuilt->select_lock_type correctly.

rb://1060, approved by Marko.
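A minimal sketch of that clone fix-up (stand-in handler hierarchy, not the real headers; the base clone, which in the server re-opens the table, is modeled by constructing a fresh handler): the derived clone carries over the lock mode that the generic clone does not copy.

    enum lock_type { LOCK_NONE, LOCK_X };

    struct handler {
      virtual handler *clone() = 0;
      virtual ~handler() {}
    };

    struct row_prebuilt { lock_type select_lock_type = LOCK_NONE; };

    struct ha_innobase : handler {
      row_prebuilt *prebuilt;
      ha_innobase() : prebuilt(new row_prebuilt) {}
      ~ha_innobase() override { delete prebuilt; }
      handler *clone() override {
        ha_innobase *copy = new ha_innobase();   // models handler::clone()
        // The extra step the patch adds: both handlers of a merged scan
        // must perform the same (locking) kind of read.
        copy->prebuilt->select_lock_type = prebuilt->select_lock_type;
        return copy;
      }
    };

    int main() {
      ha_innobase h;
      h.prebuilt->select_lock_type = LOCK_X;     // locking read requested
      handler *c = h.clone();
      bool ok = static_cast<ha_innobase *>(c)->prebuilt->select_lock_type
                == LOCK_X;
      delete c;
      return ok ? 0 : 1;
    }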
- 08 May, 2012 1 commit
Sunanda Menon authored
- 07 May, 2012 1 commit
Venkata Sidagam authored
CAUSES RESTORE PROBLEM

Problem Statement:
------------------
mysqldump does not emit dump statements for the general_log and slow_log tables, because of the fix for Bug#26121. Hence, after dropping the mysql database and applying the dump with logging enabled, "'general_log' table not found" errors are logged into the server log file.

Analysis:
---------
As part of the fix for Bug#26121, we skipped dumping the general_log and slow_log tables, because the data dump of those tables takes LOCKS, which is not allowed for log tables.

Fix:
----
We came up with an approach that, instead of taking both the metadata and the data dump for those tables, takes only the metadata dump, which doesn't need LOCKS. As part of fixing the issue we came up with the algorithm below.

Design before the fix:
1) The mysql database has tables like db, event, ... general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ... general_log, ... slow_log ...
2) Skip general_log and slow_log while preparing the tables list.
3) Explicitly call 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables present in the table list and do 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skipped "DROP TABLE" for those tables: "DROP TABLE" fails for these tables if logging is enabled, and the customer applies the dump with logging enabled, so if the dump had "DROP TABLE" it would fail. Hence, the "DROP TABLE" statements for those tables were removed.

After the fix we could initially observe "Table 'mysql.general_log' doesn't exist" errors; that is because in the customer scenario the mysql database is dropped while logging is enabled, so those errors are expected. Once we apply a dump taken before the "drop database mysql", the errors are no longer there.

client/mysqldump.c: In get_table_structure(), added code to skip the DROP TABLE statements for the general_log and slow_log tables, because those statements fail when logging is enabled, and replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make sure the CREATE statement doesn't fail now that the DROP statements are removed. In dump_all_tables_in_db(), added code to call get_table_structure() for the general_log and slow_log tables.
mysql-test/r/mysqldump.result: Added a test as part of the fix for Bug #11754178.
mysql-test/t/mysqldump.test: Added a test as part of the fix for Bug #11754178.
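A sketch of the resulting emit logic (hypothetical helper functions; the real changes are in get_table_structure() and dump_all_tables_in_db() in client/mysqldump.c): log tables get metadata only, no DROP, and a CREATE that cannot fail.

    #include <cstdio>
    #include <cstring>

    static int is_log_table(const char *name) {
      return !std::strcmp(name, "general_log") ||
             !std::strcmp(name, "slow_log");
    }

    static void dump_table_structure(const char *name) {
      if (is_log_table(name)) {
        // No DROP TABLE: it fails while logging is enabled.  And
        // IF NOT EXISTS so the CREATE cannot fail either.
        std::printf("CREATE TABLE IF NOT EXISTS `%s` (/* ... */);\n", name);
        return;  // metadata only: no data dump, so no locks are taken
      }
      std::printf("DROP TABLE IF EXISTS `%s`;\n", name);
      std::printf("CREATE TABLE `%s` (/* ... */);\n", name);
    }

    int main() {
      dump_table_structure("db");           // ordinary table: full dump
      dump_table_structure("general_log");  // log table: metadata only
    }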
- 27 Apr, 2012 1 commit
Yasufumi Kinoshita authored
Fixed so that the timeout is not checked during CHECK TABLE.
- 26 Apr, 2012 1 commit
irana authored
- 23 Apr, 2012 2 commits
Andrei Elkin authored
Andrei Elkin authored
rpl_auto_increment_bug45679.test is refined because Bug#11749859-39934 is not fixed in 5.1.
- 20 Apr, 2012 2 commits
Nuno Carvalho authored
The function mysql_show_binlog_events has a local stack variable 'LOG_INFO linfo;', which is assigned to thd->current_linfo; however, this variable goes out of scope and is destroyed before thd->current_linfo is cleaned up. The problem is solved by moving 'LOG_INFO linfo;' to function scope.
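A standalone illustration of that lifetime bug (stand-in LOG_INFO and THD types): assigning the address of a block-scoped variable to a pointer that outlives the block leaves a dangling pointer; function scope fixes it.

    struct LOG_INFO { int index_file_offset = 0; };
    struct THD { LOG_INFO *current_linfo = nullptr; };

    void show_binlog_events_buggy(THD *thd) {
      {
        LOG_INFO linfo;                  // block scope
        thd->current_linfo = &linfo;
        // ... iterate over events ...
      }                                  // linfo destroyed here...
      // ...while thd->current_linfo still points at it: any reader
      // (e.g. purge deciding which logs are in use) sees garbage.
      thd->current_linfo = nullptr;      // cleanup comes too late
    }

    void show_binlog_events_fixed(THD *thd) {
      LOG_INFO linfo;                    // function scope: outlives all uses
      thd->current_linfo = &linfo;
      // ... iterate over events ...
      thd->current_linfo = nullptr;      // cleared before destruction
    }

    int main() {
      THD thd;
      show_binlog_events_fixed(&thd);
      return thd.current_linfo == nullptr ? 0 : 1;
    }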
Andrei Elkin authored
BUG#11761686: insert_id event is not filtered.

Two issues are covered.

An INSERT into an autoincrement field which is not the first part of a composed primary key is unsafe by autoincrement logging design. The case is specific to the MyISAM engine because InnoDB does not allow such a table definition. However, no warnings and no row-format logging in MIXED mode were done; that is fixed.

Int-, Rand- and User-var log events were not filtered along with their parent query, which made it possible for them to screw up the execution context of the following query. Fixed by deferring their execution until the parent query.

******
Bug#11754117 post-review fixes.

mysql-test/suite/rpl/r/rpl_auto_increment_bug45679.result: A new result file is added.
mysql-test/suite/rpl/r/rpl_filter_tables_not_exist.result: Results updated.
mysql-test/suite/rpl/t/rpl_auto_increment_bug45679.test: A regression test for BUG#11754117-45670 is added.
mysql-test/suite/rpl/t/rpl_filter_tables_not_exist.test: A regression test for the filtering issue of BUG#11754117-45670 is added.
sql/log_event.cc: Logic is added for deferring and executing events associated with the Query event.
sql/log_event.h: An interface to deferred events batch execution is added.
sql/rpl_rli.cc: Initialization for the new RLI members is added.
sql/rpl_rli.h: New members are added to RLI to facilitate deferred event gathering and execution control; two general-purpose RLI cleanup methods are constructed.
sql/rpl_utility.cc: The Deferred_log_events methods are defined.
sql/rpl_utility.h: A new class Deferred_log_events is defined to implement IRU event gathering, execution and cleanup.
sql/slave.cc: Necessary changes to initialize `rli->deferred_events' and prevent deferred event deletion in the main read-exec branch.
sql/sql_base.cc: A new safety-check function for a multi-part PK with auto-increment is defined and deployed in lock_tables().
sql/sql_class.cc: Initialization for a new member and replication cleanups are added to the THD class.
sql/sql_class.h: The THD class receives a new member to hold a specific execution context for the slave applier.
sql/sql_parse.cc: Execution of the deferred events is started prior to their parent query.
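A sketch of the deferred-execution idea (std::function stand-ins; the real class is Deferred_log_events in sql/rpl_utility.h): Int-, Rand- and User-var events are buffered rather than applied on arrival, replayed only if their parent query actually executes, and always cleared afterwards.

    #include <functional>
    #include <vector>

    class Deferred_log_events {
      std::vector<std::function<void()>> events_;
    public:
      void add(std::function<void()> ev) { events_.push_back(std::move(ev)); }
      bool empty() const { return events_.empty(); }
      void execute() {                    // right before the parent query
        for (auto &ev : events_) ev();
      }
      void rewind() { events_.clear(); }  // parent filtered out, or done
    };

    int main() {
      Deferred_log_events deferred;
      int insert_id = 0;
      deferred.add([&] { insert_id = 42; });  // e.g. an Intvar event
      bool parent_filtered = true;            // query skipped by the filter
      if (!parent_filtered && !deferred.empty())
        deferred.execute();   // the session context is applied only here
      deferred.rewind();      // so it cannot leak into the next query
      return insert_id;       // stays 0: the Intvar event was filtered too
    }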
- 19 Apr, 2012 1 commit
Mayank Prasad authored
Reason: This is a regression that happened because of code refactoring done in 5.1 from 5.0.

Issue: While doing "SHOW TABLES", lex->verbose was being checked to avoid opening FRM files to get the table type. In the case of "SHOW FULL TABLES", lex->verbose is true to indicate that the table type is required. In 5.0 this check was present, but it went missing in >=5.5.

Fix: Added the required check to avoid opening FRM files unnecessarily in the case of "SHOW TABLES".
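A sketch of the restored check (stand-in LEX; the real test is on lex->verbose in the SHOW TABLES code path): the .frm is opened only when the table type is actually needed.

    #include <cstdio>

    struct LEX { bool verbose; };   // true for SHOW FULL TABLES

    static const char *table_type_from_frm(const char *name) {
      (void)name;                   // opening the .frm is the costly step
      return "BASE TABLE";
    }

    static void show_one_table(const LEX *lex, const char *name) {
      if (lex->verbose)             // SHOW FULL TABLES: type column needed
        std::printf("%s\t%s\n", name, table_type_from_frm(name));
      else                          // SHOW TABLES: skip the .frm open
        std::printf("%s\n", name);
    }

    int main() {
      LEX show_tables = {false}, show_full = {true};
      show_one_table(&show_tables, "t1");
      show_one_table(&show_full, "t1");
    }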