- 25 Nov, 2013 1 commit
-
-
Anirudh Mangipudi authored
WITH MALFORMED XPATH EXP
Problem: A malformed XPath expression in an ExtractValue query causes a server crash. The malformed expression results when the position argument of the substring function starts with "..".
Solution: The crash happens because the "../" is evaluated prematurely: it tries to access the XML while it has not been parsed yet. The premature evaluation happens because the val_nodeset functions are marked as constant, in which case we evaluate them already in the JOIN::prepare stage. The fix is to mark the val_nodeset functions as non-constant. This forces us to evaluate the function in the JOIN::exec stage and avoids any premature evaluation of the XML strings.
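A minimal, self-contained sketch of the evaluation-timing idea only (ToyNodesetItem is hypothetical and not the server's Item hierarchy): an expression that reports const_item() == true may be folded during JOIN::prepare, before the XML argument exists, while reporting non-constant defers evaluation to JOIN::exec.

    #include <iostream>
    #include <string>

    struct ToyNodesetItem
    {
      bool is_const;
      const std::string *xml;   // the parsed XML exists only at execution time

      explicit ToyNodesetItem(bool c) : is_const(c), xml(nullptr) {}
      bool const_item() const { return is_const; }

      std::string val_str() const
      {
        if (xml == nullptr)                // premature evaluation in the real
          return "(XML not parsed yet)";   // server is what led to the crash
        return *xml;
      }
    };

    int main()
    {
      std::string parsed_xml = "<a><b>x</b></a>";
      ToyNodesetItem item(false);          // the fix: report non-constant

      // JOIN::prepare: only items claiming to be constant are evaluated here.
      if (item.const_item())
        std::cout << "prepare: " << item.val_str() << '\n';

      // JOIN::exec: the XML argument has been parsed by now.
      item.xml = &parsed_xml;
      std::cout << "exec: " << item.val_str() << '\n';
      return 0;
    }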
-
- 04 Nov, 2013 2 commits
- 01 Nov, 2013 1 commit
-
-
Tor Didriksen authored
get_cost_calc_buff_size() could return a wrong value for the size of imerge_cost_buff.
-
- 31 Oct, 2013 2 commits
-
-
unknown authored
No commit message
-
Venkata Sidagam authored
UPPER CASE HOST NAME ANYMORE
Description: It is not possible to drop users whose host names contain upper-case letters, i.e. DROP USER 'root'@'Tmp_Host_Name'; fails with an error.
Analysis: Since the fix for 11748570, lower-case host names are the standard. However, the host name created by the mysql_install_db script can still contain upper-case letters (e.g. Tmp_Host_Name), and it is stored as-is in the mysql.user table. In that case DROP USER 'root'@'Tmp_Host_Name'; gives an error, because since the 11748570 fix we compare against the lower-cased host name.
Fix: Convert the host name to lower case before storing it into the mysql.user table when running the mysql_install_db script.
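A minimal sketch of the normalisation idea only (the actual change is in the mysql_install_db script, not in C++ code): lower-case the host name before it is stored, so that the lower-case comparison introduced by the 11748570 fix finds the row again.

    #include <algorithm>
    #include <cctype>
    #include <iostream>
    #include <string>

    // Lower-case a host name before it is written to mysql.user.
    static std::string normalize_host(std::string host)
    {
      std::transform(host.begin(), host.end(), host.begin(),
                     [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
      return host;
    }

    int main()
    {
      // 'Tmp_Host_Name' as produced during installation...
      std::string stored = normalize_host("Tmp_Host_Name");
      // ...now matches the lower-cased name DROP USER compares against.
      std::cout << stored << '\n';   // prints: tmp_host_name
      return 0;
    }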
-
- 30 Oct, 2013 1 commit
-
-
Balasubramanian Kandasamy authored
-
- 29 Oct, 2013 1 commit
-
-
Tor Didriksen authored
The filesort implementation needs space for at least 15 records (plus some internal overhead) in its main sort buffer.
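A hedged sketch of the sizing rule only, not the actual filesort code: whatever sort buffer size was requested, the buffer actually allocated must hold at least 15 records of the current row/key length (the internal overhead mentioned above is ignored here).

    #include <algorithm>
    #include <cstddef>
    #include <iostream>

    static const std::size_t MIN_RECORDS_IN_SORT_BUFFER = 15;

    // Round a requested buffer size up to the minimum filesort can work with.
    static std::size_t effective_sort_buffer(std::size_t requested_bytes,
                                             std::size_t record_length)
    {
      std::size_t minimum = record_length * MIN_RECORDS_IN_SORT_BUFFER;
      return std::max(requested_bytes, minimum);
    }

    int main()
    {
      // A 1 KB request with 4 KB records is bumped to 15 records' worth.
      std::cout << effective_sort_buffer(1024, 4096) << " bytes\n";   // 61440
      return 0;
    }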
-
- 18 Oct, 2013 1 commit
-
-
Aditya A authored
AS A INNODB PARTITTION.
PROBLEM
-------
The correct engine_type was not being set during a rebuild of the partition, so the handler was always created with the default engine, which is InnoDB for 5.5+. Therefore, even if the table was MyISAM, after rebuilding the partitions they ended up as InnoDB partitions.
FIX
---
Set the correct engine type during the rebuild.
[Approved by mattiasj #rb3599]
-
- 16 Oct, 2013 2 commits
-
-
Venkatesh Duggirala authored
REPLICATION FILTERS ARE USED.
Problem: When a filtered slave applies an Int_var_log_event and then tries to write the event to its own binlog, the LAST_INSERT_ID value is written wrongly.
Analysis: THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt is a variable which is set when LAST_INSERT_ID() is used by a statement. If it is set, first_successful_insert_id_in_prev_stmt_for_binlog will be stored in the statement-based binlog. This variable is cumulative along the execution of a stored function or trigger: if one substatement sets it to 1 it stays 1 until the function/trigger ends, thus making sure that first_successful_insert_id_in_prev_stmt_for_binlog does not change anymore and is propagated to the caller for binlogging. This is achieved using the following code:

    if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
    {
      /* It's the first time we read it */
      first_successful_insert_id_in_prev_stmt_for_binlog=
        first_successful_insert_id_in_prev_stmt;
      stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
    }

After receiving an Int_var_log_event from the master, the slave server sets stmt_depends_on_first_successful_insert_id_in_prev_stmt to true (which is wrong) without setting first_successful_insert_id_in_prev_stmt_for_binlog. Because of this, when the actual DML statement with LAST_INSERT_ID() is parsed by the slave SQL thread, first_successful_insert_id_in_prev_stmt_for_binlog is not set, and the value zero (the default) is written to the slave's binlog.
Why only a filtered slave is affected, even though the code is in a common place: in Query_log_event::do_apply_event, THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt is reset to zero at the end of the function. On a normal slave (no filters) the variable is therefore reset. On a filtered slave, the SQL thread defers the execution of all IRU (Intvar, Rand, User_var) events until their Query_log_event is received; once it receives the Query_log_event it executes all pending IRU events and then the Query_log_event itself. Hence the variable is not reset to 0, causing this bug.
Fix: As described above, the root cause was setting THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt when an Int_var_log_event was executed by the SQL thread. Hence the problematic line is removed from the code.
-
Venkata Sidagam authored
Description: The fix for bug CVE-2012-5611 (bug 67685) is incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and check_grant_db() can be overflown by up to two bytes. That's probably not enough to do anything more serious than crashing mysqld.
Analysis: In acl_get(), "copy_length" is calculated by just adding the variable lengths, but when they are copied with strmov() a +1 is added for each of them. This leads to a three-byte buffer overflow (two +1's at the strmov() calls plus one byte for the null added by the strmov() function). The same happens in the check_grant_db() function.
Fix: We need to add "+2" to "copy_length" in acl_get() and "+1" to "copy_length" in check_grant_db().
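A self-contained illustration of the arithmetic (strmov_sketch() below stands in for the real strmov(), which copies a string and returns a pointer to the terminating NUL it wrote; the buffer and names are made up): copying two strings with an embedded NUL separator consumes strlen(a) + strlen(b) + 2 bytes, which is exactly the "+2" the fix adds to copy_length in acl_get().

    #include <cstdio>
    #include <cstring>

    // Simplified stand-in for strmov(): copy src and return a pointer to the
    // terminating NUL written into dst.
    static char *strmov_sketch(char *dst, const char *src)
    {
      while ((*dst = *src++) != '\0')
        ++dst;
      return dst;
    }

    int main()
    {
      const char *db   = "testdb";
      const char *user = "appuser";

      // The fix: size the key for both strings, the '\0' separator and the
      // final terminator, i.e. lengths + 2.
      std::size_t copy_length = std::strlen(db) + std::strlen(user) + 2;
      char key[64];
      if (copy_length > sizeof(key))
        return 1;

      char *end = strmov_sketch(key, db);
      end = strmov_sketch(end + 1, user);        // keep the embedded '\0'
      std::printf("key uses %zu of %zu reserved bytes\n",
                  (std::size_t)(end - key) + 1, copy_length);   // 15 of 15
      return 0;
    }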
-
- 14 Oct, 2013 1 commit
-
-
Nuno Carvalho authored
WL#7266: Dump-thread additional concurrency tests
This worklog aims at testing the two following scenarios:
1) Whenever the mysql_binlog_send method (dump thread) reaches end of file while reading events from the binlog, before checking whether it should wait for more events it tests whether the file being read is still active, i.e. the last known binlog. However, it was possible that something was written to the binary log and a rotation happened after EOF was detected and before the check for "active" was performed. In this case the end of the binary log would not be read by the dump thread, which would cause the slave to lose updates. This test verifies that the problem has been fixed: it waits inside this window while forcing a rotation of the binlog.
2) Verify that the dump thread can correctly send events from the active file after encountering an I/O error.
-
- 07 Oct, 2013 2 commits
-
-
unknown authored
No commit message
-
Yasufumi Kinoshita authored
ha_innobase::records_in_range() should return HA_POS_ERROR for a table whose tablespace has been discarded, without requesting any pages. The other handler methods called later should then treat the error correctly. Approved by Sunny in rb#3433
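A hedged sketch of the behaviour only (the stand-ins below mimic the server's ha_rows and HA_POS_ERROR, which is ~(ha_rows) 0; the function body is simplified and not the actual ha_innobase code): for a discarded tablespace, return HA_POS_ERROR immediately instead of requesting pages.

    #include <iostream>

    typedef unsigned long long ha_rows;                 // stand-in for the server type
    static const ha_rows HA_POS_ERROR = ~(ha_rows) 0;   // "no valid estimate"

    static ha_rows records_in_range_sketch(bool tablespace_discarded,
                                           ha_rows normal_estimate)
    {
      if (tablespace_discarded)
        return HA_POS_ERROR;        // do not touch pages that are gone
      return normal_estimate;       // normal estimation path
    }

    int main()
    {
      std::cout << (records_in_range_sketch(true, 42) == HA_POS_ERROR) << '\n';  // 1
      return 0;
    }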
-
- 04 Oct, 2013 1 commit
-
-
unknown authored
No commit message
-
- 27 Sep, 2013 1 commit
-
-
unknown authored
No commit message
-
- 20 Sep, 2013 1 commit
-
-
unknown authored
-
- 12 Sep, 2013 1 commit
-
-
Satya Bodapati authored
disable testcase due to BUG#17446090
-
- 11 Sep, 2013 1 commit
-
-
Satya Bodapati authored
IT IS DONE IN-PLACE
With the change buffer enabled, InnoDB does not write a transaction log record when it merges a record from the insert buffer into a secondary index page if the insertion is performed as an update-in-place.
Fixed by logging the 'update-in-place' operation on secondary index pages.
Approved by Marko. rb#2429
-
- 10 Sep, 2013 3 commits
-
-
mithun authored
WITH MY_B_VPRINTF()
Issue: On an LP64 machine the maximum long value can be a 20-digit decimal number, but in my_b_vprintf() the intermediate buffer used is only 17 bytes long. This can lead to a buffer overflow.
Solution: Increased the buffer from 17 to 32 bytes. The code is backported from 5.6.
mysys/mf_iocache2.c: In function my_b_vprintf, increased the size of the local buff from 17 to 32 bytes.
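A quick, self-contained check of the arithmetic behind the 17 -> 32 byte change (plain snprintf() here, not the my_b_vprintf() code itself): the most negative 64-bit long prints as 20 characters, which plus the terminating NUL already exceeds 17 bytes.

    #include <climits>
    #include <cstdio>

    int main()
    {
      char buff[32];                           // the old code used a 17-byte buffer
      int n = std::snprintf(buff, sizeof(buff), "%ld", LONG_MIN);
      std::printf("\"%s\" needs %d chars + NUL\n", buff, n);   // 20 + 1 on LP64
      return 0;
    }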
-
Libing Song authored
Postfix: suppress the new warning generated by the bug's fix.
-
Libing Song authored
The dump thread may encounter an error when reading events from the active binlog file. Such errors may be temporary, so the dump thread tries to read the event again. However, the dump thread seeked to a wrong position, which caused some events to be sent twice.
To fix the bug, prev_pos is defined outside the while loop and is set to the correct position after every event that has been read correctly.
This patch also makes binlog_can_be_corrupted more accurate: only binlogs that were not closed normally are marked binlog_can_be_corrupted. Finally, two warnings are added for the cases where the dump thread encounters these temporary errors.
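A minimal, self-contained simulation of the retry logic described above (toy data and names, not the mysql_binlog_send() code): prev_pos lives outside the read loop and is advanced only after an event has been read cleanly, so a temporary error makes the thread re-read from the last good position instead of re-sending earlier events.

    #include <cstdio>
    #include <vector>

    enum ReadResult { READ_OK, READ_TEMP_ERROR, READ_EOF };

    // Toy "binlog": one event per position; the read at position 2 fails once.
    static ReadResult read_event(const std::vector<int> &log, long long pos,
                                 int &fail_budget, long long *next_pos)
    {
      if (pos >= (long long) log.size()) return READ_EOF;
      if (fail_budget > 0 && pos == 2) { --fail_budget; return READ_TEMP_ERROR; }
      *next_pos = pos + 1;
      return READ_OK;
    }

    int main()
    {
      std::vector<int> log = {10, 11, 12, 13};
      int fail_budget = 1;
      long long prev_pos = 0;                 // defined OUTSIDE the loop (the fix)
      for (;;)
      {
        long long next_pos = prev_pos;
        ReadResult r = read_event(log, prev_pos, fail_budget, &next_pos);
        if (r == READ_EOF) break;
        if (r == READ_TEMP_ERROR) continue;   // retry from prev_pos, nothing resent
        std::printf("send event %d\n", log[(std::size_t) prev_pos]);
        prev_pos = next_pos;                  // advance only after a clean read
      }
      return 0;                               // prints 10, 11, 12, 13 exactly once
    }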
-
- 09 Sep, 2013 3 commits
-
-
Balasubramanian Kandasamy authored
Reverted the changes to the spec file to ignore the mysqld_safe.pid file; updated the logic to get the correct count of PID files.
-
Hery Ramilison authored
-
Venkata Sidagam authored
Reverting the patch, because this change is not to be made for GA versions.
-
- 03 Sep, 2013 1 commit
-
-
Hery Ramilison authored
-
- 30 Aug, 2013 2 commits
-
-
Igor Solodovnikov authored
The memory leak in mysql_options() was caused by a missing call to my_free() in the MYSQL_SET_CLIENT_IP branch. Fixed by adding my_free() to clean up the mysql->options.client_ip value before assigning the new value.
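A simplified, self-contained sketch of the fix (plain malloc/free and a made-up Options struct instead of the client library's my_free()/my_strdup() and st_mysql_options): release the previously stored client_ip before overwriting it, so repeated option calls no longer leak the old copy.

    #include <cstdlib>
    #include <cstring>

    struct Options { char *client_ip; };

    static char *dup_cstr(const char *s)
    {
      char *p = static_cast<char *>(std::malloc(std::strlen(s) + 1));
      if (p) std::strcpy(p, s);
      return p;
    }

    // Analogue of the MYSQL_SET_CLIENT_IP branch after the fix.
    static void set_client_ip(Options *opts, const char *arg)
    {
      std::free(opts->client_ip);            // the fix: free the old value first
      opts->client_ip = arg ? dup_cstr(arg) : nullptr;
    }

    int main()
    {
      Options opts = {nullptr};
      set_client_ip(&opts, "10.0.0.1");
      set_client_ip(&opts, "10.0.0.2");      // the first copy is no longer leaked
      std::free(opts.client_ip);
      return 0;
    }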
-
Balasubramanian Kandasamy authored
-
- 29 Aug, 2013 1 commit
-
-
Balasubramanian Kandasamy authored
-
- 28 Aug, 2013 1 commit
-
-
Raghav Kapoor authored
ERROR HANDLING CODE
BACKGROUND: There is a potential crash due to a buffer overrun in the SSL error handling code, caused by a missing comma in the ssl_error_string[] array in viosslfactories.c.
ANALYSIS: Found by code inspection.
FIX: Added the missing comma in the SSL error handling code, in the ssl_error_string[] array in viosslfactories.c.
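A self-contained demonstration of why the missing comma matters (the message texts below are placeholders, not the actual ssl_error_string[] contents): adjacent string literals are concatenated by the compiler, so the array silently loses an element and every later entry shifts, which can send a lookup past the end of the array.

    #include <cstdio>

    static const char *ssl_errors_buggy[] = {
      "No error",
      "Unable to get certificate"    // <-- missing comma
      "Unable to get private key",   // these two literals fuse into one entry
      "Private key does not match the certificate",
    };

    int main()
    {
      std::printf("%zu entries instead of 4\n",
                  sizeof(ssl_errors_buggy) / sizeof(ssl_errors_buggy[0]));  // 3
      return 0;
    }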
-
- 26 Aug, 2013 1 commit
-
-
unknown authored
-
- 23 Aug, 2013 1 commit
-
-
Neeraj Bisht authored
Problem: In a stored procedure, when we compare the value of a select query with an IN clause and the two have different collations, the first execution causes an error and the second execution asserts. The procedure has a query like

    set @x = ((select a from t1) in (select d from t2));   <-- proc1, sel1, sel2

Analysis: When we execute proc1 the first time, while resolving the fields of the user variable we call Item_in_subselect::fix_fields, which resolves sel2. There, in Item_in_subselect::select_transformer, we evaluate the left expression (sel1) and store it in an Item_cache_* object (to avoid re-evaluating it many times during subquery execution) by creating an Item_in_optimizer object. While evaluating the left expression we prepare sel1. After that, in Item_in_subselect::select_transformer() we put a new condition into sel2 which compares t2.d with sel1 (cached in the Item_in_optimizer). Later, while checking the collation in agg_item_collations(), we get an error and clean up the item; while cleaning up we also clean up the value cached in the Item_in_optimizer object. When we execute the procedure a second time, we have the condition for sel2, but in setup_cond() we cannot find the reference item, as it was freed during the item cleanup, so the server asserts.
Solution: Do not clean up the cached value of the Item_in_optimizer object if we have already put the condition into the subselect.
-
- 21 Aug, 2013 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
compressed pages
After loading a compressed-only page in buf_page_get_gen(), we allocate a new block for decompression. The problem is that the compressed page is neither buffer-fixed nor I/O-fixed by the time we call buf_LRU_get_free_block(), so it may end up being evicted and returned back as a new block.
buf_page_get_gen(): Temporarily buffer-fix the compressed-only block while allocating memory for an uncompressed page frame. This should prevent this form of the infinite loop, which is more likely with a small innodb_buffer_pool_size.
rb#2511 approved by Jimmy Yang, Sunny Bains
-
Praveenkumar Hulakund authored
"SHOW PROCESSLIST" Analysis: ---------- The problem here is, if one connection changes its default db and at the same time another connection executes "SHOW PROCESSLIST", when it wants to read db of the another connection then there is a chance of accessing the invalid memory. The db name stored in THD is not guarded while changing user DB and while reading the user DB in "SHOW PROCESSLIST". So, if THD.db is freed by thd "owner" thread and if another thread executing "SHOW PROCESSLIST" statement tries to read and copy THD.db at the same time then we may endup in the issue reported here. Fix: ---------- Used mutex "LOCK_thd_data" to guard THD.db while freeing it and while copying it to processlist.
-
- 16 Aug, 2013 1 commit
-
-
Marko Mäkelä authored
DICT_TABLE_GET_FORMAT(CLUST_INDEX->TABLE) >= 1
The function row_sel_sec_rec_is_for_clust_rec() was incorrectly preparing to compare a NULL column prefix in a secondary index with a non-NULL column in a clustered index. This can trigger an assertion failure in the 5.1 plugin and later. In the built-in InnoDB of MySQL 5.1 and earlier, we would apparently only do some extra work, by trimming the clustered index field for the comparison. The code might actually have worked properly apart from this debug assertion failure. It is merely doing some extra work in fetching a BLOB column, and then comparing it to NULL (which would return the same result, no matter what the BLOB contents is).
While the test case involves CHECK TABLE, this could theoretically occur during any read that uses a secondary index on a column prefix of a column that can be NULL.
rb#3101 approved by Mattias Jonsson
-
- 15 Aug, 2013 1 commit
-
-
Marko Mäkelä authored
There was a race condition in the rollback of TRX_UNDO_UPD_DEL_REC.
Once row_undo_mod_clust() has rolled back the changes by the rolling-back transaction, it attempts to purge the delete-marked record, if possible, in a separate mini-transaction. However, row_undo_mod_remove_clust_low() fails to check if the DB_TRX_ID of the record that it found after repositioning the cursor is still the same. If it is not, it means that the record was purged and another record was inserted in its place. So, the rollback would have performed an incorrect purge, breaking the locking rules and causing corruption.
The problem was found by creating a table that contains a unique secondary index and a primary key, and two threads running REPLACE with only one value for the unique column, so that the uniqueness constraint would be violated all the time, leading to statement rollback.
This bug exists in all InnoDB versions (I checked MySQL 3.23.53). It has become easier to repeat in 5.5 and 5.6 thanks to scalability improvements and a dedicated purge thread.
rb#3085 approved by Jimmy Yang
-
- 14 Aug, 2013 1 commit
-
-
Marko Mäkelä authored
FAILED BLOB WRITE
btr_store_big_rec_extern_fields(): Relax a debug assertion so that some BLOB pointers may remain zero if an error occurs.
btr_free_externally_stored_field(), row_undo_ins(): Allow the BLOB pointer to be zero on any rollback.
rb#3059 approved by Jimmy Yang, Kevin Lewis
-
- 12 Aug, 2013 1 commit
-
-
Anirudh Mangipudi authored
Problem Description: A mysqld_safe instance is started, and an InnoDB crash recovery begins, which takes a few seconds to complete. While this crash recovery is in progress, another mysqld_safe instance is started with the same server startup parameters. Since mysqld's pid file is absent during the crash recovery, the second instance assumes there is no other process and tries to acquire a lock on the ibdata files in the datadir. This step fails, and the second instance keeps retrying 100 times, each with a delay of 1 second. After the 100 attempts the server goes down, but while going down it runs the mysqld_safe script's cleanup section, which blindly deletes the socket and pid files without any check. Since no lock is placed on the socket file, it gets deleted.
Solution: We create a mysqld_safe.pid file in the datadir, which protects the present server instance's resources by storing the mysqld_safe process id in it. We check whether a mysqld_safe.pid file already exists in the datadir. If yes, we check whether the pid it contains belongs to an active process. If it does, the script logs an error saying "A mysqld_safe instance is already running". Otherwise it logs the present mysqld_safe's pid into the mysqld_safe.pid file.
-