- 19 Jan, 2010 (5 commits)

Georgi Kodinov authored

Mattias Jonsson authored

Georgi Kodinov authored

Sergey Glukhov authored
Added a check_length optimization for the I_S_NAME comparison.
Applied in: sql/event_data_objects.cc, sql/events.cc, sql/mysql_priv.h, sql/repl_failsafe.cc, sql/sql_db.cc, sql/sql_parse.cc, sql/sql_show.cc, sql/sql_view.cc, sql/table.cc.
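
As a rough illustration of the idea behind a check_length-style comparison (a hedged sketch with invented names, not the actual MySQL helper): reject on length first, so the more expensive case-insensitive comparison only runs when the lengths already match.

```cpp
#include <cstring>
#include <strings.h>   // strcasecmp (POSIX)

// Hypothetical helper, for illustration only: a cheap length check
// rejects most candidates before the case-insensitive comparison runs.
static bool is_schema_name(const char *name, size_t name_len)
{
  static const char kName[] = "information_schema";
  static const size_t kLen = sizeof(kName) - 1;
  if (name_len != kLen)            // check_length-style early rejection
    return false;
  return strcasecmp(name, kName) == 0;
}

int main()
{
  return is_schema_name("INFORMATION_SCHEMA", 18) ? 0 : 1;
}
```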

Luis Soares authored
PB2 run uncovered an issue that needs further analysis.

- 18 Jan, 2010 (2 commits)

Mattias Jonsson authored
Bug#47343: InnoDB fails to clean-up after lock wait timeout on REORGANIZE PARTITION
There were several problems that led to this, all related to bad error handling:
1) Several bugs prevented the ddl-log from being used to clean up created files on error.
2) The error handling after copying the partition rows did not close and unlock the tables, resulting in deletion of partitions that were still in use, which led InnoDB to put the partition to be dropped into a background queue.

sql/ha_partition.cc: Better error handling: if a partition has been created/opened/locked, make sure it is unlocked and closed before returning an error. Deletion of the newly created partition is handled by the ddl-log.
sql/sql_parse.cc: Fix a bug found while experimenting: thd can really be NULL here, as mentioned in the function header.
sql/sql_partition.cc: Use the correct .frm shadow name to put into the ddl-log, and really use the ddl-log to handle errors.
sql/sql_table.cc: Fixes to the ddl-log when used for error recovery (no crash). When executing an entry from memory (not read from disk), name_len was not set correctly.
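
As a rough sketch of the error-handling rule described for ha_partition.cc (illustrative only, with invented types, not the server code): anything opened and locked on the way in must be unlocked and closed on every return path, including the error one, so the engine never sees a drop of a partition that is still in use.

```cpp
#include <cstdio>

// Illustrative only, with invented types (not the ha_partition code):
// a partition that was created/opened/locked must be unlocked and
// closed before the function returns an error; removing the half-built
// files is then left to the ddl-log.
struct Partition {
  bool opened = false;
  bool locked = false;
  bool open()   { opened = true; return true; }
  bool lock()   { locked = true; return true; }
  void unlock() { locked = false; }
  void close()  { opened = false; }
};

static int copy_partition_rows(Partition &) { return 1; }  // simulate failure

static int reorganize_partition(Partition &p)
{
  if (!p.open()) return 1;
  if (!p.lock()) { p.close(); return 1; }

  int error = copy_partition_rows(p);

  // Release in reverse order on every path, error or not, so the
  // engine never gets a drop request for a partition still in use.
  p.unlock();
  p.close();
  return error;
}

int main()
{
  Partition p;
  printf("reorganize returned %d\n", reorganize_partition(p));
  return 0;
}
```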

Georgi Kodinov authored
error in the query. Fixes a leak after materializing a GROUP BY subquery to a temporary table when the subquery has a blob column in the SELECT list. Fixed by correctly destructing the temporary buffers after doing the conversion.

- 17 Jan, 2010 (1 commit)

Mattias Jonsson authored

- 16 Jan, 2010 (1 commit)

unknown authored
A 'CREATE TABLE IF NOT EXISTS ... SELECT' statement caused 'CREATE TEMPORARY TABLE ...' to be written to the binary log in row-based mode (a.k.a. RBR) when a temporary table with the same name existed, because the 'CREATE TABLE ... SELECT' statement was executed as 'INSERT ... SELECT' into the temporary table. Since in RBR mode no other statements related to temporary tables are written to the binary log, this sometimes broke replication. This patch changes the behavior of 'CREATE TABLE [IF NOT EXISTS] ... SELECT ...': it now ignores the existence of a temporary table with the same name as the table being created, and the statement is interpreted as an attempt to create and insert into the base table. This makes 'CREATE TABLE [IF NOT EXISTS] ... SELECT' consistent with how ordinary 'CREATE TABLE' and 'CREATE TABLE ... LIKE' behave.

- 15 Jan, 2010 (3 commits)

Georgi Kodinov authored
The optimizer must not continue executing the current query if, e.g., the storage engine reports an error. This is somewhat hard to implement with the Item::val_xxx() methods because they have no means to return an error code, which is why the thread's error state must be checked after a call to one of them. Fixed store_key_item::copy_inner() to return an error state if an error happened during the call to Item::save_in_field(), because it calls Item::val_xxx(). Also added similar checks to related places.
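
A hedged sketch of the general pattern (standard C library calls, not the server's Item/THD code): when a value-returning call cannot signal failure through its return value, the caller must inspect an out-of-band error indicator right after the call and stop rather than use a possibly meaningless value. Here errno plays the role of the thread's error state.

```cpp
#include <cerrno>
#include <cstdlib>
#include <cstdio>

// strtol cannot report overflow through its return value alone; it sets
// errno to ERANGE instead, much like Item::val_xxx() cannot return an
// error code and the thread's error state must be checked afterwards.
static bool copy_long(const char *s, long *out)
{
  errno = 0;                         // clear the out-of-band error channel
  long v = strtol(s, nullptr, 10);
  if (errno == ERANGE)               // check the error state after the call
    return false;                    // propagate failure, do not keep going
  *out = v;
  return true;
}

int main()
{
  long v;
  if (!copy_long("99999999999999999999999999", &v)) {
    puts("conversion failed, stopping instead of using a bogus value");
    return 1;
  }
  printf("%ld\n", v);
  return 0;
}
```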

Georgi Kodinov authored

Georgi Kodinov authored
Added not_embedded to the new dbug_sync test file.

- 14 Jan, 2010 (4 commits)

Luis Soares authored
BUG#49481: RBR: MyISAM and bit fields may cause slave to stop on delete: cant find record
BUG#49482: RBR: Replication may break on deletes when MyISAM tables + char field are used

With MyISAM tables, even when the null bit is set for some fields, their old value is still present in the row. This can cause the comparison of records to fail when the slave is doing an index or range scan. We fix this by avoiding memcmp for MyISAM tables when comparing records. Additionally, when comparing field by field, we first check that both fields are not null, and only then compare them. If just one field is null we return failure immediately; if both fields are null, we move on to the next field.
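
A minimal sketch of that comparison rule (invented types, not the replication code): the stored bytes of a NULL field are stale and must be ignored, so the rows are compared field by field on the nullness flags first and never with a whole-row memcmp.

```cpp
#include <vector>

// Illustrative only: a field is a nullness flag plus a value buffer;
// the buffer contents of a NULL field are stale and must be ignored.
struct Field {
  bool is_null;
  std::vector<unsigned char> value;
};

static bool records_equal(const std::vector<Field> &a,
                          const std::vector<Field> &b)
{
  if (a.size() != b.size()) return false;
  for (size_t i = 0; i < a.size(); ++i) {
    if (a[i].is_null && b[i].is_null) continue;     // both NULL: next field
    if (a[i].is_null != b[i].is_null) return false; // only one NULL: differ
    if (a[i].value != b[i].value) return false;     // both set: compare values
  }
  return true;   // no whole-row memcmp, so stale bytes under NULL are ignored
}

int main()
{
  std::vector<Field> r1 = {{true, {0x42}}, {false, {1, 2}}};
  std::vector<Field> r2 = {{true, {0x00}}, {false, {1, 2}}};
  return records_equal(r1, r2) ? 0 : 1;   // equal despite differing NULL bytes
}
```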

Luis Soares authored
Small fix in the test case: changed UNLOCK TABLES to happen after each insert.

Luis Soares authored

Georgi Kodinov authored

- 13 Jan, 2010 (13 commits)

Kristofer Pettersson authored

Kristofer Pettersson authored
In certain rare cases, when a process was interrupted during a FLUSH PRIVILEGES operation, the diagnostics area would be set to an error state but the function responsible for the operation would still signal success. This led to a debug assertion failure later on, when the server attempted to reset the DA before sending the error message. This patch fixes the issue by ensuring that reload_acl_and_cache() always fails if an error condition is raised.
The second issue was that a KILL could cause a console error message that referred to a DA state without first making sure such a state existed. This patch fixes that in two different places by checking the DA state before fetching the error message.

sql/sql_acl.cc: Make sure that there is an error to print before attempting to do so. Minor style change: change 1 to TRUE for clarity.
sql/sql_parse.cc: Always fail reload_acl_and_cache() if the query was killed.
sql/sql_servers.cc: Make sure that there is an error to print before attempting to do so.
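
A hedged sketch of the two rules above (a made-up diagnostics type, not the server code): only read an error message if an error state actually exists, and never signal success once an error has been raised during the operation.

```cpp
#include <optional>
#include <string>
#include <cstdio>

// Illustrative only: a hypothetical diagnostics area that may or may
// not hold an error, a caller that checks before printing, and an
// operation that reports failure whenever an error was raised.
struct Diagnostics {
  std::optional<std::string> error;   // empty means "no error set"
};

static void log_failure(const Diagnostics &da)
{
  // Only dereference the message if an error state actually exists.
  if (da.error)
    fprintf(stderr, "operation failed: %s\n", da.error->c_str());
  else
    fprintf(stderr, "operation failed (no message available)\n");
}

static bool reload_caches(Diagnostics &da, bool killed)
{
  if (killed)
    da.error = "query was killed";
  // Never signal success once an error condition has been raised.
  return !da.error.has_value();
}

int main()
{
  Diagnostics da;
  if (!reload_caches(da, /*killed=*/true))
    log_failure(da);
  return 0;
}
```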

Martin Hansson authored

Ramil Kalimullin authored

Ramil Kalimullin authored

Georgi Kodinov authored

Georgi Kodinov authored

Joerg Bruehe authored

Georgi Kodinov authored

Ramil Kalimullin authored

Sven Sandberg authored
Problem: When RAND() is binlogged in statement mode, the seed is binlogged too, so the replication slave generates the same sequence of random numbers. This makes replication work in many cases, but not in all: the order of rows is not guaranteed for, e.g., UPDATE or INSERT...SELECT statements, so the row data will differ if master and slave retrieve the rows in different orders.
Fix: Mark RAND() as unsafe. It will generate a warning if binlog_format=STATEMENT and switch to row logging if binlog_format=MIXED.

mysql-test/extra/rpl_tests/rpl_row_func003.test: updated test case to ignore new warnings
mysql-test/suite/binlog/r/binlog_unsafe.result: updated result file
mysql-test/suite/binlog/t/binlog_unsafe.test: added test for RAND() and clarified some old comments
mysql-test/suite/rpl/r/rpl_misc_functions.result: updated result file
mysql-test/suite/rpl/r/rpl_nondeterministic_functions.result: updated test case to ignore new warnings
mysql-test/suite/rpl/r/rpl_optimize.result: updated result file
mysql-test/suite/rpl/r/rpl_row_func003.result: updated result file
mysql-test/suite/rpl/t/rpl_misc_functions.test: updated test case to ignore new warnings
mysql-test/suite/rpl/t/rpl_nondeterministic_functions.test: updated test case to ignore new warnings
mysql-test/suite/rpl/t/rpl_optimize.test: updated test case to ignore new warnings
mysql-test/suite/rpl/t/rpl_trigger.test: updated test case to ignore new warnings
mysql-test/suite/rpl_ndb/r/rpl_ndb_func003.result: updated result file
sql/item_create.cc: mark RAND() unsafe
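
A small demonstration of why replicating the seed alone is not enough (generic C++ PRNG, not the server's RAND() implementation): both sides use the same seed, but they visit the rows in different orders, so each row ends up paired with a different random value and the data diverges.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Illustrative only: same seed, different row order, different result.
static std::vector<double> update_rows(const std::vector<int> &ids,
                                       unsigned seed)
{
  std::mt19937 gen(seed);
  std::uniform_real_distribution<double> rand01(0.0, 1.0);
  std::vector<double> values(ids.size());
  for (int id : ids)                 // "SET col = RAND()" per visited row
    values[id] = rand01(gen);
  return values;
}

int main()
{
  unsigned seed = 42;                           // same seed on both sides
  std::vector<int> master_order = {0, 1, 2, 3};
  std::vector<int> slave_order  = {3, 2, 1, 0}; // rows retrieved differently
  bool same = update_rows(master_order, seed) == update_rows(slave_order, seed);
  printf("rows identical after update: %s\n", same ? "yes" : "no");  // "no"
  return 0;
}
```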

Ramil Kalimullin authored
Problem: when copying the issuer's (or subject's) name tags from the incoming stream into an internal buffer, we did not check for buffer overflow. That may lead to memory overrun, crashes, etc.
Fix: ensure we do not overrun the buffer.
Note: there is no simple test case (an exploit would be needed).

extra/yassl/taocrypt/include/asn.hpp: Fix for bug#50227: Pre-auth buffer-overflow in mySQL through yaSSL. CertDecoder::AddTag() introduced.
extra/yassl/taocrypt/src/asn.cpp: Fix for bug#50227: Pre-auth buffer-overflow in mySQL through yaSSL. When copying data from the incoming stream to the issuer_ or subject_ buffers, ensure we don't overrun them. Also some code cleanup.
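
A minimal sketch of the bounded append the fix describes (invented types, not the actual yaSSL CertDecoder::AddTag()): before appending a tag and its value to a fixed-size name buffer, check the remaining space and refuse the copy rather than write past the end.

```cpp
#include <cstring>
#include <cstddef>

// Illustrative only: a fixed-capacity name buffer that rejects any
// append that would not fit instead of overrunning its storage.
struct NameBuffer {
  static constexpr size_t kCapacity = 512;
  char data[kCapacity];
  size_t used = 0;

  bool add_tag(const char *tag, size_t tag_len,
               const char *value, size_t value_len)
  {
    if (tag_len > kCapacity - used ||
        value_len > kCapacity - used - tag_len)
      return false;                        // would overflow: reject the copy
    memcpy(data + used, tag, tag_len);
    used += tag_len;
    memcpy(data + used, value, value_len);
    used += value_len;
    return true;
  }
};

int main()
{
  NameBuffer name;
  return name.add_tag("/CN=", 4, "example.com", 11) ? 0 : 1;
}
```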

Gleb Shchepa authored
Selecting the result of CONCAT_WS(...<PS parameter>...) into a user variable could return wrong data. Item_func_concat_ws::val_str contains a number of tricks to save memory allocations. After the fix for bug 46815, the control flow changed to a branch commented as "This is quite uncommon!": one of the places where we try to concatenate strings in place. However, that "uncommon" place did not account for PS parameters, which have another trick in Item_sp_variable::val_str(): they use the intermediate Item_sp_variable::str_value field, which may store a reference to an external argument's buffer. Item_func_concat_ws::val_str has been modified to take into account val_str functions (such as Item_sp_variable::val_str) that return a pointer to an internal Item member variable which may reference an externally provided buffer.

mysql-test/r/func_concat.result: Added test case for bug #50096.
mysql-test/t/func_concat.test: Added test case for bug #50096.
sql/item_strfunc.cc: Bug #50096: CONCAT_WS inside procedure returning wrong data. Item_func_concat_ws::val_str modified as described above.
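
A hedged sketch of the underlying hazard, reduced to plain C strings (not the Item/String machinery): concatenating "in place" is only safe if the output buffer does not alias any of the inputs, so when an input may point into the destination the result has to be built in scratch storage first.

```cpp
#include <cstdio>
#include <cstddef>

// Illustrative only: build the result in a temporary so the inputs stay
// readable even when one of them points into the output buffer.
static void concat_ws_copy(char *out, size_t out_size,
                           const char *sep, const char *a, const char *b)
{
  char tmp[256];
  snprintf(tmp, sizeof(tmp), "%s%s%s", a, sep, b);  // inputs still intact
  snprintf(out, out_size, "%s", tmp);               // publish the result
}

int main()
{
  char buf[256] = "world";
  concat_ws_copy(buf, sizeof(buf), ", ", "hello", buf);  // `b` aliases `out`
  puts(buf);                                             // prints "hello, world"
  return 0;
}
```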

- 12 Jan, 2010 (4 commits)

Martin Hansson authored
MySQL handles the join syntax "JOIN ... USING( field1, ... )" and natural joins by building the same parse tree a corresponding join with an "ON t1.field1 = t2.field1 ..." expression would produce. This parse tree was not cleaned up properly in the following scenario: if a thread tries to lock some tables and finds that the tables were dropped and re-created while it was waiting for the lock, it cleans up column references in the statement by means of a per-statement free list. But if the statement was part of a stored procedure, column references on the stored procedure's free list were not cleaned up and thus contained pointers to freed objects. Fixed by adding a call to clean up the current prepared statement's free list.

mysql-test/r/sp_sync.result: Bug#48157: Test result
mysql-test/t/sp_sync.test: Bug#48157: Test case
sql/item.h: Bug#48157: Commented field.
sql/sql_parse.cc: Bug#48157: Commented function.
sql/sql_update.cc: Bug#48157: fix

Joerg Bruehe authored
This includes "MYSQL_U_SCORE_VERSION" in "configure.in".

Joerg Bruehe authored
- "release" starts from 1 - "level" ("m2", "rc", ...) is included in the RPM version.

Joerg Bruehe authored
but don't take the "tree name" change.

- 11 Jan, 2010 (3 commits)

Joerg Bruehe authored
- "release" starts from 1 - "level" ("m2", "rc", ...) is included in the RPM version.

Gleb Shchepa authored

Gleb Shchepa authored
32-bit builds with the --enable-assembler flag (enabled by default) fail with an error message: undefined reference to `strmov_overlapp'. Since the fix for bug 48866 we use a home-grown strmov function instead of the stpcpy function, but the source file for this function was missing from the Makefile.am.

strings/Makefile.am: Bug #49955: ld error message: undefined reference to `strmov_overlapp'. The file has been modified to include strmov.c in the ASSEMBLER_x86 and ASSEMBLER_sparc32 sections.
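
For context, a hedged sketch of what such copy helpers typically look like (not the actual strings/strmov.c code): strmov copies a string and returns a pointer to the terminating NUL of the destination, like stpcpy, which makes chained concatenation cheap; an overlap-tolerant variant additionally allows source and destination to overlap, shown here via memmove.

```cpp
#include <cstring>
#include <cstdio>

// Illustrative sketches only, not MySQL's implementations.
static char *my_strmov(char *dst, const char *src)
{
  while ((*dst = *src) != '\0') { ++dst; ++src; }
  return dst;                       // points at the trailing NUL
}

static char *my_strmov_overlapp(char *dst, const char *src)
{
  size_t len = strlen(src);
  memmove(dst, src, len + 1);       // safe even if dst and src overlap
  return dst + len;
}

int main()
{
  char buf[32];
  char *end = my_strmov(buf, "hello ");
  end = my_strmov(end, "world");    // chained copy using the returned end
  my_strmov_overlapp(buf, buf + 6); // shift "world" forward, ranges overlap
  printf("%s\n", buf);              // prints "world"
  return 0;
}
```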

- 08 Jan, 2010 (2 commits)

unknown authored
Restore the correct contents of the index file at the end of the test case.

unknown authored
Manually deleting one or more entries from 'master-bin.index' could cause the master to loop infinitely, sending the same binlog file over and over. When starting a dump session, the master opens the index file and searches for the binlog file requested by the slave. The position of that binlog file within the index file is recorded and later used to find the next binlog file once the current one has been dumped completely. Because only the position is used, the lookup may not return the correct file if entries have been removed manually from the index file: when the master cannot get the next binlog file's name from the index file, it reopens the current binlog file, which has already been dumped completely, and dumps it again. This is clearly a logical error. Manually changing the index file is allowed, though not recommended. After this patch, the master sends a fatal error to the slave and closes the dump session if a new binlog file has been generated but the master cannot find it in the index file.
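
A hedged sketch of why the remembered position goes stale (plain file-offset arithmetic, not the replication code; the actual fix makes the master report a fatal error rather than retry): once an earlier entry is removed, the recorded offset no longer points at the current entry, so the "next" file read from that offset is wrong or missing.

```cpp
#include <sstream>
#include <string>
#include <iostream>

// Illustrative only: read the entry following a remembered byte offset.
static std::string entry_after_offset(const std::string &index_contents,
                                      std::streamoff offset)
{
  std::istringstream in(index_contents);
  in.seekg(offset);
  std::string current, next;
  std::getline(in, current);   // the entry the stale offset points at
  std::getline(in, next);      // what would be dumped next
  return next;
}

int main()
{
  std::string before = "master-bin.000001\nmaster-bin.000002\nmaster-bin.000003\n";
  std::streamoff offset =
      static_cast<std::streamoff>(before.find("master-bin.000002"));

  // Someone manually removes the first entry from the index file.
  std::string after = "master-bin.000002\nmaster-bin.000003\n";

  std::cout << "expected next: master-bin.000003\n";
  std::cout << "actual next:   '" << entry_after_offset(after, offset) << "'\n";
  return 0;
}
```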

- 07 Jan, 2010 (2 commits)

Luis Soares authored
Some improvements to the test case, as suggested during review.

Luis Soares authored