- 31 Jul, 2012 1 commit
-
-
Joerg Bruehe authored
-
- 27 Jul, 2012 2 commits
-
-
Tor Didriksen authored
Space available for merging was calculated incorrectly.
-
Venkata Sidagam authored
Fixed the federated/include folder missing from the prepared package distribution; the issue happens only in 5.1.
-
- 26 Jul, 2012 5 commits
-
-
Praveenkumar Hulakund authored
IS PLACE HOLDER AND USE SERVER-SIDE
Analysis: LIMIT always takes nonnegative integer constant values (http://dev.mysql.com/doc/refman/5.6/en/select.html), so parsing of the string value '5' for LIMIT in a SELECT fails. Within a prepared statement, however, LIMIT parameters can be specified using '?' markers, and the value for the parameter is supplied when the prepared statement is executed. Passing string, float or double values for LIMIT works fine from the CLI, because when the parameter values are set from the variable list (added using SET), a value destined for a LIMIT parameter is converted to an integer. But when the prepared statement is executed from other interfaces, such as the J connector or C applications, the parameter values are sent to the server with the execute command. Each item in that list has a value and a data type, and when the parameter values are set from this list, every parameter keeps the data type it was passed with; the logic to convert the value to an integer when it is a LIMIT parameter was missing. Because of this, the string '5' was set for LIMIT, and the same was logged into the binlog file too.
Fix: Executing a prepared statement with such a parameter from the CLI worked because the value set for the parameter is converted to an integer there; it failed from other interfaces (J connector, C applications, etc.) because that conversion was missing. As a fix, a check was added while setting the parameter values: if the parameter supplies a LIMIT value, it is converted to an integer.
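The core of the fix is simply coercing whatever value arrives for a LIMIT placeholder into an integer before it is used. A minimal, self-contained C++ sketch of that idea (ParamValue and coerce_limit_param are invented names for illustration, not the server's actual types):
    #include <cstdint>
    #include <cstdlib>
    #include <iostream>
    #include <string>

    // Hypothetical stand-in for a parameter sent with the execute command:
    // it carries a value plus the client-declared type.
    struct ParamValue {
        enum class Type { Integer, String, Double } type;
        int64_t int_val;
        double dbl_val;
        std::string str_val;
    };

    // Sketch of the fix: if a parameter is bound to a LIMIT slot, convert it
    // to an integer regardless of the type the client declared.
    int64_t coerce_limit_param(const ParamValue& p) {
        switch (p.type) {
            case ParamValue::Type::Integer: return p.int_val;
            case ParamValue::Type::Double:  return static_cast<int64_t>(p.dbl_val);
            case ParamValue::Type::String:  return std::strtoll(p.str_val.c_str(), nullptr, 10);
        }
        return 0;
    }

    int main() {
        ParamValue from_connector{ParamValue::Type::String, 0, 0.0, "5"};
        // Without the conversion, the literal string '5' would reach the LIMIT
        // clause (and the binlog); with it, LIMIT sees the integer 5.
        std::cout << "LIMIT " << coerce_limit_param(from_connector) << "\n";
    }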
-
Venkata Sidagam authored
Fix for pb2 test failure.
-
Nirbhay Choubey authored
WORK + SAVES ROOT PASSWORD TO DISK!
The secure installation scripts connect to the server by storing the password in a temporary option file. If the script gets killed or fails for some reason, the removal of the option file may not take place. This patch introduces the following enhancements:
* (.sh) Made sure that cleanup happens at every call to 'exit 1'. (In the .pl.in script this is performed implicitly by the END{} block.)
* (.pl.in) Added a warning in case unlink fails to delete the option/query files.
* (.sh/.pl.in) Added more signals to the signal handler list: SIG# 1, 3, 6, 15.
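The scripts themselves are shell and Perl; the following is a hedged, self-contained C++ illustration of the same cleanup pattern (the file name and handler are invented for the example, not taken from the scripts):
    #include <csignal>
    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>   // unlink, _exit (POSIX)

    // Hypothetical temporary option file holding credentials for the example.
    static const char kOptionFile[] = "/tmp/example_option_file.cnf";

    // Normal-path cleanup: warn if the file cannot be removed (mirrors the
    // warning added to the .pl.in script).
    static void cleanup() {
        if (unlink(kOptionFile) != 0)
            std::fprintf(stderr, "warning: could not remove %s\n", kOptionFile);
    }

    // Signal-path cleanup: only async-signal-safe calls here.
    extern "C" void on_signal(int) {
        unlink(kOptionFile);
        _exit(1);
    }

    int main() {
        // SIG# 1, 3, 6, 15 from the commit message: HUP, QUIT, ABRT, TERM.
        std::signal(SIGHUP, on_signal);
        std::signal(SIGQUIT, on_signal);
        std::signal(SIGABRT, on_signal);
        std::signal(SIGTERM, on_signal);
        std::atexit(cleanup);   // analogous to cleaning up on every 'exit 1'
        // ... connect to the server using the option file, do the work ...
        return 0;
    }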
-
Tor Didriksen authored
-
Venkata Sidagam authored
Problem description: A table 't' is created with two columns and a compound index on both columns, under the innodb/myisam engine, on the remote machine. On the local machine the same table is created under the federated engine. A SELECT whose WHERE clause combines conditions with 'AND' gives wrong results on the local machine.
Analysis: The query given to the federated engine is wrongly transformed by ha_federated::create_where_from_key(), and the transformed query is sent to the remote machine; hence the local machine shows wrong results. Given the query "select c1 from t where c1 <= 2 and c2 = 1;", the query after ha_federated::create_where_from_key() is: SELECT `c1`, `c2` FROM `t` WHERE (`c1` IS NOT NULL ) AND ( (`c1` >= 2) AND (`c2` <= 1) ), and this is what is sent to real_query(). Here the '<=' and '=' conditions were transformed into '>=' and '<=' respectively. ha_federated::create_where_from_key() behaves as follows: the key_range has both a start_key and an end_key. The start_key is used to get the "(`c1` IS NOT NULL )" part of the where clause; this transformation is correct. The end_key is used to get "( (`c1` >= 2) AND (`c2` <= 1) )", which is wrong: the given conditions ('<=' and '=') are changed into the wrong conditions ('>=' and '<='). The end_key is {key = 0x39fa6d0 "", length = 10, keypart_map = 3, flag = HA_READ_AFTER_KEY} and store_length is '5'. Based on the store_length and length values, the condition is built in the HA_READ_AFTER_KEY switch case. The 'HA_READ_AFTER_KEY' case is applied only to the last part of the end_key; for the previous parts the code falls into the 'HA_READ_KEY_OR_NEXT' case, where '>=' is added as the condition instead of '<='.
Fix: Updated the 'if' condition in the 'HA_READ_AFTER_KEY' case so that it applies to all parts of the end_key, i.e. 'i > 0' is used for the end_key; hence it was added to the 'if' condition.
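Conceptually, every key part of the end_key must keep the '<='-style operator, not just the last one. A small, self-contained C++ sketch of that idea (the KeyPart type and build_end_key_condition helper are invented for illustration; the real logic lives in ha_federated::create_where_from_key()):
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct KeyPart { std::string column; std::string value; };

    // Build the end-of-range condition. Before the fix, only the last part got
    // the '<=' operator and earlier parts fell through to a '>=' branch,
    // inverting the range.
    std::string build_end_key_condition(const std::vector<KeyPart>& parts) {
        std::ostringstream out;
        out << "( ";
        for (std::size_t i = 0; i < parts.size(); ++i) {
            if (i > 0) out << " AND ";
            out << "(`" << parts[i].column << "` <= " << parts[i].value << ")";
        }
        out << " )";
        return out.str();
    }

    int main() {
        // For "select c1 from t where c1 <= 2 and c2 = 1;" the end_key carries
        // both columns, and both must be emitted with '<='.
        std::vector<KeyPart> end_key = {{"c1", "2"}, {"c2", "1"}};
        std::cout << "SELECT `c1`, `c2` FROM `t` WHERE (`c1` IS NOT NULL) AND "
                  << build_end_key_condition(end_key) << "\n";
    }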
-
- 25 Jul, 2012 1 commit
-
-
Annamalai Gurusami authored
Backporting WL#5716, "Information schema table for InnoDB buffer pool information". Backporting revisions 2876.244.113 and 2876.244.102 from mysql-trunk. rb://1175 approved by Jimmy Yang.
-
- 24 Jul, 2012 1 commit
-
-
Alexander Barkov authored
while the copyright notice still mentioned 2003.
-
- 19 Jul, 2012 3 commits
-
-
Bjorn Munch authored
-
Bjorn Munch authored
Added new minimal client using same framework. Added internal test using it. Small changes to top level make/configure/cmake to have it built.
-
Venkata Sidagam authored
Problem description: Giving "help 'contents'" in the mysql client as the first statement gives an error.
Analysis: In the com_server_help() function the "server_cmd" variable was initialised with buffer->ptr(). Because we pass "'contents'" (with single quotes), "server_cmd" is not updated, so buffer->ptr() still holds the previous buffer contents, and that is what gets sent to mysql_real_query(), hence the error.
Fix: We no longer initialise "server_cmd" from the buffer; instead we update the variable with "server_cmd= cmd_buf" in both cases, i.e. whether the contents are given with or without single quotes. As part of the error message improvement, a new error message was added for the "help 'contents'" case.
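A minimal C++ illustration of the shape of the fix: always rebuild the command into a local buffer and send that, instead of reusing whatever was left in the line buffer (the function and variable names below are simplified stand-ins for com_server_help(), not the client's actual code):
    #include <cstdio>
    #include <iostream>
    #include <string>

    // Stand-in for building the help query sent to the server.
    // Before the fix, when the argument arrived quoted ('contents'), the code
    // kept pointing at the previous statement's buffer; after the fix the
    // command is always rebuilt into cmd_buf.
    std::string build_help_query(const std::string& raw_arg) {
        std::string arg = raw_arg;
        if (arg.size() >= 2 && arg.front() == '\'' && arg.back() == '\'')
            arg = arg.substr(1, arg.size() - 2);   // strip single quotes

        char cmd_buf[1024];
        std::snprintf(cmd_buf, sizeof(cmd_buf), "help '%s'", arg.c_str());
        return std::string(cmd_buf);               // server_cmd= cmd_buf
    }

    int main() {
        // First statement in a fresh client session: no stale buffer involved.
        std::cout << build_help_query("'contents'") << "\n";  // help 'contents'
        std::cout << build_help_query("contents") << "\n";    // help 'contents'
    }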
-
- 18 Jul, 2012 1 commit
-
-
Chaithra Gopalareddy authored
"ORDER BY" AND "LIMIT BY" CLAUSE PROBLEM: When a 'limit' clause is specified in a query along with group by and order by, optimizer chooses wrong index there by examining more number of rows than required. However without the 'limit' clause, optimizer chooses the right index. ANALYSIS: With respect to the query specified, range optimizer chooses the first index as there is a range present ( on 'a'). Optimizer then checks for an index which would give records in sorted order for the 'group by' clause. While checking chooses the second index (on 'c,b,a') based on the 'limit' specified and the selectivity of 'quick_condition_rows' (number of rows present in the range) in 'test_if_skip_sort_order' function. But, it fails to consider that an order by clause on a different column will result in scanning the entire index and hence the estimated number of rows calculated above are wrong (which results in choosing the second index). FIX: Do not enforce the 'limit' clause in the call to 'test_if_skip_sort_order' if we are creating a temporary table. Creation of temporary table indicates that there would be more post-processing and hence will need all the rows. This fix is backported from 5.6. This problem is fixed in 5.6 as part of changes for work log #5558
-
- 12 Jul, 2012 1 commit
-
-
Annamalai Gurusami authored
RBR AND RC
Description: When scanning and locking rows with < or <=, InnoDB locks the next row even though row-based binary logging and read committed are used.
Solution: In the handler, when a row is identified as falling outside of the range (as specified in the query predicates), request the storage engine to unlock the row (if possible). This is done in handler::read_range_first() and handler::read_range_next().
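A self-contained C++ sketch of the handler-level idea: after fetching a row during a range scan, compare it with the end of the range and, if it falls outside, ask the storage engine to release the lock (the types and names below are illustrative stand-ins for handler::read_range_next() and unlock_row(), not the server's classes):
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Minimal stand-in for a storage engine that locks each row it reads.
    struct Engine {
        std::vector<int> rows{1, 2, 3, 4, 5};
        std::size_t pos = 0;
        bool next(int& out) {
            if (pos >= rows.size()) return false;
            out = rows[pos++];
            return true;
        }
        void unlock_last_row() { std::cout << "unlock row " << rows[pos - 1] << "\n"; }
    };

    // Sketch of read_range_next(): a row beyond end_value is outside the range
    // given by the query predicate, so its lock is released instead of being
    // kept until commit.
    bool read_range_next(Engine& eng, int end_value, int& out) {
        if (!eng.next(out)) return false;
        if (out > end_value) {            // row is outside WHERE col <= end_value
            eng.unlock_last_row();        // request the engine to unlock it
            return false;                 // end of range reached
        }
        return true;
    }

    int main() {
        Engine eng;
        int row;
        while (read_range_next(eng, /*end_value=*/3, row))
            std::cout << "use row " << row << "\n";
    }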
-
- 11 Jul, 2012 1 commit
-
-
bjorn.munch@oracle.com authored
-
- 10 Jul, 2012 7 commits
-
-
mysql-builder@oracle.com authored
No commit message
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Bjorn Munch authored
-
Andrei Elkin authored
-
Sujatha Sivakumar authored
Problem:
=======
The return value from my_b_write is ignored by `my_b_write_quoted', `my_b_write_bit' and `Query_log_event::print_query_header'. Most callers of `my_b_printf' ignore the return value; `log_event.cc' has many calls to it.
Analysis:
========
`my_b_write' is used to write data into a file. If the write fails it sets an appropriate error number and error message through a my_error() call and sets IO_CACHE::error == -1. `my_b_printf' is also used to write data into a file; it internally invokes my_b_write to do the write operation. Upon success it returns the number of characters written to the file, and on error it returns -1, sets the error through my_error() and also sets IO_CACHE::error == -1. Most of the event-specific print functions, for example `Create_file_log_event::print' and `Execute_load_log_event::print', make several calls to the above two functions and do not check the return value after the 'print' call. All of the above-mentioned misuse cases deal with the client side.
Fix:
===
As part of the bug fix, a check for IO_CACHE::error == -1 has been added at a very high level after the call to the 'print' function. There are a few more places where the return value of "my_b_write" is ignored; they are mentioned below.
+++ mysys/mf_iocache2.c 2012-06-04 07:03:15 +0000
@@ -430,7 +430,8 @@
      memset(buffz, '0', minimum_width - length2);
    else
      memset(buffz, ' ', minimum_width - length2);
-    my_b_write(info, buffz, minimum_width - length2);
+++ sql/log.cc 2012-06-08 09:04:46 +0000
@@ -2388,7 +2388,12 @@
  {
    end= strxmov(buff, "# administrator command: ", NullS);
    buff_len= (ulong) (end - buff);
-   my_b_write(&log_file, (uchar*) buff, buff_len);
At these places appropriate return value handlers have been added.
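A compact C++ illustration of the pattern the fix introduces: propagate write failures instead of ignoring them, and check the cache's error flag once at a high level after printing (the Cache type and function names are simplified stand-ins for IO_CACHE, my_b_write() and the event print functions):
    #include <cstdio>
    #include <string>

    // Minimal stand-in for IO_CACHE: error becomes -1 once any write fails.
    struct Cache {
        std::FILE* f;
        int error;
    };

    // Stand-in for my_b_write(): returns non-zero and flags the cache on failure.
    int cache_write(Cache& c, const std::string& data) {
        if (std::fwrite(data.data(), 1, data.size(), c.f) != data.size()) {
            c.error = -1;
            return 1;
        }
        return 0;
    }

    // Stand-in for an event's print(): every write's return value is checked.
    int print_event(Cache& c) {
        if (cache_write(c, "# header\n")) return 1;
        if (cache_write(c, "BEGIN\n"))    return 1;
        return 0;
    }

    int main() {
        Cache c{stdout, 0};
        print_event(c);
        // High-level check added by the fix: bail out if any print failed.
        if (c.error == -1) {
            std::fprintf(stderr, "error writing output\n");
            return 1;
        }
        return 0;
    }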
-
Bjorn Munch authored
-
- 09 Jul, 2012 2 commits
-
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 05 Jul, 2012 2 commits
-
-
Andrei Elkin authored
Fixes for BUG11761686 left a flaw that managed to slip away from testing: only the effective filtering branch was actually tested, with a regression test added to rpl_filter_tables_not_exist. The reason for the failure is destruction, too early, of mem-root-allocated memory at the end of the deferred User-var's do_apply_event(). Fixed by bypassing free_root() in the deferred execution branch. Deallocation of the items created in do_apply_event() is done by the base code through THD::cleanup_after_query() -> free_items(), which the parent Query event can't miss.
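The shape of the fix, as a self-contained C++ sketch: memory allocated for a deferred event must not be freed at the end of do_apply_event(); it is released later by the statement-level cleanup that frees all items (the MemRoot/Item types below are simplified stand-ins, not the server's classes):
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Simplified stand-ins for the server's mem-root and Item list.
    struct Item { std::string value; };
    struct MemRoot { std::vector<std::unique_ptr<Item>> items; };

    void apply_user_var_event(MemRoot& root, bool deferred) {
        root.items.push_back(std::make_unique<Item>(Item{"@v := 1"}));
        if (!deferred) {
            // Immediate execution: safe to free what we allocated here.
            root.items.clear();                      // free_root()
        }
        // Deferred execution: bypass free_root(); the item must stay alive
        // until the parent Query event runs and statement cleanup frees it.
    }

    void cleanup_after_query(MemRoot& root) {
        root.items.clear();                          // free_items() path
    }

    int main() {
        MemRoot root;
        apply_user_var_event(root, /*deferred=*/true);
        std::cout << "items alive before query cleanup: " << root.items.size() << "\n";
        cleanup_after_query(root);
        std::cout << "items alive after query cleanup: " << root.items.size() << "\n";
    }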
-
Georgi Kodinov authored
HANDLE_FATAL_SIGNAL IN STRNLEN
Fixed the following bounds checking problems:
1. In check_if_legal_filename(), make sure the null-terminated string is long enough before accessing the bytes in it. Prevents a potential read past the end of the buffer.
2. In my_wc_mb_filename() of the filename charset, check for the end of the destination buffer before writing single-byte characters into it. Prevents write-past-end-of-buffer errors (and garbling of the stack in the cases reported here).
Added test cases.
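Both fixes are classic bounds checks; here is a hedged, self-contained C++ sketch of the two patterns (the function names echo the commit message, but the bodies are simplified illustrations, not the real charset/filename code):
    #include <cstddef>
    #include <cstring>
    #include <iostream>

    // Pattern 1 (check_if_legal_filename): verify the string is long enough
    // before reading a fixed number of bytes from it.
    bool starts_with_reserved_name(const char* name, std::size_t len) {
        static const char kReserved[] = "CON";
        if (len < sizeof(kReserved) - 1)           // too short: nothing to compare
            return false;                          // avoids reading past the end
        return std::strncmp(name, kReserved, sizeof(kReserved) - 1) == 0;
    }

    // Pattern 2 (my_wc_mb_filename): check the destination end before writing,
    // so an escape sequence cannot run past the buffer.
    int put_escaped_byte(unsigned char c, unsigned char* dst, unsigned char* dst_end) {
        if (dst_end - dst < 2)                     // need room for '@' + c
            return -1;                             // caller must grow the buffer
        dst[0] = '@';
        dst[1] = c;
        return 2;
    }

    int main() {
        std::cout << starts_with_reserved_name("CO", 2) << "\n";            // 0, no overread
        unsigned char buf[1];
        std::cout << put_escaped_byte('x', buf, buf + sizeof(buf)) << "\n"; // -1
    }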
-
- 03 Jul, 2012 1 commit
-
-
Rohit Kalhans authored
This is a follow-up patch for the bug, enabling the test i_binlog.binlog_mysqlbinlog_file_write.test. This test was disabled in mysql trunk and mysql 5.5 because in the release build mysqlbinlog was not debug compiled whereas mysqld was. Since the have_debug.inc script checks only whether mysqld is debug compiled, the test was not being skipped on release builds. We resolve this problem by creating a new include file, mysqlbinlog_have_debug.inc, which checks exclusively whether mysqlbinlog is debug compiled; if not, it skips the test.
-
- 29 Jun, 2012 1 commit
-
-
Gleb Shchepa authored
-
- 28 Jun, 2012 1 commit
-
-
Georgi Kodinov authored
Several fixes:
* sql-common/client.c Added a validity check of the field metadata packet sent by the server. libmysql now checks whether the length of the data sent by the server matches what the protocol expects before using the data.
* client/mysqltest.cc Fixed the error handling code in mysqltest to avoid sending new commands when reading the result set failed (and there is unread data in the pipe).
* sql_common.h + libmysql/libmysql.c + sql-common/client.c unpack_fields() now generates a proper error when it fails. Added a new argument to this function to support the error generation.
* sql/protocol.cc Added a debug trigger to cause the server to send a NULL instead of the packet expected by the client, for testing purposes.
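The central client-side change amounts to "trust nothing from the wire: check the declared length against the bytes actually remaining before reading them". A hedged, self-contained C++ sketch of that validation (the packet layout and names are simplified; the real check lives in sql-common/client.c and unpack_fields()):
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Read one length-prefixed string from a metadata packet, validating that
    // the declared length fits inside the remaining packet bytes.
    bool read_len_prefixed_string(const std::vector<uint8_t>& pkt, std::size_t& pos,
                                  std::string& out) {
        if (pos >= pkt.size()) return false;            // no length byte left
        std::size_t len = pkt[pos++];                   // 1-byte length for the sketch
        if (len > pkt.size() - pos) return false;       // declared length is a lie
        out.assign(reinterpret_cast<const char*>(&pkt[pos]), len);
        pos += len;
        return true;
    }

    int main() {
        // Malformed packet: declares 200 bytes but carries only 3.
        std::vector<uint8_t> pkt = {200, 'a', 'b', 'c'};
        std::size_t pos = 0;
        std::string name;
        if (!read_len_prefixed_string(pkt, pos, name))
            std::cerr << "malformed field metadata packet\n";  // proper error, no crash
    }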
-
- 29 Jun, 2012 2 commits
-
-
Jon Olav Hauglid authored
This patch fixes various compilation warnings of the type "error: narrowing conversion of 'x' from 'datatype1' to 'datatype2'".
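For context, this is the kind of warning involved and the usual shape of such a fix: brace initialization forbids implicit narrowing, so the conversion is made explicit (a generic C++ illustration, not code from the patch):
    int main() {
        long big = 42;
        // int narrow{big};                    // error: narrowing conversion
        int narrow{static_cast<int>(big)};     // fix: state the conversion explicitly
        (void)narrow;
        return 0;
    }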
-
Gleb Shchepa authored
Print the warning (note) "YEAR(x) is deprecated and will be removed in a future release. Please use YEAR(4) instead" on "CREATE TABLE ... YEAR(x)" or "ALTER TABLE MODIFY ... YEAR(x)", where x != 4.
-
- 28 Jun, 2012 1 commit
-
-
Norvald H. Ryeng authored
-
- 19 Jun, 2012 1 commit
-
-
Harin Vadodaria authored
INC_HOST_ERRORS() IS CALLED.
Description: Reverting patch 3755 for bug#11753779.
-
- 18 Jun, 2012 1 commit
-
-
Norvald H. Ryeng authored
Problem: Some queries with subqueries and a HAVING clause that consists only of a column not in the select or grouping lists cause the server to crash.
During parsing, an Item_ref is constructed for the HAVING column. The name of the column is resolved when JOIN::prepare calls fix_fields() on its having clause. Since the column is not mentioned in the select or grouping lists, a ref pointer is not found and a new Item_field is created instead. The Item_ref is replaced by the Item_field in the tree of HAVING clauses. Since the tree consists only of this item, the pointer that is updated is JOIN::having. However, st_select_lex::having still points to the Item_ref as the root of the tree of HAVING clauses.
The bug is triggered when doing filesort for create_sort_index(). When find_all_keys() calls select->cond->walk(), it eventually reaches Item_subselect::walk(), where it continues to walk the having clauses from lex->having. This means that it finds the Item_ref instead of the new Item_field, and Item_ref::walk() tries to dereference the ref pointer, which is still null. The crash is reproducible only in 5.5, but the problem lies latent in 5.1 and trunk as well.
Fix: After calling fix_fields on the having clause in JOIN::prepare(), set select_lex::having to point to the same item as JOIN::having.
This patch also fixes a bug in 5.1 and 5.5 that is triggered if the query is executed as a prepared statement. The Item_field is created in the runtime arena when the query is prepared, and the pointer to the item is saved by st_select_lex::fix_prepare_information() and brought back as a dangling pointer when the query is executed, after the runtime arena has been reclaimed.
Fix: Backport the fix from trunk that switches to the permanent arena before calling Item_ref::fix_fields() in JOIN::prepare().
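A self-contained C++ sketch of the core pointer bookkeeping: after name resolution replaces the HAVING item, the parse-tree owner must be updated to point at the same item, otherwise a later tree walk dereferences the stale node (the types below are toy stand-ins for Item, JOIN and st_select_lex, not the server's classes):
    #include <iostream>
    #include <memory>
    #include <string>

    struct Item { std::string kind; };

    struct SelectLex { Item* having = nullptr; };   // parse-tree view of HAVING
    struct Join      { Item* having = nullptr; };   // execution view of HAVING

    // Stand-in for fix_fields(): the unresolved ref is replaced by a new field item.
    Item* fix_fields(Item* /*unresolved*/, std::unique_ptr<Item>& storage) {
        storage = std::make_unique<Item>(Item{"Item_field"});
        return storage.get();
    }

    int main() {
        std::unique_ptr<Item> ref = std::make_unique<Item>(Item{"Item_ref"});
        std::unique_ptr<Item> field;

        SelectLex select_lex;
        Join join;
        select_lex.having = ref.get();
        join.having = ref.get();

        // JOIN::prepare(): resolve HAVING, which swaps the item.
        join.having = fix_fields(join.having, field);

        // The fix: keep both owners pointing at the same resolved item,
        // so walking the clause from select_lex no longer hits the stale ref.
        select_lex.having = join.having;

        std::cout << select_lex.having->kind << "\n";   // Item_field
    }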
-
- 15 Jun, 2012 1 commit
-
-
kent.boortz@oracle.com authored
-
- 14 Jun, 2012 1 commit
-
-
sayantan.dutta@oracle.com authored
-
- 13 Jun, 2012 1 commit
-
-
Harin Vadodaria authored
INC_HOST_ERRORS() IS CALLED.
Issue: The sequence of calling inc_host_errors() and reset_host_errors() required some changes in order to maintain a correct connection error count.
Solution: The call to reset_host_errors() is shifted to a location after which no calls to inc_host_errors() are made.
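In other words, the counter may only be reset once every code path that can still increment it has run. A tiny, self-contained C++ illustration of that ordering (the host cache is reduced to a single counter here; this is not the server's code):
    #include <iostream>

    int connection_errors = 3;     // stand-in for the per-host error count

    void inc_host_errors()   { ++connection_errors; }
    void reset_host_errors() { connection_errors = 0; }

    int main() {
        // Wrong order: a failure path running after the reset re-inflates the count.
        reset_host_errors();
        inc_host_errors();                       // late failure during the handshake
        std::cout << connection_errors << "\n";  // 1, even though the connect succeeded

        // Fixed order: reset only after the last possible inc_host_errors() call.
        connection_errors = 3;
        inc_host_errors();                       // any remaining failure paths run first
        reset_host_errors();                     // then clear on successful connect
        std::cout << connection_errors << "\n";  // 0
    }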
-
- 12 Jun, 2012 1 commit
-
-
Manish Kumar authored
Problem
========
Replication breaks if the event length exceeds the size of the master Dump thread's max_allowed_packet. This failure occurs because the event length, with the max_event_header length added, exceeds the max_allowed_packet of the Dump thread, causing the Dump thread to break replication and throw an error. This can happen e.g. with row-based replication in an Update_rows event.
Fix
====
The problem is fixed in 2 steps:
1.) The Dump thread's limit for reading an event is raised to the upper limit, i.e. the Dump thread reads whatever gets logged in the binary log.
2.) On the slave side, the max_allowed_packet for the slave's threads (IO/SQL) is increased to 1GB. This is done using the new server option slave_max_allowed_packet, which lets the DBA regulate the max_allowed_packet of the slave threads (IO/SQL) and facilitates the sending of large packets from the master to the slave. As a result the large packets are received by the slave and applied successfully.
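A self-contained C++ sketch of the two-step idea, with the limits reduced to constants (the session limit value below is hypothetical and only the 1GB slave figure comes from the commit message; the checks are simplified for illustration):
    #include <cstdint>
    #include <iostream>

    constexpr uint64_t kMaxAllowedPacket      = 4ULL * 1024 * 1024;      // hypothetical session limit
    constexpr uint64_t kSlaveMaxAllowedPacket = 1024ULL * 1024 * 1024;   // 1GB slave_max_allowed_packet

    // Step 1: the Dump thread reads whatever was logged, so it never refuses
    // an event merely because it exceeds the session max_allowed_packet.
    bool dump_thread_can_send(uint64_t event_len) {
        (void)event_len;
        return true;
    }

    // Step 2: the slave threads accept packets up to slave_max_allowed_packet.
    bool slave_can_receive(uint64_t packet_len) {
        return packet_len <= kSlaveMaxAllowedPacket;
    }

    int main() {
        uint64_t big_row_event = 16ULL * 1024 * 1024;   // e.g. a large Update_rows event
        std::cout << (big_row_event > kMaxAllowedPacket) << "\n";  // 1: over the old limit
        std::cout << dump_thread_can_send(big_row_event) << "\n";  // 1
        std::cout << slave_can_receive(big_row_event) << "\n";     // 1
    }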
-
- 05 Jun, 2012 1 commit
-
-
Tor Didriksen authored
Patch for 5.1 and 5.5: fix typo in byte comparison in rr_cmp()
-