- 11 Mar, 2015 3 commits
-
-
Sreeharsha Ramanavarapu authored
This reverts commit c7de768ec20f5167cff2c69a255d95ca2eded46a.
-
Thirunarayanan Balathandayuthapani authored
RESERVATION AND SIGNAL COUNT
Problem: The Reservation and Signal count values show negative values in the SHOW ENGINE INNODB STATUS output.
Solution: This happens because of a counter overflow: the Reservation and Signal counts are defined as unsigned long, but the variables are converted to long when printed. Print the Reservation and Signal count values as unsigned long instead.
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Approved in bug page.
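A minimal sketch of the symptom and the fix, assuming nothing about the InnoDB internals beyond the unsigned long counter type mentioned above:

```cpp
#include <cstdio>
#include <climits>

int main() {
  // A counter declared unsigned long, as the Reservation/Signal counts are,
  // after it has grown past LONG_MAX.
  unsigned long count = static_cast<unsigned long>(LONG_MAX) + 42UL;

  // Printing it through a signed conversion (the bug) shows a negative value
  // on typical two's-complement platforms -- the symptom seen in
  // SHOW ENGINE INNODB STATUS.
  std::printf("as long          : %ld\n", static_cast<long>(count));

  // Printing it as the unsigned value it really is (the fix) shows the true count.
  std::printf("as unsigned long : %lu\n", count);
  return 0;
}
```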
-
Sreeharsha Ramanavarapu authored
MYISAM TABLE CAUSES THE SERVER TO CRASH
Issue: During index maintenance, an R-tree node might need a split. In some cases the square of the MBR can evaluate to infinity (as in this case) or to NaN, which is currently not handled. This is specific to MyISAM.
Solution: If the value calculated in "mbr_join_square" is infinite or NaN, set it to the maximum double value. The output parameters of "pick_seeds" also need to be initialized if the calculation is infinite (or negative infinite). Similar to the fix made for InnoDB as part of Bug#19533996.
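A self-contained sketch of the clamping described in the solution; the function name and arguments are illustrative stand-ins for the area computation in mbr_join_square, not the MyISAM code itself:

```cpp
#include <cmath>
#include <cfloat>

// Stand-in for the area computed when two minimum bounding rectangles are
// joined: extreme coordinates can make the product overflow to infinity or
// become NaN, so clamp it to the largest finite double before using it.
double joined_mbr_square(double width, double height) {
  double square = width * height;
  if (std::isinf(square) || std::isnan(square))
    square = DBL_MAX;
  return square;
}
```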
-
- 03 Mar, 2015 1 commit
-
-
Annamalai Gurusami authored
Problem: This is a coding mistake in error handling. When the specified foreign key constraint is wrong because of a data type mismatch, the resulting foreign key object will not have a valid foreign->id (it will be NULL).
Solution: While removing the foreign key object from the dictionary cache during error handling, ensure that foreign->id is not NULL before using it.
rb#8204 approved by Sunny.
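A rough sketch of the defensive pattern the solution describes, using stand-in types rather than the InnoDB dictionary structures:

```cpp
#include <string>
#include <unordered_map>

// Illustrative stand-in for a foreign key object whose id is only assigned
// once the constraint has passed validation.
struct ForeignKey {
  const char *id;  // NULL when the constraint was rejected (e.g. type mismatch)
};

// When cleaning up after a failed statement, only touch the cache entry if an
// id was actually assigned -- dereferencing a NULL id is what crashed.
void remove_from_cache(std::unordered_map<std::string, ForeignKey *> &cache,
                       ForeignKey *foreign) {
  if (foreign->id != nullptr)
    cache.erase(foreign->id);
}
```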
-
- 01 Mar, 2015 2 commits
-
-
Terje Røsten authored
The log file directory had too strict access rights, so the server was not able to write to the log file.
Signed-off-by: Terje Røsten <terje.rosten@oracle.com>
-
Balasubramanian Kandasamy authored
Due to the large version numbers used, libmysqlclient-devel packages were not considered for installation. Fixed by removing the version check.
-
- 26 Feb, 2015 1 commit
-
-
Chaithra Gopalareddy authored
3RD EXECUTION OF PS
Problem: When the ORDER BY of a GROUP_CONCAT function that has an outer reference uses a column number, the server fails on the third execution of a prepared statement.
Analysis: When a GROUP_CONCAT function has ORDER BY and the input is a column number, the ORDER BY fields are not resolved until execution. During execution they get resolved after the temp table is created, so they end up pointing to temp table fields, which are objects created at runtime. This results in dangling pointers and leads to the server failure.
Solution: After execution, reset the pointers for the ORDER BY fields to point to the original arguments, since the temp table fields are no longer valid. Done in Item_func_group_concat::cleanup.
-
- 25 Feb, 2015 1 commit
-
-
Mithun C Y authored
Issue: There can be up to MERGEBUFF2 sorted merge chunks, and we need enough buffer space for at least one record from each merge chunk. If the estimate is wrong (too low) and we allocate buffer space for fewer than MERGEBUFF2 records, merge_buffers runs into trouble when the actual number of rows to be sorted is bigger than the estimate and an external filesort is chosen.
Solution: Set the number of rows to sort to at least MERGEBUFF2.
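A minimal sketch of the clamping described in the solution; the value 15 for MERGEBUFF2 is the conventional constant from the filesort code and should be treated as an assumption here:

```cpp
#include <algorithm>

// Assumed value of MERGEBUFF2 (the maximum number of merge chunks handled in
// one pass); the real constant lives in the sort code.
static const unsigned long MERGEBUFF2 = 15;

// Whatever the optimizer estimated, never size the sort for fewer rows than
// the number of merge chunks, so merge_buffers() can always hold one record
// from each chunk.
unsigned long rows_to_sort(unsigned long estimated_rows) {
  return std::max(estimated_rows, MERGEBUFF2);
}
```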
-
- 24 Feb, 2015 1 commit
-
-
Vishal Chaudhary authored
-
- 20 Feb, 2015 1 commit
-
-
Chaithra Gopalareddy authored
Problem: find_order_by_list does not update the address of order_item correctly after resolving.
Solution: If an ORDER BY field is a SUM_FUNC_ITEM, change its ref_by address to the address of the field present in all_fields.
-
- 18 Feb, 2015 1 commit
-
-
Chaithra Gopalareddy authored
Problem: While getting the temp table field for a REF_ITEM, make_sortorder uses the real_item. As a result, the server fails later with an assert.
Solution: Do not use the real_item to get the temp table field. Instead, use the REF_ITEM itself, since temp table fields are created for the REF_ITEM, not the real_item.
-
- 06 Feb, 2015 1 commit
-
-
Praveenkumar.Hulakund authored
In versions 5.5 and 5.6 the MySQL version is not logged until the server is started and ready to accept connections; if the server exits before this point, the log contains no server version information. In the 5.7 code, the server version is logged right after the server_version string is prepared and logging is initialized. This change adds the same code to 5.5 and 5.6 to print the server version information.
Test results:
5.5 - the server version is logged as below on server startup:
141218 8:45:48 [Note] /home/praveen/WorkDir/mysql_local/bug20052694/mysql/sql/mysqld (mysqld 5.5.42-debug-log) starting as process 19697 ...
5.6 - the server version is logged as below on server startup:
2014-12-18 09:08:43 0 [Note] /home/praveen/WorkDir/mysql_local/bug20052694/mysql-5.6/sql/mysqld (mysqld 5.6.23-debug-log) starting as process 18474 ...
-
- 05 Feb, 2015 1 commit
-
-
sreeharsha authored
LEADS TO INCORRECT BEHAVIOR
Issue: A server crash occurs when the following conditions are satisfied in a query:
a) Two rows are compared using the NULL-safe equal-to operator.
b) The two rows belong to different charsets.
Solution: When one charset is converted to another for comparison, the constructor of "Item_func_conv_charset" is called. It attempts to use the Item_cache if the string is a constant, and this check succeeds because the "used_table_map" of the Item_cache class is never set to the correct value. Since the value is mistakenly assumed to be a constant, it tries to fetch the relevant NULL-value-related fields, which are not yet initialized. This results in valgrind issues and wrong results. The fix is to update the "used_table_map" of "Item_cache", which lets "Item_func_conv_charset" realise that this is not a constant.
-
- 03 Feb, 2015 1 commit
-
-
Bala authored
-
- 30 Jan, 2015 1 commit
-
-
Mithun C Y authored
Issue: We pre-allocate the ref_pointer_array before we resolve outer references, so in some cases it may not be large enough to hold all references created. One such case is an aggregate function in the HAVING clause of a subquery, which may add items to the select list of the outer query. It is therefore necessary to take select_n_having_items of subqueries into account when allocating the ref_pointer_array, otherwise we get a buffer overflow.
Solution: Allocate a larger ref_pointer_array by aggregating select_n_having_items over subqueries. The fix in sql_yacc.yy is a backport from bug fix 18782905.
-
- 28 Jan, 2015 1 commit
-
-
Arun Kuruvila authored
CRASHES WITH AUTO_INCREMENT COLUMN
Description: Creating a federated table with an AUTO_INCREMENT column using the LIKE clause results in a server crash.
Analysis: The crash is caused by the uninitialized connection structure (mysql). In addition, because no connection string for the remote server is assigned when the "create_info" structure is prepared, creating any federated table using the LIKE clause fails with "ERROR 1 (HY000): server name: '' doesn't exist!". So the bug is not limited to AUTO_INCREMENT; it affects all creations of federated tables with the LIKE clause.
Fix: In ha_federated::info(), assign "mysql->insert_id" to "stats.auto_increment_value" only when there is an active connection; this fixes the crash. For creating a federated table with the LIKE clause, the connection string is now assigned when the "create_info" structure is prepared.
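A small sketch of the guard described in the fix, with stand-in types for the handler statistics and the remote connection; only the "check the connection before reading insert_id" idea is taken from the text above:

```cpp
#include <cstdint>

// Illustrative stand-ins for the remote connection and the handler stats.
struct RemoteConnection { uint64_t insert_id; };
struct HandlerStats     { uint64_t auto_increment_value; };

// Before the fix insert_id was read unconditionally; with no active
// connection the pointer is NULL and the server crashed.
void update_stats(const RemoteConnection *mysql, HandlerStats &stats) {
  if (mysql != nullptr)  // only with an active connection
    stats.auto_increment_value = mysql->insert_id;
}
```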
-
- 27 Jan, 2015 1 commit
-
-
Nisha Gopalakrishnan authored
Backporting the patch and the test case fixed as part of BUG#16041903 and BUG#19683834 respectively.
-
- 23 Jan, 2015 1 commit
-
-
Jon Olav Hauglid authored
The problem was that the maximum value of the transaction_prealloc_size session system variable was ULONG_MAX, which meant that it was possible to cause the server to allocate excessive amounts of memory. This patch fixes the problem by reducing the maximum value of transaction_prealloc_size and transaction_alloc_block_size down to 128K. Note that transactions will still be able to allocate more than 128K if needed; this patch just reduces the amount that can be preallocated, as well as the maximum size of the incremental allocation blocks.
-
- 19 Jan, 2015 2 commits
-
-
s.sujatha authored
Fixing a post-push test issue.
-
Thayumanavar authored
Problem Description And Fix: Inserting a fudged record into mysql.proc with the dbname column set to test and the name column empty causes a crash in mysqld when the command DROP DATABASE test is run. During DROP DATABASE test, mysql_rm_db calls lock_db_routines, which fetches the field 'name' from mysql.proc through the underlying storage engine API. The fetch yields a NULL value for the field, and the subsequent dereference in MDL_request::init leads to the crash. Modifying mysql.proc with SQL commands is not supported, but in principle mysql.proc can become corrupted, which can also lead to empty fields and arbitrary values. The patch fixes the crash by checking for NULL and propagating the appropriate error code to the user.
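A sketch, with hypothetical names, of the kind of NULL check the patch adds when reading routine names out of mysql.proc:

```cpp
#include <cstdio>

// Hypothetical result codes; the real code reads the 'name' field of
// mysql.proc through the storage engine API and then initializes an MDL
// request with it.
enum proc_read_result { PROC_OK, PROC_CORRUPT };

proc_read_result lock_one_routine(const char *routine_name) {
  if (routine_name == nullptr) {
    // A corrupted mysql.proc row: report a clean error instead of
    // dereferencing NULL later in MDL_request::init.
    std::fprintf(stderr, "mysql.proc is corrupted\n");
    return PROC_CORRUPT;
  }
  // ... would initialize the metadata lock request for routine_name here ...
  return PROC_OK;
}
```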
-
- 16 Jan, 2015 1 commit
-
-
Bala authored
-
- 15 Jan, 2015 1 commit
-
-
Jon Olav Hauglid authored
Rename a CMake variable in compile_flags.cmake to avoid triggering the CMake 3.1 CMP0054 warning about interpreting if() arguments as keywords or variables. No changes in behavior.
-
- 14 Jan, 2015 3 commits
-
-
Bala authored
-
Venkatesh Duggirala authored
special character sets like utf16, utf32, ucs2.
Analysis: The MySQL server does not support a few special character sets, such as utf16, utf32 and ucs2, as the client's character set. This is a known limitation listed in the documentation http://dev.mysql.com/doc/refman/5.5/en/charset-connection.html. The default value of the default-character-set parameter is 'auto', which means that if the server's character set is not supported as a client character set, the server automatically changes the client's character set to a predefined one, which is 'latin1' in the current code. For example:
$ ./mysql -uroot -S$SOCKET_FILE --default-character-set=utf16
ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'utf16'
$ ./mysql -uroot -S$SOCKET_FILE
connects successfully with 'latin1' as the default client-side character set.
When the IO thread connects to the master, it sets the server's character set as the client's character set. When the slave server is started with one of these special character sets, the IO thread (which is essentially a connection to the master) fails because of the limitation above.
Fix: The IO thread now behaves the same way as a regular client: if the server's character set is not supported as a client character set, the default client character set (latin1) is used instead.
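A minimal sketch of the fallback the fix gives the IO thread; the helper names and the hard-coded list of unsupported client character sets are assumptions for illustration:

```cpp
#include <string>

// Character sets the server cannot use as a client character set
// (per the limitation described above).
static bool usable_as_client_charset(const std::string &cs) {
  return cs != "utf16" && cs != "utf32" && cs != "ucs2";
}

// Pick the character set the IO thread should present as its client
// character set: the server's own charset when allowed, otherwise the
// predefined default, latin1.
std::string io_thread_client_charset(const std::string &server_cs) {
  return usable_as_client_charset(server_cs) ? server_cs
                                             : std::string("latin1");
}
```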
-
Praveenkumar.Hulakund authored
An attempt to truncate a temporary table that uses the Blackhole engine and is locked by LOCK TABLES caused an assertion failure and crashes. Blackhole is a transaction-aware engine, so creating a temporary table in it creates a temporary table of type "TRANSACTIONAL_TMP_TABLE". For such temporary tables a THR_LOCK lock is acquired by the LOCK TABLE operation, and references to them are added to the MYSQL_LOCK::table[] array. The Blackhole engine also has the HTON_CAN_RECREATE flag set. When truncating temporary tables, no locks are taken and recreate_temporary_table() is called for engines having HTON_CAN_RECREATE in their flags. recreate_temporary_table() calls closefrm() to close the current temporary table, and closefrm() expects the lock on the table to be "F_UNLCK". In debug builds, the assert on this fails when a lock of type "F_WRLCK" has been acquired by the LOCK TABLE operation on a Blackhole temporary table. In non-debug builds closefrm() simply freed the TABLE object, leaving a dangling pointer to it in the MYSQL_LOCK::table[] array, which might lead to crashes later.
Fix: We now unlock and remove the table from the MYSQL_LOCK::table[] array before calling close_temporary_table() in recreate_temporary_table(). This is achieved by calling mysql_lock_remove() for this table.
-
- 06 Jan, 2015 1 commit
-
-
Bala authored
-
- 05 Jan, 2015 1 commit
-
-
Bala authored
-
- 02 Jan, 2015 1 commit
-
-
Harin Vadodaria authored
Explicitly disable weaker SSL protocols.
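A hedged sketch of what explicitly disabling weaker protocols can look like against an OpenSSL-style API (yaSSL emulates a subset of it); the exact option set and call site used by the actual fix are not stated above, so treat this as an illustration only:

```cpp
#include <openssl/ssl.h>

// Build a server-side SSL context that refuses the legacy protocol versions.
SSL_CTX *make_server_ctx() {
  SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
  if (ctx != nullptr)
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
  return ctx;
}
```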
-
- 30 Dec, 2014 1 commit
-
-
Harin Vadodaria authored
Upgrading YaSSL from 2.3.5 to 2.3.7 Reviewed-by : Kristofer Pettersson <kristofer.pettersson@oracle.com> Reviewed-by : Vamsikrishna Bhagi <vamsikrishna.bhagi@oracle.com>
-
- 29 Dec, 2014 1 commit
-
-
s.sujatha authored
Fix: Backport Bug#11756194 to mysql-5.5: the slave breaks if 'drop database' fails on the master and tables are mismatched on the slave. 'DROP TABLE <deleted tables>' was binlogged when 'DROP DATABASE' failed and at least one table had been deleted from the database. The log event would make the slave SQL thread stop if some of the tables did not exist on the slave. After this patch, the statement is always binlogged with the 'IF EXISTS' option.
-
- 24 Dec, 2014 1 commit
-
-
Thiru authored
CRASHES ON EVERY START ATTEMPT
Description: The push_warning_printf function is used to send a warning message to the client, so it should not be invoked while recovering the server. Moreover, current_thd is NULL while the server is starting up.
Solution: Avoid printing the warning during recovery. This patch was already pushed in mysql-5.6.
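A small sketch of the guard the description implies, with an illustrative warning helper standing in for push_warning_printf; the real fix simply skips the warning during recovery:

```cpp
#include <cstdio>

struct THD;  // connection descriptor; NULL during startup/recovery

// Stand-in for the client-facing warning channel.
static void push_client_warning(THD *, const char *msg) {
  std::fprintf(stderr, "Warning (to client): %s\n", msg);
}

// Only use the client-facing channel when a connection THD exists; during
// crash recovery at startup there is none, so the warning is skipped.
void maybe_warn(THD *current_thd, const char *msg) {
  if (current_thd != nullptr)
    push_client_warning(current_thd, msg);
}
```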
-
- 22 Dec, 2014 1 commit
-
-
Thiru authored
- Moving the test case to the correct place.
-
- 11 Dec, 2014 1 commit
-
-
Tor Didriksen authored
Patch for 5.5
-
- 09 Dec, 2014 1 commit
-
-
Vamsikrishna Bhagi authored
CODE Fixed a failure on pb2 caused by the patch previously pushed.
-
- 05 Dec, 2014 1 commit
-
-
Harin Vadodaria authored
Generated new certificates with validity up to 2029.
-
- 03 Dec, 2014 1 commit
-
-
Vamsikrishna Bhagi authored
CODE
Problem: The UDF doesn't handle string-type arguments properly due to a misplaced break. The argument length is also not set properly when the argument is NULL.
Solution: Fixed the code by putting the break in the right place and setting the argument length to zero when the argument is NULL.
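A sketch of the corrected argument handling pattern; the struct below is a minimal stand-in for the UDF argument interface, and the function name is hypothetical:

```cpp
#include <cstring>

// Minimal stand-ins for the UDF interface fields used below; the real
// definitions live in the server headers.
enum Item_result { STRING_RESULT, REAL_RESULT, INT_RESULT, DECIMAL_RESULT };
struct UDF_ARGS {
  unsigned int arg_count;
  Item_result *arg_type;
  char **args;
  unsigned long *lengths;
};

// Each case ends with its own break (the original bug was a break placed so
// that the string case fell through), and a NULL argument gets an explicit
// zero length.
void copy_arg(UDF_ARGS *args, unsigned int i, char *buf, unsigned long *len) {
  if (args->args[i] == nullptr) {
    *len = 0;  // NULL argument: length must be set to zero
    return;
  }
  switch (args->arg_type[i]) {
    case STRING_RESULT:
      std::memcpy(buf, args->args[i], args->lengths[i]);
      *len = args->lengths[i];
      break;  // the break that was misplaced
    default:
      *len = 0;
      break;
  }
}
```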
-
- 28 Nov, 2014 2 commits
-
-
Daniel Fischer authored
-
Balasubramanian Kandasamy authored
-
- 26 Nov, 2014 1 commit
-
-
V S Murthy Sidagam authored
Description: When querying a subset of columns from information_schema.TABLES.
Analysis: When information about tables is collected for statements like "SELECT ENGINE FROM I_S.TABLES" we do not perform full-blown table opens in the SE; instead we only use information from table shares from the Table Definition Cache or .FRMs. Still, in order to simplify the I_S implementation, mock TABLE objects are created from the TABLE_SHARE during this process, by calling the open_table_from_share() function with special arguments. Since this function always increments the "Opened_tables" counter, calls to it can be mistakenly interpreted as full-blown table opens in the SE. Note that the claim that "the 'SELECT ENGINE FROM I_S.TABLES' statement doesn't use the Table Cache" is nevertheless factually correct, but it misses the point, since such statements a) don't use full-blown TABLE objects and therefore don't do table opens, and b) still use the Table Definition Cache.
Fix: We now increment the counter only when db_stat (i.e. the open flags for ha_open()) is non-zero. We have considered an optimization which would use TABLE objects from the Table Cache when available instead of constructing mock TABLE objects, but found it too intrusive for stable releases.
-
- 24 Nov, 2014 1 commit
-
-
Nisha Gopalakrishnan authored
Analysis: Certain queries using intrinsic temporary tables may fail due to name clashes in the temporary table file name when 'temp-pool' is enabled. 'temp-pool' tries to reduce the number of different file names used for temp tables by allocating them from a small pool, in order to avoid problems in the Linux kernel, using a three-part file name: <tmp_file_prefix>_<pid>_<temp_pool_slot_num>. The bit corresponding to temp_pool_slot_num is set in the bit map maintained for the temp-pool when the slot is used for a file name, and it is cleared after the temp table is deleted, for re-use. Under an error condition, 'create_tmp_table()' tries to clear the same bit twice, by calling 'free_tmp_table()' and 'bitmap_lock_clear_bit()'. 'free_tmp_table()' deletes the table/file and already clears the bit by calling the same function, 'bitmap_lock_clear_bit()'. The reported issue can be triggered in the following timing window for an error condition while creating a temp table:
a) THD1: Due to an error, clears the temp pool slot number used by it by calling 'free_tmp_table'.
b) THD2: Is in the process of creating a temp table using an unused slot number in the bit map.
c) THD1: Clears the slot number used by THD2 by calling 'bitmap_lock_clear_bit()' after completing the 'free_tmp_table' call.
d) THD3: Uses the slot number used by THD2, since it has been freed by THD1. When it tries to create the temp file using that slot number, an error is reported because the file is currently in use by THD2. [The error: Error 'Can't create/write to file '/tmp/#sql_277e_0.MYD' (Errcode: 17)']
Another issue, which may occur in 5.6 and trunk, is that when opening the temporary table fails after its creation (due to a ulimit or OOM error), the file is not deleted. Further attempts to use the same slot number in the 'temp-pool' therefore result in failure.
Fix:
a) Under the error condition, calling 'bitmap_lock_clear_bit()' to clear the bit is unnecessary since 'free_tmp_table()' already deletes the table/file and clears the bit. The redundant 'bitmap_lock_clear_bit()' call in 'create_tmp_table()' has therefore been removed, which closes the timing window under which the reported issue can be seen.
b) If opening the temporary table fails, the file is deleted, allowing the temp-pool slot number to be reused for a subsequent temporary table creation.
c) Also, if the attempt to create a temp table fails because it already exists, its temp-pool slot is marked as used, to avoid the problem from re-appearing.
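A toy, single-threaded sketch of the slot-pool discipline the fix restores (the real pool is a mutex-protected bitmap manipulated via bitmap_lock_clear_bit); the point is only that the error path must not clear a slot that free_tmp_table() has already released:

```cpp
#include <bitset>

// Stand-in for the temp-pool bit map described above.
static std::bitset<64> temp_pool;

unsigned acquire_slot() {
  for (unsigned i = 0; i < temp_pool.size(); ++i)
    if (!temp_pool.test(i)) { temp_pool.set(i); return i; }
  return 0;  // pool exhausted; the real code falls back to another naming scheme
}

// free_tmp_table() deletes the table file and releases the slot ...
void free_tmp_table(unsigned slot) { temp_pool.reset(slot); }

// ... so the error path in create_tmp_table() must not release it again:
// between the two clears another session may have re-acquired the same slot,
// and a second reset() would hand that session's slot to a third one.
void create_tmp_table_error_path(unsigned slot) {
  free_tmp_table(slot);
  // temp_pool.reset(slot);  // the redundant clear the fix removes
}
```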
-