- 24 Jul, 2015 1 commit
-
-
Murthy Narkedimilli authored
-
- 23 Jul, 2015 1 commit
-
-
Nisha Gopalakrishnan authored
IS REJECTED.

Analysis
========
View creation with named columns over UNION is rejected. Consider the
following view definition:

  CREATE VIEW v1 (fld1, fld2) AS
    SELECT 1 AS a, 2 AS b
    UNION ALL
    SELECT 1 AS a, 1 AS a;

A 'duplicate column' error was reported because of the duplicate alias in
the second SELECT. The view column names are either specified explicitly
or taken from the first SELECT (whose names can be auto-generated if not
specified). Since the duplicate column name check was also performed on
the secondary SELECTs, the error was reported.

Fix
===
Check for duplicate column names only among the named columns, if
specified, or otherwise only in the first SELECT.
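A short illustration of the second branch of the fix (this statement is
illustrative and not taken from the original report): when no column list
is given, the view's column names come from the first SELECT only, so only
that SELECT needs to be checked for duplicate names.

  CREATE VIEW v2 AS
    SELECT 1 AS a, 2 AS b
    UNION ALL
    SELECT 1 AS a, 1 AS a;   -- duplicate aliases here must not be rejected

  SELECT * FROM v2;          -- columns are named a and b, from the first SELECT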
-
- 16 Jul, 2015 1 commit
-
-
Sreeharsha Ramanavarapu authored
INCORRECT RESULTS

Issue:
------
Updating varchar and text fields in the same UPDATE statement can produce
incorrect results. When a varchar field is assigned to the text field and
the varchar field is then set to a different value, the text field ends up
containing the varchar field's new value.

Solution:
---------
Currently the blob type does not allocate space for the string to be
stored; instead it holds a pointer to the varchar string. So when the
varchar field is changed as part of the UPDATE statement, the value seen
through the blob changes as well. The fix is to actually store the value
by allocating space for the blob's string. This allocation can be avoided
when the varchar field is not itself being written to.
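A hypothetical repro consistent with the description above (table and
column names are illustrative, not taken from the original bug report):

  CREATE TABLE t (v VARCHAR(20), b TEXT);
  INSERT INTO t VALUES ('old', NULL);

  -- b should receive v's value as it was before this assignment ('old');
  -- with the bug, b kept pointing at v's buffer and read back 'new'.
  UPDATE t SET b = v, v = 'new';

  SELECT v, b FROM t;   -- expected: 'new', 'old'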
-
- 14 Jul, 2015 1 commit
-
-
mysql-builder@oracle.com authored
No commit message
-
- 13 Jul, 2015 2 commits
-
-
Tor Didriksen authored
Post-push fix: broken build on Windows. The problem is the min/max macros
from windows.h, which interfere with a template function called max.
Solution: ADD_DEFINITIONS(-DNOMINMAX)
-
Sreeharsha Ramanavarapu authored
DATABASE WHEN USING TABLE ALIASES

Issue:
------
When table aliases are used in a DELETE, MySQL checks privileges against
the current database rather than against the actual table and the database
in which that table resides.

Solution:
---------
While checking privileges for multi-table deletes, correspondent_table
should be used, since it points to the correct table and database.
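A hypothetical illustration (database, table and user names are
illustrative): the privilege check for the aliased table must be done
against db2.t2, not against whatever database happens to be current.

  -- connected as a user who has DELETE on db2.t2 but not on db1.*
  USE db1;
  DELETE a2 FROM db2.t2 AS a2 WHERE a2.id > 100;   -- must be allowed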
-
- 10 Jul, 2015 3 commits
-
-
Christopher Powers authored
For WAIT events, fall back to other timers if CYCLE is not available.
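A quick way to inspect which timer ends up being used for wait events
(assuming the performance_schema.setup_timers and performance_timers
tables of that MySQL generation):

  SELECT * FROM performance_schema.performance_timers;   -- unavailable timers show NULL values
  SELECT * FROM performance_schema.setup_timers WHERE NAME = 'wait';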
-
Sreeharsha Ramanavarapu authored
-
Sreeharsha Ramanavarapu authored
WARNINGS

Backporting to 5.1 and 5.5
-
- 08 Jul, 2015 5 commits
-
-
Robert Golebiowski authored
YASSL-COMPILED SERVER/CLIENT

Description: thread_pool.thread_pool_connect hangs when the server and
client are compiled with yaSSL.

Bug-fix: The test thread_pool.thread_pool_connect was temporarily disabled
for yaSSL. Now that yaSSL is fixed, it runs OK. The bug was introduced by
one of the yaSSL updates: set_current was not working for i == 0. This is
now fixed. yaSSL is updated to 2.3.7d.
-
Robert Golebiowski authored
INITIAL STARTUP

Description: When mysql_ssl_rsa_setup is used to get an SSL-enabled server
(after running mysqld --initialize), the server does not answer
"mysqladmin ping" properly for the first 30 seconds after startup.

Bug-fix: yaSSL validated the certificate date to the minute but should
have validated it to the second. This is why SSL on the server side was
not up right away after new certificates were created with
mysql_ssl_rsa_setup. The fix was submitted by Todd. yaSSL was updated to
2.3.7c.
-
Robert Golebiowski authored
Affects at least 5.6 and 5.7. In the customer's case the "client" happened
to be a replication slave, so their server crashed.

Bug-fix: The bug was in yaSSL. Todd Ouska has provided us with the patch.

(cherry picked from commit 42ffa91aad898b02f0793b669ffd04f5c178ce39)
-
Shishir Jaiswal authored
MYSQLADMIN -U ROOT -P

DESCRIPTION
===========
A crash occurs when no command is given while executing the mysqladmin
utility.

ANALYSIS
========
In mask_password() the final write to the array 'temp_argv' is done
without checking whether the corresponding index 'argc' is valid
(non-negative). If it is negative (which happens when the function is
called with 'argc' = 0), it may cause a SEGFAULT. Logically,
mask_password() should not have been called at all in that case, as it
would do nothing useful.

FIX
===
mask_password() is now called only after checking 'argc': it is invoked
only when 'argc' is positive; otherwise the process terminates.
-
Debarun Banerjee authored
Problem :
---------
The specific issue reported in this bug is with a range/list column value
that is allocated and initialized by evaluating the partition expression
(item tree) during execution. After evaluation the range list value is
marked fixed [part_column_list_val]. During the next execution we don't
re-evaluate the expression and use the old value, since it is marked
fixed.

Solution :
----------
One way to solve the issue is to mark all column values as not fixed
during clone, so that the expression is always re-evaluated once we
attempt partition_info::fix_column_value_functions() after cloning the
part_info object during execution of DDL on a partitioned table.

Reviewed-by: Jimmy Yang <Jimmy.Yang@oracle.com>
Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
RB: 9424
-
- 03 Jul, 2015 1 commit
-
-
Praveenkumar Hulakund authored
Follow-up patch to fix the sys_vars.query_cache_min_res_unit_basic_32 test failure.
-
- 02 Jul, 2015 1 commit
-
-
Praveenkumar Hulakund authored
The valid minimum value for query_cache_min_res_unit is 512, but an
attempt to set a value greater than or equal to ULONG_MAX (the maximum
value) results in query_cache_min_res_unit being set to 0. This leads to a
crash while searching for a memory block smaller than the valid minimum
value in which to store query results.

Free memory blocks in the query cache are stored in bins according to
their size, in descending size order. For a memory block request the
appropriate bin is found using a binary search. The minimum free memory
block request expected is 512 bytes, and the appropriate bin is searched
for a block greater than or equal to 512 bytes. Because of the bug,
query_cache_min_res_unit is set to 0, so memory blocks smaller than the
minimum size may be requested from the free memory block bins. The search
for a bin for such an invalid input size fails and returns a garbage
index, and accessing the bins array element with this index causes the
reported issue.

The valid value range for query_cache_min_res_unit is 512 to ULONG_MAX
(when the value is greater than the maximum allowed value, the maximum
allowed value is used, i.e. ULONG_MAX). While setting the result unit
block size (query_cache_min_res_unit), the size is memory aligned using
the macro ALIGN_SIZE. The ALIGN_SIZE logic is:

  (input_size + sizeof(double) - 1) & ~(sizeof(double) - 1)

For an unsigned long variable, when input_size is greater than or equal to
ULONG_MAX - (sizeof(double) - 1), this expression wraps around to 0.

Fix:
-----
Compare the value set for query_cache_min_res_unit with the maximum
aligned value that can be stored in a ulong variable. If it is greater,
set it to that maximum aligned value.
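A sketch of the boundary behaviour described above (assuming a 64-bit
build, where ULONG_MAX is 18446744073709551615):

  SET GLOBAL query_cache_min_res_unit = 512;                   -- valid minimum
  SET GLOBAL query_cache_min_res_unit = 18446744073709551615;  -- with the fix: clamped to the
                                                               -- max aligned value, not wrapped to 0
  SHOW GLOBAL VARIABLES LIKE 'query_cache_min_res_unit';       -- must never report 0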
-
- 30 Jun, 2015 1 commit
-
-
Arun Kuruvila authored
MULTIPLE THREADS

Description:- The utility "mysqlimport" does not use multiple threads for
execution with the option "--use-threads". While importing multiple files
and multiple tables, "mysqlimport" uses a single thread even if a number
of threads is specified with the "--use-threads" option.

Analysis:- The utility uses "#ifdef HAVE_LIBPTHREAD" to check for the
libpthread library and, if it is defined, uses libpthread for
multithreading. Since HAVE_LIBPTHREAD is not defined anywhere in the
source, the "--use-threads" option is silently ignored.

Fix:- "-DTHREADS" is added to COMPILE_FLAGS, which enables pthreads. The
HAVE_LIBPTHREAD macro is removed.
-
- 25 Jun, 2015 1 commit
-
-
Balasubramanian Kandasamy authored
-
- 24 Jun, 2015 2 commits
-
-
Yashwant Sahu authored
-
Debarun Banerjee authored
Problem :
---------
Issue-1: The root cause of the issues is that (col1 > 1) is not a valid
partition function and we should have thrown an error at a much earlier
stage [partition_info::check_partition_info]. We were not checking the
sub-partition expression when the partition expression is NULL.

Issue-2: A potential future issue if any partition function needs to
change the item tree during open/fix_fields: we should release changed
items, if any, before doing closefrm when we open the partitioned table
during creation in create_table_impl.

Solution :
----------
1. check_partition_info() - Check the sub-partition expression even if
   there is no partition expression.
   [partition by ... columns(...) subpartition by hash(<expr>)]
2. create_table_impl() - Assert that the change list is empty before doing
   closefrm for a partitioned table. Currently no supported partition
   function seems to change the item tree during open.

Reviewed-by: Mattias Jonsson <mattias.jonsson@oracle.com>
RB: 9345
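A hypothetical illustration of Issue-1 (table and column names are
illustrative), following the shape noted in the solution above: the
sub-partitioning expression is checked even though the top-level
partitioning uses COLUMNS and therefore has no partition expression of its
own.

  CREATE TABLE tp (col1 INT, col2 INT)
  PARTITION BY RANGE COLUMNS (col2)
  SUBPARTITION BY HASH (col1 > 1)
  SUBPARTITIONS 2
  (PARTITION p0 VALUES LESS THAN (10),
   PARTITION p1 VALUES LESS THAN (MAXVALUE));
  -- (col1 > 1) is not a valid partition function and should now be
  -- rejected in partition_info::check_partition_info().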
-
- 23 Jun, 2015 3 commits
-
-
Balasubramanian Kandasamy authored
-
Balasubramanian Kandasamy authored
-
Murthy Narkedimilli authored
-
- 22 Jun, 2015 2 commits
-
-
Annamalai Gurusami authored
Post-push fix. The function cmp_dtuple_rec() was used without a prototype
in the file row0purge.c. Adding the include file rem0cmp.h to row0purge.c
resolves this issue. Approved by Krunal over IM.
-
Ajo Robert authored
AVOID DEADLOCK AFTER RESTORE

Post-push test fix.
-
- 19 Jun, 2015 2 commits
-
-
Annamalai Gurusami authored
Problem: If we add a referential integrity constraint with a duplicate
name, an error occurs. The foreign key object would not have been added to
the dictionary cache. In the error path there is an attempt to remove this
foreign key object; since the object is not there, the search returns a
NULL result, and dereferencing the null object results in the crash.

Solution: If the search for the foreign key object fails, do not attempt
to access it.

rb#9309 approved by Marko.
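A hypothetical repro (table and constraint names are illustrative): adding
a second foreign key that reuses an existing constraint name must fail
with an error, not a crash.

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT, pid INT,
    CONSTRAINT fk_c FOREIGN KEY (pid) REFERENCES parent (id)) ENGINE=InnoDB;

  ALTER TABLE child
    ADD CONSTRAINT fk_c FOREIGN KEY (pid) REFERENCES parent (id);
  -- duplicate constraint name: an error is reported, and the error path
  -- must not dereference the missing dictionary-cache object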
-
V S Murthy Sidagam authored
Description: The newest RHEL/CentOS/SL 6.6 openssl package
(1.0.1e-30.el6_6.9; published around 6/4/2015) contains a fix for LogJam.
RedHat's fix was to limit the use of any SSL DH key sizes to a minimum of
768 bits. This breaks any DHE SSL ciphers for MySQL clients as soon as the
openssl update is installed, since in vio/viosslfactories.c the default
DHPARAM is a 512-bit one. This cannot be changed in configuration or at
runtime; it needs a recompile. Because of this, a client connection with
--ssl-cipher=DHE-RSA-AES256-SHA is not able to connect to the server.

Analysis: OpenSSL has changed the Diffie-Hellman key size from 512 to 1024
bits (see the details at http://openssl.org/news/secadv_20150611.txt).
Because of this, a client using a DHE cipher fails to connect to the
server. This change took effect from openssl-1.0.1n onwards.

Fix: A similar bug fix has already been pushed to mysql-5.7 under
bug#18367167. Hence we backported the same fix to mysql-5.5 and mysql-5.6.
-
- 17 Jun, 2015 2 commits
-
-
Tor Didriksen authored
Backport from 5.6 to 5.5.

This makes filesort robust against miscellaneous variants of ORDER BY /
GROUP BY on columns/expressions with zero length.
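A hypothetical example consistent with the description (table and column
names are illustrative): sorting or grouping on a zero-length column or
expression must not break filesort.

  CREATE TABLE t1 (a INT, b CHAR(0));
  INSERT INTO t1 VALUES (3, ''), (1, ''), (2, '');

  SELECT a FROM t1 ORDER BY b, a;                   -- zero-length column in the sort key
  SELECT a FROM t1 GROUP BY SUBSTRING(b, 1, 0), a;  -- zero-length expression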
-
Balasubramanian Kandasamy authored
Fixed the syntax in the mysql-systemd-start script.
-
- 16 Jun, 2015 2 commits
-
-
Balasubramanian Kandasamy authored
-
Balasubramanian Kandasamy authored
-
- 05 Jun, 2015 2 commits
-
-
mysql-builder@oracle.com authored
No commit message
-
mysql-builder@oracle.com authored
No commit message
-
- 04 Jun, 2015 2 commits
-
-
Arun Kuruvila authored
-
Arun Kuruvila authored
Description:- mysqlslap is a diagnostic utility designed to emulate client
load for a MySQL server and to report the timing of each stage. The
utility crashes when invalid values are passed to the options
'num_int_cols_opt', 'num_chars_cols_opt' or 'engine'.

Analysis:- mysqlslap uses parse_option() to parse the values given to the
options 'num_int_cols_opt', 'num_chars_cols_opt' and 'engine'. These
options take values separated by commas. In parse_option() the
comma-separated values are split and copied into a buffer without checking
the length of the string being copied. The size of the buffer is defined
by the macro HUGE_STRING_LENGTH, whose value is 8196. So if the length of
any of the comma-separated values exceeds HUGE_STRING_LENGTH, a buffer
overflow results.

Fix:- A check is introduced in parse_option() to verify whether the size
of the string to be copied exceeds HUGE_STRING_LENGTH. If it does, the
error "Invalid value specified for the option 'xxx'" is reported. The
option length was also incorrectly calculated for the last comma-separated
value, so that is fixed as well.
-
- 03 Jun, 2015 2 commits
-
-
Debarun Banerjee authored
Problem :
---------
This is a regression of Bug#19138298. In purge_node_t::validate_pcur we
try to get offsets for all columns of the clustered index from the record
stored in the persistent cursor. This fails when the stored record does
not have all fields of the index: the stored record keeps only the fields
needed to uniquely identify the entry.

Solution :
----------
1. Use pcur.old_n_fields to get the fields that are actually stored.
2. Add a comment to note the dependency between the stored fields in the
   purge node ref and the stored cursor.
3. Return if the cursor record is not already stored, as it is not safe to
   access the cursor record directly without a latch.

Reviewed-by: Marko Makela <marko.makela@oracle.com>
RB: 9139
-
Debarun Banerjee authored
Problem :
---------
This is a regression of Bug#19138298. During purge, if
btr_pcur_restore_position fails, we set found_clust to FALSE so that a
possible clustered index record can be found in future calls for the same
undo entry. This, however, overwrites old_rec_buf when pcur is initialized
again in the next call. The leak is reproducible in a local environment
with the test provided along with Bug#19138298.

Solution :
----------
If btr_pcur_restore_position() fails, close the cursor.

Reviewed-by: Marko Makela <Marko.Makela@oracle.com>
Reviewed-by: Annamalai Gurusami <Annamalai.Gurusami@oracle.com>
RB: 9074
-
- 29 May, 2015 2 commits
-
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 22 May, 2015 1 commit
-
-
mysql-builder@oracle.com authored
No commit message
-