- 19 Jun, 2015 1 commit
-
-
V S Murthy Sidagam authored
Description: The newest RHEL/CentOS/SL 6.6 openssl package (1.0.1e-30.el6_6.9, published around 6/4/2015) contains a fix for Logjam. Red Hat's fix limits the accepted SSL DH key sizes to a minimum of 768 bits. This breaks all DHE SSL ciphers for MySQL clients as soon as the openssl update is installed, because the default DHPARAM in vio/viosslfactories.c is a 512-bit one. It cannot be changed in configuration or at runtime; a recompile is needed. As a result, a client using --ssl-cipher=DHE-RSA-AES256-SHA can no longer connect to the server.

Analysis: OpenSSL has changed the Diffie-Hellman key size from 512 to 1024 bits (see the details at http://openssl.org/news/secadv_20150611.txt), starting with openssl-1.0.1n. Because of this, a client using a DHE cipher fails to connect to the server.

Fix: A similar fix has already been pushed to mysql-5.7 under bug#18367167, so the same fix is backported to mysql-5.5 and mysql-5.6.
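The backported change raises the built-in DH parameters above the new minimum. A minimal sketch of the idea against the OpenSSL 1.0.x API (the helper name is hypothetical, and the parameters are generated at runtime only to keep the example self-contained; the actual patch would embed fixed parameters in vio/viosslfactories.c rather than generating them at startup):

    #include <openssl/dh.h>

    /* Hedged sketch, not the actual MySQL patch: obtain DH parameters of at
       least 1024 bits so clients built against post-Logjam OpenSSL accept
       the DHE handshake. */
    static DH *get_dh_params(void)
    {
      DH *dh = DH_new();
      if (dh == NULL)
        return NULL;
      if (!DH_generate_parameters_ex(dh, 1024, DH_GENERATOR_2, NULL))
      {
        DH_free(dh);
        return NULL;
      }
      return dh;
    }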
-
- 17 Jun, 2015 2 commits
-
-
Tor Didriksen authored
Backport from 5.6 to 5.5. This makes filesort robust against miscellaneous variants of ORDER BY / GROUP BY on columns/expressions with zero length.
-
Balasubramanian Kandasamy authored
Fixed the syntax in mysql-systemd-start script
-
- 16 Jun, 2015 2 commits
-
-
Balasubramanian Kandasamy authored
-
Balasubramanian Kandasamy authored
-
- 05 Jun, 2015 2 commits
-
-
mysql-builder@oracle.com authored
No commit message
-
mysql-builder@oracle.com authored
No commit message
-
- 04 Jun, 2015 2 commits
-
-
Arun Kuruvila authored
-
Arun Kuruvila authored
Description:- mysqlslap is a diagnostic utility designed to emulate client load for a MySQL server and to report the timing of each stage. This utility crashes when invalid values are passed to the options 'num_int_cols_opt', 'num_chars_cols_opt' or 'engine'.

Analysis:- mysqlslap uses "parse_option()" to parse the values specified for the options 'num_int_cols_opt', 'num_chars_cols_opt' and 'engine'. These options take comma-separated values. In "parse_option()", the comma-separated values are split and copied into a buffer without checking the length of the string being copied. The size of the buffer is defined by the macro HUGE_STRING_LENGTH, whose value is 8196. So if the length of any of the comma-separated values exceeds HUGE_STRING_LENGTH, the copy results in a buffer overflow.

Fix:- A check is introduced in "parse_option()" to verify whether the size of the string to be copied exceeds HUGE_STRING_LENGTH. If it does, the error "Invalid value specified for the option 'xxx'" is reported. The option length was also calculated incorrectly for the last comma-separated value, so that is fixed as well.
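A hedged sketch of the kind of length check described above (simplified, with a hypothetical helper name; not the literal client/mysqlslap.c hunk):

    #include <string.h>

    #define HUGE_STRING_LENGTH 8196

    /* Copy one comma-separated token into a HUGE_STRING_LENGTH-sized buffer,
       rejecting tokens that do not fit.  'end' is NULL for the last token,
       so its length is computed correctly as well.  Returns -1 so the caller
       can report "Invalid value specified for the option 'xxx'". */
    static int copy_token(char *buffer, const char *start, const char *end)
    {
      size_t length = end ? (size_t)(end - start) : strlen(start);
      if (length >= HUGE_STRING_LENGTH)   /* the new overflow check */
        return -1;
      memcpy(buffer, start, length);
      buffer[length] = '\0';
      return 0;
    }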
-
- 03 Jun, 2015 2 commits
-
-
Debarun Banerjee authored
Problem :
---------
This is a regression of Bug#19138298. In purge_node_t::validate_pcur we try to get offsets for all columns of the clustered index from the record stored in the persistent cursor. This fails when the stored record does not have all fields of the index: the stored record keeps only the fields needed to uniquely identify the entry.

Solution :
----------
1. Use pcur.old_n_fields to get the fields that are stored.
2. Add a comment to note the dependency between the fields stored in the purge node ref and the stored cursor.
3. Return if the cursor record is not already stored, as it is not safe to access the cursor record directly without a latch.

Reviewed-by: Marko Makela <marko.makela@oracle.com>
RB: 9139
-
Debarun Banerjee authored
Problem :
---------
This is a regression of bug-19138298. During purge, if btr_pcur_restore_position fails, we set found_clust to FALSE so that a possible clustered index record can be found in future calls for the same undo entry. This, however, overwrites the old_rec_buf when pcur is initialized again in the next call. The leak is reproducible in a local environment with the test provided along with bug-19138298.

Solution :
----------
If btr_pcur_restore_position() fails, close the cursor.

Reviewed-by: Marko Makela <Marko.Makela@oracle.com>
Reviewed-by: Annamalai Gurusami <Annamalai.Gurusami@oracle.com>
RB: 9074
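A hedged, generic illustration of the leak pattern described above (plain C with stand-in types, not the InnoDB code): re-initializing a cursor that still owns a buffer drops the old allocation, so the failure path should close the cursor, which releases the buffer, before it is opened again.

    #include <stdlib.h>

    /* Stand-in for a persistent cursor that caches a copy of the record
       it was positioned on. */
    struct pcur
    {
      unsigned char *old_rec_buf;
    };

    /* Re-initializing without freeing leaks the previous old_rec_buf ... */
    static void pcur_init(struct pcur *c, size_t buf_size)
    {
      c->old_rec_buf = malloc(buf_size);
    }

    /* ... so a failed restore must close the cursor first. */
    static void pcur_close(struct pcur *c)
    {
      free(c->old_rec_buf);
      c->old_rec_buf = NULL;
    }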
-
- 29 May, 2015 2 commits
-
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 22 May, 2015 1 commit
-
-
mysql-builder@oracle.com authored
No commit message
-
- 21 May, 2015 1 commit
-
-
Bin Su authored
As the man page of open(2) suggests, we should open the same file in the same mode to get better performance. For some data files, we first call os_file_create_simple_no_error_handling_func() to open them, and then call os_file_create_func() again. We have to make sure that if DIRECT IO is specified, both functions open the file with O_DIRECT.
Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
RB: 8981
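A hedged sketch of the idea in terms of the POSIX open(2) API (a hypothetical shared helper, not the actual os0file.cc code): every call site that opens the same data file computes its flags the same way, so the kernel never sees the file opened in mixed direct/buffered modes.

    #include <fcntl.h>

    static int open_data_file(const char *name, int use_direct_io)
    {
      int flags = O_RDWR;
    #ifdef O_DIRECT
      if (use_direct_io)
        flags |= O_DIRECT;   /* identical flag at both open call sites */
    #endif
      return open(name, flags);
    }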
-
- 18 May, 2015 1 commit
-
-
Tatiana Azundris Nuernberg authored
The MySQL server uses Henry Spencer's regular expression library to support the REGEXP/RLIKE string operator. This changeset adapts a recent fix from upstream for better 32-bit compatibility. (Note that we cannot simply use the current upstream version as a drop-in replacement for the version used by the server, as the latter has been extended to understand MySQL charsets etc.)
-
- 12 May, 2015 1 commit
-
-
Ajo Robert authored
AVOID DEADLOCK AFTER RESTORE
Test failure fix.
-
- 11 May, 2015 1 commit
-
-
Ajo Robert authored
AVOID DEADLOCK AFTER RESTORE

Analysis
--------
Accessing a restored NDB table in an active multi-statement transaction resulted in a "deadlock found" error. The MySQL Server needs to discover the metadata of an NDB table from the data nodes after the table is restored from a backup. Metadata discovery happens on the first access to the restored table. The current code mandated that this statement be the first one in the transaction, because discovery needs an exclusive metadata lock on the table and a lock upgrade at that point could lead to an MDL deadlock; the code was written at a time when the MDL deadlock detector was not present. When discovery was attempted in any statement other than the first one in the transaction, an ER_LOCK_DEADLOCK error was reported pessimistically.

Fix:
---
Removed the constraint, as any potential deadlock will be handled by the deadlock detector. Also changed the discovery code to keep the metadata locks of the active transaction. The same issue was present in the table auto-repair scenario, so the same fix is added in the repair path as well.
-
- 09 May, 2015 1 commit
-
-
Annamalai Gurusami authored
Scenario:
1. The purge thread takes an undo log record, parses it and forms the record to be purged. We have the primary and secondary keys to locate the actual records.
2. Using the secondary index key, we search in the secondary index. One record is found.
3. Then it is checked whether this record can be purged. The answer is that we can purge this record. To determine this we look up the clustered index record: either there is no corresponding clustered index record, or the matching clustered index record is delete-marked.
4. Then we check whether the secondary index record is delete-marked. We find that it is not delete-marked. We report a warning in the optimized build and assert in the debug build.

Problem:
In step 3, we report that the record is purgeable even though it is not delete-marked. This is because of an inconsistency between the following members of the purge_node_t structure: found_clust, ref and pcur.

Solution:
In row_purge_reposition_pcur(), if the persistent cursor restore fails, reset the purge_node_t->found_clust member. This keeps the members of the purge_node_t structure in a consistent state.

rb#8813 approved by Marko.
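A hedged, generic illustration of the consistency rule in the solution (plain C with stand-in types and helpers, not the actual row0purge.cc code): when restoring the saved clustered-index position fails, the cached found_clust flag is cleared so later delete-mark checks never run against a stale position.

    /* Stand-in for the purge_node_t members named above. */
    struct purge_node
    {
      int found_clust;   /* cached: "pcur points at the clustered record" */
    };

    /* Pretend-restore that fails, e.g. because the page was reorganized. */
    static int restore_saved_position(struct purge_node *node)
    {
      (void) node;
      return 0;
    }

    static int reposition_clustered_record(struct purge_node *node)
    {
      if (node->found_clust && !restore_saved_position(node))
        node->found_clust = 0;   /* keep found_clust, ref and pcur in sync */
      return node->found_clust;
    }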
-
- 04 May, 2015 1 commit
-
-
Balasubramanian Kandasamy authored
-
- 29 Apr, 2015 1 commit
-
-
V S Murthy Sidagam authored
As part of the fix, the find_files() prototype has been modified, and mysql-cluster uses the find_files() function. Hence the find_files() call in ha_ndbcluster_binlog.cc has been modified as well to keep the mysql-cluster build successful.
-
- 28 Apr, 2015 2 commits
-
-
Arun Kuruvila authored
-
Arun Kuruvila authored
HOST WHEN IT CONTAINS WILDCARD

Description :-
Incorrect access privileges are granted to a user because of wrong sorting of users when wildcard characters are present in the hostname.

Analysis :-
The function "get_sort()" is used to sort the strings of user name, hostname and database name, arranging the users in the access-privilege matching order. When a user connects, the server checks the sorted user access-privilege list and finds the matching entry for the user. The algorithm used in "get_sort()" sorts the strings inappropriately, so when a user connects to the server, it can be mapped to the wrong access privileges. The algorithm counts the number of characters before the first occurrence of any wildcard character (the single-character wildcard '_' or the multi-character wildcard '%') and sorts in that order. As a result of this incorrect sorting it treats the hostnames "%" and "%.mysql.com" as equally specific values, and the order between them is indeterminate.

Fix :-
The "get_sort()" algorithm has been modified to treat "%" separately. Now "get_sort()" returns a number which, if sorted in descending order, puts strings in the following order:
* strings with no wildcards
* strings containing both wildcard and non-wildcard characters
* the single multi-wildcard character '%'
* the empty string
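A hedged sketch of that ordering rule (a simplified classifier, not the literal sql_acl.cc get_sort()): sorting the value returned below in descending order yields the four groups listed above, so "%.mysql.com" now sorts ahead of a bare "%".

    #include <string.h>

    /* 3 = no wildcards, 2 = wildcards mixed with literal characters,
       1 = the bare multi-wildcard "%", 0 = the empty string. */
    static int specificity_class(const char *s)
    {
      if (s[0] == '\0')
        return 0;
      if (strcmp(s, "%") == 0)
        return 1;
      if (strpbrk(s, "%_") != NULL)
        return 2;
      return 3;
    }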
-
- 27 Apr, 2015 3 commits
-
-
V S Murthy Sidagam authored
Description: On an example MySQL instance with 28k empty InnoDB tables, a specific query against information_schema.tables and information_schema.columns leads to memory consumption of over 38GB RSS.

Analysis: In the get_all_tables() call, we fill the I_S tables from frm files and the storage engine. As part of that process we call make_table_name_list() and allocate memory for all 28k frm file names in the THD mem_root through make_lex_string_root(). Since this is done around 28k * 28k times, a huge amount of memory is hogged in the THD mem_root, which causes the RSS to grow to 38GB.

Fix: As part of the fix we create a temporary mem_root in get_all_tables() and pass it to fill_fiels(). There we replace the THD mem_root with the temporary mem_root, allocate the file names in the temporary mem_root, free it once the I_S tables are filled in get_all_tables(), and re-assign the original mem_root back to the THD mem_root.

Note: Checked the massif output with the fix; memory growth is now only around 580MB at peak.
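A hedged sketch of the technique (assuming the server's my_sys.h MEM_ROOT API; the wrapper name and block size are illustrative, not the actual sql_show.cc change):

    #include "my_global.h"
    #include "my_sys.h"

    /* Allocate the per-table file-name strings from a scratch arena that
       lives only for one get_all_tables() pass, instead of from the
       connection-lifetime THD mem_root. */
    static void fill_is_tables_with_scratch_root(void)
    {
      MEM_ROOT tmp_root;
      init_alloc_root(&tmp_root, 8192, 0);
      /* ... temporarily install &tmp_root so make_table_name_list()
         allocates the frm file names from it ... */
      free_root(&tmp_root, MYF(0));   /* released once the I_S rows are filled */
    }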
-
V S Murthy Sidagam authored
-
V S Murthy Sidagam authored
Restrict when user table hashes can be viewed. Require SUPER privileges.
-
- 24 Apr, 2015 2 commits
-
-
Arun Kuruvila authored
-
Arun Kuruvila authored
Description:- There is a possibility of a negative array index write in the function "terminal_writec()". This stems from the assumption that the call "ct_visual_char()" might return -1.

Analysis:- The function "terminal_writec()" is called only from "em_delete_or_list()" and "vi_list_or_eof()", and both of these functions deal with the "^D" (ctrl+D) signal. So the "size_t len" and "Char c" passed to "ct_visual_char()" (when called from "terminal_writec()") are always 8 (the macro VISUAL_WIDTH_MAX, whose value is 8) and 4 (the ASCII value of "^D"/"ctrl+D") respectively. Since the value of "c" is 4, "ct_chr_class()" returns -1 (the macro CHTYPE_ASCIICTL is associated with the value -1). And since the value of "len" is 8, "ct_visual_char()" always returns 2 when it is called from "terminal_writec()". So there is currently no case in which we encounter a negative array index write in "terminal_writec()". But since there is a rare possibility of using "terminal_writec()" in future enhancements, it is good to handle the error case as well.

Fix:- A condition is added in "terminal_writec()" to check whether "ct_visual_char()" returns -1. If the return value is -1, then the value 0 is returned to the calling function, "em_delete_or_list()" or "vi_list_or_eof()", which in turn will return CC_ERROR.

NOTE:- No test case is added since there is currently no scenario that triggers this error case.
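A hedged sketch of the defensive check described in the fix (assuming the bundled libedit's internal helpers ct_visual_char(), terminal_overwrite() and terminal__flush() and its Char/Int typedefs; simplified, not the literal terminal.c hunk):

    /* Returns 0 on a failed visualisation so the callers
       (em_delete_or_list()/vi_list_or_eof()) can map it to CC_ERROR. */
    static int terminal_writec_sketch(EditLine *el, Int c)
    {
      Char visbuf[VISUAL_WIDTH_MAX + 1];
      ssize_t vcnt = ct_visual_char(visbuf, VISUAL_WIDTH_MAX, c);
      if (vcnt < 0)                     /* the new error-case handling */
        return 0;
      visbuf[vcnt] = '\0';
      terminal_overwrite(el, visbuf, (size_t) vcnt);
      terminal__flush(el);
      return 1;
    }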
-
- 21 Apr, 2015 1 commit
-
-
V S Murthy Sidagam authored
post push change: fixing valgrind failures
-
- 20 Apr, 2015 2 commits
-
-
V S Murthy Sidagam authored
post push change: missed the change in mysql-5.5 (Fixing compiler warning/error)
-
V S Murthy Sidagam authored
Description: Can't build the latest mysql-5.5 source with openssl 0.9.8e.

Analysis: Older OpenSSL versions (prior to OpenSSL 1.0) don't define 'SSL_OP_NO_COMPRESSION'. Hence the build fails with SSL_OP_NO_COMPRESSION undeclared.

Fix: Added conditional compilation for 'SSL_OP_NO_COMPRESSION': if 'SSL_OP_NO_COMPRESSION' is defined, make the SSL_set_options call (OpenSSL 1.0 versions); otherwise call sk_SSL_COMP_zero() (OpenSSL 0.9.8 versions).
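A hedged sketch of that conditional compilation (simplified, with a hypothetical wrapper name; not the literal vio source hunk):

    #include <openssl/ssl.h>

    static void disable_ssl_compression(SSL *ssl)
    {
    #ifdef SSL_OP_NO_COMPRESSION
      /* OpenSSL 1.0 and later define the option. */
      SSL_set_options(ssl, SSL_OP_NO_COMPRESSION);
    #else
      /* OpenSSL 0.9.8: empty the global list of compression methods. */
      (void) ssl;
      sk_SSL_COMP_zero(SSL_COMP_get_compression_methods());
    #endif
    }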
-
- 17 Apr, 2015 1 commit
-
-
Mauritz Sundell authored
One cannot see in the PB2 test logs which unit tests have been run and passed. This patch adds an option --unit-tests-report to mtr which includes the ctest report in the mtr output. It also turns on unit testing unless it is explicitly turned off with --no-unit-tests or an equivalent. In manual runs one can always look in the ctest.log file in the mtr vardir. --unit-tests is replaced with --unit-tests-report in the files under mysql-test/collections/ to activate the report in PB2.
-
- 15 Apr, 2015 1 commit
-
-
Tor Didriksen authored
Fix typo: s/COPY_ONLY/COPYONLY/g
(cherry picked from commit 034046b4dfe3e6f83e0cf73310884334e0507f06)
Conflicts:
    cmake/make_dist.cmake.in
-
- 13 Apr, 2015 2 commits
-
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 10 Apr, 2015 2 commits
-
-
Sreeharsha Ramanavarapu authored
-
Sreeharsha Ramanavarapu authored
MYISAM TABLE CAUSES THE SERVER TO CRASH
Backport to mysql-5.1
-
- 09 Apr, 2015 2 commits
-
-
Marko Mäkelä authored
Embedded server does not support restarting and needs to skip the test.
-
Marko Mäkelä authored
if XA PREPARE transactions hold explicit locks.

innobase_shutdown_for_mysql(): Call trx_sys_close() before lock_sys_close() (and dict_close()) so that trx_free_prepared() will see all locks intact.

RB: 8561
Reviewed-by: Vasil Dimov <vasil.dimov@oracle.com>
-
- 08 Apr, 2015 1 commit
-
-
Marc Alff authored
FILL_VARIABLES

Prevent mutexes used in SHOW VARIABLES from being locked twice.
-