- 23 Jan, 2014 1 commit
-
-
Tor Didriksen authored
Backported only the softlink part of the patch, *not* the bumping of the library version. With this patch, the libmysql/ directory contains:

    libmysqlclient.a
    libmysqlclient_r.a -> libmysqlclient.a
    libmysqlclient_r.so -> libmysqlclient.so*
    libmysqlclient_r.so.18 -> libmysqlclient.so.18*
    libmysqlclient_r.so.18.0.0 -> libmysqlclient.so.18.0.0*
    libmysqlclient.so -> libmysqlclient.so.18*
    libmysqlclient.so.18 -> libmysqlclient.so.18.0.0*
    libmysqlclient.so.18.0.0*
-
- 16 Jan, 2014 2 commits
-
-
Tor Didriksen authored
Bug#68338 RFE: make tmpdir a build-time configurable option

Post-push fix: Windows needs DEFAULT_TMPDIR as well.
-
Tor Didriksen authored
Bug#68338 RFE: make tmpdir a build-time configurable option

Post-push fix: 'cmake -LH | grep TMP' showed TMPDIR as a BOOL option, which was a bit confusing; show it as a PATH instead.
-
- 13 Jan, 2014 1 commit
-
-
Thayumanavar authored
This is a backport of the patch for bug#11765785. The commit message by Prabakaran Thirumalai from bug#11765785 is reproduced below:

Description:
------------
The global query id (global_query_id) is not incremented for PING and statistics commands. These two query types are filtered out before the global query id is incremented, which causes a race condition and results in duplicate query ids for different queries originating from different connections.

Analysis:
---------
sql_parse.cc::dispatch_command() is the only place in the code which sets thd->query_id to global_query_id and only then increments it, depending on the query type. Everywhere else, global_query_id is incremented first and then assigned to thd->query_id. This was done so that global_query_id is not incremented for PING and statistics commands in dispatch_command().

Fix:
----
Per the suggestion from Serg, "There is no reason to skip query_id for the PING and STATISTICS command.", the check which filters out PING and statistics commands is removed. Calling get_query_id() followed by next_query_id() can still race if a context switch happens right after get_query_id(), so the code now uses next_query_id() alone, as is done everywhere else that deals with global_query_id. The get_query_id() function is removed, and callers of next_query_id() are forced to use its return value via the warn_unused_result attribute.
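A minimal standalone sketch of the two patterns (std::atomic here is only a stand-in for the server's own synchronization around global_query_id; the function names mirror the commit but the code is illustrative):

    #include <atomic>
    #include <cstdint>
    #include <cstdio>

    std::atomic<uint64_t> global_query_id{1};

    // Racy pattern the fix removes: two threads can both load N before
    // either one increments, handing the same id to two queries.
    uint64_t get_then_increment() {
        uint64_t id = global_query_id.load();
        global_query_id.fetch_add(1);
        return id;
    }

    // Pattern the fix standardizes on: a single atomic fetch-and-add,
    // so every caller receives a distinct value.
    uint64_t next_query_id() {
        return global_query_id.fetch_add(1);
    }

    int main() {
        printf("%llu %llu\n",
               (unsigned long long) next_query_id(),
               (unsigned long long) next_query_id()); // always distinct
        return 0;
    }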
-
- 11 Jan, 2014 1 commit
-
-
Venkata Sidagam authored
Description: A typo in create_tailoring() causes the "contraction_flags" to be written into cs->contractions in the wrong place. This causes two problems: (1) anyone relying on contraction_flags to decide "could this character be part of a contraction" is 100% broken; (2) anyone relying on contractions to determine the weight of a contraction is mostly broken.

Analysis: While preparing the contractions in create_tailoring(), we corrupt the cs->contractions memory area, which is supposed to hold the weights (8K bytes) followed by the contraction information (256 bytes). Due to a flaw in the pointer arithmetic, the contraction information was being stored starting at the 4K offset instead.

Fix: When creating the contractions, compute the contraction address as (char*) (cs->contractions + 0x40*0x40) instead of ((char*) cs->contractions) + 0x40*0x40, so that the pointer moves 8K bytes past cs->contractions and the contraction information is stored from there. Similarly, for LIKE range queries the address must be computed from the 8K offset onwards, by changing the expression to (const char*) (cs->contractions + 0x40*0x40). For ucs2 charsets, my_cs_can_be_contraction_head() and my_cs_can_be_contraction_tail() are modified to point past the 8K boundary as well.
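The underlying mistake is pointer-arithmetic scaling; this standalone sketch (illustrative types only, assuming 16-bit weight elements) shows why the two casts land 4K bytes apart:

    #include <cstdint>
    #include <cstdio>

    int main() {
        static uint16_t weights[0x40 * 0x40 * 2] = {0}; // 16K-byte area
        uint16_t *contractions = weights;

        // Adding 0x40*0x40 *elements* moves 8192 bytes...
        char *fixed = (char *) (contractions + 0x40 * 0x40);
        // ...but casting to char* first moves only 4096 bytes.
        char *buggy = ((char *) contractions) + 0x40 * 0x40;

        printf("fixed offset: %td, buggy offset: %td\n",
               fixed - (char *) weights,    // 8192
               buggy - (char *) weights);   // 4096
        return 0;
    }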
-
- 10 Jan, 2014 1 commit
-
-
Sujatha Sivakumar authored
OF ROW DATA

Problem:
========
Inserting a row larger than 4G when the server uses RBR leads to a crash.

Analysis:
=========
Row-based binary logging logs changes to individual table rows. During the execution of DML statements under RBR, the actual row data is stored in the "m_rows_buf" buffer, whose contents are then written to the binary log. "m_rows_buf" is prepared in "Rows_log_event::do_add_row_data". When a huge row is specified, as in this bug scenario where the row size is 4294971520 > UINT_MAX (4294967295), "m_rows_buf" is reallocated to accommodate the row data and the row is then copied into it. During this realloc call the length is cast to "uint", which overflows. Because of the overflow, the reallocated buffer is smaller than what was requested, and copying the row data into it crashes. Hence rows larger than 4GB cannot be written to the binary log. (By default the event length is stored in 4 bytes, which in any case restricts how large an event can grow, so such large rows cannot be replicated using row-based replication.)

Fix:
====
An error is generated if the row size exceeds 4GB.
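The overflow itself is easy to reproduce in isolation; a standalone sketch of the truncation (not the server's code), using the row size from the bug report:

    #include <cstdint>
    #include <cstdio>

    int main() {
        uint64_t row_len = 4294971520ULL;          // > UINT_MAX, as in the bug
        uint32_t truncated = (uint32_t) row_len;   // the implicit "uint" cast
        printf("requested %llu bytes, realloc sees %u\n",
               (unsigned long long) row_len, truncated); // 4294971520 -> 4224
        return 0;
    }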
-
- 09 Jan, 2014 4 commits
-
-
Luis Soares authored
- Automerged from the bug branch into latest mysql-5.5.
- Fixed trailing whitespace.
- Updated the copyright notice year to 2014.
-
Murthy Narkedimilli authored
-
mithun authored
FROM SUBSELECT

ISSUE: In the function find_all_keys, if the selected row does not satisfy the condition, we call unlock_row to release the locked row. But if the condition contains a subquery and an innodb error occurs during its execution, unlock_row must not be called: when the error is a deadlock, innodb has already rolled back the transaction, and calling unlock_row without a transaction is invalid, hence the assertion failure.

SOLUTION: We call unlock_row only if no error occurred previously. The solution is backported from 5.6, defect number 14226481.
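A control-flow sketch of the added guard (Session and the helper names are hypothetical stand-ins for the server's THD and handler objects):

    #include <cstdio>

    struct Session { bool error_raised = false; };

    void unlock_row(Session &) { puts("row lock released"); }

    // Loop body of the sort/filter after the fix: release the lock on a
    // non-matching row only when the condition evaluated without error.
    void filter_row(Session &s, bool matches) {
        if (!matches && !s.error_raised)   // the added guard
            unlock_row(s);
    }

    int main() {
        Session s;
        filter_row(s, false);     // no error: lock is released
        s.error_raised = true;    // e.g. deadlock inside the subquery
        filter_row(s, false);     // error: unlock_row must not run
        return 0;
    }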
-
mysql-builder@oracle.com authored
No commit message
-
- 08 Jan, 2014 3 commits
-
-
Aditya A authored
IN DOCUMENTATION

Problem
-------
The documentation says that we support a 'K' suffix when specifying the size of an innodb datafile in the innodb_data_file_path server variable, but the function srv_parse_megabytes() handles only 'M' (megabytes) and 'G' (gigabytes).

Fix
---
Modify srv_parse_megabytes() to handle kilobytes. Document that a size specified in KB should be a multiple of 1024, otherwise it is rounded down to the nearest MB (megabyte) boundary (e.g. a size given as 2313KB is treated as 2MB).

[ Approved by Marko #rb 2387 ]
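A standalone sketch of the suffix handling described above (this is not the actual srv_parse_megabytes(); it only illustrates why 2313K rounds down to 2M):

    #include <cstdio>
    #include <cstdlib>

    // Parse a size string into megabytes; 'K' values are converted by
    // truncating division, so anything not a multiple of 1024K rounds down.
    unsigned long parse_size_mb(const char *str) {
        char *end;
        unsigned long value = strtoul(str, &end, 10);
        switch (*end) {
        case 'K': case 'k': return value / 1024;
        case 'G': case 'g': return value * 1024;
        default:            return value;   // 'M' or bare number
        }
    }

    int main() {
        printf("2313K -> %luM\n", parse_size_mb("2313K")); // prints 2M
        return 0;
    }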
-
Anirudh Mangipudi authored
WITH SSL ENABLED

Problem: It was reported that MySQL community utilities cannot connect to a MySQL Enterprise 5.6.x server with SSL configured. The issue is reproducible by connecting to a MySQL Enterprise server with a MySQL client using the --ssl-ca parameter: the connection fails with ERROR 2026 (HY000): SSL connection error: unknown error number.

Solution: The root cause was determined to be a difference in certificate handling between OpenSSL (Enterprise) and yaSSL (Community). OpenSSL expects a blank certificate to be sent when a parameter (ssl-ca, ssl-cert or ssl-key) has not been specified. yaSSL, on the other hand, sends no certificate at all, and since OpenSSL does not expect this behaviour it returns an unknown SSL error. The issue was resolved by adding to yaSSL the capability to send a blank certificate when any of these parameters is missing.
-
Nisha Gopalakrishnan authored
Analysis
--------
Running 'mysqld --help --verbose' as the root user without the '--user' option displays the help contents but aborts at the end with exit code 1. While starting the server, a validation ensures that when the server is started as root it is done with the '--user' option; otherwise we abort. In the help case, we dump the help contents and then abort.

Fix
---
During the validation, we skip aborting the server when the help option is used under the condition mentioned above.

NOTE: A test case has not been added since it requires running as the 'root' user.
-
- 07 Jan, 2014 1 commit
-
-
Bharathy Satish authored
Problem: DROP TRIGGER succeeds even after setting the read_only variable to ON.
Fix: Report the standard error (ER_OPTION_PREVENTS_STATEMENT) when the global read_only variable is set to ON.
-
- 06 Jan, 2014 2 commits
-
-
laasya.moduludu@oracle.com authored
-
Murthy Narkedimilli authored
-
- 30 Dec, 2013 1 commit
-
-
Arun Kuruvila authored
LOCAL TABLE WHEN ONLY 1 LOCAL ROW

Description: When updating a federated table with UPDATE... JOIN, the server consistently crashes with signal 11 when the local table involved in the join has only one row and that row can be joined with a row in the federated table.

Analysis: The crash results from the interaction between the federated engine and the optimizer. In our scenario, i.e. a local table having only one row, the code follows a different path because the table is treated as a constant table by the join optimizer. "index_read()" therefore happens in the prepare phase, since the optimizer plan is different for constant-table joins. In this case "index_read_idx_map()" (inside handler.cc) calls "index_read()", inside which the matching rows are fetched and "stored_result" is populated by calling "store_result()". Right after "index_read()", the "index_end()" function is called, and "index_end()" frees "stored_result" by calling "free_result()". So when execution reaches the "position()" function, the assertion "DBUG_ASSERT(stored_result);" fires. In all other scenarios (i.e. a table with more than one row) the optimizer plan is different and "index_read()" happens in the execution phase.

Fix: Give ha_federated a separate member function for "index_read_idx_map()" which handles the federated engine specially, so that position() is called before index_end() in the constant-table scenario.
-
- 29 Dec, 2013 1 commit
-
-
Aditya A authored
ERRORS IN THE FK SECTION

ANALYSIS
--------
Any error during the renaming of a table was incorrectly logged to dict_foreign_err_file, so it showed up in the foreign key section of "SHOW ENGINE INNODB STATUS".

FIX
---
Prevent renaming errors from being logged to dict_foreign_err_file.

[ Approved by Marko #rb 2501 ]
-
- 26 Dec, 2013 1 commit
-
-
Satya Bodapati authored
IT IS DONE IN-PLACE

Added a test case: innodb-change-buffer-recovery.test
-
- 19 Dec, 2013 1 commit
-
-
Venkata Sidagam authored
Adding test cases for BUG#16451878.
-
- 18 Dec, 2013 5 commits
-
-
Bjorn Munch authored
FAILS TO DROP CREATED EVENTS:
- The check for triggers should exclude mtr's own.
- Move the code to before CHECKSUM TABLE, as it might affect the result of some audit_log tests (it does in 5.6).
- Rewrite the SHOW STATUS LIKE 'slave_open_temp_tables' check to match 5.6.
-
Luis Soares authored
AUTO_INC COLUMN ONLY ON SLAVE

In RBR, if the slave's table has one additional auto_inc column, the slave will insert the value 0 instead of generating the next auto_inc number. We fix this by checking whether an extra auto_inc column exists when compared to the column data of the row event; if so, we explicitly set it to NULL and flag the engine that a nulled auto_inc column will be inserted.
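A toy model of the rule (illustrative structs, not the replication classes): columns beyond those carried by the event are examined, and an extra auto_inc column is nulled so the engine assigns the next value:

    #include <cstdio>
    #include <vector>

    struct Column { const char *name; bool auto_inc; bool is_null; long value; };

    // Mark slave-only auto_inc columns NULL instead of leaving them 0.
    void apply_extra_columns(std::vector<Column> &slave_row, size_t event_cols) {
        for (size_t i = event_cols; i < slave_row.size(); ++i)
            if (slave_row[i].auto_inc)
                slave_row[i].is_null = true;   // engine generates the id
    }

    int main() {
        std::vector<Column> row = {{"a", false, false, 10},
                                   {"id", true, false, 0}};  // slave-only
        apply_extra_columns(row, 1);   // the event carried only column "a"
        printf("id inserts as %s\n",
               row[1].is_null ? "NULL (auto-generated)" : "0");
        return 0;
    }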
-
Tor Didriksen authored
fix: DROP EVENT e1;
-
Tor Didriksen authored
Bug#68338 RFE: make tmpdir a build-time configurable option

Background: Some distributions mount /tmp on tmpfs by default, which has some advantages but also brings new issues; Fedora, for example, started using tmpfs on /tmp in version 18. If not configured otherwise in my.cnf, MySQL uses the system constant P_tmpdir, which expands to /tmp on Linux. This can cause problems with the limited space in /tmp, and also data loss on a replication slave [1]. If a distribution would rather use /var/tmp, which is better suited to MySQL's purposes, it has to patch the source or change the tmpdir option in my.cnf, which is however not updated if it already exists. It is therefore useful to be able to specify the default tmpdir path with a configure option, while falling back to P_tmpdir when none is defined explicitly.

Based on a contribution from Honza Horak
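A sketch of the fallback described above (DEFAULT_TMPDIR is the macro name from the commit; how the cmake option feeds this define is assumed here, not shown):

    #include <cstdio>   // also provides P_tmpdir on POSIX systems

    // If the build did not configure a default, fall back to the
    // system constant, as the commit message describes.
    #ifndef DEFAULT_TMPDIR
    #define DEFAULT_TMPDIR P_tmpdir   /* "/tmp" on Linux */
    #endif

    int main() {
        printf("default tmpdir: %s\n", DEFAULT_TMPDIR);
        return 0;
    }

Building with e.g. -DDEFAULT_TMPDIR='"/var/tmp"' would then bake the distribution's preferred path into the binary.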
-
Venkatesh Duggirala authored
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Post-push fix: resolving a -Werror compiler issue.
-
- 17 Dec, 2013 1 commit
-
-
Venkatesh Duggirala authored
(MYSQLBINLOG -V CRASHES WITH THAT BINLOG)

Problem: If the slave receives a corrupted row event, the slave server crashes.

Analysis: When the slave unpacks a row event, it does not validate the data before applying the event. If the data is corrupted, e.g. the length of a field is wrong, it can end up reading wrong data, leading to a crash. A similar problem occurs when the mysqlbinlog tool is used on a corrupted binlog with the '-v' option: because of -v, the tool tries to print the values of all fields, and a corrupted field length can lead to a crash.

Fix: Before unpacking a field, verify its length. Only if it falls within the event's range is the field unpacked; otherwise the "ER_SLAVE_CORRUPT_EVENT" error is thrown. In the mysqlbinlog -v case, the field value is not printed and processing of the file stops.
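A standalone sketch of the validation idea (the buffer layout and names are illustrative, not the actual row-event format):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Returns false for a corrupt event instead of reading past event_end.
    bool unpack_field(const uint8_t *ptr, const uint8_t *event_end,
                      char *out, size_t out_size) {
        if (ptr >= event_end) return false;
        size_t len = *ptr++;                        // claimed field length
        if (len + 1 > out_size ||
            (size_t) (event_end - ptr) < len)       // validate before reading
            return false;                           // -> ER_SLAVE_CORRUPT_EVENT
        memcpy(out, ptr, len);
        out[len] = '\0';
        return true;
    }

    int main() {
        const uint8_t event[] = {200, 'h', 'i'};    // claims 200 bytes, has 2
        char buf[16];
        printf("%s\n", unpack_field(event, event + sizeof event, buf,
                                    sizeof buf) ? buf : "corrupt event");
        return 0;
    }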
-
- 14 Dec, 2013 1 commit
-
-
Kent Boortz authored
Bug#29716 : Bug#11746921 : MYSQL_INSTALL_DB REFERS TO THE (OBSOLETE) MYSQLBUG SCRIPT DURING INSTALLATION
Bug#68742 : Bug#16530527 : OBSOLETE BUGREPORT ADDRESSES
-
- 13 Dec, 2013 1 commit
-
-
Marc Alff authored
-
- 12 Dec, 2013 1 commit
-
-
sayantan dutta authored
-
- 11 Dec, 2013 1 commit
-
-
Marc Alff authored
DESTRUCTED THD OBJ

Prior to this fix, the function check_performance_schema() could leave stale pointers behind in thread-local storage, for the following keys:
- THR_THD (used by _current_thd)
- THR_MALLOC (used for memory allocation)

This is an unsafe practice that can potentially cause crashes, and that can cause other bugs when the code is modified during maintenance. With this fix, the thread-local storage keys used temporarily within check_performance_schema() are cleaned up after use.
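A standalone sketch of the cleanup pattern, using raw pthread TLS as a stand-in for the server's THR_THD/THR_MALLOC keys:

    #include <pthread.h>
    #include <cstdio>

    static pthread_key_t thr_thd_key;

    int main() {
        pthread_key_create(&thr_thd_key, nullptr);
        {
            int temporary_thd = 42;   // stands in for the check's THD
            pthread_setspecific(thr_thd_key, &temporary_thd);
            // ... work that needs the key while the object is alive ...
            pthread_setspecific(thr_thd_key, nullptr); // the fix: clear the
        }                                              // key before destruction
        printf("stale pointer left behind: %s\n",
               pthread_getspecific(thr_thd_key) ? "yes" : "no"); // prints no
        return 0;
    }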
-
- 04 Dec, 2013 2 commits
-
-
Guilhem Bichot authored
Bug#17867117 - ERROR RESULT WHEN "COUNT + DISTINCT + CASE WHEN" NEED MERGE_WALK

Problem: COUNT DISTINCT gives an incorrect result when it uses a Unique tree and the last inserted record has a NULL value. Here is how COUNT DISTINCT is processed, given that the query is not using loose index scan. When a row is produced as a result of joining tables (there is only one table here), we store the SELECTed value in a Unique tree. This eliminates duplicates and thus implements DISTINCT. When all rows have been processed this way, we walk the Unique tree, counting its elements, in Aggregator_distinct::endup() (tree->walk()); for each element we call Item_sum_count::add(). That function wants to ignore any NULL value, and for that it checks item_sum->args[0]->null_value. This is a mistake: when walking the Unique tree, the value being aggregated is not item_sum->args[0] but rather table->field[0].

Solution: instead of item_sum->args[0]->null_value, use arg_is_null(), which knows where to look (as in the fix for bug 57932). As a consequence, arg_is_null() has to be made a little more general: 1) Because it was so far only used for AVG() (which always has a single argument), it looked at a single argument; now that it has to work with COUNT(DISTINCT expression1, expression2), it must look at all arguments. 2) Because we start using arg_is_null() for COUNT(DISTINCT), i.e. in Item_sum_count::add(), we also use it for COUNT without DISTINCT (same add()). For COUNT without DISTINCT, the nullness to check is that of item_sum->args[0]. But the null_value of such an item is reliable only if val_*() has been called on it. So far arg_is_null() was always used after a call to arg_val*(), so it could rely on null_value; but for COUNT there is no call to arg_val*(), so arg_is_null() has to call is_null() instead.

Testcase for 16539979 by Neeraj. Testcase for 17867117 contributed by Xiaobin Lin from Taobao.
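A toy model of the tree walk (std::set plays the Unique tree here; nothing in this sketch is the server's class hierarchy): the nullness that matters is the walked element's own, and NULL contributes nothing to the count:

    #include <cstdio>
    #include <optional>
    #include <set>

    int main() {
        // Distinct SELECTed values, with NULL inserted last as in the bug.
        std::set<std::optional<int>> tree = {1, 2, std::nullopt};

        long count = 0;
        for (const auto &elem : tree)
            if (elem.has_value())   // check the element itself, not the
                ++count;            // item that produced the last row
        printf("COUNT(DISTINCT ...) = %ld\n", count);  // prints 2
        return 0;
    }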
-
Hery Ramilison authored
-
- 03 Dec, 2013 1 commit
-
- 29 Nov, 2013 2 commits
-
-
Pavan Naik authored
Fix:
----
Created separate suites called innodb_zip and i_innodb_zip that contain all the compression tests. The new suites are run with the following compression-related parameters:
* innodb_compression_level = {1/9}
* innodb_log_compressed_pages = {ON/OFF}
-
Balasubramanian Kandasamy authored
-
- 27 Nov, 2013 1 commit
-
-
mysql-builder@oracle.com authored
No commit message
-
- 25 Nov, 2013 4 commits
-
-
Balasubramanian Kandasamy authored
-
Balasubramanian Kandasamy authored
-
Anirudh Mangipudi authored
MALFORMED XPATH EXP

Problem: A malformed XPath expression in an ExtractValue query causes a server crash. The malformed expression results when the position attribute in the substring function begins with "..".

Solution: The crash happens because the "../" is evaluated prematurely: it tries to access the XML before it has been parsed. The premature evaluation happens because the val_nodeset functions were marked constant, in which case they are evaluated already in the JOIN::prepare stage. The solution is to mark the val_nodeset functions as non-constant, forcing evaluation in the JOIN::exec stage and thus avoiding any premature evaluation of the XML strings.
-
Anirudh Mangipudi authored
WITH MALFORMED XPATH EXP

Problem: A malformed XPath expression in an ExtractValue query causes a server crash. The malformed expression results when the position attribute in the substring function begins with "..".

Solution: The crash happens because the "../" is evaluated prematurely: it tries to access the XML before it has been parsed. The premature evaluation happens because the val_nodeset functions were marked constant, in which case they are evaluated already in the JOIN::prepare stage. The solution is to mark the val_nodeset functions as non-constant, forcing evaluation in the JOIN::exec stage and thus avoiding any premature evaluation of the XML strings.
-