- 30 Jan, 2013 6 commits
-
-
Mattias Jonsson authored
Due to an internal change in the server code between 5.1 and 5.5 (WL#2649), the hash function used in KEY partitioning changed for numeric and date/time columns (from binary hash calculation to character-based hash calculation). ENUM/SET also changed, from latin1 ci based hash calculation to binary hash, between 5.1 and 5.5 (Bug#11759782). These changes make KEY [sub]partitioned tables on any of the affected column types incompatible with 5.5 and above, since the calculation of partition id differs. Also, since InnoDB asserts that a deleted row was previously read (positioned), the server asserts on delete of a row that is in the wrong partition.

The solution for this situation is:

1) The partitioning engine will check that a delete/update goes to the partition the row was read from, and give an error otherwise, listing the row's partitioning fields. This avoids asserts in InnoDB and also alerts the user that there is a misplaced row. A detailed error message is given, including an entry in the error log with the table name, partition and row content (PK if it exists, otherwise all partitioning columns).

2) A new optional syntax for KEY () partitioning in 5.5 is allowed: [SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols) where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use binary hashing, ENUM/SET uses charset hashing) and N = 2 uses the same hashing as 5.5 (numeric/date/time fields use charset hashing, ENUM/SET uses binary hashing). If not set on CREATE/ALTER it defaults to 2. This new syntax should probably be ignored by NDB.

3) Since there is a demand for avoiding a scan through the full table during upgrade, the ALTER TABLE t PARTITION BY ... command is considered a no-op (only a .frm change) if everything except ALGORITHM is the same and ALGORITHM was not set before. This allows manually upgrading such a table with something like: ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()

4) Enhanced partitioning with CHECK/REPAIR to also check for/repair misplaced rows (also works for ALTER TABLE t CHECK/REPAIR PARTITION). CHECK ... FOR UPGRADE: if the .frm version is < 5.5.3 and the table uses KEY [sub]partitioning on an affected column type, it fails with the message: KEY () partitioning changed, please run: ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a) PARTITIONS 12 (i.e. the current partitioning clause, with the addition of ALGORITHM = 1). CHECK without FOR UPGRADE: if the MEDIUM (default) or EXTENDED option is given, scan all rows and verify that each one is in the correct partition; fail on the first misplaced row. REPAIR: if default or EXTENDED (i.e. not QUICK/USE_FRM), scan all rows and move every misplaced row into its correct partition.

5) Updated mysqlcheck (called by mysql_upgrade) to handle the new output from CHECK ... FOR UPGRADE and run the ALTER statement instead of running REPAIR. This allows mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade a KEY [sub]partitioned table that has any affected field type and a .frm version < 5.5.3 to ALGORITHM = 1 without a rebuild.

Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM is not set, it is not possible to know whether it contains rows from 5.1 or 5.5!
In these cases I suggest that the user does the following (see the sketch below):
(optional) LOCK TABLE t WRITE;
SHOW CREATE TABLE t; (verify that it has no ALGORITHM = N; to be safe, I would also suggest backing up the .frm file, to be used if one needs to change to another ALGORITHM = N without a rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>; which sets the ALGORITHM to N (if the table has rows from 5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t; (here one could use the backed-up .frm instead, change to a new N, and run CHECK again to see if it passes)
and if there are misplaced rows: REPAIR TABLE t;
(optional) UNLOCK TABLES;
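A minimal sketch of that manual upgrade, assuming a table t partitioned as in the CHECK FOR UPGRADE example above (KEY partitioning on column a, 12 partitions); the names and the choice of ALGORITHM = 1 are illustrative:

    LOCK TABLE t WRITE;                 -- optional, keeps rows stable during the check
    SHOW CREATE TABLE t;                -- verify there is no ALGORITHM = N yet
    ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 (a) PARTITIONS 12;  -- .frm-only change
    CHECK TABLE t;                      -- scans rows, reports the first misplaced one
    REPAIR TABLE t;                     -- only needed if CHECK found misplaced rows
    UNLOCK TABLES;                      -- optional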
-
mysql-builder@oracle.com authored
No commit message
-
mysql-builder@oracle.com authored
No commit message
-
Aditya A authored
WITH --SKIP-INNODB

Description
-----------
If the server is started with skip-innodb, or InnoDB otherwise fails to start, any one of these queries will crash the server. In 5.5:
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE;
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU;
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
In 5.6+, the following queries will also crash the server:
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_COLUMNS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FIELDS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN_COLS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESTATS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;

Fix
---
When InnoDB is not active we must prevent it from processing these tables, so we return a warning saying that InnoDB is not active.

Approved by Marko (http://rb.no.oracle.com/rb/r/1891)
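A sketch of the behavior the fix aims for, assuming a server started with --skip-innodb; the exact warning text is not taken from the patch:

    SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
    -- empty result instead of a crash
    SHOW WARNINGS;
    -- expect a warning along the lines of "InnoDB is not active"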
-
WITH AN ASSERTION

Null merge from mysql-5.1.
-
WITH AN ASSERTION

Correcting the build failure that was caused by changes checked in to the revision mentioned below. (Changes: DEBUG_SYNC_C should be disabled for innodb_plugin under the Windows environment. Note: only for innodb_plugin.)

revno: 3915
revision-id: krunal.bauskar@oracle.com-20130114051951-ang92lkirop37431
parent: nisha.gopalakrishnan@oracle.com-20130112054337-gk5pmzf30d2imuw7
committer: Krunal Bauskar krunal.bauskar@oracle.com
branch nick: mysql-5.1
timestamp: Mon 2013-01-14 10:49:51 +0530
-
- 29 Jan, 2013 2 commits
-
-
Neeraj Bisht authored
ON COL WITH COMPOSITE INDEX

This problem was caused by the patch for Bug#11751794: while checking for the keypart covering a non-grouping attribute, we did not check whether the root node of the SEL_ARG* tree for the index has any cvalue.
-
Neeraj Bisht authored
ON COL WITH COMPOSITE INDEX

This problem was caused by the patch for Bug#11751794: while checking for the keypart covering a non-grouping attribute, we did not check whether the root node of the SEL_ARG* tree for the index has any cvalue.
-
- 28 Jan, 2013 5 commits
-
-
Nuno Carvalho authored
Merge from mysql-5.1 into mysql-5.5.
-
Nuno Carvalho authored
In a previous fix, user variables with a zero-length name were incorrectly treated as event corruption, even though they are allowed by the server. Fix this wrong assumption by again allowing user variables with zero-length names in the binary log.
-
Satya Bodapati authored
With innodb_change_buffering enabled, InnoDB buffers all modifications to secondary index leaf pages when the leaf pages are not in the buffer pool. If InnoDB crashes while an IBUF_OP_DELETE is being applied, the same record can be applied again after restart, which may lead to a crash. Fix: mark the change buffer record processed, so that it will not be merged again in case the server crashes between the following mtr_commit() and the subsequent mtr_commit() of deleting the change buffer record. Testcase: none, because it is difficult to get the timing right with the two asynchronous tasks, purge and change buffering. Approved by Marko. rb#1893
-
Venkatesh Duggirala authored
PROPERLY QUOTED IN BINLOG FILE

Merging fix from mysql-5.1.
-
Venkatesh Duggirala authored
PROPERLY QUOTED IN BINLOG FILE

Problem: In a LOAD DATA query, user variables are allowed inside the "Into_list" and "Set_list". User variables used inside these two lists were not properly guarded with backticks when the server wrote the statement to the binlog, so user variable names like a` could not be used in this context.

Fix: Properly quote these variables when writing to the binlog (see the sketch below).
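A minimal sketch of the statement shape involved, with illustrative table and column names; the user variable is deliberately named a` (trailing backtick), which must be written to the binlog as @`a``` (the backtick doubled inside the quotes):

    CREATE TABLE t1 (c1 INT, c2 INT);
    LOAD DATA INFILE 'data.txt' INTO TABLE t1 (c1, @`a```)
      SET c2 = @`a``` + 1;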
-
- 26 Jan, 2013 1 commit
-
-
Venkatesh Duggirala authored
Due to not resetting a member (last_added) of the Deferred events class inside a cleanup function (Deferred_log_events::rewind), there was a memory leak on filtered slaves. Fix: reset last_added to NULL in the rewind() function.
-
- 24 Jan, 2013 5 commits
-
-
mysql-builder@oracle.com authored
No commit message
-
Venkata Sidagam authored
Backporting bug patch from 5.5 to 5.1. This fix is applicable to BUG#14362617 as well.
-
Venkata Sidagam authored
CERTAIN LEVEL

Merging from 5.1 to 5.5.
-
Venkata Sidagam authored
CERTAIN LEVEL

Problem description: mysqld crashes when we update the max_connections variable to a value lower than the number of currently open connections.

Analysis: The "alarm_queue.max_elements" size is decided at server start time and is modified if we change the max_connections value. In this scenario the value of "alarm_queue.max_elements" is decremented when max_connections is set to 2. When updating the "alarm_queue.max_elements" value we do not update the "max_used_alarms" value. Hence, instead of producing the warning "thr_alarm queue is full", the server asserts when inserting new elements into the queue.

Fix: Dynamically increase the size of the alarm_queue. In order to do that, queue_insert_safe() should be used instead of queue_insert().
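A hedged repro sketch of the trigger condition; it assumes more than two client connections are open when the variable is shrunk:

    -- with, say, five open client connections:
    SET GLOBAL max_connections = 2;
    -- before the fix, new activity on the open connections could hit the
    -- assert instead of the "thr_alarm queue is full" warning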
-
Venkatesh Duggirala authored
FROM MYSQL_BINLOG_SEND

As part of Bug#11747416 (A DISK FULL MAKES BINARY LOG CORRUPT), the code reading the variable "binlog_can_be_corrupted" was removed. In the existing code the value of this variable is only set, never read, and it also causes compiler warnings, so the variable is completely redundant and is removed.
-
- 23 Jan, 2013 2 commits
-
-
Yasufumi Kinoshita authored
-
Yasufumi Kinoshita authored
Some callers of page_zip_empty_size() ignored the possibility of it returning 0, which could cause an underflow. rb#1837 approved by Marko
-
- 21 Jan, 2013 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
buf_page_get_gen(): Do not attempt to decompress a compressed-only page when mode == BUF_PEEK_IF_IN_POOL. This mode is only being used by btr_search_drop_page_hash_when_freed(). There cannot be any adaptive hash index pointing to a page that does not exist in uncompressed format in the buffer pool.

innodb_buffer_pool_evict_update(): New function for debug builds, to handle SET GLOBAL innodb_buffer_pool_evicted='uncompressed' by evicting all uncompressed page frames of compressed tablespaces from the buffer pool.

rb#1873 approved by Jimmy Yang
-
- 19 Jan, 2013 2 commits
-
-
Venkatesh Duggirala authored
THAN A TABLE.

Merging fix from mysql-5.1.
-
Venkatesh Duggirala authored
RATHER THAN A TABLE

Problem: In RBR, if a table is converted into a view on the slave (i.e., "DROP TABLE 'object1'" followed by "CREATE VIEW 'object1'"), then any DML operation on the table at the master causes a crash on the slave.

Analysis: The slave prepares the list of tables to be opened for DML when it receives Table_map_log_event(s), and the same list is passed to the open_table function. The open_table logic assumes that if the list contains a view object, it also contains the "select_lex" object of that view. In this special case, the table object does not contain a 'select_lex', since it is a base table at the master. Since it is a view at the slave, the open_table logic goes into the 'mysql_make_view()' function, which assumes that 'select_lex' exists for the object.

Fix: While preparing the 'tables to be opened' list, make sure that the required table type is 'base table'. If it is not a base table when opening the object, mysql_make_view will throw an error similar to 'object is not a base table'.
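A minimal repro sketch under row-based replication; object names are illustrative:

    -- on the master:
    CREATE TABLE t1 (a INT);
    -- on the slave only, replace the table with a view:
    DROP TABLE t1;
    CREATE VIEW t1 AS SELECT 1 AS a;
    -- back on the master; before the fix this row event crashed the slave:
    INSERT INTO t1 VALUES (1);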
-
- 18 Jan, 2013 3 commits
-
-
Astha Pareek authored
Disabled binlog_spurious_ddl_errors on mysql-5.5.
-
mysql-builder@oracle.com authored
No commit message
-
Astha Pareek authored
The test binlog.binlog_spurious_ddl_errors was failing on pb2 at the statement "UNINSTALL PLUGIN example;" with this warning: "Warning 1620 Plugin is busy and will be uninstalled on shutdown".

Fix: The spurious warnings occurred because we did not empty the query cache, which the example plugin was still used by after creating tables through it. Hence, the query cache is flushed before uninstalling the plugin. Also, as part of making the test run across platforms, the plugin installation script is changed.
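A sketch of the shape of the test-side fix; whether the test empties the cache with RESET QUERY CACHE or by another means is an assumption here:

    RESET QUERY CACHE;          -- drop cached results that still pin the plugin
    UNINSTALL PLUGIN example;   -- no longer warns that the plugin is busy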
-
- 17 Jan, 2013 1 commit
-
-
Marko Mäkelä authored
Get rid of the O(n^2) scan in dyn array (mtr->memo) operations by accessing the dyn array blocks directly.

dyn_array_get_last_block(), dyn_array_get_next_block(), dyn_array_get_prev_block(): Define as a constness-preserving macro. Add const qualifiers to many dyn_array functions.

mtr_memo_slot_release_func(): Renamed from mtr_memo_slot_release(). Make mtr_t* a debug-only parameter. Assume that slot->object != NULL.

mtr_memo_pop_all(): Access the dyn_array blocks directly, replacing an O(n^2) operation with O(n).

mtr_memo_release(): Access the dyn_array blocks directly, replacing an O(n^2) operation with O(n). This caused the performance problem.

rb#1540 approved by Jimmy Yang
-
- 16 Jan, 2013 5 commits
-
-
Anirudh Mangipudi authored
Null Merge from 5.1 to 5.5
-
Anirudh Mangipudi authored
Problem: When a view with a specific character set and collation is created on top of another view with a different character set and collation, restoring the dump results in an "illegal mix of collations" error.

Solution: To avoid this mix of collations, the column data type in the stand-in CREATE TABLE is hardcoded as "tinyint NOT NULL". This does not matter, since the table created is dropped at runtime; tinyint is used specifically to avoid hitting row-size limits.
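A hedged sketch of the dump output shape this produces; the view, table and column names are illustrative:

    -- stand-in table emitted early in the dump, every column as tinyint NOT NULL:
    CREATE TABLE `v1` (`a` tinyint NOT NULL);
    -- later the stand-in is dropped and replaced by the real view definition:
    DROP TABLE `v1`;
    CREATE VIEW `v1` AS SELECT `a` FROM `t1`;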
-
Neeraj Bisht authored
Consider the following query:
SELECT f_1,...,f_m, AGGREGATE_FN(C) FROM t1 WHERE ... GROUP BY ...

Loose index scan ("Using index for group-by") can be used for this query if there is an index 'i' covering all fields in the select list, and the GROUP BY clause makes up a prefix f1,...,fn of 'i'. Furthermore, according to rule NGA2 of get_best_group_min_max(), the WHERE clause must contain a conjunction of equality predicates for all fields fn+1,...,fm.

The problem in this bug was that a query with a WHERE clause that broke NGA2 (NGA: Non-Group Attribute) was not detected and therefore used loose index scan. This led to wrong results. The query had an index covering (c1,c2) and had:
"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b') GROUP BY c1"
or
"WHERE (c1 = 1) OR (c1 = 2 AND c2 = 'b') GROUP BY c1"
This WHERE clause cannot be transformed to a conjunction of equality predicates.

The solution is to introduce another rule, NGA3, that complements NGA2. NGA3 says that if a gap field (a field between those listed in GROUP BY and C in the index) has a predicate, then there can only be one range in the query. This requirement is stricter than it has to be in theory; BUG 15947433 will deal with that.
-
Neeraj Bisht authored
Consider the following query:
SELECT f_1,...,f_m, AGGREGATE_FN(C) FROM t1 WHERE ... GROUP BY ...

Loose index scan ("Using index for group-by") can be used for this query if there is an index 'i' covering all fields in the select list, and the GROUP BY clause makes up a prefix f1,...,fn of 'i'. Furthermore, according to rule NGA2 of get_best_group_min_max(), the WHERE clause must contain a conjunction of equality predicates for all fields fn+1,...,fm.

The problem in this bug was that a query with a WHERE clause that broke NGA2 was not detected and therefore used loose index scan. This led to wrong results. The query had an index covering (c1,c2) and had:
"WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b') GROUP BY c1"
or
"WHERE (c1 = 1) OR (c1 = 2 AND c2 = 'b') GROUP BY c1"
This WHERE clause cannot be transformed to a conjunction of equality predicates.

The solution is to introduce another rule, NGA3, that complements NGA2. NGA3 says that if a gap field (a field between those listed in GROUP BY and C in the index) has a predicate, then there can only be one range in the query. This requirement is stricter than it has to be in theory; BUG 15947433 will deal with that.
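A minimal repro sketch of the affected shape, using the WHERE clause quoted above; the table definition and data are illustrative. Before the fix, the OR'ed ranges on the non-grouping column c2 still allowed loose index scan:

    CREATE TABLE t1 (c1 INT, c2 CHAR(1), KEY k1 (c1, c2));
    INSERT INTO t1 VALUES (1,'a'), (1,'b'), (2,'a'), (2,'b');
    SELECT c1, MIN(c2) FROM t1
    WHERE (c1 = 1 AND c2 = 'a') OR (c1 = 2 AND c2 = 'b')
    GROUP BY c1;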
-
mysql-builder@oracle.com authored
No commit message
-
- 15 Jan, 2013 4 commits
-
-
Nisha Gopalakrishnan authored
Analysis
--------
When the server is out of memory, an error is raised to indicate this. Handling the error requires more memory to be allocated, which fails, so the error handling loops in a recursion and causes the server to crash.

Fix
---
a) Prevent pushing the 'out of memory' error condition to the diagnostics area, as this requires memory allocation. GET DIAGNOSTICS, SHOW WARNINGS and SHOW ERRORS statements will not show information about this error. However, the 'out of memory' error is still returned to the client.
b) Set the ME_FATALERROR flag when 'out of memory' errors are reported (in places where the flag is not already set). This flag prevents activation of SP error handlers, which also require memory allocation and are therefore likely to fail.
-
Neeraj Bisht authored
Problem: In the case of a blob data field, UNION ALL does not give the correct result.

Analysis: In a MyISAM table, when we do not want to check distinctness for a particular key, we set that key to zero in the key_map. While writing a record to the MyISAM table, we check distinctness with the help of keys, by checking whether the key is active in the key_map before writing the record. In the case of a blob field, we check distinctness via a unique constraint, without checking whether that unique key is active in the key_map.

Solution: Before checking for distinctness, check whether the key is active in the key_map.
-
Neeraj Bisht authored
Problem: In the case of a blob data field, UNION ALL does not give the correct result.

Analysis: In a MyISAM table, when we do not want to check distinctness for a particular key, we set that key to zero in the key_map. While writing a record to the MyISAM table, we check distinctness with the help of keys, by checking whether the key is active in the key_map before writing the record. In the case of a blob field, we check distinctness via a unique constraint, without checking whether that unique key is active in the key_map.

Solution: Before checking for distinctness, check whether the key is active in the key_map.
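A hedged repro sketch; the schema and data are illustrative, not taken from the bug report. UNION ALL must keep duplicates, but before the fix the unique-constraint check on the blob column could incorrectly drop them:

    CREATE TABLE t1 (b BLOB);
    INSERT INTO t1 VALUES ('x'), ('x');
    SELECT b FROM t1 UNION ALL SELECT b FROM t1;  -- expect 4 rows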
-
mysql-builder@oracle.com authored
No commit message
-
- 14 Jan, 2013 2 commits
-
-
Neeraj Bisht authored
BUG#14303860 - EXECUTING A SELECT QUERY WITH TOO MANY WILDCARDS CAUSES A SEGFAULT

Backport from 5.6 and trunk.
-
Olav Sandstaa authored
WITH A VARIABLE AND ORDER BY

Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX

This is a fix for a regression introduced by Bug#12667154. Bug#12667154 attempted to fix a performance problem with subqueries that did filesort: for doing filesort, the optimizer creates a quick select object to use when building the sort index. This quick select object was deleted after the first call to create_sort_index(). Thus, for queries where the subquery was executed multiple times, the quick object was only used for the first execution; for all later executions of the subquery, filesort used a complete table scan for building the sort index. The fix for Bug#12667154 tried to address this by not deleting the quick object after the first execution of create_sort_index(), so that it would be re-used for building the sort index in the following executions of the subquery.

The regression introduced by Bug#12667154 is that, due to not deleting the quick select object after building the sort index, the quick object could in some cases also be used during the second phase of the execution of the subquery, instead of the created sort index. This caused wrong results to be returned.

The fix for this issue is to delete the reference to the select object after it has been used in create_sort_index(). In this way the select and quick objects will not be available when doing the second phase of the execution of the select operation. To ensure that the select object can be re-used for the following executions of the subquery, we make a copy of the select pointer, which is used for restoring the select object after the select operation is completed.
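A hedged sketch of the affected query shape; the schema and names are illustrative, not taken from the bug report. The correlated subquery runs once per outer row, and each execution builds a sort index with filesort:

    SELECT t1.a,
           (SELECT t2.b FROM t2
             WHERE t2.c = t1.c
             ORDER BY t2.d LIMIT 1) AS first_b
    FROM t1;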
-