- 06 Feb, 2013 3 commits
-
-
Ravinder Thakur authored
Currently the MySQL MSI installer on Windows installs MySQL in "per user" mode. This means that if a Windows machine has multiple users, each of them can install MySQL independently. However, the default path of MySQL is "C:\Program Files (x86)\MySQL\", so when two users install MySQL on the same machine, the second user's installation simply overwrites the first user's files. This shared default location leads to the issue where, if the second user uninstalls MySQL, the installation files are removed for the first user as well. With this fix, the default installation is now "per machine". This means that when MySQL is installed with default options, all users can see the MySQL shortcuts in the start menu (since the installation is for all users). Also, when any user relaunches the installer, it will treat that action as an uninstallation rather than an installation for that user. There are command line options in the installer that can be used to undo the "per machine" installation, but we do not consider that scenario: MySQL is a server product and it does not make much sense to install it differently for each user.
-
unknown authored
-
unknown authored
-
- 05 Feb, 2013 6 commits
-
-
Hery Ramilison authored
-
unknown authored
-
unknown authored
-
unknown authored
-
unknown authored
-
Thayumanavar authored
PROBLEM: When a large number of connections are continuously made with a wait_timeout of 600 seconds for some hours, some connections remain after wait_timeout has expired, and new connections also get stuck under the configuration and scenario reported in bug#16196591.
FIX: The cause of this bug is the issue identified and fixed by BUG#16088658 in 5.6. The LOCK_thread_count contention issue fixed by BUG#15921866 in 5.6 needs to be in 5.5 as well. Since the issue is not reproducible in-house, it was verified against the customer configuration: the issue could not be reproduced after a 48-hour test with a non-debug build that includes the above two fixes backported.
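For context, a minimal sketch of the configuration involved; the actual reproduction required hours of sustained connection churn and is not shown here:

  SET GLOBAL wait_timeout = 600;
  -- Idle connections should be closed once wait_timeout expires;
  -- under the bug, some lingered and new connections got stuck:
  SHOW GLOBAL STATUS LIKE 'Threads_connected';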
-
- 04 Feb, 2013 2 commits
- 01 Feb, 2013 3 commits
-
-
Inaam Rana authored
I/O IS ASYNC
rb://1934, approved by: Mikael Ronstrom (over email)
When submitting an AIO read request, don't signal that the thread is about to wait on DISKIO.
-
unknown authored
-
Nisha Gopalakrishnan authored
Analysis:
---------
As part of the fix for Bug#11757464, the 'out of memory' error condition was not pushed to the diagnostic area, since doing so requires memory allocation. However, in the case of a SIGNAL/RESIGNAL of an 'out of memory' error, the server may not actually be out of memory, so it would be good to report the error in such cases.

Fix:
----
Push only non-fatal 'out of memory' errors to the diagnostic area. Since a SIGNAL/RESIGNAL of an 'out of memory' error may not be fatal, the error is now reported.
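For illustration, a sketch of the SIGNAL case the fix targets: the 'out of memory' error (SQLSTATE HY001, errno 1037) is raised by the user, so the server itself is not out of memory and the condition can safely reach the diagnostic area. The procedure name is illustrative.

  CREATE PROCEDURE p1()
    SIGNAL SQLSTATE 'HY001' SET MYSQL_ERRNO = 1037, MESSAGE_TEXT = 'Out of memory';
  -- With the fix, the signalled error is reported to the client:
  CALL p1();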
-
- 31 Jan, 2013 6 commits
-
-
Gleb Shchepa authored
Manual up-merge from 5.1 to 5.5.
-
Gleb Shchepa authored
Some queries with "SELECT ... FROM DUAL" nested subqueries failed with an assertion on debug builds. Non-debug builds were not affected. There were a few different issues with similar assertion failures on different queries:

1. The first problem was related to incomplete propagation of the "non-constant" item status from underlying subquery items to the outer item tree: in some cases non-constants were interpreted as constants and evaluated at the preparation stage (val_int() calls within fix_fields() etc.). The default implementation of Item_ref::const_item(), inherited from the Item parent class, didn't take the "const_item" status of the referenced item tree into account; it used the insufficient "used_tables() == 0" check instead. This worked in most cases, since our "non-constant" functions like RAND() and SLEEP() set the RAND_TABLE_BIT in the used table map, so they aren't constant from Item_ref's point of view. However, a "SELECT ... FROM DUAL" subquery may have an empty map of used tables, while at the same time subqueries are never "constant" at the context analysis stage (preparation, view creation etc.). So the non-constantness of such subqueries was missed.
Fix: the Item_ref::const_item() function has been overloaded to take into account both the (*ref)->const_item() status and the tricky Item_ref::used_tables() return values, since the (*ref)->const_item() call alone is not enough there.

2. In some cases, instead of the const_item() call, we check the value of the Item::with_subselect field to recognize items with nested subqueries. However, the Item_ref class didn't propagate this value from the referenced item tree.
Fix: the Item::has_subquery() and Item_ref::has_subquery() functions have been backported from 5.6. All direct references to the with_subselect fields of nested items have been replaced with has_subquery() calls.

3. The Item_func_regex class didn't propagate with_subselect either, since it overloads Item_func::fix_fields() with an insufficient fix_fields() implementation.
Fix: the Item_func_regex::fix_fields() function has been modified to gather "constant" statuses from inner items.

4. The Item_func_isnull::update_used_tables() function has a special branch for underlying items whose maybe_null value is false: in this case it marks the Item_func_isnull as a "const_item" and sets cached_value to false. However, Item_func_isnull::val_int() was not in sync with update_used_tables(): it took neither const_item_cache nor cached_value into account in the "args[0]->maybe_null == false" optimization. Since such an Item_func_isnull has "const_item() == true", it is ok to call Item_func_isnull::val_int() etc. from outer items at the preparation stage. In this case the server tried to call Item_func_isnull::args[0]->isnull(), and if the args[0] item contained a nested non-nullable subquery, it failed with an assertion.
Fix: take the value of Item_func_isnull::const_item_cache into account in the val_int() function.

5. The auxiliary Item_is_not_null_test class has the same optimization in its update_used_tables() function as Item_func_isnull, and the same issue in its val_int() function. In addition, Item_is_not_null_test::update_used_tables() doesn't update the const_item_cache value, so the "maybe_null" optimization is useless there. Thus, we missed some optimizations of cases like these (before and after the fix):

  < <is_not_null_test>(a),
  ---
  > <cache>(<is_not_null_test>(a)),

or

  < having (<is_not_null_test>(a) and <is_not_null_test>(a))
  ---
  > having 1

Fix: update Item_is_not_null_test::const_item_cache in update_used_tables() and take it into account in val_int().
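A hypothetical query of the affected shape (the exact queries from the bug report are not reproduced here): the nested "SELECT ... FROM DUAL" subquery has an empty used-tables map, yet must not be treated as a constant during preparation.

  -- Combines a "FROM DUAL" subquery (issue 1) with IS NULL handling (issue 4):
  SELECT (SELECT 1 FROM DUAL) IS NULL FROM DUAL;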
-
Yasufumi Kinoshita authored
-
Yasufumi Kinoshita authored
innodb_bug12400341.test is disabled for the valgrind daily test. It might be affected by undo slots left over from the previous test, due to the slower execution under valgrind.
-
Chaithra Gopalareddy authored
Backport of fix for Bug#13581962

mysql-test/r/cast.result:
  Added test result for Bug#13581962, Bug#14096619
mysql-test/r/ctype_utf8mb4.result:
  Added test result for Bug#13581962, Bug#14096619
mysql-test/t/cast.test:
  Added test case for Bug#13581962, Bug#14096619
mysql-test/t/ctype_utf8mb4.test:
  Added test case for Bug#13581962, Bug#14096619
sql/item_func.h:
  Limit max length by MY_INT64_NUM_DECIMAL_DIGITS
-
Chaithra Gopalareddy authored
Backport of Bug#13581962

mysql-test/r/cast.result:
  Added test result for Bug#13581962, Bug#14096619
mysql-test/t/cast.test:
  Added test case for Bug#13581962, Bug#14096619
sql/item_func.h:
  Limit max length by MY_INT64_NUM_DECIMAL_DIGITS
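A hedged illustration of the limit being enforced: however long the input string, an integer CAST result is bounded by the decimal digits of a 64-bit value (MY_INT64_NUM_DECIMAL_DIGITS), so the result metadata length stays bounded as well.

  SELECT CAST(REPEAT('9', 1000) AS SIGNED);   -- value clipped to the 64-bit range, with a warning
  SELECT CAST('-123' AS SIGNED), CAST('123' AS UNSIGNED);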
-
- 30 Jan, 2013 6 commits
-
-
Mattias Jonsson authored
Due to an internal change in the server code between 5.1 and 5.5 (wl#2649), the hash function used in KEY partitioning changed for numeric and date/time columns (from binary hash calculation to character-based hash calculation). Enum/set also changed, from latin1 ci based hash calculation to binary hash, between 5.1 and 5.5 (bug#11759782). These changes make KEY [sub]partitioned tables on any of the affected column types incompatible with 5.5 and above, since the calculation of the partition id differs. Also, since InnoDB asserts that a deleted row was previously read (positioned), the server asserts on delete of a row that is in the wrong partition.

The solution for this situation is:

1) The partitioning engine will check that a delete/update goes to the partition the row was read from, and give an error otherwise, consisting of the row's partitioning fields. This avoids the asserts in InnoDB and also alerts the user that there is a misplaced row. A detailed error message is given, including an entry in the error log consisting of table name, partition and row content (PK if it exists, otherwise all partitioning columns).

2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
[SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use binary hashing, ENUM/SET use charset hashing) and N = 2 uses the same hashing as 5.5 (numeric/date/time fields use charset hashing, ENUM/SET use binary hashing). If not set on CREATE/ALTER, it defaults to 2. This new syntax should probably be ignored by NDB.

3) Since there is a demand for avoiding a scan through the full table, during upgrade the ALTER TABLE t PARTITION BY ... command is considered a no-op (only a .frm change) if everything except ALGORITHM is the same and ALGORITHM was not set before. This allows manually upgrading such a table with something like:
ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 ()
or
ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()

4) Enhanced partitioning with CHECK/REPAIR to also check for/repair misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION.)
CHECK ... FOR UPGRADE: if the .frm version is < 5.5.3 and the table uses KEY [sub]partitioning on an affected column type, it fails with a message:
KEY () partitioning changed, please run:
ALTER TABLE `test`.`t1` PARTITION BY KEY ALGORITHM = 1 (a) PARTITIONS 12
(i.e. the current partitioning clause, with the addition of ALGORITHM = 1)
CHECK without FOR UPGRADE: if the MEDIUM (default) or EXTENDED option is given, scan all rows and verify that each is in the correct partition; fail on the first misplaced row.
REPAIR: if default or EXTENDED (i.e. not QUICK/USE_FRM), scan all rows and move every misplaced row into its correct partition.

5) Updated mysqlcheck (called by mysql_upgrade) to handle the new output from CHECK ... FOR UPGRADE and run the suggested ALTER statement instead of running REPAIR. This allows mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade a KEY [sub]partitioned table with any affected field type and a .frm version < 5.5.3 to ALGORITHM = 1 without a rebuild.

Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM is not set, it is not possible to know whether it contains rows from 5.1 or 5.5! In these cases I suggest that the user does:
(optional) LOCK TABLE t WRITE;
SHOW CREATE TABLE t;
(verify that it has no ALGORITHM = N; to be safe, I would suggest backing up the .frm file, to be used if one needs to change to another ALGORITHM = N without a rebuild/repair)
ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
which sets the ALGORITHM to N (if the table has rows from 5.1 I would suggest N = 1, otherwise N = 2)
CHECK TABLE t;
(here one could use the backed-up .frm instead, change to a new N, run CHECK again and see if it passes)
and if there are misplaced rows:
REPAIR TABLE t;
(optional) UNLOCK TABLES;
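A minimal sketch of the manual upgrade flow described above, assuming a table t1 that was KEY-partitioned on column a under 5.1 (table name, column and partition count are illustrative):

  LOCK TABLE t1 WRITE;
  SHOW CREATE TABLE t1;        -- verify that no ALGORITHM clause is set yet
  -- Keep 5.1 hashing; with an otherwise identical clause this is a .frm-only change:
  ALTER TABLE t1 PARTITION BY KEY ALGORITHM = 1 (a) PARTITIONS 4;
  CHECK TABLE t1;              -- scans for misplaced rows
  REPAIR TABLE t1;             -- moves any misplaced rows
  UNLOCK TABLES;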
-
unknown authored
No commit message
-
unknown authored
No commit message
-
Aditya A authored
WITH --SKIP-INNODB

Description
-----------
If the server is started with skip-innodb, or InnoDB otherwise fails to start, any one of these queries will crash the server. In 5.5:
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE;
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU;
SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
In 5.6+, the following queries will also crash the server:
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_COLUMNS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FIELDS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN_COLS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESTATS;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES;
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;

Fix
---
When InnoDB is not active, we must prevent it from processing these tables, so we return a warning saying that InnoDB is not active.

Approved by marko (http://rb.no.oracle.com/rb/r/1891)
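A sketch of the post-fix behavior, assuming the server was started with --skip-innodb (the exact warning wording may differ):

  SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE;  -- empty result, no crash
  SHOW WARNINGS;  -- expected: a warning that InnoDB is not active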
-
WITH AN ASSERTION
Null merge from mysql-5.1
-
WITH AN ASSERTION
Correcting the build failure caused by the changes checked in to the revision mentioned below. (Change: DEBUG_SYNC_C should be disabled for innodb_plugin in the Windows environment. Note: only for innodb_plugin.)
revno: 3915
revision-id: krunal.bauskar@oracle.com-20130114051951-ang92lkirop37431
parent: nisha.gopalakrishnan@oracle.com-20130112054337-gk5pmzf30d2imuw7
committer: Krunal Bauskar krunal.bauskar@oracle.com
branch nick: mysql-5.1
timestamp: Mon 2013-01-14 10:49:51 +0530
-
- 29 Jan, 2013 2 commits
-
-
Neeraj Bisht authored
ON COL WITH COMPOSITE INDEX
This problem is caused by the patch for bug#11751794. While checking for the keypart covering a non-grouping attribute, we were not checking whether the root node of the SEL_ARG* tree for the index has any cvalue.

sql/opt_range.cc:
  Check whether the keypart_tree has any range tree.
-
Neeraj Bisht authored
ON COL WITH COMPOSITE INDEX
This problem is caused by the patch for bug#11751794. While checking for the keypart covering a non-grouping attribute, we were not checking whether the root node of the SEL_ARG* tree for the index has any cvalue.
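The failing query from the bug report is not quoted here; a hypothetical query of the affected shape would be an aggregate over a composite index where the range tree covers only some keyparts (all names are illustrative):

  CREATE TABLE t1 (a INT, b INT, KEY k1 (a, b));
  SELECT MAX(b) FROM t1 WHERE a = 1;   -- loose index scan candidate on (a, b)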
-
- 28 Jan, 2013 5 commits
-
-
Nuno Carvalho authored
Merge from mysql-5.1 into mysql-5.5.
-
Nuno Carvalho authored
In a previous fix, user variables with a zero-length name were incorrectly considered as event corruption, despite being allowed by the server. Fix this wrong assumption by again allowing user variables with a zero-length name in the binary log.
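As the message notes, the server accepts user variables with a zero-length name, so the binary log must accept them too; for example:

  SET @`` = 1;   -- user variable with a zero-length name
  SELECT @``;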
-
Satya Bodapati authored
With innodb_change_buffering enabled, InnoDB buffers all modifications to secondary index leaf pages when the leaf pages are not in the buffer pool.

Crash InnoDB while an IBUF_OP_DELETE is being applied. Restart, and note that the same record can be applied again, which may lead to a crash.

Fix: mark the change buffer record processed, so that it will not be merged again in case the server crashes between the following mtr_commit() and the subsequent mtr_commit() that deletes the change buffer record.

Testcase: no testcase, because it is difficult to get the timing right with the two asynchronous tasks, purge and change buffering.

Approved by Marko. rb#1893
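For context, the server variable that enables this code path; 'all' also covers the delete-marking and purge operations involved here:

  SET GLOBAL innodb_change_buffering = 'all';
  SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffering';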
-
Venkatesh Duggirala authored
PROPERLY QUOTED IN BINLOG FILE
Merging fix from mysql-5.1
-
Venkatesh Duggirala authored
PROPERLY QUOTED IN BINLOG FILE
Problem: In a LOAD DATA query, user variables are allowed inside the "Into_list" and "Set_list". The user variables used inside these two lists were not properly quoted with backticks while the server was writing to the binlog, so user variable names like a` could not be used in this context.
Fix: Properly quote these variables when writing to the binlog.

mysql-test/r/func_compress.result:
  Changing result file
mysql-test/r/variables.result:
  Changing result file
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result:
  Changing result file
sql/item_func.cc:
  Quote the user variable items
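A hypothetical statement of the affected shape: a user variable whose name contains a backtick (written as @`a```, with the inner backtick doubled), used in both the column list and the SET clause; file and table names are illustrative.

  LOAD DATA INFILE '/tmp/data.txt'
    INTO TABLE t1 (@`a```, b)
    SET a = @`a``` + 1;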
-
- 26 Jan, 2013 1 commit
-
-
Venkatesh Duggirala authored
Due to not resetting a member (last_added) of the Deferred events class inside a cleanup function (Deferred_log_events::rewind), there was a memory leak on filtered slaves.
Fix: Reset last_added to NULL in the rewind() function.

sql/rpl_utility.cc:
  Reset last_added to NULL to avoid the memory leak
-
- 24 Jan, 2013 5 commits
-
-
unknown authored
No commit message
-
Venkata Sidagam authored
Backporting the bug patch from 5.5 to 5.1. This fix is applicable to BUG#14362617 as well.
-
Venkata Sidagam authored
CERTAIN LEVEL
Merging from 5.1 to 5.5
-
Venkata Sidagam authored
CERTAIN LEVEL
Problem description: mysqld crashes when we update the max_connections variable to a value lower than the number of currently open connections.
Analysis: The "alarm_queue.max_elements" size is decided at server start time, and it gets modified if we change the max_connections value. In the reported scenario the value of "alarm_queue.max_elements" is decremented when max_connections is set to 2. When updating the "alarm_queue.max_elements" value we were not updating the "max_used_alarms" value. Hence, instead of getting the warning "thr_alarm queue is full", the server ends up asserting when inserting new elements into the queue.
Fix: Dynamically increase the size of the alarm_queue. In order to do that, queue_insert_safe() should be used instead of queue_insert().
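A sketch of the trigger, assuming several client connections are already open:

  SHOW GLOBAL STATUS LIKE 'Threads_connected';  -- e.g. more than 2 open connections
  SET GLOBAL max_connections = 2;               -- previously could assert the server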
-
Venkatesh Duggirala authored
FROM MYSQL_BINLOG_SEND
As part of Bug #11747416 (A DISK FULL MAKES BINARY LOG CORRUPT), the reading of the variable "binlog_can_be_corrupted" was removed. In the existing code the value of this variable is only set, never read, and it also causes compiler warnings. So the variable is completely redundant and should be removed.

sql/sql_repl.cc:
  Removing dead code
-
- 23 Jan, 2013 1 commit
-
-
Yasufumi Kinoshita authored
-