- 24 Sep, 2010 2 commits
-
Dmitry Shulga authored
-
Dmitry Shulga authored
sql/sql_cache.cc: Included the send_data_in_chunks() definition when the EMBEDDED_LIBRARY macro is defined (the helper itself is described under the 16 Sep entry below).
-
- 22 Sep, 2010 2 commits
-
Dmitry Shulga authored
-
Dmitry Shulga authored
sql/log.cc: Modified reopen_fstreams: fixed an error in the handling of stdout/stderr when mysqld runs as a Windows service.
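For orientation, a minimal sketch of what such a stream-reopen helper can look like; my_freopen() is the mysys wrapper around freopen(), and the exact signature in sql/log.cc is an assumption:

    /*
      Hypothetical sketch: re-attach the error-log file to the stdout and
      stderr streams. When mysqld runs as a Windows service there is no
      console, so both streams must be redirected to the log file; failing
      to reopen either one loses log output silently.
    */
    static bool reopen_fstreams(const char *filename,
                                FILE *outstream, FILE *errstream)
    {
      if (outstream && !my_freopen(filename, "a", outstream))
        return TRUE;                 /* could not redirect stdout */
      if (errstream && !my_freopen(filename, "a", errstream))
        return TRUE;                 /* could not redirect stderr */
      return FALSE;                  /* success */
    }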
-
- 21 Sep, 2010 2 commits
-
Ingo Struewing authored
Null-merge from 5.1
-
Ingo Struewing authored
Merge from saved bundle.
-
- 17 Sep, 2010 2 commits
-
Alfranio Correia authored
-
Alfranio Correia authored
-
- 16 Sep, 2010 8 commits
-
Sergey Glukhov authored
-
Sergey Glukhov authored
The subselect executes twice: at the JOIN::optimize stage and at the JOIN::execute stage. At the optimize stage, the InnoDB prebuilt struct, which is used for the retrieval of column values, is initialized in ha_innobase::index_read() while prebuilt->sql_stat_start is true. After QUICK_ROR_INTERSECT_SELECT has finished its job, it restores the read_set/write_set bitmaps to their initial values and deactivates one of the handlers used by QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup (this is the case when we reuse the original handler as one of the handlers required by the QUICK_ROR_INTERSECT_SELECT object). On the second subselect execution, the inactive handler is activated in QUICK_RANGE_SELECT::reset via file->ha_index_init(). In ha_index_init, the InnoDB prebuilt struct is reinitialized with inappropriate read_set/write_set bitmaps. Further reinitialization in ha_innobase::index_read() does not happen, because prebuilt->sql_stat_start is false. This leads to partial retrieval of the required field values, and we get a mix of field values from different records in the record buffer. The fix is to reset the read_set/write_set bitmaps, as these values are required for proper initialization of the internal InnoDB struct used for the retrieval of column values (see build_template() in ha_innodb.cc).

mysql-test/include/index_merge_ror_cpk.inc: test case
mysql-test/r/index_merge_innodb.result: test case
mysql-test/r/index_merge_myisam.result: test case
sql/opt_range.cc: If a ROR merge scan is used, we need to reset the read_set/write_set bitmaps, as these values are required for proper initialization of the internal InnoDB struct used for the retrieval of column values (see build_template() in ha_innodb.cc).
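A condensed, hedged sketch of where that reset plausibly lands; QUICK_RANGE_SELECT keeps its own column_bitmap for ROR-merged scans, and the placement inside reset() is an assumption:

    /*
      Sketch (sql/opt_range.cc): before re-activating the handler for the
      second execution, point the table's read_set/write_set back at the
      bitmap owned by this ROR-merged scan, so that the ha_index_init()
      call below lets build_template() (ha_innodb.cc) see the right columns.
    */
    int QUICK_RANGE_SELECT::reset()
    {
      int error;
      DBUG_ENTER("QUICK_RANGE_SELECT::reset");

      if (in_ror_merged_scan)
        head->column_bitmaps_set_no_signal(&column_bitmap, &column_bitmap);

      if (!file->inited && (error= file->ha_index_init(index, 1)))
        DBUG_RETURN(error);          /* re-activation of the handler failed */

      /* ... the rest of the range-scan setup is unchanged ... */
      DBUG_RETURN(0);
    }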
-
Magne Mahre authored
-
Magne Mahre authored
adding new indexes

A fast alter table requires that the existing (old) table and indices are unchanged (i.e. only new indices can be added). To verify this, the layout and flags of the old table/indices are compared for equality with the new ones. The PACK_KEYS option is a no-op in InnoDB, but the flag exists and is used in the table compare. We need to check this table option flag before deciding whether an index should be packed or not. If the table has explicitly set PACK_KEYS to 0, the created indices should not be marked as packed/packable.
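A hedged, condensed form of the described check; HA_OPTION_NO_PACK_KEYS and HA_PACK_KEY are real flags, but the surrounding per-key-part logic in mysql_prepare_create_table() (sql/sql_table.cc) is richer than shown:

    /*
      Sketch: only mark a key part as packable when the user has not
      explicitly disabled packing with PACK_KEYS=0. Otherwise the flags of
      a freshly created index would differ from those of the existing one,
      and the old/new compare done for fast ALTER TABLE would reject the
      fast path even when only new indexes are being added.
    */
    if (!((*db_options) & HA_OPTION_NO_PACK_KEYS) &&
        (sql_field->sql_type == MYSQL_TYPE_STRING ||
         sql_field->sql_type == MYSQL_TYPE_VARCHAR))
      key_info->flags|= HA_PACK_KEY;   /* index may be stored packed */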
-
Dmitry Shulga authored
-
Dmitry Shulga authored
compression protocol.

The loss of connection was caused by a malformed packet sent by the server when the query cache was in use. When storing data in the query cache, the query cache memory allocation algorithm had a tendency to reduce the number of memory blocks necessary to store a result set, up to finally storing the entire result set in a single block. With a significant result set, this memory block could turn out to be quite large: 30-40 MB and more. When such a result set was sent to the client, the entire memory block was compressed and written to the network as a single network packet. However, the length of a network packet is limited to 0xFFFFFF (16MB), since the packet format only allows 3 bytes for the packet length. As a result, a malformed, overly large packet with a truncated length would be sent to the client and break the client/server protocol.

The solution is, when sending result sets from the query cache, to ensure that the data is chopped into network packets of size <= 16MB, so that the packet length is not corrupted. This solution, however, has a shortcoming: since the result set is still stored in the query cache as a single block, at the time of sending we have lost the boundaries of the individual logical packets (one logical packet = one row of the result set) and can thus end up sending a truncated logical packet in a compressed network packet. As a result, the client may require more memory than max_allowed_packet to keep both the truncated last logical packet and the compressed next packet. This never (or in practice never) happens without compression, since without compression it is very unlikely that a) a truncated logical packet would remain on the client when it is time to read the next packet, and b) a subsequent logical packet being read would be so large that size-of-new-packet + size-of-old-packet-tail > max_allowed_packet. To remedy this issue, we send the data in 1MB-sized packets; that is below the current client default of 16MB for max_allowed_packet, but large enough to avoid unnecessary overhead from too many syscalls per result set.

sql/net_serv.cc: net_realloc() modified: take already used memory into account when comparing packet buffer lengths.
sql/sql_cache.cc: Modified Query_cache::send_result_to_client: send the result to the client in chunks limited to 1 megabyte.
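The chunking helper this introduces (the send_data_in_chunks() mentioned in the 24 Sep entry above) plausibly looks as follows; a sketch assuming net_real_write() is the transport primitive:

    /*
      Sketch of send_data_in_chunks() (sql/sql_cache.cc). The cached result
      set is one contiguous block; slice it into chunks of at most 1MB so
      that no single network packet exceeds the 3-byte length limit
      (0xFFFFFF) even after the compression layer wraps it.
    */
    static bool send_data_in_chunks(NET *net, const uchar *packet, ulong len)
    {
      const ulong MAX_CHUNK_LENGTH= 1024*1024;   /* 1MB, well under 16MB cap */

      while (len > MAX_CHUNK_LENGTH)
      {
        if (net_real_write(net, packet, MAX_CHUNK_LENGTH))
          return TRUE;                           /* write error: abort */
        packet+= MAX_CHUNK_LENGTH;
        len-= MAX_CHUNK_LENGTH;
      }
      if (len && net_real_write(net, packet, len))
        return TRUE;                             /* send the final tail */
      return FALSE;
    }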
-
Mikael Ronstrom authored
-
Mikael Ronstrom authored
-
- 14 Sep, 2010 1 commit
-
Mattias Jonsson authored
-
- 13 Sep, 2010 7 commits
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Martin Hansson authored
-
Martin Hansson authored
ORDER BY computed col

GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an index on the first table in the query is used, and that index satisfies the ordering required by the GROUP BY clause, the query optimizer estimates the number of tuples that need to be read from this index. If there is a LIMIT clause, table statistics on the tables following this 'sort table' are employed. There may be a separate ORDER BY clause, however, which mandates reading the whole 'sort table' anyway. But the previous estimate was left untouched. Fixed by removing the estimate from the EXPLAIN output if GROUP BY is used in conjunction with an ORDER BY clause that mandates using a temporary table.
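How the EXPLAIN change might look in select_describe() (sql/sql_select.cc); the condition name is hypothetical and only illustrates the shape of the fix:

    /*
      Sketch: when GROUP BY is combined with an ORDER BY that forces a
      temporary table, the index-based estimate for the sort table is
      meaningless (the whole table is read anyway), so EXPLAIN prints
      NULL in the rows column instead of a number.
    */
    if (group_and_order_force_tmp_table)      /* hypothetical predicate */
      item_list.push_back(new Item_null());   /* omit the rows estimate */
    else
      item_list.push_back(new Item_int((longlong) examined_rows,
                                       MY_INT64_NUM_DECIMAL_DIGITS));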
-
Gleb Shchepa authored
-
Gleb Shchepa authored
Version "5.1.42 SUSE MySQL RPM" When a query was using a DATE or DATETIME value formatted using different formatting than "yyyy-mm-dd HH:MM:SS", a query with a greater-or-equal '>=' condition matched only greater values in an indexed TIMESTAMP column. The problem was introduced by the fix for the bug 46362 and partially solved (for DATE and DATETIME columns only) by the fix for the bug 47925. The stored_field_cmp_to_item function has been modified to take into account TIMESTAMP columns like we do for DATE and DATETIME columns. mysql-test/r/type_timestamp.result: Test case for bug #55779. mysql-test/t/type_timestamp.test: Test case for bug #55779. sql/item.cc: Bug #55779: select does not work properly in mysql server Version "5.1.42 SUSE MySQL RPM" The stored_field_cmp_to_item function has been modified to take into account TIMESTAMP columns like we do for DATE and DATETIME.
-
- 10 Sep, 2010 5 commits
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Bjorn Munch authored
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 09 Sep, 2010 5 commits
-
Alexey Kopytov authored
to 5.5 (removed one test case as it is no longer valid).

mysql-test/r/select.result: Removed a part of the test case for bug #48291 since it is not valid anymore. The comments for the removed part were actually describing a side effect of the problem addressed by the addendum patch for bug #54190.
mysql-test/t/select.test: Removed a part of the test case for bug #48291 since it is not valid anymore. The comments for the removed part were actually describing a side effect of the problem addressed by the addendum patch for bug #54190.
-
Alexey Kopytov authored
The patch caused some test failures when merged to 5.5 because, unlike 5.1, 5.5 utilizes Item_cache_row to actually cache row values. The problem was that Item_cache_row::bring_value() essentially did nothing. In particular, it did not update its null_value, so all Item_cache_row objects always had their null_value set to TRUE. This went unnoticed previously, but now that Arg_comparator::compare_row() actually depends on the row's null_value to evaluate the comparison, the problem has surfaced. Fixed by calling the underlying item's bring_value() and updating null_value in Item_cache_row::bring_value(). Since the problem also exists in the 5.1 code (albeit hidden, since the relevant code is not used anywhere), the addendum patch is against 5.1.
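A sketch of the corrected method, following the description (Item_cache_row keeps per-column caches in values[]; names per the 5.5 sources):

    /*
      Sketch of the fix: Item_cache_row::bring_value() must pull the value
      from the underlying row item and mirror its null_value, then let each
      column cache fetch its own value. Previously the method did nothing,
      so null_value stayed TRUE for every Item_cache_row object.
    */
    void Item_cache_row::bring_value()
    {
      if (!example)
        return;
      example->bring_value();            /* evaluate the underlying row */
      null_value= example->null_value;   /* propagate the row's NULL-ness */
      for (uint i= 0; i < item_count; i++)
        values[i]->bring_value();        /* refresh each column's cache */
    }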
-
Alexey Kopytov authored
-
Alexey Kopytov authored
result

Row subqueries producing no rows were not handled as UNKNOWN values in row comparison expressions. That was a result of the following two problems:

1. Item_singlerow_subselect did not mark the resulting row value as NULL/UNKNOWN when no rows were produced.
2. Arg_comparator::compare_row() did not take into account that a whole argument may be NULL rather than just individual scalar values.

Before bug #34384 was fixed, the above problems were hidden because an uninitialized (i.e. without any stored value) cached object would appear as NULL for scalar values in a row subquery returning an empty result. After that fix, Arg_comparator::compare_row() would try to evaluate uninitialized cached objects. Fixed by removing the aforementioned problems.

mysql-test/r/row.result: Added a test case for bug #54190.
mysql-test/r/subselect.result: Updated the result for a test relying on the wrong behavior.
mysql-test/t/row.test: Added a test case for bug #54190.
sql/item_cmpfunc.cc: If either of the argument rows is NULL, return NULL as the result of the comparison.
sql/item_subselect.cc: Adjust null_value for Item_singlerow_subselect depending on whether a row has been produced by the row subquery.
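And the consuming side; a sketch of the head of Arg_comparator::compare_row() (sql/item_cmpfunc.cc) with the per-column loop elided:

    /*
      Sketch of the NULL handling added to Arg_comparator::compare_row():
      bring both row arguments' values in first; if either whole row is
      NULL (e.g. a row subquery produced no rows), the comparison result
      is UNKNOWN, signalled through owner->null_value.
    */
    int Arg_comparator::compare_row()
    {
      int res= 0;
      (*a)->bring_value();
      (*b)->bring_value();

      if ((*a)->null_value || (*b)->null_value)
      {
        owner->null_value= 1;            /* whole-row NULL => UNKNOWN */
        return -1;
      }
      /* ... element-by-element comparison continues (elided) ... */
      return res;
    }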
-
Dmitry Shulga authored
The problem was that mysql_stmt_next_result() (new to 5.5) was not properly updated.

libmysql/libmysql.c: mysql_stmt_next_result() modified: set mysql->status= MYSQL_STATUS_STATEMENT_GET_RESULT before returning if there is a result set.
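Sketched from the note above; the surrounding checks in mysql_stmt_next_result() are elided and the exact control flow is an assumption:

    /*
      Sketch (libmysql/libmysql.c): after advancing to the next result, a
      statement handle must enter MYSQL_STATUS_STATEMENT_GET_RESULT when a
      result set is pending, just as mysql_stmt_execute() arranges, so that
      mysql_stmt_store_result()/mysql_stmt_fetch() accept the statement.
    */
    int STDCALL mysql_stmt_next_result(MYSQL_STMT *stmt)
    {
      MYSQL *mysql= stmt->mysql;
      int rc;
      /* ... sanity checks elided ... */
      rc= mysql_next_result(mysql);
      if (rc == 0 && mysql->field_count)
        mysql->status= MYSQL_STATUS_STATEMENT_GET_RESULT;  /* the fix */
      return rc;
    }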
-
- 07 Sep, 2010 6 commits
-
Mattias Jonsson authored
Updated according to reviewers' comments.
-
Martin Hansson authored
-
Martin Hansson authored
The EXISTS transformation has additional switches to catch the known corner cases that appear when transforming an IN predicate into EXISTS. Guarded conditions are used, which are deactivated when a NULL value is seen in the outer expression's row. When the inner query block supplies NULL values, however, they are filtered out, because no distinction is made between the guarded conditions: guarded NOT x IS NULL conditions in the HAVING clause that filter out NULL values cannot be deactivated in isolation from those that match values from the outer expression, or NULLs. The above problem is handled by making the guarded conditions remember whether they have rejected a NULL value or not, and the index access methods take this into account as well.

The bug consisted of:

1. Not resetting the property for every nested-loop iteration over the inner query's result.
2. Not propagating the NULL result properly from the inner query to the IN optimizer.
3. A hack that may or may not have been needed at some point. According to a comment, it was aimed at fixing point 2 by returning NULL when FALSE was actually the result. This caused failures once point 2 was properly fixed. The hack is now removed.

The fix resolves all three points.
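One concrete piece of that machinery, sketched: Item_is_not_null_test and the was_null flag exist in the sources, while this condensed body is an assumption:

    /*
      Sketch: the guarded HAVING condition "NOT x IS NULL" is evaluated
      through an Item_is_not_null_test that reports back to the owning
      Item_in_subselect whenever it rejects a NULL, so the IN optimizer
      can distinguish FALSE (no match) from UNKNOWN (match lost to NULL).
      Per point 1 above, the fix also resets this flag for every
      nested-loop iteration so a NULL seen for one outer row does not
      leak into the verdict for the next.
    */
    longlong Item_is_not_null_test::val_int()
    {
      if (args[0]->is_null())
      {
        owner->was_null|= 1;  /* remember: a NULL was filtered out here */
        return 0;             /* reject the row */
      }
      return 1;
    }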
-
Dmitry Shulga authored
-
Dmitry Shulga authored
multi-table UPDATE IGNORE.

The problem was that if there was an active SELECT statement during trigger execution, an error raised during the execution could cause a crash. The fix is to temporarily reset LEX::current_select before trigger execution and restore it afterwards. This way, errors raised during the trigger execution are processed as if there were no active SELECT.

mysql-test/r/trigger_notembedded.result: added test case result for bug #55421.
mysql-test/t/trigger_notembedded.test: added test case for bug #55421.
sql/sql_trigger.cc: Reset thd->lex->current_select before starting trigger execution and restore its original value after execution has finished. This is necessary in order to set the error status in the diagnostics area in case of trigger execution failure.
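A sketch of the save/restore around trigger invocation in Table_triggers_list::process_triggers() (sql/sql_trigger.cc); the execute_trigger() argument list is abbreviated:

    /*
      Sketch: hide any active SELECT from the error-reporting path while
      the trigger body runs, then restore it, so that an error raised
      inside the trigger sets the diagnostics area as if no SELECT were
      active instead of crashing the server.
    */
    SELECT_LEX *save_current_select= thd->lex->current_select;
    thd->lex->current_select= NULL;

    err_status= sp_trigger->execute_trigger(thd, &trigger_table->s->db,
                                            &trigger_table->s->table_name,
                                            &subject_table_grants);

    thd->lex->current_select= save_current_select;   /* restore */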
-
Martin Hansson authored
-