- 26 Apr, 2007 1 commit
-
-
evgen@moonbone.local authored
DATE and DATETIME can be compared either as strings or as ints; both methods have their disadvantages. A string can hold a valid DATETIME value yet have insignificant zeros omitted, which makes it non-comparable with other DATETIME strings. Comparison as int usually requires conversion from the string representation, and in most cases this automatic conversion is carried out incorrectly, producing a wrong comparison result. Another problem occurs when a DATE field is compared with a DATETIME constant: the constant is converted to DATE and loses its precision, i.e. its time part.

This fix addresses the problems described above by adding a special DATE/DATETIME comparator. The comparator correctly converts DATE/DATETIME string values to int where necessary and adds a zero time part (00:00:00) to DATE values so that they compare correctly with DATETIME values. Thanks to the correct conversion, malformed DATETIME string values are compared correctly with other DATE/DATETIME values. As of this patch a DATE value equals a DATETIME value with a zero time part; for example, '2001-01-01' equals '2001-01-01 00:00:00'.

The compare_datetime() function is added to the Arg_comparator class; it implements the correct comparator for DATE/DATETIME values. Two supplementary functions are added: get_date_from_str() extracts a DATE/DATETIME value from a string, and get_datetime_value() retrieves the correct DATE/DATETIME value from an item. The new Arg_comparator::can_compare_as_dates() function checks whether two given items can be compared by the compare_datetime() comparator. Two caching variables are added to the Arg_comparator class to speed up DATE/DATETIME comparison. One more store() method is added to the Item_cache_int class to cache int values. The new is_datetime() function is added to the Item class; it indicates whether the item returns a DATE/DATETIME value.
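The idea behind the new comparator can be illustrated with a small standalone sketch (hypothetical names, not the actual Arg_comparator/get_date_from_str() code): both values are packed into an integer of the form YYYYMMDDhhmmss, and a plain DATE gets a zero time part, so '2001-01-01' compares equal to '2001-01-01 00:00:00'.

```cpp
// Minimal sketch of DATE/DATETIME comparison via a packed integer.
// All names are illustrative, not taken from the server source.
#include <cstdio>
#include <cassert>

// Convert a DATE or DATETIME string to a packed integer, or 0 on error.
static unsigned long long datetime_str_to_int(const char *s)
{
  unsigned year= 0, month= 0, day= 0, hour= 0, min= 0, sec= 0;
  int n= std::sscanf(s, "%u-%u-%u %u:%u:%u",
                     &year, &month, &day, &hour, &min, &sec);
  if (n < 3)                          // not even a valid DATE
    return 0;
  // A DATE (n == 3) keeps hour/min/sec at 0, i.e. gets a 00:00:00 time part.
  return (unsigned long long) (year * 10000UL + month * 100UL + day) * 1000000ULL
         + hour * 10000UL + min * 100UL + sec;
}

// Returns <0, 0 or >0, like a usual comparator convention.
static int compare_datetime_str(const char *a, const char *b)
{
  unsigned long long va= datetime_str_to_int(a);
  unsigned long long vb= datetime_str_to_int(b);
  return va < vb ? -1 : (va > vb ? 1 : 0);
}

int main()
{
  // A DATE equals a DATETIME with a zero time part.
  assert(compare_datetime_str("2001-01-01", "2001-01-01 00:00:00") == 0);
  assert(compare_datetime_str("2001-01-01", "2001-01-01 10:20:30") < 0);
  std::printf("ok\n");
  return 0;
}
```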
-
- 11 Apr, 2007 1 commit
-
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B19372-5.0-opt
-
- 10 Apr, 2007 5 commits
-
-
gkodinov/kgeorge@magare.gmz authored
Added a test case. The problem itself was fixed by the fix for bug #17379: under certain conditions the optimizer always preferred range or full index scan access methods over lookup access methods, even when the latter were much cheaper.
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27659-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
The optimizer transforms DISTINCT into a GROUP BY when possible. It does that by constructing the same structure (a list of ORDER instances) that the parser builds when parsing GROUP BY, eliminating duplicates along the way. However, when a duplicate was found, the pointer into the ref_pointer_array was not advanced, so the next (and subsequent) ORDER structures pointed at the wrong elements of the SELECT list. Fixed by advancing the pointer into ref_pointer_array even when a duplicate is found.
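A toy illustration of the fix, using stand-in types rather than the server's ORDER/Item structures: the index into ref_pointer_array advances for every SELECT-list element, including duplicates, so each surviving ORDER entry points at the right slot.

```cpp
// Simplified illustration: skip duplicates when building the implicit
// GROUP BY list, but keep advancing the index into ref_pointer_array.
#include <string>
#include <vector>
#include <set>
#include <cstdio>

struct ToyOrder { std::string **item; };        // points into ref_pointer_array

int main()
{
  std::vector<std::string> select_list= {"a", "b", "a", "c"};  // DISTINCT a, b, a, c
  std::vector<std::string*> ref_pointer_array;
  for (auto &s : select_list)
    ref_pointer_array.push_back(&s);

  std::vector<ToyOrder> group_list;
  std::set<std::string> seen;
  for (size_t i= 0; i < select_list.size(); i++)   // i advances even on duplicates
  {
    if (!seen.insert(select_list[i]).second)
      continue;                                    // duplicate: skipped, i still advances
    group_list.push_back(ToyOrder{ &ref_pointer_array[i] });
  }

  for (auto &ord : group_list)
    std::printf("GROUP BY %s\n", (*ord.item)->c_str());   // a, b, c
  return 0;
}
```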
-
gluh@mysql.com/eagle.(none) authored
into mysql.com:/home/gluh/MySQL/Bugs/5.0.27069
-
gluh@mysql.com/eagle.(none) authored
Issue an error in strict mode if an ENUM or SET column has duplicate members in CREATE TABLE.
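A minimal sketch of such a check, with hypothetical names not taken from the server source: scan the member list for duplicates and fail in strict mode (the real check is charset- and collation-aware).

```cpp
// Toy duplicate-member check for an ENUM/SET column definition.
#include <cstdio>
#include <set>
#include <string>
#include <vector>

static bool check_duplicate_members(const std::vector<std::string> &members,
                                    bool strict_mode)
{
  std::set<std::string> seen;
  for (const auto &m : members)
  {
    if (!seen.insert(m).second)
    {
      std::fprintf(stderr, "%s: duplicate member '%s' in ENUM/SET definition\n",
                   strict_mode ? "ERROR" : "Warning", m.c_str());
      if (strict_mode)
        return true;            // abort CREATE TABLE in strict mode
    }
  }
  return false;
}

int main()
{
  // e.g. CREATE TABLE t (c ENUM('a','b','a')) with strict SQL mode enabled.
  bool failed= check_duplicate_members({"a", "b", "a"}, /* strict_mode */ true);
  return failed ? 1 : 0;
}
```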
-
- 09 Apr, 2007 2 commits
-
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
- 08 Apr, 2007 1 commit
-
-
kent@mysql.com/kent-amd64.(none) authored
into mysql.com:/home/kent/bk/tmp4/mysql-5.0-engines
-
- 07 Apr, 2007 4 commits
-
-
evgen@moonbone.local authored
into moonbone.local:/mnt/gentoo64/work/27586-bug-5.0-opt-mysql
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-4.1-opt
-
- 06 Apr, 2007 6 commits
-
-
evgen@moonbone.local authored
into moonbone.local:/mnt/gentoo64/work/27586-bug-5.0-opt-mysql
-
evgen@moonbone.local authored
In NO_AUTO_VALUE_ON_ZERO mode the table->auto_increment_field_not_null variable wasn't reset after reading a row, which could lead to a wrong value being inserted into the auto-increment field of the following row. The table->auto_increment_field_not_null variable is now reset right after a row is written in the read_fixed_length() and read_sep_field() functions. Removed a wrong setting of the table->auto_increment_field_not_null variable in the read_sep_field() function.
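A toy sketch of the intended control flow (illustrative names only, not the actual read_fixed_length()/read_sep_field() code): the per-row flag is cleared right after each row is written, so it cannot leak into the next row.

```cpp
// Toy model: a per-row "explicit auto-increment value" flag that must be
// reset after every written row.
#include <cstdio>

struct ToyTable
{
  bool auto_increment_field_not_null= false;
  long long next_auto_inc= 1;
};

static void write_row(ToyTable &t, long long explicit_value, bool has_explicit)
{
  long long value= (has_explicit && t.auto_increment_field_not_null)
                   ? explicit_value
                   : t.next_auto_inc++;
  std::printf("stored id=%lld\n", value);
}

int main()
{
  ToyTable t;
  // Row 1 supplies an explicit value, row 2 does not.
  struct { bool has_value; long long value; } rows[]= {{true, 100}, {false, 0}};

  for (auto &row : rows)
  {
    if (row.has_value)
      t.auto_increment_field_not_null= true;
    write_row(t, row.value, row.has_value);
    // The fix: reset the flag right after the row is written, so row 2
    // falls back to the generated auto-increment value.
    t.auto_increment_field_not_null= false;
  }
  return 0;
}
```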
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
into xiphis.org:/home/antony/work2/mysql-5.0-engines.merge
-
anozdrin/alik@ibm. authored
-
anozdrin/alik@ibm. authored
-
- 05 Apr, 2007 11 commits
-
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
-
tomas@whalegate.ndb.mysql.com authored
- test case workaround to avoid random failures
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
-
mskold/marty@mysql.com/linux.site authored
-
into dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.0/mysql-5.0-ndb-bj.merge
-
- 04 Apr, 2007 9 commits
-
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
In certain cases AFTER UPDATE/DELETE triggers on NDB tables that referenced the subject table didn't see the results of the operation that caused their invocation. In other words, an AFTER trigger invoked as a result of an update (or deletion) of a particular row saw the version of this row from before the update (or deletion). The problem occurred because in those cases the NDB handler postponed the actual update/delete operations so that it could perform them later as one batch.

This fix solves the problem by disabling this optimization for a particular operation if the subject table has an AFTER trigger defined for that operation. To achieve this, two new flags are introduced for the handler::extra() method: HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH. They are passed when AFTER DELETE/UPDATE triggers exist during a statement that can potentially generate calls to delete_row()/update_row(). This includes multi_delete/multi_update statements as well as INSERT statements that delete/update as part of an ON DUPLICATE KEY UPDATE clause.
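A minimal sketch of how the new flags are meant to be used; the classes below are toy stand-ins and only the flag names come from the patch. Before a statement that may call delete_row()/update_row(), the server tells the handler not to batch those operations when the table has an AFTER DELETE/UPDATE trigger, so the trigger sees the already-applied change.

```cpp
// Toy model of passing "cannot batch" hints to a storage-engine handler.
#include <cstdio>

enum ha_extra_function
{
  HA_EXTRA_DELETE_CANNOT_BATCH,
  HA_EXTRA_UPDATE_CANNOT_BATCH
};

struct ToyHandler
{
  bool batch_deletes= true;
  bool batch_updates= true;

  int extra(ha_extra_function op)
  {
    // An NDB-style handler turns off its "collect and send later" optimization.
    if (op == HA_EXTRA_DELETE_CANNOT_BATCH) batch_deletes= false;
    if (op == HA_EXTRA_UPDATE_CANNOT_BATCH) batch_updates= false;
    return 0;
  }
};

struct ToyTable
{
  ToyHandler *file;
  bool has_after_delete_trigger;
  bool has_after_update_trigger;
};

// Called before executing DELETE/UPDATE (including multi-table variants and
// INSERT ... ON DUPLICATE KEY UPDATE) on the given table.
static void prepare_for_row_changes(ToyTable *table)
{
  if (table->has_after_delete_trigger)
    table->file->extra(HA_EXTRA_DELETE_CANNOT_BATCH);
  if (table->has_after_update_trigger)
    table->file->extra(HA_EXTRA_UPDATE_CANNOT_BATCH);
}

int main()
{
  ToyHandler h;
  ToyTable t{ &h, /* AFTER DELETE trigger */ true, /* AFTER UPDATE trigger */ false };
  prepare_for_row_changes(&t);
  std::printf("batch_deletes=%d batch_updates=%d\n", h.batch_deletes, h.batch_updates);
  return 0;
}
```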
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27513-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
-
tsmith@quadxeon.mysql.com authored
into quadxeon.mysql.com:/benchmarks/ext3/TOSAVE/tsmith/bk/maint/mrg04/50
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27513-5.0-opt
-
igor@olga.mysql.com authored
into olga.mysql.com:/home/igor/mysql-5.0-opt
-
igor@olga.mysql.com authored
-
igor@olga.mysql.com authored
-