- 17 May, 2007 1 commit
-
-
gkodinov/kgeorge@macbook.gmz authored
Conversion errors when constructing the condition for an IN predicate were treated as if the affected column contained NULL. If such an IN predicate appears inside NOT, this yields wrong results. Corrected the handling of conversion errors in an IN predicate that is resolved by unique_subquery (through subselect_uniquesubquery_engine).
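A minimal sketch of the affected query shape, assuming hypothetical tables t1 and t2 rather than the original test case; the value that cannot be converted to the key type is what used to be treated as NULL:

    -- t1's primary key is the unique index the subquery is resolved against.
    CREATE TABLE t1 (a INT PRIMARY KEY);
    INSERT INTO t1 VALUES (1), (2);

    CREATE TABLE t2 (b VARCHAR(16));
    INSERT INTO t2 VALUES ('1'), ('not-a-number');

    -- The subquery is executed through unique_subquery (a lookup on t1.a).
    -- Converting 'not-a-number' to INT fails; before the fix that
    -- conversion error made the IN predicate evaluate as if the column
    -- were NULL, so under NOT the row was wrongly excluded.
    SELECT * FROM t2 WHERE b NOT IN (SELECT a FROM t1);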
-
- 11 Apr, 2007 1 commit
-
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B19372-5.0-opt
-
- 10 Apr, 2007 5 commits
-
-
gkodinov/kgeorge@magare.gmz authored
Added a test case. The problem was fixed by the fix for bug #17379. The problem was that under certain conditions the optimizer always preferred range or full index scan access methods over lookup access methods, even when the latter were much cheaper.
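A hedged illustration with a hypothetical table: before the fix for bug #17379 the optimizer could, under the conditions mentioned, pick a range or full index scan for a query like this even though a ref lookup on the index is much cheaper.

    CREATE TABLE t1 (
      id INT NOT NULL,
      c  INT NOT NULL,
      KEY idx_c (c)
    );
    -- (assume many rows are loaded into t1)

    -- Expected access method: ref lookup on idx_c.  The bug made the
    -- optimizer keep preferring a range/index scan in some cases.
    EXPLAIN SELECT * FROM t1 WHERE c = 10;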
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27659-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
The optimizer transforms DISTINCT into a GROUP BY when possible. It does this by constructing the same structure (a list of ORDER instances) that the parser builds when parsing GROUP BY, eliminating duplicates along the way. However, when a duplicate was found, the pointer into ref_pointer_array was not advanced, so the next (and subsequent) ORDER structures pointed to the wrong elements of the SELECT list. Fixed by advancing the pointer into ref_pointer_array even when a duplicate is found.
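A small sketch of a query that exercises the duplicate elimination, using a hypothetical table; the repeated column in the SELECT list is the duplicate that failed to advance the ref_pointer_array pointer:

    CREATE TABLE t1 (a INT, b INT, c INT);
    INSERT INTO t1 VALUES (1, 10, 100), (1, 10, 100), (2, 20, 200);

    -- DISTINCT is rewritten internally as a GROUP BY over the SELECT list.
    -- The second `a` is a duplicate; before the fix the entries built for
    -- `b` and `c` could point at the wrong SELECT-list elements.
    SELECT DISTINCT a, a, b, c FROM t1;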
-
gluh@mysql.com/eagle.(none) authored
into mysql.com:/home/gluh/MySQL/Bugs/5.0.27069
-
gluh@mysql.com/eagle.(none) authored
Issue an error in strict mode if an ENUM or SET column has duplicate members in CREATE TABLE.
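For example (a minimal sketch; the table and column names are hypothetical):

    SET sql_mode = 'STRICT_ALL_TABLES';

    -- 'a' is listed twice in the ENUM definition; with this fix, strict
    -- mode rejects the statement with an error instead of a warning.
    CREATE TABLE t1 (c ENUM('a', 'b', 'a'));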
-
- 09 Apr, 2007 2 commits
-
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
- 08 Apr, 2007 1 commit
-
-
kent@mysql.com/kent-amd64.(none) authored
into mysql.com:/home/kent/bk/tmp4/mysql-5.0-engines
-
- 07 Apr, 2007 4 commits
-
-
evgen@moonbone.local authored
into moonbone.local:/mnt/gentoo64/work/27586-bug-5.0-opt-mysql
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-5.0-opt
-
holyfoot/hf@mysql.com/hfmain.(none) authored
into mysql.com:/d2/hf/mrg/mysql-4.1-opt
-
- 06 Apr, 2007 6 commits
-
-
evgen@moonbone.local authored
into moonbone.local:/mnt/gentoo64/work/27586-bug-5.0-opt-mysql
-
evgen@moonbone.local authored
NO_AUTO_VALUE_ON_ZERO mode. The table->auto_increment_field_not_null variable wasn't reset after reading a row, which could lead to inserting a wrong value into the auto-increment field of the following row. The table->auto_increment_field_not_null variable is now reset right after a row is written in the read_fixed_length() and read_sep_field() functions. Removed the incorrect setting of table->auto_increment_field_not_null in the read_sep_field() function.
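A hedged sketch of the scenario: read_fixed_length() and read_sep_field() are the LOAD DATA row readers, so the stale flag could affect consecutive rows loaded from a file. The file name and contents below are hypothetical:

    SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';

    CREATE TABLE t1 (
      id INT AUTO_INCREMENT PRIMARY KEY,
      v  INT
    );

    -- Suppose /tmp/data.txt contains the two lines:
    --   0,1
    --   \N,2
    -- Under NO_AUTO_VALUE_ON_ZERO the explicit 0 in the first row must be
    -- stored as 0.  Before the fix the auto_increment_field_not_null flag
    -- set while reading that row was not cleared, so the auto-increment
    -- value chosen for the following row could be wrong.
    LOAD DATA INFILE '/tmp/data.txt' INTO TABLE t1
      FIELDS TERMINATED BY ',' (id, v);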
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
into xiphis.org:/home/antony/work2/mysql-5.0-engines.merge
-
anozdrin/alik@ibm. authored
-
anozdrin/alik@ibm. authored
-
- 05 Apr, 2007 11 commits
-
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
-
tomas@whalegate.ndb.mysql.com authored
- test case workaround to avoid random failures
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
-
mskold/marty@mysql.com/linux.site authored
-
into dev3-240.dev.cn.tlan:/home/justin.he/mysql/mysql-5.0/mysql-5.0-ndb-bj.merge
-
- 04 Apr, 2007 9 commits
-
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
In certain cases, AFTER UPDATE/DELETE triggers on NDB tables that referenced the subject table didn't see the results of the operation that caused their invocation. In other words, an AFTER trigger invoked as the result of an update (or deletion) of a particular row saw the version of that row from before the update (or deletion). The problem occurred because in those cases the NDB handler postponed the actual update/delete operations so it could perform them later as one batch. This fix solves the problem by disabling this optimization for a particular operation if the subject table has an AFTER trigger defined for that operation. To achieve this we introduce two new flags for the handler::extra() method: HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH. The handler is called with these flags when AFTER DELETE/UPDATE triggers exist during a statement that can potentially generate calls to delete_row()/update_row(). This includes multi_delete/multi_update statements as well as INSERT statements that perform a delete/update as part of ON DUPLICATE KEY UPDATE handling.
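A minimal sketch of the trigger shape described above, assuming hypothetical NDBCLUSTER tables; the AFTER UPDATE trigger reads the subject table, which is why the update can no longer be batched (HA_EXTRA_UPDATE_CANNOT_BATCH) when such a trigger exists:

    CREATE TABLE t1  (id INT PRIMARY KEY, val INT) ENGINE=NDBCLUSTER;
    CREATE TABLE log (id INT, val INT)             ENGINE=NDBCLUSTER;

    -- The trigger reads the subject table t1.  Before the fix a postponed
    -- (batched) update meant this SELECT could still see the old row.
    CREATE TRIGGER t1_au AFTER UPDATE ON t1
    FOR EACH ROW
      INSERT INTO log SELECT id, val FROM t1 WHERE id = NEW.id;

    INSERT INTO t1 VALUES (1, 10);
    UPDATE t1 SET val = 20 WHERE id = 1;  -- the trigger should now see val = 20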
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27513-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
-
tsmith@quadxeon.mysql.com authored
into quadxeon.mysql.com:/benchmarks/ext3/TOSAVE/tsmith/bk/maint/mrg04/50
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27513-5.0-opt
-
igor@olga.mysql.com authored
into olga.mysql.com:/home/igor/mysql-5.0-opt
-
igor@olga.mysql.com authored
-
igor@olga.mysql.com authored
-