- 12 Jan, 2009 4 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Davi Arnaut authored
-
Tatiana A. Nurnberg authored
Bounds-checks and blocksize corrections were applied to user input, but constants in the server were trusted implicitly. If these values did not actually meet the requirements, a user could change a variable but then could not set it back to the (wonky) factory default or maximum by explicitly specifying that value (SET <var>=<value> as opposed to SET <var>=DEFAULT). Now the checks also apply to the server's presets: wonky values and maxima get corrected at startup. Consequently, all non-offsetted values the user sees are valid, and users can set the variable to that exact value if they so desire.
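A hedged illustration of the symptom (the variable name, values and blocksize below are hypothetical, not taken from the commit):

    -- Suppose the compiled-in default of some_var is 100, but the bounds check
    -- rounds explicitly set values to a multiple of the blocksize 64.
    SET SESSION some_var = 128;      -- accepted
    SET SESSION some_var = 100;      -- before the fix: silently corrected, so the
                                     -- exact factory default could not be restored
    SET SESSION some_var = DEFAULT;  -- previously the only way back to the preset

After the fix the preset itself is corrected at startup, so any value the server reports can also be set back explicitly.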
-
- 09 Jan, 2009 17 commits
-
-
Tatiana A. Nurnberg authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Tatiana A. Nurnberg authored
-
Georgi Kodinov authored
-
Tatiana A. Nurnberg authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
Post-push fix: the added test uncovered a valgrind warning.
-
Sven Sandberg authored
Adding comments to some of the high-level functions in replication.
-
Georgi Kodinov authored
There is no need to mask the error code returned when fetching the next row as an end-of-file condition when doing filesort.
-
Georgi Kodinov authored
When substituting system constant functions with a constant result, the server was not expecting that the function might return NULL. Fixed by checking for NULL and returning an Item_null (in the relevant collation) if the result of the system constant function was NULL.
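A hedged example of the kind of statement affected, assuming the system constant function involved is one that can legitimately be NULL, such as DATABASE() when no default database has been selected (the query itself is illustrative, not from the commit):

    -- With no default database selected, DATABASE() is NULL. During optimization
    -- it is folded into a constant; before the fix the substitution code did not
    -- expect that constant to be NULL.
    SELECT 1 FROM t1 WHERE DATABASE() = 'test';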
-
Sven Sandberg authored
-
Sven Sandberg authored
-
Davi Arnaut authored
The special TRUNCATE TABLE (DDL) transaction wasn't being properly rolled back if an error occurred during row-by-row deletion. The error can be caused by a foreign key restriction imposed by the InnoDB SE and would cause the server to erroneously issue an implicit commit. The solution is to roll back the transaction if a truncation via row-by-row deletion fails, and to commit otherwise. All effects of a TRUNCATE TABLE operation are rolled back if a row-by-row deletion fails.
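A hedged sketch of the failure scenario (the schema is illustrative, not taken from the commit):

    CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
    CREATE TABLE child (parent_id INT,
                        FOREIGN KEY (parent_id) REFERENCES parent (id)) ENGINE=InnoDB;
    INSERT INTO parent VALUES (1);
    INSERT INTO child VALUES (1);
    -- The truncation falls back to row-by-row deletion and fails on the foreign
    -- key; before the fix the partial deletion was implicitly committed instead
    -- of being rolled back.
    TRUNCATE TABLE parent;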
-
Sven Sandberg authored
Problem: when the server reads a log_event from file, it should read the post-header lengths from the format_description_log_event. Some event types which currently have post-header length 0 did not do this, and instead had a hard-coded zero length for the post-header. That means the current server version will not be able to read future versions of these events. Fix: make the reader functions read the post-header.
-
Horst Hunger authored
-
- 08 Jan, 2009 6 commits
-
-
Davi Arnaut authored
-
Horst Hunger authored
-
Mattias Jonsson authored
-
Tatiana A. Nurnberg authored
Passing a dubious "year zero" in a non-zero date (i.e. not "0000-00-00") could lead to a negative value for the year internally, while the variable holding it was unsigned. This led to Really Bad Things further down the line. Calculations are now done with a signed type for the year internally.
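A hedged example of a statement that exercises such a date (the particular function is illustrative; any calculation that derives a weekday or day number from the year could hit the underflow):

    -- A "year zero" date that is not the all-zero date '0000-00-00':
    SELECT DATE_FORMAT('0000-01-01', '%W %d %M %Y');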
-
Timothy Smith authored
-
Timothy Smith authored
The binlog_innodb test was sensitive to which tests ran before it; it now runs FLUSH STATUS before performing the operations that need to be checked. Separately, sys_var_thd_ulong::update() was improperly casting an option value from ulonglong to ulong before comparing it to the maximum allowed value. On systems where ulong and ulonglong are of different sizes, this caused values greater than ULONG_MAX to wrap around (rather than be truncated to ULONG_MAX, which appears to have been the intention of the original coder) and caused some checks to work incorrectly. This wasn't generally visible to the user, because later checks would prevent the wrapped-around value from being used, but it caused warning messages to differ between 32- and 64-bit platforms. The fix is simply to remove the cast. Also added a DBUG_ASSERT to ensure that the value really is capped properly before finally stuffing it into the ulong.
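A minimal sketch of the test-isolation technique mentioned for binlog_innodb, assuming the counters being checked are the binlog cache status variables (the specific variable shown is illustrative):

    FLUSH STATUS;                            -- reset the status counters first
    -- ... run the statements whose effect on the counters is being verified ...
    SHOW STATUS LIKE 'Binlog_cache_use';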
-
- 07 Jan, 2009 6 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Patrick Crews authored
-
Patrick Crews authored
-
Vladislav Vaintroub authored
-
Davi Arnaut authored
The problem is that INSERT INTO .. SELECT FROM .. and CREATE TABLE .. SELECT FROM a temporary table could inadvertently overwrite the locking type of the temporary table. The lock type of temporary tables should be a write lock by default. The solution is to reset the lock type of temporary tables back to their default value after they are used in a statement.
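A hedged sketch of the statement pattern involved (t1 and tmp are illustrative names, not from the commit):

    CREATE TEMPORARY TABLE tmp (a INT);
    INSERT INTO tmp VALUES (1), (2);
    -- Reading from the temporary table here could overwrite its lock type ...
    INSERT INTO t1 SELECT * FROM tmp;
    -- ... so a later statement writing to it no longer got the default write lock.
    INSERT INTO tmp VALUES (3);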
-
- 05 Jan, 2009 6 commits
-
-
Patrick Crews authored
-
Georgi Kodinov authored
-
Patrick Crews authored
Added function to check for diff and return an error message if the utility is not present. Previously, the way we did this didn't work on Windows, but did work on *Nix systems.
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
case.
-
- 31 Dec, 2008 1 commit
-
-
Gleb Shchepa authored
Execution of queries containing the CASE function with an aggregate function argument, as in "SELECT ... CASE AVG(...) WHEN ...", crashed the server. The CASE function caches pointers to concrete comparison functions for each pair of types of the CASE/WHEN clause parameters, i.e. for the "CASE INT_RESULT WHEN REAL_RESULT THEN ... WHEN DECIMAL_RESULT ... END" function call it caches comparisons for INT_RESULT with REAL_RESULT and for INT_RESULT with DECIMAL_RESULT. Usually a result type is known after a call to the fix_fields function; however, the setup_copy_fields function call may wrap aggregate items with Item_copy_string, which has the STRING_RESULT result type, so setup_copy_fields may change the argument result types of the CASE function after the call to Item_func_case::fix_fields/fix_length_and_dec. The Item_func_case::find_item function then tries to use a comparison function for an unexpected pair of STRING_RESULT and some other type, which caused an assertion failure or a server crash. The Item_func_case::fix_length_and_dec function has been modified to take into account a possible STRING_RESULT result type in the presence of aggregate arguments of the CASE function.
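A hedged reconstruction of the kind of query involved (table and column names are illustrative; the exact query shape is not taken from the commit):

    CREATE TABLE t1 (a INT, b INT);
    INSERT INTO t1 VALUES (1, 1), (2, 1);
    -- AVG(a) may be wrapped into an Item_copy_string (STRING_RESULT) by
    -- setup_copy_fields after fix_length_and_dec has already cached the
    -- comparison functions, which is what triggered the crash.
    SELECT b, CASE AVG(a) WHEN 1.5 THEN 'mid' ELSE 'other' END FROM t1 GROUP BY b;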
-