- 05 Oct, 2010 3 commits
Georgi Kodinov authored
Georgi Kodinov authored
Georgi Kodinov authored
- 20 Aug, 2010 2 commits
Georgi Kodinov authored
Georgi Kodinov authored
- 10 Aug, 2010 1 commit
Georgi Kodinov authored
Updated the README file.
- 02 Aug, 2010 2 commits
Georgi Kodinov authored
Georgi Kodinov authored
- 30 Jul, 2010 1 commit
Davi Arnaut authored
Fix a regression (due to a typo) which caused spurious incorrect argument errors for long data stream parameters if all forms of logging were disabled (binary, general and slow logs).
- 21 Jul, 2010 2 commits
Georgi Kodinov authored
Georgi Kodinov authored
- 15 Jul, 2010 1 commit
Alexey Kopytov authored
Calculating the estimated number of records for a range scan may take a significant time, and it was impossible for a user to interrupt that process by killing the connection or the query. Fixed by checking the thread's 'killed' status in check_quick_keys() and interrupting the calculation process if it is set to a non-zero value.
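To make the fix concrete, here is a minimal sketch of the interruption pattern, assuming a simplified thread descriptor with an atomic killed flag; the actual change polls the thread's status inside check_quick_keys(), and all names below are illustrative, not server code:

```cpp
#include <atomic>
#include <cstdio>

// Hypothetical stand-in for the server's per-connection thread state; only
// the 'killed' flag matters for this sketch.
struct ThreadState {
  std::atomic<int> killed{0};  // set to non-zero by KILL QUERY / KILL CONNECTION
};

// Long-running range estimation loop that polls the kill flag, mirroring the
// fix: abort the calculation as soon as 'killed' becomes non-zero.
long estimate_range_records(ThreadState *thd, long total_keys) {
  long examined = 0;
  for (long key = 0; key < total_keys; ++key) {
    if (thd->killed.load() != 0)  // the user interrupted the query
      return -1;                  // caller treats this as "estimation aborted"
    ++examined;                   // placeholder for the per-key estimation work
  }
  return examined;
}

int main() {
  ThreadState thd;
  std::printf("estimated: %ld\n", estimate_range_records(&thd, 1000));
  return 0;
}
```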
- 07 Jul, 2010 1 commit
Vasil Dimov authored
(without the unrelated whitespace changes):
------------------------------------------------------------------------
r7009 | jyang | 2010-04-29 20:44:56 +0300 (Thu, 29 Apr 2010) | 6 lines
branches/5.0: Port fix for bug #49238 (Creating/Dropping a temporary table while at 1023 transactions will cause assert) from 5.1 to branches/5.1. Separate action for return value DB_TOO_MANY_CONCURRENT_TRXS from that of DB_MUST_GET_MORE_FILE_SPACE in row_drop_table_for_mysql().
------------------------------------------------------------------------
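As an illustration of the last sentence of that changeset, the sketch below separates the handling of the two return codes; the enum names follow the message above, but the handler bodies are invented for this example and are not InnoDB source:

```cpp
#include <cstdio>

// Simplified error codes named after the InnoDB return values mentioned above.
enum db_err {
  DB_SUCCESS,
  DB_TOO_MANY_CONCURRENT_TRXS,
  DB_MUST_GET_MORE_FILE_SPACE
};

// Illustrative only: the point of the fix is that the two codes take separate
// branches instead of sharing a single error path.
void handle_drop_table_error(db_err err) {
  switch (err) {
    case DB_TOO_MANY_CONCURRENT_TRXS:
      // Too many open transactions: report it and let the caller retry later.
      std::puts("too many concurrent transactions; retry the DROP later");
      break;
    case DB_MUST_GET_MORE_FILE_SPACE:
      // Running out of file space is a much more serious condition.
      std::puts("out of file space; cannot continue");
      break;
    default:
      break;
  }
}

int main() {
  handle_drop_table_error(DB_TOO_MANY_CONCURRENT_TRXS);
  return 0;
}
```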
- 02 Jul, 2010 3 commits
Georgi Kodinov authored
Georgi Kodinov authored
Georgi Kodinov authored
- 28 Jun, 2010 1 commit
Davi Arnaut authored
The problem was that a user could supply data in chunks via the COM_STMT_SEND_LONG_DATA command to a prepared statement parameter other than of type TEXT or BLOB. This posed a problem since other parameter types aren't set up to handle long data, which would lead to a crash when attempting to use the supplied data. Given that long data can be supplied at any stage of a prepared statement, coupled with the fact that the type of a parameter marker might change between consecutive executions, the solution is to validate at execution time each parameter marker for which a data stream was provided. If the parameter type is not TEXT or BLOB (that is, if the type is not able to handle a data stream), an error is returned.
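A small sketch of that execution-time validation, using a simplified parameter-marker structure; the enum and field names are assumptions for this example, not the server's data structures:

```cpp
#include <cstdio>
#include <vector>

// Simplified parameter marker; the real server tracks far more state.
enum ParamType { PARAM_INT, PARAM_DOUBLE, PARAM_TEXT, PARAM_BLOB };

struct ParamMarker {
  ParamType type;
  bool long_data_supplied;  // data arrived via COM_STMT_SEND_LONG_DATA
};

// Execution-time check described above: every marker that received a data
// stream must be of a type that can actually hold one (TEXT or BLOB).
bool validate_long_data_params(const std::vector<ParamMarker> &params) {
  for (const ParamMarker &p : params) {
    if (p.long_data_supplied && p.type != PARAM_TEXT && p.type != PARAM_BLOB)
      return false;  // the real fix reports an error instead of crashing
  }
  return true;
}

int main() {
  std::vector<ParamMarker> params = {{PARAM_INT, true}, {PARAM_TEXT, true}};
  std::printf("parameters valid: %d\n", validate_long_data_params(params));
  return 0;
}
```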
- 21 Jun, 2010 2 commits
Georgi Kodinov authored
Georgi Kodinov authored
- 17 Jun, 2010 1 commit
Joerg Bruehe authored
line exceeds the limit: The number and/or names of our files for the main test suite (contents of "mysql-test/t/") now exceed the command line length limit on AIX. Solve the problem by using separate "cp" commands for the various file name extensions.
- 15 Jun, 2010 1 commit
Bjorn Munch authored
Reorder code breaks when finding tests skipped due to --skip-rpl etc. Add a simple test that master_opt is non-empty.
- 14 Jun, 2010 1 commit
Bjorn Munch authored
Kill mysqltest and call mtr_kill_leftovers() before terminating
- 10 Jun, 2010 1 commit
Davi Arnaut authored
Addendum: Work around a compilation failure on Windows due to windows.h not being added to the global namespace.
- 08 Jun, 2010 3 commits
Davi Arnaut authored
Davi Arnaut authored
The problem was that the bundled yaSSL library was being built without thread safety support regardless of the thread safety of the components linked with it. The solution is to enable yaSSL thread safety support if any component (server or client) is to be built with thread support. Also, generate new certificates for yaSSL's test suite.
Sergey Glukhov authored
The problem is in the Item_func_isnull::update_used_tables() function: a bracket is in the wrong place. Because of that, the ISNULL item is erroneously treated as a const item. The fix is to put the bracket in the right place.
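The toy functions below are not the server source; they only illustrate how a misplaced bracket in a boolean expression of this kind can flip a const-ness decision, which is the class of bug described above:

```cpp
#include <cstdio>

// Wrong grouping: the '!' covers the whole expression, so the result is true
// (i.e. "constant") even though the argument depends on a table.
bool is_const_buggy(unsigned long arg_used_tables, bool other_cond) {
  return !(arg_used_tables && other_cond);
}

// Intended grouping: the item can only be constant when its argument uses no
// tables at all.
bool is_const_fixed(unsigned long arg_used_tables, bool other_cond) {
  return !arg_used_tables && other_cond;
}

int main() {
  // The argument references one table (bitmap 0x2) and other_cond is false.
  std::printf("buggy: %d, fixed: %d\n",
              is_const_buggy(0x2, false), is_const_fixed(0x2, false));
  return 0;
}
```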
- 07 Jun, 2010 1 commit
Georgi Kodinov authored
when an out-of-supported-range date is detected.
- 04 Jun, 2010 1 commit
Georgi Kodinov authored
Some of the server implementations don't support dates later than 2038 due to the internal time type being 32-bit. Added checks so that the server will refuse dates it cannot handle: either by throwing an error when such a date is set at runtime, or by refusing to start or by shutting down the server if the system date cannot be stored in my_time_t.
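A minimal sketch of such a range check, assuming the limiting factor is a 32-bit time value; the exact boundary and the error handling in the server may differ, so treat the cutoffs below as illustrative:

```cpp
#include <cstdint>
#include <cstdio>
#include <ctime>

// Dates representable in a signed 32-bit time value:
// 1970-01-01 00:00:00 .. 2038-01-19 03:14:07 UTC.
bool timestamp_fits_32bit(std::time_t t) {
  return t >= 0 && t <= INT32_MAX;
}

int main() {
  std::time_t now = std::time(nullptr);
  if (!timestamp_fits_32bit(now)) {
    // In the spirit of the fix: refuse such dates instead of mishandling them
    // (error at runtime, refuse to start, or shut down the server).
    std::puts("system date is outside the supported range");
    return 1;
  }
  std::puts("system date is within the supported range");
  return 0;
}
```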
- 02 Jun, 2010 1 commit
Georgi Kodinov authored
- 01 Jun, 2010 1 commit
Georgi Kodinov authored
- 25 May, 2010 3 commits
Ramil Kalimullin authored
Ramil Kalimullin authored
Problem: a user with the SELECT privilege on some table could dump another table by issuing the COM_TABLE_DUMP command, due to a missing check of the table name. Fix: check the table name.
Davi Arnaut authored
This fixes a recently introduced regression, where a variable is not defined for the embedded server. Although the embedded server is not supported in 5.0, make it at least compile.
- 19 May, 2010 1 commit
joerg.bruehe@sun.com authored
- 11 May, 2010 1 commit
Martin Hansson authored
MySQL handles the join syntax "JOIN ... USING( field1, ... )" and natural joins by building the same parse tree as a corresponding join with an "ON t1.field1 = t2.field1 ..." expression would produce. This parse tree was not cleaned up properly in the following scenario. If a thread tries to lock some tables and finds that the tables were dropped and re-created while waiting for the lock, it cleans up column references in the statement by means of a per-statement free list. But if the statement was part of a stored procedure, column references on the stored procedure's free list weren't cleaned up and thus contained pointers to freed objects. Fixed by adding a call to clean up the current prepared statement's free list. This is a backport from MySQL 5.1.
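The sketch below is a toy model, not the server's Item/Statement classes, of the per-statement free-list idea: when the tables go away, cleanup has to run over the currently executing statement's list as well, which is the call the fix adds:

```cpp
#include <cstdio>
#include <vector>

// Toy item: 'resolved_field' may point into a table that has been closed.
struct Item {
  const void *resolved_field = nullptr;
  void cleanup() { resolved_field = nullptr; }  // drop the stale reference
};

// Each statement owns a list of the items allocated while parsing it.
struct Statement {
  std::vector<Item *> free_list;
};

void cleanup_items(Statement &stmt) {
  for (Item *item : stmt.free_list) item->cleanup();
}

int main() {
  Item top_item, sp_item;
  Statement top_level, current_sp_statement;
  top_level.free_list = {&top_item};
  current_sp_statement.free_list = {&sp_item};

  cleanup_items(top_level);
  // The additional cleanup call described in the message above: without it,
  // items owned by the stored procedure's statement keep dangling pointers.
  cleanup_items(current_sp_statement);
  std::puts("free lists cleaned");
  return 0;
}
```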
- 06 May, 2010 1 commit
Martin Hansson authored
greedy_search optimizer_search_depth=0: The algorithm inside restore_prev_nj_state failed to properly update the counters within the NESTED_JOIN tree. The counter was decremented each time a table in the node was removed from the QEP, whereas the correct behaviour is to decrement it only when the last table in the child node is removed from the plan. This led to node counters getting negative values, and the plan thus appeared impossible. An assertion caught this. Fixed by not recursing up the tree unless the last table in the join nest node is removed from the plan.
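A toy version of the corrected bookkeeping (the struct and function below are illustrative, not the server's NESTED_JOIN code): the nest counter is decremented for the removed table, and the walk only continues to the parent nest once the child nest has emptied:

```cpp
#include <cstdio>

// Toy nested-join node: 'counter' tracks how many of its children (tables or
// sub-nests) are currently part of the plan.
struct NestedJoin {
  NestedJoin *parent = nullptr;
  int counter = 0;
};

// Called when one table belonging to 'nest' is removed from the plan.
void restore_prev_nj_state(NestedJoin *nest) {
  while (nest != nullptr) {
    --nest->counter;
    if (nest->counter > 0)
      break;              // the nest still has children in the plan: stop here
    nest = nest->parent;  // last child left the plan: the nest itself leaves too
  }
}

int main() {
  NestedJoin outer, inner;
  inner.parent = &outer;
  outer.counter = 2;  // the inner nest plus one other table
  inner.counter = 2;  // two tables inside the inner nest

  restore_prev_nj_state(&inner);  // remove one table of the inner nest
  std::printf("outer=%d inner=%d\n", outer.counter, inner.counter);  // outer=2 inner=1
  return 0;
}
```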
- 05 May, 2010 3 commits
Sunanda Menon authored
revno: 2861
committer: Georgi Kodinov <joro@sun.com>
branch nick: B53371-5.0-bugteam
timestamp: Mon 2010-05-03 18:16:51 +0300
message: Bug #53371: COM_FIELD_LIST can be abused to bypass table level grants.
The server was not checking the table name supplied to COM_FIELD_LIST for validity and compliance with acceptable table name standards. Fixed by checking the table name for compliance, similar to how it's normally checked by the parser, and returning an error message if it's not compliant.
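As an illustration of that kind of check, the function below validates a client-supplied table name before it is used; the concrete rules are simplified stand-ins (the server reuses the identifier rules its parser applies), so treat them as assumptions:

```cpp
#include <cstdio>
#include <cstring>

// Simplified validity check for a table name coming straight from the client.
bool table_name_is_acceptable(const char *name) {
  const size_t len = std::strlen(name);
  if (len == 0 || len > 64) return false;   // empty or over-long name
  if (name[len - 1] == ' ') return false;   // trailing space is rejected
  for (size_t i = 0; i < len; ++i) {
    const char c = name[i];
    if (c == '/' || c == '\\' || c == '.')  // path-like characters
      return false;
  }
  return true;
}

int main() {
  std::printf("\"t1\": %d  \"../mysql/user\": %d\n",
              table_name_is_acceptable("t1"),
              table_name_is_acceptable("../mysql/user"));
  return 0;
}
```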
Georgi Kodinov authored
Georgi Kodinov authored
- 03 May, 2010 1 commit
Georgi Kodinov authored
The server was not checking the table name supplied to COM_FIELD_LIST for validity and compliance with acceptable table name standards. Fixed by checking the table name for compliance, similar to how it's normally checked by the parser, and returning an error message if it's not compliant.