- 18 Oct, 2012 4 commits
-
-
Neeraj Bisht authored
Problem: A query that has a subquery with GROUP BY, ORDER BY and a BLOB column results in a memory leak.
Analysis: For a subquery that has GROUP BY on a BLOB and ORDER BY on another field, where the BLOB is not a key, we allocate a tmp buffer to copy_field to take care of the BLOB value. This copy_field value can end up with copies of itself in two JOIN objects, so while freeing this copy_field we have to take care that it is not deleted twice. The double deletion of tmp_table_param.copy_field was handled by two patches: one by Kostja (revid:sp1r-konstantin@mysql.com-20050627101056-55153, "Fix the broken test suite in -debug build") and one by Oleksandr (revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905, "Excluded possibility of tmp_table_param.copy_field double deletion (BUG#14851)"). Both patches were committed in different branches and both got applied during the merge, but the Kostja patch is not needed, as the Oleksandr patch handles this.
sql/sql_select.cc: Bug#13726751; the tmp_join clean-up is not necessary, as later in the code we take care of cleaning up the tmp_join copy_field.
-
Neeraj Bisht authored
Problem: A query that has a subquery with GROUP BY, ORDER BY and a BLOB column results in a memory leak.
Analysis: For a subquery that has GROUP BY on a BLOB and ORDER BY on another field, where the BLOB is not a key, we allocate a tmp buffer to copy_field to take care of the BLOB value. This copy_field value can end up with copies of itself in two JOIN objects, so while freeing this copy_field we have to take care that it is not deleted twice. The double deletion of tmp_table_param.copy_field was handled by two patches: one by Kostja (revid:sp1r-konstantin@mysql.com-20050627101056-55153, "Fix the broken test suite in -debug build") and one by Oleksandr (revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905, "Excluded possibility of tmp_table_param.copy_field double deletion (BUG#14851)"). Both patches were committed in different branches and both got applied during the merge, but the Kostja patch is not needed, as the Oleksandr patch handles this.
sql/sql_select.cc: Bug#13726751; the tmp_join clean-up is not necessary, as later in the code we take care of cleaning up the tmp_join copy_field.
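To make the double-deletion hazard concrete, here is a minimal C++ sketch of the guard this kind of fix relies on; Tmp_table_param, Copy_field and the "owner" calls below are simplified stand-ins for the sql_select.cc structures, not the actual MySQL definitions.

    #include <cstddef>

    struct Copy_field {};            // stand-in for the real Copy_field

    struct Tmp_table_param {
      Copy_field *copy_field;        // buffer allocated for BLOB handling
      Tmp_table_param() : copy_field(NULL) {}

      // Free the buffer exactly once; resetting the pointer makes a
      // second cleanup() call harmless instead of a double free.
      void cleanup() {
        delete [] copy_field;
        copy_field = NULL;
      }
    };

    int main() {
      Tmp_table_param shared;
      shared.copy_field = new Copy_field[2];
      // Two JOIN objects can end up referencing the same tmp_table_param;
      // both "owners" may try to clean it up.
      shared.cleanup();   // first owner frees the buffer
      shared.cleanup();   // second owner: a no-op thanks to the NULL reset
      return 0;
    }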
-
Marko Mäkelä authored
-
Marko Mäkelä authored
LEN <= SIZEOF(ULONGLONG)
This bug was caught by WL#6255 ALTER TABLE...ADD COLUMN in MySQL 5.6, but the bug exists in all InnoDB versions that support auto-increment columns.
row_search_autoinc_read_column(): When reading the maximum value of the auto-increment column and the column contains only NULL values, return 0. This corresponds to the case when the table is empty in row_search_max_autoinc().
rb:1415 approved by Sunny Bains
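A hedged sketch of the intended semantics: when every row has NULL in the auto-increment column, the maximum read back should be 0, the same value an empty table yields. The types and helper below are illustrative stand-ins, not the actual row0sel code.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Each element models the auto-increment column of one row; is_null marks SQL NULL.
    struct ColumnValue {
      bool is_null;
      uint64_t value;
    };

    // Return the maximum non-NULL value, or 0 when the column contains only
    // NULLs -- mirroring the empty-table case in row_search_max_autoinc().
    static uint64_t read_max_autoinc(const std::vector<ColumnValue>& rows) {
      uint64_t max_value = 0;
      for (size_t i = 0; i < rows.size(); i++) {
        if (!rows[i].is_null && rows[i].value > max_value)
          max_value = rows[i].value;
      }
      return max_value;
    }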
-
- 17 Oct, 2012 9 commits
-
-
-
Removing ....; will re-add using merge. For some reason the initial mysql-5.1 version is not connected to mysql-5.5.
-
-
Tatjana Azundris Nuernberg authored
-
Tatjana Azundris Nuernberg authored
The mysqld_safe script did not heed the MySQL-specific environment variable $UMASK, leading to divergent behavior between mysqld and mysqld_safe. The patch adds an approximation of mysqld's behavior to mysqld_safe, within the bounds dictated by the attempt to have mysqld_safe run on even the most basic of shells (a proper '70s sh, not just bash with a fancy symlink). The patch also adds an approximation of said behavior to mysqld_multi (in Perl). (backport)
-
Yasufumi Kinoshita authored
rb://1334 approved by: Inaam Rana
-
Tatjana Azundris Nuernberg authored
The mysqld_safe script did not heed the MySQL-specific environment variable $UMASK, leading to divergent behavior between mysqld and mysqld_safe. The patch adds an approximation of mysqld's behavior to mysqld_safe, within the bounds dictated by the attempt to have mysqld_safe run on even the most basic of shells (a proper '70s sh, not just bash with a fancy symlink). The patch also adds an approximation of said behavior to mysqld_multi (in Perl). (backport; manual merge)
-
Yasufumi Kinoshita authored
rb://1334 approved by: Inaam Rana
-
Tatjana Azundris Nuernberg authored
The mysqld_safe script did not heed the MySQL-specific environment variable $UMASK, leading to divergent behavior between mysqld and mysqld_safe. The patch adds an approximation of mysqld's behavior to mysqld_safe, within the bounds dictated by the attempt to have mysqld_safe run on even the most basic of shells (a proper '70s sh, not just bash with a fancy symlink). The patch also adds an approximation of said behavior to mysqld_multi (in Perl).
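A minimal C++ sketch of the kind of normalization being approximated, under the assumption that $UMASK is interpreted as an octal string and the resulting file mode always keeps owner read/write; the real logic lives in the shell and Perl scripts (and on the server side in mysys), so this is only an illustration, and the fallback default is an assumption.

    #include <cstdio>
    #include <cstdlib>

    // Parse a $UMASK-style value as octal and make sure the owner keeps
    // read/write access on created files.
    static unsigned long normalized_file_mode(const char *umask_env) {
      unsigned long mode = 0640;               // fallback default (assumption)
      if (umask_env != NULL && *umask_env != '\0')
        mode = strtoul(umask_env, NULL, 8);    // always interpret as octal
      mode |= 0600;                            // never lock the owner out
      return mode & 0666;                      // no execute bits on data files
    }

    int main() {
      printf("%lo\n", normalized_file_mode(getenv("UMASK")));
      return 0;
    }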
-
- 16 Oct, 2012 5 commits
-
-
Neeraj Bisht authored
Problem: Using last_insert_id() on an auto-incremented BIGINT UNSIGNED column does not work for values greater than the maximum signed BIGINT.
Analysis: last_insert_id() returns the first auto-incremented value for a column, and an auto-incremented value can only be positive. In our code, when initializing the last_insert_id object, we treat it as a signed BIGINT, so when the auto-incremented value exceeds the maximum signed BIGINT, last_insert_id() gives a negative result.
Solution: When fetching the value from last_insert_id(), set the unsigned_flag so that it is treated as an unsigned BIGINT value.
sql/item_func.cc: Here the unsigned value was being converted to a signed value.
sql/item_func.h: last_insert_id() yields an auto-incremented value, which can only be positive, so it is defined as an unsigned longlong and sets the unsigned_flag to 1.
-
Neeraj Bisht authored
Problem: Using last_insert_id() on an auto-incremented BIGINT UNSIGNED column does not work for values greater than the maximum signed BIGINT.
Analysis: last_insert_id() returns the first auto-incremented value for a column, and an auto-incremented value can only be positive. In our code, when initializing the last_insert_id object, we treat it as a signed BIGINT, so when the auto-incremented value exceeds the maximum signed BIGINT, last_insert_id() gives a negative result.
Solution: When fetching the value from last_insert_id(), set the unsigned_flag so that it is treated as an unsigned BIGINT value.
sql/item_func.cc: Here the unsigned value was being converted to a signed value.
sql/item_func.h: last_insert_id() yields an auto-incremented value, which can only be positive, so it is defined as an unsigned longlong and sets the unsigned_flag to 1.
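A simplified sketch of the signed/unsigned distinction the fix is about; Item_last_insert_id below is a hypothetical stand-in, not the real Item_func_last_insert_id from item_func.h.

    #include <cstdint>
    #include <cstdio>

    struct Item_last_insert_id {
      bool unsigned_flag;
      uint64_t stored;              // auto-increment values are never negative
      explicit Item_last_insert_id(uint64_t v) : unsigned_flag(true), stored(v) {}

      // Returning through a signed type without unsigned_flag is what made
      // values above the signed BIGINT maximum show up as negative numbers.
      int64_t val_int() const { return static_cast<int64_t>(stored); }
      uint64_t val_uint() const { return stored; }
    };

    int main() {
      Item_last_insert_id id(9223372036854775808ULL);    // 2^63, above signed max
      printf("signed view:   %lld\n", (long long) id.val_int());
      printf("unsigned view: %llu\n", (unsigned long long) id.val_uint());
      return 0;
    }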
-
unknown authored
No commit message
-
Marko Mäkelä authored
-
Marko Mäkelä authored
REAL DUPLICATE VALUE FOR PREFIX KEYS
innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos() instead of dict_index_get_nth_col_pos() to find the column.
-
- 15 Oct, 2012 3 commits
-
-
Removed warning messages, as they have changed in mysql-5.6 and mysql-trunk; this is left over from changes that were up-merged.
-
bug#14704286 SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE LOOKUPS (honoring kill query while accessing sec_index)
If a secondary index is being used for SELECT query evaluation and the query is operating with a consistent read snapshot, it can take a long time for the secondary index to return control to mysql, as MVCC kicks in. If the user issues "kill query <id>" while the query is actively accessing the secondary index, it is not honored, because there is no hook to check for this condition. Added a hook for this check.
-----
In parallel, the case of a secondary index taking too long to evaluate under a consistent read snapshot is being examined for performance improvement (WL#6540).
-
bug#14704286 SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE LOOKUPS (honoring kill query while accessing sec_index)
If a secondary index is being used for SELECT query evaluation and the query is operating with a consistent read snapshot, it can take a long time for the secondary index to return control to mysql, as MVCC kicks in. If the user issues "kill query <id>" while the query is actively accessing the secondary index, it is not honored, because there is no hook to check for this condition. Added a hook for this check.
-----
In parallel, the case of a secondary index taking too long to evaluate under a consistent read snapshot is being examined for performance improvement (WL#6540).
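A hedged sketch of the shape of such a hook: a periodic "was this query killed?" check inside a long-running index scan. The names and the loop below are simplified stand-ins for the InnoDB row-search path, not the actual implementation.

    #include <atomic>
    #include <cstdint>

    static std::atomic<bool> query_killed(false);   // set by KILL QUERY <id>

    // Stand-in for the server-supplied interruption check.
    static bool trx_is_interrupted_stub() { return query_killed.load(); }

    enum ScanResult { SCAN_DONE, SCAN_INTERRUPTED };

    // Long secondary-index scan under a consistent read snapshot: without an
    // in-loop check, a kill request is only noticed after the scan finishes.
    static ScanResult scan_secondary_index(uint64_t n_entries) {
      for (uint64_t i = 0; i < n_entries; i++) {
        if ((i % 1024) == 0 && trx_is_interrupted_stub())
          return SCAN_INTERRUPTED;                  // honor KILL QUERY promptly
        // ... visit entry i, apply MVCC visibility checks ...
      }
      return SCAN_DONE;
    }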
-
- 12 Oct, 2012 2 commits
- 10 Oct, 2012 2 commits
-
-
Vasil Dimov authored
exist in 5.5).
-
Vasil Dimov authored
os/os0file.c:1332: error: ISO C90 forbids mixed declarations and code
-
- 09 Oct, 2012 6 commits
-
-
Vasil Dimov authored
-
Vasil Dimov authored
although the bug does not exist in 5.1/innodb.
-
Vasil Dimov authored
-
Vasil Dimov authored
The problem is in the error handling in row_create_table_for_mysql(). In the 'disk full' case we may forget to call dict_mem_table_free() on the table object. Approved by: Marko (rb:1377 and rb:1386)
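A minimal sketch of the error-path ownership pattern described above (free the in-memory table object on every failure exit); the types and function names are stand-ins, not the actual InnoDB code.

    #include <string>

    struct dict_table { std::string name; };

    static dict_table *dict_mem_table_create_stub(const char *name) {
      dict_table *t = new dict_table;
      t->name = name;
      return t;
    }

    static void dict_mem_table_free_stub(dict_table *t) { delete t; }

    enum db_err { DB_SUCCESS, DB_OUT_OF_FILE_SPACE };

    // Simulate the on-disk part of table creation failing with "disk full".
    static db_err write_table_definition(dict_table *) { return DB_OUT_OF_FILE_SPACE; }

    db_err create_table_for_mysql(const char *name) {
      dict_table *table = dict_mem_table_create_stub(name);
      db_err err = write_table_definition(table);
      if (err != DB_SUCCESS) {
        dict_mem_table_free_stub(table);   // the cleanup that must not be forgotten
        return err;
      }
      // On success, ownership of 'table' passes to the data dictionary cache.
      return DB_SUCCESS;
    }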
-
Harin Vadodaria authored
PRIVILEGES
Description: The (user, host) pair from the security context is used for privilege checking at the time of granting or revoking proxy privileges. This creates a problem when the server is started with the --skip-name-resolve option, because host will not contain any value. Checks should depend on consistent values regardless of how the server was started. Further, the privilege check should use the (priv_user, priv_host) pair rather than values obtained from the inbound connection, because this pair represents the correct account context obtained from the mysql.user table.
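A simplified sketch of the account-context point: key the check on the authenticated (priv_user, priv_host) pair rather than on connection-derived values, which can be empty under --skip-name-resolve. Security_context here is a hypothetical stand-in, not the server's class.

    #include <string>

    struct Security_context {
      std::string user, host;            // from the inbound connection (may be empty)
      std::string priv_user, priv_host;  // account matched in mysql.user
    };

    // Decide whether the current session may act as the given grantor account,
    // keyed on the resolved account pair instead of connection-derived values.
    bool matches_grantor_account(const Security_context &ctx,
                                 const std::string &acl_user,
                                 const std::string &acl_host) {
      return ctx.priv_user == acl_user && ctx.priv_host == acl_host;
    }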
-
Annamalai Gurusami authored
-
- 08 Oct, 2012 4 commits
-
-
Praveenkumar Hulakund authored
FAILS TO READ EVENT TABLE AT STARTUP. This issue is fixed in 5.5+ versions. This patch adds a test case for this scenario.
-
Annamalai Gurusami authored
CONSISTENT SNAPSHOT OPTION
A transaction is started with a consistent snapshot. After the transaction is started, new indexes are added to the table. Now when we issue an UPDATE statement, the optimizer chooses an index. When the index scan is being initialized via ha_innobase::change_active_index(), InnoDB reports the error code HA_ERR_TABLE_DEF_CHANGED, with a message stating "insufficient history for index". This error is propagated up to the SQL layer, but the my_error() API is never called, so the statement-level diagnostics area is not updated with the correct error status (it remains in Diagnostics_area::DA_EMPTY). Hence the following check in Protocol::end_statement() fails:

516   case Diagnostics_area::DA_EMPTY:
517   default:
518     DBUG_ASSERT(0);
519     error= send_ok(thd->server_status, 0, 0, 0, NULL);
520     break;

The fix is to backport the fixes for bugs 14365043, 11761652 and 11746399:
14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
rb://1227 approved by guilhem and mattiasj.
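A hedged sketch of the missing step those backported fixes address: when index initialization fails, the handler error must be reported so the statement's diagnostics area is no longer DA_EMPTY when the protocol layer finishes the statement. All names and the error value below are illustrative stand-ins, not the server API.

    #include <cstdio>

    static const int HA_ERR_TABLE_DEF_CHANGED_STUB = 159;  // illustrative value

    // Stand-in for ha_index_init(): returns a handler error on failure.
    static int ha_index_init_stub(bool table_def_changed) {
      return table_def_changed ? HA_ERR_TABLE_DEF_CHANGED_STUB : 0;
    }

    // Stand-in for my_error(): records an error in the per-statement diagnostics
    // area so end-of-statement code finds a non-empty status.
    static bool diagnostics_has_error = false;
    static void report_handler_error(int err) {
      diagnostics_has_error = true;
      fprintf(stderr, "handler error %d\n", err);
    }

    static int init_index_scan(bool table_def_changed) {
      int err = ha_index_init_stub(table_def_changed);
      if (err != 0) {               // previously ignored: the DA stayed DA_EMPTY
        report_handler_error(err);
        return err;
      }
      return 0;
    }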
-
Marko Mäkelä authored
Also, add debug check for trx_id sanity to row_upd_rec_sys_fields().
-
Marko Mäkelä authored
We did not allocate enough bits for index->trx_id_offset, causing an UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes to corrupt the PRIMARY KEY.
dict_index_t: Allocate enough bits.
dict_index_build_internal_clust(): Check for overflow of index->trx_id_offset; trip a debug assertion when overflow occurs.
rb:1380 approved by Jimmy Yang
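A minimal sketch of the bit-field overflow being guarded against; the field width and the assertion below are illustrative, not the actual dict_index_t layout.

    #include <cassert>

    // If trx_id_offset lives in a narrow bit-field, offsets produced by a very
    // long PRIMARY KEY can exceed what the field can represent.
    struct index_meta {
      unsigned trx_id_offset:12;     // example width; values above 4095 overflow
    };

    void set_trx_id_offset(index_meta *index, unsigned long offset) {
      const unsigned long max_representable = (1UL << 12) - 1;
      // Debug build: trip an assertion instead of silently truncating, which is
      // what corrupted PRIMARY KEYs whose offset exceeded the field width.
      assert(offset <= max_representable);
      index->trx_id_offset = (unsigned) offset;
    }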
-
- 04 Oct, 2012 1 commit
-
-
Jon Olav Hauglid authored
When an SP handler is activated, memory is allocated to hold the MESSAGE_TEXT for the condition that caused the activation. The problem was that this memory was allocated on the MEM_ROOT belonging to the stored program. Since this MEM_ROOT is not freed until the stored program ends, a stored program that causes lots of handler activations can start using lots of memory. In 5.1 and earlier the problem did not exist, as no MESSAGE_TEXT was allocated if a condition was raised with a handler present. However, this behavior led to a number of other issues, such as Bug#23032. This patch fixes the problem by allocating enough memory for the necessary MESSAGE_TEXTs in the SP MEM_ROOT when the SP starts, and then re-using this memory each time a handler is activated. This is the 5.5 version of the patch.
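A simplified sketch of the allocation strategy described above: reserve the condition-text buffers once when the stored program starts and reuse them on every handler activation, instead of allocating fresh memory from the long-lived MEM_ROOT each time. The classes are stand-ins, not the real stored-program runtime machinery.

    #include <cstddef>
    #include <string>
    #include <vector>

    struct HandlerConditionSlot {
      std::string message_text;      // reused buffer; capacity kept across activations
    };

    struct StoredProgramRuntime {
      std::vector<HandlerConditionSlot> slots;

      // Called once at SP start: one slot per declared handler.
      explicit StoredProgramRuntime(size_t handler_count) : slots(handler_count) {}

      // Called on each handler activation: overwrite in place instead of
      // allocating new memory from a root that lives until the SP ends.
      void activate_handler(size_t idx, const char *message) {
        slots[idx].message_text.assign(message);
      }
    };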
-
- 03 Oct, 2012 2 commits
-
-
Tor Didriksen authored
This bug depends on the cmake version. For cmake 2.6 (which is still in use for some pushbuild trees) the main build would succeed even if create_initial_db failed. The problem was the chaining of commands in the custom command that produces 'initdb.dep': it first invokes cmake to run mysqld, then invokes 'touch' to create the file. Moving the 'touch' command makes the error propagate properly for both cmake 2.6 and 2.8.
-
Jon Olav Hauglid authored
Follow-up patch - Fix broken build: error: format ‘%u’ expects argument of type ‘unsigned int’, but argument 2 has type ‘key_part_map {aka long unsigned int}’ [-Werror=format]
-
- 01 Oct, 2012 2 commits
-
-
Tor Didriksen authored
n_child_sum_items kept increasing. Since it is used for calculating the size of ref_pointer_array, we would allocate larger and larger chunks of memory until we hit some operating-system limit. The memory is free()d at disconnect, but is most likely *not* returned to the operating system.
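A small sketch of the growth pattern being described (a counter that accumulates across executions, so an allocation sized from it grows on every statement on the same connection); the names and the reset mechanism are stand-ins, not the actual server fields or fix.

    #include <cstddef>
    #include <vector>

    struct StatementContext {
      size_t n_child_sum_items;
      std::vector<void*> ref_pointer_array;
      StatementContext() : n_child_sum_items(0) {}

      void prepare(size_t items_in_this_query, bool reset_counter) {
        if (reset_counter)
          n_child_sum_items = 0;     // start each statement from zero
        n_child_sum_items += items_in_this_query;
        // Allocation sized from the counter: without the reset it keeps
        // growing for the lifetime of the connection.
        ref_pointer_array.resize(n_child_sum_items * 5);
      }
    };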
-
Joerg Bruehe authored
-