- 06 Oct, 2008 1 commit
-
-
vasil authored
Add entry for Bug#39830 Table autoinc value not updated on first insert in the ChangeLog.
-
- 05 Oct, 2008 1 commit
-
-
sunny authored
is > 1. This is safer and easier to understand.
-
- 04 Oct, 2008 4 commits
-
-
sunny authored
so that it conforms to InnoDB's internal error/return code type.
-
sunny authored
non-determinism related to reading the table's autoinc value for the first time. This change also reduces sizeof(dict_table_t) by sizeof(ibool), because the dict_table_t::autoinc_inited field is no longer needed. This also fixes Bug#39830 Table autoinc value not updated on first insert. rb://16
-
sunny authored
the table's global autoinc counter value. This is part of simplifying the AUTOINC sub-system. We extract the type info from MySQL data structures at runtime. This fixes Bug#37788 InnoDB Plugin: AUTO_INCREMENT wrong for compressed tables
-
sunny authored
------------------------------------------------------------------------
r2702 | sunny | 2008-09-30 11:41:56 +0300 (Tue, 30 Sep 2008) | 13 lines
Changed paths:
  M /branches/5.1/handler/ha_innodb.cc

branches/5.1: Since handler::get_auto_increment() doesn't allow us to return the cause of failure, we have to inform MySQL of the cause of the autoinc failure using the sql_print_warning() function. Previously we simply printed the error code; this patch prints the text string representing the following two error codes: DB_LOCK_WAIT_TIMEOUT and DB_DEADLOCK.

Bug#35498 Cannot get table test/table1 auto-inccounter value in ::info

Approved by Marko.
------------------------------------------------------------------------
r2709 | vasil | 2008-10-01 10:13:13 +0300 (Wed, 01 Oct 2008) | 10 lines
Changed paths:
  M /branches/5.1/include/lock0lock.h
  M /branches/5.1/lock/lock0lock.c
  A /branches/5.1/mysql-test/innodb_bug38231.result
  A /branches/5.1/mysql-test/innodb_bug38231.test
  M /branches/5.1/row/row0mysql.c

branches/5.1: Fix Bug#38231 Innodb crash in lock_reset_all_on_table() on TRUNCATE + LOCK / UNLOCK

In TRUNCATE TABLE and discard tablespace: do not remove table-level S and X locks, and do not assert that such locks are not wait locks. Leave such locks alone.

Approved by: Heikki (rb://14)
------------------------------------------------------------------------
r2710 | vasil | 2008-10-01 14:13:58 +0300 (Wed, 01 Oct 2008) | 6 lines
Changed paths:
  M /branches/5.1/include/sync0sync.ic

branches/5.1: Silence a compilation warning in UNIV_DEBUG.

Approved by: Marko (via IM)
------------------------------------------------------------------------
r2719 | vasil | 2008-10-03 18:17:28 +0300 (Fri, 03 Oct 2008) | 49 lines
Changed paths:
  M /branches/5.1/handler/ha_innodb.cc
  A /branches/5.1/mysql-test/innodb_bug39438-master.opt
  A /branches/5.1/mysql-test/innodb_bug39438.result
  A /branches/5.1/mysql-test/innodb_bug39438.test

branches/5.1: Fix Bug#39438 Testcase for Bug#39436 crashes on 5.1 in fil_space_get_latch

In ha_innobase::info(): do not try to get the free space for a tablespace which has been discarded with ALTER TABLE ... DISCARD TABLESPACE or whose .ibd file is missing for some other reason. ibd_file_missing and tablespace_discarded are manipulated only in row_discard_tablespace_for_mysql() and row_import_tablespace_for_mysql(), and the manipulation is protected by row_mysql_lock_data_dictionary()/row_mysql_unlock_data_dictionary(); we therefore do the same in ha_innobase::info() when checking the values of those members, to avoid race conditions.

I have tested the code path with UNIV_DEBUG and UNIV_SYNC_DEBUG.

It does not appear possible to avoid mysqld printing warnings in the mysql-test case, so the test innodb_bug39438 must be added to the list of exceptional test cases that are allowed to print warnings. For this, the following patch must be applied to the mysql source tree:

--- cut ---
=== modified file 'mysql-test/lib/mtr_report.pl'
--- mysql-test/lib/mtr_report.pl        2008-08-12 10:26:23 +0000
+++ mysql-test/lib/mtr_report.pl        2008-10-01 11:57:41 +0000
@@ -412,7 +412,10 @@

           # When trying to set lower_case_table_names = 2
           # on a case sensitive file system. Bug#37402.
-          /lower_case_table_names was set to 2, even though your the file system '.*' is case sensitive. Now setting lower_case_table_names to 0 to avoid future problems./
+          /lower_case_table_names was set to 2, even though your the file system '.*' is case sensitive. Now setting lower_case_table_names to 0 to avoid future problems./ or
+
+          # this test is expected to print warnings
+          ($testname eq 'main.innodb_bug39438')
          )
         {
           next;                       # Skip these lines
--- cut ---

The mysql-test is currently somewhat disabled (see inside innodb_bug39438.test); after the above patch has been applied to the mysql source tree, the test can be enabled.

rb://20

Reviewed by: Inaam, Calvin
Approved by: Heikki
------------------------------------------------------------------------
r2720 | vasil | 2008-10-03 19:52:39 +0300 (Fri, 03 Oct 2008) | 8 lines
Changed paths:
  M /branches/5.1/handler/ha_innodb.cc

branches/5.1: Print a warning if an attempt is made to get the free space for a table whose .ibd file is missing or whose tablespace has been discarded. This is a followup to r2719.

Suggested by: Inaam
------------------------------------------------------------------------
r2721 | sunny | 2008-10-04 02:08:23 +0300 (Sat, 04 Oct 2008) | 6 lines
Changed paths:
  M /branches/5.1/handler/ha_innodb.cc

branches/5.1: We need to send the messages to the client because handler::get_auto_increment() doesn't provide a way to return the specific reason for a failure.

rb://18
------------------------------------------------------------------------
r2722 | sunny | 2008-10-04 02:48:04 +0300 (Sat, 04 Oct 2008) | 18 lines
Changed paths:
  M /branches/5.1/dict/dict0mem.c
  M /branches/5.1/handler/ha_innodb.cc
  M /branches/5.1/include/dict0mem.h
  M /branches/5.1/include/row0mysql.h
  M /branches/5.1/mysql-test/innodb-autoinc.result
  M /branches/5.1/mysql-test/innodb-autoinc.test
  M /branches/5.1/row/row0mysql.c

branches/5.1: This bug has always existed but was masked by other errors; the fix for Bug#38839 triggered it. When the offset and increment are > 1, we need to calculate the next value taking both variables into consideration. Previously we simply assumed they were 1, and the offset in particular was never used. MySQL does its own calculation, which is probably why it seemed to work in the past: we would return what we thought was the correct next value, and MySQL would then recalculate the actual value from it and return that to the caller (e.g., handler::write_row()).

Several new tests have been added that try to catch some edge cases. The tests exposed a wrap-around error in MySQL's next-value calculation, which was filed as Bug#39828. The tests will need to be updated once MySQL fixes that bug.

One good side effect of this fix is that the size of dict_table_t has been reduced by 8 bytes, because the autoinc_increment field has been moved to the row_prebuilt_t structure. See ReviewBoard for a detailed discussion. rb://3
------------------------------------------------------------------------
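To make the offset/increment arithmetic described in r2722 concrete, here is a small hedged illustration (plain C, not InnoDB's actual code; the function name and the sample values are made up for the example): the next AUTO_INCREMENT value is the smallest value of the form offset + N * increment that is greater than the current counter.

#include <stdio.h>

typedef unsigned long long ulonglong;

/* Illustrative sketch only: compute the next AUTO_INCREMENT value when
auto_increment_increment and auto_increment_offset may both be > 1. */
static ulonglong
next_autoinc_value(ulonglong current, ulonglong increment, ulonglong offset)
{
	ulonglong	n;

	if (current < offset) {
		return(offset);
	}

	/* Round up to the next member of the progression offset + N * increment
	that lies strictly above the current counter value. */
	n = (current - offset) / increment + 1;

	return(offset + n * increment);
}

int
main(void)
{
	/* With increment = 5, offset = 3 the sequence is 3, 8, 13, 18, ... */
	printf("%llu\n", next_autoinc_value(10, 5, 3));	/* prints 13 */
	return(0);
}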
-
- 03 Oct, 2008 3 commits
-
-
vasil authored
ChangeLog: Use "Fix Bug#NNNNN bug summary text" for bugfixes, as for other entries in the file.
-
marko authored
more generic. The previous pattern could fail if other test cases were run before this one. Since r2716, the MySQL server is not restarted for this test.
-
marko authored
(Bug #36285, rb://9).
innodb-index.test, innodb-index.result: Set innodb_lock_wait_timeout as a session variable instead of relying on the global value.
innodb-index-master.opt: Remove.
innodb-timeout.test: Test that setting innodb_lock_wait_timeout works as advertised.
thd_lock_wait_timeout(): New function, to retrieve the lock wait timeout for a given MySQL client connection (thd), or the global value (thd == NULL).
srv_lock_wait_timeout, innobase_lock_wait_timeout: Remove.
Replace MYSQL_SYSVAR_LONG(lock_wait_timeout) with MYSQL_THDVAR_ULONG(lock_wait_timeout).
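As a rough sketch of the MYSQL_THDVAR_ULONG pattern mentioned above (a plugin-source fragment, not standalone, and not necessarily the exact ha_innodb.cc code; the default and bounds shown are assumptions for the sketch):

#include <mysql/plugin.h>

/* Per-connection variable, visible as innodb_lock_wait_timeout. */
static MYSQL_THDVAR_ULONG(lock_wait_timeout, PLUGIN_VAR_RQCMDARG,
	"Timeout in seconds an InnoDB transaction may wait for a lock "
	"before being rolled back.",
	NULL, NULL, 50, 1, 1024 * 1024 * 1024, 0);

/* Fetch the per-session value, or the global value when thd == NULL
(per <mysql/plugin.h>, passing NULL to THDVAR() returns the global value). */
static unsigned long
thd_lock_wait_timeout(MYSQL_THD thd)
{
	return(THDVAR(thd, lock_wait_timeout));
}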
-
- 01 Oct, 2008 3 commits
-
-
marko authored
should be space left in the modification log of the compressed page. Record deletion does not require any space in the modification log.
-
marko authored
dump the data structures. This was forgotten in r2698.
-
marko authored
------------------------------------------------------------------------
r2702 | sunny | 2008-09-30 11:41:56 +0300 (Tue, 30 Sep 2008) | 13 lines

branches/5.1: Since handler::get_auto_increment() doesn't allow us to return the cause of failure, we have to inform MySQL of the cause of the autoinc failure using the sql_print_warning() function. Previously we simply printed the error code; this patch prints the text string representing the following two error codes: DB_LOCK_WAIT_TIMEOUT and DB_DEADLOCK.

Bug#35498 Cannot get table test/table1 auto-inccounter value in ::info

Approved by Marko.
------------------------------------------------------------------------
rb://18
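A hedged, self-contained sketch of the idea in r2702 — mapping the two InnoDB error codes to readable strings before reporting them. The enum values and the reporting helper below are stand-ins for the example; the real handler code calls sql_print_warning() with the resulting text.

#include <stdio.h>

/* Simplified stand-ins for InnoDB's internal error codes. */
enum db_err {
	DB_SUCCESS = 0,
	DB_DEADLOCK,
	DB_LOCK_WAIT_TIMEOUT
};

/* In the server this would go through sql_print_warning();
here we simply write to stderr. */
static void
report_autoinc_failure(enum db_err err)
{
	const char*	msg;

	switch (err) {
	case DB_LOCK_WAIT_TIMEOUT:
		msg = "DB_LOCK_WAIT_TIMEOUT";
		break;
	case DB_DEADLOCK:
		msg = "DB_DEADLOCK";
		break;
	default:
		/* Fall back to the bare numeric code, as the old code did. */
		fprintf(stderr, "InnoDB: AUTOINC error %d\n", (int) err);
		return;
	}

	fprintf(stderr, "InnoDB: Cannot read autoinc counter value: %s\n", msg);
}

int
main(void)
{
	report_autoinc_failure(DB_LOCK_WAIT_TIMEOUT);
	return(0);
}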
-
- 30 Sep, 2008 1 commit
-
-
vasil authored
Change the patch to fix the failing mysql-test index_merge_innodb. The previous variant is inappropriate because myisam results are different (2 instead of 4) and then the index_merge_myisam test fails.
-
- 29 Sep, 2008 3 commits
-
-
marko authored
-
marko authored
page_zip_hexdump_func(): New function, to dump a block of data. ut_print_buf() would dump everything on a single line, which is hard to read.
page_zip_hexdump(): Wrapper macro for page_zip_hexdump_func().
page_zip_validate(): Dump page_zip, page_zip->data, page, and temp_page if !valid.
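For illustration, a minimal hexdump helper in the spirit of page_zip_hexdump_func() might look as follows (a sketch only, not the actual InnoDB function; the name and signature are made up for the example):

#include <stdio.h>
#include <stddef.h>

/* Dump a buffer in hex, 16 bytes per line, which is easier to read
than a single-line dump. */
static void
hexdump(FILE* f, const char* name, const void* buf, size_t size)
{
	const unsigned char*	b = (const unsigned char*) buf;
	size_t			i;

	fprintf(f, "%s (%lu bytes):\n", name, (unsigned long) size);

	for (i = 0; i < size; i++) {
		fprintf(f, "%02x%s", b[i],
			((i + 1) % 16 == 0 || i + 1 == size) ? "\n" : " ");
	}
}

int
main(void)
{
	static const unsigned char	page[40] = {0, 1, 2, 3, 4, 5};

	hexdump(stdout, "page header", page, sizeof(page));
	return(0);
}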
-
marko authored
in r2631. Include the node pointer field in the size calculation.
rec_get_converted_size_comp_prefix(): New function, to compute the storage size of the prefix of an ordinary record in COMPACT format.
rec_get_converted_size_comp(): Use rec_get_converted_size_comp_prefix().
-
- 26 Sep, 2008 4 commits
-
-
vasil authored
Add a patch to fix the failing mysql-test index_merge_innodb. The test started failing after an optimization, made in r2625, which results in a different number of rows being returned by EXPLAIN.
-
vasil authored
Fix typos in mysql-test/patches/README.
-
marko authored
ut_a() assertions into ut_ad() assertions.
-
calvin authored
The following files are taken from the MySQL source tree without any changes. They will later be modified for building the Windows plugin; the original files will serve as the base for diffing.
* CMakeLists.txt
* sql/CMakeLists.txt
* win/configure.js
-
- 25 Sep, 2008 5 commits
-
-
marko authored
page_zip_copy_recs(): Rename from page_zip_copy(). Update the function comment.
-
marko authored
-
marko authored
-
marko authored
page_copy_rec_list_end(), page_copy_rec_list_start() and friends do not copy it either.
-
marko authored
fields that are related to the records stored in the page. page_zip_copy() is a fall-back method in certain B-tree operations (tree compression, splitting or merging nodes). The contents of a page may fit in the compressed page frame when it has been modified in a certain sequence, but not when the page is recompressed. Sometimes, copying all or part of the records to an empty page could fail because of compression overflow. In such cases, we copy the compressed and uncompressed pages bit for bit and delete any unwanted records from the copy. (Deletion is guaranteed to succeed.) The method page_zip_copy() is invoked very rarely. In one case, page_zip_copy() was called in btr_lift_page_up() to move the records to the root page of the B-tree. Because page_zip_copy() copied all B-tree page header fields, it overwrote the file segment header fields PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP. This is the probable cause of the corruption that was reported as Mantis issue #63 and others.
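A conceptual sketch of the key point of this fix: when copying a page wholesale, the destination page's own file segment header bytes (the analogue of PAGE_BTR_SEG_LEAF / PAGE_BTR_SEG_TOP) must be preserved rather than taken from the source page. The offsets, lengths, and helper name below are illustrative assumptions, not InnoDB's actual page layout.

#include <string.h>
#include <stdio.h>

#define PAGE_SIZE	16384	/* illustrative page size */
#define SEG_HDR_START	  100	/* illustrative offset of the segment headers */
#define SEG_HDR_LEN	   20	/* illustrative combined length of the headers */

/* Copy the whole source page over the destination page, then restore the
destination's own file segment header bytes, which describe the destination's
file segments and must not be overwritten with the source page's values. */
static void
copy_page_preserving_seg_headers(unsigned char* dst, const unsigned char* src)
{
	unsigned char	saved[SEG_HDR_LEN];

	memcpy(saved, dst + SEG_HDR_START, SEG_HDR_LEN);
	memcpy(dst, src, PAGE_SIZE);
	memcpy(dst + SEG_HDR_START, saved, SEG_HDR_LEN);
}

int
main(void)
{
	static unsigned char	src[PAGE_SIZE];
	static unsigned char	dst[PAGE_SIZE];

	memset(src, 0xaa, sizeof(src));
	memset(dst, 0x55, sizeof(dst));

	copy_page_preserving_seg_headers(dst, src);

	/* The copy is 0xaa everywhere except the preserved header bytes. */
	printf("%02x %02x\n", dst[0], dst[SEG_HDR_START]);	/* aa 55 */
	return(0);
}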
-
- 24 Sep, 2008 5 commits
-
-
marko authored
add some ut_a() assertions to track down file space header corruption that is the probable cause of Mantis issue #63.
-
marko authored
Return earlier when this function is called on an index that is being created. Luckily, mtr_start() does not allocate any resources. Thus, there was no memory leak.
-
marko authored
-
marko authored
be merged to a leaf page.
-
marko authored
B-tree, because there are no file segment headers in the insert buffer B-tree root page. The function was introduced in r2627.
-
- 22 Sep, 2008 4 commits
- 19 Sep, 2008 1 commit
-
-
calvin authored
The memory leak was caused by wrong parameters being passed to the VirtualFree() call, which therefore failed with Windows error 87. MEM_DECOMMIT cannot be combined with MEM_RELEASE, and with MEM_RELEASE the size parameter must be 0; the function then frees the entire region reserved by the initial VirtualAlloc() call. This issue was introduced by r984. Approved by: Heikki (on IM)
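A minimal Windows example of the corrected call pattern (a standalone sketch, not the InnoDB code; the documented VirtualFree() contract is that MEM_RELEASE requires dwSize == 0 and must not be combined with MEM_DECOMMIT):

#include <windows.h>
#include <stdio.h>

int
main(void)
{
	SIZE_T	size = 4 * 1024 * 1024;
	LPVOID	base = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
				    PAGE_READWRITE);

	if (base == NULL) {
		fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
		return(1);
	}

	/* Wrong: combining MEM_DECOMMIT with MEM_RELEASE, or passing a nonzero
	size with MEM_RELEASE, fails with ERROR_INVALID_PARAMETER (87):
	VirtualFree(base, size, MEM_DECOMMIT | MEM_RELEASE); */

	/* Correct: release the whole reservation; the size must be 0. */
	if (!VirtualFree(base, 0, MEM_RELEASE)) {
		fprintf(stderr, "VirtualFree failed: %lu\n", GetLastError());
		return(1);
	}

	return(0);
}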
-
- 18 Sep, 2008 5 commits
-
-
marko authored
ha_thd() whenever possible.
EQ_CURRENT_THD(thd): New predicate, for use in assertions.
innobase_drop_database(): Tolerate current_thd == NULL, so that the Windows plugin will work. In the Windows plugin, it will be impossible to skip foreign key checks in this function. However, DROP DATABASE will drop each table (that MySQL knows about) individually before calling this function. Thus, the foreign key checks can be disabled also in the Windows plugin, unless some .frm files are missing.
-
marko authored
the maximum record size will never exceed the B-tree page size limit. For uncompressed tables, there should always be enough space for two records in an empty B-tree page. For compressed tables, there should be enough space for storing two node pointer records or one data record in an empty page in uncompressed format.
dict_build_table_def_step(): Remove the inaccurate check for table row size.
dict_index_too_big_for_tree(): New function: check if the index records would be too big for a B-tree page.
dict_index_add_to_cache(): Add the parameter "strict". Invoke dict_index_too_big_for_tree() if it is set.
trx_is_strict(), thd_is_strict(): New functions, for determining if innodb_strict_mode is enabled for the current transaction.
dict_create_index_step(): Pass the new parameter strict of dict_index_add_to_cache() as trx_is_strict(trx). All other callers pass it as FALSE.
innodb.test: Enable innodb_strict_mode before attempting to create a table with a too big record size.
innodb-zip.test: Remove the test of inserting random data. Add tests for checking that the maximum record lengths are enforced at table creation time.
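To illustrate the kind of limit dict_index_too_big_for_tree() enforces, here is a hedged sketch: the page size, overhead constant, and helper name are assumptions for the example, not InnoDB's exact figures. The idea is that a record must be small enough that two maximum-size records fit on an empty B-tree page, otherwise page splits could not make progress.

#include <stdio.h>

#define PAGE_SIZE		16384	/* illustrative uncompressed page size */
#define PAGE_EMPTY_OVERHEAD	  128	/* illustrative fixed per-page overhead */

/* Return nonzero if a record of max_rec_size bytes would be too big,
i.e. two such records would not fit on an empty page. */
static int
rec_too_big_for_tree(unsigned long max_rec_size)
{
	unsigned long	free_space_empty = PAGE_SIZE - PAGE_EMPTY_OVERHEAD;

	return(max_rec_size > free_space_empty / 2);
}

int
main(void)
{
	printf("%d\n", rec_too_big_for_tree(9000));	/* 1: too big */
	printf("%d\n", rec_too_big_for_tree(4000));	/* 0: fits */
	return(0);
}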
-
marko authored
-
marko authored
-
marko authored
general public, and Mantis is for our internal use only. Thanks to Vasil for pointing this out.
-