- 01 Oct, 2008 2 commits
-
-
marko authored
dump the data structures. This was forgotten in r2698.
-
marko authored
------------------------------------------------------------------------
r2702 | sunny | 2008-09-30 11:41:56 +0300 (Tue, 30 Sep 2008) | 13 lines

branches/5.1: Since handler::get_auto_increment() doesn't allow us to return the cause of failure, we have to inform MySQL using the sql_print_warning() function to return the cause for autoinc failure. Previously we simply printed the error code; this patch prints the text string representing the following two error codes:

DB_LOCK_WAIT_TIMEOUT
DB_DEADLOCK

Bug#35498 Cannot get table test/table1 auto-inccounter value in ::info

Approved by Marko.
------------------------------------------------------------------------

rb://18
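A minimal sketch of the pattern described above, with stand-in definitions (only the DB_* names and sql_print_warning() come from the actual code; everything else here is hypothetical):

    #include <stdio.h>

    /* Stand-ins for this sketch: the real DB_* error codes and
    sql_print_warning() come from InnoDB and the MySQL server. */
    enum db_err { DB_LOCK_WAIT_TIMEOUT = 1, DB_DEADLOCK = 2 };
    #define sql_print_warning(...) fprintf(stderr, __VA_ARGS__)

    /* handler::get_auto_increment() cannot return an error cause, so
    log a readable warning instead of a bare error number. */
    static void
    report_autoinc_error(enum db_err err, const char* table)
    {
        const char* cause
            = (err == DB_LOCK_WAIT_TIMEOUT) ? "Lock wait timeout"
            : (err == DB_DEADLOCK) ? "Deadlock"
            : "Unknown error";

        sql_print_warning("InnoDB: Cannot get table %s auto-increment"
                          " counter value: %s\n", table, cause);
    }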
-
- 30 Sep, 2008 1 commit
-
-
vasil authored
Change the patch to fix the failing mysql-test index_merge_innodb. The previous variant is inappropriate because the MyISAM results differ (2 instead of 4), which then makes the index_merge_myisam test fail.
-
- 29 Sep, 2008 3 commits
-
-
marko authored
-
marko authored
page_zip_hexdump_func(): New function, to dump a block of data. ut_print_buf() would dump everything on a single line, which is hard to read.

page_zip_hexdump(): Wrapper macro for page_zip_hexdump_func().

page_zip_validate(): Dump page_zip, page_zip->data, page, and temp_page if !valid.
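A self-contained sketch of a multi-line hex dump of the kind described (the real page_zip_hexdump_func() interface and output format may differ):

    #include <stdio.h>
    #include <stddef.h>

    /* Dump a buffer 16 bytes per line, unlike a single-line dump
    such as ut_print_buf(). */
    static void
    hexdump(const void* buf, size_t size)
    {
        const unsigned char* b = (const unsigned char*) buf;
        size_t i;

        for (i = 0; i < size; i++) {
            printf("%02x%c", b[i],
                   (i % 16 == 15 || i + 1 == size) ? '\n' : ' ');
        }
    }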
-
marko authored
in r2631. Include the node pointer field in the size calculation.

rec_get_converted_size_comp_prefix(): New function, to compute the storage size of the prefix of an ordinary record in COMPACT format.

rec_get_converted_size_comp(): Use rec_get_converted_size_comp_prefix().
-
- 26 Sep, 2008 4 commits
-
-
vasil authored
Add a patch to fix the failing mysql-test index_merge_innodb. The test started failing after an optimization made in r2625, which results in a different number of rows being returned by EXPLAIN.
-
vasil authored
Fix typos in mysql-test/patches/README.
-
marko authored
ut_a() assertions into ut_ad() assertions.
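For context, ut_a() is checked in all builds, while ut_ad() is compiled in only under UNIV_DEBUG, so this change removes the cost of the checks from release builds. A simplified model of the two macros (the real definitions in ut0dbg.h are more elaborate):

    #include <assert.h>

    /* Simplified stand-ins: ut_a() always checks; ut_ad() checks
    only in UNIV_DEBUG builds. */
    #define ut_a(cond)   assert(cond)
    #ifdef UNIV_DEBUG
    # define ut_ad(cond) assert(cond)
    #else
    # define ut_ad(cond) ((void) 0)
    #endif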
-
calvin authored
The following files are from the MySQL source tree, without any changes. They will be changed for building the Windows plugin; the original files will be used as the base for diffing purposes.

* CMakeLists.txt
* sql/CMakeLists.txt
* win/configure.js
-
- 25 Sep, 2008 5 commits
-
-
marko authored
page_zip_copy_recs(): Rename from page_zip_copy(). Update the function comment.
-
marko authored
-
marko authored
-
marko authored
page_copy_rec_list_end(), page_copy_rec_list_start() and friends do not copy it either.
-
marko authored
fields that are related to the records stored in the page.

page_zip_copy() is a fall-back method in certain B-tree operations (tree compression, splitting or merging nodes). The contents of a page may fit in the compressed page frame when it has been modified in a certain sequence, but not when the page is recompressed. Sometimes, copying all or part of the records to an empty page could fail because of compression overflow. In such cases, we copy the compressed and uncompressed pages bit for bit and delete any unwanted records from the copy. (Deletion is guaranteed to succeed.) The method page_zip_copy() is invoked very rarely.

In one case, page_zip_copy() was called in btr_lift_page_up() to move the records to the root page of the B-tree. Because page_zip_copy() copied all B-tree page header fields, it overwrote the file segment header fields PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP. This is the probable cause of the corruption that was reported as Mantis issue #63 and others.
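In outline, the fall-back path reads roughly as follows (a sketch with hypothetical helper names and sizes, not the actual page_zip_copy() code):

    #include <string.h>

    enum { PAGE_SIZE = 16384, ZIP_SIZE = 8192 };

    /* Hypothetical helpers standing in for real InnoDB routines. */
    int  copy_and_recompress(unsigned char* dst, const unsigned char* src);
    void delete_unwanted_records(unsigned char* page, unsigned char* page_zip);

    /* Sketch: try the normal record-wise copy first; if recompression
    overflows, copy both page frames bit for bit and then delete the
    records that do not belong in the copy (deletion from a compressed
    page is guaranteed to succeed). */
    static void
    copy_records_with_fallback(unsigned char* dst_page,
                               unsigned char* dst_zip,
                               const unsigned char* src_page,
                               const unsigned char* src_zip)
    {
        if (!copy_and_recompress(dst_page, src_page)) {
            memcpy(dst_page, src_page, PAGE_SIZE);
            memcpy(dst_zip, src_zip, ZIP_SIZE);
            delete_unwanted_records(dst_page, dst_zip);
        }
    }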
-
- 24 Sep, 2008 5 commits
-
-
marko authored
add some ut_a() assertions to track down file space header corruption that is the probable cause of Mantis issue #63.
-
marko authored
Return earlier when this function is called on an index that is being created. Luckily, mtr_start() does not allocate any resources. Thus, there was no memory leak.
-
marko authored
-
marko authored
be merged to a leaf page.
-
marko authored
B-tree, because there are no file segment headers in the insert buffer B-tree root page. The function was introduced in r2627.
-
- 22 Sep, 2008 4 commits
- 19 Sep, 2008 1 commit
-
-
calvin authored
The memory leak was due to wrong parameters passed to the VirtualFree() call, so the call failed with Windows error 87. MEM_DECOMMIT cannot be used together with MEM_RELEASE, and when the parameter is MEM_RELEASE, the size parameter must be 0; the function then frees the entire region reserved by the initial call to VirtualAlloc(). This issue was introduced by r984.

Approved by: Heikki (on IM)
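The corrected call, per the Win32 contract (with MEM_RELEASE the size must be 0, and MEM_DECOMMIT must not be combined with it):

    #include <windows.h>

    /* MEM_RELEASE frees the entire region reserved by the original
    VirtualAlloc() call, so dwSize must be 0. Combining it with
    MEM_DECOMMIT, or passing a nonzero size, fails with
    ERROR_INVALID_PARAMETER (87) and the region leaks. */
    static void
    release_region(void* ptr)
    {
        if (!VirtualFree(ptr, 0, MEM_RELEASE)) {
            /* GetLastError() reports the failure cause here. */
        }
    }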
-
- 18 Sep, 2008 5 commits
-
-
marko authored
ha_thd() whenever possible.

EQ_CURRENT_THD(thd): New predicate, for use in assertions.

innobase_drop_database(): Tolerate current_thd == NULL, so that the Windows plugin will work. In the Windows plugin, it will be impossible to skip foreign key checks in this function. However, DROP DATABASE will drop each table (that MySQL knows about) individually before calling this function. Thus, the foreign key checks can be disabled also in the Windows plugin, unless some .frm files are missing.
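A plausible shape for the new predicate (the actual definition is not quoted in this log):

    /* Hypothetical sketch: compare a handle against the global
    current_thd, for use inside debug assertions such as
    ut_ad(EQ_CURRENT_THD(thd)). */
    #define EQ_CURRENT_THD(thd) ((thd) == current_thd)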
-
marko authored
the maximum record size will never exceed the B-tree page size limit. For uncompressed tables, there should always be enough space for two records in an empty B-tree page. For compressed tables, there should be enough space for storing two node pointer records or one data record in an empty page in uncompressed format.

dict_build_table_def_step(): Remove the inaccurate check for table row size.

dict_index_too_big_for_tree(): New function: check if the index records would be too big for a B-tree page.

dict_index_add_to_cache(): Add the parameter "strict". Invoke dict_index_too_big_for_tree() if it is set.

trx_is_strict(), thd_is_strict(): New functions, for determining if innodb_strict_mode is enabled for the current transaction.

dict_create_index_step(): Pass the new parameter strict of dict_index_add_to_cache() as trx_is_strict(trx). All other callers pass it as FALSE.

innodb.test: Enable innodb_strict_mode before attempting to create a table with a too big record size.

innodb-zip.test: Remove the test of inserting random data. Add tests for checking that the maximum record lengths are enforced at table creation time.
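A sketch of how the strict check plugs in (the function names are from the message above; the wrapper, types, and error value here are assumptions):

    /* Hypothetical stand-ins for this sketch. */
    typedef int ibool;
    enum { DB_SUCCESS = 0, DB_TOO_BIG_RECORD = 42 };
    ibool dict_index_too_big_for_tree(const void* table, const void* index);

    /* Sketch: refuse to cache an index whose records could exceed
    the B-tree page size limit, but only when innodb_strict_mode is
    on for the current transaction. */
    static int
    add_index_checked(const void* table, const void* index, ibool strict)
    {
        if (strict && dict_index_too_big_for_tree(table, index)) {
            return(DB_TOO_BIG_RECORD);
        }
        return(DB_SUCCESS);
    }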
-
marko authored
-
marko authored
-
marko authored
general public, and Mantis is for our internal use only. Thanks to Vasil for pointing this out.
-
- 17 Sep, 2008 7 commits
-
-
marko authored
bug#39483 InnoDB hang on adaptive hash because of out of order ::open() call by MySQL

Forward port of r2629.

Under some conditions MySQL calls ::open() with the search latch held, leading to a deadlock: we try to acquire dict_sys->mutex inside ::open(), breaking the latching order. The fix is to release the search latch first.

Reviewed by: Heikki
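The shape of the fix, as described (all names below are hypothetical stand-ins so the sketch compiles on its own; the real change is in the handler's open path):

    /* Hypothetical stand-ins for this sketch. */
    typedef struct { int dummy; } mutex_t;
    static struct { mutex_t mutex; } dict_sys_obj, *dict_sys = &dict_sys_obj;
    static void mutex_enter(mutex_t* m) { (void) m; }
    static void mutex_exit(mutex_t* m)  { (void) m; }
    static void release_search_latch_if_held(void) {}

    /* Sketch: ::open() can be entered while the adaptive hash index
    search latch is still held; taking dict_sys->mutex then inverts
    the latching order, so drop the search latch first. */
    static void
    open_table_safely(void)
    {
        release_search_latch_if_held();
        mutex_enter(&dict_sys->mutex);
        /* ... look up the table in the dictionary cache ... */
        mutex_exit(&dict_sys->mutex);
    }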
-
marko authored
Add the parameter struct charset_info_st* cs, so that the call thd_charset(current_thd) can be avoided. The macro current_thd has no defined value in the Windows plugin.
-
marko authored
functions innobase_convert_from_table_id(), innobase_convert_from_id(), innobase_casedn_str(), and innobase_get_charset() to ha_prototypes.h.
-
marko authored
the Microsoft Visual C compiler happy. This fix was from Calvin.
-
marko authored
-
marko authored
ha_innodb.cc: Declare strict_mode as PLUGIN_VAR_OPCMDARG, because we do want to be able to disable innodb_strict_mode. This is a non-functional change, because PLUGIN_VAR_NOCMDARG seems to accept an argument as well.

innodb-zip.test: Do not store innodb_strict_mode. It is a session variable. Add a test case for innodb_strict_mode=off.
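For reference, a session variable of this kind is declared through MySQL's plugin-variable macros along these lines (a sketch, not the exact ha_innodb.cc text):

    #include <mysql/plugin.h>

    /* A per-session boolean that may also be set from the command
    line; PLUGIN_VAR_OPCMDARG makes the command-line argument
    optional. */
    static MYSQL_THDVAR_BOOL(strict_mode, PLUGIN_VAR_OPCMDARG,
      "Use strict mode when evaluating CREATE options.",
      NULL, NULL, FALSE);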
-
marko authored
there will always be enough space for two node pointer records in an empty B-tree page. This was reported as Mantis issue #73.

page_zip_rec_needs_ext(): Add the parameter n_fields, for accurate estimation of the compressed size of the data dictionary information. Given that this function is only invoked for records on leaf pages, require that there be enough space for one record in the compressed page. We check elsewhere that there will be enough room for two node pointer records on higher-level pages.

btr_cur_optimistic_insert(): Ensure that there will be enough room for two node pointer records on an empty non-leaf page. The rule for leaf-page records will be enforced by the callers of page_zip_rec_needs_ext().

btr_cur_pessimistic_insert(): Remove the insufficient check that the leaf page record should be compressible by itself. Instead, now we require that two node pointer records fit on a non-leaf page, and one record will fit in uncompressed form on the leaf page.

page_zip_write_header(), page_zip_write_rec(): Re-enable the debug assertions that were violated by the insufficient check in btr_cur_pessimistic_insert().

innodb_bug36172.test: Use a larger compressed page size.
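The leaf-page rule, in sketch form (hypothetical names and an assumed overhead formula; the real logic is in page_zip_rec_needs_ext()):

    /* Hypothetical sketch: on a compressed leaf page, the record
    plus a per-field allowance that grows with n_fields must fit in
    an empty compressed page; otherwise some columns must go to
    off-page (external) storage. The overhead constants are assumed
    for illustration only. */
    static int
    rec_needs_ext(unsigned long rec_size, unsigned long n_fields,
                  unsigned long empty_zip_space)
    {
        unsigned long overhead = 6 + 2 * n_fields;  /* assumed */
        return(rec_size + overhead > empty_zip_space);
    }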
-
- 16 Sep, 2008 2 commits
-
-
marko authored
btr_search_drop_page_hash_index(): Add const qualifiers to the local variables page, rec, and index, to ensure that they are not modified by this function.

page_get_infimum_offset(), page_get_supremum_offset(): New functions.

page_get_infimum_rec(), page_get_supremum_rec(): Replaced by const-preserving macros that invoke the accessor functions.
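The const-preserving idea: the accessor returns an offset, and the macro does pointer arithmetic, so the result keeps the qualification of its argument. A simplified sketch (the offset value is a placeholder):

    #include <stddef.h>

    typedef unsigned char page_t;

    /* Returning an offset rather than a pointer never strips const. */
    static size_t
    page_get_infimum_offset(const page_t* page)
    {
        (void) page;
        return(99);     /* placeholder; the real offset depends on
                           the page format */
    }

    /* (page) + offset is const page_t* when page is const, and
    page_t* when it is not, unlike a function returning a pointer. */
    #define page_get_infimum_rec(page) \
        ((page) + page_get_infimum_offset(page))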
-
marko authored
help in tracking down issue #63 (memory corruption). UNIV_BTR_DEBUG is currently enabled in univ.i.

btr_root_fseg_validate(): New function, for validating a file segment header on a B-tree root page.

btr_root_block_get(), btr_free_but_not_root(), btr_root_raise_and_insert(), btr_discard_only_page_on_level(): Check PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP on the root page with btr_root_fseg_validate().

btr_root_raise_and_insert(): Move the assertion dict_index_get_page(index) == page_get_page_no(root) inside UNIV_BTR_DEBUG. It was previously enabled by UNIV_DEBUG.

btr_free_root(): Check PAGE_BTR_SEG_TOP on the root page with btr_root_fseg_validate().
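A guess at the shape of the new check, with stand-in definitions so the sketch is self-contained (the body is an assumption, not the actual btr_root_fseg_validate() code):

    #include <assert.h>

    /* Hypothetical stand-ins for this sketch. */
    typedef unsigned char byte;
    typedef unsigned long ulint;
    #define FSEG_HDR_SPACE 0    /* assumed offset of the space id */
    #define ut_a(c) assert(c)
    static ulint
    mach_read_from_4(const byte* b)     /* big-endian 32-bit read */
    {
        return((ulint) b[0] << 24 | (ulint) b[1] << 16
               | (ulint) b[2] << 8 | (ulint) b[3]);
    }

    /* Sketch: a file segment header stored on a B-tree root page
    should refer back to the tablespace the page lives in. */
    static void
    root_fseg_validate(const byte* seg_header, ulint space)
    {
        ut_a(mach_read_from_4(seg_header + FSEG_HDR_SPACE) == space);
    }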
-
- 15 Sep, 2008 1 commit
-
-
vasil authored
Add a test case to check that mysqld does not crash when running ANALYZE TABLE with different values for innodb_stats_sample_pages.

Suggested by: Marko
Approved by: Marko
-