1. 29 Sep, 2008 3 commits
  2. 26 Sep, 2008 4 commits
  3. 25 Sep, 2008 5 commits
    • marko's avatar
      branches/zip: Non-functional change: · f74acc25
      marko authored
      page_zip_copy_recs(): Rename from page_zip_copy().
      Update the function comment.
      f74acc25
    • marko's avatar
      14d5f725
    • marko's avatar
      branches/zip: page_zip_copy(): Skip PAGE_MAX_TRX_ID, because · fdfdf385
      marko authored
      page_copy_rec_list_end(), page_copy_rec_list_start() and friends do
      not copy it either.
      fdfdf385
    • marko's avatar
      branches/zip: page_zip_copy(): Copy only those B-tree page header · 731ead1d
      marko authored
      fields that are related to the records stored in the page.
      
      page_zip_copy() is a fall-back method in certain B-tree operations
      (tree compression, splitting or merging nodes).  The contents of a
      page may fit in the compressed page frame when it has been modified in
      a certain sequence, but not when the page is recompressed.  Sometimes,
      copying all or part of the records to an empty page could fail because
      of compression overflow.  In such cases, we copy the compressed and
      uncompressed pages bit for bit and delete any unwanted records from
      the copy.  (Deletion is guaranteed to succeed.)  The method
      page_zip_copy() is invoked very rarely.
      
      In one case, page_zip_copy() was called in btr_lift_page_up() to move
      the records to the root page of the B-tree.  Because page_zip_copy()
      copied all B-tree page header fields, it overwrote the file segment
      header fields PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP.  This is the
      probable cause of the corruption that was reported as Mantis issue #63
      and others.
      731ead1d
  4. 24 Sep, 2008 5 commits
  5. 22 Sep, 2008 4 commits
  6. 19 Sep, 2008 1 commit
    • calvin's avatar
      branches/zip: fix Mantis issue #74 Memory leak on Windows · 6df13976
      calvin authored
      The memory leak was caused by wrong parameters being passed to the
      VirtualFree() call, making the call fail with Windows error 87.
      MEM_DECOMMIT can NOT be used together with MEM_RELEASE, and when the
      parameter is MEM_RELEASE, the size parameter must be 0. The function
      then frees the entire region that was reserved by the initial
      allocation call to VirtualAlloc.
      
      This issue was introduced by r984.
      
      Approved by:	Heikki (on IM)
      6df13976
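A portable sketch of the VirtualFree() parameter rule that this fix relies on. The flag values mirror those in <winbase.h>, but this checker is illustrative only, not the Windows API itself:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative constants matching the Win32 values in <winbase.h>. */
#define MEM_DECOMMIT 0x4000UL
#define MEM_RELEASE  0x8000UL

/* Return nonzero if the (free_type, size) combination is legal:
   MEM_DECOMMIT and MEM_RELEASE are mutually exclusive, and
   MEM_RELEASE requires size == 0, because it always frees the
   entire region reserved by the original VirtualAlloc() call. */
static int
virtualfree_args_ok(unsigned long free_type, size_t size)
{
    if ((free_type & MEM_DECOMMIT) && (free_type & MEM_RELEASE)) {
        return 0; /* combining the flags fails with Windows error 87 */
    }
    if ((free_type & MEM_RELEASE) && size != 0) {
        return 0; /* MEM_RELEASE frees the whole region; size must be 0 */
    }
    return 1;
}
```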
  7. 18 Sep, 2008 5 commits
    • marko's avatar
      branches/zip: Map current_thd to NULL in the Windows plugin, and use · ea6b2cea
      marko authored
      ha_thd() whenever possible.
      
      EQ_CURRENT_THD(thd): New predicate, for use in assertions.
      
      innobase_drop_database(): Tolerate current_thd == NULL, so that the
      Windows plugin will work.  In the Windows plugin, it will be
      impossible to skip foreign key checks in this function.  However,
      DROP DATABASE will drop each table (that MySQL knows about) individually
      before calling this function.  Thus, the foreign key checks can also be
      disabled in the Windows plugin, unless some .frm files are missing.
      ea6b2cea
    • marko's avatar
      branches/zip: When creating an index in innodb_strict_mode, check that · 3b2f5c05
      marko authored
      the maximum record size will never exceed the B-tree page size limit.
      For uncompressed tables, there should always be enough space for two
      records in an empty B-tree page.  For compressed tables, there should
      be enough space for storing two node pointer records or one data
      record in an empty page in uncompressed format.
      
      dict_build_table_def_step(): Remove the inaccurate check for table row
      size.
      
      dict_index_too_big_for_tree(): New function: check if the index
      records would be too big for a B-tree page.
      
      dict_index_add_to_cache(): Add the parameter "strict".  Invoke
      dict_index_too_big_for_tree() if it is set.
      
      trx_is_strict(), thd_is_strict(): New functions, for determining if
      innodb_strict_mode is enabled for the current transaction.
      
      dict_create_index_step(): Pass the new parameter strict of
      dict_index_add_to_cache() as trx_is_strict(trx).  All other callers
      pass it as FALSE.
      
      innodb.test: Enable innodb_strict_mode before attempting to create a
      table with a too big record size.
      
      innodb-zip.test: Remove the test of inserting random data.  Add tests
      for checking that the maximum record lengths are enforced at table
      creation time.
      3b2f5c05
    • marko's avatar
      branches/zip: ChangeLog: Remove reference to Mantis. This file is for the · 2fe45fb9
      marko authored
      general public, and Mantis is for our internal use only.
      
      Thanks to Vasil for pointing this out.
      2fe45fb9
  8. 17 Sep, 2008 7 commits
    • marko's avatar
      branches/zip: Merge r2617:r2630 from branches/5.1: · 7a4d2a59
      marko authored
      bug#39483 InnoDB hang on adaptive hash because of out of order ::open()
      call by MySQL
      
      Forward port of r2629
      
      Under some conditions, MySQL calls ::open() while holding the
      search_latch, leading to a deadlock, because acquiring dict_sys->mutex
      inside ::open() breaks the latching order. The fix is to release the
      search_latch first.
      
      Reviewed by: Heikki
      7a4d2a59
    • marko's avatar
      branches/zip: innobase_convert_from_id(), innobase_convert_from_table_id(): · 39147158
      marko authored
      Add the parameter struct charset_info_st* cs, so that the call
      thd_charset(current_thd) can be avoided.  The macro current_thd has no
      defined value in the Windows plugin.
      39147158
    • marko's avatar
      branches/zip: Non-functional change: Move the declarations of the · dc32dbdb
      marko authored
      functions innobase_convert_from_table_id(), innobase_convert_from_id(),
      innobase_casedn_str(), and innobase_get_charset() to ha_prototypes.h.
      dc32dbdb
    • marko's avatar
      branches/zip: HASH_INSERT: Add a type conversion that is needed to keep · 37c90941
      marko authored
      the Microsoft Visual C compiler happy.  This fix was from Calvin.
      37c90941
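A hypothetical sketch of the kind of cast this commit describes (the real HASH_INSERT macro lives in InnoDB's hash0hash.h; the names and layout below are illustrative). When a hash cell holds an untyped void pointer, reading it into a typed node needs an explicit (TYPE *) cast: plain C permits the implicit conversion from void *, but the Microsoft Visual C compiler complains without the cast:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative chained-hash node; not InnoDB's actual layout. */
typedef struct node_t {
    int            key;
    struct node_t *next;
} node_t;

enum { N_CELLS = 8 };

/* Insert NODE at the front of the chain for FOLD.  The cells store
   void *, so the (TYPE *) cast on *cell__ is the conversion that
   keeps MSVC happy; C would also accept it implicitly. */
#define HASH_INSERT(TYPE, table, fold, node)                    \
    do {                                                        \
        void **cell__ = &(table)[(fold) % N_CELLS];             \
        (node)->next = (TYPE *) *cell__;  /* the added cast */  \
        *cell__ = (node);                                       \
    } while (0)
```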
    • marko's avatar
      branches/zip: Add the ChangeLog entry for r2631. · 6a89d1d3
      marko authored
      6a89d1d3
    • marko's avatar
      branches/zip: Add some tests for innodb_strict_mode. · a5db78f7
      marko authored
      ha_innodb.cc: Declare strict_mode as PLUGIN_VAR_OPCMDARG, because we
      do want to be able to disable innodb_strict_mode.  This is a non-functional
      change, because PLUGIN_VAR_NOCMDARG seems to accept an argument as well.
      
      innodb-zip.test: Do not store innodb_strict_mode.  It is a session variable.
      Add a test case for innodb_strict_mode=off.
      a5db78f7
    • marko's avatar
      branches/zip: Prevent infinite B-tree page splits by ensuring that · 1a360822
      marko authored
      there will always be enough space for two node pointer records in an
      empty B-tree page.  This was reported as Mantis issue #73.
      
      page_zip_rec_needs_ext(): Add the parameter n_fields, for accurate
      estimation of the compressed size of the data dictionary information.
      Given that this function is only invoked for records on leaf pages,
      require that there be enough space for one record in the compressed
      page.  We check elsewhere that there will be enough room for two node
      pointer records on higher-level pages.
      
      btr_cur_optimistic_insert(): Ensure that there will be enough room for
      two node pointer records on an empty non-leaf page.  The rule for
      leaf-page records will be enforced by the callers of
      page_zip_rec_needs_ext().
      
      btr_cur_pessimistic_insert(): Remove the insufficient check that the
      leaf page record should be compressible by itself.  Instead, now we
      require that two node pointer records fit on a non-leaf page, and one
      record will fit in uncompressed form on the leaf page.
      
      page_zip_write_header(), page_zip_write_rec(): Re-enable the debug
      assertions that were violated by the insufficient check in
      btr_cur_pessimistic_insert().
      
      innodb_bug36172.test: Use a larger compressed page size.
      1a360822
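The invariant described above can be sketched as a predicate. This is a hypothetical illustration of the rule, not InnoDB's actual page-layout arithmetic; the free-space figures in the test are placeholders:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the commit's rule: an empty non-leaf page must have room
   for two node pointer records (so a page split can always succeed),
   and an empty leaf page must have room for at least one record (for
   compressed tables, in uncompressed form). */
static int
index_fits_in_btree(size_t node_ptr_rec_size, size_t max_leaf_rec_size,
                    size_t empty_nonleaf_free, size_t empty_leaf_free)
{
    if (2 * node_ptr_rec_size > empty_nonleaf_free) {
        return 0; /* a split could not produce two node pointers */
    }
    if (max_leaf_rec_size > empty_leaf_free) {
        return 0; /* one record would not fit an empty leaf page */
    }
    return 1;
}
```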
  9. 16 Sep, 2008 2 commits
    • marko's avatar
      branches/zip: Minor cleanup. · 5983048f
      marko authored
      btr_search_drop_page_hash_index(): Add const qualifiers to the local
      variables page, rec, and index, to ensure that they are not modified
      by this function.
      
      page_get_infimum_offset(), page_get_supremum_offset(): New functions.
      
      page_get_infimum_rec(), page_get_supremum_rec(): Replaced by
      const-preserving macros that invoke the accessor functions.
      5983048f
    • marko's avatar
      branches/zip: btr0btr.c: Add more UNIV_BTR_DEBUG checks. This should · 855c2aa8
      marko authored
      help in tracking down issue #63 (memory corruption).  UNIV_BTR_DEBUG
      is currently enabled in univ.i.
      
      btr_root_fseg_validate(): New function, for validating a file segment
      header on a B-tree root page.
      
      btr_root_block_get(), btr_free_but_not_root(),
      btr_root_raise_and_insert(), btr_discard_only_page_on_level():
      Check PAGE_BTR_SEG_LEAF and PAGE_BTR_SEG_TOP on the root page with
      btr_root_fseg_validate().
      
      btr_root_raise_and_insert(): Move the assertion
      dict_index_get_page(index) == page_get_page_no(root)
      inside UNIV_BTR_DEBUG.  It was previously enabled by UNIV_DEBUG.
      
      btr_free_root(): Check PAGE_BTR_SEG_TOP on the root page with
      btr_root_fseg_validate().
      855c2aa8
  10. 15 Sep, 2008 2 commits
    • vasil's avatar
      branches/zip: · 46b312cd
      vasil authored
      Add a test case to check that mysqld does not crash when running ANALYZE TABLE
      with different values for innodb_stats_sample_pages.
      
      Suggested by:	Marko
      Approved by:	Marko
      46b312cd
    • vasil's avatar
      branches/zip: · 733e615e
      vasil authored
      Limit the number of pages that are sampled so that it never exceeds
      the total number of pages in the index.
      
      The parameter that specifies the number of pages to test is global for
      all tables. By limiting it this way we allow the user to set it "high"
      to suit "large" tables and to avoid unnecessary work for "small" tables
      (e.g. doing 100 dives in a table that has 5 pages, obviously testing
      some pages more than once).
      
      Suggested by:	Ken
      Approved by:	Marko
      733e615e
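The clamp described above can be sketched in a few lines. Names are illustrative, not InnoDB's actual identifiers:

```c
#include <assert.h>

/* The number of index dives actually performed is the configured
   (global) sample count, capped at the number of pages in the index,
   so a 5-page table is never dived into 100 times. */
static unsigned long
clamped_sample_pages(unsigned long configured_samples,
                     unsigned long index_pages)
{
    return configured_samples < index_pages
        ? configured_samples
        : index_pages;
}
```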
  11. 13 Sep, 2008 1 commit
    • inaam's avatar
      branches/zip · 489abd22
      inaam authored
      Add a missing semicolon. The omission was introduced in r2602, which
      was obviously not compiled with UNIV_DEBUG.
      489abd22
  12. 12 Sep, 2008 1 commit
    • vasil's avatar
      branches/zip: · 4a4f1ef3
      vasil authored
      Update the ChangeLog which has not been updated for a long time, phew!
      4a4f1ef3