- 19 Dec, 2007 4 commits
-
-
marko authored
row_create_index_graph_for_mysql(): Move from row0mysql.c to row0merge.c and rename to row_merge_create_index_graph(). Also change the function comment to say that the function will create and execute the query graph for creating the index. row_merge_create_index(): Remove redundant assignment to trx->error_state.
-
marko authored
Lock the data dictionary only after acquiring the table lock. The data dictionary should not be locked for long periods. Before this change, in the worst case, the dictionary would be locked until the expiration of innodb_lock_wait_timeout. In effect, transaction-level locks (locks on database objects, such as records and tables) have a latching order level of SYNC_USER_TRX_LOCK, which is above any InnoDB rw-locks or mutexes. However, the latching order of SYNC_USER_TRX_LOCK is never checked, not even by UNIV_SYNC_DEBUG. ha_innobase::add_index(), ha_innobase::final_drop_index(): Invoke row_mysql_lock_data_dictionary(trx) only after row_merge_lock_table().
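The resulting call order, as a rough sketch (error handling and surrounding logic omitted; the exact code is an assumption based on the description above):

    /* Acquire the table lock first; this may wait up to
    innodb_lock_wait_timeout. */
    error = row_merge_lock_table(trx, innodb_table, LOCK_X);

    if (error == DB_SUCCESS) {
        /* Only now latch the data dictionary, so that it
        is held just for the duration of the dictionary
        update itself. */
        row_mysql_lock_data_dictionary(trx);

        /* ... create or drop the index ... */

        row_mysql_unlock_data_dictionary(trx);
    }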
-
marko authored
innodb-index.test: Add a test with a large number of externally stored columns. Check that prefix indexes cannot be defined on too many columns. dict_index_too_big_for_undo(): New function: check whether the undo log may overflow. dict_index_add_to_cache(): Return DB_SUCCESS or DB_TOO_BIG_RECORD. Postpone the creation and linking of some data structures, so that when dict_index_too_big_for_undo() holds, it is easier to clean up. Check the return status in all callers.
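The core of dict_index_too_big_for_undo() can be sketched as follows (a hypothetical simplification; col_has_prefix_index() and the free-space bound stand in for the real logic):

    /* Estimate the worst-case undo record size: each
    prefix-indexed column may force the undo log to store
    REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE bytes. */
    ulint   size = 0;
    ulint   i;

    for (i = 0; i < dict_table_get_n_cols(table); i++) {
        if (col_has_prefix_index(table, i)) { /* hypothetical */
            size += REC_MAX_INDEX_COL_LEN
                  + BTR_EXTERN_FIELD_REF_SIZE;
        }
    }

    if (size >= undo_page_free_space) { /* hypothetical bound */
        return(TRUE); /* an undo log record could overflow */
    }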
-
marko authored
dict_index_copy(): Remove the prototype, because this static function will be defined before its first use. Add const qualifier to "table". dict_index_build_internal_clust(), dict_index_build_internal_non_clust(): Add const qualifier to "table". Correct the comment about setting indexed[].
-
- 18 Dec, 2007 1 commit
-
-
vasil authored
Non-functional change: Do not include the terminating '\0' in TRX_I_S_LOCK_ID_MAX_LEN.
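Callers that build a NUL-terminated lock ID must therefore reserve the extra byte themselves; a minimal sketch (the buffer name is made up):

    /* TRX_I_S_LOCK_ID_MAX_LEN no longer counts the '\0'. */
    char    lock_id[TRX_I_S_LOCK_ID_MAX_LEN + 1];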
-
- 17 Dec, 2007 7 commits
-
-
marko authored
row_merge_lock_table(). ha_innobase::final_drop_index(): Set the dictionary operation mode to TRX_DICT_OP_INDEX_MAY_WAIT for the duration of the row_merge_lock_table() call.
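A sketch of the intended pattern (using trx_set_dict_operation() as the setter is an assumption):

    /* Allow waiting for the table lock only while
    row_merge_lock_table() is running. */
    trx_set_dict_operation(trx, TRX_DICT_OP_INDEX_MAY_WAIT);
    err = row_merge_lock_table(trx, table, LOCK_X);
    trx_set_dict_operation(trx, TRX_DICT_OP_INDEX);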
-
marko authored
-
marko authored
Active transactions must not switch table or index definitions on the fly, for several reasons, including the following:

* Copied indexes do not carry any history or locking information; that is, rollbacks, read views, and record locking would be broken.
* There is a huge potential for race conditions, inconsistent reads and writes, loss of data, and corruption.

Instead of trying to track down whether the table was changed during a transaction, acquire appropriate locks that protect the creation and dropping of indexes.

innodb-index.test: Test the locking of CREATE INDEX and DROP INDEX. Test that consistent reads work across dropped indexes.

lock_rec_insert_check_and_lock(): Relax the lock_table_has() assertion. When inserting a record into an index, the table must be at least IX-locked. However, when an index is being created, an IS-lock on the table is sufficient. (See the sketch below.)

row_merge_lock_table(): Add the parameter enum lock_mode mode, which must be LOCK_X or LOCK_S.

row_merge_drop_table(): Assert that n_mysql_handles_opened == 0. Unconditionally drop the table.

ha_innobase::add_index(): Acquire an X or S lock on the table, as appropriate. After acquiring an X lock, assert that n_mysql_handles_opened == 1. Remove the comments about dropping tables in the background.

ha_innobase::final_drop_index(): Acquire an X lock on the table.

dict_table_t: Remove version_number, to_be_dropped, and prebuilts.

ins_node_t: Remove table_version_number.

enum lock_mode: Move the definition from lock0lock.h to lock0types.h.

ROW_PREBUILT_OBSOLETE, row_update_prebuilt(), row_prebuilt_table_obsolete(): Remove.

row_prebuilt_t: Remove the declaration from row0types.h.

row_drop_table_for_mysql_no_commit(): Always print a warning if a table was added to the background drop queue.
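The relaxed assertion in lock_rec_insert_check_and_lock() might look like this (a sketch; the condition that detects index creation is a placeholder):

    /* An insert normally implies at least an IX lock on the
    table, but while an index is being built, the creating
    transaction holds only an IS lock. */
    ut_ad(lock_table_has(trx, index->table, LOCK_IX)
          || (index_is_being_created /* placeholder */
              && lock_table_has(trx, index->table, LOCK_IS)));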
-
marko authored
of thr_get_trx(thr).
-
marko authored
kernel_mutex must be released before calling this function. innobase_mysql_end_print_arbitrary_thd(), innobase_mysql_prepare_print_arbitrary_thd(): Assert that the kernel_mutex is not being held by the current thread.
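A sketch, assuming (as the bugfix entry below suggests) that these helpers wrap MySQL's LOCK_thread_count:

    void
    innobase_mysql_prepare_print_arbitrary_thd(void)
    {
        /* The caller must not hold the kernel mutex. */
        ut_ad(!mutex_own(&kernel_mutex));

        pthread_mutex_lock(&LOCK_thread_count);
    }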
-
vasil authored
Bugfix: Lock the MySQL mutex LOCK_thread_count before accessing trx->mysql_query_str, to avoid a race condition where MySQL sets it to NULL after we have checked that it is not NULL but before we access it.
Approved by: Marko
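The fixed access pattern, sketched (the printing context and the double indirection of mysql_query_str are assumptions):

    /* Hold LOCK_thread_count across both the NULL check and
    the dereference, so that MySQL cannot reset
    trx->mysql_query_str in between. */
    innobase_mysql_prepare_print_arbitrary_thd();

    if (trx->mysql_query_str != NULL
        && *trx->mysql_query_str != NULL) {

        fputs(*trx->mysql_query_str, file);
    }

    innobase_mysql_end_print_arbitrary_thd();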
-
vasil authored
Non-functional change: add "out:" comment for the return value.
-
- 16 Dec, 2007 1 commit
-
-
vasil authored
Non-functional change: Move the prototypes of innobase_mysql_prepare_print_arbitrary_thd() and innobase_mysql_end_print_arbitrary_thd() from lock0lock.c to ha_prototypes.h.
Suggested by: Marko
Approved by: Marko
-
- 13 Dec, 2007 5 commits
-
-
marko authored
is an overlap between BLOB pointers and the modification log or the zlib stream. page_zip_decompress_clust_ext(): Remove the improper check. The d_stream->avail_in cannot be decremented here, because we do not know at this point whether the record is deleted; no space is reserved for the BLOB pointers in deleted records. page_zip_decompress_clust(): Check for the overlap here, right before copying the BLOB pointers. page_zip_decompress_clust(): Also check that the target column is long enough, and return FALSE instead of failing a debug assertion (ut_ad()).
-
vasil authored
Add some clarification to a comment.
-
marko authored
is_clust, to avoid a warning about an unused variable when the definition of page_zip_fail() is empty.
-
marko authored
some decompression functions. page_zip_apply_log_ext(), page_zip_apply_log(): Call page_zip_fail() with appropriate diagnostics before returning NULL. page_zip_decompress_node_ptrs(), page_zip_decompress_sec(), page_zip_decompress_clust(): When detecting that the zlib stream followed by the modification log overlaps the trailer, do not let an assertion fail; instead, invoke page_zip_fail() and return FALSE. Corrupt data must never lead to assertion failures in the decompression functions.
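The recurring pattern replaces a hard assertion on corrupt input with a diagnosed soft failure, roughly (variable names are illustrative):

    /* Before: corrupt data crashes the server. */
    ut_a(next_out <= trailer);

    /* After: diagnose and reject the page. */
    if (UNIV_UNLIKELY(next_out > trailer)) {
        page_zip_fail(("page_zip_decompress_sec:"
                       " log overlaps the trailer\n"));
        return(FALSE);
    }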
-
marko authored
Add the macros ASSERT_ZERO() and ASSERT_ZERO_BLOB() for asserting that certain blocks of memory are filled with zero.
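One plausible shape for the macros (a sketch, not necessarily the exact definitions), reusing the all-zero field_ref_zero[] buffer:

    /* Assert that s bytes at b are zero
    (assumes s <= sizeof field_ref_zero). */
    #define ASSERT_ZERO(b, s) \
        ut_ad(!memcmp(b, field_ref_zero, s))

    /* Assert that a BLOB pointer (field reference) is zero. */
    #define ASSERT_ZERO_BLOB(b) \
        ut_ad(!memcmp(b, field_ref_zero, sizeof field_ref_zero))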
-
- 12 Dec, 2007 2 commits
-
-
marko authored
allocating compressed page frames or their control blocks. Also note that if buf_buddy_alloc() is used for allocating a control block, it must be initialized before releasing buf_pool->mutex. buf_page_init_for_read(): When the page hash check fails after buf_buddy_alloc(), free the uninitialized control block before freeing the compressed page frame. This fixes a potential error in buf_buddy_relocate_block().
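The corrected sequence, sketched (the buddy allocator signatures are abbreviated from the description above):

    /* Allocate the compressed frame and its control block
    while holding buf_pool->mutex. */
    data  = buf_buddy_alloc(zip_size, &lru);
    bpage = buf_buddy_alloc(sizeof *bpage, &lru);

    if (buf_page_hash_get(space, offset)) {
        /* Another thread won the race: free the still
        uninitialized control block first, then the frame. */
        buf_buddy_free(bpage, sizeof *bpage);
        buf_buddy_free(data, zip_size);
    } else {
        /* Initialize bpage before buf_pool->mutex can be
        released; buf_buddy_relocate_block() relies on it. */
    }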
-
marko authored
are interfaced with the buffer pool.
-
- 10 Dec, 2007 3 commits
-
-
marko authored
buf_zip_decompress() and return NULL on decompression failure.
-
marko authored
supposed to be fixed in r2163.
-
marko authored
mutex is temporarily released. buf_LRU_free_block(), buf_buddy_alloc_clean(): Add an output parameter that will be assigned TRUE when the buffer pool mutex is released. This bug was spotted by and fix provided by Sunny.
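Sketched usage of the new output parameter (names are illustrative):

    ibool   mutex_released = FALSE;

    /* The parameter is set to TRUE if buf_pool->mutex was
    temporarily released and reacquired inside the call. */
    if (buf_LRU_free_block(bpage, TRUE, &mutex_released)) {
        /* the block was freed */
    }

    if (mutex_released) {
        /* Cached pointers and list positions may be stale;
        restart the scan. */
    }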
-
- 07 Dec, 2007 2 commits
-
-
marko authored
columns to be up to REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE bytes in a debug assertion. Since r2159, this assertion could fail in trx_undo_prev_version_build(), because the undo log records for updates and deletes contain longer prefixes of externally stored columns. The assertion failure was reported by Sunny.
-
marko authored
dict_table_copy_types(): Initialize all fields to the SQL NULL value. Document this change in behaviour, and make all callers invoke the function right after dtuple_create(). dict_create_sys_fields_tuple(): Add a missing "break" statement to the loop that checks whether there are any column prefixes in the index. row_get_prebuilt_insert_row(): Do not set the fields to the SQL NULL value, now that dict_table_copy_types() takes care of it.
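The expected calling pattern after this change:

    row = dtuple_create(heap, dict_table_get_n_cols(table));
    /* Now also resets every field to the SQL NULL value. */
    dict_table_copy_types(row, table);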
-
- 05 Dec, 2007 3 commits
-
-
marko authored
Write long enough prefixes of externally stored columns to the undo log, so that purge will not have to dereference any BLOB pointers, which may be invalid. This is not necessary for logging inserts, because inserts are no-ops in purge, and the record will remain locked during transaction rollback.

TODO: In dict_build_table_def_step() or dict_build_index_def_step(), prevent the creation of tables with too many prefix-indexed columns, because there is a size limit on undo log records, and for each prefix-indexed column, the log must store REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE bytes (see the sketch below).

trx_undo_page_report_insert(): Assert that the index is clustered.

trx_undo_page_fetch_ext(): New function, for fetching the BLOB prefix in trx_undo_page_report_modify().

trx_undo_page_report_modify(): Write long enough prefixes of the externally stored columns to the undo log.

trx_undo_rec_get_partial_row(): Remove the parameter "ext". Assert that the undo log contains long enough prefixes of the externally stored columns.

purge_node_t: Remove the field "ext".
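The per-column bound behind the TODO, in sketch form (the constants were 768 and 20 bytes, respectively, at the time):

    /* Worst-case undo log space consumed by one
    prefix-indexed, externally stored column: the indexed
    prefix plus the BLOB pointer. */
    ulint   max_field_len = REC_MAX_INDEX_COL_LEN      /* 768 */
                          + BTR_EXTERN_FIELD_REF_SIZE; /* 20 */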
-
marko authored
prefix indexes from being built on externally stored columns.
-
marko authored
Use rec_offs_any_extern() as a condition for freeing externally stored columns. This is only a performance optimization.
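In sketch form (the argument list of the freeing function is abbreviated and assumed):

    /* rec_offs_any_extern() is a cheap flag test; only when
    it is set can the record own externally stored columns
    that need freeing. */
    if (rec_offs_any_extern(offsets)) {
        btr_rec_free_externally_stored_fields(
            index, rec, offsets, /* ... */ mtr);
    }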
-
- 04 Dec, 2007 1 commit
-
-
marko authored
innodb.result, innodb.test: Revert the changes in r2145. The tests that were removed by MySQL ChangeSet@1.2598.2.6 2007-11-06 15:42:58-07:00 tsmith@hindu.god were moved to a new test, innodb_autoinc_lock_mode_zero, which is kept in the MySQL BitKeeper tree.
-
- 03 Dec, 2007 2 commits
-
-
marko authored
when row_build() was changed to prefetch all externally stored column prefixes that occur in ordering fields of an index. row_build(): Add the parameter col_table for determining which externally stored columns need to be fetched. row_merge_read_clustered_index(): Pass new_table as this parameter, so that newly added indexes that contain column prefixes of externally stored columns will work.
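The call might then look like this (a sketch; the full parameter list of row_build() is an assumption):

    /* Pass new_table so that row_build() prefetches the
    column prefixes needed by the indexes being created. */
    row = row_build(ROW_COPY_POINTERS, clust_index, rec,
                    offsets, new_table, &ext, row_heap);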
-
marko authored
of the record containing the field reference may change.
-
- 30 Nov, 2007 3 commits
- 29 Nov, 2007 6 commits
-
-
vasil authored
* Change terminology:
    wait lock -> requested lock
    waited lock -> blocking lock
    new: requesting transaction (the trx that owns the requested lock)
    new: blocking transaction (the trx that owns the blocking lock)
* Add transaction ids to INFORMATION_SCHEMA.INNODB_LOCK_WAITS. This is somewhat redundant, because the transaction ids can be found in INNODB_LOCKS (which can be joined with INNODB_LOCK_WAITS), but it helps users write shorter joins (one table fewer) in some cases where they want to find out which transaction is blocking which (see the example below).
Suggested by: Ken
Approved by: Heikki
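For instance, with the new columns the blocking relationship can be read off directly, without joining INNODB_LOCKS (an SQL-level illustration; column names as added by this change):

    SELECT requesting_trx_id, blocking_trx_id
    FROM information_schema.INNODB_LOCK_WAITS;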
-
marko authored
in r2131.
-
marko authored
have been removed in r2131.
-
marko authored
Only add indexed BLOBs to row_ext.

trx_undo_rec_get_partial_row(): Move the BLOB fetching to row_ext_create().

row_build(): Pass only those BLOBs to row_ext_create() that are referenced by ordering columns of some indexes, similar to trx_undo_rec_get_partial_row().

row_ext_create(): Add the parameter "tuple". Move the implementation from row0ext.ic to row0ext.c.

row_ext_lookup_ith(), row_ext_lookup(): Return a const pointer. Remove the parameters "field" and "f_len". Make the row_ext_t* parameter const.

row_ext_t: Remove the field zip_size.

field_ref_zero[]: Declare in btr0types.h instead of btr0cur.h.

row_ext_lookup_low(): Rename to row_ext_cache_fill() and change the signature.
-
marko authored
univ.i: Do not define UNIV_DEBUG, UNIV_ZIP_DEBUG. btr_cur_del_unmark_for_ibuf(): Use the same comment in both btr0cur.c and btr0cur.h. Wrap long lines.
-
sunny authored
Fix a bug where the compressed and uncompressed page contents end up with conflicting versions of a record's state. The record in the compressed page was not being marked as "(un)deleted", because we were not passing the compressed page to the (un)delete-mark function, which first (un)delete-marks the uncompressed record and then, if page_zip is not NULL, (un)delete-marks the record in the compressed page.
-