Commit e5acf58b authored by unknown

This ChangeSet must be null-merged to 5.1. Applied innodb-5.0-ss982, -ss998, -ss1003

Fixes:
- Bug #15815: Very poor performance with multiple queries running concurrently
- Bug #22868: 'Thread thrashing' with > 50 concurrent conns under an upd-intensive workload
- Bug #23769: Debug assertion failure with innodb_locks_unsafe_for_binlog
- Bug #24089: Race condition in fil_flush_file_spaces()


innobase/buf/buf0buf.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1000:
  branches/5.0: Merge r999 from trunk:
  
  Reduce buffer pool mutex contention under >= 4 big concurrent
  CPU-bound SELECT queries.  (Bug #15815)
  
  Fix: replace the mutex with one mutex protecting the 'flush list'
  (and the free list) and several mutexes protecting portions of the
  buffer pool, where we keep several individual LRU lists of pages.
  
  This patch is from Sunny Bains and Heikki Tuuri.
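
  For illustration, the resulting latching pattern looks roughly like
  this (an editorial sketch distilled from the buf0flu.c/buf0lru.c hunks
  below, not the verbatim InnoDB code; names follow the patch):

	/* The buffer pool mutex now guards only the lists and the page
	hash; per-block state (io_fix, buf_fix_count, accessed) is
	guarded by the new block->mutex, so threads working on other
	blocks no longer serialize behind one global mutex. */

	mutex_enter(&(buf_pool->mutex));	/* lists + page hash */
	block = buf_page_hash_get(space, offset);

	if (!block) {
		mutex_exit(&(buf_pool->mutex));
		return(0);
	}

	mutex_enter(&block->mutex);		/* per-block state */

	if (buf_flush_ready_for_flush(block, flush_type)) {
		block->io_fix = BUF_IO_WRITE;
	}

	/* The block mutex is always acquired after the buffer pool
	mutex, never before it. */
	mutex_exit(&block->mutex);
	mutex_exit(&(buf_pool->mutex));
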
innobase/buf/buf0flu.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1000:
  branches/5.0: Merge r999 from trunk:
  
  Reduce buffer pool mutex contention under >= 4 big concurrent
  CPU-bound SELECT queries.  (Bug #15815)
  
  Fix: replace the mutex with one mutex protecting the 'flush list'
  (and the free list) and several mutexes protecting portions of the
  buffer pool, where we keep several individual LRU lists of pages.
  
  This patch is from Sunny Bains and Heikki Tuuri.
innobase/buf/buf0lru.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1000:
  branches/5.0: Merge r999 from trunk:
  
  Reduce buffer pool mutex contention under >= 4 big concurrent
  CPU-bound SELECT queries.  (Bug #15815)
  
  Fix: replace the mutex with one mutex protecting the 'flush list'
  (and the free list) and several mutexes protecting portions of the
  buffer pool, where we keep several individual LRU lists of pages.
  
  This patch is from Sunny Bains and Heikki Tuuri.
innobase/dict/dict0crea.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r974:
  branches/5.0: Port r973 from trunk.
  
  Do not break the latching order in TRUNCATE TABLE.
  
  dict_truncate_index_tree(): Replace parameter rec_t* rec with
  btr_pcur_t* pcur.  Reposition pcur before calling btr_create().
  
  sync_thread_add_level(): Remove the relaxation of the assertion added in r968.
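
  In outline, dict_truncate_index_tree() now saves and restores the
  cursor position itself (a sketch condensed from the dict0crea.c and
  row0mysql.c hunks below):

	btr_pcur_store_position(pcur, mtr);
	mtr_commit(mtr);	/* invalidates any rec pointers */

	mtr_start(mtr);
	btr_pcur_restore_position(BTR_MODIFY_LEAF, pcur, mtr);
	rec = btr_pcur_get_rec(pcur);	/* re-fetch the record */
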
innobase/fil/fil0fil.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1003:
  branches/5.0: Merge r1002 from trunk:
  
  fil_flush_file_spaces(): Copy the system->unflushed_spaces list to an
  array while holding the mutex.  This removes the crash-triggering
  race condition that was introduced when fixing Bug #15653.  (Bug #24089)
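
  The fix is a two-phase copy-then-flush (a sketch condensed from the
  fil0fil.c hunk below):

	mutex_enter(&system->mutex);

	/* Phase 1: snapshot the space ids while the list cannot
	change under us. */
	n_space_ids = UT_LIST_GET_LEN(system->unflushed_spaces);
	space_ids = mem_alloc(n_space_ids * sizeof *space_ids);
	n_space_ids = 0;

	for (space = UT_LIST_GET_FIRST(system->unflushed_spaces);
	     space;
	     space = UT_LIST_GET_NEXT(unflushed_spaces, space)) {
		space_ids[n_space_ids++] = space->id;
	}

	mutex_exit(&system->mutex);

	/* Phase 2: flush without the mutex.  fil_flush() may remove
	spaces from the list, but we no longer follow list pointers,
	and flushing an already dropped space id is harmless. */
	for (i = 0; i < n_space_ids; i++) {
		fil_flush(space_ids[i]);
	}

	mem_free(space_ids);
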
innobase/include/buf0buf.h:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1000:
  branches/5.0: Merge r999 from trunk:
  
  Reduce buffer pool mutex contention under >= 4 big concurrent
  CPU-bound SELECT queries.  (Bug #15815)
  
  Fix: replace the mutex with one mutex protecting the 'flush list'
  (and the free list) and several mutexes protecting portions of the
  buffer pool, where we keep several individual LRU lists of pages.
  
  This patch is from Sunny Bains and Heikki Tuuri.
innobase/include/buf0buf.ic:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1000:
  branches/5.0: Merge r999 from trunk:
  
  Reduce buffer pool mutex contention under >= 4 big concurrent
  CPU-bound SELECT queries.  (Bug #15815)
  
  Fix: replace the mutex with one mutex protecting the 'flush list'
  (and the free list) and several mutexes protecting portions of the
  buffer pool, where we keep several individual LRU lists of pages.
  
  This patch is from Sunny Bains and Heikki Tuuri.
innobase/include/dict0crea.h:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r974:
  branches/5.0: Port r973 from trunk.
  
  Do not break the latching order in TRUNCATE TABLE.
  
  dict_truncate_index_tree(): Replace parameter rec_t* rec with
  btr_pcur_t* pcur.  Reposition pcur before calling btr_create().
  
  sync_thread_add_level(): Remove the relaxation of the assertion added in r968.
innobase/include/sync0arr.h:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
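
  Previously each waiting thread parked on an event stored in its wait
  array cell, and every release had to scan the array to find and
  signal the matching cells.  With one event embedded in each mutex and
  rw-lock, the release path shrinks to a constant-time signal (a sketch
  condensed from the sync0sync.c and sync0arr.c hunks below):

	/* Releaser: signal the event embedded in the semaphore itself,
	then only bump the wait-array statistics counter. */
	os_event_set(mutex->event);
	sync_array_object_signalled(sync_primary_wait_array);

	/* Waiter: take the event straight from the object waited on. */
	if (cell->request_type == SYNC_MUTEX) {
		event = ((mutex_t*) cell->wait_object)->event;
	} else {
		event = ((rw_lock_t*) cell->wait_object)->event;
	}

	os_event_wait(event);
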
innobase/include/sync0rw.h:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
innobase/include/sync0rw.ic:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
innobase/include/sync0sync.h:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
innobase/os/os0sync.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
innobase/row/row0mysql.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r974:
  branches/5.0: Port r973 from trunk.
  
  Do not break the latching order in TRUNCATE TABLE.
  
  dict_truncate_index_tree(): Replace parameter rec_t* rec with
  btr_pcur_t* pcur.  Reposition pcur before calling btr_create().
  
  sync_thread_add_level(): Remove the relaxation of the assertion added in r968.
innobase/row/row0sel.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r982:
  branches/5.0: row_sel(): Do not try to acquire a LOCK_REC_NOT_GAP lock
  on the supremum record.  Instead, skip to the next record.  (Bug #23769)
  This fix was backported from r623 in the 5.1 tree.
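
  The supremum is a page-internal dummy record, not a user record, so a
  record-only (LOCK_REC_NOT_GAP) lock on it makes no sense; the fixed
  code skips it instead (matching the row0sel.c hunk below):

	if (srv_locks_unsafe_for_binlog) {

		if (page_rec_is_supremum(rec)) {

			goto next_rec;
		}

		lock_type = LOCK_REC_NOT_GAP;
	} else {
		lock_type = LOCK_ORDINARY;
	}
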
innobase/srv/srv0start.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r926:
  Refers to Bug #22268.  Since no one tries to run 5.0 on Windows 95/ME, it
  was decided to raise the limit of srv_max_n_threads to 10000 on Windows.
innobase/sync/sync0arr.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
innobase/sync/sync0rw.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
innobase/sync/sync0sync.c:
  Applied innodb-5.0-ss982, -ss998, -ss1003
  
  Revision r969:
  branches/5.0: Port r968 from trunk:
  
  sync_thread_add_level(): When level == SYNC_TREE_NODE, allow the latching
  order to be violated if the thread holds dict_operation_lock, whose level is
  SYNC_DICT_OPERATION.  This removes the assertion failure of TRUNCATE TABLE
  in builds compiled with UNIV_SYNC_DEBUG.
  
  
  Revision r974:
  branches/5.0: Port r973 from trunk.
  
  Do not break the latching order in TRUNCATE TABLE.
  
  dict_truncate_index_tree(): Replace parameter rec_t* rec with
  btr_pcur_t* pcur.  Reposition pcur before calling btr_create().
  
  sync_thread_add_level(): Remove the relaxation of the assertion added in r968.
  
  
  Revision r1001:
  branches/5.0: Reduce locking contention:
  
  Bug #22868: 'Thread thrashing' with > 50 concurrent connections under
  an update-intensive workload.
  Fix: Introduce one event per InnoDB semaphore.
  
  This patch is from Sunny Bains and Heikki Tuuri.
  This patch will not be merged to trunk (MySQL/InnoDB 5.1) yet,
  because the problem will be addressed there in a different way.
parent 6ac5b153
[The diff for innobase/buf/buf0buf.c is collapsed and is not shown here.]
innobase/buf/buf0flu.c:

@@ -114,6 +114,7 @@ buf_flush_ready_for_replace(
 {
 #ifdef UNIV_SYNC_DEBUG
 	ut_ad(mutex_own(&(buf_pool->mutex)));
+	ut_ad(mutex_own(&block->mutex));
 #endif /* UNIV_SYNC_DEBUG */
 	if (block->state != BUF_BLOCK_FILE_PAGE) {
 		ut_print_timestamp(stderr);
@@ -148,6 +149,7 @@ buf_flush_ready_for_flush(
 {
 #ifdef UNIV_SYNC_DEBUG
 	ut_ad(mutex_own(&(buf_pool->mutex)));
+	ut_ad(mutex_own(&(block->mutex)));
 #endif /* UNIV_SYNC_DEBUG */
 	ut_a(block->state == BUF_BLOCK_FILE_PAGE);
@@ -539,8 +541,15 @@ buf_flush_try_page(
 	ut_a(!block || block->state == BUF_BLOCK_FILE_PAGE);
 
+	if (!block) {
+		mutex_exit(&(buf_pool->mutex));
+		return(0);
+	}
+
+	mutex_enter(&block->mutex);
+
 	if (flush_type == BUF_FLUSH_LIST
-	    && block && buf_flush_ready_for_flush(block, flush_type)) {
+	    && buf_flush_ready_for_flush(block, flush_type)) {
 
 		block->io_fix = BUF_IO_WRITE;
@@ -578,6 +587,7 @@ buf_flush_try_page(
 			locked = TRUE;
 		}
 
+		mutex_exit(&block->mutex);
 		mutex_exit(&(buf_pool->mutex));
 
 		if (!locked) {
@@ -598,7 +608,7 @@ buf_flush_try_page(
 		return(1);
 
-	} else if (flush_type == BUF_FLUSH_LRU && block
+	} else if (flush_type == BUF_FLUSH_LRU
 		   && buf_flush_ready_for_flush(block, flush_type)) {
 
 		/* VERY IMPORTANT:
@@ -639,13 +649,14 @@ buf_flush_try_page(
 		buf_pool mutex: this ensures that the latch is acquired
 		immediately. */
 
+		mutex_exit(&block->mutex);
 		mutex_exit(&(buf_pool->mutex));
 
 		buf_flush_write_block_low(block);
 
 		return(1);
 
-	} else if (flush_type == BUF_FLUSH_SINGLE_PAGE && block
+	} else if (flush_type == BUF_FLUSH_SINGLE_PAGE
 		   && buf_flush_ready_for_flush(block, flush_type)) {
 
 		block->io_fix = BUF_IO_WRITE;
@@ -672,6 +683,7 @@ buf_flush_try_page(
 		(buf_pool->n_flush[flush_type])++;
 
+		mutex_exit(&block->mutex);
 		mutex_exit(&(buf_pool->mutex));
 
 		rw_lock_s_lock_gen(&(block->lock), BUF_IO_WRITE);
@@ -688,11 +700,12 @@ buf_flush_try_page(
 		buf_flush_write_block_low(block);
 
 		return(1);
-	} else {
+	}
 
-		mutex_exit(&(buf_pool->mutex));
+	mutex_exit(&block->mutex);
+	mutex_exit(&(buf_pool->mutex));
 
-		return(0);
-	}
+	return(0);
 }
@@ -737,34 +750,48 @@ buf_flush_try_neighbors(
 		block = buf_page_hash_get(space, i);
 
 		ut_a(!block || block->state == BUF_BLOCK_FILE_PAGE);
 
-		if (block && flush_type == BUF_FLUSH_LRU && i != offset
+		if (!block) {
+
+			continue;
+
+		} else if (flush_type == BUF_FLUSH_LRU && i != offset
 			   && !block->old) {
 
 			/* We avoid flushing 'non-old' blocks in an LRU flush,
 			because the flushed blocks are soon freed */
 
 			continue;
-		}
+		} else {
 
-		if (block && buf_flush_ready_for_flush(block, flush_type)
-		    && (i == offset || block->buf_fix_count == 0)) {
+			mutex_enter(&block->mutex);
 
-			/* We only try to flush those neighbors != offset
-			where the buf fix count is zero, as we then know that
-			we probably can latch the page without a semaphore
-			wait. Semaphore waits are expensive because we must
-			flush the doublewrite buffer before we start
-			waiting. */
+			if (buf_flush_ready_for_flush(block, flush_type)
+			    && (i == offset || block->buf_fix_count == 0)) {
 
-			mutex_exit(&(buf_pool->mutex));
+				/* We only try to flush those
+				neighbors != offset where the buf fix count is
+				zero, as we then know that we probably can
+				latch the page without a semaphore wait.
+				Semaphore waits are expensive because we must
+				flush the doublewrite buffer before we start
+				waiting. */
+
+				mutex_exit(&block->mutex);
+				mutex_exit(&(buf_pool->mutex));
 
-			/* Note: as we release the buf_pool mutex above, in
-			buf_flush_try_page we cannot be sure the page is still
-			in a flushable state: therefore we check it again
-			inside that function. */
+				/* Note: as we release the buf_pool mutex
+				above, in buf_flush_try_page we cannot be sure
+				the page is still in a flushable state:
+				therefore we check it again inside that
+				function. */
 
-			count += buf_flush_try_page(space, i, flush_type);
+				count += buf_flush_try_page(space, i,
+							    flush_type);
 
-			mutex_enter(&(buf_pool->mutex));
+				mutex_enter(&(buf_pool->mutex));
+			} else {
+				mutex_exit(&block->mutex);
+			}
 		}
 	}
@@ -858,12 +885,15 @@ buf_flush_batch(
 	while ((block != NULL) && !found) {
 		ut_a(block->state == BUF_BLOCK_FILE_PAGE);
 
+		mutex_enter(&block->mutex);
+
 		if (buf_flush_ready_for_flush(block, flush_type)) {
 
 			found = TRUE;
 			space = block->space;
 			offset = block->offset;
 
+			mutex_exit(&block->mutex);
 			mutex_exit(&(buf_pool->mutex));
 
 			old_page_count = page_count;
@@ -881,10 +911,14 @@ buf_flush_batch(
 		} else if (flush_type == BUF_FLUSH_LRU) {
 
+			mutex_exit(&block->mutex);
+
 			block = UT_LIST_GET_PREV(LRU, block);
 		} else {
 
 			ut_ad(flush_type == BUF_FLUSH_LIST);
 
+			mutex_exit(&block->mutex);
+
 			block = UT_LIST_GET_PREV(flush_list, block);
 		}
 	}
@@ -966,10 +1000,14 @@ buf_flush_LRU_recommendation(void)
 	       + BUF_FLUSH_EXTRA_MARGIN)
 	       && (distance < BUF_LRU_FREE_SEARCH_LEN)) {
 
+		mutex_enter(&block->mutex);
+
 		if (buf_flush_ready_for_replace(block)) {
 			n_replaceable++;
 		}
 
+		mutex_exit(&block->mutex);
+
 		distance++;
 		block = UT_LIST_GET_PREV(LRU, block);
innobase/buf/buf0lru.c:

@@ -86,6 +86,9 @@ buf_LRU_invalidate_tablespace(
 	block = UT_LIST_GET_LAST(buf_pool->LRU);
 
 	while (block != NULL) {
+
+		mutex_enter(&block->mutex);
+
 		ut_a(block->state == BUF_BLOCK_FILE_PAGE);
 
 		if (block->space == id
@@ -112,6 +115,8 @@ buf_LRU_invalidate_tablespace(
 			if (block->is_hashed) {
 				page_no = block->offset;
 
+				mutex_exit(&block->mutex);
+
 				mutex_exit(&(buf_pool->mutex));
 
 				/* Note that the following call will acquire
@@ -138,6 +143,7 @@ buf_LRU_invalidate_tablespace(
 				buf_LRU_block_free_hashed_page(block);
 			}
 next_page:
+		mutex_exit(&block->mutex);
 		block = UT_LIST_GET_PREV(LRU, block);
 	}
@@ -211,6 +217,9 @@ buf_LRU_search_and_free_block(
 	while (block != NULL) {
 		ut_a(block->in_LRU_list);
 
+		mutex_enter(&block->mutex);
+
 		if (buf_flush_ready_for_replace(block)) {
 
 #ifdef UNIV_DEBUG
@@ -225,6 +234,7 @@ buf_LRU_search_and_free_block(
 			buf_LRU_block_remove_hashed_page(block);
 
 			mutex_exit(&(buf_pool->mutex));
+			mutex_exit(&block->mutex);
 
 			/* Remove possible adaptive hash index built on the
 			page; in the case of AWE the block may not have a
@@ -233,15 +243,21 @@ buf_LRU_search_and_free_block(
 			if (block->frame) {
 				btr_search_drop_page_hash_index(block->frame);
 			}
 
-			mutex_enter(&(buf_pool->mutex));
-
 			ut_a(block->buf_fix_count == 0);
 
+			mutex_enter(&(buf_pool->mutex));
+			mutex_enter(&block->mutex);
+
 			buf_LRU_block_free_hashed_page(block);
 			freed = TRUE;
+			mutex_exit(&block->mutex);
 
 			break;
 		}
 
+		mutex_exit(&block->mutex);
+
 		block = UT_LIST_GET_PREV(LRU, block);
 		distance++;
@@ -415,8 +431,12 @@ buf_LRU_get_free_block(void)
 		}
 	}
 
+	mutex_enter(&block->mutex);
+
 	block->state = BUF_BLOCK_READY_FOR_USE;
 
+	mutex_exit(&block->mutex);
+
 	mutex_exit(&(buf_pool->mutex));
 
 	if (started_monitor) {
@@ -818,6 +838,7 @@ buf_LRU_block_free_non_file_page(
 {
 #ifdef UNIV_SYNC_DEBUG
 	ut_ad(mutex_own(&(buf_pool->mutex)));
+	ut_ad(mutex_own(&block->mutex));
 #endif /* UNIV_SYNC_DEBUG */
 	ut_ad(block);
@@ -857,6 +878,7 @@ buf_LRU_block_remove_hashed_page(
 {
 #ifdef UNIV_SYNC_DEBUG
 	ut_ad(mutex_own(&(buf_pool->mutex)));
+	ut_ad(mutex_own(&block->mutex));
 #endif /* UNIV_SYNC_DEBUG */
 	ut_ad(block);
@@ -914,6 +936,7 @@ buf_LRU_block_free_hashed_page(
 {
 #ifdef UNIV_SYNC_DEBUG
 	ut_ad(mutex_own(&(buf_pool->mutex)));
+	ut_ad(mutex_own(&block->mutex));
 #endif /* UNIV_SYNC_DEBUG */
 	ut_a(block->state == BUF_BLOCK_REMOVE_HASH);
innobase/dict/dict0crea.c:

@@ -724,8 +724,10 @@ dict_truncate_index_tree(
 				/* out: new root page number, or
 				FIL_NULL on failure */
 	dict_table_t*	table,	/* in: the table the index belongs to */
-	rec_t*		rec,	/* in: record in the clustered index of
-				SYS_INDEXES table */
+	btr_pcur_t*	pcur,	/* in/out: persistent cursor pointing to
+				record in the clustered index of
+				SYS_INDEXES table. The cursor may be
+				repositioned in this call. */
 	mtr_t*		mtr)	/* in: mtr having the latch
 				on the record page. The mtr may be
 				committed and restarted in this call. */
@@ -734,6 +736,7 @@ dict_truncate_index_tree(
 	ulint		space;
 	ulint		type;
 	dulint		index_id;
+	rec_t*		rec;
 	byte*		ptr;
 	ulint		len;
 	ulint		comp;
@@ -744,6 +747,7 @@ dict_truncate_index_tree(
 #endif /* UNIV_SYNC_DEBUG */
 	ut_a(!dict_sys->sys_indexes->comp);
 
+	rec = btr_pcur_get_rec(pcur);
 	ptr = rec_get_nth_field_old(rec, DICT_SYS_INDEXES_PAGE_NO_FIELD, &len);
 
 	ut_ad(len == 4);
@@ -809,10 +813,11 @@ dict_truncate_index_tree(
 	/* We will need to commit the mini-transaction in order to avoid
 	deadlocks in the btr_create() call, because otherwise we would
 	be freeing and allocating pages in the same mini-transaction. */
+	btr_pcur_store_position(pcur, mtr);
 	mtr_commit(mtr);
-	/* mtr_commit() will invalidate rec. */
-	rec = NULL;
 	mtr_start(mtr);
+	btr_pcur_restore_position(BTR_MODIFY_LEAF, pcur, mtr);
 
 	/* Find the index corresponding to this SYS_INDEXES record. */
 	for (index = UT_LIST_GET_FIRST(table->indexes);
innobase/fil/fil0fil.c:

@@ -4285,29 +4285,47 @@ fil_flush_file_spaces(
 {
 	fil_system_t*	system	= fil_system;
 	fil_space_t*	space;
+	ulint*		space_ids;
+	ulint		n_space_ids;
+	ulint		i;
 
 	mutex_enter(&(system->mutex));
 
-	space = UT_LIST_GET_FIRST(system->unflushed_spaces);
+	n_space_ids = UT_LIST_GET_LEN(system->unflushed_spaces);
 
-	while (space) {
-		if (space->purpose == purpose && !space->is_being_deleted) {
+	if (n_space_ids == 0) {
 
-			space->n_pending_flushes++; /* prevent dropping of the
-						    space while we are
-						    flushing */
-			mutex_exit(&(system->mutex));
+		mutex_exit(&system->mutex);
+		return;
+	}
 
-			fil_flush(space->id);
+	/* Assemble a list of space ids to flush.  Previously, we
+	traversed system->unflushed_spaces and called UT_LIST_GET_NEXT()
+	on a space that was just removed from the list by fil_flush().
+	Thus, the space could be dropped and the memory overwritten. */
 
-			mutex_enter(&(system->mutex));
+	space_ids = mem_alloc(n_space_ids * sizeof *space_ids);
 
-			space->n_pending_flushes--;
+	n_space_ids = 0;
+
+	for (space = UT_LIST_GET_FIRST(system->unflushed_spaces);
+	     space;
+	     space = UT_LIST_GET_NEXT(unflushed_spaces, space)) {
+
+		if (space->purpose == purpose && !space->is_being_deleted) {
+
+			space_ids[n_space_ids++] = space->id;
 		}
-		space = UT_LIST_GET_NEXT(unflushed_spaces, space);
 	}
 
-	mutex_exit(&(system->mutex));
+	mutex_exit(&system->mutex);
+
+	/* Flush the spaces.  It will not hurt to call fil_flush() on
+	a non-existing space id. */
+	for (i = 0; i < n_space_ids; i++) {
+
+		fil_flush(space_ids[i]);
+	}
+
+	mem_free(space_ids);
 }
innobase/include/buf0buf.h:

@@ -461,8 +461,8 @@ Gets the mutex number protecting the page record lock hash chain in the lock
 table. */
 UNIV_INLINE
 mutex_t*
-buf_frame_get_lock_mutex(
-/*=====================*/
+buf_frame_get_mutex(
+/*================*/
 				/* out: mutex */
 	byte*	ptr);	/* in: pointer to within a buffer frame */
 /***********************************************************************
@@ -713,7 +713,10 @@ struct buf_block_struct{
 	ulint		magic_n;	/* magic number to check */
 
 	ulint		state;		/* state of the control block:
-					BUF_BLOCK_NOT_USED, ... */
+					BUF_BLOCK_NOT_USED, ...; changing
+					this is only allowed when a thread
+					has BOTH the buffer pool mutex AND
+					block->mutex locked */
 	byte*		frame;		/* pointer to buffer frame which
 					is of size UNIV_PAGE_SIZE, and
 					aligned to an address divisible by
@@ -731,8 +734,12 @@ struct buf_block_struct{
 	ulint		offset;		/* page number within the space */
 	ulint		lock_hash_val;	/* hashed value of the page address
 					in the record lock hash table */
-	mutex_t*	lock_mutex;	/* mutex protecting the chain in the
-					record lock hash table */
+	mutex_t		mutex;		/* mutex protecting this block:
+					state (also protected by the buffer
+					pool mutex), io_fix, buf_fix_count,
+					and accessed; we introduce this new
+					mutex in InnoDB-5.1 to relieve
+					contention on the buffer pool mutex */
 	rw_lock_t	lock;		/* read-write lock of the buffer
 					frame */
 	buf_block_t*	hash;		/* node used in chaining to the page
@@ -788,20 +795,27 @@ struct buf_block_struct{
 					in heuristic algorithms, because of
 					the possibility of a wrap-around! */
 	ulint		freed_page_clock;/* the value of freed_page_clock
-					buffer pool when this block was
-					last time put to the head of the
-					LRU list */
+					of the buffer pool when this block was
+					the last time put to the head of the
+					LRU list; a thread is allowed to
+					read this for heuristic purposes
+					without holding any mutex or latch */
 	ibool		old;		/* TRUE if the block is in the old
 					blocks in the LRU list */
 	ibool		accessed;	/* TRUE if the page has been accessed
 					while in the buffer pool: read-ahead
 					may read in pages which have not been
-					accessed yet */
+					accessed yet; this is protected by
+					block->mutex; a thread is allowed to
+					read this for heuristic purposes
+					without holding any mutex or latch */
 	ulint		buf_fix_count;	/* count of how manyfold this block
-					is currently bufferfixed */
+					is currently bufferfixed; this is
+					protected by block->mutex */
 	ulint		io_fix;		/* if a read is pending to the frame,
 					io_fix is BUF_IO_READ, in the case
-					of a write BUF_IO_WRITE, otherwise 0 */
+					of a write BUF_IO_WRITE, otherwise 0;
+					this is protected by block->mutex */
 
 	/* 4. Optimistic search field */
 
 	dulint		modify_clock;	/* this clock is incremented every
@@ -962,7 +976,9 @@ struct buf_pool_struct{
 					number of buffer blocks removed from
 					the end of the LRU list; NOTE that
 					this counter may wrap around at 4
-					billion! */
+					billion! A thread is allowed to
+					read this for heuristic purposes
+					without holding any mutex or latch */
 	ulint		LRU_flush_ended;/* when an LRU flush ends for a page,
 					this is incremented by one; this is
 					set to zero when a buffer block is
innobase/include/buf0buf.ic:

@@ -330,8 +330,8 @@ Gets the mutex number protecting the page record lock hash chain in the lock
 table. */
 UNIV_INLINE
 mutex_t*
-buf_frame_get_lock_mutex(
-/*=====================*/
+buf_frame_get_mutex(
+/*================*/
 				/* out: mutex */
 	byte*	ptr)	/* in: pointer to within a buffer frame */
 {
@@ -339,7 +339,7 @@ buf_frame_get_lock_mutex(
 	block = buf_block_align(ptr);
 
-	return(block->lock_mutex);
+	return(&block->mutex);
 }
 
 /*************************************************************************
@@ -512,6 +512,7 @@ buf_block_buf_fix_inc_debug(
 	ret = rw_lock_s_lock_func_nowait(&(block->debug_latch), file, line);
 
 	ut_ad(ret == TRUE);
+	ut_ad(mutex_own(&block->mutex));
 #endif
 	block->buf_fix_count++;
 }
@@ -524,6 +525,9 @@ buf_block_buf_fix_inc(
 /*==================*/
 	buf_block_t*	block)	/* in: block to bufferfix */
 {
+#ifdef UNIV_SYNC_DEBUG
+	ut_ad(mutex_own(&block->mutex));
+#endif
 	block->buf_fix_count++;
 }
 #endif /* UNIV_SYNC_DEBUG */
@@ -618,23 +622,24 @@ buf_page_release(
 	ut_ad(block);
 
-	mutex_enter_fast(&(buf_pool->mutex));
-
 	ut_a(block->state == BUF_BLOCK_FILE_PAGE);
 	ut_a(block->buf_fix_count > 0);
 
 	if (rw_latch == RW_X_LATCH && mtr->modifications) {
+		mutex_enter(&buf_pool->mutex);
 		buf_flush_note_modification(block, mtr);
+		mutex_exit(&buf_pool->mutex);
 	}
 
+	mutex_enter(&block->mutex);
+
 #ifdef UNIV_SYNC_DEBUG
 	rw_lock_s_unlock(&(block->debug_latch));
 #endif
 	buf_fix_count = block->buf_fix_count;
 	block->buf_fix_count = buf_fix_count - 1;
 
-	mutex_exit(&(buf_pool->mutex));
+	mutex_exit(&block->mutex);
 
 	if (rw_latch == RW_S_LATCH) {
 		rw_lock_s_unlock(&(block->lock));
innobase/include/dict0crea.h:

@@ -62,8 +62,10 @@ dict_truncate_index_tree(
 				/* out: new root page number, or
 				FIL_NULL on failure */
 	dict_table_t*	table,	/* in: the table the index belongs to */
-	rec_t*		rec,	/* in: record in the clustered index of
-				SYS_INDEXES table */
+	btr_pcur_t*	pcur,	/* in/out: persistent cursor pointing to
+				record in the clustered index of
+				SYS_INDEXES table. The cursor may be
+				repositioned in this call. */
 	mtr_t*		mtr);	/* in: mtr having the latch
 				on the record page. The mtr may be
 				committed and restarted in this call. */
innobase/include/sync0arr.h:

@@ -75,17 +75,12 @@ sync_array_free_cell(
 	sync_array_t*	arr,	/* in: wait array */
 	ulint		index);	/* in: index of the cell in array */
/**************************************************************************
-Looks for the cells in the wait array which refer
-to the wait object specified,
-and sets their corresponding events to the signaled state. In this
-way releases the threads waiting for the object to contend for the object.
-It is possible that no such cell is found, in which case does nothing. */
+Note that one of the wait objects was signalled. */
 
 void
-sync_array_signal_object(
-/*=====================*/
-	sync_array_t*	arr,	/* in: wait array */
-	void*	object);/* in: wait object */
+sync_array_object_signalled(
+/*========================*/
+	sync_array_t*	arr);	/* in: wait array */
/**************************************************************************
 If the wakeup algorithm does not work perfectly at semaphore relases,
 this function will do the waking (see the comment in mutex_exit). This
innobase/include/sync0rw.h:

@@ -411,6 +411,7 @@ blocked by readers, a writer may queue for the lock by setting the writer
 field. Then no new readers are allowed in. */
 
 struct rw_lock_struct {
+	os_event_t	event;	/* Used by sync0arr.c for thread queueing */
 	ulint	reader_count;	/* Number of readers who have locked this
 				lock in the shared mode */
 	ulint	writer;		/* This field is set to RW_LOCK_EX if there
innobase/include/sync0rw.ic:

@@ -382,7 +382,8 @@ rw_lock_s_unlock_func(
 	mutex_exit(mutex);
 
 	if (UNIV_UNLIKELY(sg)) {
-		sync_array_signal_object(sync_primary_wait_array, lock);
+		os_event_set(lock->event);
+		sync_array_object_signalled(sync_primary_wait_array);
 	}
 
 	ut_ad(rw_lock_validate(lock));
@@ -462,7 +463,8 @@ rw_lock_x_unlock_func(
 		mutex_exit(&(lock->mutex));
 
 		if (UNIV_UNLIKELY(sg)) {
-			sync_array_signal_object(sync_primary_wait_array, lock);
+			os_event_set(lock->event);
+			sync_array_object_signalled(sync_primary_wait_array);
 		}
 
 	ut_ad(rw_lock_validate(lock));
innobase/include/sync0sync.h:

@@ -453,6 +453,7 @@ Do not use its fields directly! The structure used in the spin lock
 implementation of a mutual exclusion semaphore. */
 
 struct mutex_struct {
+	os_event_t	event;	/* Used by sync0arr.c for the wait queue */
 	ulint	lock_word;	/* This ulint is the target of the atomic
 				test-and-set instruction in Win32 */
 #if !defined(_WIN32) || !defined(UNIV_CAN_USE_X86_ASSEMBLER)
innobase/os/os0sync.c:

@@ -21,6 +21,7 @@ Created 9/6/1995 Heikki Tuuri
 /* Type definition for an operating system mutex struct */
 struct os_mutex_struct{
+	os_event_t	event;	/* Used by sync0arr.c for queing threads */
 	void*		handle;	/* OS handle to mutex */
 	ulint		count;	/* we use this counter to check
 				that the same thread does not
@@ -35,6 +36,7 @@ struct os_mutex_struct{
 /* Mutex protecting counts and the lists of OS mutexes and events */
 os_mutex_t	os_sync_mutex;
 ibool		os_sync_mutex_inited	= FALSE;
+ibool		os_sync_free_called	= FALSE;
 
 /* This is incremented by 1 in os_thread_create and decremented by 1 in
 os_thread_exit */
@@ -50,6 +52,10 @@ ulint	os_event_count		= 0;
 ulint	os_mutex_count		= 0;
 ulint	os_fast_mutex_count	= 0;
 
+/* Because a mutex is embedded inside an event and there is an
+event embedded inside a mutex, on free, this generates a recursive call.
+This version of the free event function doesn't acquire the global lock */
+static void os_event_free_internal(os_event_t	event);
 
 /*************************************************************
 Initializes global event and OS 'slow' mutex lists. */
@@ -76,6 +82,7 @@ os_sync_free(void)
 	os_event_t	event;
 	os_mutex_t	mutex;
 
+	os_sync_free_called = TRUE;
 	event = UT_LIST_GET_FIRST(os_event_list);
 
 	while (event) {
@@ -99,6 +106,7 @@ os_sync_free(void)
 		mutex = UT_LIST_GET_FIRST(os_mutex_list);
 	}
+	os_sync_free_called = FALSE;
 }
@@ -146,14 +154,21 @@ os_event_create(
 	event->signal_count = 0;
 #endif /* __WIN__ */
 
-	/* Put to the list of events */
+	/* The os_sync_mutex can be NULL because during startup an event
+	can be created [ because it's embedded in the mutex/rwlock ] before
+	this module has been initialized */
+	if (os_sync_mutex != NULL) {
 		os_mutex_enter(os_sync_mutex);
+	}
 
+	/* Put to the list of events */
 	UT_LIST_ADD_FIRST(os_event_list, os_event_list, event);
 
 	os_event_count++;
 
+	if (os_sync_mutex != NULL) {
 		os_mutex_exit(os_sync_mutex);
+	}
 
 	return(event);
 }
@@ -255,6 +270,35 @@ os_event_reset(
 #endif
 }
 
+/**************************************************************
+Frees an event object, without acquiring the global lock. */
+static
+void
+os_event_free_internal(
+/*===================*/
+	os_event_t	event)	/* in: event to free */
+{
+#ifdef __WIN__
+	ut_a(event);
+
+	ut_a(CloseHandle(event->handle));
+#else
+	ut_a(event);
+
+	/* This is to avoid freeing the mutex twice */
+	os_fast_mutex_free(&(event->os_mutex));
+
+	ut_a(0 == pthread_cond_destroy(&(event->cond_var)));
+#endif
+	/* Remove from the list of events */
+
+	UT_LIST_REMOVE(os_event_list, os_event_list, event);
+
+	os_event_count--;
+
+	ut_free(event);
+}
+
 /**************************************************************
 Frees an event object. */
@@ -456,6 +500,7 @@ os_mutex_create(
 	mutex_str->handle = mutex;
 	mutex_str->count = 0;
+	mutex_str->event = os_event_create(NULL);
 
 	if (os_sync_mutex_inited) {
 		/* When creating os_sync_mutex itself we cannot reserve it */
@@ -532,6 +577,10 @@ os_mutex_free(
 {
 	ut_a(mutex);
 
+	if (!os_sync_free_called) {
+		os_event_free_internal(mutex->event);
+	}
+
 	if (os_sync_mutex_inited) {
 		os_mutex_enter(os_sync_mutex);
 	}
innobase/row/row0mysql.c:

@@ -2920,12 +2920,10 @@ do not allow the TRUNCATE. We also reserve the data dictionary latch. */
 			goto next_rec;
 		}
 
-		btr_pcur_store_position(&pcur, &mtr);
-
-		/* This call may commit and restart mtr. */
-		root_page_no = dict_truncate_index_tree(table, rec, &mtr);
+		/* This call may commit and restart mtr
+		and reposition pcur. */
+		root_page_no = dict_truncate_index_tree(table, &pcur, &mtr);
 
-		btr_pcur_restore_position(BTR_MODIFY_LEAF, &pcur, &mtr);
 		rec = btr_pcur_get_rec(&pcur);
 
 		if (root_page_no != FIL_NULL) {
innobase/row/row0sel.c:

@@ -1323,6 +1323,12 @@ row_sel(
 						       ULINT_UNDEFINED, &heap);
 
 			if (srv_locks_unsafe_for_binlog) {
+
+				if (page_rec_is_supremum(rec)) {
+
+					goto next_rec;
+				}
+
 				lock_type = LOCK_REC_NOT_GAP;
 			} else {
 				lock_type = LOCK_ORDINARY;
innobase/srv/srv0start.c:

@@ -1142,10 +1142,7 @@ innobase_start_or_create_for_mysql(void)
 
 #if defined(__NETWARE__)
 
-	/* Create less event semaphores because Win 98/ME had difficulty creating
-	40000 event semaphores.
-	Comment from Novell, Inc.: also, these just take a lot of memory on
-	NetWare. */
+	/* Comment from Novell, Inc.: These take a lot of memory on NetWare.*/
 	srv_max_n_threads = 1000;
 #else
 	if (srv_pool_size >= 1000 * 1024) {
innobase/sync/sync0arr.c:

@@ -62,9 +62,6 @@ struct sync_cell_struct {
 	ibool		waiting;	/* TRUE if the thread has already
 					called sync_array_event_wait
 					on this cell */
-	ibool		event_set;	/* TRUE if the event is set */
-	os_event_t	event;		/* operating system event
-					semaphore handle */
 	time_t		reservation_time;/* time when the thread reserved
 					the wait cell */
 };
@@ -218,10 +215,7 @@ sync_array_create(
 	for (i = 0; i < n_cells; i++) {
 		cell = sync_array_get_nth_cell(arr, i);
 		cell->wait_object = NULL;
-
-		/* Create an operating system event semaphore with no name */
-		cell->event = os_event_create(NULL);
-		cell->event_set = FALSE; /* it is created in reset state */
+		cell->waiting = FALSE;
 	}
 
 	return(arr);
@@ -235,19 +229,12 @@ sync_array_free(
 /*============*/
 	sync_array_t*	arr)	/* in, own: sync wait array */
 {
-	ulint		i;
-	sync_cell_t*	cell;
 	ulint		protection;
 
 	ut_a(arr->n_reserved == 0);
 
 	sync_array_validate(arr);
 
-	for (i = 0; i < arr->n_cells; i++) {
-		cell = sync_array_get_nth_cell(arr, i);
-		os_event_free(cell->event);
-	}
-
 	protection = arr->protection;
 
 	/* Release the mutex protecting the wait array complex */
@@ -292,28 +279,20 @@ sync_array_validate(
 	sync_array_exit(arr);
 }
 
-/***********************************************************************
-Puts the cell event in set state. */
-static
-void
-sync_cell_event_set(
-/*================*/
-	sync_cell_t*	cell)	/* in: array cell */
-{
-	os_event_set(cell->event);
-	cell->event_set = TRUE;
-}
-
 /***********************************************************************
 Puts the cell event in reset state. */
 static
 void
 sync_cell_event_reset(
 /*==================*/
-	sync_cell_t*	cell)	/* in: array cell */
+	ulint		type,	/* in: lock type mutex/rw_lock */
+	void*		object)	/* in: the rw_lock/mutex object */
 {
-	os_event_reset(cell->event);
-	cell->event_set = FALSE;
+	if (type == SYNC_MUTEX) {
+		os_event_reset(((mutex_t *) object)->event);
+	} else {
+		os_event_reset(((rw_lock_t *) object)->event);
+	}
 }
@@ -346,14 +325,7 @@ sync_array_reserve_cell(
 		if (cell->wait_object == NULL) {
 
-			/* Make sure the event is reset */
-			if (cell->event_set) {
-				sync_cell_event_reset(cell);
-			}
-
-			cell->reservation_time = time(NULL);
-			cell->thread = os_thread_get_curr_id();
-
+			cell->waiting = FALSE;
 			cell->wait_object = object;
 
 			if (type == SYNC_MUTEX) {
@@ -363,7 +335,6 @@ sync_array_reserve_cell(
 			}
 
 			cell->request_type = type;
-			cell->waiting = FALSE;
 
 			cell->file = file;
 			cell->line = line;
@@ -374,6 +345,13 @@ sync_array_reserve_cell(
 			sync_array_exit(arr);
 
+			/* Make sure the event is reset */
+			sync_cell_event_reset(type, object);
+
+			cell->reservation_time = time(NULL);
+
+			cell->thread = os_thread_get_curr_id();
+
 			return;
 		}
 	}
@@ -408,7 +386,12 @@ sync_array_wait_event(
 	ut_a(!cell->waiting);
 	ut_ad(os_thread_get_curr_id() == cell->thread);
 
-	event = cell->event;
+	if (cell->request_type == SYNC_MUTEX) {
+		event = ((mutex_t*) cell->wait_object)->event;
+	} else {
+		event = ((rw_lock_t*) cell->wait_object)->event;
+	}
+
 	cell->waiting = TRUE;
 
 #ifdef UNIV_SYNC_DEBUG
@@ -510,10 +493,6 @@ sync_array_cell_print(
 	if (!cell->waiting) {
 		fputs("wait has ended\n", file);
 	}
-
-	if (cell->event_set) {
-		fputs("wait is ending\n", file);
-	}
 }
 
 #ifdef UNIV_SYNC_DEBUG
@@ -623,7 +602,7 @@ sync_array_detect_deadlock(
 	depth++;
 
-	if (cell->event_set || !cell->waiting) {
+	if (!cell->waiting) {
 
 		return(FALSE); /* No deadlock here */
 	}
@@ -802,6 +781,7 @@ sync_array_free_cell(
 	ut_a(cell->wait_object != NULL);
 
+	cell->waiting = FALSE;
 	cell->wait_object = NULL;
 
 	ut_a(arr->n_reserved > 0);
@@ -811,44 +791,17 @@ sync_array_free_cell(
 }
 
 /**************************************************************************
-Looks for the cells in the wait array which refer to the wait object
-specified, and sets their corresponding events to the signaled state. In this
-way releases the threads waiting for the object to contend for the object.
-It is possible that no such cell is found, in which case does nothing. */
+Increments the signalled count. */
 
 void
-sync_array_signal_object(
-/*=====================*/
-	sync_array_t*	arr,	/* in: wait array */
-	void*		object)	/* in: wait object */
+sync_array_object_signalled(
+/*========================*/
+	sync_array_t*	arr)	/* in: wait array */
 {
-	sync_cell_t*	cell;
-	ulint		count;
-	ulint		i;
-
 	sync_array_enter(arr);
 
 	arr->sg_count++;
 
-	i = 0;
-	count = 0;
-
-	while (count < arr->n_reserved) {
-
-		cell = sync_array_get_nth_cell(arr, i);
-
-		if (cell->wait_object != NULL) {
-
-			count++;
-
-			if (cell->wait_object == object) {
-
-				sync_cell_event_set(cell);
-			}
-		}
-
-		i++;
-	}
-
 	sync_array_exit(arr);
 }
@@ -881,7 +834,17 @@ sync_arr_wake_threads_if_sema_free(void)
 		if (sync_arr_cell_can_wake_up(cell)) {
 
-			sync_cell_event_set(cell);
+			if (cell->request_type == SYNC_MUTEX) {
+				mutex_t*	mutex;
+
+				mutex = cell->wait_object;
+				os_event_set(mutex->event);
+			} else {
+				rw_lock_t*	lock;
+
+				lock = cell->wait_object;
+				os_event_set(lock->event);
+			}
 		}
 	}
@@ -911,7 +874,7 @@ sync_array_print_long_waits(void)
 		cell = sync_array_get_nth_cell(sync_primary_wait_array, i);
 
-		if (cell->wait_object != NULL
+		if (cell->wait_object != NULL && cell->waiting
 		    && difftime(time(NULL), cell->reservation_time) > 240) {
 			fputs("InnoDB: Warning: a long semaphore wait:\n",
 			      stderr);
@@ -919,7 +882,7 @@ sync_array_print_long_waits(void)
 			noticed = TRUE;
 		}
 
-		if (cell->wait_object != NULL
+		if (cell->wait_object != NULL && cell->waiting
 		    && difftime(time(NULL), cell->reservation_time)
 		    > fatal_timeout) {
 			fatal = TRUE;
innobase/sync/sync0rw.c:

@@ -128,6 +128,7 @@ rw_lock_create_func(
 	lock->last_x_file_name = "not yet reserved";
 	lock->last_s_line = 0;
 	lock->last_x_line = 0;
+	lock->event = os_event_create(NULL);
 
 	mutex_enter(&rw_lock_list_mutex);
@@ -163,6 +164,7 @@ rw_lock_free(
 	mutex_free(rw_lock_get_mutex(lock));
 
 	mutex_enter(&rw_lock_list_mutex);
+	os_event_free(lock->event);
 
 	if (UT_LIST_GET_PREV(list, lock)) {
 		ut_a(UT_LIST_GET_PREV(list, lock)->magic_n == RW_LOCK_MAGIC_N);
innobase/sync/sync0sync.c:

@@ -212,6 +212,7 @@ mutex_create_func(
 	os_fast_mutex_init(&(mutex->os_fast_mutex));
 	mutex->lock_word = 0;
 #endif
+	mutex->event = os_event_create(NULL);
 	mutex_set_waiters(mutex, 0);
 	mutex->magic_n = MUTEX_MAGIC_N;
 #ifdef UNIV_SYNC_DEBUG
@@ -288,6 +289,8 @@ mutex_free(
 		mutex_exit(&mutex_list_mutex);
 	}
 
+	os_event_free(mutex->event);
+
 #if !defined(_WIN32) || !defined(UNIV_CAN_USE_X86_ASSEMBLER)
 	os_fast_mutex_free(&(mutex->os_fast_mutex));
 #endif
@@ -564,8 +567,8 @@ mutex_signal_object(
 	/* The memory order of resetting the waiters field and
 	signaling the object is important. See LEMMA 1 above. */
 
-	sync_array_signal_object(sync_primary_wait_array, mutex);
+	os_event_set(mutex->event);
+	sync_array_object_signalled(sync_primary_wait_array);
 }
 
 #ifdef UNIV_SYNC_DEBUG
@@ -1114,6 +1117,7 @@ sync_thread_add_level(
 		ut_a(sync_thread_levels_g(array, SYNC_PURGE_SYS));
 	} else if (level == SYNC_TREE_NODE) {
 		ut_a(sync_thread_levels_contain(array, SYNC_INDEX_TREE)
+		     || sync_thread_levels_contain(array, SYNC_DICT_OPERATION)
 		     || sync_thread_levels_g(array, SYNC_TREE_NODE - 1));
 	} else if (level == SYNC_TREE_NODE_FROM_HASH) {
 		ut_a(1);