- 02 Oct, 2013 3 commits
  - Zardosht Kasheff
  - Rich Prohaska
  - Zardosht Kasheff: have closed cachefiles not immediately free pairs, but set them aside; leave freeing of pairs to the evictor and/or shutdown; should a cachefile be reopened before all pairs are freed, the pairs belonging to that cachefile are reintegrated into the cachetable
- 01 Oct, 2013 1 commit
  - Rich Prohaska: #59 get test_lock_timeout_callback to work with valgrind; change the type of a sync_fetch_and_add from bool to int
- 26 Sep, 2013 3 commits
  - Zardosht Kasheff: Revert "stuff". This reverts commit 2423c9d0.
  - Zardosht Kasheff
  - Zardosht Kasheff: fix tests that use it to not need it
- 25 Sep, 2013 4 commits
  - Zardosht Kasheff
  - Zardosht Kasheff: break up cachetable_flush_cachefile into more digestible functions; decouple hash_id from filenum; break up close_userdata into close_userdata and free_userdata
  - Zardosht Kasheff: cachefiles_list class and move some functionality in there.
  - Yoni Fogel: Isolate mempool and OMT into a new class, bndata. Remove key from the leafentry.
- 19 Sep, 2013 2 commits
  - Rich Prohaska
  - John Esmet
- 18 Sep, 2013 13 commits
  - Rich Prohaska
  - John Esmet: txn object after it commits or aborts
  - Rich Prohaska
  - Rich Prohaska
  - Rich Prohaska
  - John Esmet
  - John Esmet
  - John Esmet: BUILD_TESTING=Off in the cmake config.
  - Rich Prohaska
  - John Esmet
  - John Esmet: properly after a small append into a large append
  - John Esmet
  - John Esmet: fixes #70
- 17 Sep, 2013 3 commits
  - Rich Prohaska
  - Rich Prohaska
  - Rich Prohaska
- 14 Sep, 2013 1 commit
  - John Esmet: timing-dependent (though it still is)
- 13 Sep, 2013 1 commit
  - John Esmet: and a new operation in test_stress0 for stress testing coverage
- 12 Sep, 2013 2 commits
  - Leif Walsh
  - Leif Walsh: fixes #65
- 08 Sep, 2013 1 commit
  - Rich Prohaska
- 21 Aug, 2013 3 commits
  - Rich Prohaska
  - zkasheff
  - Zardosht Kasheff: gets the indexer to run in reverse, that is, start at the end and run to the beginning; refines locking a bit. A cheap-to-read estimate of the hot indexer's position is stored. Threads that use this estimate with a mutex either do only a quick comparison or set it to a new value. Threads doing writes (with XXX_multiple calls) check their position against the estimate, and if they see the hot indexer is already past where they will modify, they don't grab the more expensive indexer lock. For insertion workloads that append to the end of the main dictionary of a table/collection, this check should practically always pass.
- 20 Aug, 2013 2 commits
  - Rich Prohaska
  - zkasheff
- 19 Aug, 2013 1 commit
  - Rich Prohaska