- 22 Oct, 2023 40 commits
-
-
Kent Overstreet authored
This switches where we take quota reservations to be per bch_write_op instead of per dio_write, so we can drop the quota reservation in the same place as we call i_sectors_acct(), and only take/release ei_quota_lock once. In the future we'd like ei_quota_lock to not be a mutex, so that we can avoid punting to process context before delivering write completions in nocow mode. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
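As a rough illustration of the locking pattern (the struct and function names below are invented for this sketch, not the bcachefs code): the sector accounting and the quota-reservation release happen under one acquisition of the lock, instead of taking it once per step.

    #include <pthread.h>

    /* Invented types, for illustration only. */
    struct inode_accounting {
        pthread_mutex_t quota_lock;
        long            i_sectors;
        long            quota_reserved;
    };

    /* Account sectors and drop the matching quota reservation under a
     * single quota_lock acquisition, rather than locking twice. */
    static void sectors_acct_and_release(struct inode_accounting *a,
                                         long sectors, long reserved)
    {
        pthread_mutex_lock(&a->quota_lock);
        a->i_sectors      += sectors;
        a->quota_reserved -= reserved;
        pthread_mutex_unlock(&a->quota_lock);
    }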
-
Kent Overstreet authored
The genradix code can handle multiple threads trying to allocate at the same time - we don't need the genradix_ptr_alloc() call to happen under a lock. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This fixes a regression from percpu freedlists in the btree key cache code: in a rare error path, we were immediately freeing a bkey_cached that had been used before and should've waited for an SRCU barrier. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
These were wrappers around atomic operations that verified that the counter wasn't negative, but they're dead code - delete. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- Marking a non-static function as inline doesn't actually work and is now causing problems - drop that
- Introduce BCACHEFS_LOG_PREFIX for when we want to prefix log messages with bcachefs (filesystem name)
- Userspace doesn't have real percpu variables (maybe we can get this fixed someday), put an #ifdef around bch2_disk_reservation_add() fastpath
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We have a unique lock used for controlling adding to the pagecache: the lock has two states, where both states are shared - the lock may be held multiple times for either state - but not both states at the same time. This is exactly what we need for nocow mode locking, so this patch pulls it out of fs.c into its own file. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
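A minimal sketch of that lock shape, assuming a single signed counter (this is illustrative, not the bcachefs implementation): positive values mean the lock is held shared in one state, negative values mean it's held shared in the other, and zero means unlocked.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Two-state shared lock: v > 0 means held (shared) in state A,
     * v < 0 means held (shared) in state B, v == 0 means unlocked. */
    struct two_state_lock {
        atomic_int v;
    };

    /* dir is +1 for state A, -1 for state B. */
    static bool two_state_trylock(struct two_state_lock *l, int dir)
    {
        int old = atomic_load(&l->v);

        do {
            if (dir > 0 ? old < 0 : old > 0)
                return false;   /* held in the other state */
        } while (!atomic_compare_exchange_weak(&l->v, &old, old + dir));

        return true;
    }

    static void two_state_unlock(struct two_state_lock *l, int dir)
    {
        atomic_fetch_sub(&l->v, dir);
    }

A blocking version would wait (e.g. on a waitqueue) until the trylock succeeds, rather than spinning.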
-
Kent Overstreet authored
BCH_WRITE_FLUSH is a write flag that causes a journal flush. It's only used in the direct IO path, and this will allow for some consolidation with the regular fsync path, which will help with the upcoming nocow mode. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- factor out more slowpath code into non-inline function - use bch2_print_string_as_lines(), so our error message doesn't get truncated Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Only used in one place, just inline it there. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
btree_path_copy() doesn't need to call bch2_btree_path_check_sort_fast() - the newly allocated path will always be in the correct position, post copy; also delete some redundant branches from __bch2_btree_path_make_mut(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- Don't call into bch2_encrypt_bio() when we're not encrypting
- Pull slowpath out of trans_lock_write()
- Make sure bch2_trans_journal_res_get() gets inlined.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- With BCH_WRITE_SYNC, we no longer need the completion in struct dio_write
- Pull out bch2_dio_write_copy_iov() into a separate non-inline function, it's code that doesn't run in the common case
- Copy mapping and inode pointers into dio_write, avoiding pointer chasing at the start of bch2_dio_write_loop()
- kthread_use_mm() is not needed in the common case; move it into bch2_dio_write_loop_async()
- Factor out various helpers from bch2_dio_write_loop() and rework control flow for better icache utilization
Other small optimizations:
- bch2_keylist_free() is only used in one place, at the end of the bch2_write() path - drop the reinit
- In bch2_disk_reservation_put(), check if res->sectors is nonzero before touching c->online_reserved, since that will likely be a cache miss
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
More DIO write path optimization - better code prefetching (?) Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This adds a new flag for the write path, BCH_WRITE_SYNC, and switches the O_DIRECT write path to use it when we're not running asynchronously. It runs the btree update after the write in the original thread's context instead of a kworker, cutting context switches in half. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Fixes for various checkpatch errors. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Dead code, delete. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This factors out a properly-documented helper for deciding when we want to sort a btree node with MAX_BSETS bsets down to a single bset. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This replaces sysfs btree_avg_write_size with btree_write_stats, which now breaks out statistics by the source of the btree write. Btree writes that are too small are a source of inefficiency, and excessive btree resort overhead - this will let us see what's causing them. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
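A rough sketch of what breaking writes out by source could look like (the enum values and struct layout below are assumptions for illustration, not the actual bcachefs definitions):

    /* Hypothetical per-source counters for btree node writes. */
    enum btree_write_source {
        BTREE_WRITE_initial,
        BTREE_WRITE_cache_reclaim,
        BTREE_WRITE_journal_reclaim,
        BTREE_WRITE_interior,
        BTREE_WRITE_SOURCE_NR,
    };

    struct btree_write_stats {
        unsigned long nr;
        unsigned long bytes;
    };

    static struct btree_write_stats write_stats[BTREE_WRITE_SOURCE_NR];

    static void account_btree_write(enum btree_write_source src, unsigned long bytes)
    {
        write_stats[src].nr++;
        write_stats[src].bytes += bytes;
    }

The sysfs file would then report the count and average size (bytes / nr) per source, making it obvious which path is generating the small writes.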
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Fixes fstests generic/648 Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Per fstests generic/275, on -ENOSPC we're supposed to write until the filesystem is full - i.e. do a partial write instead of failing the full write. This is a partial fix for the buffered write path: we'll still fail on a page boundary. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
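An illustrative retry loop for the idea, with invented names (not the actual buffered write path): shrink the request on -ENOSPC and retry, so the caller gets a short write rather than a hard failure.

    #include <errno.h>
    #include <stddef.h>
    #include <sys/types.h>

    /* Shrink and retry on -ENOSPC until something is written or we're
     * down to the minimum unit (e.g. one page). Illustrative only. */
    static ssize_t write_or_shrink(ssize_t (*do_write)(void *buf, size_t len),
                                   void *buf, size_t len, size_t min_len)
    {
        for (;;) {
            ssize_t ret = do_write(buf, len);

            if (ret != -ENOSPC || len <= min_len)
                return ret;

            len = len / 2 > min_len ? len / 2 : min_len;
        }
    }

The "still fail on a page boundary" caveat above corresponds to min_len here: below that granularity the write is still rejected outright.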
-
Kent Overstreet authored
- In the btree iterator code that overlays keys from the journal, we were incorrectly specifying level=0 instead of the btree_path's current level in a few places
- When we didn't do journal replay, we shouldn't free the journal keys: this fixes cmd_list and cmd_dump, which run in norecovery mode
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
roundup_pow_of_two() is undefined for 0 - oops. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
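For context, the kernel's roundup_pow_of_two() is only defined for nonzero inputs, so callers have to guard the zero case themselves - a minimal sketch of such a guard, with a stand-in implementation:

    /* Stand-in for the kernel helper; like the real one, only meaningful
     * for n > 0. */
    static unsigned long roundup_pow_of_two_sketch(unsigned long n)
    {
        unsigned long p = 1;

        while (p < n)
            p <<= 1;
        return p;
    }

    static unsigned long guarded_roundup(unsigned long n)
    {
        return roundup_pow_of_two_sketch(n ? n : 1);   /* never pass 0 */
    }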
-
Kent Overstreet authored
Use __func__ in error messages that refer to function name, and do so more uniformly - suggested by checkpatch.pl Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This adds an inlined version of bch2_bkey_cmp_packed(), and uses it in bch2_sort_keys(), where it's part of the inner loop. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Long ago, bkey_unpack_key() was added to bset.h instead of bkey.h because bkey.h didn't include btree_types.h, which it needs for the compiled unpack function. This patch finally moves it to the proper location. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This replaces an expensive memmove() call with an open-coded version. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
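An illustrative open-coded move (not the actual bcachefs code): for overlapping ranges of word-sized elements, copy backwards when the destination is above the source so nothing is clobbered before it's read.

    #include <stddef.h>
    #include <stdint.h>

    static void move_u64s(uint64_t *dst, const uint64_t *src, size_t n)
    {
        if (dst > src)
            while (n--)
                dst[n] = src[n];        /* copy backwards */
        else
            for (size_t i = 0; i < n; i++)
                dst[i] = src[i];        /* copy forwards */
    }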
-
Kent Overstreet authored
This moves the JOURNAL_REPLAY_DONE flag check from bch2_trans_iter_init() to bch2_trans_init(), where we stash a copy in btree_trans - gaining us a small performance improvement. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
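The shape of the change, sketched with invented struct layouts: read the filesystem-wide flag once when the transaction is set up, and have the per-iterator path consult the copy in the already-hot transaction struct.

    #include <stdbool.h>

    /* Invented layouts, for illustration only. */
    struct fs {
        unsigned long flags;        /* bit 0: journal replay done */
    };

    struct btree_trans {
        struct fs *c;
        bool       journal_replay_not_finished;
    };

    static void trans_init(struct btree_trans *trans, struct fs *c)
    {
        trans->c = c;
        /* one read of the fs-wide flag per transaction... */
        trans->journal_replay_not_finished = !(c->flags & 1UL);
    }

    static bool iter_overlays_journal(const struct btree_trans *trans)
    {
        /* ...so iterator init only touches the trans struct */
        return trans->journal_replay_not_finished;
    }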
-
Kent Overstreet authored
checkpatch.pl gives lots of warnings that we don't want - suggested ignore list:
- ASSIGN_IN_IF
- UNSPECIFIED_INT - bcachefs coding style prefers single token type names
- NEW_TYPEDEFS - typedefs are occasionally good
- FUNCTION_ARGUMENTS - we prefer to look at functions in .c files (hopefully with docbook documentation), not .h file prototypes
- MULTISTATEMENT_MACRO_USE_DO_WHILE - we have _many_ x-macros and other macros where we can't do this
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
- Add bch2_dev_usage_read_fast(), which doesn't return by value - bch_dev_usage is big enough that we don't want the silent memcpy
- Tweak the allocation path to only call bch2_dev_usage_read() once per bucket allocated
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
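A sketch of the two API shapes (the struct contents are made up for illustration): returning a large struct by value hands every call site a full struct copy, while the "fast" variant fills storage the caller already owns.

    #include <string.h>

    /* Stand-in for bch_dev_usage: big enough that a by-value return is
     * a noticeable copy. */
    struct dev_usage {
        unsigned long buckets[8];
        unsigned long sectors[8];
        unsigned long fragmented[8];
    };

    /* Return by value: every call site receives a full struct copy. */
    struct dev_usage usage_read(const struct dev_usage *counters)
    {
        struct dev_usage u = *counters;
        return u;
    }

    /* "Fast" variant: write straight into caller-provided storage. */
    void usage_read_fast(const struct dev_usage *counters, struct dev_usage *ret)
    {
        memcpy(ret, counters, sizeof(*ret));
    }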
-
Daniel Hill authored
crc.compression_type & nonce get reset inside bch2_rechecksum_bio(); we set them back to the previously calculated values. This fixes incompressible extents being marked as uncompressed. Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Daniel B. Hill authored
Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This shouldn't be needed anymore, since we don't rely on the pointer validity that this was guarding against anymore - we get a new good reference and save it right after this function. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This separates out the slowpath of bch2_trans_update_by_path_trace() into a new non-inlined helper. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Delete some code when CONFIG_BCACHEFS_DEBUG=n Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
It's mainly used from bch2_inode_write(), so inline it there. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We don't want to fire the bucket_alloc_fail tracepoint on transaction restart, when we can retry immediately - only when the allocation actually has to block. Also, switch from sched_clock() to local_clock(), as we've been doing elsewhere. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Now we store the transaction's fn idx in a local variable, instead of redoing the lookup every time we call bch2_trans_init(). Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This breaks up btree_path_up_until_good_node() so that only the fastpath gets inlined. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
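The general pattern being applied (illustrative bodies only): keep a tiny inline fast path and push the rarely-taken work into a separate out-of-line helper, so the common case costs little in the caller's icache.

    /* Out of line: the bulky, rarely-executed part. */
    __attribute__((noinline))
    static int do_thing_slowpath(int arg)
    {
        /* ... lots of work ... */
        return -arg;
    }

    static inline int do_thing(int arg)
    {
        if (arg >= 0)                   /* common case */
            return arg;
        return do_thing_slowpath(arg);  /* punt to the slow path */
    }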
-
Kent Overstreet authored
The shrinker assumes freed key cache items are ordered by age, so that it doesn't have to scan the full list to find items that are old enough (according to the srcu code) to be freed. But percpu freelists broke this ordering; this patch fixes this by ensuring we insert items into the proper position. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
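An illustrative version of the ordering invariant (not the bcachefs list code): insert each freed entry at its position in a list kept sorted by free time, so a scanner can stop at the first entry that's too new.

    #include <stddef.h>

    struct freed_entry {
        struct freed_entry *next;
        unsigned long       freed_at;   /* when it was freed */
    };

    static void insert_ordered(struct freed_entry **head, struct freed_entry *entry)
    {
        struct freed_entry **p = head;

        /* Skip past everything freed no later than 'entry'. */
        while (*p && (*p)->freed_at <= entry->freed_at)
            p = &(*p)->next;

        entry->next = *p;
        *p = entry;
    }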
-