    bcache: remove incremental dirty sector counting for bch_sectors_dirty_init() · 80db4e47
    Coly Li authored
    After making bch_sectors_dirty_init() multithreaded, the existing
    incremental dirty sector counting in bch_root_node_dirty_init() no
    longer releases the btree occupation after iterating 500000
    (INIT_KEYS_EACH_TIME) bkeys. Because a read lock is held on the btree
    root node to prevent the btree from being split during the dirty
    sector counting, other I/O requesters have no chance to gain the
    write lock even when bcache_btree() is restarted.
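
    For clarity, a simplified sketch of the problematic pattern follows
    (scan_up_to_n_keys() is a hypothetical placeholder for the incremental
    bcache_btree() walk, not the actual bch_root_node_dirty_init() code):
    the read lock on the root stays held across every batch, so restarting
    the scan after each INIT_KEYS_EACH_TIME keys never lets a waiting
    writer in.

            /* Simplified sketch, placeholder helper name. */
            rw_lock(false, c->root, c->root->level);   /* read lock on root */
            do {
                    n = scan_up_to_n_keys(c->root, INIT_KEYS_EACH_TIME);
            } while (n == INIT_KEYS_EACH_TIME);        /* restart, lock kept */
            rw_unlock(false, c->root);                 /* writers only run here */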
    
    That is to say, the incremental dirty sector counting is incompatible
    with the multithreaded bch_sectors_dirty_init(). We have to choose one
    and drop the other.
    
    In my testing, with 512-byte random writes, I generated 1.2T of dirty
    data and a btree with 400K nodes. With a single thread and incremental
    dirty sector counting, it takes 30+ minutes to register the backing
    device. With multithreaded dirty sector counting, the backing device
    registration can be accomplished within 2 minutes.
    
    The 30+ minutes vs. 2 minutes difference makes me decide to keep the
    multithreaded bch_sectors_dirty_init() and drop the incremental dirty
    sector counting. This is what this patch does.
    
    But INIT_KEYS_EACH_TIME is kept: in sectors_dirty_init_fn() the CPU is
    released by cond_resched() after every INIT_KEYS_EACH_TIME keys
    iterated. This avoids the watchdog reporting a bogus soft lockup
    warning.
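
    As a rough illustration (field names such as op->count are assumptions
    here, not necessarily the exact patched code), the per-key callback now
    only yields the CPU periodically instead of bailing out of the btree
    walk:

            if (KEY_DIRTY(k))
                    bcache_dev_sectors_dirty_add(b->c, KEY_INODE(k),
                                                 KEY_START(k), KEY_SIZE(k));

            /* No early bail-out any more; just yield the CPU every
             * INIT_KEYS_EACH_TIME keys to keep the soft lockup watchdog
             * from reporting a bogus warning. */
            op->count++;
            if (!(op->count % INIT_KEYS_EACH_TIME))
                    cond_resched();

            return MAP_CONTINUE;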
    
    Fixes: b144e45f ("bcache: make bch_sectors_dirty_init() to be multithreaded")
    Signed-off-by: Coly Li <colyli@suse.de>
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/r/20220524102336.10684-4-colyli@suse.de
    
    Signed-off-by: Jens Axboe <axboe@kernel.dk>