    block/mq-deadline: use correct way to throttle write requests · d47f9717
    Zhiguo Niu authored
    The original formula was inaccurate:
    dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
    
    For write requests, when we allocate a tag from sched_tags,
    data->shallow_depth is passed down to sbitmap_find_bit();
    see the following code:
    
    nr = sbitmap_find_bit_in_word(&sb->map[index],
    				  min_t(unsigned int,
    					__map_depth(sb, index),
    					depth),
    				  alloc_hint, wrap);
    
    The smaller of data->shallow_depth and __map_depth(sb, index)
    will be used as the maximum range when allocating bits.
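    
    A minimal userspace sketch of that capping rule (the function name
    is illustrative, not a kernel symbol):
    
    #include <stdio.h>
    
    /* The per-word search range is the smaller of the word depth and
     * the caller's shallow depth, mirroring the min_t() above. */
    static unsigned int effective_range(unsigned int word_depth,
    				    unsigned int shallow_depth)
    {
    	return shallow_depth < word_depth ? shallow_depth : word_depth;
    }
    
    int main(void)
    {
    	printf("%u\n", effective_range(32, 96)); /* 32: cap never engages */
    	printf("%u\n", effective_range(32, 24)); /* 24: cap takes effect */
    	return 0;
    }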
    
    For an mmc device (one hw queue, deadline I/O scheduler):
    q->nr_requests = sched_tags = 128, so by the previous formula
    dd->async_depth = data->shallow_depth = 96. The platform is 64-bit
    with 8 CPUs, so sched_tags.bitmap_tags.sb.shift = 5 and sb.maps[]
    holds four words of 32 bits each. Since 32 is smaller than 96, both
    read and write I/O can allocate tags over the full range of every
    word each time, so there is no throttling effect.
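    
    Plugging in those numbers (a standalone sketch; every value comes
    from the example above, none is read from hardware):
    
    #include <stdio.h>
    
    int main(void)
    {
    	unsigned int nr_requests = 128;        /* q->nr_requests = sched_tags */
    	unsigned int shift = 5;                /* sched_tags.bitmap_tags.sb.shift */
    	unsigned int word_depth = 1U << shift; /* 32 bits per sbitmap word */
    	unsigned int async_depth = 3 * nr_requests / 4; /* old formula: 96 */
    
    	printf("%u words of %u bits\n", nr_requests / word_depth, word_depth);
    	/* min(96, 32) == 32: async I/O may still use every bit of every
    	 * word, so the old formula throttles nothing on this setup. */
    	printf("effective per-word range: %u\n",
    	       async_depth < word_depth ? async_depth : word_depth);
    	return 0;
    }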
    
    In addition, following the approach of the bfq and kyber I/O
    schedulers, the limit ratio is calculated based on
    sched_tags.bitmap_tags.sb.shift.
    
    This patch makes the throttling of write requests actually take effect.
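    
    A sketch of that shift-based calculation in dd_depth_updated() (the
    exact expression merged upstream may differ; the 3/4 ratio below
    simply carries the old one over to the per-word depth):
    
    static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
    {
    	struct request_queue *q = hctx->queue;
    	struct deadline_data *dd = q->elevator->elevator_data;
    	struct blk_mq_tags *tags = hctx->sched_tags;
    	unsigned int shift = tags->bitmap_tags.sb.shift;
    
    	/* 3/4 of the per-word depth, e.g. 24 when shift == 5; this is
    	 * below the 32-bit word depth, so async writes really are capped. */
    	dd->async_depth = max(1U, 3 * (1U << shift) / 4);
    
    	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
    }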
    
    Fixes: 07757588 ("block/mq-deadline: Reserve 25% of scheduler tags for synchronous requests")
    Signed-off-by: Zhiguo Niu <zhiguo.niu@unisoc.com>
    Reviewed-by: Bart Van Assche <bvanassche@acm.org>
    Link: https://lore.kernel.org/r/1691061162-22898-1-git-send-email-zhiguo.niu@unisoc.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>