Commit 82ef1370 authored by Chunguang Xu, committed by Theodore Ts'o

ext4: avoid s_mb_prefetch to be zero in individual scenarios

Commit cfd73237 ("ext4: add prefetching for block allocation
bitmaps") introduced block bitmap prefetch, and expects to read the
block bitmaps of a flex_bg through a single IO.  However, it ignores
the value range of s_log_groups_per_flex.  When the value of
s_log_groups_per_flex is greater than 27, s_mb_prefetch or
s_mb_prefetch_limit overflows, causing a divide-by-zero exception.

In addition, the logic for calculating nr is also flawed: the size of
a flexbg is fixed for the duration of a mount, but s_mb_prefetch can
be modified at runtime, so nr can fall outside the expected range of
[1, flexbg_size].

To solve this problem, we need to set an upper limit on
s_mb_prefetch.  Since we expect to load the block bitmaps of a
flex_bg through a single IO, a reasonable upper limit can be derived
from the IO limit parameters.  After consideration, we chose
BLK_MAX_SEGMENT_SIZE.  This solves the divide-by-zero problem while
avoiding performance degradation.

[ Some minor code simplifications to make the changes easy to follow -- TYT ]
Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
Signed-off-by: Chunguang Xu <brookxu@tencent.com>
Reviewed-by: Samuel Liao <samuelliao@tencent.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Link: https://lore.kernel.org/r/1607051143-24508-1-git-send-email-brookxu@tencent.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
parent c92dc856
@@ -2372,9 +2372,9 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 			nr = sbi->s_mb_prefetch;
 			if (ext4_has_feature_flex_bg(sb)) {
-				nr = (group / sbi->s_mb_prefetch) *
-					sbi->s_mb_prefetch;
-				nr = nr + sbi->s_mb_prefetch - group;
+				nr = 1 << sbi->s_log_groups_per_flex;
+				nr -= group & (nr - 1);
+				nr = min(nr, sbi->s_mb_prefetch);
 			}
 			prefetch_grp = ext4_mb_prefetch(sb, group,
 						nr, &prefetch_ios);
@@ -2710,7 +2710,8 @@ static int ext4_mb_init_backend(struct super_block *sb)
 	if (ext4_has_feature_flex_bg(sb)) {
 		/* a single flex group is supposed to be read by a single IO */
-		sbi->s_mb_prefetch = 1 << sbi->s_es->s_log_groups_per_flex;
+		sbi->s_mb_prefetch = min(1 << sbi->s_es->s_log_groups_per_flex,
+					 BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
 		sbi->s_mb_prefetch *= 8; /* 8 prefetch IOs in flight at most */
 	} else {
 		sbi->s_mb_prefetch = 32;