Commit 573aecaf authored by Josef Bacik, committed by Chris Mason

Btrfs: actually limit the size of delalloc range

We have always had a limit of 128MB on the amount of delalloc pages we'll set
up to be written out at once.  This is because we have to lock all the pages
in this range, so anything above this gets a bit unwieldy, and also without a
limit we'll happily allocate gigantic chunks of disk space.  Turns out our
check for this wasn't quite right: we wouldn't actually limit the chunk we
wanted to write out, we'd just stop looking for more space after we went over
the limit.  So if you do a giant 20GB dd on my box with lots of RAM I could
get 2GB extents.  This is fine normally, except when you go to relocate these
extents: we can't find enough space to relocate these monster extents, since
we have to be able to allocate an extent of exactly the same size to move one
around.  So fix this by actually enforcing the limit.  With this patch I'm no
longer seeing giant 1.5GB extents.  Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
parent a4820398
@@ -1481,10 +1481,12 @@ static noinline u64 find_delalloc_range(struct extent_io_tree *tree,
 		*end = state->end;
 		cur_start = state->end + 1;
 		node = rb_next(node);
-		if (!node)
-			break;
 		total_bytes += state->end - state->start + 1;
-		if (total_bytes >= max_bytes)
+		if (total_bytes >= max_bytes) {
+			*end = *start + max_bytes - 1;
+			break;
+		}
+		if (!node)
 			break;
 	}
 out:
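
To make the effect of the clamp concrete, here is a minimal, self-contained
sketch of the loop's logic after the patch. It is not the kernel code: struct
extent, find_range() and MAX_BYTES are made up for illustration, standing in
for the extent state walk, find_delalloc_range() and the 128MB limit. Before
the patch the loop only broke out once total_bytes exceeded the limit, so the
caller still saw the full accumulated range; trimming *end caps it at exactly
MAX_BYTES.

#include <stdio.h>

#define MAX_BYTES (128ULL * 1024 * 1024)	/* stand-in for the 128MB delalloc limit */

struct extent {
	unsigned long long start;
	unsigned long long end;		/* inclusive */
};

/*
 * Walk contiguous delalloc extents and return the range to write out,
 * clamping it to MAX_BYTES the way the patch does.
 */
static void find_range(const struct extent *ex, int n,
		       unsigned long long *start, unsigned long long *end)
{
	unsigned long long total_bytes = 0;
	int i;

	*start = ex[0].start;
	*end = ex[0].end;
	for (i = 0; i < n; i++) {
		*end = ex[i].end;
		total_bytes += ex[i].end - ex[i].start + 1;
		if (total_bytes >= MAX_BYTES) {
			/* the fix: trim *end instead of only breaking out */
			*end = *start + MAX_BYTES - 1;
			break;
		}
	}
}

int main(void)
{
	/* two 1GB delalloc extents; without the clamp the caller sees ~2GB */
	struct extent ex[] = {
		{ 0, (1ULL << 30) - 1 },
		{ 1ULL << 30, (2ULL << 30) - 1 },
	};
	unsigned long long start, end;

	find_range(ex, 2, &start, &end);
	printf("delalloc range: %llu-%llu (%llu bytes)\n",
	       start, end, end - start + 1);	/* 134217728 bytes = 128MB */
	return 0;
}

Compiled standalone, this prints a 134217728-byte (128MB) range even though
2GB of contiguous delalloc is available, which is the behaviour the patch
enforces in find_delalloc_range().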