Commit 792ceaef authored by Hugh Dickins, committed by Linus Torvalds

mm/fs: fix pessimization in hole-punching pagecache

I wanted to revert my v3.1 commit d0823576 ("mm: pincer in
truncate_inode_pages_range"), to keep truncate_inode_pages_range() in
sync with shmem_undo_range(); but have stepped back - a change to
hole-punching in truncate_inode_pages_range() is a change to
hole-punching in every filesystem (except tmpfs) that supports it.

If there's a logical proof why no filesystem can depend for its own
correctness on the pincer guarantee in truncate_inode_pages_range() - an
instant when the entire hole is removed from pagecache - then let's
revisit later.  But the evidence is that only tmpfs suffered from the
livelock, and we have no intention of extending hole-punch to ramfs.  So
for now just add a few comments (to match or differ from those in
shmem_undo_range()), and fix one silliness noticed in d0823576...

Its "index == start" addition to the hole-punch termination test was
incomplete: it opened a way for the end condition to be missed, and the
loop to go on looking through the radix_tree, all the way to the end of
file.  Fix that pessimization by resetting index when the end condition
is detected in the inner loop.

Note that it's actually hard to hit this case, without the obsessive
concurrent faulting that trinity does: normally all pages are removed in
the initial trylock_page() pass, and this loop finds nothing to do.  I
had to "#if 0" out the initial pass to reproduce the bug and test the fix.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Lukas Czerner <lczerner@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent b1a36650
mm/truncate.c
@@ -355,14 +355,16 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	for ( ; ; ) {
 		cond_resched();
 		if (!pagevec_lookup_entries(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE),
-			indices)) {
+			min(end - index, (pgoff_t)PAGEVEC_SIZE), indices)) {
+			/* If all gone from start onwards, we're done */
 			if (index == start)
 				break;
+			/* Otherwise restart to make sure all gone */
 			index = start;
 			continue;
 		}
 		if (index == start && indices[0] >= end) {
+			/* All gone out of hole to be punched, we're done */
 			pagevec_remove_exceptionals(&pvec);
 			pagevec_release(&pvec);
 			break;
@@ -373,8 +375,11 @@ void truncate_inode_pages_range(struct address_space *mapping,

 			/* We rely upon deletion not changing page->index */
 			index = indices[i];
-			if (index >= end)
+			if (index >= end) {
+				/* Restart punch to make sure all gone */
+				index = start - 1;
 				break;
+			}

 			if (radix_tree_exceptional_entry(page)) {
 				clear_exceptional_entry(mapping, index, page);