Commit 67886cc6 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] fix a race between set_page_dirty and truncate

Fix a race between set_page_dirty() and truncate.

The page could have been removed from the mapping while this CPU is
spinning on the lock.  __free_pages_ok() will go BUG.

This has not been observed in practice - most callers of
set_page_dirty() hold the page lock which gives exclusion from
truncate.  But zap_pte_range() does not.

A fix for this has been sent to Marcelo also.
parent 0ea7efc9
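
The underlying pattern is general: state sampled before a lock is taken must be re-validated once the lock is held, because another CPU may have changed it while this one was spinning. Below is a minimal userspace sketch of that re-check-under-the-lock pattern using POSIX threads; it is an analogy, not kernel code, and the names (shared_item, detach, mark_dirty) are illustrative stand-ins, with detach() playing the role of truncate and mark_dirty() the role of set_page_dirty().

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int *shared_item;                /* stands in for page->mapping */

/* "truncate" side: detaches and frees the item under the lock. */
static void *detach(void *arg)
{
	pthread_mutex_lock(&lock);
	free(shared_item);
	shared_item = NULL;
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* "set_page_dirty" side: re-checks the pointer after acquiring the
 * lock, because the item may have been detached while we waited. */
static void *mark_dirty(void *arg)
{
	pthread_mutex_lock(&lock);
	if (shared_item)                /* the "Race with truncate?" check */
		*shared_item = 1;
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	shared_item = malloc(sizeof(*shared_item));
	*shared_item = 0;

	pthread_create(&a, NULL, detach, NULL);
	pthread_create(&b, NULL, mark_dirty, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Without the NULL re-check, mark_dirty() could write through freed memory - the userspace analogue of __free_pages_ok() hitting its BUG on a page that was left linked on a dirty list.
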
@@ -433,8 +433,10 @@ int __set_page_dirty_buffers(struct page *page)
 	if (!TestSetPageDirty(page)) {
 		write_lock(&mapping->page_lock);
-		list_del(&page->list);
-		list_add(&page->list, &mapping->dirty_pages);
+		if (page->mapping) {	/* Race with truncate? */
+			list_del(&page->list);
+			list_add(&page->list, &mapping->dirty_pages);
+		}
 		write_unlock(&mapping->page_lock);
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 	}
@@ -467,8 +469,10 @@ int __set_page_dirty_nobuffers(struct page *page)
 	if (mapping) {
 		write_lock(&mapping->page_lock);
-		list_del(&page->list);
-		list_add(&page->list, &mapping->dirty_pages);
+		if (page->mapping) {	/* Race with truncate? */
+			list_del(&page->list);
+			list_add(&page->list, &mapping->dirty_pages);
+		}
 		write_unlock(&mapping->page_lock);
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 	}
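
For readability, this is how __set_page_dirty_nobuffers() reads with the hunk above applied. It is a sketch reconstructed from the diff context, not a verbatim copy of the tree: the surrounding TestSetPageDirty() test (mirroring the first hunk), the local mapping variable, and the return handling are assumptions about the 2.5-era code this patch targets.

int __set_page_dirty_nobuffers(struct page *page)
{
	int ret = 0;					/* return handling assumed */

	if (!TestSetPageDirty(page)) {			/* assumed, mirrors the first hunk */
		struct address_space *mapping = page->mapping;	/* assumed */

		if (mapping) {
			write_lock(&mapping->page_lock);
			/*
			 * Re-check page->mapping under mapping->page_lock:
			 * truncate may have removed the page from the mapping
			 * while this CPU was spinning on the lock, and moving
			 * a detached page onto dirty_pages would later trip
			 * the BUG in __free_pages_ok().
			 */
			if (page->mapping) {	/* Race with truncate? */
				list_del(&page->list);
				list_add(&page->list, &mapping->dirty_pages);
			}
			write_unlock(&mapping->page_lock);
			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
		}
	}
	return ret;
}
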