Commit c24ad77d authored by Lucas Stach, committed by Linus Torvalds

mm/page_alloc.c: avoid excessive IRQ disabled times in free_unref_page_list()

Since commit 9cca35d4 ("mm, page_alloc: enable/disable IRQs once
when freeing a list of pages") we see excessive IRQ disabled times of up
to 25ms on an embedded ARM system (tracing overhead included).

This is due to graphics buffers being freed back to the system via
release_pages().  Graphics buffers can be huge, so it's not hard to hit
cases where the list of pages to free has 2048 entries.  Disabling IRQs
while freeing all those pages is clearly not a good idea.

Introduce a batch limit, which allows IRQ servicing once every few
pages.  The batch count is the same as used in other parts of the MM
subsystem when dealing with IRQ disabled regions.
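(For scale: SWAP_CLUSTER_MAX is defined as 32 in include/linux/swap.h, so a 2048-entry
list is drained in roughly 64 batches of 32 pages, with IRQs briefly re-enabled between
batches, instead of one 2048-page IRQ-off stretch.)

As an illustration only, the standalone C sketch below mimics the batching pattern the
patch applies. drain_list(), irq_save() and irq_restore() are made-up stand-ins for
free_unref_page_list() and local_irq_save()/local_irq_restore(), not kernel API:

	/* Minimal userspace sketch of the batching pattern: re-enable
	 * "interrupts" every BATCH items so a long list cannot keep them
	 * disabled for the whole walk.
	 */
	#include <stdio.h>

	#define BATCH 32	/* mirrors SWAP_CLUSTER_MAX */

	static void irq_save(void)    { /* pretend to disable interrupts */ }
	static void irq_restore(void) { /* pretend to re-enable interrupts */ }

	static void drain_list(int nr_items)
	{
		int batch_count = 0;

		irq_save();
		for (int i = 0; i < nr_items; i++) {
			/* ... free one item here ... */

			/* Guard against long "IRQ disabled" stretches. */
			if (++batch_count == BATCH) {
				irq_restore();
				batch_count = 0;
				irq_save();
			}
		}
		irq_restore();
	}

	int main(void)
	{
		drain_list(2048);	/* e.g. a large graphics buffer being released */
		printf("drained\n");
		return 0;
	}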

Link: http://lkml.kernel.org/r/20171207170314.4419-1-l.stach@pengutronix.de
Fixes: 9cca35d4 ("mm, page_alloc: enable/disable IRQs once when freeing a list of pages")
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 183f24aa
@@ -2684,6 +2684,7 @@ void free_unref_page_list(struct list_head *list)
 {
 	struct page *page, *next;
 	unsigned long flags, pfn;
+	int batch_count = 0;
 
 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
@@ -2700,6 +2701,16 @@ void free_unref_page_list(struct list_head *list)
 		set_page_private(page, 0);
 		trace_mm_page_free_batched(page);
 		free_unref_page_commit(page, pfn);
+
+		/*
+		 * Guard against excessive IRQ disabled times when we get
+		 * a large list of pages to free.
+		 */
+		if (++batch_count == SWAP_CLUSTER_MAX) {
+			local_irq_restore(flags);
+			batch_count = 0;
+			local_irq_save(flags);
+		}
 	}
 	local_irq_restore(flags);
 }